

Justin
@edtech808
educational technologist, gr 6-12


I can think of few people who sit more squarely at the intersection of tech and words than Every's editor in chief, Kate Lee (@katelaurielee). She honed her editorial sense in both publishing and tech: first as a literary agent, then in roles at Medium, WeWork, and Stripe Press. She has strong views on language and the highest bar for quality in written work. So if she is adopting an AI tool, I know it's the real deal. Kate's approval is also a signal that something will be widely used by editorial teams in the future. She's the canary in the coal mine.

I had her on @every's AI & I podcast to talk about how she is building an AI-native editorial team. We discussed:

- How she went from literary agent to tech, and why she thinks the skills transfer more than people expect
- What it looks like to run a small editorial team in the AI era
- Why automating copy editing is harder than it sounds
- What she uses Claude for beyond the editing process

This is a must-watch for editors, media operators, and anyone curious about what AI adoption looks like for a thoughtful knowledge worker. Watch below!

Timestamps:
Introduction and Kate's early career as a literary agent: 00:01:00
From book publishing to tech (Medium, WeWork, and Stripe Press): 00:04:45
How Kate joined Every and what made the role click: 00:12:00
What it's like to be a knowledge worker at the frontier of AI: 00:27:00
The 'aha' moment: using AI to manage hundreds of applicants: 00:31:00
How Every's editorial team uses AI to enforce standards and train taste: 00:36:24
Publishing two reviews of major model releases on the same day: 00:45:06
What automating copy editing requires: 00:51:39

AI writing is detectable even when it accurately copies your style. The reason: humans are twice as linguistically varied as machines. We're inconsistent in ways that read as alive. More technically, the perplexity score of human writing tends to hover around 30 versus machines' 15. Consistency is the tell. This is one of the many bits of recent research in LLM stylometry, which is guiding the Spiral roadmap.
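The perplexity scores mentioned above are just the exponential of the average per-token negative log-probability under a language model: confident, predictable text scores low; surprising, varied text scores high. A minimal sketch of the arithmetic (the per-token probabilities below are made up purely for illustration, not drawn from any real model):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-probability per token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Hypothetical "machine-like" text: the model finds every token about
# equally predictable, so perplexity collapses to 1/p.
machine_like = [0.07] * 12
print(round(perplexity(machine_like), 1))  # 1/0.07 -> 14.3

# Hypothetical "human-like" text: same average predictability range, but
# the occasional very surprising word choice drives perplexity up.
human_like = [0.2, 0.01, 0.15, 0.005, 0.3, 0.02, 0.1,
              0.008, 0.25, 0.015, 0.05, 0.004]
print(round(perplexity(human_like), 1))  # noticeably higher
```

The point the post makes falls out of the math: it's not average confidence but the *variance* in token surprisal that separates the two distributions, which is why consistency is the tell.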

a bit about how I use Claude to help me write, instead of having Claude write for me x.com/trq212/status/…


Today I'm releasing my entire newsletter archive (350+ posts) and all podcast transcripts (300+ episodes) as AI-friendly Markdown files. Plus an MCP server and GitHub repo.

A few months ago I shared my podcast transcripts on a whim, and y'all built the most amazing things—an RPG game, a parenting wisdom site, infographics, a Twitter bot, and 50+ other projects. Let's see what happens when I give you even more data.

Grab the data here: LennysData.com. Paid subscribers get all of the data (some 350 posts and 300 transcripts). Free subscribers get a subset. I don't think anyone's ever done anything like this before, and I'm excited to give you this excuse to play with that AI tool you've been meaning to try.

Here's my challenge to you: build something, and let me know about it. I'll pick my favorite and give you a free 1-year subscription to the newsletter. Just post a link to your project in the comments here: lennysnewsletter.com/p/how-i-built-…. If you've already built something, slurp in this new data and submit it, too. I'll pick a winner on April 15th.

Check out today's newsletter post for inspiration on what you could build: lennysnewsletter.com/p/how-i-built-…

LFG.

@swyx I had an idea the other day for an AI project to look at the transcripts of all the top podcasts and newsletters and see who has been best at predicting things. Tricky but could be super fun.


Companies go through phases of exploration and phases of refocus; both are critical. But when new bets start to work, like we're seeing now with Codex, it's very important to double down on them and avoid distractions. Really glad we're seizing this moment.











Delve, a YC-backed compliance startup that raised $32 million, has been accused of systematically faking SOC 2, ISO 27001, HIPAA, and GDPR compliance reports for hundreds of clients. According to a detailed Substack investigation by DeepDelver, a leaked Google spreadsheet containing links to hundreds of confidential draft audit reports revealed that Delve generates auditor conclusions before any auditor reviews evidence, uses the same template across 99.8% of reports, and relies on Indian certification mills operating through empty US shells instead of the "US-based CPA firms" they advertise.

Here's the breakdown:

> 493 out of 494 leaked SOC 2 reports allegedly contain identical boilerplate text, including the same grammatical errors and nonsensical sentences, with only a company name, logo, org chart, and signature swapped in
> Auditor conclusions and test procedures are reportedly pre-written in draft reports before clients even provide their company description, which would violate AICPA independence rules requiring auditors to independently design tests and form conclusions
> All 259 Type II reports claim zero security incidents, zero personnel changes, zero customer terminations, and zero cyber incidents during the observation period, with identical "unable to test" conclusions across every client
> Delve's "US-based auditors" are actually Accorp and Gradient, described as Indian certification mills operating through US shell entities. 99%+ of clients reportedly went through one of these two firms over the past 6 months
> The platform allegedly publishes fully populated trust pages claiming vulnerability scanning, pentesting, and data recovery simulations before any compliance work has been done
> Delve pre-fabricates board meeting minutes, risk assessments, security incident simulations, and employee evidence that clients can adopt with a single click, according to the author
> Most "integrations" are just containers for manual screenshots with no actual API connections. The author describes the platform as a "SOC 2 template pack with a thin SaaS wrapper"
> When the leak was exposed, CEO Karun Kaushik emailed clients calling the allegations "falsified claims" from an "AI-generated email" and stated no sensitive data was accessed, while the reports themselves contained private signatures and confidential architecture diagrams
> Companies relying on these reports could face criminal liability under HIPAA and fines up to 4% of global revenue under GDPR for compliance violations they believed were resolved
> When clients threaten to leave, Delve reportedly pairs them with an external vCISO for manual off-platform work, which the author argues proves their own platform can't deliver real compliance
> Delve's sales price dropped from $15,000 to $6,000 with ISO 27001 and a penetration test thrown in when a client mentioned considering a competitor


A detailed and brutal look at the tactics of buzzy AI compliance startup Delve "Delve built a machine designed to make clients complicit without their knowledge, to manufacture plausible deniability while producing exactly the opposite." substack.com/home/post/p-19…



I hate deceptive logo walls

Introducing Dasher Tasks. Dashers can now get paid to do general tasks. We think this will be huge for advancing the frontier of physical intelligence. Looking forward to seeing where this goes!


