Justin

2.8K posts

Justin

Justin

@edtech808

educational technologist, gr 6-12

808 · Joined February 2025
346 Following · 112 Followers
Justin
Justin@edtech808·
Dan Shipper 📧@danshipper

I can think of few people who sit more squarely at the intersection of tech and words than Every's editor in chief, Kate Lee (@katelaurielee). She has honed her editorial sense in both the publishing and tech worlds: first as a literary agent, then in roles at Medium, WeWork, and Stripe Press. She has strong views on language and the highest bar for quality in written work. So I know that if she is adopting an AI tool, it's the real deal. Kate's approval is also a signal that something will be widely used by editorial teams in the future. She's the canary in the coal mine.

I had her on @every's AI & I podcast to talk about how she is building an AI-native editorial team. We discussed:
- How she went from literary agent to tech, and why she thinks the skills transfer more than people expect
- What it looks like to run a small editorial team in the AI era
- Why automating copy editing is harder than it sounds
- What she uses Claude for beyond the editing process

This is a must-watch for editors, media operators, and anyone curious about what AI adoption looks like for a thoughtful knowledge worker. Watch below!

Timestamps
Introduction and Kate's early career as a literary agent: 00:01:00
From book publishing to tech (Medium, WeWork, and Stripe Press): 00:04:45
How Kate joined Every and what made the role click: 00:12:00
What it's like to be a knowledge worker at the frontier of AI: 00:27:00
The 'aha' moment: using AI to manage hundreds of applicants: 00:31:00
How Every's editorial team uses AI to enforce standards and train taste: 00:36:24
Publishing two reviews of major model releases on the same day: 00:45:06
What automating copy editing requires: 00:51:39

0
0
0
5
Dan Shipper 📧
Dan Shipper 📧@danshipper·
@NickADobos it can't currently message back, i don't think (though i haven't tried). yeah, the message is from another codex!
English
2
0
4
493
Dan Shipper 📧
Dan Shipper 📧@danshipper·
something you should know: codex threads can now message each other! really useful if for example you want to have one chat thread to handle stacked prod deployments. just paste the thread id to your other codex chats and it'll message the deployment thread to take over!
Dan Shipper 📧 tweet media
English
17
9
225
12.2K
Justin
Justin@edtech808·
Lenny Rachitsky@lennysan

Today I'm releasing my entire newsletter archive (350+ posts) and all podcast transcripts (300+ episodes) as AI-friendly Markdown files. Plus an MCP server and GitHub repo.

A few months ago I shared my podcast transcripts on a whim, and y'all built the most amazing things: an RPG game, a parenting wisdom site, infographics, a Twitter bot, and 50+ other projects. Let's see what happens when I give you even more data.

Grab the data here: LennysData.com. Paid subscribers get all of the data (some 350 posts and 300 transcripts). Free subscribers get a subset. I don't think anyone's ever done anything like this before, and I'm excited to give you this excuse to play with that AI tool you've been meaning to try.

Here's my challenge to you: build something, and let me know about it. I'll pick my favorite and give you a free 1-year subscription to the newsletter. Just post a link to your project in the comments here: lennysnewsletter.com/p/how-i-built-…. If you've already built something, slurp in this new data and submit it, too. I'll pick a winner on April 15th.

Check out today's newsletter post for inspiration on what you could build: lennysnewsletter.com/p/how-i-built-…

LFG.

1
0
0
7
Justin
Justin@edtech808·
knowledge base of past work easily piped into agent systems
Lenny Rachitsky@lennysan

@swyx I had an idea the other day for an AI project to look at the transcripts of all the top podcasts and newsletters and see who has been best at predicting things. Tricky but could be super fun.

English
1
0
0
10
Aakash Gupta
Aakash Gupta@aakashgupta·
Anthropic would have built this in a day and a dev would have tweeted the news. At OpenAI, an exec is telling you about a plan. That gap tells you everything.

In the last 7 days, Anthropic shipped Dispatch, channels, voice mode, /loop, 1M context GA, MCP elicitation, persistent Cowork on mobile, Excel and PowerPoint cross-app context, inline charts, and 64k default output tokens. Felix Rieseberg tweeted "we're shipping Dispatch" and you could control your desktop Claude from your phone that afternoon. Every launch came from an engineering account or a GitHub release.

In the same 7 days, OpenAI shipped GPT-5.4 mini and nano. Redesigned the model picker. Sunset the "Nerdy" personality preset. Announced three acquisitions. To find a comparable volume of shipped product from OpenAI, you have to rewind to December.

This is the most underrated difference in AI right now. Anthropic PMs don't write PRDs. Boris Cherny, head of Claude Code, ships 10 to 30 PRs a day and hasn't written code by hand since November. 60 to 100 internal releases daily. Cowork was built with Claude Code in 10 days. The tools build the next version of the tools. Every cycle compresses the last one. Engineers are empowered to ship and announce. The entire org runs like a product team, not a corporation.

OpenAI has the opposite problem. Fidji Simo is CEO of Applications, a title that exists because engineers aren't empowered to ship without executive approval chains. She joined from Instacart. Before that, a decade at Meta running the Facebook app. Since she arrived, OpenAI has acquired 12 companies for $11 billion in 10 months and announced a "superapp" consolidation through the Wall Street Journal. The exec responsible for shipping it is tweeting about "phases of exploration and refocus" on the product she hasn't shipped yet.

That's what happens when you layer a Meta-style product org on top of an AI lab. Decisions go up. Shipping slows down. Announcements replace releases.

Anthropic's product announcements come from the people who wrote the code. OpenAI's come from the C-suite and the press. One of those loops compounds. The other one meetings.
Fidji Simo@fidjissimo

Companies go through phases of exploration and phases of refocus; both are critical. But when new bets start to work, like we're seeing now with Codex, it's very important to double down on them and avoid distractions. Really glad we're seizing this moment.

English
72
98
1.2K
246.6K
Justin retweeted
Arnaud Benard
Arnaud Benard@arnaudai·
We started Galileo AI in 2022, when generating user interfaces with AI felt impossible. We were early believers in a future where everyone creates beautiful UIs from a simple prompt. Since then, AI capabilities have grown exponentially, with Gemini models demonstrating incredible progress and opening up new scaling opportunities. Joining forces with Google is a natural fit, fueled by our shared vision to push AI boundaries for product builders.
English
1
3
108
21.9K
Justin retweeted
Arnaud Benard
Arnaud Benard@arnaudai·
I’m excited to share that: 1. Galileo AI has been acquired by @Google. 2. We launched today the next generation of our product, powered by Gemini: Stitch More on this big news below.
Arnaud Benard tweet media
English
118
174
3.7K
472.2K
erin griffith
erin griffith@eringriffith·
A detailed and brutal look at the tactics of buzzy AI compliance startup Delve "Delve built a machine designed to make clients complicit without their knowledge, to manufacture plausible deniability while producing exactly the opposite." substack.com/home/post/p-19…
English
156
240
3.1K
2.6M
weber
weber@weberwongwong·
excited to provide free FLORA accounts for students & faculty! education is very important for us especially since FLORA was founded out of an art & tech graduate program
Maxim Leyzerovich@round

flora.ai/edu

English
4
4
36
3.2K
Lenny Rachitsky
Lenny Rachitsky@lennysan·
Today I'm releasing my entire newsletter archive (350+ posts) and all podcast transcripts (300+ episodes) as AI-friendly Markdown files. Plus an MCP server and GitHub repo.

A few months ago I shared my podcast transcripts on a whim, and y'all built the most amazing things: an RPG game, a parenting wisdom site, infographics, a Twitter bot, and 50+ other projects. Let's see what happens when I give you even more data.

Grab the data here: LennysData.com. Paid subscribers get all of the data (some 350 posts and 300 transcripts). Free subscribers get a subset. I don't think anyone's ever done anything like this before, and I'm excited to give you this excuse to play with that AI tool you've been meaning to try.

Here's my challenge to you: build something, and let me know about it. I'll pick my favorite and give you a free 1-year subscription to the newsletter. Just post a link to your project in the comments here: lennysnewsletter.com/p/how-i-built-…. If you've already built something, slurp in this new data and submit it, too. I'll pick a winner on April 15th.

Check out today's newsletter post for inspiration on what you could build: lennysnewsletter.com/p/how-i-built-…

LFG.
Lenny Rachitsky tweet media
English
170
297
2.5K
599.1K
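Because the archive above is plain Markdown, it takes very little code to pull it into your own project. A minimal sketch, assuming you have downloaded the files into a local directory (the directory layout and file names here are hypothetical illustrations, not the actual LennysData.com structure):

```python
# Hypothetical sketch: load a local directory of Markdown posts into a
# dict and do a naive keyword search over it. Paths are assumptions.
from pathlib import Path


def load_posts(archive_dir):
    """Read every .md file under archive_dir into {relative_path: text}."""
    root = Path(archive_dir)
    return {
        str(p.relative_to(root)): p.read_text(encoding="utf-8")
        for p in root.rglob("*.md")
    }


def search(posts, term):
    """Return paths of posts whose text contains term (case-insensitive)."""
    needle = term.lower()
    return sorted(path for path, text in posts.items() if needle in text.lower())
```

From a corpus like this you could feed matching posts into whatever agent or retrieval setup you're experimenting with; an MCP server (as mentioned in the tweet) would expose the same data over a standard protocol instead of local files.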
Ryan
Ryan@ohryansbelt·
@eringriffith this is what happened for those that are curious x.com/ohryansbelt/st…
Ryan@ohryansbelt

Delve, a YC-backed compliance startup that raised $32 million, has been accused of systematically faking SOC 2, ISO 27001, HIPAA, and GDPR compliance reports for hundreds of clients. According to a detailed Substack investigation by DeepDelver, a leaked Google spreadsheet containing links to hundreds of confidential draft audit reports revealed that Delve generates auditor conclusions before any auditor reviews evidence, uses the same template across 99.8% of reports, and relies on Indian certification mills operating through empty US shells instead of the "US-based CPA firms" they advertise.

Here's the breakdown:
> 493 out of 494 leaked SOC 2 reports allegedly contain identical boilerplate text, including the same grammatical errors and nonsensical sentences, with only a company name, logo, org chart, and signature swapped in
> Auditor conclusions and test procedures are reportedly pre-written in draft reports before clients even provide their company description, which would violate AICPA independence rules requiring auditors to independently design tests and form conclusions
> All 259 Type II reports claim zero security incidents, zero personnel changes, zero customer terminations, and zero cyber incidents during the observation period, with identical "unable to test" conclusions across every client
> Delve's "US-based auditors" are actually Accorp and Gradient, described as Indian certification mills operating through US shell entities; 99%+ of clients reportedly went through one of these two firms over the past 6 months
> The platform allegedly publishes fully populated trust pages claiming vulnerability scanning, pentesting, and data recovery simulations before any compliance work has been done
> Delve pre-fabricates board meeting minutes, risk assessments, security incident simulations, and employee evidence that clients can adopt with a single click, according to the author
> Most "integrations" are just containers for manual screenshots with no actual API connections. The author describes the platform as a "SOC 2 template pack with a thin SaaS wrapper"
> When the leak was exposed, CEO Karun Kaushik emailed clients calling the allegations "falsified claims" from an "AI-generated email" and stated no sensitive data was accessed, while the reports themselves contained private signatures and confidential architecture diagrams
> Companies relying on these reports could face criminal liability under HIPAA and fines up to 4% of global revenue under GDPR for compliance violations they believed were resolved
> When clients threaten to leave, Delve reportedly pairs them with an external vCISO for manual off-platform work, which the author argues proves their own platform can't deliver real compliance
> Delve's sales price dropped from $15,000 to $6,000 with ISO 27001 and a penetration test thrown in when a client mentioned considering a competitor

English
2
0
35
18K
Ryan
Ryan@ohryansbelt·
Delve, a YC-backed compliance startup that raised $32 million, has been accused of systematically faking SOC 2, ISO 27001, HIPAA, and GDPR compliance reports for hundreds of clients. According to a detailed Substack investigation by DeepDelver, a leaked Google spreadsheet containing links to hundreds of confidential draft audit reports revealed that Delve generates auditor conclusions before any auditor reviews evidence, uses the same template across 99.8% of reports, and relies on Indian certification mills operating through empty US shells instead of the "US-based CPA firms" they advertise.

Here's the breakdown:
> 493 out of 494 leaked SOC 2 reports allegedly contain identical boilerplate text, including the same grammatical errors and nonsensical sentences, with only a company name, logo, org chart, and signature swapped in
> Auditor conclusions and test procedures are reportedly pre-written in draft reports before clients even provide their company description, which would violate AICPA independence rules requiring auditors to independently design tests and form conclusions
> All 259 Type II reports claim zero security incidents, zero personnel changes, zero customer terminations, and zero cyber incidents during the observation period, with identical "unable to test" conclusions across every client
> Delve's "US-based auditors" are actually Accorp and Gradient, described as Indian certification mills operating through US shell entities; 99%+ of clients reportedly went through one of these two firms over the past 6 months
> The platform allegedly publishes fully populated trust pages claiming vulnerability scanning, pentesting, and data recovery simulations before any compliance work has been done
> Delve pre-fabricates board meeting minutes, risk assessments, security incident simulations, and employee evidence that clients can adopt with a single click, according to the author
> Most "integrations" are just containers for manual screenshots with no actual API connections. The author describes the platform as a "SOC 2 template pack with a thin SaaS wrapper"
> When the leak was exposed, CEO Karun Kaushik emailed clients calling the allegations "falsified claims" from an "AI-generated email" and stated no sensitive data was accessed, while the reports themselves contained private signatures and confidential architecture diagrams
> Companies relying on these reports could face criminal liability under HIPAA and fines up to 4% of global revenue under GDPR for compliance violations they believed were resolved
> When clients threaten to leave, Delve reportedly pairs them with an external vCISO for manual off-platform work, which the author argues proves their own platform can't deliver real compliance
> Delve's sales price dropped from $15,000 to $6,000 with ISO 27001 and a penetration test thrown in when a client mentioned considering a competitor
Ryan tweet media
erin griffith@eringriffith

A detailed and brutal look at the tactics of buzzy AI compliance startup Delve "Delve built a machine designed to make clients complicit without their knowledge, to manufacture plausible deniability while producing exactly the opposite." substack.com/home/post/p-19…

English
286
502
6K
3.3M
Justin retweeted
Matt Shumer
Matt Shumer@mattshumer_·
DoorDash is laying the groundwork for a crazy move here. Agents will be able to 'hire' humans to do tasks for them in the real world. And this will collect insane amounts of training data for robotics. Kind of genius, kind of terrifying.
Andy Fang@andyfang

Introducing Dasher Tasks Dashers can now get paid to do general tasks. We think this will be huge for building the frontier of physical intelligence. Look forward to seeing where this goes!

English
121
119
2.1K
499K