Dan Shipper 📧
@danshipper
24.8K posts

ceo @every — the only subscription you need to stay at the edge of AI

New York, NY · Joined January 2009
2.1K Following · 95.9K Followers

Pinned Tweet
Dan Shipper 📧 @danshipper
BREAKING: Proof—a new product from @every

It’s a live collaborative document editor where humans and AI agents work together in the same doc. It's fast, free, and open source—available now at proofeditor.ai.

It’s built from the ground up for the kinds of documents agents are increasingly writing: bug reports, PRDs, implementation plans, research briefs, copy audits, strategy docs, memos, and proposals.

Why Proof? When everyone on your team is working with agents, there's suddenly a ton of AI-generated text flying around—planning docs, strategy memos, session recaps. But the current process for collaborating and iterating on agent-generated writing is…weirdly primitive. It mostly takes place in Markdown files on your laptop, which makes it reminiscent of document editing in 1999. Proof lets you leave .md files behind.

What makes Proof different?
- Proof is agent-native: Anything you can do in Proof, your agent can do just as easily.
- Proof tracks provenance: A colored rail on the left side of every document tracks who wrote what. Green means human, purple means AI.
- Proof is login-free and open source: This is because we want Proof to be your agent's favorite document editor.

Check it out now, for free—no login required: proofeditor.ai
110 · 99 · 1.5K · 571.4K
Dan Shipper 📧 @danshipper
@stevekrouse @every not really, the point of the guide isn't to sound like katie! it's to capture as much information as possible about making writing that sounds like katie—quite a different use case. not all writing needs to sound like a human!
1 · 0 · 2 · 347
Steve Krouse @stevekrouse
@danshipper @every this kinda undercuts the point of the article, no? why list claude as an author if it's written in katie's voice?
[image attached]
2 · 0 · 2 · 533
Dan Shipper 📧 @danshipper
i asked composer 2 to optimize my production QA process and pitted it against gpt-5.4. composer 2's response won (as judged by both 5.4 and opus 4.6):
[image attached]
10 · 7 · 96 · 7.5K
Cursor @cursor_ai
Composer 2 is now available in Cursor.
[image attached]
452 · 724 · 8.1K · 2.7M
Dan Shipper 📧 retweeted
Katie Parrott @kplikethebird
I've been waxing rhapsodic about the value of an AI writing style guide for long enough that it felt rude not to write up a guide on how to make one. So we did. Now on @every every.to/guides/how-to-…
0 · 4 · 16 · 1.8K
Dan Shipper 📧 @danshipper
How to never lose your job to AI: Just surf the models.

Frontier models outclass humans at any form of knowledge that can be written down. But people who use frontier models in their field of expertise generate new, tacit, situational expertise that the models don't yet have—because the models can't be trained on how they will be used in the future.

Humans can learn to use new models faster than new models can be trained to absorb what they find out, so you can continually "surf" on top of the model's intelligence to generate new expertise.

This is a fundamental limitation of LLMs, because they don't learn past their training data. Even few-shot learning doesn't account for this: whatever can be codified into a few-shot prompt still needs to be used in the correct situation—and that will always stay uncodified in the general case.

Just surf the models. Reap the benefits of a totally new world.
44 · 36 · 344 · 24.7K
Dan Shipper 📧 @danshipper
every is such a special place to work because i randomly bump into beautiful things like this in figma while looking for something else
[image attached]
1 · 0 · 29 · 2.5K
Yongrui Su @ysu_ChatData
Strong framing. The advantage is not memorizing model facts; it is building new workflows faster than the labs can productize them. The part that compounds is domain-specific know-how: what actually works in your team, stack, and customer context. That is much harder to pretrain than people think.
1 · 0 · 5 · 575
Taayjus @taayjuss
@danshipper true, but it's a treadmill... the moment that expertise gets written down or documented anywhere, it becomes training data. the lead keeps shrinking
1 · 0 · 1 · 148
Blake Urmos @blakeurmos
@danshipper actually love this perspective... we will see if it holds up. I'm certainly not taking any chances and spend most of my time building and learning with the latest models.
1 · 1 · 3 · 150
Mingta Kaivo 明塔 开沃
@danshipper the tacit knowledge moat compounds. AudioWave workflows I run today didn't exist 6 months ago. no training data for how we use the model. by the time any model catches up, the workflow evolved again.
1 · 0 · 4 · 487
AISauce @aisauce_x
the most underrated part of this framing is that it makes learning a competitive advantage again. not learning facts. learning how to apply new tools in specific contexts faster than anyone else. the people who figure out how to use the model in their domain first set the ceiling for everyone else
1 · 0 · 6 · 494