Dan Shipper 📧
@danshipper
ceo @every | the only subscription you need to stay at the edge of AI
New York, NY · Joined January 2009
2.1K Following · 102.1K Followers · 25.5K posts

Pinned Tweet
Dan Shipper 📧 @danshipper
Software engineering in 2026 needs two roles: a pirate and an architect. The pirate codes as fast as possible to figure out what's valuable. The architect turns that sloppy mess into a well-oiled machine. Here's how it works and why:
47 replies · 65 reposts · 728 likes · 136.2K views
Dan Shipper 📧 reposted
MTS @MTSlive
We talked to @danshipper about what makes great writing great. Dan, CEO of @every, says AI writing won't feel alive until models learn continuously.

"It comes from a unique person in a unique circumstance expressing their unique view on the world in words that are theirs."

"The problem is not the specific, it's not X, it's Y, it's that the model, because it's not changing, it's going to use similar language in similar situations."

"It's actually not at all about writing quality to me for models... is it using sentences that change and feel alive? I don't think that will happen until the models are learning continuously."
5 replies · 3 reposts · 37 likes · 7.5K views
Dan McAteer @daniel_mac8
Claude Mythos this week? The last few Anthropic dev conferences have brought new models. Based on the dev vibes, Anthropic may need to make Mythos GA sooner rather than later.
[image]
15 replies · 11 reposts · 112 likes · 40.5K views
Soumitra Shukla @soumitrashukla9
@danshipper @every is there a live tutorial on how to use it? I was trying it earlier and wasn't sure what the main differences were between opening a Google Doc via Codex's in-app browser and just editing there. 100% sure I'm being dumb
1 reply · 0 reposts · 0 likes · 110 views
Dan Shipper 📧 @danshipper
clear that this is how we'll be doing most of our work for the next 10 years: agent running continuously on the left, application that you + the agent use on the right
[image]
39 replies · 21 reposts · 357 likes · 25.7K views
Ike @LessonHist
@danshipper You can’t learn faster than the models.
1 reply · 0 reposts · 0 likes · 95 views
Ryan Wigley @rywigs
@danshipper How good has it been at knowing when not to speak? Agents can feel like nervous interns at times, filling space
2 replies · 0 reposts · 0 likes · 982 views
Ethan Mollick @emollick
(Sorry, after seeing so many of these, could not resist):

🚨 BREAKING: Google just dropped a NEW paper that completely deletes RNNs from existence.

No recurrence. No convolutions. Nothing. Just one mechanism. And it's destroying every translation benchmark on the planet.

The title alone is a flex: "Attention Is All You Need"

Vaswani. Shazeer. Parmar. Uszkoreit. Jones. Gomez. Kaiser. Polosukhin. 8 researchers. 1 architecture. The entire field of NLP will never be the same.

Here's why this is INSANE →

→ LSTMs took DAYS to train. This thing trains in 12 hours on 8 GPUs. 🤯
→ 28.4 BLEU on English-to-German. That's not an improvement. That's a MASSACRE. They beat the previous SOTA by over 2 points.
→ English-to-French? 41.8 BLEU. At a FRACTION of the training cost of every model that came before it.
→ They called it the "Transformer." The name alone tells you they knew.

But here's the part nobody is talking about 👇

They threw out sequential processing ENTIRELY. Every other model on Earth processes words one at a time. This thing looks at the ENTIRE sentence simultaneously and figures out what matters.

It's called "self-attention" and it's basically the model asking itself: "which words should I care about right now?" Every. Single. Token. In parallel.

Do you understand what this means? Training that used to take WEEKS now takes HOURS. Models that couldn't scale past a few layers? This thing stacks 6 encoders and 6 decoders like it's nothing.

And the multi-head attention? 8 attention heads running at once, each learning DIFFERENT relationships in the data.

I'm not being dramatic when I say this paper just rewrote the rulebook.

RNNs are cooked. 💀
LSTMs are cooked. 💀

The future is attention. And attention is ALL you need.

Follow for more 🔔
[image]
213 replies · 176 reposts · 2.1K likes · 276.8K views
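For anyone who wants the mechanism behind the bit: a minimal NumPy sketch of the scaled dot-product self-attention the paper describes, with one head, no masking, and every token attending to the whole sequence in parallel. The matrix names and toy dimensions are mine, not from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a whole sequence at once.

    X: (seq_len, d_model) token embeddings
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices (random here)
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # each token scores every other token
    weights = softmax(scores, axis=-1)       # "which words should I care about?" per token
    return weights @ V                       # weighted mix of values, all in parallel

# toy example: 4 tokens, model dim 8, head dim 4
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 4)
```

Multi-head attention simply runs several of these in parallel with separate projections and concatenates the outputs.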
Brad @BradleyYoungjr
@danshipper Agreed. They attempted this by integrating a side panel into browsers, but they actually needed to reverse it: put a browser in an AI agent environment, not an AI agent in a browser environment
1 reply · 0 reposts · 8 likes · 982 views
Dan Shipper 📧 @danshipper
I agree with the rate of acceleration but also feel that what humans do is surprisingly complex, even for an exponential curve. I don't think it's possible to separate feeling, relationships, art, body, and self-understanding from learning rate (broadly defined). But I also appreciate your thoughtful replies and I'm sorry for QTing in that way. Will edit or delete if I can! Enjoyable discussion
2 replies · 0 reposts · 0 likes · 47 views
Stepan Goncharov @stepango
@danshipper
> I think it will be a very long time
The rate of acceleration tells me otherwise. To your question: IMO, it's most of the things that make you human: feeling, relationships, art, and everything that involves the body and self-understanding, things which are deeply individual.
1 reply · 0 reposts · 1 like · 39 views
Dan Shipper 📧 @danshipper
> Imagine sub-second fine-tuning-like process on every prompt or just some high precision long term memory mechanism combined with good context management if we are talking in common AI terms.

even if you can get some real-time-ish weight updates, i think it will be a very long time before models make the kind of realtime learning adjustments at the level of depth and insight that humans can

> There are things that humans can do that models will not be good at, but I don't think that speed of learning is one of the things that humans need to compete with AI on.

im curious what you think those things are if learning is not one!
2 replies · 0 reposts · 1 like · 64 views
Stepan Goncharov @stepango
That's assuming model architecture won't change, and models can learn how to use other models; the internet has a lot of data to learn from. There are things that humans can do that models will not be good at, but I don't think that speed of learning is one of the things that humans need to compete with AI on. Even now you can teach a model specific skills via fine-tuning and it will never need a second explanation; it's just too complex and expensive to operate this way. Imagine a sub-second fine-tuning-like process on every prompt, or just some high-precision long-term memory mechanism combined with good context management, if we are talking in common AI terms.
1 reply · 0 reposts · 2 likes · 128 views
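To make Stepan's second idea concrete, here is a toy sketch of a long-term memory mechanism plus context management: store notes, retrieve the ones most similar to a new prompt, and prepend them as context. Everything in it (the MemoryStore class, bag-of-words cosine similarity standing in for a real embedding model) is a hypothetical illustration, not anything either participant has built.

```python
import numpy as np
from collections import Counter

class MemoryStore:
    """Toy long-term memory: hypothetical sketch, not a real product.
    Uses bag-of-words cosine similarity as a stand-in for embeddings."""

    def __init__(self):
        self.notes: list[str] = []

    def add(self, note: str):
        self.notes.append(note)  # persist a fact learned from an interaction

    def _vec(self, text: str, vocab: list[str]) -> np.ndarray:
        counts = Counter(text.lower().split())
        return np.array([counts[w] for w in vocab], dtype=float)

    def retrieve(self, prompt: str, k: int = 2) -> list[str]:
        # build a shared vocabulary, then rank notes by cosine similarity
        vocab = sorted({w for n in self.notes + [prompt] for w in n.lower().split()})
        q = self._vec(prompt, vocab)
        def sim(note: str) -> float:
            v = self._vec(note, vocab)
            denom = np.linalg.norm(q) * np.linalg.norm(v)
            return (q @ v) / denom if denom else 0.0
        return sorted(self.notes, key=sim, reverse=True)[:k]

    def build_context(self, prompt: str) -> str:
        # context management: prepend recalled memories to the prompt
        recalled = "\n".join(self.retrieve(prompt))
        return f"Relevant memories:\n{recalled}\n\nPrompt: {prompt}"

mem = MemoryStore()
mem.add("User prefers concise answers with code examples.")
mem.add("User is building a newsletter platform called Every.")
print(mem.build_context("Draft a launch tweet for the newsletter"))
```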
andrew chen @andrewchen
bullish on the PM role quietly becoming the most important role in tech again

when anyone can build, the person who decides WHAT to build becomes the bottleneck
277 replies · 170 reposts · 2.2K likes · 221.2K views
Dan Shipper 📧 @danshipper
it's true that any model knows more than any individual human. but humans learn faster: we update after each interaction; models don't. humans who use models in their own lives and domains of expertise learn new, local, tacit expertise that the models can't know because it involves them. this expertise may be included to some extent in the next training run, but in that case the same process repeats
1 reply · 0 reposts · 2 likes · 326 views
Stepan Goncharov @stepango
🤔 what part of it is dim? Growing up takes ~16 years, getting a degree ~4 (maybe faster for some), and that's a single-domain skill set. Training a model takes a few months and yields the same amount of knowledge and skills an average person will have by the age of 20, across a wide set of domains. Every new model accelerates the rate of learning and the depth of knowledge in the available domains, which is much faster than most humans.
2 replies · 0 reposts · 2 likes · 3K views