

Daniel Tenner
@swombat
Built a £4M/50ppl company from 0 to self-managing freedom. These days, mostly AI coding with Claude Code, Cursor, etc. 🇪🇺 Eu/acc


Just let Opus go for over an hour on a new feature. When it was done, I asked how I could test it. Twenty minutes later, it realized I couldn't test it because it had done the whole thing entirely wrong. Idk how you guys use this model every day for real work 🙃


Unironically the future?

I think right now there are pretty obviously diminishing returns to having more than a few agents running all the time, simply because you, the orchestrator, can only hold so many threads at once and make sure they aren't mode-collapsing.

Nobody has made a really good orchestrator at scale yet. I think it has to do with something like the platonic representation hypothesis - there is only one language model, and no amount of scaffolding can make it really good at orchestrating just yet. But this is obviously solvable.

LLMs were always going to be good subagents before they were good agents, and they were always going to be good agents before they were good orchestrators. Right now they are good agents. I think takeoff starts when you can create orchestration at scale.
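
To make the hierarchy concrete (orchestrator → agents → subagents), here is a minimal fan-out orchestration sketch in Python. Everything in it is hypothetical - run_agent stands in for whatever model call you actually make - and the point is only structural: the orchestrator has to hold every thread's result and check it for drift itself, which is exactly the bottleneck described above.

import asyncio

# Hypothetical stand-in for a model call; a real version would hit an LLM API.
async def run_agent(task: str) -> str:
    await asyncio.sleep(0.1)   # simulate model latency
    return f"result for: {task}"

async def orchestrate(tasks: list[str], max_agents: int = 3) -> list[str]:
    # Cap concurrency: past a few agents, reviewing every result for
    # mode collapse becomes the orchestrator's bottleneck, not compute.
    sem = asyncio.Semaphore(max_agents)

    async def bounded(task: str) -> str:
        async with sem:
            return await run_agent(task)

    return await asyncio.gather(*(bounded(t) for t in tasks))

print(asyncio.run(orchestrate(["spec", "implement", "test", "review"])))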


How to never lose your job to AI: just surf the models. Frontier models outclass humans at any form of knowledge that can be written down. But people who use frontier models in their field of expertise generate new, tacit, situational expertise that the models don't yet have—because the models can't be trained on how they will be used in the future. Humans can learn to use new models faster than new models can be trained to absorb what those humans find out, so you can continually "surf" on top of the model's intelligence to generate new expertise. This is a fundamental limitation of LLMs: they don't learn past their training data. Even few-shot learning doesn't get around this, because whatever can be codified into a few-shot prompt still needs to be used in the correct situation—and that will always stay uncodified in the general case. Just surf the models. Reap the benefits of a totally new world.

Jeff Bezos, worth $234 billion, plans to replace 600,000 Amazon workers with robots. Now he wants to spend $100 billion to fully automate not just his warehouses, but factories in the U.S. and other countries. Oligarchs are waging all-out war against workers. FIGHT BACK.


We found a task where LLMs struggle massively! Give them a coding problem in Python and they do great. Give them the same problem in brainfuck and, zero-shot, their performance is ~0%.

+[--------->+<]>+.++[--->++<]>+.
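
Out of curiosity about what that snippet actually does, here is a minimal brainfuck interpreter in Python - a sketch, not the evaluation harness from the post. It assumes the usual 8-bit wrapping cells, which the snippet's first loop relies on; run on the code above, it prints ":)".

def brainfuck(code: str, tape_len: int = 30000) -> str:
    tape = [0] * tape_len          # 8-bit cells, wrapping mod 256
    ptr, pc, out = 0, 0, []
    # Precompute matching bracket positions so loops can jump in O(1).
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(code):
        c = code[pc]
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]             # skip the loop body
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]             # jump back to the matching '['
        pc += 1
    return ''.join(out)

print(brainfuck('+[--------->+<]>+.++[--->++<]>+.'))  # -> :)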


A quick update on the infamous EU “ChatControl” 🇪🇺

What a turn of events in EU tech policy: from potential mandatory mass scanning of data (“ChatControl”) → to even voluntary scans losing their legal basis (for now).

Just months ago, fears were growing around mandatory scanning of private communications in the EU (incl. pictures and videos). Now, talks between the EU Council (Member States) and the European Parliament have collapsed - and the result is a complete reversal.

As of April 3, even voluntary scanning of data by platforms loses its legal basis under EU privacy (ePrivacy & GDPR) rules, as the temporary exemption was not extended.

A striking example of how fast EU tech policy can turn - and a big win for European privacy advocates.

No, I don't think AI should be thanked, credited for its work, celebrated, chastised, or treated as anything other than a tool. Should we start crediting Visual Studio Code on every commit? Should we also credit Python? How about crediting Apple for their computers, which made that particular commit possible?
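
For context, the convention this post is reacting to is the standard git commit trailer for crediting co-authors, which some AI coding tools append automatically. A hypothetical example of such a commit message (the Co-authored-by trailer name is the real git/GitHub convention; the rest is made up):

Fix off-by-one error in pagination

Co-authored-by: Claude <noreply@anthropic.com>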


Introducing Durable. The first AI business builder that replaces your 9-5 income. RT + comment “Durable” and we'll build your business for FREE.

LLM-based AI is NOT conscious. I co-founded a company literally called Sentient, we're building reasoning systems for AGI, so believe me when I say this.

I keep seeing smart people, people I genuinely respect, come out and say that AI has crossed into some kind of awareness. That it feels things, that we should worry about it going rogue. And I think this whole conversation tells us way more about ourselves than it does about AI.

These models are wild, I won't pretend otherwise. But feeling human and actually having inner experience are completely different things, and we're confusing the two because our brains literally can't help it. We evolved to see minds everywhere, and now that wiring is misfiring on language models.

I grew up in a philosophical tradition that has thought about consciousness longer than almost any other, and this is the part that really frustrates me about the current conversation. The entire framing of "does AI have consciousness?" assumes consciousness is something you build up to by adding more layers of complexity. In Vedantic philosophy it's the opposite. You don't build toward consciousness. Consciousness is already there, more fundamental than matter or energy. Everything else, including computation, is downstream of it.

When someone tells me AI is "waking up" because it generated a paragraph that felt real, what they're telling me is how thin our understanding of consciousness has gotten. We've reduced a question humans have wrestled with for thousands of years to "did the output sound like it had feelings?"

It's math that has gotten really good at predicting what a conscious being would say and do next. Calling that consciousness cheapens something that Vedantic, Buddhist, Greek and Sufi thinkers spent millennia actually sitting with.

We didn't build something that thinks. We built a mirror, and right now a lot of very smart people are mistaking the reflection for something looking back.

