Bennett Hoffman

15.6K posts

Bennett Hoffman

@bennhoffman

humans are my spirit animal. https://t.co/MZdP929Y8A

New York, NY · Joined July 2009
814 Following · 1.5K Followers
Pinned Tweet
Bennett Hoffman
Bennett Hoffman@bennhoffman·
Use the base model, not an instruct tuned one. Learn the subtle arts of evocation.
0
1
14
2.7K
Bennett Hoffman retweeted
🎭
🎭@deepfates·
> All knowledge workers will feel [the superhuman power of AI]. A lot of you already do, you're just hiding it from your boss so you can have more free time while "working from home". This is what I call "dark productivity".. the reason we don't see economic effects as much yet
🎭@deepfates

You might think the "agents" thing is just coming for software engineers. Yeah, agents write code, and code sells a bunch of tokens. But most people's work isn't code, it's memos or decks or whatever.

Why this is false: agents can do anything you can do on a computer, and they do it by spending output tokens to write code. The number of keypresses used by a consultant to do a task is not a good measurement of the number of tokens an agent would use.

For example: one "deep research" report might be 20 pages of output tokens. But it also might have required more than 20 pages of output tokens to do all the searches, fetches, PDF parsing and interim summaries that you never even see as the user. It also had to input all the tokens of every document it read in searching, likely more than 20 pages, since the point of the report is to collect and summarize this information. So now we're at 3x tokens for the final output.

That one report is so cheap, and so fast, that now you can do more research than ever. This is valuable! If your business relies on having good information about the world, you can probably find a way to make more money by doing 3 deep research reports and then synthesizing them. More tokens!

Now you've kicked off three deep research reports, you deserve a little treat, right? So you fire up your browser agent and tell it: go find me some nice linen shirts for summer in my size, open them in tabs so I can look through. Well, your browser agent has to interact with the browser using some kind of tool, and you know what that tool is? Code, baby. Tokens.

And the tokens are so cheap. You've got to understand: we're spending a lot in the aggregate, but in the moment it is "spend a nickel for 10 minutes of being literally Superman". Like, yes, I'll just keep spending nickels, actually. I will never stop being Superman at that price.

All knowledge workers will feel this.
A lot of you already do; you're just hiding it from your boss so you can have more free time while "working from home". And maybe it's better to protect yourselves from Jevons as long as possible, because once you get the bug it's hard to stop. You realize that you could be creating all of the businesses and projects and art you ever wanted, and all you've got to do is put your instructions in the right order and put the nickels in the bag.

I would happily bet against Anthropic's revenue spike being a brief "sugar high". So would most capital allocators! That is because they have already seen that software can eat the world. White-collar knowledge work fundamentally changes in the face of agent economics, and entirely new forms of knowledge production emerge. It's happened already in finance: high frequency trading. Now it's happening in tech: high frequency software. Then we will have high frequency science, high frequency governance, high frequency engineering, high frequency medicine and high frequency law.

Human society is about to be absolutely DDOSed by information at all levels of the stack. Our civilization was never meant to handle this many tokens. If anything can be done on a computer, it will be turned into tokens instead of human actions, and it will happen faster and in parallel.

This stuff works, it is real, it is getting better. It is going to hit economically and socially this year, and nobody is ready. I think it is important to start taking it seriously, instead of finding ever more arbitrary reasons to remain in denial.
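The "3x tokens" arithmetic in the thread can be sketched as a back-of-envelope calculation. All token counts and prices below are hypothetical illustrations, not figures from the thread:

```python
# Back-of-envelope sketch of the thread's "3x tokens" claim.
# All numbers are hypothetical illustrations, not measured values.

PAGE_TOKENS = 500  # assumed tokens per page of text

report_pages = 20
visible_out = report_pages * PAGE_TOKENS  # the report the user actually reads
hidden_out = report_pages * PAGE_TOKENS   # searches, tool calls, interim summaries
input_toks = report_pages * PAGE_TOKENS   # documents the agent read along the way

total = visible_out + hidden_out + input_toks
ratio = total / visible_out  # three times the tokens of the final report

# At hypothetical prices of $3 per million input tokens and $15 per million
# output tokens, the whole run still costs well under a dollar.
cost = input_toks * 3 / 1e6 + (visible_out + hidden_out) * 15 / 1e6
print(ratio, round(cost, 2))  # -> 3.0 0.33
```

The point of the sketch is that even at 3x the visible tokens, a full deep-research run lands in "nickels" territory, which is the economic core of the thread's argument.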

5
10
123
6.6K
Bennett Hoffman retweeted
robot
robot@alightinastorm·
the anti AI crap in gaming is retarded and the people are misled. there is a weird assumption where somehow a specific abstraction point pre-AI is magically considered skill based. you can literally buy blueprints on unreal engine, put some glue between them and build your game, which is actually much easier than directing and instructing an AI to build things from scratch together. make it make sense
Blake Robbins@blakeir

games are commercial art built with software. i’m not sure there’s another software industry as resistant to AI as gaming.

38
7
181
10.8K
Bennett Hoffman
Bennett Hoffman@bennhoffman·
I do think that there is a very good chance that, at least for bio-class problems, meta-optimization is (at least one of) the bottlenecks. By which I mean becoming much better at choosing which experiments to run in the first place. AlphaFold provides a nice model: it hasn't "solved" the physics of protein folding, but it has helped radically narrow the search space. There's some boundary, likely poorly delineated, between closed, game-like problems (chess, go) and unconstrained problems (physics, a poet's soul). Most of biology outside of protein folding looks like the latter, but when you give it an experimental platform that scales horizontally, it will become much more like the former.
0
0
0
14
Devon Eriksen
Devon Eriksen@Devon_Eriksen_·
@RokoMijic This would certainly be worth trying. But it's not a refutation of the possibility that intelligence might not be the bottleneck.
2
0
16
406
Devon Eriksen
Devon Eriksen@Devon_Eriksen_·
A few possible counterpoints here.

First of all, "cancer" isn't one thing. It's a blanket catch-all term for all the ways a cell can go wrong which cause unrestricted growth. So beating cancer actually means finding better and better ways to detect and remove those cells. This process is already happening and has been happening for some time. And unless democrats win a lot of elections and bust us back to the preindustrial age, there will certainly be a time when we can look back and say "we've beaten cancer", because cancer will have become a minor inconvenience requiring a house call and an injection. There's not going to be one moment where that suddenly happens. No great breakthrough, no eureka moment, just thousands of little inventions.

But that's not the central question. The central question is "how much could a hypothetical AI that is smarter than the smartest human help with that?" The answer is certainly not zero. IQ matters. If it didn't, the third world would not be the third world. But if we assume that IQ matters infinitely, and that our hypothetical superintelligence could beat all cancers with the flick of a metaphorical wrist, then that idea carries the hidden assumption that intelligence is the bottleneck in this whole process. I'm not sure we can just jump to that conclusion.

Intelligence is the ability to analyze and understand data, which means the quality of your thinking is only as good as the quality of your data. Let's take a different hypothetical which I think will make this clearer: could a superintelligence solve physics? For that one, I would answer "no". Because in physics, hypotheses are derived with math and then verified by experiment. And as we have recently seen, it is very easy for physicists as a group to come up with infinite varieties of string theory and other such things, hypotheticals which explain everything... but then fail when experimentally tested.

The standard practice in modern physics is to move the goalposts and apply for another grant. So how would super-AI "solve" physics? No matter how brilliant it was, it would also have to test its ideas, requiring very complex and expensive hardware to do so. In physics, that's the bottleneck.

Now, would super-AI, acting in other areas, transform the economy and make those experiments cheaper? Yes, maybe. But that's beside the point. The point is that physics cannot be solved with intelligence alone. Which might be true in other areas as well.

Now, some people object to this line of thinking. They protest that a super-AI could run experiments in simulation. Well, okay, that's great for doing engineering in areas where the relevant physics is already understood. But to verify hypotheses in fundamental physics, you would need a model that simulates what you are trying to test, which would only be possible if you already had the understanding you are trying to achieve. Otherwise, the simulation would be like the string theory hypothesis... internally consistent, but not necessarily connected to reality in any meaningful way.

Life is an IQ test, yes. But life is not a test of IQ alone.
Roko 🐉@RokoMijic

Miller's take that "Superintelligence" won't cure cancer and death is going to age really badly.

Firstly, it's wrong by definition. "Superintelligence" means an AI that greatly exceeds the cognitive performance of humans in all domains. That is literally the definition of "Superintelligence". So it must greatly exceed humans at curing cancer specifically. Presumably Miller thinks that humans are capable of curing cancer in principle (otherwise, why do we devote human researchers to this task?), therefore by definition any "Superintelligence" must be able to cure cancer.

Secondly, Miller starts bounding the capabilities of "Superintelligence" by comparing it to contemporary LLM-based systems. There are two ways this could go: either LLM-based systems are not capable of curing cancer, in which case they will never achieve "Superintelligence"; or sufficient improvements may yield LLM-based systems that do actually cure cancer, in which case they might make it to "Superintelligence" (or might not, if they are bad at some other task). I think people like Geoffrey Miller should just stop talking about "Superintelligence" if they are going to abuse the term like this.

But set aside the definitional games: maybe AI systems that we can actually build will be bad at biomedical science? This is certainly the case today. Modern LLM-based systems are good at coding and at commonsense and generic research tasks, but not that good at anything else. LLMs work well when they get fast feedback. But so do humans. Anyone sufficiently intelligent can get good at math and coding; getting good at biology requires a lot of equipment. We haven't really connected modern AIs to automated labs yet. When we do, I expect significant progress, just as we saw progress when we connected AI to the internet.

In a way, LLMs are just the result of connecting the preexisting AI stuff to large-scale data. We already had neural language models in 2015. I used to work on language models, just before LLMs took off. Small language models are not impressive or that useful. So I have seen a full cycle of this playing out over a decade. x.com/gmiller/status…

25
8
228
12.6K
Bennett Hoffman retweeted
Todd Saunders
Todd Saunders@toddsaunders·
I know Silicon Valley startups don't want to hear this... but the combination of someone in the trades with deep domain expertise and Claude Code will run circles around your generic software.

I talked to Cory LaChance this morning, a mechanical engineer in industrial piping construction in Houston. He normally works with chemical plants and refineries, but now he also works with the terminal. He reached out in a DM a few days ago, and I was so fired up by his story that I asked him if we could record the conversation and share it.

He built a full application that industrial contractors are using every day. It reads piping isometric drawings and automatically extracts every weld count, every material spec, every commodity code. Work that took 10 minutes per drawing now takes 60 seconds. It can do 100 drawings in five minutes, saving days of time. His co-workers are all mind-blown, and when he talks to them, it's like they are speaking different languages.

His fabrication shop uses it daily, and he built the entire thing in 8 weeks. During those 8 weeks he also had to learn everything about Claude Code, the terminal, VS Code, everything. My favorite quote from him: "I literally did this with zero outside help other than the AI. My favorite tools are screenshots, step by step instructions and asking Claude to explain things like I'm five."

Every trades worker with deep expertise and a willingness to sit down with Claude Code for a few weekends is now a potential software founder. I can't wait to meet more people like Cory.
343
688
7.2K
946.9K
Rohan Paul
Rohan Paul@rohanpaul_ai·
Citadel Securities published this graph showing a strange phenomenon: job postings for software engineers are actually seeing a spike. The graph here is short-term, but it's still super interesting and really strange. Is it Jevons paradox at play? When AI makes coding cheaper, companies may actually need a lot more software engineers, not fewer. When software is cheaper to build, companies naturally want to build a lot more of it. Businesses are now putting software into industries and tools where it was simply too expensive before.

Chart from citadelsecurities.com/news-and-insights/2026-global-intelligence-crisis/
Rohan Paul tweet media
16
13
71
65.1K
Bennett Hoffman retweeted
Devon Eriksen
Devon Eriksen@Devon_Eriksen_·
Professional basketball players are tall. That doesn't mean playing basketball will make you taller. It means that you will be more successful at playing basketball if you are tall.

And great achievers don't ask questions like "who am I?" or "what is my purpose?", not because they don't need or don't care about the answers, but because they already have those answers. Whether they got those answers from a whole lot of navel-gazing, or whether the questions were easy for them, really doesn't matter much.

Mindlessly aping randomly selected traits of high achievers, when you don't know which traits are causal, which are controllable, and which are both, is not a path to great achievement. It's a path to a cargo cult, where you are waving two flags around in front of a radar dish made of sticks, waiting for John Frum to land the magic plane.

Even great achievers themselves usually don't know what it is about them that made them successful, much less whether and how these traits can be imitated by others. They are as much in the dark as the rest of us, except that their success and celebrity status can sometimes imbue them with a false sense of certainty over whatever notion they have.

Unusual achievements are, by definition, unusual. If we knew how to systematically duplicate them (do this, don't do that), they would not be unusual. They would be the baseline.

Sure, it pretty much has to be possible to investigate and learn how humans can be more productive, successful, accomplished. But interviewing a bunch of accomplished people is not a very fruitful way to get there. You don't know which things they say are important, and neither do they. If you want to learn something, run an experiment.
Marc Andreessen 🇺🇸@pmarca

And then he [squints, checks notes] went back to work and built NeXT and Pixar.

35
41
697
40.1K
Bennett Hoffman retweeted
Crémieux
Crémieux@cremieuxrecueil·
Incredible! All the headlines about how vegetarian diets prevent cancer were based on a pooled cohort study where they forgot to correct for multiple comparisons. When you correct for them, the only surviving association is vegetarianism increasing one type of cancer's risk.
Adam Rochussen@AdamRochussen

Big headlines the other week about this huge (1.8 million people, 3 continents! Wow!) study out of Oxford looking at the effect of different diets on cancer risk. Vegetarianism cures cancer!!!

Just one problem: that's not what the data show. The study (nature.com/articles/s4141…) makes its big claims based on unadjusted p-values (which aren't even numerically reported anywhere in the main paper). But as anyone with a brain knows, performing 80 different hypothesis tests is bound to produce some false positives. The authors adjust for false discoveries, but don't really take it into account when discussing their data. They also perform a sensitivity analysis, but again ignore the findings when discussing their results.

Journalists then picked up the narrative-convenient "significant" findings (while simultaneously ignoring inconvenient significant findings). BBC, Sky News and The Independent all reported the same claim: "A vegetarian diet can slash the risk of five types of cancer by as much as 30%, a new study has found."

Okay. But of the original 11 nominally significant findings in the study, which made it through both multiple-comparisons adjustment and sensitivity analysis? Just the one. Which one? Risk of oesophageal squamous cell carcinoma in vegetarians versus meat eaters. HR=1.93 (95% CI: 1.30-2.87). Yup.
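The multiple-comparisons point above is mechanical enough to demonstrate. Here is a minimal sketch of the Benjamini-Hochberg false-discovery-rate procedure, one common adjustment for running many hypothesis tests at once; the p-values are made up for illustration, not taken from the study:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Which hypotheses survive a Benjamini-Hochberg FDR correction?

    Returns a list of booleans, one per input p-value.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, smallest p first
    # Find the largest rank k such that p_(k) <= (k / m) * alpha.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    # All hypotheses up to rank k_max are declared discoveries.
    survives = [False] * m
    for rank, i in enumerate(order, start=1):
        survives[i] = rank <= k_max
    return survives

# Four hypothetical tests: three look "significant" at the naive 0.05 cutoff,
# but only the very small p-value survives the correction.
print(benjamini_hochberg([0.001, 0.04, 0.03, 0.5]))  # -> [True, False, False, False]
```

This is exactly the failure mode the thread describes: with 80 tests, a handful of nominally significant p-values is expected by chance, and most of them vanish once the threshold is adjusted for the number of comparisons.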

29
230
2.5K
127.8K
Bennett Hoffman retweeted
patagucci perf papi
patagucci perf papi@kenwheeler·
am i alone in feeling like anyone holding openclaw up as some kind of moated innovation has lost the plot entirely
Alex Volkov@altryne

"Every software company in the world, needs to have an @openclaw strategy" - Jensen at @NVIDIAAI GTC Framing OpenClaw as one of the most important open source releases ever, they have announced NemoClaw - a reference platform for enterprise grade secure Openclaw, with OpenShell, Network boundaries, security baked in.

105
41
1.3K
62.1K
Variety
Variety@Variety·
The creator of #KPopDemonHunters, Maggie Kang, dedicates her #Oscar “to Koreans everywhere”: “I am so sorry that it took so long to see us in a movie like this, but it is here. And that means that the next generations don’t have to go longing.” (via ABC/AMPAS) variety.com/2026/film/news…
389
920
3.3K
5.4M
Bennett Hoffman retweeted
Crémieux
Crémieux@cremieuxrecueil·
Wow, this study is devastating for cynicism. Here's a TL;DR:

In studies 1–3, participants indicated they thought cynics would do better on cognitive tasks. In studies 4–5, cynics were tested, and 1 SD of cynicism was associated with 0.25 and 0.17 SDs lower cognitive ability in studies 4 and 5, respectively. In study 6, cynics were found to be:
- less educated in 29/30 countries
- less literate in 28/30 countries
- less numerate in 29/30 countries
- less computer-literate in 23/26 countries

Cynicism is simply not smart. Source: journals.sagepub.com/doi/10.1177/01…
Crémieux tweet media
Ethan Mollick@emollick

A worldwide survey of 200k people finds cynical people are thought of as smarter... but that, in reality, cynics test lower on cognitive & competency tests. As Stephen Colbert said: “Cynicism masquerades as wisdom, but it is the furthest thing from it.” journals.sagepub.com/doi/pdf/10.117…

231
562
3.2K
6M
Bennett Hoffman retweeted
vittorio
vittorio@IterIntellectus·
paul ehrlich predicted hundreds of millions would starve in the 1970s, that england would cease to exist by 2000, that the battle to feed humanity was already lost, and he advocated spiking water supplies with sterilants to prevent the population bomb.

he wasn't even close. the green revolution fed billions, famines collapsed, birth rates declined entirely on their own. every single major prediction in the population bomb was wrong, and the institutions that built 60 years of environmental policy on his thesis never updated the model.

he died in a world where south korea hit a 0.72 fertility rate, japan is shutting down schools faster than it can demolish them, every developed nation is spending billions begging its citizens to have kids, and somehow the new york times called his predictions "premature", as if the mass starvation is still coming some time in the future (it's not).

the most influential environmental thinker of the 20th century spent his entire career being wrong about the one thing he was famous for, and the policies his work inspired (population control programs, anti-natalist funding, development restrictions) actively accelerated the real crisis of a world that can't replace its own population.
vittorio tweet media
75
166
1.4K
54.3K
Jon Burke 🌍
Jon Burke 🌍@jonburkeUK·
The sun doesn’t have a ‘choke point’.
Jon Burke 🌍 tweet media
951
616
2.3K
233.6K
Garry Tan
Garry Tan@garrytan·
I'm going to rile up the trolls with this right now but I am working on 3 different big projects simultaneously across 15 @conductor_build sessions all the time. In the last 7 days I'm averaging 17k lines of code per day, 35% tests, all thanks to gstack. (All mornings/nights/weekends on top of my real job at YC) I ran /retro (from gstack) on all three projects and this is what came back:
Garry Tan tweet media
109
27
650
134.9K
Bennett Hoffman retweeted
bone
bone@boneGPT·
It used to be illegal to do autopsies. There was a ban on human dissection. Medical science stagnated for centuries until some renegades said fuck it and paid grave robbers to learn how the human body worked. Imagine how many lives could have been saved if we hadn't adhered to those ethical laws for centuries. Millions by now.

These luddites are hellbent on slowing down the velocity of technological progress. They want a 100-page document before you can save your own dog. They are cancer manifest, mutations in our system that have outlived their usefulness and turned immortal, killing us.

Ethics is why Europe is flooded with migrants instead of nuclear power. Ethics is their excuse for regulating themselves out of the AI race. Ethics is why Canada is putting depressed 20-year-olds to sleep. All in the name of ethics.

Fuck ethics. Fuck regulations. Fuck your moral high ground bottlenecking my progress. Fuck these safetyists deciding which risks are OK for you to take. Accelerate.
Trung Phan@TrungTPhan

Australian tech entrepreneur Paul Conyngham explains how he used ChatGPT/AlphaFold (spent $3,000 with no biology background) to create a custom mRNA vaccine to treat his dog's cancer tumors. Unreal.

76
401
3.7K
135.5K
Bennett Hoffman retweeted
Patrick Collison
Patrick Collison@patrickc·
• According to the story, the dog's cancer has not been cured. • Absent all regulatory and manufacturing constraints, we could not just synthesize magic mRNA cancer cures. The technology is very promising, but it's not yet any kind of panacea. • The emergent system of regulators and manufacturers is indeed far too conservative, and small-scale experimentation is much harder than it should be. More people should read the first part of The Rise and Fall of Modern Medicine. Recommend @RuxandraTeslo, @PatrickHeizer for more.
153
298
4.3K
844K