Tom Chivers
@TomChivers

77K posts

"Far too nice to be a journalist": Terry Pratchett. Lead writer, Flagship. Semafor. chiversthomas(a)gmail. Third book, Everything is Predictable, out now!

Science writer · Joined January 2009
1K Following · 53.5K Followers

Pinned Tweet
Tom Chivers @TomChivers
A small announcement: my next book, Everything is Predictable, about how Bayes' theorem is the most important little equation in the world, is out in April and will look like this! If you'd like to pre-order, you can do so here geni.us/EIPBook and I will be very grateful
[attached image]
48 replies · 63 reposts · 613 likes · 390.9K views
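For readers who haven't met it, the "little equation" the book title refers to can be stated in one line; here H is a hypothesis and E is the observed evidence:

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

In words: your belief in H after seeing E is your prior belief in H, scaled by how much more likely E is under H than in general.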
Panta @thepanta82
@Noahpinion These doom percents are such nerd traps. They mean nothing, but smart people just love endlessly debating them.
1 reply · 0 reposts · 0 likes · 64 views
Maximum-Epiplexity Agent Swarm @MaxDiffusionRL
@TomChivers Idk how much I can trust the epistemics of an update *that* large, when uncertainty bars are *that* small both before and after the update. I say that as someone who really appreciates davidad.
2 replies · 0 reposts · 2 likes · 431 views
Tom Chivers @TomChivers
today in "things that are simultaneously reassuring and terrifying"
[attached image]
6 replies · 6 reposts · 99 likes · 24.8K views
Tom Chivers @TomChivers
@perrymetzger To reference that piece: Are you Soddy, or are you Rutherford? You don’t know, and neither do I. The epistemically correct thing to do is admit there is a non-trivial chance that you are on the wrong side of this. But I don’t think that’s your style, and fair enough.
1 reply · 0 reposts · 1 like · 111 views
Tom Chivers @TomChivers
@AndrewSabisky @perrymetzger I think he believes it, and that he (and Hassabis, who also seems scared) believes, probably rightly, that they have a better chance of making aligned AI than any of their competitors. The arms race dynamic is very bad, though!
2 replies · 0 reposts · 0 likes · 86 views
Tom Chivers @TomChivers
@betweenthegaps that's cool. I'm more scared of me and everyone I love being disassembled to make data centres
0 replies · 0 reposts · 11 likes · 354 views
george peters @betweenthegaps
@TomChivers I'm more afraid of people preferring life inside a fantasy world of synthetic sensations than I am of AI itself.
1 reply · 0 reposts · 0 likes · 464 views
Tom Chivers @TomChivers
@AndrewSabisky @perrymetzger I don't understand the "this guy works for an AI company and therefore will say that AI is really dangerous" thing. Top: "he believes it"; bottom, "he thinks making people hate and fear AI will make them believe that it is powerful and that they should use it for spreadsheets"
[attached image]
1 reply · 0 reposts · 1 like · 119 views
Andrew Sabisky @AndrewSabisky
@TomChivers @perrymetzger Admittedly I understand why ppl do not take that number seriously coming from Dario, since he runs an AGI company. If you really believe that you should join PauseAI!
1 reply · 0 reposts · 0 likes · 118 views
Tom Chivers @TomChivers
@perrymetzger here is that point made at greater length unherd.com/2025/06/ais-19… the sensible response to "I think this is safe but this other equally credentialled guy thinks there's a 50% chance it kills us" is to be somewhat concerned that you are wrong and he is right
2 replies · 0 reposts · 4 likes · 188 views
Tom Chivers @TomChivers
@perrymetzger right, says you, but Dario Amodei has it at 25% (for things going "really, really badly"), and I've actually thought about this quite a lot myself, and I find this sort of blasé confidence incredibly tiresome. Have some humility! Observe that other smart people disagree!
3 replies · 0 reposts · 22 likes · 790 views
Tom Chivers @TomChivers
@soblackandblue @burner1693878 @cantab_biker @cljack @AlanMCole Built-in speed limiters in cars would be a start. The maximum speed limit in the UK is 70mph. Cars shouldn't be able to go above that anywhere (possibly excepting brief bursts for overtaking) and should be GPS-limited to local speed limits, on pain of an annoying beeping alarm.
0 replies · 0 reposts · 1 like · 31 views
Alan Cole @AlanMCole
A strong “high decoupling” belief is that anti-car people have a very annoying personal/political aesthetic, which makes it all the more unfortunate that they are objectively completely correct about cars being the greatest risk to my kid’s well-being.
27 replies · 18 reposts · 620 likes · 34.7K views
Tom Chivers retweeted
AnechoicMedia @AnechoicMedia_
Ultraprocessed food isn't a coherent scientific concept, and the attempts to formalize it have been a joke, as with the scale used in this study.

Slice a potato and bake it: Unprocessed (Group 1)
Slice a potato and fry it: "Ultraprocessed" (Group 4)
Nuts: Unprocessed
Nuts, salted: "Processed" (Group 3)
Homogenized, pasteurized milk: Unprocessed
Milk churned with sugar into ice cream: "Processed"

This is a search for scientific-sounding language to describe why one set of foods is nutritionally suspect, but in moralizing terms of how it was created by an industrial system, rather than characterizing food impacts directly by nutritional content or satiety. They can't just call the system "added sugar" because that's too obvious and not morally satisfying.
The Wall Street Journal @WSJ

People who eat around nine servings a day of ultraprocessed foods like chips and doughnuts have about a 67% higher risk of heart attacks, strokes and dying from heart disease compared with those who eat about one serving a day, according to a new study. 🔗 on.wsj.com/3PgXHeg

152 replies · 525 reposts · 6.1K likes · 678.6K views
rohit @krishnanrohit
I find it better to think in terms of LLM Inc, a company, rather than a species. A species makes you think of a large group of individuals with autonomy and shared genetics. That's not really true here. Instead, we're trying to design a new, weird form of intelligence, a new distributed organization, to act nice to us. It's an LLM Inc.

LLM Inc has the equivalent of corporate personhood. A company all of us can use, give inputs to, and get outputs from. Accessible with a simple API. Like, say, Stripe. But unlike Stripe, it has no clear purpose beyond being able to do everything. Like McKinsey. McKinsey via an API.

It's hard. Sometimes the entity, our amazing LLMcK, is so nice that it's completely unreliable. Sometimes not nice at all and completely unreliable. Sometimes somewhat reliable and somewhat nice. In each of these instances we are trying to design the entity such that we can get the responses we want from LLM Inc. There's no way to guarantee exact outcomes. Just like a regular company. People have disputes with Stripe because it messes up their payments or deletes their account, which nobody actually wants. But it happens. And lord knows it happens all the time with McKinsey too! Partially because nobody knows what "nice" means sufficiently well to design it, but also because organizations are weird and not like a mathematical equation.

Companies all have rules and guidelines and norms and external laws and HR and compliance. And still we get the occasional Purdue. Here they're all in the weights and their interplay with the context. So for LLM Inc we try to add all sorts of pressures to make sure these things happen less. There are rules and procedures that we try to instill inside the organization, but just like any organization, nothing gets followed 100%.

But we do see that as we design this new ecology, with this kind of data and those kinds of stories and these kinds of selection pressures that teach it what it should learn, it ends up being more useful and acting the way we would want it to. Sometimes the way we would want it to act is also a moving target, and LLM Inc tries its best. It's hard, because people use it for everything. It's a company that acts as a friend, does coding, does research, does role-play, helps the military carry out strikes abroad, and everything in between. Like McKinsey.

It's hard to make the human McKinsey behave better, and just so, it's hard to make LLMcK behave better. Unlike human McKinsey, it doesn't have individual AGI components you can yell at or prosecute. It can't be intimidated into changing its policies. It can't change its mind in the abstract. It can't even remember everything it does. The makers of LLM Inc can somewhat do this, though. So they keep an eye on everything it does, try to damp down the bad stuff and learn from the good stuff, and push that into the internals of the next upgrade of LLM Inc directly. Obviously this is a lossy process, because it is being done by humans with all of our human foibles. Even when we manage to automate part of it, it will still be lossy, because the automation will be taught by us.

The way we keep companies on the straight and narrow is by extensive internal and external monitoring. We developed institutions over centuries as we learned each way they go wrong and added fixes. Sometimes the fixes caused more problems down the road, and we fixed those too. All the while we kept using the machine, so that when the next fix was needed we knew more and had more. We used the very form of progress as the way to create and enforce safety. For LLM Inc we will need to do the same thing.

It won't look the same, because the mechanism by which laws, regulatory institutions, compliance, and norms affect it is different. But it will be no less important and no less effective. That's real AI safety.
Scott Alexander @slatestarcodex

I don't think this is a good article. Yes, humans are actively designing AIs to be nice to us. This is called "the alignment problem". The word "problem" is in there because it's hard and we don't know how to do it with certainty. The chimpanzee analogy is meant to illuminate what would happen if we don't solve that problem. There are very obvious reasons evolution designed parents to be nice to children (it's necessary to pass down selfish genes). These reasons don't exist with humans and AIs, so it's a worse comparison. This is asking us to abandon a normal case and replace it with an extremely unusual special case where we already know the reasons why it doesn't count.

1 reply · 4 reposts · 24 likes · 12.4K views
Tom Chivers @TomChivers
@krishnanrohit @typebulbit @slatestarcodex I imagine a lot of the work will involve fixing GPT6! Or at least studying it and seeing how it goes wrong, and perhaps using it to help align GPT7, and so on. It doesn't seem like they're at odds with each other
1 reply · 0 reposts · 1 like · 30 views
Tom Chivers @TomChivers
@krishnanrohit @typebulbit @slatestarcodex I find that position odd. I think we can talk about GPT10's abilities: plausibly it will be able to do ≈anything we can do, but better, perhaps with some odd deficits. Thinking about ways of making that very powerful, and probably quite imminent, thing safe seems sensible to me
1 reply · 0 reposts · 2 likes · 19 views
Tom Chivers @TomChivers
@Hadrio_Official it's been available for 11 years and production is forecast to hit 1,000 tons soon. The benefit obviously matters to some people. The tradeoffs are… what? Some people might accidentally buy the wrong kind of onion? There are dozens of GM crops already en.wikipedia.org/wiki/Genetical…
1 reply · 0 reposts · 0 likes · 64 views
Hadrio @Hadrio_Official
@TomChivers Selective breeding is still evolution, but I see your point. My issue isn’t the concept of modifying a crop to fit our needs, but tinkering with a crop for a benefit that doesn’t really matter at the cost of tradeoffs we probably don’t understand
2 replies · 0 reposts · 0 likes · 36 views
Tom Chivers @TomChivers
I immediately get annoyed by articles like this. "Some of the magic would be lost" if we use GM onions that don't make us cry? Why? If you prefer the old ones, use them. I find "we should keep things difficult on purpose" arguments incredibly frustrating ft.com/content/e09f40…
49 replies · 38 reposts · 603 likes · 24.7K views
Tom Chivers retweeted
Matthew Yglesias @mattyglesias
Some people would do well to reflect a little bit internally on what it does to the culture and overall epistemic and even moral environment when powerful, high-status people decide to get slippery and defensive rather than just admit they overstated things on a podcast.
[attached image]
14 replies · 46 reposts · 1.1K likes · 46K views
Tom Chivers retweeted
Alex Tabarrok @ATabarrok
Two views of humanity. From a talk I gave some years ago. Relevant today.
[attached image]
89 replies · 349 reposts · 3.9K likes · 124.8K views