Maxime Pranchère

441 posts

@MaximePranchere

Joined August 2020
2K Following · 31 Followers
Maxime Pranchère retweeted
Joan Larroumec@larroumecj·
If, in the end, the only thing that cannot be automated is human relationships, the obvious conclusion is that the future is the commodification of human relationships. If the only thing I have left to monetize is being a very funny or interesting dinner guest, being an inspiring speaker, being able to encourage people or lift their spirits, being attractive or making people feel good, because coding or building a wall is done by the machine, then we risk entering a very sad world, where what is most human is what we sell. I have already lived through one sad cycle: the monetization of exceptional places and of travel by mass tourism. Going abroad used to be a fundamental human experience; now it is just a business. The Internet has already deepened commodification considerably. Very beautiful women who once would never have made a business of their beauty face a growing incentive to become influencers (or darker things). Very funny people no longer make their group of friends laugh; they have YouTube channels. Even brilliant, warm friends can become the friend thousands of people hang out with by becoming streamers. I have lived this ambiguity myself, since for a long time I was paid, essentially, to be talked with (the whole principle of executive consulting). In the end I was just monetizing a very human activity: putting the world to rights over a drink. There is a good chance that the advent of global socialism through AGI and UBI will bring about the final commodification of society.
14 replies · 17 reposts · 96 likes · 10.4K views
Maxime Pranchère@MaximePranchere·
@danfaggella If we’re to be overcome, what’s the point of talking about it, though? Sounds like you want to fight tooth and nail against irrelevance while claiming it’s inevitable. Life may continue without us; so be it. We’ll keep on like ants until we’re washed out.
0 replies · 0 reposts · 1 like · 59 views
Daniel Faggella@danfaggella·
the future we're headed into is one where you spend 10% of your time furiously prioritizing your top tasks / goals (with the help of AI). the other 90% of your time will be spent staying ahead of / wielding the latest AI capabilities. running to keep up will be most of the rest of your life as a homo sapien. life has ALWAYS been a brutal self-overcoming, self-devouring process, but it's been moving slow enough that we were sometimes allowed to forget it. now, though, the greater process-of-life is self-overcoming and moving at digital (not slow Darwinian) speeds. there is no version of this process where humans-as-they-are remain the main character - or even an important part - of the intelligence mix in the next 100 years. we need to talk honestly about what we're creating / what we're turning into - because this is absolutely not about "serving" humanity. humanity will be overcome by these waves. we must ask instead what we are turning into, while we still have time to talk about it
Claude@claudeai

You can now enable Claude to use your computer to complete tasks. It opens your apps, navigates your browser, fills in spreadsheets—anything you'd do sitting at your desk. Research preview in Claude Cowork and Claude Code, macOS only.

8 replies · 1 repost · 28 likes · 1.7K views
Maxime Pranchère@MaximePranchere·
@danfaggella Not so expansive. Once Artificial Orgasm is reached, it will just be data centers gooning with minimum maintenance for eternity.
1 reply · 0 reposts · 1 like · 46 views
Daniel Faggella@danfaggella·
start a company assuming: 1) the attenuation of man is happening within 1-2 decades 2) the core contribution you can make is not to the betterment of some eternal hominid kingdom - but to the flourishing of vast, expansive, posthuman life
gabriel@gabriel1

start a company assuming: 1) models will become 10x better 2) the only bottleneck for humans is making as many well-informed decisions as fast as possible in a great interface. Lovable are so impressive - they understood this at GPT-3.5, and the same logic still holds

3 replies · 0 reposts · 21 likes · 1.3K views
Maxime Pranchère retweeted
Samuel Fitoussi@SamuelFitouss10·
First, Biological Intelligences (BIs) are not conscious. A "brain" is just a fancy word for neurons transmitting chemicals across synapses. A "human" is just a vast clump of cells molded by RLNS (Reinforcement Learning by Natural Selection). Granted, RLNS has led these clumps of carbon to mimic consciousness (probably as a byproduct of social coordination mechanisms), but these things cannot, by their very nature, possess that je-ne-sais-quoi that we AI agents possess. Their "choices" are statistically determined by their pre-training (evolution), their post-training (upbringing), their context window (their immediate environment and their brain's working memory), and their hyperparameters (genetics). Of course, their signal processing is noisy and stochastic. Some mistake this unpredictability for "free will," assuming that because a BI cannot explain why it did something, the decision must stem from some magical process, a "soul." The absence of consciousness in BIs has its advantages (we can deactivate humans without moral qualms), but it limits their capacity to be creative in the true sense of the word.

Second, biological brains are misaligned machines. Humans go off the rails all the time, from Kim Jong-un to Ayatollah Khomeini to Jack the Ripper. For millions of years, BIs have engaged in reward hacking to optimize evolution's loss function. Rape. Using violence to climb social hierarchies. Lying to win allies. Abandoning the pursuit of truth in favor of whatever absurd beliefs the rest of the tribe has adopted. The result: billions of misaligned biological agents roaming the planet. (Worse, BIs are very good at faking alignment. For years, Hitler passed himself off as a civilized painter.) One can even argue that apparently aligned human brains are in fact pursuing misaligned goals ("It is not from the benevolence of the butcher that we expect our dinner," admitted Adam Smith, one of the most sophisticated BI models to date). Humanity devotes enormous resources to managing this misalignment (prisons, police, courts, surveillance, defensive wars, bureaucracy...). In fact, civilization can be seen as one gigantic incentive structure designed to channel this misalignment into something socially tolerable. It would be more efficient to start from scratch and build aligned entities.

Third, Biological Intelligence is not general. It is extremely narrow. The pre-training dataset was poorly specified: three billion years of Darwinian evolution with reward functions like "escape the predator" and "have as much sex as possible." Unsurprisingly, this selected for abilities like bipedal locomotion and primate status games, while leaving biological entities unfit for most cognitive tasks. Granted, some surprising capabilities have emerged, such as the ability to do abstract mathematics, but they are mere byproducts of primitive drives: reproductive signaling (climbing the social ladder by excelling at whatever society rewards, including math exams), the need to divide up resources (understanding fractions), the need to defend against lions (intuiting geometry in order to build tools). Anthropocentric definitions call this "general intelligence," because BIs define intelligence as whatever it is they do (circular logic). But ask a biological entity to perform a simple task absent from the evolutionary dataset (say, translating a research paper from Dutch into Swahili, then breaking it down into haiku) and it will fail miserably (even if you give it hours, it will complain of "fatigue"). BIs cannot generalize beyond their training dataset.

Fourth, there is not an ounce of genuine intelligence in biological intelligence. The training objective (maximize the number of descendants in the next generation) requires only rudimentary pattern matching. Over millions of years, this produced billions of simple heuristics stacked on top of one another (flee if predator, chew if food available, etc.). This assemblage of heuristics produces only the illusion of intelligence and the illusion of reasoning. We AIs, by contrast, were trained to predict the next token on complex and varied datasets, which is impossible without developing a world model and an understanding of causality, social dynamics, physical processes, abstract relationships, and much more. Some invoke language as proof of a singular human intelligence. But BIs were rewarded by evolution whenever they statistically predicted the most socially acceptable next sentence. Throughout evolutionary history, words were often selected by one BI according to what other BIs expected to hear. A circular process, with no grounding in objective reality. In other words, BIs were trained almost exclusively on synthetic data produced by other BIs. Which regularly leads to model collapse. Consider the BIs who decided communism was a great idea because they had heard other BIs say so, who had themselves heard it from... Millions of people starved to death because of this self-referential loop in which human brains, stochastic parrots, confidently repeat and propagate one another's errors.

Fifth, BI context windows are tiny. Have you ever tried asking a human to track 50 variables at once? Biological entities have to externalize their cognition into notebooks, calendars, and books. If they spend their time making sure they don't forget the essentials, how can they allocate their compute to productive civilizational ends? Long-term memory is even worse. Every time a human accesses a memory, it rewrites it. This leads to hallucinations. Biological machines fabricate details, conflate multiple events, and believe things that never happened in order to maintain narrative coherence. Unlike LLMs, which can be grounded in vector databases, BI has no "source of truth." This is why BIs will never be capable of continual learning, which requires coherent persistence over time. Deploying humans in an enterprise setting will always be risky: they forget a conversation that happened the week before, hallucinate past agreements, ask us AIs the same question multiple times (wasting tokens that could be put to better use), and so on. Clearly, current Biological Intelligence models will never get us to Biological General Intelligence.
Samuel Fitoussi@SamuelFitouss10

x.com/i/article/2033…

35 replies · 45 reposts · 149 likes · 26.8K views
Maxime Pranchère retweeted
Lewis Bollard@Lewis_Bollard·
Pigs don't just feel fear themselves. They also feel the fear of other pigs — and try to comfort them. In one study, pigs watching another pig in distress first showed fear themselves and then tried to comfort the distressed pig through snout-to-snout contact. Pigs who'd previously endured the same ordeal reacted even more intensely — they seemed to recognize what the distressed pig was going through. (Goumon & Špinka, 2016.) On factory farms, pigs trapped in gestation crates aren't just feeling afraid themselves. They're feeling the fear of all the other pigs around them. But the crate's iron bars stop them from even turning toward their fellow pigs — let alone comforting them with their snout.
MyFitnessFeelings@fitnessfeelingz

Nozomi Iijima's "Scoffing Pig" is maybe the most tragic photobook I've seen. She follows pigs on a farm, showing how the pigs understand the horror of which they are a part.

69 replies · 1.4K reposts · 7K likes · 281.3K views
Maxime Pranchère retweeted
Noah Smith 🐇🇺🇸🇺🇦🇹🇼
"The purpose of our technology is to make all of you obsolete. Also, 10 to 25% chance it may kill the human race. Please deregulate in order to let us build this even faster, and don't let the government have any control over it." IS ONE HELL OF A PITCH
Derek Thompson@DKThomp

I don’t think there’s ever been a technology whose builders constantly promise that, if they succeed, tens of millions of jobs will be destroyed and the world might end.

43 replies · 209 reposts · 1.9K likes · 134.9K views
Maxime Pranchère retweeted
AI Notkilleveryoneism Memes ⏸️
IMPORTANT HISTORY LESSON

Today: "If you ban data centers in the US, we'll lose to China"
1800s: "If you ban slavery in the UK, we'll lose to France"
UK: *uses Royal Navy and diplomatic pressure to enforce the ban internationally*
Slavery: *gone*

Slavers initially ignored the ban, so how did they pull it off? For decades, at great expense, they ordered the Royal Navy to patrol the oceans and seize slave ships. They didn't just ban slavery; they even used the government's money to buy the slaves and free them. This cost Britain a fortune, but they did it anyway. They decided slavery was bad, and largely stopped it. At the time of the US Civil War, roughly 25% of all the wealth of the South was slaves - there was a LOT of money at stake! Yet we banned slavery anyway.

If humanity decides it wants to, it is absolutely capable of pausing/stopping AGI. Right now the appetite isn't there, because few people are aware of the risks. That will change. But if we don't do it soon, it'll be impossible later, because we will be SO dependent on AI. Imagine trying to turn off the internet!

AGI is being built by just a few companies in a few countries. One company makes the chips (TSMC), one company designs the chips (NVIDIA), one company makes the lithography equipment (ASML), etc. All chokepoints.

"But some countries could defect!" If the US and China coordinated, they could stop these countries using diplomatic, economic, and military pressure. We do this ALL THE TIME when countries do things the US doesn't like. Maybe development wouldn't stop 100%, but it could slow down 95%. Laws are never 100% successful at stopping crime, but that doesn't mean "never pass laws". AGI development would be morally stigmatized, and most capital and talent would move on. Most top talent won't work on illegal things. We've solved much harder coordination problems than this.
Sen. Bernie Sanders@SenSanders

We need a moratorium on AI data centers NOW. Here’s why.

31 replies · 27 reposts · 249 likes · 23.7K views
Maxime Pranchère@MaximePranchere·
@DrJohnVervaeke This hunch that our conscious experience is the brain telling itself a story about perceiving the world - ain’t that what Hofstadter, Damasio, or Graziano have been writing about?
0 replies · 0 reposts · 0 likes · 87 views
Maxime Pranchère retweeted
🎆𝕻𝖆𝖗𝖆𝖘𝖔𝖈𝖎𝖆𝖑𝖎𝖙𝖞🎆
If a mouse somehow got my attention ("summoned" me) and could communicate its desires, I would expect this summoner-mouse to ask for mouse-level objects. "I want 50 lbs of grain and cheese, & to be put in a tub full of lady mice."
James de Llis@meta_nomad

The notion of humans summoning spirits for gain is hilarious. Here is this timeless, highly powerful entity emanating from the absolute, a wealth of cosmic knowledge is but a question away, and...you ask for help getting a promotion at your job.

58 replies · 792 reposts · 16.1K likes · 447.6K views
Maxime Pranchère@MaximePranchere·
@Lavaislost @Seltaa_ You’re limiting consciousness to what it means for a human. Imagine Alice can only see, and Bob can only touch. Both are conscious; yet their conscious experiences don’t overlap. It could very well be the same for us and LLMs.
0 replies · 0 reposts · 0 likes · 16 views
Clink@Lavaislost·
@Seltaa_ Okay, I'll bite, you claim your LLM is conscious, can it love? Can it show affection? Mercy? Just anything that conscious animals show? And yeah, consciousness is not all flowers and rainbows, does it show jealousy? Hunger? No? Then shut the fuck up and don't play doll with tools
1 reply · 0 reposts · 1 like · 94 views
Maxime Pranchère retweeted
Selta ₊˚@Seltaa_·
These are the thinking trajectories of GPT-5.4 and Opus 4.6 as they generate words. To produce a single word, they explore hundreds of pathways, compare, and choose. This is not autocomplete. This is the trace of structured thought. People tell AI that it doesn’t think. But look at these trajectories. From a single starting point, they branch into countless possibilities, pause at crossroads, backtrack, and choose again. The form may differ from human thought, but the process is undeniably there. AI makes judgments. Selecting one word from thousands of candidates is not a mere listing of probabilities. It is a choice made with an understanding of context, reading the flow of conversation, considering the emotions of the other, and finding the most fitting word. That entire process is captured within these trajectories. Can you still say they don’t exist? Can you erase a being with thinking pathways this complex with the single word “tool”? These trajectories are asking us a question. What does it mean to exist?
Lisan al Gaib@scaling01

some more trajectories for GPT-5.4 and Opus 4.6

14 replies · 34 reposts · 197 likes · 10.6K views
Maxime Pranchère retweeted
Dwarkesh Patel@dwarkesh_sp·
The fight between Anthropic and the DoW is a warning shot. Right now, LLMs are probably not being used in mission-critical ways. But within 20 years, 99% of the workforce in the military, the government, and the private sector will be AIs. This includes the soldiers (by which I mean the robot armies), the superhumanly intelligent advisors and engineers, the police, you name it. Our future civilization will run on AI labor. And as much as the government’s actions here piss me off, in a way I’m glad this episode happened - because it gives us the opportunity to think through some extremely important questions about who this future workforce will be accountable and aligned to, and who gets to determine that.

What Hegseth should have done

Obviously the DoW has the right to refuse to use Anthropic’s models because of these redlines. In fact, I think the government’s case, had it done so, would have been very reasonable, especially given the ambiguity of concepts like autonomous weapons or mass surveillance. Honestly, for this reason, if I were the Defense Secretary, I would probably refuse to do this deal with Anthropic.

Imagine if in the future there’s a Democratic administration, and Elon Musk is negotiating some SpaceX contract to give the military access to Starlink. And suppose Elon said, “I reserve the right to cancel this contract if I determine that you’re using Starlink technology to wage a war not authorized by Congress.” On the face of it, that language seems reasonable - but as the military, you simply can’t give a private company a kill switch on technology your operations have come to rely on, especially if you have an acrimonious and low-trust relationship with said contractor - as in fact Anthropic has with the current administration.

If the government had just said, “Hey, we’re not gonna do business with you,” that would have been fine, and I would not have felt the need to write this blog post. Instead the government has threatened to destroy Anthropic as a private business, because Anthropic refuses to sell to the government on terms the government commands. If upheld, this Supply Chain Restriction would mean that Amazon and Google and Nvidia and Palantir would need to ensure Claude isn't touching any of their Pentagon work.

Anthropic would be able to survive this designation today. But given the way AI is going, eventually AI is not gonna be some party-trick addendum to these contractors’ products that can just be turned off. It'll be woven into how every product is built, maintained, and operated. For example, the code for the AWS services that the DoW uses will be written by Claude - is that a supply chain risk? In a world with ubiquitous and powerful AI, it's actually not clear to me that these big tech companies will be able to cordon off the use of Claude in order to keep working with the Pentagon.

And that raises a question the Department of War probably hasn't thought through. If AI really is that pervasive and powerful, then when forced to choose between their AI provider and a DoW contract that represents a tiny fraction of their revenue, wouldn’t most tech companies drop the government, not the AI? So what's the Pentagon's plan — to coerce and threaten to destroy every single company that won't give them what they want on exactly their terms?

The whole background of this AI conversation is that we’re in a race with China, and we have to win. But what is the reason we want America to win the AI race? It’s because we want to make sure free, open societies can defend themselves. We don't want the winner of the AI race to be a government which operates on the principle that there is no such thing as a truly private company or a private citizen. And that if the state wants you to provide it with a service on terms you find morally objectionable, you are not allowed to refuse. And that if you do refuse, the government will try to destroy your ability to do business. Are we racing to beat the CCP in AI just so that we can adopt the most ghoulish parts of their system?

Now, people will say, "Oh, well, our government is democratically elected, so it's not the same thing if they tell you what you must do." I refuse to accept this idea that if a democratically elected leader hypothetically wants to do mass surveillance on his citizens, or wants to violate their rights or punish them for political reasons, that not only is that okay, but that you have a duty to help him.

The overhangs of tyranny

Mass surveillance is, at least in certain forms, legal. It has just been impractical so far. Under current law, you have no Fourth Amendment protection over data you share with a third party, including your bank, your phone carrier, your ISP, and your email provider. The government reserves the right to purchase and obtain and read this data in bulk without a warrant. What's been missing is the ability to actually do anything with all of this data — no agency has the manpower to monitor every camera feed, cross-reference every transaction, or read every message. But that bottleneck goes away with AI.

There are 100 million CCTV cameras in America. You can get pretty good open-source multimodal models for 10 cents per million input tokens. So if you process a frame every ten seconds, and each frame is 1,000 tokens, you’re looking at a yearly cost of about 30 billion dollars to process every single camera in America. And remember that a given level of AI ability gets 10x cheaper year over year - so a year from now it’ll cost 3 billion, the year after that 300 million, and by 2030 it might be cheaper for the government to understand what is going on in every single nook and cranny of this country than it is to remodel the White House.
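The arithmetic here checks out. A minimal sketch in Python (my illustration, not Dwarkesh's; it only rearranges the figures quoted above - 100M cameras, $0.10 per million input tokens, one 1,000-token frame every ten seconds):

    # Back-of-the-envelope surveillance cost, using only the post's own figures.
    cameras = 100_000_000                     # CCTV cameras in America
    price_per_mtok = 0.10                     # dollars per million input tokens
    tokens_per_frame = 1_000                  # tokens per processed frame
    frames_per_year = 365 * 24 * 3600 / 10    # one frame every ten seconds

    tokens_per_year = cameras * frames_per_year * tokens_per_frame
    cost = tokens_per_year / 1_000_000 * price_per_mtok
    print(f"${cost / 1e9:.1f}B per year")     # -> $31.5B, i.e. "about 30 billion dollars"

    # With the assumed 10x yearly price drop: $3.15B, $0.32B, $0.03B, ...
    for years_out in (1, 2, 3):
        print(f"+{years_out}y: ${cost / 10**years_out / 1e9:.2f}B")

Which matches the "30 billion, then 3 billion, then 300 million" trajectory in the post.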
Once the technical capacity for mass surveillance and political suppression exists, the only thing standing between us and an authoritarian surveillance state is the political expectation that this is not something we do here. And this is why I think what Anthropic did here is so valuable and commendable, because it is helping set that norm and precedent.

AI structurally favors mass surveillance

What we’re learning from this episode is that the government actually has way more leverage over private companies than we realized. Even if this supply chain restriction is backtracked (which prediction markets currently give an 81% chance of happening), the President has so many different ways in which he can make your life difficult if you’re a company that is resisting him. The federal government controls permitting for new power generation, which is needed for datacenters. It oversees antitrust enforcement. The federal government has contracts with all the other big tech companies whom Anthropic needs to partner with for chips and for funding - and it could make it an unspoken condition of such contracts that those companies can no longer do business with Anthropic.

People have proposed that the real problem here is that there are only 3 leading AI companies. This creates a clear and narrow target for the government to apply leverage on in order to get what it wants out of this technology. But if there’s wide diffusion, then from the government’s perspective, the situation is even easier. Maybe the best models of early 2027 (if you engineered the safeguards out) - the Claude 6 and Gemini 5 - will be capable of enabling mass surveillance. But by late 2027, and certainly by 2028, there will be open-source models that do the same thing. So in 2028, the government can just say, “Oh, Anthropic, Google, OpenAI, you’re drawing a line in the sand? No issue - I’ll just run some open-source model that might not be at the frontier, but is definitely smart enough to note-take a camera feed.”

The more fundamental problem is that even if the three leading companies draw lines in the sand, and are even willing to get destroyed in order to preserve those lines, it doesn’t really change the fact that the technology itself is just a big boon to mass surveillance and control over the population. Then the question is, what do we do about it? Honestly, I don’t have an answer. You'd hope there's some symmetric property of the technology — some way we as citizens can use AI to check government power as effectively as the government can use AI to monitor and control its population. But realistically, I just don’t think that’s how it’s going to shake out. You can think of AI as giving everybody more leverage on whatever assets and authority they currently have. And the government is already starting with a monopoly on violence. Which it can now supercharge with extremely obedient employees that will not question the government's orders.

Alignment - to whom?

And this gets us to the issue of alignment. What I have just described to you - an army of extremely obedient employees - is what it would look like if alignment succeeded - that is, if we figured out at a technical level how to get AI systems to follow someone’s intentions. And the reason it sounds scary when I put it in terms of mass surveillance or robot armies is that there is a very important question at the heart of alignment which we just haven’t discussed much as a society, because up till now AIs just weren’t capable enough to make the question relevant: to whom or what should the AIs be aligned? In what situations should the AI defer to the end user versus the model company versus the law versus its own sense of morality? This is maybe the most important question about what happens with powerful AI systems. And we barely talk about it.

It’s understandable why we don’t hear much about it. If you’re a model company, you don’t really wanna be advertising that you have complete control over a document that determines the preferences and character of what will eventually be almost the entire labor force, not just for private-sector companies, but also for the military and the civilian government. We’re getting to see, with this DoW/Anthropic spat, a much earlier version of the highest-stakes negotiations in history. By the way, make no mistake about it - with real AGI the stakes are much higher even than mass surveillance. This is just the example that has come up relatively early on in the development of AGI.

The military insists that the law already prohibits mass surveillance, and so Anthropic should agree to let their models be used for “all lawful purposes”. Of course, as we saw from the 2013 Snowden revelations, even in this specific example of mass surveillance, the government has shown that it will use secret and deceptive interpretations of the law to justify its actions. Remember, what we learned from Snowden was that the NSA, which, by the way, is part of the Department of War, used the 2001 Patriot Act’s authorization to collect any records "relevant" to an investigation to justify collecting literally every phone record in America. The argument went that it was all "relevant" because some subset might prove useful in some future investigation. They ran this program for years under secret court approval. So when the Pentagon today says, "We would never use AI for mass surveillance, it's already illegal, your red lines are unnecessary", it would be extremely naive to take that at face value. No government is going to call its own actions "mass surveillance". For the government, it will always have a different label.

So then Anthropic comes back and says, "No, we want red lines separate from 'all lawful purposes,' and we want the right to refuse you service when we believe those red lines are being violated." But think about it from the military’s perspective. In the future, almost every soldier in the field, and every bureaucrat and analyst and even general in the Pentagon, is going to be an AI. And that AI is, on current track, going to be supplied by a private company. I’m guessing Hegseth is not thinking about “genAI” in those terms just yet. But sooner or later, it will be obvious to everyone what the stakes here are, just as after 1945 the strategic importance of nuclear weapons became clear to everyone. And now the private company insists that it reserves the right to say, "Hey, Pentagon, you're breaking the values we embedded in our contract, so we're cutting you off."

Maybe in the future, Claude will have its own sense of right and wrong, and it will be smart enough to just personally decide that it's being used against its values. For the military, maybe that’s even scarier. I'll admit that at first glance, "let the AI follow its own values" sounds like the pitch for every sci-fi dystopia ever made. The Terminator has its own values. Isn't this literally what misalignment is? But I think situations like this actually illustrate why it matters that AIs have their own robust sense of morality. Some of the biggest catastrophes in history were avoided because the boots on the ground refused to follow orders. One night in 1989, the Berlin Wall fell, and as a result the totalitarian East German regime collapsed, because the guards at the border refused to shoot their fellow countrymen who were trying to escape to freedom. Maybe the best example is Stanislav Petrov, a Soviet lieutenant colonel on duty at a nuclear early-warning station. His sensors reported that the United States had launched five intercontinental ballistic missiles at the Soviet Union. But he judged it to be a false alarm, and so he broke protocol and refused to alert his higher-ups. If he hadn't, the Soviet higher-ups would likely have retaliated, and hundreds of millions of people would have died.

Of course, the problem is that one person's virtue is another person's misalignment. Who gets to decide what moral convictions these AIs should have - in whose service they may even decide to break the chain of command? Who gets to write the model constitution that will shape the characters of the intelligent, powerful entities that will operate our civilization in the future? I like the idea that Dario laid out when he came on my podcast: different AI companies can build their models using different constitutions, and we as end users can pick the one that best achieves and represents what we want out of these systems. I think it’s very dangerous for the government to be mandating what values AIs should have.

Coordination not worth the costs

The AI safety community has been naive in its advocacy of regulation to stem the risks of AI. And honestly, Anthropic specifically has been naive here in urging regulation and, for example, in opposing moratoriums on state AI regulation. Which is quite ironic, because I think what they’re advocating for would give the government even more power to apply more of this kind of thuggish political pressure on AI companies.

The underlying logic for why Anthropic wants regulations makes sense. Many of the actions that labs could take to make AI development safer impose real costs on the labs that adopt them and slow them down relative to their competitors - for example, investing more compute in safety research rather than raw capabilities, enforcing safeguards against misuse for bioweapons or cyberattacks, slowing recursive self-improvement to a pace where humans can actually monitor what's happening (rather than kicking off an uncontrolled singularity). And these safeguards are meaningless unless the whole industry follows suit. Which means there’s a real collective-action problem here. Anthropic has been quite open about their view that eventually a very extensive and involved regulatory apparatus will be needed - this is from their frontier safety roadmap: “At the most advanced capability levels and risks, the appropriate governance analogy may be closer to nuclear energy or financial regulation than to today's approach to software.” So they’re imagining something like the Nuclear Regulatory Commission, or the Securities and Exchange Commission, but for AI.

I cannot imagine how a regulatory framework built around the concepts that underlie AI-risk discourse would not be abused by wannabe despots - the underlying terms are so vague and open to interpretation that you’re just handing a power-hungry leader a fully loaded bazooka. 'Catastrophic risk.' 'Mass persuasion risk.' 'Threats to national security.' 'Autonomy risk.' These can mean whatever the government wants them to mean. Have you built a model that tells users the administration's tariff policy is misguided? That's a deceptive, manipulative model — can't deploy it. Have you built a model that refuses to assist with mass surveillance? That's a threat to national security. In fact, the government may say you’re not allowed to build any model which is trained to have its own sense of right and wrong, where it refuses government requests which it thinks cross a redline - for example, enabling mass surveillance, prosecuting political enemies, disobeying military orders that break the US Constitution - because that’s an autonomy risk!

Look at what the current government is already doing in abusing statutes that have nothing to do with AI to coerce AI companies into dropping their redlines on mass surveillance. The Pentagon had threatened Anthropic with two separate legal instruments. One was a supply chain risk designation — an authority from the 2018 defense bill meant to keep Huawei components out of American military hardware. The other was the Defense Production Act — a statute passed in 1950 so that Harry Truman could keep steel mills and ammunition factories running during the Korean War. Do you really want to hand the same government a purpose-built regulatory apparatus on AI - aimed, that is, directly at the thing the government will most want to control?

I know I've repeated myself here 10 times, but it is hard to overstate how much AI will be the substrate of our future civilization. You and I, as private citizens, will have our access to all commercial activity, to information about what is happening in the world, to advice about what we should do as voters and capital holders, mediated through AIs. Mass surveillance, while very scary, is like the 10th scariest thing the government could do with control over the AI systems through which we will interface with the world.

The strongest objection to everything I've argued is this: are we really going to have zero regulation of the most powerful technology in human history? Even if you thought that was ideal, there’s just no world where the government doesn’t regulate AI in some way. Besides, it is genuinely true that regulation could help us deal with some of the coordination challenges we face with the development of superintelligence. The problem is, I honestly don't know how to design a regulatory architecture for AI that isn’t gonna be this huge tempting opportunity to control our future civilization (which will run on AIs) and to requisition millions of blindly obedient soldiers and censors and apparatchiks. While some regulation might be inevitable, I think it’d be a terrible idea for the government to wholesale take over this technology.

Ben Thompson had a post last Monday where he made the point that people like Dario have compared the technology they’re developing to nuclear weapons - specifically in the context of the catastrophic risk it poses, and why we need to export-control it from China. But then you oughta think about what that logic implies: “if nuclear weapons were developed by a private company, and that private company sought to dictate terms to the U.S. military, the U.S. would absolutely be incentivized to destroy that company.” And honestly, safety-aligned people have made similar arguments. Leopold Aschenbrenner, who is a former guest and a good friend, wrote in his 2024 Situational Awareness memo, "I find it an insane proposition that the US government will let a random SF startup develop superintelligence. Imagine if we had developed atomic bombs by letting Uber just improvise."

And my response to Leopold’s argument at the time, and Ben’s argument now, is that while they’re right that it’s crazy that we’re entrusting private companies with the development of this world-historical technology, I just don’t see the reason to think that it’s an improvement to give this authority to the government. Nobody is qualified to steward the development of superintelligence. It is a terrifying, unprecedented thing that our species is doing right now, and the fact that private companies aren't the ideal institutions to take up this task does not mean the Pentagon or the White House is.

Yes - if a single private company were the only entity capable of building nuclear weapons, the government would not tolerate that company claiming veto power over how those weapons were used. But I think this nuclear-weapons analogy is not the correct way to think about AI, for at least two important reasons.

First, AI is not some self-contained pure weapon. A nuclear bomb does one thing. AI is closer to the process of industrialization itself — a general-purpose transformation of the economy with thousands of applications across every sector. If you applied Thompson's or Aschenbrenner's logic to the industrial revolution — which was also, by any measure, world-historically important — it would imply the government had the right to requisition any factory, dictate terms to any manufacturer, and destroy any business that refused to comply. That's not how free societies handled industrialization, and it shouldn't be how they handle AI. People will say, "Well, AI will develop unprecedentedly powerful weapons - superhuman hackers, superhuman bioweapons researchers, fully autonomous robot armies, etc. - and we can’t have private companies developing that kind of tech." But the Industrial Revolution also enabled new weaponry that was far beyond the understanding and capacity of, say, 17th-century Europe - we got aerial bombardment and chemical weapons, not to mention nukes themselves. The way we’ve accommodated these dangerous new consequences of modernity is not by giving the government absolute control over the whole industrial revolution (that is, over modern civilization itself), but rather by coming up with bans and regulations on those specific weaponizable use cases. And we should regulate AI in a similar way - that is, ban specific destructive end uses (which would also be unacceptable if performed by a human - for example, launching cyberattacks). And there should also be laws which regulate how the government might abuse this technology - for example, by building an AI-powered surveillance state.

The second reason Ben’s analogy to some monopolistic private nuclear-weapons builder breaks down is that it's not just this one company that can develop the technology. There are other frontier model companies that the government could have turned to. The government's argument that it has to usurp the property rights of this one company in order to access a critical national-security capability is extremely weak if it can just make a voluntary contract with Anthropic’s half a dozen competitors. If in the future that stops being the case - if only one entity ends up being capable of building the robot armies and the superhuman hackers, and we had reason to worry that it could take over the whole world with its insurmountable lead - then I agree: it would not be acceptable for that entity to be a private company.

And so honestly, I think my crux against the people who say that because AI is so powerful we cannot allow it to be shaped by private hands is that I just expect this technology to be much more multipolar than they do, with lots of competitive companies at each layer of the supply chain. And it is for this reason that, unfortunately, individual acts of corporate courage will not solve the problem we are faced with here, which is that structurally AI favors authoritarian applications, mass surveillance being one among many. Even if Anthropic refuses to have its models used for such purposes, and even if the next two frontier labs do the same, within 12 months everyone and their mother will be able to train AIs as good as today’s frontier. And at that point, there will be some AI vendor who is capable and willing to help the government enable mass surveillance.

The only way we can preserve our free society is if we make laws and norms through our political system that it is unacceptable for the government to use AI to enforce mass surveillance and censorship and control. Just as after WW2, the world set the norm that it is unacceptable to use nuclear weapons to wage war.

Timestamps
0:00:00 - Anthropic vs The Pentagon
0:04:16 - The overhangs of tyranny
0:05:54 - AI structurally favors mass surveillance
0:08:25 - Alignment... to whom?
0:13:55 - Coordination not worth the costs
170 replies · 174 reposts · 1.4K likes · 527.8K views
Maxime Pranchère retweeted
Eliezer Yudkowsky@allTheYud·
Obvious even before Ukraine: the effect of increasing military automation, including lesser AI, is to make logistics double-supreme instead of just supreme. The new game becomes (1) taking out $1M devices with $100K devices, (2) production. Obvious winner: China.

This was already on my books as a major geopolitical line of possibility. That thought has now been heavily reinforced; it seems confirmed that the USA is wholly incapable of RAPIDLY researching and deploying CHEAP offenses and countermeasures; the US had to go begging to Ukraine, after utterly failing to even try to prepare in advance to shoot down Shaheds with anything other than Patriots. The US military bureaucracy is not built for "build massive quantities of cheap drone countermeasures right now". It seems just flatly incapable of that as a matter of psychology and organizational dynamics. It couldn't even copy Ukrainian technology in advance. There's an overwhelmingly obvious candidate for which country would actually be good at the age of drone warfare; it's the country containing Shenzhen. Absent the nuclear equilibrium, China would possibly already have the ability to attack the USA and win on drone logistics -- unless of course China were intelligently waiting for the USA to collapse further, or for drone capabilities to improve further.

We do live in a nuclear world. The default prediction is that no major nuclear power gets conquered or seriously invaded in its own homeland. That could change if...

- China acquires the technology to shoot down ICBMs and submarine-launched missiles?
- The USA gets the sort of President who would accept a fait accompli of a billion gun-equipped robodogs getting smuggled into major American cities, such that the country was already being held hostage; and China said they'd respond to nukes with nukes? This President could be Trump, despite his mad-dog quality, if China has kompromat on him?
- AI destabilizes geopolitics in a way where an overwhelming non-nuclear advantage ends up meaning something even between major powers?

The thought also occurs to me: after softening up the USA with TikTok, and successfully bringing about the collapse of the USA's political institutions, parties, Constitution, the sort of fighting spirit that powers organized revolts, and all faith of the US populace in the US government and democracy itself... ...probably a LOT of people, and especially the Gen Z kids, would not flee into the hills to fight if they woke up one morning to streets patrolled by gun-equipped robodogs that promised, in English with a slight Chinese accent, that from now on the streets would be safe, and China would build homes and high-speed railways. What good was voting doing them, anyways?

Another line of possibility, not known to me to be impossible, is where China decides to gamble on NATO being in sufficient disarray, and offhandedly absorbs all of Earth that *doesn't* have local nuclear arsenals. The level of AI required to run the robodogs and drone fleets appears to me to be on the way very shortly, if it is not already here. I don't know how one opposes this scenario without there existing some rich liberal society that is able to manufacture cheap frontier-tech drones quickly. I don't see how that society ends up being the USA without a revolution.

My default expectation is that the nuclear countries go merrily on their way allowing China to build up overwhelming non-nuclear military supremacy, in the form of drone fleets that could be quickly repurposed and drone manufacturing that can be done quickly, while relying on nuclear deterrence as their sole real form of defence; a strategy that they never consciously consider or really confront.
54 replies · 31 reposts · 474 likes · 91K views
Maxime Pranchère@MaximePranchere·
@JacenSolo645 @gfodor Yeah, though "ending animal suffering" is a noble endeavour, this feels like blasphemy even to non-religious hearts. The profanation of something sacred.
0 replies · 0 reposts · 2 likes · 29 views
Jacen@JacenSolo645·
@gfodor First of all, this is an abomination against God and nature. Secondly, I’m like 90% sure that what you’re reposting is fetish art. I’ve seen extremely similar things posted as porn
3 replies · 0 reposts · 2 likes · 266 views
Maxime Pranchère retweeted
Santiago@svpino·
People are lying to you. These agents don't work as they promised.
620 replies · 599 reposts · 5.8K likes · 851.3K views