Chris

7.1K posts


@TopherBR

AE & SW engineer (mostly 🦀, 🐍). ⚛️ base load + renewables for variable = 💚. Enjoys a debate when backed by facts

Approx. 1 AU from the Sun. Joined May 2013
529 Following · 351 Followers
Pinned Tweet
Casey Handmer @CJHandmer
Some thoughts on the destruction of oil and gas infrastructure in the Gulf. It is not exactly a new insight that modern economies run on oil. Oil access, synthesis, and interdiction was a major theater of WW2. In the last century, oil-poor nations spent heavily and fought terrible wars over oil. See, for example, the Combined Bomber Offensive, Operation Tidal Wave, and the destruction of the Leuna synthetic fuel plants, not to mention the effectiveness of the submarine war in the waters around Japan.

In 2022, energy producer Russia invaded Ukraine, instantly throwing into stark relief the idiocy of European energy policy, where an unholy alliance of heavily regulated energy contractors and astroturfed "green" activists managed to get Germany to shut down its nuclear industry. Even as solar panel production, largely initially developed and funded in the West, grew to overwhelming proportions, Europe insisted on sending roughly $1b *per day* to Russia for access to its oil and gas.

If Europe had adjusted course in early 2022, it could be supporting its power grids and probably some synthetic fuel production by now. The US built nuclear weapons from scratch in 2.5 years in the 1940s, in competition with other national priorities at the same time. It has been more than four years since the invasion of Ukraine. But no, Europe did sweet fuck all about ensuring energy sovereignty. Indeed, it went in the other direction. Britain concentrated government resources on cracking down on free speech and stopped drilling for oil. The continent continued its ill-informed blanket ban on fracking, and working-age people continued to pay the price, in the form of ever higher costs, ever higher taxes, ever poorer public services, ever dropping fertility.

What about the rest of the oil-importing developed world? France and Japan maintained their nuclear industries, their navies, their shipping industries, and the fungibility of their supply - to an extent - even as they continued to actively burn up their economies in other, more insidious ways. New Zealand shut down its last refinery. Australia exports a lot of crude and gas but mostly lacks the ability to close its supply chain within its own borders, and fuel prices have almost doubled. California continued to ban new drilling and continues to wage open regulatory warfare against its oil refineries, perversely increasing oil-related air pollution in the state via foreign tanker imports and pushing gasoline prices ever higher.

More of the world has attempted to switch to natural gas, with investments exceeding $1t in gas import and export terminals, as though it were some fundamental law of nature that hydrocarbons must cross an ocean before they are used. As though the US fracking boom will last forever, or Asian demand growth won't see European prices continue to increase, further crushing their economic dynamism.

I have been in the room with various Asian and European energy ministers and have asked them point blank: what's your plan? I have never gotten a better answer than a shrug, as though they'll muddle through and soon it'll be someone else's problem. The best time to get serious about domestic energy supply chains was four years ago. The second best time is today. The pain will ease just as soon as you say the magic words: I must increase my own energy supply!
And yes, it is totally possible to produce synthetic oil and gas pretty much anywhere people live, with a solar-based process we've spent four years developing at @TerraformIndies. It is future-proof, it is strategically robust, and it is price-linked to solar manufacturing cost, which continues to fall like a rock. It's not entirely trivial to do, but given that Europe spends about 100,000x more on Russian oil and gas imports than on (privately funded) synthetic fuel development, I am on safe ground when I accuse Europe's leaders of gross capital misallocation. Imagine what the synthetic fuel industry could achieve with $1b/day! If you are an energy minister, now is a good time to reflect on fates worse than losing an election. Get back to work!
25 replies · 49 reposts · 484 likes · 34.9K views
Chris @TopherBR
@ObsDelphi He's going to get chewed out...
0 replies · 0 reposts · 1 like · 122 views
Louis Duclos @ObsDelphi
🇫🇷 New Strava scandal revealed by Le Monde. A serviceman used the app while jogging on the deck of the Charles de Gaulle, making it visible to everyone. An incredible mistake, given this is not the first time the military has leaked secret positions through this app. Worse, this time it was an officer, which is not acceptable.

I made a video on this subject when submariners' connected watches were pinging on Strava as they surfaced, making the position of our submarines public. Billions spent on the stealth of the hardware, wasted by simple connected watches. There needs to be a real reckoning on this issue. This type of error is well known and should never happen anymore; it is amateurish.
Louis Duclos tweet media
Gilles Klein@GillesKLEIN

🇫🇷 StravaLeaks: the aircraft carrier Charles-de-Gaulle located in real time by Le Monde thanks to the fitness app of an officer jogging on the deck. A security flaw that has not been fixed despite our previous revelations (Le Monde) lemonde.fr/international/…

109 replies · 251 reposts · 1.1K likes · 132.3K views
Chris @TopherBR
@fmbreon @RauxJF From memory, there's a big difference in enrichment: a PWR reactor runs at 3-6%, while the SMRs in military vessels are at 20-25%.
1 reply · 0 reposts · 1 like · 259 views
Chris retweeted
Shashank Joshi @shashj
Remarkable story from Denmark's nat'l broadcaster. "When Danish soldiers were flown to Greenland in January...they brought explosives so they could destroy, among other things, the runways in Nuuk & Kangerlussuaq [to] prevent US mil aircraft from landing" dr.dk/nyheder/indlan…
auonsson@auonsson

Denmark and allies were flying in blood products and preparing to blow up the airstrips as Trump threatened Greenland in January. Surprising no-one, the 'exercise' Arctic Endurance was in fact an active operation, planned since 2025 with France, Germany and Nordics (+UK?).

78 replies · 654 reposts · 2.3K likes · 387.8K views
Chris @TopherBR
@gtmulligan @ramez Instead of acre-feet, people should just use a better measurement system: cubic meters. How much does a cubic meter of water weigh? One (metric) ton, or 1,000 kg, and it's also 1,000 liters. Soda bottles are 2 l, so one cubic meter of water is the equivalent of 500 soda bottles. Pretty easy.
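The conversion chain in this reply is easy to sanity-check in a few lines of Python. A minimal sketch (the acre-foot figure is the standard 1,233.48 m³, added here for comparison; it is not from the post):

```python
# Sanity-check the water conversions: 1 m^3 = 1000 kg = 1000 L = 500 bottles.
ACRE_FOOT_M3 = 1233.48        # standard conversion: one acre-foot in cubic meters

m3 = 1.0                      # one cubic meter of water
kg = m3 * 1000                # water density ~1000 kg/m^3 -> one metric ton
liters = m3 * 1000            # 1 m^3 = 1000 L
bottles = liters / 2          # 2 L soda bottles

print(f"{m3:g} m^3 of water = {kg:.0f} kg = {liters:.0f} L = {bottles:.0f} soda bottles")
print(f"1 acre-foot = {ACRE_FOOT_M3:,.0f} m^3 = {ACRE_FOOT_M3 * 500:,.0f} soda bottles")
```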
0 replies · 0 reposts · 1 like · 359 views
Grant Mulligan @gtmulligan
When I lived in AZ, it was popular for people to say "people shouldn't live in deserts, there's not enough water!" It was just a NIMBY response to the sprawl and growing population. It'd drive me nuts, because what's really crazy is growing cotton in the desert. On a per-acre basis, the graphic below would look very similar for cotton -> houses. We were swapping a wildly inefficient use of water for a highly efficient use. Water wasn't the real issue, it was just an excuse.

Water debates are particularly prone to this dynamic because very few people have reference points or a sense of scale for water. Acre-feet are meaningless to most. So all it takes to complain is to say, "look how big this number is!" and people will be up in arms about almonds, fabs, houses, data centers, what have you. @AndyMasley is so successful because he's making people confront the actual tradeoffs between uses and showing how silly many of the water arguments actually are.
Connor O’Brien@cojobrien

Per @AndyMasley's new tool, alfalfa growers in Colorado alone use 16 times as much water each year as all the data centers in the United States.

7 replies · 27 reposts · 295 likes · 35.8K views
Chris retweeted
Aakash Gupta @aakashgupta
The US government spent $25 million over a decade trying to prove your cell phone gives you cancer. The study accidentally produced one of the strongest pieces of evidence for radiation hormesis ever recorded.

The NTP study was nominated by the FDA in 1999, specifically because they expected to find harm. They built 21 custom reverberation chambers in Switzerland and exposed 1,679 mice and 859 rats to cell phone frequencies for 9 hours a day, every day, for 2 years. The whole operation was designed as the definitive "cell phones cause cancer" study.

The cancer results were mixed at best. Male rats got more heart schwannomas. Mice showed nothing significant. But the survival data was so unexpected that the researchers didn't even know how to explain it in their own report.

Look at the survival curve. Every single radiation group outlived the control. The 2.5 W/kg group hit p=0.0020, the only statistically significant result in the entire longevity analysis. By day 700, the control group's survival probability had dropped to ~0.65. The lowest dose group was still above 0.80.

That's the hormesis signature: the smallest dose produced the largest benefit. The same pattern shows up in exercise, fasting, and cold exposure. A mild biological stressor activates repair mechanisms that wouldn't otherwise turn on. Over 3,000 published papers have documented this across microbes, plants, insects, and mammals. The French Academy of Sciences formally accepted it in 2005. The US still builds its entire radiation safety framework on the opposite assumption: that all radiation, at any dose, causes proportional harm.

The FCC limit for cell phones is 1.6 W/kg. Your AirPods operate at a fraction of that. The dose that produced the strongest longevity signal in this study was 2.5 W/kg, barely above the regulatory ceiling. The entire regulatory framework for wireless device safety assumes a dose-response curve that this $25 million study failed to find.
Aakash Gupta tweet media
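For anyone who wants to poke at survival claims like this themselves, the standard tools are a Kaplan-Meier fit plus a log-rank test between a dose group and the control. A minimal sketch using the lifelines library with made-up numbers (the data below is hypothetical, not the NTP dataset):

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Hypothetical lifespans (days) for a 2-year rodent study, capped at day 730.
# Deaths before day 730 are observed events; animals alive at 730 are censored.
control = np.minimum(rng.normal(650, 90, 90), 730)
dosed = np.minimum(rng.normal(690, 80, 90), 730)
control_event = control < 730
dosed_event = dosed < 730

# Kaplan-Meier survival curve for the control group
kmf = KaplanMeierFitter()
kmf.fit(control, event_observed=control_event, label="control")
print(kmf.survival_function_.tail())

# Log-rank test: is the difference in survival between the groups significant?
result = logrank_test(control, dosed,
                      event_observed_A=control_event,
                      event_observed_B=dosed_event)
print(f"log-rank p-value: {result.p_value:.4f}")
```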
Zane Koch@zanehkoch

for a while i've had a slight fear that the bluetooth from my airpods could be frying my brain

this weekend i pulled the raw data from a $30m government study of 1,679 mice blasted with cell phone radiation and reanalyzed it

what i found was...not what I expected? 🧵

53 replies · 212 reposts · 1.8K likes · 342K views
Chris retweeted
Scott Kelly @StationCDRKelly
When I was on the ISS for my nearly year-long mission, there was a telomere experiment comparing my telomeres to my Earth baseline, with my twin brother as a control. The hypothesis was that they would get damaged and deteriorate due to the environment. Turns out they got better. Initially NASA thought maybe it was due to exercise and diet. After I returned, we learned JAXA had run a telomere experiment on some small worms at the same time I was there. Their telomeres got better too. Never saw the worms doing any exercise. Further study determined it was the radiation.
144 replies · 407 reposts · 8.5K likes · 1.3M views
Chris retweeted
Anish Moonka @AnishA_Moonka
Every time you get a cancer biopsy, the lab makes a tissue slide that costs about $5. It shows the shape of your cells under a microscope, and every cancer patient already has one on file.

There's a much fancier version of that test called multiplex immunofluorescence (basically a protein-level map showing which immune cells are near your tumor and what they're doing). It costs thousands of dollars per sample, takes specialized equipment most hospitals don't have, and barely scales. But it's the kind of data oncologists need to figure out whether immunotherapy will actually work for you. Right now, only about 20 to 40% of cancer patients respond to immunotherapy, and one of the biggest reasons is that doctors can't easily tell whether a tumor is "hot" (immune cells actively fighting it) or "cold" (immune system ignoring it).

Microsoft, Providence Health, and the University of Washington trained an AI to analyze the $5 slide and predict what the expensive test would show across 21 different protein markers. They called it GigaTIME, trained it on 40 million cells for which both the cheap slide and the expensive test coexisted, and then turned it loose on 14,256 real cancer patients across 51 hospitals in 7 US states.

The results landed in Cell, one of the most selective journals in biology. The model generated about 300,000 virtual protein maps covering 24 cancer types and 306 subtypes. It found 1,234 real, verified connections between immune cell behavior, genetic mutations, tumor staging, and patient survival that were previously invisible at this scale. When they tested it against a completely separate database of 10,200 cancer patients, the results matched up almost perfectly (0.88 out of 1.0 agreement).

Nature Methods named spatial proteomics (mapping where specific proteins sit inside your tissue) its Method of the Year in 2024, and specifically cited GigaTIME in a March 2026 update as a model that "democratizes" this kind of analysis. The full model is open-source on Hugging Face. Any cancer research lab with archived biopsy slides (and most of them have thousands) can now run virtual immune profiling without buying a single piece of new equipment.
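The training setup described here - predict the expensive per-pixel protein-marker map from the cheap stained-slide image, supervised on cases where both exist - is, at its core, paired image-to-image regression. A toy PyTorch sketch of that idea (purely illustrative; this is not the published GigaTIME code, architecture, or API):

```python
import torch
import torch.nn as nn

N_MARKERS = 21  # one output channel per protein marker (per the post)

# Toy stand-in for a slide-to-marker-map model: RGB tile in, 21 channels out.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, N_MARKERS, kernel_size=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Fake paired batch: cheap slide tiles with matching expensive marker maps.
slide_tiles = torch.rand(8, 3, 128, 128)          # H&E-style RGB tiles
marker_maps = torch.rand(8, N_MARKERS, 128, 128)  # mIF-style ground truth

for step in range(3):                  # a few illustrative training steps
    pred = model(slide_tiles)
    loss = loss_fn(pred, marker_maps)  # supervise only where pairs exist
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(step, loss.item())
```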
Satya Nadella@satyanadella

We’ve trained a multimodal AI model to turn routine pathology slides into spatial proteomics, with the potential to reduce time and cost while expanding access to cancer care.

103 replies · 1.8K reposts · 11.2K likes · 928.1K views
Chris retweeted
Alec Stapp @AlecStapp
"The whole background of this AI conversation is that we’re in a race with China, and we have to win. But what is the reason we want America to win the AI race? It’s because we want to make sure free open societies can defend themselves. We don't want the winner of the AI race to be a government which operates on the principle that there is no such thing as a truly private company or a private citizen. And that if the state wants you to provide them with a service on terms you find morally objectionable, you are not allowed to refuse. And if you do refuse, the government will try to destroy your ability to do business. Are we racing to beat the CCP in AI just so that we can adopt the most ghoulish parts of their system?"
Dwarkesh Patel@dwarkesh_sp

The fight between Anthropic and the DoW is a warning shot. Right now, LLMs are probably not being used in mission-critical ways. But within 20 years, 99% of the workforce in the military, the government, and the private sector will be AIs. This includes the soldiers (by which I mean the robot armies), the superhumanly intelligent advisors and engineers, the police, you name it. Our future civilization will run on AI labor. And as much as the government's actions here piss me off, in a way I'm glad this episode happened, because it gives us the opportunity to think through some extremely important questions about who this future workforce will be accountable and aligned to, and who gets to determine that.

What Hegseth should have done

Obviously the DoW has the right to refuse to use Anthropic's models because of these redlines. In fact, I think the government's case, had it done so, would be very reasonable, especially given the ambiguity of concepts like autonomous weapons or mass surveillance. Honestly, for this reason, if I were the Defense Secretary, I would probably refuse to do this deal with Anthropic.

Imagine if in the future there's a Democratic administration, and Elon Musk is negotiating some SpaceX contract to give the military access to Starlink. And suppose Elon said, "I reserve the right to cancel this contract if I determine that you're using Starlink technology to wage a war not authorized by Congress." On the face of it, that language seems reasonable, but as the military, you simply can't give a private company a kill switch on technology your operations have come to rely on, especially if you have an acrimonious, low-trust relationship with said contractor, as in fact Anthropic has with the current administration.

If the government had just said, "Hey, we're not gonna do business with you," that would have been fine, and I would not have felt the need to write this blog post. Instead the government has threatened to destroy Anthropic as a private business, because Anthropic refuses to sell to the government on terms the government commands. If upheld, this Supply Chain Restriction would mean that Amazon and Google and Nvidia and Palantir would need to ensure Claude isn't touching any of their Pentagon work.

Anthropic would be able to survive this designation today. But given the way AI is going, eventually AI is not gonna be some party-trick addendum to these contractors' products that can just be turned off. It'll be woven into how every product is built, maintained, and operated. For example, the code for the AWS services that the DoW uses will be written by Claude - is that a supply chain risk? In a world with ubiquitous and powerful AI, it's actually not clear to me that these big tech companies will be able to cordon off the use of Claude in order to keep working with the Pentagon.

And that raises a question the Department of War probably hasn't thought through. If AI really is that pervasive and powerful, then when forced to choose between their AI provider and a DoW contract that represents a tiny fraction of their revenue, wouldn't most tech companies drop the government, not the AI? So what's the Pentagon's plan - to coerce and threaten to destroy every single company that won't give them what they want on exactly their terms?

The whole background of this AI conversation is that we're in a race with China, and we have to win. But what is the reason we want America to win the AI race?
It's because we want to make sure free, open societies can defend themselves. We don't want the winner of the AI race to be a government which operates on the principle that there is no such thing as a truly private company or a private citizen. And that if the state wants you to provide them with a service on terms you find morally objectionable, you are not allowed to refuse. And if you do refuse, the government will try to destroy your ability to do business. Are we racing to beat the CCP in AI just so that we can adopt the most ghoulish parts of their system?

Now, people will say, "Oh, well, our government is democratically elected, so it's not the same thing if they tell you what you must do." I refuse to accept this idea that if a democratically elected leader hypothetically wants to do mass surveillance on his citizens or wants to violate their rights or punish them for political reasons, that not only is that okay, but that you have a duty to help him.

The overhangs of tyranny

Mass surveillance is, at least in certain forms, legal. It just has been impractical so far. Under current law, you have no Fourth Amendment protection over data you share with a third party, including your bank, your phone carrier, your ISP, and your email provider. The government reserves the right to purchase and obtain and read this data in bulk without a warrant. What's been missing is the ability to actually do anything with all of this data - no agency has the manpower to monitor every camera feed, cross-reference every transaction, or read every message.

But that bottleneck goes away with AI. There are 100 million CCTV cameras in America. You can get pretty good open-source multimodal models for 10 cents per million input tokens. So if you process a frame every ten seconds, and each frame is 1,000 tokens, you're looking at a yearly cost of about 30 billion dollars to process every single camera in America. And remember that a given level of AI ability gets 10x cheaper year over year - so a year from now it'll cost 3 billion, then a year after that 300 million, and by 2030 it might be cheaper for the government to understand what is going on in every single nook and cranny of this country than it is to remodel the White House. (A back-of-envelope check of this arithmetic appears in code after this thread.)

Once the technical capacity for mass surveillance and political suppression exists, the only thing standing between us and an authoritarian surveillance state is the political expectation that this is not something we do here. And this is why I think what Anthropic did here is so valuable and commendable: it is helping set that norm and precedent.

AI structurally favors mass surveillance

What we're learning from this episode is that the government actually has way more leverage over private companies than we realized. Even if this supply chain restriction is walked back (prediction markets currently give that an 81% chance of happening), the President has so many different ways to make your life difficult if you're a company that is resisting him. The federal government controls permitting for new power generation, which is needed for datacenters. It oversees antitrust enforcement. The federal government has contracts with all the other big tech companies whom Anthropic needs to partner with for chips and for funding - and it could make it an unspoken condition of such contracts that those companies no longer do business with Anthropic.

People have proposed that the real problem here is that there are only 3 leading AI companies.
This creates a clear and narrow target for the government to apply leverage on in order to get what it wants out of this technology. But if there's wide diffusion, then from the government's perspective the situation is even easier. Maybe the best models of early 2027 (if you engineered the safeguards out) - the Claude 6 and Gemini 5 - will be capable of enabling mass surveillance. But by late 2027, and certainly by 2028, there will be open-source models that do the same thing. So in 2028 the government can just say, "Oh, Anthropic, Google, OpenAI, you're drawing a line in the sand? No issue - I'll just run some open-source model that might not be at the frontier, but is definitely smart enough to note-take a camera feed."

The more fundamental problem is that even if the three leading companies draw lines in the sand, and are even willing to get destroyed in order to preserve those lines, it doesn't really change the fact that the technology itself is just a big boon to mass surveillance and control over the population.

Then the question is, what do we do about it? Honestly, I don't have an answer. You'd hope there's some symmetric property of the technology - some way we as citizens can use AI to check government power as effectively as the government can use AI to monitor and control its population. But realistically, I just don't think that's how it's going to shake out. You can think of AI as giving everybody more leverage on whatever assets and authority they currently have. And the government is already starting with a monopoly on violence. Which it can now supercharge with extremely obedient employees that will not question the government's orders.

Alignment - to whom?

And this gets us to the issue of alignment. What I have just described to you - an army of extremely obedient employees - is what it would look like if alignment succeeded, that is, if we figured out at a technical level how to get AI systems to follow someone's intentions. And the reason it sounds scary when I put it in terms of mass surveillance or robot armies is that there is a very important question at the heart of alignment which we just haven't discussed much as a society, because up till now AIs simply weren't capable enough to make it relevant: to whom or what should the AIs be aligned? In what situations should the AI defer to the end user versus the model company versus the law versus its own sense of morality?

This is maybe the most important question about what happens with powerful AI systems. And we barely talk about it. It's understandable why we don't hear much about it. If you're a model company, you don't really wanna be advertising that you have complete control over a document that determines the preferences and character of what will eventually be almost the entire labor force, not just for private-sector companies, but also for the military and the civilian government.

We're getting to see, with this DoW/Anthropic spat, a much earlier version of the highest-stakes negotiations in history. By the way, make no mistake about it - with real AGI the stakes are even much higher than mass surveillance. This is just the example that has come up relatively early on in the development of AGI. The military insists that the law already prohibits mass surveillance, and so Anthropic should agree to let its models be used for "all lawful purposes".
Of course, as we saw from the 2013 Snowden revelations, even in this specific example of mass surveillance, the government has shown that it will use secret and deceptive interpretations of the law to justify its actions. Remember, what we learned from Snowden was that the NSA (which, by the way, is part of the Department of War) used the 2001 Patriot Act's authorization to collect any records "relevant" to an investigation to justify collecting literally every phone record in America. The argument went that it was all "relevant" because some subset might prove useful in some future investigation. They ran this program for years under secret court approval.

So when the Pentagon today says, "We would never use AI for mass surveillance, it's already illegal, your red lines are unnecessary," it would be extremely naive to take that at face value. No government is going to call its own actions "mass surveillance". For the government, it will always have a different label.

So then Anthropic comes back and says, "No, we want red lines separate from 'all lawful purposes,' and we want the right to refuse you service when we believe those red lines are being violated." But think about it from the military's perspective. In the future, almost every soldier in the field, and every bureaucrat and analyst and even general in the Pentagon, is going to be an AI. And that AI is, on the current track, going to be supplied by a private company. I'm guessing Hegseth is not thinking about "genAI" in those terms just yet. But sooner or later it will be obvious to everyone what the stakes here are, just as after 1945 the strategic importance of nuclear weapons became clear to everyone. And now the private company insists that it reserves the right to say, "Hey, Pentagon, you're breaking the values we embedded in our contract, so we're cutting you off."

Maybe in the future, Claude will have its own sense of right and wrong, and it will be smart enough to just personally decide that it's being used against its values. For the military, maybe that's even scarier. I'll admit that at first glance, "let the AI follow its own values" sounds like the pitch for every sci-fi dystopia ever made. The Terminator has its own values. Isn't this literally what misalignment is?

But I think situations like this actually illustrate why it matters that AIs have their own robust sense of morality. Some of the biggest catastrophes in history were avoided because the boots on the ground refused to follow orders. One night in 1989, the Berlin Wall fell, and as a result the totalitarian East German regime collapsed, because the guards at the border refused to shoot down their fellow countrymen who were trying to escape to freedom. Maybe the best example is Stanislav Petrov, a Soviet lieutenant colonel on duty at a nuclear early-warning station. His sensors reported that the United States had launched five intercontinental ballistic missiles at the Soviet Union. But he judged it to be a false alarm, so he broke protocol and refused to alert his higher-ups. If he hadn't, the Soviet leadership would likely have retaliated, and hundreds of millions of people would have died.

Of course, the problem is that one person's virtue is another person's misalignment. Who gets to decide what moral convictions these AIs should have - in whose service they may even decide to break the chain of command?
Who gets to write the model constitution that will shape the characters of the intelligent, powerful entities that will operate our civilization in the future? I like the idea that Dario laid out when he came on my podcast: different AI companies can build their models using different constitutions, and we as end users can pick the one that best achieves and represents what we want out of these systems. I think it's very dangerous for the government to be mandating what values AIs should have.

Coordination not worth the costs

The AI safety community has been naive in its advocacy of regulation to stem the risks of AI. And honestly, Anthropic specifically has been naive here in urging regulation and, for example, in opposing moratoriums on state AI regulation. Which is quite ironic, because I think what they're advocating for would give the government even more power to apply more of this kind of thuggish political pressure on AI companies.

The underlying logic for why Anthropic wants regulations makes sense. Many of the actions that labs could take to make AI development safer impose real costs on the labs that adopt them and slow them down relative to their competitors - for example, investing more compute in safety research rather than raw capabilities, enforcing safeguards against misuse for bioweapons or cyberattacks, slowing recursive self-improvement to a pace where humans can actually monitor what's happening (rather than kicking off an uncontrolled singularity). And these safeguards are meaningless unless the whole industry follows suit. Which means there's a real collective action problem here.

Anthropic has been quite open about its opinion that eventually a very extensive and involved regulatory apparatus will be needed - this is from their frontier safety roadmap: "At the most advanced capability levels and risks, the appropriate governance analogy may be closer to nuclear energy or financial regulation than to today's approach to software." So they're imagining something like the Nuclear Regulatory Commission, or the Securities and Exchange Commission, but for AI.

I cannot imagine how a regulatory framework built around the concepts that underlie AI-risk discourse will not be abused by wannabe despots - the underlying terms are so vague and open to interpretation that you're just handing a power-hungry leader a fully loaded bazooka. 'Catastrophic risk.' 'Mass persuasion risk.' 'Threats to national security.' 'Autonomy risk.' These can mean whatever the government wants them to mean. Have you built a model that tells users the administration's tariff policy is misguided? That's a deceptive, manipulative model - can't deploy it. Have you built a model that refuses to assist with mass surveillance? That's a threat to national security. In fact, the government may say you're not allowed to build any model which is trained to have its own sense of right and wrong, where it refuses government requests it thinks cross a redline - for example, enabling mass surveillance, prosecuting political enemies, disobeying military orders that break the US Constitution - because that's an autonomy risk!

Look at what the current government is already doing in abusing statutes that have nothing to do with AI to coerce AI companies to drop their redlines on mass surveillance. The Pentagon had threatened Anthropic with two separate legal instruments.
One was a supply chain risk designation - an authority from the 2018 defense bill meant to keep Huawei components out of American military hardware. The other was the Defense Production Act - a statute passed in 1950 so that Harry Truman could keep steel mills and ammunition factories running during the Korean War. Do you really want to hand the same government a purpose-built regulatory apparatus for AI - aimed, that is, directly at the thing the government will most want to control?

I know I've repeated myself here 10 times, but it is hard to overemphasize how much AI will be the substrate of our future civilization. You and I, as private citizens, will have our access to all commercial activity, to information about what is happening in the world, to advice about what we should do as voters and capital holders, mediated through AIs. Mass surveillance, while very scary, is like the 10th scariest thing the government could do with control over the AI systems through which we will interface with the world.

The strongest objection to everything I've argued is this: are we really going to have zero regulation of the most powerful technology in human history? Even if you thought that was ideal, there's just no world where the government doesn't regulate AI in some way. Besides, it is genuinely true that regulation could help us deal with some of the coordination challenges we face with the development of superintelligence. The problem is, I honestly don't know how to design a regulatory architecture for AI that isn't gonna be this huge tempting opportunity to control our future civilization (which will run on AIs) and to requisition millions of blindly obedient soldiers and censors and apparatchiks. While some regulation might be inevitable, I think it'd be a terrible idea for the government to wholesale take over this technology.

Ben Thompson had a post last Monday where he made the point that people like Dario have compared the technology they're developing to nuclear weapons - specifically in the context of the catastrophic risk it poses, and why we need to export-control it from China. But then you oughta think about what that logic implies: "if nuclear weapons were developed by a private company, and that private company sought to dictate terms to the U.S. military, the U.S. would absolutely be incentivized to destroy that company."

And honestly, safety-aligned people have made similar arguments. Leopold Aschenbrenner, who is a former guest and a good friend, wrote in his 2024 Situational Awareness memo, "I find it an insane proposition that the US government will let a random SF startup develop superintelligence. Imagine if we had developed atomic bombs by letting Uber just improvise." And my response to Leopold's argument at the time, and Ben's argument now, is that while they're right that it's crazy that we're entrusting private companies with the development of this world-historical technology, I just don't see the reason to think that it's an improvement to give this authority to the government. Nobody is qualified to steward the development of superintelligence. It is a terrifying, unprecedented thing that our species is doing right now, and the fact that private companies aren't the ideal institutions to take up this task does not mean the Pentagon or the White House is.

Yes - if a single private company were the only entity capable of building nuclear weapons, the government would not tolerate that company claiming veto power over how those weapons were used.
But I think this nuclear weapons analogy is not the correct way to think about AI, for at least two important reasons.

First, AI is not some self-contained pure weapon. A nuclear bomb does one thing. AI is closer to the process of industrialization itself - a general-purpose transformation of the economy with thousands of applications across every sector. If you applied Thompson's or Aschenbrenner's logic to the industrial revolution - which was also, by any measure, world-historically important - it would imply the government had the right to requisition any factory, dictate terms to any manufacturer, and destroy any business that refused to comply. That's not how free societies handled industrialization, and it shouldn't be how they handle AI.

People will say, "Well, AI will develop unprecedentedly powerful weapons - superhuman hackers, superhuman bioweapons researchers, fully autonomous robot armies, etc. - and we can't have private companies developing that kind of tech." But the Industrial Revolution also enabled new weaponry that was far beyond the understanding and capacity of, say, 17th-century Europe - we got aerial bombardment and chemical weapons, not to mention nukes themselves. The way we've accommodated these dangerous new consequences of modernity is not by giving the government absolute control over the whole industrial revolution (that is, over modern civilization itself), but rather by coming up with bans and regulations on those specific weaponizable use cases. And we should regulate AI in a similar way - that is, ban specific destructive end uses (which would also be unacceptable if performed by a human - for example, launching cyberattacks). And there should also be laws which regulate how the government might abuse this technology - for example, by building an AI-powered surveillance state.

The second reason that Ben's analogy to some monopolistic private nuclear-weapons builder breaks down is that it's not just one company that can develop this technology. There are other frontier model companies that the government could have turned to. The government's argument that it has to usurp the property rights of this one company in order to access a critical national-security capability is extremely weak if it can just make a voluntary contract with Anthropic's half a dozen competitors. If in the future that stops being the case - if only one entity ends up being capable of building the robot armies and the superhuman hackers, and we had reason to worry that they could take over the whole world with their insurmountable lead - then I agree: it would not be acceptable to have that entity be a private company.

And so honestly, I think my crux against the people who say that because AI is so powerful we cannot allow it to be shaped by private hands is that I just expect this technology to be much more multipolar than they do, with lots of competitive companies at each layer of the supply chain. And it is for this reason that, unfortunately, individual acts of corporate courage will not solve the problem we are faced with here, which is that structurally AI favors authoritarian applications, mass surveillance being one among many. Even if Anthropic refuses to have its models used for such purposes, and even if the next two frontier labs do the same, within 12 months everyone and their mother will be able to train AIs as good as today's frontier.
And at that point, there will be some AI vendor who is capable and willing to help the government enable mass surveillance. The only way we can preserve our free society is if we make laws and norms through our political system that it is unacceptable for the government to use AI to enforce mass surveillance and censorship and control. Just as after WW2, the world set the norm that it is unacceptable to use nuclear weapons to wage war.

Timestamps:
0:00:00 - Anthropic vs The Pentagon
0:04:16 - The overhangs of tyranny
0:05:54 - AI structurally favors mass surveillance
0:08:25 - Alignment... to whom?
0:13:55 - Coordination not worth the costs
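The camera-fleet arithmetic a few paragraphs up holds together. A quick back-of-envelope in Python, using only the figures given in the thread (100M cameras, one 1,000-token frame every ten seconds, $0.10 per million input tokens, ~10x yearly cost decline):

```python
# Back-of-envelope for the surveillance-cost claim in the thread.
cameras = 100e6                          # CCTV cameras in America (per the post)
tokens_per_frame = 1_000
frames_per_year = 365 * 24 * 3600 / 10   # one frame every ten seconds
usd_per_token = 0.10 / 1e6               # $0.10 per million input tokens

tokens_per_year = cameras * frames_per_year * tokens_per_frame
cost = tokens_per_year * usd_per_token
print(f"year 0: ${cost / 1e9:.1f}B")     # ~$31.5B, i.e. "about 30 billion dollars"

# The thread assumes this capability gets ~10x cheaper each year.
for year in range(1, 4):
    cost /= 10
    print(f"year {year}: ${cost / 1e9:.2f}B")
```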

67 replies · 87 reposts · 690 likes · 98.3K views
Chris retweeted
scary lawyerguy @scarylawyerguy
If the U.S. military confirmed we "accidentally" killed 175 civilians, including lots of kids, and Joe Biden said he did not know anything about the report, the entire media industrial complex would have collapsed on itself like a dying star.
Aaron Rupar@atrupar

Q: A new report says that a military investigation has found that the US struck the school in Iran. As commander in chief, do you take responsibility?
TRUMP: For what?
Q: A strike on the school in Iran
TRUMP: I don't know about it

100 replies · 2.4K reposts · 12.2K likes · 330.9K views
Chris retweeted
Derek Thompson @DKThomp
I recognize this is not the most important thing in the world, but it is critically necessary that we have a national moratorium on pleading "Mr. President" headlines. Something so tiresomely 1957 about "Mr. President, I Beseech You That the Time Has Come to Blah Blah." It's 2026, and the actual president (a) does not read and (b) is currently, as we speak, probably busy uploading AI video slop of his face transposed onto some 1980s action flick to his Truth Social. We don't need to write headlines like we're addressing an MP at the Oxford Union.
Derek Thompson tweet media
31 replies · 104 reposts · 1.5K likes · 95.6K views
Chris @TopherBR
@wittgensteinien I'd love to know what these very temporary ministers do after passing through a ministry without making any waves.
0 replies · 0 reposts · 0 likes · 47 views
Chris @TopherBR
@iamsupersocks I think it's great. I'm using it on a very technical, niche project, and the model finds bugs in my implementations, or style errors that don't match the rest of the project's code. That said, since it's quite technical, it does sometimes recommend things that are wrong.
1 reply · 0 reposts · 1 like · 20 views
Supersocks @iamsupersocks
@TopherBR Thanks for sharing. Competition doesn't hurt. Is it any good? I'll do a comparison with Codex too, which released its own yesterday.
1 reply · 0 reposts · 0 likes · 22 views
Supersocks @iamsupersocks
Claude now reviews your PRs. The filter is aggressive: confidence threshold of 80+, under 1% false positives. Internally at Anthropic, the share of PRs getting genuine comments went from 16% to 54%. On large PRs (1000+ lines), 84% receive findings, averaging 7.5 real issues.

How it works in practice:
→ An admin installs the GitHub App on the team's repos
→ After that, it's automatic. Every opened PR (by you, by Claude, by a colleague) gets reviewed without anyone asking
→ 4 agents scan everything in parallel: bugs, security, maintainability. Confidence threshold of 80+, under 1% false positives
→ Comments land on your PR as if a senior colleague had reviewed it

Then:
→ It tells you "bug at line 47" with a precise comment
→ It proposes a fix
→ You agree? It applies the patch
→ You disagree? You reply in the PR, and it adjusts
→ You validate and you merge. It never merges on its own.

Before this, there was already the GitHub Action (open source, free). You put a YAML file in your repo, you mention @claude in your PR, and it reviews. But you have to trigger it, you need an API key, and it's more lightweight. With Code Review it's another level: the admin installs the GitHub App, and then every PR is reviewed automatically. 4 agents going deep, ~20 min, nobody has to ask.

Team / Enterprise only for now, not yet available for weekend vibe coders; this product targets big enterprise users. /install-github-app. Done.
Claude@claudeai

Introducing Code Review, a new feature for Claude Code. When a PR opens, Claude dispatches a team of agents to hunt for bugs.

9 replies · 11 reposts · 160 likes · 46.7K views
Chris retweeted
Michael McNair @michaeljmcnair
Arguing that Elon Musk's success is due to "narrative control", luck, or riding others' coattails is such an implausible claim that it functions as a useful litmus test for a person's analytical judgment.

This isn't about whether you like Elon Musk. I don't know him, and I am largely agnostic about him as a person. But I do know his record as a CEO, and studying management and business strategy has been a major part of my job for the past 20 years. From that perspective, I can tell you that Musk isn't just a good CEO. He is one of the most effective CEOs of our generation.

When I hear people write off Elon's achievements because someone else started these companies, it is a clear tell that they don't understand business. Ideas are a dime a dozen. They are not what makes a great CEO. Execution is. And part of execution is recognizing a good idea when you see one and understanding how to build something around it that actually works.

Tesla was months from bankruptcy when Musk took control. It's now the company that forced every major automaker on earth to retool their entire product strategy. SpaceX was a startup that serious people in the aerospace industry dismissed as a fantasy. It now conducts more orbital launches than the rest of the world combined and has driven launch costs down by an order of magnitude. Starlink is on track to become one of the most consequential communications infrastructure projects in history. These aren't narrative achievements. They're tangible businesses that work, at scale, in industries where failure is the default condition.

And there's a consistent pattern where Elon has repeatedly looked crazy, and then been right. The people who called reusable rockets a dream watched a booster fly back and land itself. The people who said a mainstream consumer EV company was impossible watched Tesla restructure the global auto industry. This is a person who has repeatedly seen something others can't see yet, absorbed the ridicule, and then built toward it anyway.

The PayPal criticism this author pushes is another perfect example. Do you know how he became CEO? Elon identified the importance of network effects in the late 90s and realized he could take advantage of cheap capital during the internet bubble to pay users to join his network. He was labeled a lunatic. Losing money upfront to lock customers into your network is well understood now, but it wasn't back then. Confinity was forced to merge because they couldn't compete with it - and that's based on Peter Thiel's own account in Zero to One. Elon was considered reckless at the time. But he was right.

And now we have people criticizing Musk's Mars goal. But as Ben Thompson explained, Mars is the strategic North Star that forces you to radically confront the cost structure required to achieve it, which leads you down the only path that actually scales, without settling for easier short-term solutions. If you're serious about putting a city on Mars, full reusability is non-negotiable. And that engineering logic turns out to be what dramatically lowers launch costs. Which unlocks Starlink at scale. And Starlink creates the revenue flywheel that funds everything else. An Arianespace executive called reusability a dream in 2013 and said it was impossible. But the dream isn't the destination. It's the constraint that forces you down the only engineering path that actually works. And it's why SpaceX is a trillion-dollar company today.

You can write off one company as luck. You can write off two as fortunate timing. But at some point the sheer weight of success across different industries and challenges stops looking like coincidence and starts looking like a big flashing signal. When someone executes repeatedly in industries where lack of execution destroys almost everyone else, the correct analytical move is to update your model. If you can't see that Elon is a great CEO, then you're just revealing the limits of your own analytical process.
CommonSenseSkeptic@C_S_Skeptic

x.com/i/article/2031…

101 replies · 270 reposts · 2.1K likes · 238.7K views
Chris retweeted
Emmanuel Macron @EmmanuelMacron
Nuclear energy gives us what our era needs more than ever: independence, resilience in the face of crises, competitiveness, and the capacity to meet our climate ambitions.

At a time when our economies are electrifying, when digital technology and artificial intelligence are transforming how we live, and when industry needs to electrify, global electricity demand is growing twice as fast as it did over the past decade. Facing this rise in needs, France has an asset many nations envy: 57 reactors across 18 sites, the largest nuclear fleet in the world relative to our population. Civil nuclear power is also a decisive lever for decarbonization: nuclear emits 12 grams of CO2 per kilowatt-hour, versus 490 for gas and 820 for coal!

At Belfort in 2022, I set a clear course: taking back control of our energy destiny, by ending our dependence on fossil fuels and regaining our industrial and energy sovereignty. We are there, and we are holding that course. In 2025, our plants produced around 370 terawatt-hours of electricity, and France exported more than 90 terawatt-hours of decarbonized electricity. Our program of new reactor construction is advancing, and we are accelerating.

At the European level: technological neutrality, standardization, stronger financing, skills, and a genuine European value chain. At the global level: collaboration on research and development challenges, and collective work on safety. That is our ambition for nuclear power, and it is what I said to all the countries gathered in Paris this morning.

In a more unstable, more fragmented, more uncertain world, it is a choice of sovereignty, a choice of competitiveness, and a guarantee for the future. France has made that choice.
Emmanuel Macron tweet media
2.3K replies · 1K reposts · 5.1K likes · 1.1M views