Eric Herring ericherring.bsky.social

21.7K posts

@eric_herring

Prof of World Politics, University of Bristol. Research/action on Somali led sustainable development with @TransparencySol. Honoured by my Somali name Warsame

Gradually moving to Bluesky · Joined November 2009
845 Following · 3.5K Followers
Eric Herring reposted
Timothy Snyder @TimothyDSnyder
If we made the green energy transition this war would be unthinkable and these authoritarians wouldn’t be in power — not in the US, not in Iran, not in Saudi Arabia, not in Russia. Hydrocarbons are killing our freedom and just plain killing us.
299 replies · 2.9K reposts · 9.4K likes · 166.1K views
Eric Herring reposted
Joe Kent @joekent16jan19
After much reflection, I have decided to resign from my position as Director of the National Counterterrorism Center, effective today. I cannot in good conscience support the ongoing war in Iran. Iran posed no imminent threat to our nation, and it is clear that we started this war due to pressure from Israel and its powerful American lobby. It has been an honor serving under @POTUS and @DNIGabbard and leading the professionals at NCTC. May God bless America.
[image]
72.8K replies · 219.9K reposts · 847.6K likes · 99.7M views
Eric Herring reposted
Eyup Lovely @eyuplovely
Ten successive governments and seven Prime Ministers from both parties spent 25 years covering up the fact that Tony Blair met a convicted child sex trafficker while in office, the information coming to light only after its release was mandated by law. They’re all complicit.
[image]
31 replies · 1.1K reposts · 2.9K likes · 35.1K views
Eric Herring reposted
ProfTalmadge @ProfTalmadge
This is a technical story with stunning strategic implications. It is quite possible the US launched a massive war because Jared Kushner and Steve Witkoff lacked the technical expertise to even understand what the Iranians were offering in negotiations. Absolute idiocy. ms.now/news/trump-ira…
689 replies · 7K reposts · 19.5K likes · 1.8M views
Eric Herring reposted
Alex Prompter @alex_prompter
🚨 BREAKING: Researchers at UW Allen School and Stanford just ran the largest study ever on AI creative diversity. They asked over 70 different LLMs the exact same open-ended questions. "Write a poem about time." "Suggest startup ideas." "Give me life advice." Questions with no single right answer, where 10 different humans would give you 10 completely different responses.

Instead, 70+ models from every major AI company converged on almost identical outputs. Different architectures. Different training data. Different companies. Same ideas. Same structures. Same metaphors. They named this phenomenon the "Artificial Hivemind."

And the paper won the NeurIPS 2025 Best Paper Award, the highest recognition in AI research, handed to a small number of papers out of thousands of submissions. This is not a blog post or a hot take. This is award-winning, peer-reviewed science confirming something massive is broken.

The team built a dataset called Infinity-Chat with 26,000 real-world, open-ended queries and over 31,000 human preference annotations. Not toy benchmarks. Not math problems. Real questions people actually ask chatbots every single day, organized into 6 categories and 17 subcategories covering creative writing, brainstorming, speculative scenarios, and more. They ran all of these across 70+ open and closed-source models and measured the diversity of what came back.

Two findings hit hard.

First, intra-model repetition. Ask the same model the same open-ended question five times and you get almost the same answer five times. The "creativity" you think you're getting is the same output wearing a slightly different outfit. You ask ChatGPT, Claude, or Gemini to write you a poem about time and you keep getting the same river metaphor, the same hourglass imagery, the same reflection on mortality. Over and over. The model isn't thinking. It's defaulting to whatever scored highest during alignment training.

Second, and this is the one that should really alarm you, inter-model homogeneity. Ask GPT, Claude, Gemini, DeepSeek, Qwen, Llama, and dozens of other models the same creative question, and they all converge on strikingly similar responses. These are models built by completely different companies with different architectures and different training pipelines. They should be producing wildly different outputs. They're not. 70+ models all thinking inside the same invisible box, producing the same safe, consensus-approved content that blends together into one indistinguishable voice.

So why is this happening? The researchers point directly at RLHF and current alignment techniques. The process we use to make AI "helpful and harmless" is also making it generic and boring. When every model gets trained to optimize for human preference scores, and those preference datasets converge on a narrow definition of what "good" looks like, every model learns to produce the same safe, agreeable output. The weird answers get penalized. The original takes get shaved off. The genuinely creative responses get killed during training because they didn't match what the average annotator rated highly.

And it gets even worse. The study found that reward models and LLM-as-judge systems are actively miscalibrated when evaluating diverse outputs. When a response is genuinely different from the mainstream but still high quality, these automated systems rate it LOWER. The very tools we built to evaluate AI quality are punishing originality and rewarding sameness.

Think about what this means if you use AI for brainstorming, content creation, business strategy, or literally any task where you need multiple perspectives. You're getting the illusion of diversity, not the real thing. You ask for 10 startup ideas and you get 10 variations of the same 3 ideas the model learned were "safe" during training. You ask for creative writing and you get the same therapeutic, perfectly balanced, utterly forgettable tone that every other model gives.

The researchers flagged direct implications for AI in science, medicine, education, and decision support, all domains where diverse reasoning is not a nice-to-have but a requirement. Correlated errors across models mean that if one AI gets something wrong, they might ALL get it wrong the same way. Shared blind spots at massive scale.

And the long-term risk is even scarier. If billions of people interact with AI systems that all think identically, and those interactions shape how people write, brainstorm, and make decisions every day, we risk a slow, invisible homogenization of human thought itself. Not because AI replaced creativity. Because it quietly narrowed what we were exposed to until we all started thinking the same way too.

Here's what you can actually do about it right now:

→ Stop accepting first-draft AI output as creative or diverse. If you need 10 ideas, generate 30 and throw away the obvious ones
→ Use temperature and sampling parameters aggressively to push models out of their comfort zone
→ Cross-reference multiple models AND multiple prompting strategies, because the same model with different prompts often beats different models with the same prompt
→ Add constraints that force novelty, like "give me ideas that a traditional investor would hate" instead of "give me creative ideas"
→ Use structured prompting techniques like Verbalized Sampling to force the model to explore low-probability outputs instead of defaulting to consensus
→ Layer your own taste and judgment on top of everything AI gives you. The model gets you raw material. Your weirdness and experience make it original

This paper puts hard data behind something a lot of us have been feeling for a while. AI is getting more capable and more homogeneous at the same time. The models are smarter, but they're all smart in the exact same way.

The Artificial Hivemind is not a bug in one model. It's a systemic feature of how the entire industry builds, aligns, and evaluates language models right now. The fix requires rethinking alignment itself, moving toward what the researchers call "pluralistic alignment," where models get rewarded for producing diverse distributions of valid answers instead of collapsing to a single consensus mode. Until that happens, your best defense is awareness and better prompting.
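The thread's first suggestion, generating many candidates and discarding the obvious ones, only works if you can tell when a batch has collapsed into sameness. Here is a minimal sketch of one crude way to measure that: mean pairwise Jaccard similarity over word sets. The `homogeneity` function and the sample responses are invented for illustration; this is not code from the Infinity-Chat paper.

```python
# Crude homogeneity check for a batch of model outputs:
# high average pairwise word-overlap = low diversity.
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two word sets (1.0 = identical)."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def homogeneity(responses: list[str]) -> float:
    """Mean pairwise Jaccard similarity across a batch of responses."""
    words = [set(r.lower().split()) for r in responses]
    pairs = list(combinations(words, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Near-identical "river of time" poems score high...
samey = [
    "time flows like a river toward the sea",
    "time flows like a river to the sea",
    "like a river time flows toward the sea",
]
# ...while genuinely different answers score low.
varied = [
    "time flows like a river toward the sea",
    "a clock is a cage with circular bars",
    "yesterday is a country with closed borders",
]
assert homogeneity(samey) > homogeneity(varied)
```

In practice you would sample many completions, score the batch, and keep only the outliers; word-set overlap is the bluntest possible diversity metric, but it is enough to flag a collapsed batch.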
[image]
333 replies · 905 reposts · 3K likes · 481.4K views
Eric Herring reposted
Kirby Sommers @LandlordLinks
1/ The Daily Mail SCRUBBED an important article from its website: the one reporting that the head of MI6, Sir John Sawers, attended a dinner with Jeffrey Epstein, Ghislaine Maxwell and Prince Andrew in order to broker a deal between Palantir and the UK government in 2019.
[image]
35 replies · 3K reposts · 5.1K likes · 57.7K views
Eric Herring reposted
ُ @kelevitch
The children breathing this air today will develop cancers 10, 20, 30 years from now. And nobody will connect it. Nobody will pay for their treatment. Nobody will be held accountable.

When petroleum burns, it releases sulfur oxides, nitrogen oxides, and toxic hydrocarbons into the air. When those chemicals mix with rain, they become SULFURIC ACID and NITRIC ACID. The rain causes "chemical burns to the skin and serious damage to the lungs." It's a chemical attack using oil as the weapon.

When Saddam burned oil wells in Kuwait in 1991, US veterans developed "Gulf War Syndrome": chronic pain, neurological damage, cancer. 30 years later, they're still dying from it. That was in the desert. This is inside a city of 10 million.

But hey, let's Make Iran Great Again ☝🏼🥸
Power to the People ☭🕊@ProudSocialist

BREAKING: The people of Tehran woke up to toxic acid rain after the U.S. & Israel bombed oil storage facilities. 10 million people exposed to a serious environmental hazard that causes chemical burns to the skin & damage to the lungs because of war crimes committed by pedophiles.

585 replies · 28K reposts · 76.4K likes · 1.9M views
Eric Herring reposted
Assface Unchained @assface_burner
Mission accomplished
[image]
717 replies · 35.6K reposts · 271.4K likes · 5.6M views
Eric Herring reposted
HatsOff @HatsOffff
Professor John Mearsheimer: From 1971 to 2021, the US murdered 38 million people
244 replies · 6.2K reposts · 14.4K likes · 1.1M views
Eric Herring reposted
Joanna Hardy-Susskind @Joanna__Hardy
David Lammy is doing a ‘true or false’ exercise with his court reforms. We can all play that game. Let’s start with this one: if you are falsely accused of sexual assault with a sentence threat of 18 months in prison, you will still have the safeguard of a jury? FALSE. 🪡 🧵
Ministry of Justice@MoJGovUK

It’s time to set the facts straight 👇

25 replies · 236 reposts · 600 likes · 68.1K views
Eric Herring reposted
Thursday @ennui365
[image]
24 replies · 1.2K reposts · 2.5K likes · 26.3K views
Eric Herring reposted
BladeoftheSun @BladeoftheS
Amos Goldberg, Professor of Genocide Studies at The Hebrew University in Jerusalem: "Yes, it is genocide. It's so difficult and painful to admit it, but we can no longer avoid this conclusion. Jewish history will henceforth be stained." Is there anyone more qualified and unbiased?
[image]
335 replies · 7.3K reposts · 12.5K likes · 271.2K views
Eric Herring reposted
Rutger Bregman @rcbregman
BOMBSHELL: Sam Altman, Marc Andreessen & Joe Lonsdale's $125M SuperPAC has a simple strategy. Destroy *anyone* who tries to regulate AI. They want to make an example of AI safety advocates so brutal that no politician dares touch the issue again.
[image]
158 replies · 3.3K reposts · 6.9K likes · 363.4K views
Eric Herring reposted
Shanaka Anslem Perera ⚡ @shanaka86
One day before the first bombs fell on Iran, the Pentagon designated Anthropic a supply chain risk to national security. The classification is reserved for foreign adversaries. The last company to receive it was Huawei.

The next morning, Anthropic’s Claude, running inside Palantir’s Maven platform on classified military servers, identified and prioritized over a thousand Iranian targets in the first twenty-four hours of Operation Epic Fury. What previously required days of human analysis was compressed into hours. The same artificial intelligence the Defense Secretary tried to ban on Thursday selected the targets his bombers hit on Friday. That is not a contradiction. That is the architecture of this war.

Three nations are building three separate AI kill chains in real time, each shaped by its own constraints, and none of them fully control what they have built.

On the American and Israeli side, Claude works alongside an Israeli system called Lavender that scores individual human targets, a companion called Gospel that generates structural target lists, and a tracker called Where’s Daddy that times strikes for when scored individuals are at known locations. Together they produced roughly nine hundred strike packages before the first sunrise. The speed compresses days of deliberation into hours of machine output. A commander approving targets at that tempo is not conducting the proportionality assessment that international humanitarian law requires. A human signature appears in the record. The deliberation it represents has been structurally eliminated by the velocity of the system presenting the options. On March 1, an estimated 165 female students were killed in a strike near an IRGC naval base in Minab. Neither the United States nor Israel has claimed responsibility. No AI targeting review has been announced.

On the Iranian side, the AI is primitive and strategically perfect. IRGC drones carry basic computer vision and Chinese BeiDou satellite navigation that resists American jamming, supplied under a twenty-five-year partnership. A twenty-thousand-dollar drone with enough machine intelligence to force the expenditure of a fifteen-million-dollar interceptor. Iran does not need AI that thinks. It needs AI that costs less than the missile that kills it.

Behind both, a third AI actor. MizarVision, a Shanghai satellite company assessed by Western analysts as an intelligence front, published free AI-annotated imagery of American military positions before the war began. F-22s in Israel. AWACS in Saudi Arabia. THAAD batteries in Jordan. Iran subsequently struck the THAAD radar at the published coordinates. The surveillance monopoly that gave American operations a structural advantage for decades was not defeated by a rival space programme. It was eliminated by commercial satellites costing less than a single interceptor.

Three nations. Three AI architectures. America compresses the kill chain from days to hours. Iran compresses the cost of attack below the cost of defense. China compresses the information advantage that made American power projection possible since 1945. And a school in Minab sits in the gap between machine speed and human accountability, ten years of satellite imagery showing it was a school, and nobody willing to say whose algorithm put it on the list. open.substack.com/pub/shanakaans…
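The cost asymmetry the thread describes is simple arithmetic. A minimal sketch, using the tweet's own figures ($20K drone against a $15M interceptor); the `exchange_ratio` function and constant names are invented for illustration:

```python
# Cost-exchange arithmetic behind the drone-vs-interceptor claim.
# Figures come from the tweet itself; this is illustration, not analysis.
def exchange_ratio(attacker_cost: float, defender_cost: float) -> float:
    """Dollars the defender spends per attacker dollar, per engagement."""
    return defender_cost / attacker_cost

DRONE = 20_000            # claimed cost of one IRGC drone
INTERCEPTOR = 15_000_000  # claimed cost of one interceptor

ratio = exchange_ratio(DRONE, INTERCEPTOR)
print(f"defender pays {ratio:.0f}x per engagement")  # prints "defender pays 750x per engagement"
```

At a 750:1 exchange ratio the defender's magazine is exhausted long before the attacker's budget, which is the whole of the thread's "AI that costs less than the missile that kills it" point.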
[two images]
71 replies · 914 reposts · 2K likes · 130.9K views
Eric Herring reposted
Sony Thăng @nxt888
Epstein did not reveal corruption. Everyone already knew corruption existed. What he revealed was something much worse. That elite depravity no longer needs secrecy to survive. That a system can be caught with blood on its hands and children in its basement, and still continue speaking the language of law, democracy, and public trust. Not because people believe it. Because disbelief no longer interrupts anything.
50 replies · 1.3K reposts · 3K likes · 34.4K views
Eric Herring reposted
Sony Thăng @nxt888
What makes this era so dangerous is that atrocity has become ambient. Not exceptional. Ambient. Like background music in a shopping mall. Always there. Always available. Rarely decisive. That is what empire learned. Not how to hide its crimes, but how to make them feel atmospherically inevitable. A war here. A rape ring there. A dead child here. A sanctions famine there. The point is not to make you approve. The point is to make you adapt.
34 replies · 896 reposts · 1.9K likes · 21.1K views