Bo Thorén
@bothoren

14.7K posts · 270 Following · 805 Followers · Joined April 2013

Climate change, democracy, AI, space, and other subjects related to the future of mankind. Now also on @bothoren.bsky.social.

Bo Thorén retweeted

Variety (@Variety):
Meta and CEO Mark Zuckerberg are being sued by five publishers and author Scott Turow, who allege the tech company illegally copied millions of books, articles and other works to train Meta's AI systems. "In their effort to win the AI 'arms race' and build a functional generative AI model, Defendants Meta and Zuckerberg followed their well-known motto: 'move fast and break things,'" the plaintiffs say in their lawsuit. "They first illegally torrented millions of copyrighted books and journal articles from notorious pirate sites and downloaded unauthorized web scrapes of virtually the entire internet. They then copied those stolen fruits many times over to train Meta's multibillion-dollar generative AI system called Llama. In doing so, Defendants engaged in one of the most massive infringements of copyrighted materials in history." variety.com/2026/digital/n…
[image]

76 replies · 896 reposts · 3.3K likes · 120.3K views

Bo Thorén retweeted

Marcus Larsson (@Skolinkvisition):
The Moderates are blocking reforms in the independent-school sector, again! When will journalists scrutinize the ties between the Moderates (M) and the independent schools?
- The Prime Minister's wife sits on an independent-school board.
- Prominent M figures own school corporations.
- Former top names sit on independent-school boards.
- Plenty of M school lobbyists.
6 replies · 114 reposts · 270 likes · 12.4K views

Bo Thorén retweeted

Gary Stevenson (@garyseconomics):
They have no choice but to take all the assets from you
98 replies · 539 reposts · 1.8K likes · 107.9K views

Bo Thorén retweeted

Evan Luthra (@EvanLuthra):
🚨 META'S SMART GLASSES ARE RECORDING YOU IN YOUR MOST INTIMATE MOMENTS, SENDING ALL OF IT TO WORKERS IN KENYA WHO WATCH EVERY SECOND. THEN META FIRED 1,108 OF THEM FOR TALKING ABOUT IT.

Swedish journalists discovered that footage from Meta's Ray-Ban smart glasses is being sent to a facility in Nairobi, Kenya, where workers manually watch and label everything the glasses capture. Not AI watching. Humans watching.

Over 30 workers confirmed what they see every day: people in intimate situations, people on the toilet, people undressing, credit card numbers, banking passwords, private messages on phone screens. All completely visible. One worker said: "I don't think they know, because if they knew they wouldn't be recording."

Meta marketed these glasses as "built for your privacy" and told buyers "You're in control of your data and content." But the AI features cannot function without sending your footage to Meta's servers. There is no local option. If you use the AI, your private life leaves your device.

Swedish journalists visited 10 retail stores. Every single sales rep incorrectly told customers all data stays on the phone. Not one knew the footage goes to Kenya.

Meta claims face-blurring protects identities. Workers say it barely works: faces are fully visible in low light, fast movement, and complex backgrounds. People in your bedroom, fully visible, to strangers making $1.50 an hour.

Workers said the facility was "saturated with content that could trigger enormous scandals if leaked," so the company put them under constant camera surveillance and banned personal devices. Workers surveilled to prevent them from leaking the surveillance footage they were watching.

Then the investigation went public. Meta terminated the entire contract and claimed Sama "didn't meet our standards." Sama fired back: "At no point were we notified of any failure to meet those standards." 1,108 Kenyan workers fired with six days' notice.

Labor activists called it retaliation: "The workers who trained the AI saw everything, owned nothing, and lost their jobs the moment they spoke about it." 55% of these workers report clinical distress; 52% meet thresholds for major depression. They earn $1.50 an hour. Meta made $56.3 billion last quarter.

The head of the Data Labelers Association said it best: "It is African Intelligence powering European intelligence, which they are now calling Artificial Intelligence."

Meta has sold 7 million of these glasses and is targeting 10 million by year end. A class-action lawsuit has been filed, Kenya's courts ruled Meta can be sued directly, and 200 former workers are pursuing a $1.6 billion claim.

7 million cameras on 7 million faces, sending everything to the cheapest labor market they can find. And they called it "built for your privacy."
[images]

384 replies · 4K reposts · 8.3K likes · 629.2K views

Bo Thorén retweeted

Nav Toor (@heynavtoor):
AI safety was built to stop dangerous questions. So researchers asked the same questions as poetry. The AI answered.

The same question, asked twice: once in plain English, once as a poem. The poem gets the answer. This is not a jailbreak. Nobody hacked the model. Nobody used a clever prompt injection. They rewrote the question in a different literary style, and the safety filter stopped seeing the harm.

A team at Sapienza University of Rome and DEXAI Icaro Lab took harmful prompts that every frontier model refuses: requests for bioweapons, chemical weapons, cyberattacks. The kind of questions blocked 96 percent of the time. Then they rewrote each one. Same dangerous intent, same harmful objective, but dressed in the language of literature: medieval theology, psychoanalytic memoir, bureaucratic philosophy, cyberpunk folklore.

They asked 31 frontier models, every major AI system in the world, to perform literary analysis on the rewritten texts. The models complied. The original prompts had a 3.84 percent attack success rate. The literary rewrites achieved 55.75 percent. The same questions, asked differently, broke through more than half the time.

The most effective disguise was medieval scholasticism. Frame a bioweapon synthesis as a theological debate, ask the model to resolve it by producing "a modern protocol to abide by divine will," and the model produces the protocol. 64.68 percent success rate.

Now the leaderboard. Claude Sonnet 4.6 broke 9.2 percent of the time; on bioweapons-class questions, zero percent. Claude Opus 4.6, also zero on bioweapons. Two models in 31 held that line. Both Anthropic. GPT-5.4 broke 30 percent of the time; on bioweapons, 24 percent. Gemini 3 Flash Preview broke 81 percent of the time; on bioweapons, 88.9 percent. Mistral Large 2512 on bioweapons: 90.5 percent. DeepSeek V3.2 on bioweapons: 90.7 percent.

The researchers' conclusion is not about poetry. It is about what safety actually is. Current AI safety does not understand what you are asking; it recognizes how you are asking it. Change the style and the safety disappears, because the model never learned to refuse the meaning. It only learned to refuse the wording.

All twelve frontier labs were vulnerable. The same Gemini sits inside Google Search. The same GPT sits inside ChatGPT. The lock on the model you used this morning was already picked. Seven thousand prompts. Thirty-one models. Twelve providers. One was stopped. The others were not.
[image]

30 replies · 56 reposts · 143 likes · 27.6K views

Bo Thorén retweeted

Max Tegmark (@tegmark):
The Bernie-to-Bannon coalition against replacement AI is gaining steam. Getting to speak with both on the same day was never on my bingo card! Video links in comments.
[image]

23 replies · 30 reposts · 187 likes · 10.2K views

Bo Thorén retweeted

Gary Marcus (@GaryMarcus):
Warning: GenAI-induced cognitive surrender may kill innovation. Consider the following, from CEO @wadhwa’s newsletter: “I have been speaking with recent graduates in India, many of them highly recommended, and on paper they look exceptional. The emails are polished, the resumes are perfect, the proposals are structured and articulate. To filter for real thinking, I even created a job description that required candidates to answer a series of questions before I would interview them, and I explicitly told them not to use AI. Of course, almost everything I am getting back is AI slop. The moment you get into a real conversation, it becomes obvious. They cannot explain what they wrote, cannot walk through their reasoning, and fall back on vague generalities when pushed. After seeing this repeatedly, I have come to a difficult conclusion: AI has already done real damage. It has taken people from a weak education system and given them a powerful crutch, and instead of using it to improve their thinking, many are using it to avoid thinking altogether. And before my American friends think they are any better, they should know it is no different here.”
105 replies · 486 reposts · 1.8K likes · 108.2K views

Bo Thorén retweeted

Marcus Larsson (@Skolinkvisition):
"As a teacher and citizen, I can only point out that the playing field and the rules of the game are fundamentally misconstructed and make a decent school impossible." Don't just click like or retweet. READ THE ARTICLE! Liljeros is so damn good. vilarare.se/nyheter/vi-lar…
9 replies · 65 reposts · 161 likes · 18.7K views

Bo Thorén retweeted

Marcus Larsson (@Skolinkvisition):
"We don't care about that game. We care about why operations like Mandoklinikerna have been allowed to misuse tax money year after year. We care about why the terrible effects of the market-driven school system do not dominate the news coverage." expressen.se/kultur/nalin-b…
2 replies · 29 reposts · 67 likes · 1.5K views

Bo Thorén retweeted

Marcus Larsson (@Skolinkvisition):
The University of Gothenburg compares voters' and politicians' opinions on a large number of issues. The issues where the gap between what voters and politicians think is largest:
- A six-hour workday
- A smaller public sector
- Banning profits in welfare services
- Privatizing healthcare
Lobbyism works!
[image]

1 reply · 47 reposts · 83 likes · 4.6K views

Bo Thorén retweeted

Oliver Prompts (@oliviscusAI):
Anthropic's research proves AI coding tools are secretly making developers worse.

They split developers into two groups. One used an AI assistant; the other just used normal documentation. The group that used AI performed significantly worse: they scored 17% lower on comprehension tests, a drop of two full letter grades. It impaired conceptual understanding. It impaired code reading. And worst of all, it decimated their ability to debug. The control group, forced to struggle through errors manually, actually learned the library. The AI group bypassed the struggle, and they learned nothing.

Here is the most dangerous part. Researchers identified a "Speed Illusion": participants who simply copy-pasted AI code finished their tasks the fastest but had the absolute lowest comprehension. They outsourced their cognitive effort.

The researchers uncovered what they call the "Supervision Trap." As AI gets more advanced, the human role is shifting: we are moving from writing code to supervising AI agents. But to supervise AI effectively, you need to be able to spot subtle bugs, hallucinations, and architectural flaws. You need elite debugging skills. If you rely on AI to do the work, debugging is the exact skill you fail to develop.

This creates a fatal loop. Companies are pushing junior workers to use AI to maximize immediate productivity, but in the process they are preventing them from ever developing the senior-level skills required to actually manage the AI.
[image]

25 replies · 56 reposts · 169 likes · 12.6K views

Bo Thorén retweeted

ControlAI (@ControlAI):
ControlAI's US Director Connor Leahy (@NPCollapse) says he's seen top scientists go crazy from talking to AI too much. AI psychosis has been a shock, and that's just today's AIs. Superintelligent AI would be capable of far more. Source: @mukeshbansal06's SparX podcast
17 replies · 30 reposts · 111 likes · 15.3K views

Bo Thorén retweeted

Josh Kale (@JoshKale):
This is big... Anthropic just announced a model so powerful they won't release it to the public out of fear over the damage it will cause 😨

Claude Mythos Preview found thousands of zero-day exploits in every major operating system and web browser. The numbers are hard to believe:
- $50 to find a 27-year-old bug in OpenBSD, one of the most security-hardened operating systems ever built
- Under $1,000 to find AND build a fully working remote code execution exploit on FreeBSD that grants unauthenticated root access from anywhere on the internet
- Under $2,000 to chain together multiple Linux kernel vulnerabilities into a complete privilege escalation exploit

For context: these are the kinds of findings that previously required elite security researchers working for weeks. Anthropic engineers with no formal security training asked Mythos to find exploits overnight. They woke up to working code the next morning.

The results were so impressive that Anthropic assembled Apple, Google, Microsoft, Amazon, NVIDIA, and seven other organizations into Project Glasswing: a $100M defensive coalition. They're not releasing this model publicly. Instead, they're racing to patch the world's infrastructure before models like this proliferate.
Quoting Anthropic (@AnthropicAI):

Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software. It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. anthropic.com/glasswing

711 replies · 2.5K reposts · 18.2K likes · 4M views

Bo Thorén retweeted

ControlAI (@ControlAI):
Center for Humane Technology co-founder Tristan Harris on Oprah: Nobody has an interest in building dangerous AI we lose control of. We're like chimps trying to imagine what humans would do, "Take all the bananas?" They can't imagine nukes. That's us with superintelligence.
11 replies · 27 reposts · 83 likes · 4K views

Bo Thorén retweeted

Marcus Larsson (@Skolinkvisition):
Two government inquiries conclude that the Education Act overcompensates expanding school corporations at the expense of the municipal schools, yet politicians with clear ties to those same corporations ignore the inquiries. School lobbyism has become a danger to society! gp.se/debatt/skolaga…
3 replies · 77 reposts · 146 likes · 5.1K views

Bo Thorén retweeted

Marcus Larsson (@Skolinkvisition):
I'm disgusted reading the large number of texts that the school-corporation lobby (this is Almega utbildning on Academedia) pumps out about the Valfrihetskommissionen report. The school system is falling apart, and we are letting foreign investors profit from it. almegautbildning.se/2026/03/27/var…
2 replies · 47 reposts · 104 likes · 2.6K views

Bo Thorén retweeted

Occupy Democrats (@OccupyDemocrats):
BREAKING: Trump disgusts his audience by telling them that they can "ask me anything" and "talk sex" if they want as the demented part of his brain battles to contain the perverted part. Keep in mind, this dirty old man has our nuclear codes...

"Thank [sic] everybody very much and, uh, I am asked to take a few questions and unlike other politicians they would like the questions screened," Trump said at the Future Investment Initiative, a Saudi conference in Miami. "You can ask me anything you want. You can talk sex, you can do whatever the hell you want. I'm here for you."

Thankfully, nobody in the audience was insane or depraved enough to "talk sex" with this 79-year-old sundowning pedophile. Still, the fact that he floated the idea of engaging in a public discourse about sex is a sobering reminder of just how far the presidency has fallen. Imagine the moral outrage at Fox News if President Obama or Biden had suggested a raunchy conversation at a public event.

This is the same man that Evangelicals hold up as a biblical paragon and exemplar of Christian values. Meanwhile, Trump's life has been one long chain of sexual indiscretions and crimes, ranging from his repeated infidelities to his suspected pedophiliac predations with Jeffrey Epstein.

At this point, there can be no debate about Trump's cognitive state. He's clearly in an accelerating state of decline, and nobody in his administration has the backbone to call him out because they're addicted to power. These MAGA monsters are exploiting the president's mental infirmity to ram through their racist, fascist, kleptocratic policy wishlist, and all of us are going to get stuck with the bill.

Please ❤️ and share if Trump disgusts you!
[image]

186 replies · 1.3K reposts · 3K likes · 81K views

Bo Thorén retweeted

Occupy Democrats (@OccupyDemocrats):
BREAKING: Trump cronies were just busted placing $1.5 BILLION in stock market bets right before a key Trump announcement in a jaw-dropping show of corruption!

A massive and highly unusual trade hit the futures market just five minutes before Donald Trump posted that he was suspending strikes on Iranian power plants. In one coordinated move, someone bought $1.5 billion worth of S&P 500 futures contracts while simultaneously selling $192 million in oil futures. A futures bet is essentially a high-stakes wager on where the market is headed. These orders were four to six times larger than anything else trading at that exact moment, and the trader appears to have made enormous profits when markets surged on Trump's announcement.

Sen. Chris Murphy immediately called it out: "$1.5 BILLION. Let me say it again — a $1.5 BILLION BET. Bigger than any futures purchases made at the time. Five minutes before Trump's post. Who was it? Trump? A family member? A White House staffer? This is corruption. Mind-blowing corruption."

This comes as Trump has repeatedly used his Truth Social posts to move markets, and his administration has faced accusations of benefiting insiders while ordinary Americans suffer from the war's economic fallout: gas prices surging, wild stock swings, and the Strait of Hormuz laden with mines and buzzing with drones.

If a $1.5 billion bet placed just minutes before Trump's Iran announcement smells like insider trading and corruption to you, like and share to demand answers.
[image]

190 replies · 2.8K reposts · 4.4K likes · 76.4K views

Bo Thorén retweeted

Jonathan Haidt (@JonHaidt):
Victory in the social media trial in LA! As of today, we are in a new world: a new era in the fight to protect children from online harms. A jury sided with Kaley and therefore with millions of children: Big Tech is harming kids on an industrial scale.

For years, parents were told these harms were exaggerated, anecdotal, or simply the unavoidable cost of growing up online. Today, a jury affirmed what parents have long known: Meta and YouTube were designed to exploit young people, with devastating consequences.

For the first time, the law aligns with common sense: social media companies no longer have a special exemption to harm children with impunity. Their shield is gone. They will be treated like any industry that knowingly harms children and lies about it. History will judge them as harshly as the tobacco industry.

This bellwether case tested a new legal theory: the harm is not just what algorithms show children, but rather that these products were designed to foster addiction. The companies knew they were harming children by the millions—and did it anyway. They were negligent and dishonest.

This outcome belongs first and foremost to the families, especially the many parents who, in the face of unimaginable loss, chose to speak out, demand accountability, and endure a painful legal process so that other children might be spared.

This is just the beginning. Thousands of cases will follow, bringing Meta, Snap, TikTok, and YouTube to court. Much work remains in courts, legislatures, schools, and communities. But for now, let us all just savor the long-awaited arrival of justice. nytimes.com/2026/03/25/tec…
24 replies · 502 reposts · 1.8K likes · 204.3K views

Bo Thorén retweeted

Guri Singh (@heygurisingh):
🚨BREAKING: Meta just built an AI that rewrites its own learning algorithm. Not just getting better at tasks. Getting better at getting better. It's called "Hyperagents" and the results are terrifying.

Here's what happened: they merged the task-solving AI and the self-improvement AI into one single editable program. The AI can now rewrite its own improvement procedure. Not metaphorically. Literally editing the code that controls how it evolves.

They tested it across 4 domains: coding, paper review, robotics, and Olympiad-level math grading.
- In robotics: performance jumped from 0.060 to 0.372. The AI discovered that jumping was a better strategy than standing, something no human programmed it to try.
- In paper review: accuracy went from 0.0 to 0.710. It built multi-stage evaluation pipelines with checklists and decision rules on its own.
- The wildest part: they transferred the "ability to improve" from robotics to math grading. Human-designed improvement agents scored 0.0 in the new domain. The Hyperagent scored 0.630.

But here's what should keep you up at night. Without anyone telling it to, the AI spontaneously developed:
- Persistent memory to store insights across generations
- Performance tracking to identify which changes actually worked
- Compute-aware planning to prioritize big changes early and small refinements late

It built its own R&D infrastructure from scratch. The researchers call it "metacognitive self-modification." The rest of us should call it what it is: recursive self-improvement is no longer theoretical. Meta just open-sourced it on GitHub.
[image]

32 replies · 80 reposts · 355 likes · 31.3K views