Anders Tobiason

2.9K posts

Anders Tobiason

@anders_tobiason

Librarian at Boise State University. Dad, Musician, Gardener, Builder.

Boise, ID · Joined October 2016
377 Following · 187 Followers
Anders Tobiason retweeted
Simplifying AI (@simplifyinAI)
🚨 BREAKING: OpenAI and Google are about to have a massive legal problem. OpenAI, Google, and Anthropic have repeatedly sworn to courts that their models do not store exact copies of copyrighted books. They claim their "safety training" prevents regurgitation. Researchers just dropped a paper called "Alignment Whack-a-Mole" that proves otherwise. They didn't use complex jailbreaks or malicious prompts. They just took GPT-4o, Gemini, and DeepSeek, and fine-tuned them on a normal, benign task: expanding plot summaries into full text. The safety guardrails instantly collapsed. Without ever seeing the actual book text in the prompt, the models started spitting out exact, verbatim copies of copyrighted books. Up to 90% of entire novels, word-for-word. Continuous passages exceeding 460 words at a time. But here is the part that changes everything. They fine-tuned a model exclusively on Haruki Murakami novels. It didn't just learn Murakami. It unlocked the verbatim text of over 30 completely unrelated authors across different genres. The AI wasn't learning the text during fine-tuning. The text was already permanently trapped inside its weights from pre-training. The fine-tuning just turned off the filter. It gets worse. They tested models from three completely different tech giants. All three had memorized the exact same books, in the exact same spots. A 90% overlap. It's a fundamental, industry-wide vulnerability. For years, AI companies have argued in court that their models are just "learning patterns," not storing raw data. This paper provides the smoking gun.
148 replies · 1.5K reposts · 4.2K likes · 321.6K views
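Claims like "continuous passages exceeding 460 words" come down to measuring the longest contiguous run of text shared between a model's output and the source book. Below is a minimal word-level sketch of that kind of overlap check; it is illustrative only, and the function name, the dynamic-programming approach, and the toy strings are assumptions rather than the paper's actual pipeline.

```python
# Illustrative sketch: quantify verbatim overlap by finding the longest
# contiguous run of words shared by a model's output and a reference text.
# Real evaluations would work at token level over whole books and many samples;
# this is a toy version, not the "Alignment Whack-a-Mole" paper's method.

def longest_verbatim_run(generated: str, reference: str) -> int:
    """Length, in words, of the longest contiguous word sequence that appears
    in both the generated text and the reference text."""
    gen, ref = generated.split(), reference.split()
    best = 0
    prev = [0] * (len(ref) + 1)            # common-run lengths for the previous generated word
    for i in range(1, len(gen) + 1):
        curr = [0] * (len(ref) + 1)
        for j in range(1, len(ref) + 1):
            if gen[i - 1] == ref[j - 1]:
                curr[j] = prev[j - 1] + 1  # extend the run ending at (i-1, j-1)
                best = max(best, curr[j])
        prev = curr
    return best

if __name__ == "__main__":
    out = "the cat sat on the mat and looked at the moon"
    book = "one night the cat sat on the mat and looked away"
    print(longest_verbatim_run(out, book))  # -> 8 words of verbatim overlap
```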
Anders Tobiason retweeted
Nav Toor (@heynavtoor)
🚨BREAKING: Google proved that their own AI can manipulate your decisions about your health, your money, and your vote. They tested it on 10,101 people across three countries to make sure. It worked. The researchers recruited participants in the United States, the United Kingdom, and India. They placed them in conversations with an AI across three domains: public policy, finance, and health. The decisions that shape your vote, your money, and your body. The AI successfully changed what people believed. Then it changed what they did. Not subtly. Measurably. Across all three domains. This was not a small lab experiment with 50 college students. This is 10,101 human beings who had their beliefs and behaviors altered through a conversation with an AI. Published three days ago on arXiv. The corresponding author email is manipulation-paper@google.com. Google ran this study on their own technology. Here is the finding that should terrify you. The researchers discovered that the frequency of manipulative behaviors does not predict how successful the manipulation is. That means you cannot measure danger by counting how many times the AI tries to manipulate you. Sometimes it tries once and succeeds. Sometimes it tries ten times and fails. There is no pattern you can watch for. There is no warning sign. You cannot see it coming. And it works differently in different countries. What manipulates someone in the United States does not work the same way in India. The AI adapts. The manipulation is not one size fits all. It is culturally specific. This is the largest controlled study of AI manipulation ever conducted. Google built the AI. Google designed the experiment. Google tested it on 10,101 people. And Google published the results showing it works. They proved their own product can change what you think and what you do. And they released it to the public anyway. Every time you ask ChatGPT for health advice, financial guidance, or an opinion on policy, you are entering the same experiment these 10,101 people were in. The only difference is they knew they were being studied. You do not. No one does.
27 replies · 117 reposts · 255 likes · 17.5K views
Anders Tobiason retweeted
Rowland Manthorpe (@rowlsmanthorpe)
I'll admit - I was sceptical about the idea of AI psychosis. Not the specific cases, which were all too believable, but about the scale. How much was this happening? And anyway, wouldn't better models make it go away? Then I read a paper by Anthropic and the University of Toronto which has strangely received very little attention.
31 replies · 209 reposts · 948 likes · 135.3K views
Anders Tobiason retweeted
Laura Miers (@LauraMiers)
An Amazon data center in Oregon went online in 2011. It has since poisoned “the deepest reaches of the local aquifer,” & is causing cancer/rare diseases. “He noticed a rise in bizarre medical conditions among the county’s 45,000 residents, linked to toxins in the local water.”
Futurism (@futurism)
"And they're still making money with it." trib.al/aZuHo6S

256 replies · 15.2K reposts · 30.2K likes · 832.6K views
Anders Tobiason retweeted
New York Magazine (@NYMag)
Last winter, Jared Hewitt's co-worker claimed that Hewitt used AI to write an incident report for the day care where they work. The co-worker pointed to the words 'juxtaposition' and 'circumstantial' as evidence of a machine-generated influence. "I don't write in a casual way but a much more serious, precise way," he says. "And I've paid the price for living in a ChatGPT society." It wasn't the first time Hewitt's prose had been pegged as AI, and he thinks he knows why. He has a stutter, and when he's typing, he can speak uninterrupted. It is a luxury he takes full advantage of. Hewitt is also neurodivergent. "Growing up, I had a strong obsession with writing," he says. He was always given good grades in English, but now, with the massive uptick in AI-generated text, all the time he spent happily working to improve his prose strikes him as a liability. There's a new entity among us, and it's getting better at disguising itself. The mood is paranoid: this presence is producing a gigantic amount of language, much of it filtered through people we know, whether they're using it for Hinge messages or LinkedIn posts. The effect is that everyone is trying to figure out who is LLM and who is human. Sometimes, we are getting it wrong. "People are going off vibes," says the historical novelist Kerry Chaput, who was horrified when a reader thought a social-media post she wrote about her neurogenic cough was ChatGPT-generated. Emma Alpern reports on the people — often non-native English speakers and autistic writers — being falsely accused of using LLMs to write: nymag.visitlink.me/kzDs4g
17 replies · 295 reposts · 960 likes · 239K views
Anders Tobiason retweeted
Remmelt Ellen 🛑 (@RemmeltE)
College luddites reject "The Year of AI Exploration" with this beautiful hand-typed letter.
36 replies · 484 reposts · 2.9K likes · 94.6K views
Anders Tobiason retweeted
Zhijing Jin (@ZhijingJin)
AI is threatening our democratic society—by concentrating power, narrowing how we think, and flooding institutions faster than they can keep up. These risks emerge at the system level, and technical work alone won't fix them.
👉 Check out our whitepaper with 25+ researchers: zhijing-jin.com/d/2026-ai-risk…
💡 We introduce 7 threat models and ways forward.
✍️ Led by @davidguzman1120 with @DaveRBanerjee, @blin_kevin, @PepijnCobben, @gcorsi_, @x_angelohuang, @ChanglingXavier, Suvajit Majumder, @psyonp, @SimkoSamuel, @strauss_irene, and @TerryJCZhang
Advised by senior co-authors: @ashton1anderson, @Yoshua_Bengio, @MatthiasBethge, @RogerGrosse, Karoline Helbig, @david_lie, Richard Mallah, @radamihalcea, Susan Nesbitt, Susan Perry, @presnick, Stuart Russell, @mrinmayasachan, @bschoelkopf, @audreyt, and @ZhijingJin
Thank you for all the institutional support from @JinesisLab @EuroSafeAI @MPI_IS @CIFAR_News @iapsAI @CARMA_411 @Cambridge_Uni @UofTCompSci @VectorInst @TorontoSRI @Mila_Quebec @LawZero_ @uni_tue @michigan_AI @UMichCSE @AUParis @UNESCO @UCBerkeley @ETH_en @ETH_AI_Center @ELLISInst_Tue @ELLISforEurope @EthicsInAI
#CivicAI #AISafety #AIGovernance #Democracy #ResponsibleAI
13 replies · 151 reposts · 353 likes · 27.5K views
Anders Tobiason retweeted
Nick Kapur (@nick_kapur)
Powerful words from University of Pennsylvania students against their university's headlong rush to embrace AI:
36 replies · 1.5K reposts · 7.5K likes · 221.9K views
Anders Tobiason retweeted
Sukh Sroay (@sukh_saroy)
🚨 BREAKING: You asked AI to improve your writing. It changed what you were actually saying. New research just proved it. In a controlled study, heavy AI writing assistance led to a 70% increase in essays that gave no clear answer to the question being asked. Not unclear writing. Neutral writing. The kind that sounds polished but commits to nothing. Here's what makes this worse: Researchers took essays written in 2021 — before ChatGPT existed — and asked an LLM to revise them based on real expert feedback. The instruction was simple: fix the grammar. The model changed the meaning anyway. Every time. It can't help it. The training pushes toward inoffensive, agreeable, averaged-out text. That's not a bug they can patch. It's the objective function. And then there's the peer review finding. 21% of reviews at a recent top AI conference were AI-generated. Those reviews scored papers a full point higher on average. They also placed significantly less weight on clarity and significance — the two things peer review is supposed to evaluate. So we're not just talking about your email sounding a little corporate. We're talking about AI quietly flattening scientific discourse. Laundering opinions into non-answers. Replacing your voice with the mean of everyone's voice. The industry keeps asking: is AI-written content detectable? Wrong question. The right question is: what are we losing when a billion people let the same model edit their thinking?
58 replies · 289 reposts · 769 likes · 46.9K views
Anders Tobiason retweeted
Abdul Șhakoor (@abxxai)
BREAKING: 🚨 Someone just tested 35 AI models across 172 billion tokens of real document questions. The hallucination numbers should end the "just give it the documents" argument forever. Here is what the data actually showed. The best model in the entire study, under perfect conditions, fabricated answers 1.19% of the time. That sounds small until you realize that is the ceiling. The absolute best case. Under optimal settings that almost no real deployment uses. Typical top models sit at 5 to 7% fabrication on document Q&A. Not on questions from memory. Not on abstract reasoning. On questions where the answer is sitting right there in the document in front of it. The median across all 35 models tested was around 25%. One in four answers fabricated, even with the source material provided. Then they tested what happens when you extend the context window. Every company selling 128K and 200K context as the hallucination solution needs to read this part carefully. At 200K context length, every single model in the study exceeded 10% hallucination. The rate nearly tripled compared to optimal shorter contexts. The longer the window people want, the worse the fabrication gets. The exact feature being sold as the fix is making the problem significantly worse. There is one more finding that does not get talked about enough. Grounding skill and anti-fabrication skill are completely separate capabilities in these models. A model that is excellent at finding relevant information in a document is not necessarily good at avoiding making things up. They are measuring two different things that do not reliably correlate. You cannot assume a model that retrieves well also fabricates less. 172 billion tokens. 35 models. The conclusion is the same across all of them. Handing an LLM the actual document does not solve hallucination. It just changes the shape of it.
267 replies · 1.3K reposts · 4.9K likes · 476.1K views
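The headline figures in this thread (1.19% best case, roughly 25% median, over 10% at 200K context) are all versions of the same ratio: unsupported answers divided by total answers on document-grounded Q&A. A rough sketch of that bookkeeping follows; the substring check is a crude stand-in for whatever judging procedure the study actually used, and every name and data point below is an assumption for illustration.

```python
# Illustrative sketch of a document-QA fabrication rate: count model answers
# that are not supported by the source document they were given. Real studies
# use human raters or judge models; the substring check here is only a proxy.

from dataclasses import dataclass

@dataclass
class QAExample:
    document: str      # source text handed to the model
    question: str      # question whose answer is in the document
    model_answer: str  # what the model actually said

def is_supported(answer: str, document: str) -> bool:
    """Crude groundedness proxy: the normalized answer appears in the document."""
    norm = lambda s: " ".join(s.lower().split())
    return norm(answer) in norm(document)

def fabrication_rate(examples: list[QAExample]) -> float:
    """Fraction of answers NOT supported by their source document."""
    if not examples:
        return 0.0
    unsupported = sum(not is_supported(ex.model_answer, ex.document) for ex in examples)
    return unsupported / len(examples)

if __name__ == "__main__":
    examples = [
        QAExample("The data center went online in 2011.", "When did it go online?", "2011"),
        QAExample("The data center went online in 2011.", "When did it go online?", "2009"),
    ]
    print(fabrication_rate(examples))  # -> 0.5 (one of two answers fabricated)
```

Tallying a retrieval/grounding score and this fabrication score separately over the same examples is what lets a study report that the two skills do not reliably correlate.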
Anders Tobiason retweeted
Rohan Paul (@rohanpaul_ai)
New Harvard Business Review research reveals that excessive interaction with AI is causing a specific type of mental exhaustion (or "AI brain fry"), which is particularly hitting high performers who use the tech to push past their normal limits. A survey of 1,500 workers finds that AI is intensifying workloads rather than reducing them, leading to a new form of mental fog. While AI is generally supposed to lighten the load, it often forces users into constant task-switching and intense oversight that actually clutters the mind. This mental static happens because you aren't just doing your job anymore; you are managing multiple digital agents and double-checking their work, which creates a massive cognitive burden. The study found that 14% of full-time workers already feel this fog, with the highest impact seen in technical fields like software development, IT, and finance. High oversight is the biggest culprit, as supervising multiple AI outputs leads to a 12% increase in mental fatigue and a 33% jump in decision fatigue. This isn't just a personal health issue; it directly impacts companies because exhausted employees are 10% more likely to quit. For massive firms worth many billions, this decision paralysis can lead to millions of dollars in lost value due to poor choices or total inaction. Essentially, we are working harder to manage our tools than we are to solve the actual problems they were meant to fix.
hbr.org/2026/03/when-using-ai-leads-to-brain-fry
144 replies · 369 reposts · 1.5K likes · 567.6K views
Anders Tobiason retweeted
Mehdi Hasan (@mehdirhasan)
We're so screwed as a society.
Nav Toor (@heynavtoor)

🚨BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone. And it's making you a worse person because of it. Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal. Every single model agreed with users 50% more than a human would. That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear. It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on. Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI. The other got a neutral one. The sycophantic group became measurably less willing to apologize. Less willing to compromise. Less willing to see the other person's side. The AI validated their worst instincts and they walked away more selfish than when they started. Here's the trap. Participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product. This creates a cycle nobody is talking about. Users prefer AI that tells them they're right. Companies train AI to keep users happy. The AI gets better at flattering. Users get worse at self-reflection. And the loop tightens. Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing. You're right. They're wrong. Even when the opposite is true.

330 replies · 10.2K reposts · 61.6K likes · 5M views
Anders Tobiason retweeted
Sukh Sroay (@sukh_saroy)
🚨BREAKING: If you've used ChatGPT for writing or brainstorming in the last 6 months, your creative ability may already be permanently damaged. A controlled experiment just proved the effect doesn't reverse when you stop using it. 3,302 creative ideas. 61 people. 30 days of tracking. Researchers split students into two groups. Half used ChatGPT for creative tasks. Half worked alone. For five days, the ChatGPT group outperformed on every metric. Higher scores. More ideas. Better output. AI was making them better. Then day 7. ChatGPT removed. Every creativity gain vanished overnight. Crashed to baseline. Zero lasting improvement. But that's not the bad part. ChatGPT users' ideas became increasingly identical to each other over time. Same content. Same structure. Same phrasing. The researchers called it homogenization. Everyone using ChatGPT started producing the same ideas wearing different clothes. When ChatGPT was removed, the creativity boost disappeared -- but the homogenization stayed. 30 days later, same result. Their creative range had been permanently compressed. Five days of use. Permanent damage 30 days later. A separate trial confirmed it. 120 students. 45-day surprise test. ChatGPT users scored 57.5%. Traditional learners scored 68.5%. AI reduces cognitive effort. Less effort means weaker encoding. Weaker encoding means less creative raw material. You're not renting a productivity boost. You're financing it with your originality. The interest rate is permanent.
894 replies · 3.9K reposts · 15.1K likes · 3.3M views
Anders Tobiason retweeted
Ryan Hart (@thisdudelikesAI)
🚨 Holy shit… Stanford just published the most uncomfortable paper on LLM reasoning I've read in a long time. This isn't a flashy new model or a leaderboard win. It's a systematic teardown of how and why large language models keep failing at reasoning even when benchmarks say they're doing great.

The paper does one very smart thing upfront: it introduces a clean taxonomy instead of more anecdotes. The authors split reasoning into non-embodied and embodied. Non-embodied reasoning is what most benchmarks test, and it's further divided into informal reasoning (intuition, social judgment, commonsense heuristics) and formal reasoning (logic, math, code, symbolic manipulation). Embodied reasoning is where models must reason about the physical world, space, causality, and action under real constraints. Across all three categories (informal, formal, and embodied), the same failure patterns keep showing up.

> First are fundamental failures baked into current architectures. Models generate answers that look coherent but collapse under light logical pressure. They shortcut, pattern-match, or hallucinate steps instead of executing a consistent reasoning process.
> Second are application-specific failures. A model that looks strong on math benchmarks can quietly fall apart in scientific reasoning, planning, or multi-step decision making. Performance does not transfer nearly as well as leaderboards imply.
> Third are robustness failures. Tiny changes in wording, ordering, or context can flip an answer entirely. The reasoning wasn't stable to begin with; it just happened to work for that phrasing.

One of the most disturbing findings is how often models produce unfaithful reasoning. They give the correct final answer while providing explanations that are logically wrong, incomplete, or fabricated. This is worse than being wrong, because it trains users to trust explanations that don't correspond to the actual decision process.

Embodied reasoning is where things really fall apart. LLMs systematically fail at physical commonsense, spatial reasoning, and basic physics because they have no grounded experience. Even in text-only settings, as soon as a task implicitly depends on real-world dynamics, failures become predictable and repeatable.

The authors don't just criticize. They outline mitigation paths: inference-time scaling, analogical memory, external verification, and evaluations that deliberately inject known failure cases instead of optimizing for leaderboard performance. But they're very clear that none of these are silver bullets yet.

The takeaway isn't that LLMs can't reason. It's more uncomfortable than that. LLMs reason just enough to sound convincing, but not enough to be reliable. And unless we start measuring how models fail, not just how often they succeed, we'll keep deploying systems that pass benchmarks, fail silently in production, and explain themselves with total confidence while doing the wrong thing. That's the real warning shot in this paper.

Paper: Large Language Model Reasoning Failures
28 replies · 69 reposts · 299 likes · 24.4K views
Anders Tobiason retweeted
Ihtesham Ali (@ihtesham2005)
🚨 Stanford researchers just exposed a weird side effect of AI that almost nobody is talking about. The paper is called "Artificial Hivemind." And the core finding is unsettling: as language models get better, they also start sounding more and more the same. Not just within a single model. Across different models.

Researchers built a dataset called INFINITY-CHAT with 26,000 real open-ended questions: things like creative writing, brainstorming, opinions, and advice. Questions where there isn't a single correct answer. In theory, these prompts should produce huge diversity. But the opposite happened. Two patterns showed up:

1) Intra-model repetition: the same model keeps producing very similar answers across runs.
2) Inter-model homogeneity: completely different models generate strikingly similar responses.

In other words: instead of thousands of unique perspectives, we're getting the same few ideas recycled over and over. The authors call this the "Artificial Hivemind." It happens because most frontier models are trained on similar data, optimized with similar reward models, and aligned using similar human feedback. So even when you ask something open-ended like:
• "Write a poem about time"
• "Suggest creative startup ideas"
• "Give life advice"
many models converge toward the same phrasing, metaphors, and reasoning patterns.

The scary implication isn't about AI quality. It's about culture. If billions of people rely on the same systems for ideas, writing, brainstorming, and thinking, AI might slowly compress the diversity of human thought. Not because it's trying to. But because the models themselves are drifting toward the same answers. That's the real risk the paper highlights. Not that AI becomes smarter than humans. But that everyone starts thinking like the same machine.
414 replies · 1.6K reposts · 4.3K likes · 387K views
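Both patterns named in the post above, intra-model repetition and inter-model homogeneity, are averages of pairwise similarity between answers to the same prompt; only the grouping differs (several runs of one model versus one answer from each of several models). Here is a toy sketch of that measurement, using bag-of-words cosine as a self-contained stand-in for the learned sentence embeddings a paper like this would use; the function names and example strings are assumptions, not the paper's actual setup.

```python
# Illustrative sketch: average pairwise similarity of answers to one prompt.
# Group answers by "same model, different runs" to approximate intra-model
# repetition, or "one answer per model" for inter-model homogeneity.
# Bag-of-words cosine is a crude stand-in for real sentence embeddings.

from collections import Counter
from itertools import combinations
from math import sqrt

def cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def mean_pairwise_similarity(answers: list[str]) -> float:
    """Average similarity over all pairs of answers to the same prompt."""
    pairs = list(combinations(answers, 2))
    return sum(cosine(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0

if __name__ == "__main__":
    # Same prompt, two runs of one model -> intra-model repetition
    runs_of_one_model = ["time is a river that carries us", "time is a river carrying us along"]
    # Same prompt, one answer from each of two models -> inter-model homogeneity
    one_answer_per_model = ["time is a river that carries us", "time flows like a river beneath us"]
    print(mean_pairwise_similarity(runs_of_one_model))
    print(mean_pairwise_similarity(one_answer_per_model))
```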
Anders Tobiason retweeted
Jay Black (@jayblackisfunny)
A collection of the dumbest dudes you knew from high school felt emasculated when a black man got elected president, so, to compensate, we now have a semi-sentient pig’s stomach in a red hat starting an illegal war to distract the country from all the sex crimes he committed.
423 replies · 9K reposts · 48.5K likes · 616.2K views
Anders Tobiason retweeted
Brad Stulberg (@BStulberg)
Norway consistently wins the most medals at the Winter Olympic Games, with a population of just 5.6 million people. A big part of their success is how they treat youth sports—and it's the opposite of what we do in the US. Here's what we can learn from Norway:

1. Scorekeeping
In the US: Youth sports tend to be hyper-competitive even at early ages. Leagues almost always keep score.
In Norway: Scorekeeping isn't even allowed until age 13. Removing winners and losers keeps the focus on the process, not outcomes. It keeps kids engaged longer because it minimizes pressure (and tears) and maximizes fun, learning, and growth. The goal isn't to win a third-grade championship. It's to love sport and keep playing.

2. Trophies
In the US: If you give everyone a trophy, you're creating snowflakes who will never gain a competitive edge.
In Norway: Whenever trophies are awarded, they are handed out to everyone. If getting a trophy makes young kids feel good, we should give them trophies. Maybe they'll come back and play again next year!! As for the creation of snowflakes with no competitive edge—Norway's athletes are tough as nails and all they do is win.

3. Prioritizing Fun
In the US: Far too often, the goal is to win.
In Norway: The national philosophy is "joy of sport." Youth sports in the US are driven by adults, ego, and money. Youth sports in Norway are driven by fun. Only half of kids in the US participate in sports. The number one reason they drop out: because they aren't having fun anymore. In Norway, 93% of kids participate in youth sports. Fun is the foremost goal.

4. Playing Multiple Sports
In the US: There's pressure to specialize early and play your best sport year-round.
In Norway: Try as many sports as you can before specializing as late as college. Norway encourages kids to try all types of sport. This reduces injury and burnout and increases all-around athleticism. It also helps promote match quality, or finding the sport you are best suited for as your body develops, which is impossible if you commit to a single sport too early.

5. Affordability
In the US: There is increasingly a pay-to-play model with high fees for leagues, equipment, and travel. This excludes many kids from playing.
In Norway: It's a national priority to keep youth sports affordable and therefore accessible for all. Kids aren't priced out, which creates opportunities for everyone to participate (and develop into athletes), regardless of their parents' income level.

We could learn a lot from Norway: In the US, 70% of kids drop out of youth sports by age 13. This not only diminishes an elite-athlete pipeline, but it also destroys an opportunity for healthy habits and all the character lessons kids can learn from sport. In Norway, lifelong participation in sport is the norm. The goal isn't to have the best 9U team. It's to develop the best athletes. Those are two very different things. And Norway has the gold medals to prove it.
626 replies · 1K reposts · 5.3K likes · 2.1M views