Rob Cain

75.8K posts

@cain_rob

A Periodic Table of Swearing, Science Tidbits & Middle Class Picnic Chit Chat. Mastodon: [email protected]

UK · Joined March 2009
3K Following · 939 Followers
Alex Prompter
Alex Prompter@alex_prompter·
Everyone assumes LLMs are the future of AI. The permanent foundation. The layer everything else gets built on.

I'm not so sure. The historical parallel that fits best isn't the one most people want to hear.

LLMs are Edison's DC power grid:
→ Genuinely revolutionary
→ Commercially dominant
→ Solving real problems right now
→ But architecturally limited in ways that can't be patched

Right domain. Wrong architecture. And the evidence is already here.

Hallucination isn't a bug. It's the architecture. Researchers have formally proven that LLMs cannot learn all computable functions and will therefore inevitably hallucinate when used as general problem solvers. That's not a training data problem. That's math.

A separate paper demonstrated that hallucinations stem from the fundamental mathematical and logical structure of LLMs, making it impossible to eliminate them through architectural improvements, dataset enhancements, or fact-checking mechanisms.

And here's the part that really gets you: there's a direct link between hallucination and creativity in LLMs. It may be impossible to eliminate hallucination without impairing the model's most crucial capabilities.
→ The thing that makes LLMs creative is the same thing that makes them lie
→ Fix one, you break the other
→ That's not a tradeoff you engineer away. That's a design constraint.

DC power had the exact same structural problem. It couldn't transmit electricity over long distances. Not because the engineering was bad. Because the physics made it impossible. You needed AC. A fundamentally different approach.

The "AC power" of AI is already being built. And it has names. This isn't theoretical. People are already building the replacement architectures.

Yann LeCun left Meta and raised $1 billion to prove LLMs are a dead end. His AMI Labs raised $1.03 billion in seed funding at a $3.5 billion valuation in March 2026, making it the largest seed round in European history. His thesis is simple: LLMs predict the next word. That's not intelligence. That's autocomplete at scale. His core technology, JEPA, operates in latent space, learning abstract representations of reality rather than surface patterns. LeCun used a vivid analogy: using an LLM to understand the real world is like teaching someone to drive by just talking. A Turing Award winner didn't just write a paper about it. He quit his job and bet a billion dollars on it.

Mamba is proving transformers aren't the only game in town. Mamba achieves 5x higher throughput than Transformers, with compute scaling linearly in sequence length. Thanks to intensive research in 2023-2025, non-transformer architectures have reached parity with Transformers on key language benchmarks, and in some cases surpassed them.

Hybrid architectures are already shipping. By 2026, models built on hybrid transformer-SSM architectures can ingest hundreds of pages of text at once, far beyond vanilla GPT-3 or GPT-4. The alternatives aren't coming. They're here.

Meanwhile, look at what the industry is building to keep LLMs functional:
→ Agents (because the model can't verify its own outputs)
→ Tool use (because the model can't interact with the real world)
→ Reasoning chains (because the model can't reason natively)
→ RAG (because the model can't reliably recall facts)

These aren't features. These are workarounds. When you need that many patches, you're running longer DC power lines and wondering why the voltage keeps dropping.

Now the part everyone actually needs: which skills survive the transition?

When DC shifted to AC, some electrical engineers thrived and some went extinct. The ones who thrived understood circuits, load management, and power distribution at a fundamental level. Those principles worked on any architecture. The ones who didn't? They only knew DC-specific wiring. The same split is coming. And it's coming faster than people think.

Here are the skills that transfer no matter what replaces transformers:
→ Systems thinking for AI workflows. Breaking complex tasks into steps an AI can execute. This works whether the AI is a transformer, an SSM, JEPA, or something we haven't built yet. Architectures change. The need for structured task decomposition doesn't.
→ Evaluation and verification. Knowing whether AI output is right. LLMs have a "Self-Correction Blind Spot": they can recognize errors but lack the reasoning pathways to correct them. Whatever comes next will still need humans who can evaluate quality. This skill gets MORE valuable, not less.
→ Data literacy. Understanding what data an AI needs, how to structure it, and what's clean vs. noisy. Every AI architecture runs on data. Past, present, future. The people who understand data will always have leverage.
→ AI-augmented workflow design. Not "how to write a good prompt" but "how to redesign a business process so AI handles the right parts and humans handle the right parts." This is architecture-agnostic. It transfers to anything.
→ Domain expertise + AI fluency. The most powerful combination is stacking AI fluency on top of deep domain expertise. A lawyer who understands AI beats a prompt engineer who doesn't understand law. Every time. Regardless of what model they're using.
→ Clear problem definition. Prompt engineering is just one implementation of a deeper skill: translating human intent into machine-executable instructions. Whether that instruction is a prompt, an API call, a config file, or something that doesn't exist yet, the ability to define what you want is permanent.

And here's what DOESN'T transfer:
→ Memorizing specific model behaviors ("Claude does X, GPT does Y")
→ Platform-specific tricks that only work on one tool
→ Building your identity around a single product name
→ "Prompt engineer" as a job title instead of a thinking skill

The difference is simple:
→ Transferable skills = understanding WHY something works
→ Non-transferable skills = memorizing HOW a specific tool works

WHY survives paradigm shifts. HOW doesn't.

The bottom line: the principle behind LLMs is permanent. The architecture probably isn't. That's not bearish on AI. That's the most bullish take possible. It means the best is still ahead of us.

Use LLMs hard right now. Build with them. Ship on them. But build your skills around the PRINCIPLES, not the PRODUCTS:
→ Learn systems thinking, not just prompting
→ Learn evaluation, not just generation
→ Learn data literacy, not just tool literacy
→ Learn workflow design, not just model tricks
→ Stack domain expertise on top of AI fluency

The people who do this will thrive in the transformer era AND whatever comes after it.

Edison built a working power grid that lit up Manhattan. It was real, valuable, and changed the world. AC still replaced it.
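[Editor's aside: the thread's "linear scaling in sequence length" claim can be made concrete with a toy FLOP-count model. This is an illustration only, not from the post; the constants (`d_model`, `state_size`) are assumed values, and real throughput also depends on kernels and memory bandwidth, not just operation counts.]

```python
# Toy cost model contrasting self-attention with a Mamba-style state-space scan.
# Constants below are illustrative assumptions, not measured values.

def attention_cost(seq_len: int, d_model: int) -> int:
    """Self-attention compares every token with every other token: O(L^2 * d)."""
    return seq_len * seq_len * d_model

def ssm_scan_cost(seq_len: int, d_model: int, state_size: int = 16) -> int:
    """A selective-scan SSM updates a fixed-size state per token: O(L * d * n)."""
    return seq_len * d_model * state_size

if __name__ == "__main__":
    d = 1024
    for L in (1_000, 10_000, 100_000):
        ratio = attention_cost(L, d) / ssm_scan_cost(L, d)
        print(f"L={L:>7}: attention/SSM cost ratio = {ratio:,.0f}x")
```

The ratio grows as L/16 here, which is why the gap between the two architectures widens as contexts stretch to "hundreds of pages."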
Alex Prompter tweet media
84
100
466
45.2K
Rob Cain
Rob Cain@cain_rob·
@alex_prompter i agree with this POV. we are on a path of discovery & invention - this is not the end point (if there even is such a thing); we are but part way on a side road.
0
0
0
2
Rob Cain retweeted
SciTech Era
SciTech Era@SciTechera·
Reminder: AI just broke drug discovery speed. Scientists have developed a new AI system called LigandForge that can design protein-binding peptides 10,000× to 1,000,000× faster than current methods.

LigandForge also achieved 83% correlation with experimental binding affinity data, showing strong agreement with real-world measurements. It also generated 8,556 diverse peptide candidates, indicating the model isn't just copying known structures but exploring new designs.

AND NOTE: Peptides matter because they can control proteins directly, and proteins control almost everything in disease. Right now, we struggle to target many of them, which slows down cures. If we learn to design peptides fast with AI, we can quickly create precise treatments for things like cancer and targeted drug delivery, turning years of trial-and-error into a much faster loop.

we are living in the fastest SciTech acceleration era!
SciTech Era tweet media
SciTech Era@SciTechera

AI just broke drug discovery speed. Scientists have developed a new AI system called LigandForge that can design protein-binding peptides 10,000× to 1,000,000× faster than current methods 👀

Researchers used a single-pass discrete diffusion model that directly generates peptide sequences from a protein binding pocket, skipping slow steps like structure prediction and docking. A new study, published on bioRxiv, evaluated the system across ~150 protein targets and compared it to methods like BindCraft and BoltzGen.

Results were HUGE 👀! LigandForge achieved ~83% correlation with experimental binding affinity data, showing strong agreement with real-world measurements. It also generated 8,556 diverse peptide candidates, indicating the model isn't just copying known structures but exploring new designs.

And NOTE: Peptides matter because they can control proteins directly, and proteins control almost everything in disease. Right now, we struggle to target many of them, which slows down cures. If we learn to design peptides fast with AI, we can quickly create precise treatments for things like cancer and targeted drug delivery, turning years of trial-and-error into a much faster loop.

We are living in the fastest SciTech acceleration era..

2
17
47
1.7K
Rob Cain retweeted
People's Daily, China
The Lawa Hydropower Station, located on the upper reaches of the Jinsha River in southwest China, has successfully hoisted the 1,007-tonne rotor for its No.4 unit into place. With an installed capacity of 2 million kW, the station is expected to generate an average of 9 billion kWh of clean electricity annually upon completion, cutting CO2 emissions by 7.41 million tonnes each year.
People's Daily, China tweet media
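[Editor's aside: a quick sanity check of the figures in the post. This arithmetic is mine, not People's Daily's: 2 million kW running flat-out all year would produce 17.52 billion kWh, so 9 billion kWh implies a capacity factor of about 51%, plausible for a large hydro plant.]

```python
# Sanity check of the Lawa figures: implied capacity factor.
capacity_kw = 2_000_000         # installed capacity: 2 million kW (from the post)
annual_kwh = 9_000_000_000      # expected annual generation: 9 billion kWh (from the post)
hours_per_year = 8760

max_kwh = capacity_kw * hours_per_year      # 17.52 billion kWh at full output
capacity_factor = annual_kwh / max_kwh
print(f"Implied capacity factor: {capacity_factor:.0%}")  # prints "51%"
```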
1
8
30
2.3K
Rob Cain
Rob Cain@cain_rob·
@DakdaR22 Lavrov can go fuck his mother's corpse.
0
0
0
10
D.Radka, #NAFO 🇨🇿🤝🇺🇦
Lavrov asks the West for security guarantees for Russia from the "Kyiv regime"... in the fifth year of Russia's ongoing "special operation" - "Kyiv in 3 days" 😂🧐
D.Radka, #NAFO 🇨🇿🤝🇺🇦 tweet media
342
524
2.5K
59.6K
Rob Cain retweeted
The Associated Press
BREAKING: A judge dismisses President Trump's $10 billion lawsuit against The Wall Street Journal over reporting on his ties to Jeffrey Epstein. apnews.com/article/trump-…
306
3.5K
14.7K
536.8K
Rob Cain
Rob Cain@cain_rob·
@MichaelRosenYes you can be certain they are not straight players nor doing it out of kindness. block the bastards.
0
0
0
32
Michael Rosen 💙💙🎓🎓 NICE 爷爷
Mysterious. I'm getting scores of tweets from anonymous people with a name+lots of numbers with hardly any followers and who've been on X for many years. Someone cranking up the bot-machine somewhere.
23
70
524
6.1K
Rob Cain
Rob Cain@cain_rob·
@EdwardJDavey this language, we understand. it is direct & it is truthful. well done Ed.
0
0
0
8
Rob Cain retweeted
Ed Davey
Ed Davey@EdwardJDavey·
Donald Trump is no leader of the free world. He’s a dangerous and corrupt gangster. The Prime Minister needs to call off the King’s state visit to Washington before it’s too late.
4K
11.9K
37.4K
659.2K
Rob Cain
Rob Cain@cain_rob·
@_annieversary i'm not even sure it works. the author might at least have derived EML production for simple addition & subtraction - even though subtraction is used in the definition of EML anyway.
0
0
0
264
Rob Cain retweeted
Give A Shit About Nature
Give A Shit About Nature@giveashitnature·
Single-use coffee pods produce 56 billion pods of plastic waste per year globally. Stack them end to end, and they’d wrap around the Earth multiple times. A French press or drip maker produces zero. Switch once and never buy pods again. The coffee is better anyway.
Give A Shit About Nature tweet media
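[Editor's aside: the "wrap around the Earth" claim checks out roughly. This arithmetic is mine, and the 3 cm pod height is my assumption, not a figure from the post.]

```python
# Rough check of the coffee-pod circumference claim.
pods_per_year = 56_000_000_000   # from the post
pod_height_m = 0.03              # assumed ~3 cm per pod (my estimate)
earth_circumference_km = 40_075  # equatorial circumference

line_km = pods_per_year * pod_height_m / 1000
wraps = line_km / earth_circumference_km
print(f"{line_km:,.0f} km of pods = about {wraps:.0f} trips around the Earth")
```

At ~1.68 million km, that is roughly 42 wraps, so "multiple times" undersells it if the assumed pod size is right.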
173
1.4K
3.6K
305.2K
Rob Cain retweeted
Volcaholic 🌋
Volcaholic 🌋@volcaholic1·
Two new studies are challenging the “wind turbines kill birds” claim. One AI-backed analysis of 2,007 bird flight paths at an offshore wind farm in Aberdeen recorded ZERO COLLISIONS over 19 months, suggesting risks may be far lower than critics claim. euronews.com/2026/04/11/two…
32
245
702
17.2K
Republicans against Trump
Republicans against Trump@RpsAgainstTrump·
Q: Did you post that picture of yourself depicted as Jesus? Trump: I did post it and I thought it was me as a doctor. And had to do with red cross as a red cross worker
3.9K
1.5K
8.8K
1.3M
Rob Cain
Rob Cain@cain_rob·
@RpsAgainstTrump he is just SO full of shit. even when he doubles down, it's just even more shit on top of shit. in fact, it's shit all the way down with Trump.
0
0
0
7
Rob Cain
Rob Cain@cain_rob·
@Keir_Starmer or, you could simply tell Trump to FUCK THE FUCK OFF. sorted.
0
0
0
3
Keir Starmer
Keir Starmer@Keir_Starmer·
The ongoing closure of the Strait of Hormuz is deeply damaging. Getting global shipping moving is vital to ease cost of living pressures. The UK has convened more than 40 nations who share our aim to restore freedom of navigation. This week the UK and France will co-host a summit to advance work on a coordinated, independent, multinational plan to safeguard international shipping when the conflict ends.
12K
2.2K
13.1K
2.5M
Zohar Komargodski
Zohar Komargodski@ZoharKo·
A cute paper that says the following: If quantum computers succeed (say 10^3 logical qubits) then either P=NP or the "classical variables at the Planck scale" alternative theories (e.g. by 't Hooft) have to be wrong. arxiv.org/abs/2604.06322
10
19
132
12.1K
Rob Cain
Rob Cain@cain_rob·
@ZoharKo i'm not sure this makes any sense whatsoever. seems full of ludicrous assumptions & assertions & ends up telling us what we already know. think they need to lay off the kiff a bit. ;)
0
0
0
20
Rob Cain
Rob Cain@cain_rob·
@skdh the trouble with conspiracy theories is that they get in the way of the real conspiracies. pretty sure it's a well-known & well-used tactic.
0
0
0
5
Sabine Hossenfelder
i think people are drawn to conspiracy theories because that way they get to make sense of a world that just doesn't make sense
805
90
1.3K
70.7K
Rob Cain retweeted
SzabadonMagyarul 🇬🇧🇭🇺🇺🇦🇪🇺
Peter Magyar, during his international press conference, confirmed that Szijjarto, Orbán's foreign minister, has barricaded himself with some of his closest colleagues and is destroying and shredding evidence about his treason (documents about the sanctions against russians).
398
12.6K
35.2K
1.3M