Gerard Sans | Axiom 🇬🇧

38.3K posts

@gerardsans

Founder Axiom // Forging skills for the new era of AI. GDE in AI, Cloud & Angular. Building London's tech & art nexus @nextai_london. Speaker | MC | Trainer.

London ☔ Joined October 2007
6.9K Following · 36.1K Followers
Pinned Tweet
Gerard Sans | Axiom 🇬🇧@gerardsans·
2023 wrap-up in pics💂‍♀️🇬🇧 >> Starring London, Berlin, Milan, Montreal, Toronto, Seoul, Busan, Amsterdam, Milton Keynes, Berlin, San Francisco, Los Angeles, Honolulu, Waikiki, Warsaw, Perth, Singapore, Bali, Jakarta, Bogor, Lisbon, Rome and Naples >> Web, Cloud, Web3 & AI /part1
Gerard Sans | Axiom 🇬🇧 tweet media
Replies: 4 · Reposts: 4 · Likes: 42 · Views: 28.5K
Gerard Sans | Axiom 🇬🇧
@Jacobsklug We’ve known since 2023. The evidence is public. What’s missing isn’t proof, it’s honesty. Reality is neutral. A corrupt system doesn’t sustain itself. People sustain it. All of us.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 4
Gerard Sans | Axiom 🇬🇧
This isn’t confusion. It’s incentives. AI isn’t being driven by truth or safety, it’s being driven by power and the golden goose. When evidence gets in the way, it gets buried. We already know the limits. We just don’t act on them because it would kill the goose. So everyone plays along, labs, academia, industry, government. And the costs get dumped on the public. The gap between hype and reality isn’t a bug. It’s the business model.
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 19
Jacob Klug@Jacobsklug·
the craziest part about what's going on in AI:

data level
> the training data is scraped from the open internet with minimal curation
> labeling and reinforcement is outsourced to companies like Scale AI
> which is largely powered by low-paid workers in developing countries manually tagging data
> the foundation of the smartest technology ever built is human judgment at $2/hour

model level
> built on top of this loosely curated data
> we don't fully understand how AI interprets what it's trained on
> we don't understand how AI uses that data to strategically and creatively think on its own
> the companies building these models are still figuring it out in real time

application level
> these companies are all using the same models
> usually refined by fine-tuning, prompting, and distilling
> they don't have real control or understanding over AI quality and output

user level
> have absolutely no idea what's going on, and will sign up for five tools with slightly different prompting

concluding thoughts
everyone assumes the next person in the chain knows what's going on. the truth is AI is a relatively misunderstood technology at every level. when we don't understand something this powerful, it carries existential risk.
Replies: 6 · Reposts: 0 · Likes: 9 · Views: 623
Gerard Sans | Axiom 🇬🇧
While technically correct, scalability falls sharply as costs rise exponentially due to entropy. Brute-force exploration through scaffolding, like dragging a trailer without tires, is one of the most inefficient ways to confront the real limitations of LLMs. Ignoring these constraints doesn’t make them vanish; it only makes overcoming them far more expensive. The smarter approach is to tackle the problems directly, rather than pretend they don’t exist.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 4
David Scott Patterson@davidpattersonx·
AI is closer to the limit of intelligence than most people think. At the current rate of improvement, AI will become perfect at everything by 2028 or 2029. Beyond that point, it won't matter whether AI becomes more intelligent, because it won't become any more useful.
Replies: 60 · Reposts: 6 · Likes: 173 · Views: 7.3K
Gerard Sans | Axiom 🇬🇧
AI is just software, it doesn’t think, understand, or become. Treating it as if it does only confuses the discussion. How much it can cover is a separate matter. Distribution-based systems need regularities in the data, which makes them ill-suited for chaotic or high-entropy environments, like real-world driving. As software, AI can be applied broadly, including to itself. This isn’t a new “intelligence” story, it’s simply the behavior of software, something we have been using in many different domains and data modalities for decades.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 5
Inflectiv AI ⧉@inflectivAI·
@davidpattersonx Even if an AI becomes "perfect" at digital tasks (Bits), its utility remains capped by the physical world (Atoms). The breakthrough isn't more intelligence; it's the integration of that intelligence into robotics and physical infrastructure to break the real-world execution limit
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 107
Gerard Sans | Axiom 🇬🇧
AI is just software, it isn’t intelligent in any meaningful way. Saying otherwise is like claiming a dictionary is “smarter” simply because it has more entries or edits. If you want a deeper explanation of this distinction, see the Potemkin Understanding paper from last year, which directly addresses and debunks this common line of argument.
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 17
Gerard Sans | Axiom 🇬🇧
Ask yourself why you trust what AI labs say about their own technology in the first place. Anthropic’s analysis overlooks a key factor: coding has an unusually strong feedback loop, making errors easy to detect, test, and undo. Most verticals lack anything comparable: no compiler, no verifier, no Git-like safety net, and often no tooling stack at all. Even in software, AI breaks down beyond boilerplate or in domains like UI/UX, where feedback can’t be automated. AI fails silently, does not transfer competence, and remains unfit for autonomous workflows without close oversight; frontier AI failure rates sit at 97% on real-world digital tasks (Scale AI RLI). Don’t get burned by the hype. ai-cosmos.hashnode.dev/the-country-of…
Replies: 1 · Reposts: 0 · Likes: 2 · Views: 80
Miles Deutscher@milesdeutscher·
Expert market analysts: "The AI bubble is the biggest risk to the economy we've ever seen." Yesterday, I published an article that explains exactly why they're right. Take 30 seconds out of your day to read the summary: (most people will be blindsided by what comes next)
Miles Deutscher tweet media
Replies: 30 · Reposts: 3 · Likes: 78 · Views: 10.1K
Goldman Sachs@GoldmanSachs·
According to Goldman Sachs Research, 300 million jobs globally could be exposed to AI automation over the next decade. However, AI is also likely to help create jobs—particularly in the buildout of the power and data center infrastructure required to sustain the boom: click.gs.com/t3et
Replies: 23 · Reposts: 75 · Likes: 278 · Views: 67.9K
Gerard Sans | Axiom 🇬🇧
@johncrickett @DarnellTheGeek AI is software, not a mind with beliefs or accountability. It computes within a narrow, fragile scope, so it needs oversight to catch drift and silent errors. Use it as a tool, not a decision-maker. It offers context, not truth, so verify outputs in high-stakes cases.
Replies: 0 · Reposts: 0 · Likes: 5 · Views: 98
John Crickett@johncrickett·
Large language models don't think. They don't reason. And they can't produce endless new information. This is clearly explained by George D. Montañez in a recent talk at Baylor University, and it's worth understanding why. Three key points stood out to me:

LLMs don't ponder, they process. They're next-token predictors, sophisticated ones, but they have no understanding of what they're producing. They know two vectors are similar; they don't know what either vector means.

LLMs don't reason, they rationalise. Studies show their outputs shift based on irrelevant prompt wording, embedded hints, and statistical shortcuts. The "chain of thought" they show you often has nothing to do with how they actually arrived at the answer.

They don't create endless information. Training AI on AI output causes rapid degradation and model collapse. Information theory tells us you can't get more out than you put in, regardless of the architecture.

None of this means these tools aren't useful. But it does mean we should stop anthropomorphising them and start being honest about what they actually are. The hype is real. So are the limits.

You can watch the talk on YouTube here: youtube.com/watch?v=ShusuV…
Replies: 41 · Reposts: 52 · Likes: 226 · Views: 14.8K
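The "two vectors are similar" point above is easy to make concrete. The sketch below uses toy 4-dimensional vectors invented for illustration (real embedding spaces have thousands of dimensions); it shows that similarity is pure geometry: the arithmetic can rank pairs without any notion of what the words mean.

```python
import math

# Toy "embeddings": the numbers stand in for co-occurrence statistics,
# nothing more. (Hypothetical 4-dimensional vectors, invented for illustration.)
embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "queen": [0.9, 0.7, 0.2, 0.2],
    "pizza": [0.1, 0.2, 0.9, 0.8],
}

def cosine(a, b):
    """Angle-based similarity: 1.0 = same direction, near 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# The model can rank pairs by similarity...
print(cosine(embeddings["king"], embeddings["queen"]))  # high
print(cosine(embeddings["king"], embeddings["pizza"]))  # low
# ...but nothing in the arithmetic "knows" what a king or a pizza is.
```

Nothing here changes if the labels are swapped for arbitrary IDs; the geometry, and therefore the model's behavior, is identical, which is the talk's point about vectors without meaning.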
Gerard Sans | Axiom 🇬🇧
One of the core problems with LLMs and AI agents is that they often fail silently. That isn’t a bug you can simply fix; it’s a consequence of how these systems are designed. Until the AI industry openly acknowledges this reality, we’ll likely keep being surprised by “unexplained” failures. This is the price of speculation and misaligned incentives across academia, research, and politics. ai-cosmos.hashnode.dev/the-country-of…
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 9
Psyho@FakePsyho·
Seems that AI 2027 (ridiculed for "impossible" timelines) severely underestimated the speed of progress in late 2025 / early 2026:
- AI coding agents have a much greater impact than the projected speedups
- OpenAI alone already matched the revenue estimate two months earlier ($25B in Feb); if we combine revenue from all frontier labs, we've probably already matched the Jan 2027 estimate ($55B)
I wouldn't be that much surprised if the authors revert to their original timelines at some point
Psyho tweet media
Replies: 28 · Reposts: 32 · Likes: 447 · Views: 28.5K
Gerard Sans | Axiom 🇬🇧 reposted
Google AI Studio@GoogleAIStudio·
vibe coding in AI Studio just got a major upgrade 🚀
• multiplayer: build real-time games & tools
• real services: connect live data
• persistent builds: close the tab, it keeps working
• pro UI: shadcn, Framer Motion & npm support
we can't wait to see what you build!
Replies: 163 · Reposts: 326 · Likes: 3.3K · Views: 391.6K
Gerard Sans | Axiom 🇬🇧
Ask yourself why you trust what AI labs say about their own technology in the first place. Anthropic’s analysis overlooks a key factor: coding has an unusually strong feedback loop, making errors easy to detect, test, and undo. Most verticals lack anything comparable: no compiler, no verifier, no Git-like safety net, and often no tooling stack at all. Even in software, AI breaks down beyond boilerplate or in domains like UI/UX, where feedback can’t be automated. AI fails silently, does not transfer competence, and remains unfit for autonomous workflows without close oversight; frontier AI failure rates sit at 97% on real-world digital tasks (Scale AI RLI). Don’t get burned by the hype. ai-cosmos.hashnode.dev/the-country-of…
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 117
Todd Saunders@toddsaunders·
I heard an incredible analogy from a VC friend that I can’t stop thinking about.

“The moat in software was the cost of building software. And Claude Code just mass produced a bridge.”

It’s wild when you think about the impact of this. The SaaS boom produced a few dozen billionaires and a bunch of zero sum winners. But the AI SaaS era will mass produce millionaires. There will be fewer ServiceTitans hitting $5B valuations, and instead there will be 50,000 companies doing $500K-$5M each, run by 1-3 people with deep expertise and huge margins.

To be clear, I believe that the total value of software goes up, and the number of companies created goes up exponentially. But the number of people who capture the value also goes up 100x. I don’t believe in the “SaaS is dying” headline, I think it’s missing the point. It’s simply that the power of SaaS is changing hands.
Replies: 132 · Reposts: 50 · Likes: 600 · Views: 197.9K
Gerard Sans | Axiom 🇬🇧
Ask yourself why you trust what AI labs say about their own technology in the first place. Anthropic’s analysis overlooks a key factor: coding has an unusually strong feedback loop, making errors easy to detect, test, and undo. Most verticals lack anything comparable: no compiler, no verifier, no Git-like safety net, and often no tooling stack at all. Even in software, AI breaks down beyond boilerplate or in domains like UI/UX, where feedback can’t be automated. AI fails silently, does not transfer competence, and remains unfit for autonomous workflows without close oversight; frontier AI failure rates sit at 97% on real-world digital tasks (Scale AI RLI). Don’t get burned by the hype. ai-cosmos.hashnode.dev/the-country-of…
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 15
ericosiu@ericosiu·
AI can already replace 95% of coding tasks. It's only doing 33%. Read that again.

Anthropic just published a study showing the gap between what AI could automate and what it's actually automating right now. The numbers are wild.

Programming? 95% possible. Only 33% happening.
Customer service? 82% possible. 28% happening.
Healthcare? 40% possible. Only 5% happening.
Construction? 15% possible. 2% happening.

That gap is the biggest opportunity in business right now. Every percentage point between what AI can do and what it's actually doing is a company waiting to be built. A tool waiting to be created. A workflow waiting to be automated.

Most founders are chasing the stuff AI already dominates — chatbots, content, coding tools. The real money is in the industries where AI has barely started: healthcare, legal, construction, logistics. That 5% in healthcare? That's going to 40% in the next 3 years. Whoever builds the bridge gets paid.

Stop building in crowded spaces. Build where the gap is widest.

For more on AI, business, and marketing, just comment "newsletter."
ericosiu tweet media
Replies: 7 · Reposts: 3 · Likes: 41 · Views: 2.8K
Gerard Sans | Axiom 🇬🇧
Spreading fear, uncertainty, and doubt isn’t very mindful, but in Silicon Valley, it’s often celebrated. AI won’t replace most jobs yet for one simple reason: it isn’t reliable enough. LLMs and AI agents frequently fail silently. And that’s not just a bug you can patch; it’s a consequence of how these systems are built. Until the industry openly acknowledges this, we’ll keep seeing “mysterious” failures. That’s the cost of hype, speculation, and misaligned incentives across academia, research, and politics. For example, one benchmark testing frontier AI on real-world work reported a 97% failure rate. Avoid getting burned by the hype. ai-cosmos.hashnode.dev/the-country-of…
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 15
Gerard Sans | Axiom 🇬🇧
Spreading fear, uncertainty, and doubt isn’t very mindful, but in Silicon Valley, it’s often celebrated. AI won’t replace most jobs yet for one simple reason: it isn’t reliable enough. LLMs and AI agents frequently fail silently. And that’s not just a bug you can patch; it’s a consequence of how these systems are built. Until the industry openly acknowledges this, we’ll keep seeing “mysterious” failures. That’s the cost of hype, speculation, and misaligned incentives across academia, research, and politics. For example, one benchmark testing frontier AI on real-world work reported a 97% failure rate. Avoid getting burned by the hype. ai-cosmos.hashnode.dev/the-country-of…
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 1
Tuki@TukiFromKL·
🚨 Do you understand what happened in the last 24 hours?
> Zuckerberg killed the Metaverse after burning $80 billion on cartoon avatars nobody used
> Sam Altman took $13 billion from Microsoft then sold OpenAI's cloud to Amazon for $50 billion.. Microsoft just found out they funded their own competition
> Anthropic made an AI that takes orders from your phone and does your work while you sleep..
> X dropped a dislike button AND a mute-entire-countries button in the same week..
> YouTube asking you to flag AI slop is just Google getting 2 billion people to train their next model for free
> 93% of US jobs can now be partly done by AI.. Same week companies started giving the weakest raises since 2008
> Apple started rejecting vibe-coded apps from the App Store
> xAI is paying Wall Street bankers $100/hour to teach Grok how to replace Wall Street bankers.. They're taking the money..
> A mystery AI model appeared on benchmarks beating everything.. Developers think DeepSeek is quietly testing their next weapon
> Bloomberg asked "Is the AI bubble about to burst" the same day Nvidia said the chip market will hit $1 trillion.. One of them is dead wrong..
> The UK government backed down on AI copyright after artists revolted.. First government to flinch
> The Fed said rate hikes are back on the table and blamed AI data centers for making inflation worse
And it's only Wednesday. See you tomorrow. It'll be worse.
If you're not following me you're finding out about this stuff 48 hours late from someone who read my post
Tuki@TukiFromKL

🚨 Do you understand what happened in the last 24 hours?
> A Chinese lab made AI 25% cheaper and gave it away for free. OpenAI charges you $200/month for worse.
> A robot got arrested in China. Not shut down.. Arrested... Catching charges before GTA 6 dropped.
> JPMorgan told Meta to fire 20% of staff.. Meta did it that night.. The stock went UP but 14,000 people lost their jobs and Wall Street clapped.
> Elon poached the engineers who built Cursor and said SpaceX will "far exceed" everyone in AI..
> xAI is paying Wall Street bankers to teach AI how to replace Wall Street bankers... They're taking the money. 💀
> Jensen said Nvidia will hit $1 TRILLION in revenue by 2027.. Lost $600B in January and recovered in two weeks.. Then named his price.
> OpenAI gave AI agents the power to spawn OTHER AI agents.. The AI now hires its own employees.
> Manus put a full AI agent on your desktop.. Every $15/month SaaS tool just became obsolete.
> An AI CMO launched that replaces your entire marketing team for $99/month. Your social media manager, SEO guy, content writer - all of them for $99.
> Nvidia launched DLSS 5 - AI that upgrades your game graphics in real time to worse
And it's only Monday. See you tomorrow. It'll be worse.

Replies: 745 · Reposts: 7.2K · Likes: 47.6K · Views: 8.2M
Tom Goodwin@tomfgoodwin·
I’m surely being stupid. But if AI is rather unconstrained by expertise or capacity or to some extent speed, why do we need to divide tasks or departments among 9 agents (the marketing agent, the optimization agent, etc.), each doing one thing, and then another agent to manage the swarm? Can't one agent just do it all, you know? It seems very skeuomorphic. Will we have HR agents to make sure the agent agents are being looked after? An office canteen manager agent to feed the agents? Seems daft
Replies: 173 · Reposts: 3 · Likes: 170 · Views: 21.3K
kapilansh@kapilansh_twt·
vibe coding is just a fancy term for "I have no idea what my codebase does"
→ AI writes 400 lines
→ you don't read it
→ it works
→ you ship it
→ 3am production fire
→ you have no idea where to start
→ ask AI to fix it
→ AI breaks 3 other things
we're not building faster
we're just breaking things at the speed of light and calling it innovation
Replies: 143 · Reposts: 58 · Likes: 668 · Views: 29.2K
Gerard Sans | Axiom 🇬🇧
@HuggingPapers Important distinction: this AI is trained on citations, which are a proxy for impact, not truth or quality. It’s basically a “popularity predictor” for research papers. Useful? Maybe. Understanding science? No.
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 24
DailyPapers@HuggingPapers·
AI can learn scientific taste Trained on 696K citation pairs, Scientific Judge predicts high-impact papers better than GPT-5.2 & Gemini 3 Pro. Scientific Thinker generates research ideas with higher potential impact using reinforcement learning from community feedback.
DailyPapers tweet media
Replies: 4 · Reposts: 10 · Likes: 89 · Views: 7.4K
Gerard Sans | Axiom 🇬🇧
@KanikaBK Important distinction: this AI is trained on citations, which are a proxy for impact, not truth or quality. It’s basically a “popularity predictor” for research papers. Useful? Maybe. Understanding science? No.
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 30
Kanika@KanikaBK·
🚨I JUST READ SOMETHING SHOCKING.

Researchers just trained an AI to predict which scientific ideas will succeed before any experiment is run. It is now better at judging research than GPT-5.2, Gemini 3 Pro, and every top AI model on the market. And it learned by studying 2.1 million research papers without a single human scientist teaching it what "good science" looks like.

Here is what they did. A team of Chinese researchers built two AI systems. The first, called Scientific Judge, was trained on 700,000 matched pairs of high-citation vs low-citation papers. Every pair came from the same field and the same time period. The AI's only job: figure out which paper would have more impact.

It worked. The AI now predicts which research will succeed with 83.7% accuracy. That is higher than GPT-5.2. Higher than Gemini 3 Pro. Higher than every frontier model that exists.

Then they built the second system. Scientific Thinker doesn't just judge ideas. It proposes them. You give it a research paper, and it generates a follow-up idea with high potential impact. When tested head to head against GPT-5.2, Scientific Thinker's ideas were rated as higher impact 61% of the time. It is generating better research directions than the smartest AI models in the world.

It gets stranger. They trained the Judge only on computer science papers. Then they tested it on biology. Physics. Mathematics. Fields it had never seen. It still worked. 71% accuracy on biology papers it was never trained on. The AI didn't learn what makes good computer science. It learned what makes good science, period.

Then the researchers tested whether it could see the future. They trained it on papers through 2024, then asked it to judge 2025 papers. It predicted which ones would gain traction with 74% accuracy. The AI learned to spot winners before the scientific community did.

Here is what nobody is talking about. A 1.5 billion parameter model, tiny by today's standards, jumped from 7% to 72% accuracy after training. That is a 65-point leap. The ability to judge scientific quality isn't some emergent property of massive models. It can be taught to small, cheap, fast AI systems that anyone can run.

Every year, over 2 million papers flood scientific databases. Researchers spend months deciding what to work on next. Grant committees spend billions deciding what to fund. An AI just learned to make those decisions faster, cheaper, and more accurately than any of them.

If an AI can now judge which ideas will shape the future of science, what exactly is left that only a human scientist can do?
Kanika tweet media
Replies: 35 · Reposts: 107 · Likes: 367 · Views: 22.3K
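For readers wondering what "trained on matched pairs of high-citation vs low-citation papers" means mechanically, here is a minimal sketch of pairwise preference learning with a Bradley-Terry style logistic loss. The two-dimensional features and tiny dataset are invented for illustration, and the real Scientific Judge is a language model, not a linear scorer; only the training principle is the same.

```python
import math

# Each training example: (features of higher-cited paper, features of lower-cited paper).
# The feature values are hypothetical, invented for illustration.
pairs = [
    ([0.9, 0.7], [0.2, 0.3]),
    ([0.8, 0.6], [0.3, 0.1]),
    ([0.7, 0.9], [0.4, 0.2]),
]

w = [0.0, 0.0]  # linear "judge": score(x) = w . x

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Bradley-Terry: P(a beats b) = sigmoid(score(a) - score(b)).
# Gradient ascent on the log-likelihood of the observed winners.
lr = 0.5
for _ in range(200):
    for a, b in pairs:
        p = 1.0 / (1.0 + math.exp(-(score(a) - score(b))))
        g = 1.0 - p  # gradient of log P(a beats b) wrt the score margin
        for i in range(len(w)):
            w[i] += lr * g * (a[i] - b[i])

# The trained judge ranks an unseen pair:
print(score([0.85, 0.8]) > score([0.25, 0.2]))  # expected: True
```

The sketch also makes the caveat from the replies above concrete: a judge trained this way only learns to reproduce the citation signal in its training pairs, a proxy for impact rather than for truth or quality.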