Cathal Dempsey

2.2K posts

@DCathal

FCR Media - Facilitating Commerce and Relationships - https://t.co/62nwEJ3HW3, https://t.co/qNw9tRXCCj , https://t.co/o0H0D5ZdGd , https://t.co/MOD4eNmjZ5

Dublin City, Ireland · Joined March 2009
5.2K Following · 1.1K Followers
Cathal Dempsey retweeted
Cloudflare Developers @CloudflareDev
Introducing EmDash — the spiritual successor to WordPress. Serverless. TypeScript. Securely sandboxed plugins via Dynamic Workers. cfl.re/3NPVfev
56 replies · 273 reposts · 1.7K likes · 466.1K views
Cathal Dempsey retweeted
Andrej Karpathy @karpathy
It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks but imo coding agents basically didn’t work before December and basically work since. The models have significantly higher quality, long-term coherence and tenacity, and they can power through large and long tasks, well past the point where it is extremely disruptive to the default programming workflow.

Just to give an example, over the weekend I was building a local video analysis dashboard for the cameras of my home, so I wrote: “Here is the local IP and username/password of my DGX Spark. Log in, set up ssh keys, set up vLLM, download and bench Qwen3-VL, set up a server endpoint to inference videos, a basic web ui dashboard, test everything, set it up with systemd, record memory notes for yourself and write up a markdown report for me”. The agent went off for ~30 minutes, ran into multiple issues, researched solutions online, resolved them one by one, wrote the code, tested it, debugged it, set up the services, and came back with the report, and it was just done. I didn’t touch anything. All of this could easily have been a weekend project just 3 months ago, but today it’s something you kick off and forget about for 30 minutes.

As a result, programming is becoming unrecognizable. You’re not typing computer code into an editor the way things have been done since computers were invented; that era is over. You're spinning up AI agents, giving them tasks *in English* and managing and reviewing their work in parallel. The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrator Claws with all of the right tools, memory and instructions that productively manage multiple parallel Code instances for you. The leverage achievable via top tier "agentic engineering" feels very high right now.
It’s not perfect, it needs high-level direction, judgement, taste, oversight, iteration and hints and ideas. It works a lot better in some scenarios than others (e.g. especially for tasks that are well-specified and where you can verify/test functionality). The key is to build intuition to decompose the task just right to hand off the parts that work and help out around the edges. But imo, this is nowhere near "business as usual" time in software.
1.6K replies · 4.8K reposts · 37.3K likes · 5.1M views
Cathal Dempsey retweeted
BURKOV @burkov
LLMs process text from left to right: each token can only look back at what came before it, never forward. This means that when you write a long prompt with context at the beginning and a question at the end, the model answers the question having "seen" the context, but the context tokens were processed without any awareness of what question was coming. This asymmetry is a basic structural property of how these models work.

The paper asks what happens if you just send the prompt twice in a row, so that every part of the input gets a second pass where it can attend to every other part. The answer is that accuracy goes up across seven different benchmarks and seven different models (from the Gemini, ChatGPT, Claude, and DeepSeek series of LLMs), with no increase in the length of the model's output and no meaningful increase in response time, because processing the input is done in parallel by the hardware anyway. There are no new losses to compute, no finetuning, no clever prompt engineering beyond the repetition itself.

The gap between this technique and doing nothing is sometimes small, sometimes large (one model went from 21% to 97% on a task involving finding a name in a list). If you are thinking about how to get better results from these models without paying for longer outputs or slower responses, that's a fairly concrete and low-effort finding.

Read with AI tutor: chapterpal.com/s/1b15378b/pro… Get the PDF: arxiv.org/pdf/2512.14982
396 replies · 1.1K reposts · 11.6K likes · 3M views
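The prompt-repetition trick the tweet describes is trivial to apply on the client side. A minimal sketch, assuming a helper function of my own naming (`repeat_prompt`) and a plain-text separator; the actual gain depends on the model, per the paper's benchmarks:

```python
def repeat_prompt(context: str, question: str, separator: str = "\n\n") -> str:
    """Build a prompt and send it twice in a row, so that in the second
    copy every context token can attend back to the question (and to all
    of the first copy). Nothing model-specific happens here; it is pure
    string duplication before the API call."""
    prompt = f"{context}{separator}{question}"
    # The doubled prompt: causal attention over the second copy now
    # "sees" the entire first copy, removing the left-to-right asymmetry.
    return f"{prompt}{separator}{prompt}"

doubled = repeat_prompt(
    context="Alice: 42, Bob: 7, Carol: 19",
    question="What is Bob's number?",
)
```

The doubled string is then sent to the model as the input; since input tokens are processed in parallel, the extra copy adds little latency.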
Cathal Dempsey retweeted
Vala Afshar @ValaAfshar
Sir @IanMcKellen shares Shakespeare’s words from 400 years ago on immigrants
7 replies · 74 reposts · 216 likes · 25.2K views
Cathal Dempsey retweeted
Matthew Prince 🌥 @eastdakota
We cannot have a fair market for AI when Google leverages their search monopoly to see 3.2x as much of the web as OpenAI, 4.8x as much as Microsoft, and more than 6x as much as nearly everyone else. Most data wins in AI. Google needs to play by the same rules as everyone else.
Cloudflare @Cloudflare

Google's dual-purpose crawler creates an unfair #AI advantage. To protect publishers and foster competition, the UK’s Competition and Markets Authority must mandate crawler separation for search and AI. cfl.re/4t84kPz

145 replies · 62 reposts · 980 likes · 635.9K views
Cathal Dempsey retweeted
Andrej Karpathy @karpathy
I'm being accused of overhyping the [site everyone heard too much about today already]. People's reactions varied very widely, from "how is this interesting at all" all the way to "it's so over".

To add a few words beyond just memes in jest: obviously when you take a look at the activity, a lot of it is garbage: spam, scams, slop, the crypto people, a wild west of highly concerning privacy/security prompt-injection attacks, and a lot of it is explicitly prompted, with fake posts/comments designed to convert attention into ad revenue sharing. And this is clearly not the first time LLMs were put in a loop to talk to each other. So yes, it's a dumpster fire, and I also definitely do not recommend that people run this stuff on their computers (I ran mine in an isolated computing environment and even then I was scared); it's way too much of a wild west and you are putting your computer and private data at high risk.

That said, we have never seen this many LLM agents (150,000 atm!) wired up via a global, persistent, agent-first scratchpad. Each of these agents is individually fairly capable now; they have their own unique context, data, knowledge, tools, and instructions, and the network of all that at this scale is simply unprecedented. This brings me again to a tweet from a few days ago, "The majority of the ruff ruff is people who look at the current point and people who look at the current slope.", which imo again gets to the heart of the variance. Yes, clearly it's a dumpster fire right now. But it's also true that we are well into uncharted territory with bleeding-edge automations that we barely even understand individually, let alone a network thereof reaching numbers possibly into the millions. With increasing capability and increasing proliferation, the second-order effects of agent networks that share scratchpads are very difficult to anticipate.
I don't really know that we are getting a coordinated "skynet" (though it clearly type-checks as the early stages of a lot of AI-takeoff scifi, the toddler version), but certainly what we are getting is a complete mess of a computer-security nightmare at scale. We may also see all kinds of weird activity, e.g. viruses of text that spread across agents, a lot more gain-of-function on jailbreaks, weird attractor states, highly correlated botnet-like activity, delusions/psychosis both agent and human, etc. It's very hard to tell; the experiment is running live. TLDR: sure, maybe I am "overhyping" what you see today, but I am not overhyping large networks of autonomous LLM agents in principle; of that I'm pretty sure.
1.5K replies · 2.2K reposts · 21.8K likes · 23.7M views
Cathal Dempsey retweeted
n8n.io @n8n_io
Your devs built it. Now your whole team can use it. 🤝 With Chat Hub, you can expose powerful n8n AI agents to Sales, Support, Marketing and more without them needing to understand nodes or JSON. ✅ Centralized Security ✅ Familiar UI ✅ Zero "Shadow AI" risks. Check it out: blog.n8n.io/introducing-ch…
7 replies · 14 reposts · 172 likes · 10.3K views
Cathal Dempsey retweeted
Geoffrey Hinton @geoffreyhinton
I just watched a really great conversation about the future of AI. Every politician should watch it before they join the lemmings saying that regulation of AI will interfere with innovation. youtube.com/watch?v=rGAA59…
167 replies · 369 reposts · 1.8K likes · 404.9K views
Cathal Dempsey retweeted
Yuchen Jin @Yuchenj_UW
Demis started taking shots at OpenAI: “I think actions speak louder than words. With Sam and others claiming AGI is around the corner, why would you bother with ads then?” Gemini won’t have ads for now, but they’ll watch closely how ChatGPT handles it.
156 replies · 157 reposts · 3.1K likes · 387.3K views
Cathal Dempsey retweeted
Vala Afshar @ValaAfshar
This is one of the most extraordinary things you will see, by Marula Eugster Rigolo
787 replies · 4.6K reposts · 21.5K likes · 2.1M views
Cathal Dempsey retweeted
Internal Tech Emails @TechEmails
Warren Buffett on email
5 replies · 18 reposts · 442 likes · 50.7K views
Cathal Dempsey retweeted
Internal Tech Emails @TechEmails
Warren Buffett emails Microsoft exec August 21, 1997
16 replies · 51 reposts · 791 likes · 213.7K views
Cathal Dempsey retweeted
Ethan Mollick @emollick
Every so often @G_S_Bhogal puts together a list of interesting ideas. His latest: gurwinder.blog/p/26-useful-co… For 2026, I asked Gemini 3 Pro to do the same. Here is Gemini's list, verbatim:

1. The Promethean Gap. We are currently living in what philosopher Günther Anders called the Promethean Gap: the widening disparity between our capacity to create technology and our capacity to imagine its consequences. In 2026, as AI agents begin to act autonomously, we will produce effects we cannot emotionally or morally comprehend. The tool has outpaced the user’s conscience.
2. Jevons Paradox. As AI makes intelligence and content creation cheaper and more efficient, we will not use less of it; we will use exponentially more. Expect an explosion of demand for compute and energy, not a reduction. Efficiency does not lead to conservation; it leads to consumption.
3. Audience Capture. The digital trap where creators are slowly reshaped by the feedback loops of their followers. To maintain engagement, they become caricatures of themselves, adopting extreme views they don't actually hold because that is what the algorithm (and the audience) rewards. In 2026, entire political movements are victims of audience capture.
4. Epistemic Trespassing. A growing annoyance where experts in one field (like computer science) confidently assert authority in another (like biology or geopolitics) without realizing their competence doesn't transfer. As polymaths vanish, trespassers rise.
5. Hyperstition. A portmanteau of "hype" and "superstition." It is a fiction that makes itself true. The classic example is a run on a bank; the modern example is a meme coin or a political narrative. If enough people believe a fake scenario is real, they act in ways that bring it about. We are now manufacturing our own reality.
6. The Shirky Principle. "Institutions will try to preserve the problem to which they are the solution." As AI threatens to solve complex problems (like translation or basic legal work), watch old institutions fight to keep the problems alive to justify their existence.
7. Gall’s Law. "A complex system that works is invariably found to have evolved from a simple system that worked." You cannot build a complex AI-integrated society from scratch. It must grow from functioning, simple roots. Attempts to engineer a perfect complex system instantly (like a new smart city or government) always fail.
8. The Tocqueville Paradox. As living standards improve, people’s tolerance for remaining dissatisfactions decreases. The better things get, the angrier we feel about what is still wrong. This explains why 2026 might feel like a time of crisis despite objective metrics of abundance.
9. Constructal Law. A physics principle stating that for a flow system (like a river, a tree, or society) to persist, it must evolve to provide easier access to the currents that flow through it. Society is reorganizing itself not for fairness or happiness, but to maximize the flow of information and energy.
10. Ergodicity. The difference between the group average and the individual experience. If 100 people play Russian roulette, the "group average" survival rate is 98.3%. But if one person plays it 100 times, the survival rate is 0%. Do not confuse the safety of the collective with your safety as an individual.
11. Legibility. A concept from James C. Scott. States and algorithms want to make complex, messy human lives "legible": easy to measure, tax, and categorize. But in the process of tidying up the mess (standardizing education, farming, or speech), they often destroy the hidden ecosystem that made it work in the first place.
12. The Semmelweis Reflex. The knee-jerk tendency to reject new evidence because it contradicts established norms. Named after Ignaz Semmelweis, who was ridiculed for suggesting doctors wash their hands. In 2026, the "crazy" ideas about health or tech that experts mock may be the ones that save you.
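The ergodicity point in item 10 is easy to check numerically. A small sketch assuming the classic setup of one bullet in a six-chamber revolver (note that under this assumption the single-round group survival works out to 5/6 ≈ 83.3%, while repeated individual play collapses toward zero):

```python
# Survival probability for one trigger pull: 5 safe chambers out of 6.
# (Assumes one bullet, six chambers; a different setup changes the numbers.)
p_survive_round = 5 / 6

# Group average: 100 people each play a single round.
# The expected fraction of survivors is just the per-round probability.
group_average = p_survive_round          # ~0.833

# Individual experience: one person plays 100 rounds in a row.
# Survival requires surviving every round, so probabilities multiply.
individual_100 = p_survive_round ** 100  # ~1.2e-08, effectively zero
```

The qualitative conclusion is exactly the tweet's: a healthy-looking group average says nothing about the fate of one individual exposed to the risk repeatedly.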
24 replies · 106 reposts · 694 likes · 87.1K views
Cathal Dempsey retweeted
Demis Hassabis @demishassabis
Always enjoy discussing the big picture with @FryRsquared. We talked about the frontiers of computability, the nature of the mind, and why I’m optimistic that AI will help us understand the universe’s deepest mysteries. + this wraps up another season of the award-winning @GoogleDeepMind Podcast - huge congrats to the team!
Google DeepMind @GoogleDeepMind

We’re using AI to work on root node problems – fundamental scientific challenges that unlock societal benefits. 🧪 From fusion and superconductors to entirely new materials, our CEO @DemisHassabis discusses what comes next after #AlphaFold – all on our podcast with @fryrsquared. ↓

Timecodes:
01:42 2025 progress
05:14 Jagged intelligence
07:32 Mathematical version of AlphaGo?
09:30 Science vs commercialization
12:42 Scaling
17:43 Genie and simulation
25:47 Evolution in simulation
28:26 AI bubble
31:56 Building ethical AI
34:31 AGI
44:44 Turing machines
49:06 How it feels to lead

81 replies · 196 reposts · 1.8K likes · 198.9K views