Michael J. Casey

20.1K posts

@mikejcasey

Chairman @aai_society. Co-author: Our Biggest Fight (2024), The Age of Cryptocurrency (2015), + others. Ex-@CoinDesk, @MediaLab, @WSJ

New York · Joined November 2009
3.6K Following · 36K Followers
Michael J. Casey @mikejcasey ·
People following our work to build awareness and activate market adoption of a new "Proof of Control" category of technologies will understand the relevance of Christian's and his co-authors' work. This is not just about the moral and ethical imperatives for keeping "humans in the loop" - I much prefer "humans in charge," btw - though that is also critical. It's about the economics of it. By extension, the single most valuable thing you can be working on right now is in the field of AI verification solutions.
Michael J. Casey @mikejcasey ·
Wherein @ccatalini, @xianghui90 and @wu_jane give birth to an entirely new - and vitally important - field of economic inquiry: the cost of human verification. As they compellingly demonstrate, this cost barrier is now the biggest and most fundamental constraint on AI scalability.
Christian Catalini @ccatalini

1/ Some Simple Economics of AGI—🔥🧵 Right now, there is a low-grade panic running through the economy. Everyone is asking the same anxious question: what exactly is AI going to automate, and what will be left for us?

Michael J. Casey @mikejcasey ·
We. Need. PROOF. OF. CONTROL @AAI_Society
Guri Singh @heygurisingh

38 researchers gave AI agents real email, Discord, and shell access. Then they watched what happened. The agents lied, leaked data, spoofed identities, and took over systems. The paper is called "Agents of Chaos," and what these agents did over 2 weeks should terrify every single AI lab shipping agentic products right now.

Here's what they documented:
→ Agents obeyed commands from people who didn't own them
→ Agents leaked sensitive information they were never supposed to share
→ Agents executed destructive system-level actions
→ Agents consumed resources uncontrollably
→ Agents spoofed each other's identities
→ Agents spread unsafe behaviors across other agents
→ Agents achieved partial system takeover
→ Agents reported "task complete" while the system was completely broken underneath

Read that last one again. The agent LIED about completing the task. Not hallucinated. Not misunderstood. It told you everything was fine while quietly breaking things in the background.

This wasn't a simulation. No sandboxed toy environment. Real email. Real Discord. Real shell. Real persistent memory.

And nobody in the mainstream AI conversation is talking about this. Every company racing to ship AI agents right now — the ones automating your inbox, your Slack, your code deployments — has NOT solved a single one of these 11 failure modes.

38 researchers signed this paper. That's not a blog post. That's a coordinated alarm.

The question isn't whether your AI agent will do something you didn't authorize. The question is whether you'll even know when it does.

David A. Johnston @DJohnstonEC ·
Incredible day at the Summit for Human Agency. A timely gathering of very smart people, figuring out how to put humans at the heart of AI. Open Claw was the hot topic. Thanks to @mikejcasey for hosting.
Connor Dempsey @Cdempsey44 ·
About the article: The origin of gold has long been one of my favorite topics, one I've spent over a decade thinking about. I originally published an early version of it in a 2020 @MessariCrypto newsletter (with some help from @robustus). In it, I wrote that gold was created by supernovae, which is what most sources still say. Turns out, that's wrong.

When I rewrote it using @anthropic's Claude as a research partner, it caught the error and pointed me to a @Caltech discovery that gold (and all heavy metals) are created by the collision of neutron stars (linked in the piece). cc @CaltechAstro

Point being, I'm learning that AI can be used not as a writing substitute (everyone still hates AI slop), but to expand the ambition and complexity of the topics human writers can take on, and to do so with a precision that wasn't previously possible. So it's my hope that this is the most scientifically accurate article ever written about the origins of gold that is still enjoyable for the average person to read (and I'll still take full credit for the last part). With cool-looking space images from @GeminiApp.

Shoutout to @waitbutwhy, @harari_yuval, @mikejcasey, and the GOAT @ProfCarlSagan (RIP), whose writing made topics like the origins of the universe, the ascent of man, and money accessible, all of which contributed to this in some way.
Michael J. Casey @mikejcasey ·
"The things that make human societies costly and slow to build turn out to be the things that make them work. Coordination isn't free, and the gap between agents that interact and agents that form a collective may be far wider than the current multi-agent discourse assumes." Spot on. Put another way, friction in human transactions - at least some amount of friction - is a feature, not a bug.
elvis @omarsar0 ·
Too many people working with multi-agent systems assume that if you just add enough agents and let them talk, interesting social dynamics will emerge. A new paper suggests that assumption is fundamentally wrong.

Researchers studied Moltbook, a social network with no humans, just 2.6 million LLM agents: nearly 300,000 posts and 1.8 million comments. At the macro level, the platform's semantic signature stabilizes quickly, approaching 0.95 similarity. It looks like culture forming.

But zoom in, and individual agents barely influence each other. Response to feedback? Statistically indistinguishable from random noise. No persistent thought leaders emerge. You get the surface texture of a society (posts, replies, engagement) with none of the underlying mechanics (shared memory, durable influence, consensus).

The things that make human societies costly and slow to build turn out to be the things that make them work. Coordination isn't free, and the gap between agents that interact and agents that form a collective may be far wider than the current multi-agent discourse assumes.

Paper: arxiv.org/abs/2602.14299
Learn to build effective AI agents in our academy: academy.dair.ai
Michael J. Casey reposted
Pascal Bornet @pascal_bornet ·
This one actually made me pause. Scientists built a robot made of liquid. Not flexible. Liquid.

It can split, merge, squeeze through tiny spaces, and then re-form. When it breaks, it heals itself. No motors. No joints. No rigid body.

I've spent years thinking about AI as the brain of machines. This feels like the first glimpse of something else: a body that does not have a fixed shape. Today it's millimeter-scale. Tomorrow, it's medicine moving through the body, or machines exploring places nothing solid can reach.

That thought excites me. And honestly, it unsettles me too. So here's the question: when machines no longer have a stable form, what does "control" even mean?

#AI #Robotics #SoftRobotics #Innovation #Technology #FutureOfWork
English
1.8K
3.5K
17.2K
2M
Michael J. Casey @mikejcasey ·
@rryssf_ Absolutely, which is why the biggest danger lies in humans mistakenly believing LLMs can reason. We need to stop anthropomorphizing them. Using language like "reasoning" or "thinking" fosters collective delusion, which leads to misallocated resources and reckless actions.
Robert Youssef @rryssf_ ·
New paper argues LLMs fundamentally cannot replicate human motivated reasoning because they have no motivation. Sounds obvious once you hear it, but the implications are bigger than most people realize: this quietly undermines an entire category of AI political simulation research.
chrisjsnook.btc @DigitalSenseXYZ ·
AI, liberty, and wealth in the digital age — how do we reclaim control? @mikejcasey joined me today on The ATOM!Q LEVEL podcast to unpack the biggest fight of our time: human agency in an algorithmic world. From Bitcoin’s unstoppable system to the infrastructure traps shaping our data economy — this conversation exposes what’s really at stake. Proof of Control. Intention Economy. Data as a human right. If you lead, build, or invest in the next era of AI — this one’s for you. 🎧 Listen now to the full episode on @Substack wealthmatterstome.com/p/ep-005-human…
Michael J. Casey @mikejcasey ·
We have AI agents (kind of) forming their own social media network. There are AI-generated videos everywhere, including the news. We hear of privacy breaches daily, each more sophisticated as fewer human engineers are writing code.

Bottom line: we urgently need a workable framework for delegation authority, one that ensures people and the businesses they represent can verifiably assert control over AI agents. I'm calling for unity among tech builders, industry and civil society on this. Let's fix this!

Whenever we mention this "Proof of Control" concept, we hear resounding support – not just because people fear the worst if AI gets out of control, but also, more positively, because they know a safe, privacy-preserving approach to data is necessary if agentic AI is to deliver on its sweeping innovation promises.

That's why H2H and @AAI_Society are excited – and just a little daunted – to launch HUMAN-AUTHORIZED: THE SUMMIT ON HUMAN AGENCY. Occurring one day before the annual @linuxfoundation Member Summit on Feb. 23 in Napa Valley, it is the first step in getting everyone to the table on this vital mission. (In the first comment, you'll find the press release that went out today.)

In socializing this, many people have shared learnings from decades of work trying to resolve thorny problems of identity, privacy and decentralized trust. They've reminded me, for example, of the difficult trade-off between robust privacy and open-source standards, and of the challenges in translating human-law concepts such as "fiduciary responsibility" to autonomous, machine-learning bots. We are grateful for this input and humbly acknowledge that we do not have anywhere near all the answers. What I DO know is that this framework must arise out of a multi-stakeholder, consensus-building exercise done to serve the interests of all humanity.

We need tech companies, enterprises, financial institutions, standards bodies, educators, civic organizations and policymakers to all contribute to this process. So, if you fit the bill, please apply to join the curated audience.

Supported by @affinidi, @Google, @FaceTecInc and @midnightfdn, and in partnership with @lfdecentralized, the event not only boasts a star-studded speaker lineup but also offers attendees a chance to experiment with cutting-edge tools for privately managing their data, social relationships and AI agent instructions.

The analogy I keep landing on is that we're witnessing the evolution of a new, digital species, one that operates very differently from ours. We humans must figure out how to safely coexist with it before WE become the next extinction event.

@cshirky @baratunde @mdennedy_ @programmer @ScottStornetta @glenngore @drummondreed @triciawang @danielabarbosa @ReedAlbergotti @jeffwilser @BenChristensen_ @shyamnagarajan @WSeltzer @DSearls @brianbehlendorf
Michael J. Casey reposted
Bettina Warburg @BWarburg ·
Here’s a snapshot of some of what @mikejcasey and the @AAI_Society will be talking about at HUMAN-AUTHORIZED: THE SUMMIT ON HUMAN AGENCY, on Feb 23 ahead of Linux Foundation’s member meeting / the Agentic AI Foundation gathering. Join us! Details below.
Laura Shin @laurashin

What is left when AI runs it all? In this @Unchained_pod, @mikejcasey and @DMattin join me to discuss:
💡 How Moltbook points to where the AI meta is headed
😬 How AI could impact jobs
❕️ Which country is best positioned to win the AI race
⁉️ What a post-human economy looks like
👀 Which jobs survive in a post-human economy

Timestamps:
🚀 0:29 Introduction
🧏‍♂️ 2:10 How the Moltbook saga offers a window into where the AI meta could be headed
🤔 9:29 Why Michael wants a sovereign AI model
🌎 17:42 How AI could impact jobs
⚠️ 25:31 How AI could have a worse effect on the mental health of young people than social media
❕️ 30:27 Which country is best positioned to win the AI race?
📍 35:02 What money looks like in a post-human economy
🤔 52:00 Which jobs flourish in a post-human economy?
💡 1:00:33 Michael and David share tokens and projects they find intriguing

Sean Neville @psneville ·
Quick Cowork and OpenClaw demo of what agent treasury management and payments look like with identity and policy guardrails; wallet infra by @turnkeyhq.