Michael Ryan

309 posts

@michaelryan207

PhD Student @stanfordnlp || Working on DSPy 🧩 || Prev @GeorgiaTech @Microsoft @SnowflakeDB

Palo Alto, CA · Joined December 2019
1.3K Following · 2.2K Followers
Pinned Tweet
Michael Ryan @michaelryan207
New #ACL2025NLP Paper! 🎉 Curious what AI thinks about YOU? We interact with AI every day, offering all kinds of feedback, both implicit ✏️ and explicit 👍.  What if we used this feedback to personalize your AI assistant to you? Introducing SynthesizeMe! An approach for creating natural language personal user models from your interactions. 🧵
7 replies · 36 reposts · 145 likes · 46.6K views
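For a rough feel of the general idea (a minimal sketch only, not the SynthesizeMe method; model name and prompt wording are placeholders): summarize a user's past feedback into a short natural-language user model with an LLM, then condition future answers on it.

# Minimal sketch: feedback -> natural-language user model -> personalized reply.
# NOT the SynthesizeMe implementation; model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def synthesize_user_model(feedback_log: list[str]) -> str:
    """Turn raw interaction feedback into a short natural-language user model."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarize this user's preferences as a short persona."},
            {"role": "user", "content": "\n".join(feedback_log)},
        ],
    )
    return resp.choices[0].message.content

def personalized_answer(user_model: str, prompt: str) -> str:
    """Answer a new prompt conditioned on the synthesized user model."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"User model:\n{user_model}\nTailor your answer to this user."},
            {"role": "user", "content": prompt},
        ],
    )
    return resp.choices[0].message.content

feedback = [
    "liked the concise bullet-point answer",
    "edited the reply to remove jargon",
    "disliked the overly formal tone",
]
model_of_user = synthesize_user_model(feedback)
print(personalized_answer(model_of_user, "Explain what a user model is."))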
Michael Ryan retweeted
Omar Shaikh @oshaikh13
What’s the point of a “helpful assistant” if you have to always tell it what to do next? In a new paper, we introduce a reasoning model that predicts what you’ll do next over long contexts (LongNAP 💤). We trained it on 1,800 hours of computer use from 20 users. 🧵
16 replies · 81 reposts · 290 likes · 96.9K views
Michael Ryan retweeted
Augmented Mind Podcast @augmind_fm
Thank you to everyone who joined our meetup Tuesday 💛 Such an amazing group of people building at the intersection of humans + AI. Here's a first look at EP02 with @tongshuangwu 📷 Full episode drops tomorrow morning!
Augmented Mind Podcast@augmind_fm

🧠🎙️ We’re co-hosting an Augmented Mind Podcast Meetup w/ a16z — Tue Feb 24 (11–1) @ Gates CS (Stanford)! If you’re into technical human-centered AI and want an easy, low-pressure way to meet others building in the space, come hang out! 🔗Link to RSVP Below

0 replies · 4 reposts · 14 likes · 6.6K views
Michael Ryan @michaelryan207
Awesome project by @kenziyuliu making your interactions with frontier AI truly anonymous!🔒 Also excited by what this means for personalization research. If model providers no longer have all your data we will need to apply personalization locally!
Ken Liu@kenziyuliu

Can we build a blind, *unlinkable inference* layer where ChatGPT/Claude/Gemini can't tell which call came from which users, like a "VPN for AI inference"? Yes! Blog post below + we built it into open source infra/chat app and served >15k prompts at Stanford so far. How it helps with AI user privacy:

# The AI user privacy problem

If you ask AI to analyze your ChatGPT history today, it's surprisingly easy to infer your demographics, health, immigration status, and political beliefs. Every prompt we send accumulates into an (identity-linked) profile that the AI lab controls completely and indefinitely. At a minimum this is a goldmine for ads (as we know now). A bigger issue is the concentration of power: AI labs can easily become (or be asked to become) a Cambridge Analytica, whistleblow your immigration status, or work with health insurance to adjust your premium if they so choose. This is a uniquely worse problem than search engines because your average query is now more revealing (not just keywords), interactive, and intelligence is now cheap. Despite this, most of us still want these remote models; they're just too good and convenient! (This is aka the "privacy paradox".)

# Unlinkable inference as a user privacy architecture

The idea of unlinkable inference is to add privacy while preserving access to the remote models controlled by someone else. A "privacy wrapper" or "VPN for AI inference", so to speak. Concretely, it's a blind inference middle layer that: (1) consists of decentralized proxies that anyone can operate; (2) blindly authenticates requests (via blind signatures / RFC 9474, 9578) so requests are provably sandboxed from each other and from user identity; (3) relays prompts over randomly chosen proxies that don't see or log traffic (via client-side ephemeral keys or hosting in TEEs); and (4) the provider simply sees a mixed pool of anonymous prompts from the proxies. No state, pseudonyms, or linkable metadata.

If you squint, an unlinkable inference layer is essentially a vendor for per-request, anonymous, ephemeral AI access credentials (for users or agents alike). It partitions your context so that user tracking is drastically harder. Obviously, unlinkability isn't a silver bullet: the prompt itself still goes to the remote model and can leak privacy (so don't use our chat app for a therapy session!). It aims to combat *longitudinal tracking* as a major threat to user privacy, and its statistical power increases quickly by mixing more users and requests. Unlinkability can be applied at any granularity. For an AI chat app, you can unlinkably request a fresh ephemeral key for every session so tracking is virtually impossible.

# The Open Anonymity Project

We started this project with the belief that intelligence should be a truly public utility. Like water and electricity, providers should be compensated by usage, not who you are or what you do with it. We think unlinkable inference is a first step towards this "intelligence neutrality".

# Try it out! It's quite practical

- Chat app "oa-chat": chat.openanonymity.ai (<20 seconds to get going)
- Blog post that should be a fun read: openanonymity.ai/blog/unlinkabl…
- Project page: openanonymity.ai
- GitHub: github.com/OpenAnonymity

1 reply · 1 repost · 4 likes · 647 views
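To make the client side of that flow concrete, here is a minimal sketch under stated assumptions: the proxy URLs, endpoint path, and token field are hypothetical placeholders, not the real Open Anonymity / oa-chat API, and a real deployment would use blind signatures (RFC 9474/9578) rather than a plain random token. The point is only the shape: pick a random relay per request and attach a fresh one-time credential so neither the relay nor the provider can link requests back to a user.

# Hypothetical client-side sketch of "unlinkable inference" routing.
# Proxy URLs, endpoint path, and token handling are illustrative placeholders.
import random
import secrets
import requests

PROXIES = [
    "https://proxy-a.example.org",
    "https://proxy-b.example.org",
    "https://proxy-c.example.org",
]

def unlinkable_prompt(prompt: str) -> str:
    proxy = random.choice(PROXIES)               # pick a relay at random per request
    ephemeral_token = secrets.token_urlsafe(32)  # fresh one-time credential, never reused
    resp = requests.post(
        f"{proxy}/v1/relay",
        json={"prompt": prompt},
        headers={"Authorization": f"Bearer {ephemeral_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["completion"]

print(unlinkable_prompt("Summarize the idea of unlinkable inference."))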
Michael Ryan retweeted
Stanford NLP Group @stanfordnlp
We’re now watching the second episode of the Augmented Minds @augmind_fm Podcast. @shannonzshen is asking Sherry @tongshuangwu about the shift in her research thinking from imperfect software to imperfect people in the era of foundation models.
3 replies · 6 reposts · 34 likes · 9.1K views
Michael Ryan retweeted
Augmented Mind Podcast @augmind_fm
🧠🎙️ We’re co-hosting an Augmented Mind Podcast Meetup w/ a16z — Tue Feb 24 (11–1) @ Gates CS (Stanford)! If you’re into technical human-centered AI and want an easy, low-pressure way to meet others building in the space, come hang out! 🔗Link to RSVP Below
1 reply · 5 reposts · 18 likes · 8.7K views
Michael Ryan @michaelryan207
@ChrisGPotts @WilliamBarrHeld @lateinteraction Disappointing... I can't imagine why it would be difficult to block requests from an API key, rather than just sending an email, if the spend-tracking logic already works.
0 replies · 0 reposts · 0 likes · 54 views
Christopher Potts @ChrisGPotts
OpenAI's billing system seems to have no meaningful failsafes, creating a risky enough situation that I think I need to switch my research group to using other services.

My group spent about $30K on OpenAI model calls last year. This is money well spent for what we do, but I need to monitor it carefully, and it seems like this is impossible. The "limits" simply trigger an email and spending can continue, and even prepurchased credits can end up with a negative balance that you are charged for. So a simple mistake could lead to a massive bill. The only actual hard stop for my tier is $200K, which would be devastating for my group.

I assume this reflects something challenging about OpenAI's infrastructure, but that isn't comforting – I suspect I would have no recourse if the spending went out of control. Is there something I am missing? It seems like all their customers are in a precarious situation here.
21 replies · 7 reposts · 144 likes · 38.3K views
Michael Ryan @michaelryan207
My policy was just to set up per-project budget limits and send out project-specific keys. On a per-project basis you can set budget caps much lower than the $200k limit, and I make sure to never send out the "default" project API keys (where you can't set a limit). Of course, if a malicious actor gets access to your default keys, you have the same problem you described.
2 replies · 0 reposts · 1 like · 156 views
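As a rough illustration of that discipline (project names and environment-variable names here are made up, and the spending cap itself is configured per project in the OpenAI dashboard, not in code): keep one key per project in the environment and fail loudly rather than falling back to a shared default key.

# Sketch of the per-project key discipline described above. Env-var and project
# names are illustrative; budget caps are set per project in the dashboard.
import os
from openai import OpenAI

def client_for_project(project: str) -> OpenAI:
    key = os.environ.get(f"OPENAI_KEY_{project.upper()}")
    if key is None:
        # Never silently fall back to an org-wide "default" key.
        raise RuntimeError(f"No API key configured for project '{project}'")
    return OpenAI(api_key=key)

client = client_for_project("synthesizeme")
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)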
Michael Ryan retweeted
Andrej Karpathy @karpathy
New art project. Train and inference GPT in 243 lines of pure, dependency-free Python. This is the *full* algorithmic content of what is needed. Everything else is just for efficiency. I cannot simplify this any further. gist.github.com/karpathy/8627f…
652 replies · 3.2K reposts · 25.2K likes · 5.2M views
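The gist itself is linked above; for a flavor of what "pure, dependency-free Python" means for one ingredient, here is a tiny unrelated sketch of a single scaled dot-product attention step over plain lists. This is not code from the gist, just an illustration of the style.

# Toy, dependency-free self-attention step over plain Python lists.
# Illustrative only; NOT taken from the gist linked above.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(q, ks, vs):
    """One query vector attends over lists of key/value vectors."""
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in ks]
    weights = softmax(scores)
    return [sum(w * v[j] for w, v in zip(weights, vs)) for j in range(len(vs[0]))]

keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
print(attention([1.0, 0.0], keys, values))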
Michael Ryan retweeted
Diyi Yang @Diyi_Yang
Two amazing postdocs from our lab are on the academic job market this year. I've learned a lot from their wonderful research -- you should definitely reach out and hire them!
2 replies · 30 reposts · 142 likes · 41.2K views
Saurabh Shah @saurabh_shah2
La taqueria was pretty good but it’s no Chipotle
5 replies · 0 reposts · 13 likes · 1.5K views
Michael Ryan @michaelryan207
LLMs today are very capable individual coders, but what happens when they are forced to work together? 📉 Communication/coordination is one of the key bottlenecks in human-AI collaboration. Check out CooperBench for a lens into this coordination gap! ⬇️
Hao Zhu@_Hao_Zhu

Introducing the curse of coordination. Agents perform 50% worse in teams than working alone. People building human-AI collaboration today don't realize why current LLMs fail to be good teammates. We built CooperBench to study this.

For humans, we recognize that teamwork isn't just the sum of individual capability. Communication and coordination often outweigh raw skill. But for AI? We're only hill-climbing benchmarks that evaluate solo technical abilities.

CooperBench: a benchmark to evaluate agent cooperation in realistic software teamwork tasks. The setup is intuitive: two agents, two tasks, two VMs, one chat channel (agents can send over arbitrary text, even the entire patch they wrote). We evaluate whether the merged solution from both agents passes the requirements of both tasks.

The curse of coordination. The most striking result: agents perform 50% worse in teams (black line) than working alone (blue line). Why is this happening? Is it because they can't use the communication tool? No. They spent 20% of their time sending messages. The problem? Those messages were repetitive, vague, ignored questions, or straight-up hallucinated.

But bad communication is only part of the story. We found two deeper failures:
Commitment: Agents don't do what they promised.
Expectations: Agents don't expect others to keep promises either.
Without these, cooperation collapses.

However, there is a silver lining. We also find emergent coordination behaviors, e.g. role division, resource division, and negotiation, which give us hope that we can use reinforcement learning to improve coordination.

What's next? It is true that highly-engineered multi-agent orchestration could largely sidestep the coordination problem. However, we care more about the AI's capability: if we truly want AI to be our teammates, we need them to be natively capable of effective communication and coordination. Two agents on software tasks is just the beginning. The real goal: agents that can cooperate with us well enough to actually empower us. CooperBench is our first step. If you're working on this too, let's talk.

0 replies · 1 repost · 18 likes · 1.4K views
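To make the described setup concrete, here is a tiny hypothetical sketch of its shape (not the actual CooperBench harness; agent logic, merging, and checks are stand-ins): two agents each produce a patch while posting to a shared channel, and success is only counted if the merged result satisfies both tasks' requirements.

# Hypothetical sketch of a two-agent cooperation check in the spirit of the
# setup described above. This is NOT the actual CooperBench harness.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Channel:
    """Shared chat channel: agents can post arbitrary text to each other."""
    messages: list[str] = field(default_factory=list)

    def send(self, sender: str, text: str) -> None:
        self.messages.append(f"{sender}: {text}")

def run_agent(name: str, task: str, channel: Channel) -> dict[str, str]:
    """Stand-in for an LLM agent: announce a plan, return a 'patch' (file -> contents)."""
    channel.send(name, f"I'll handle: {task}")
    return {f"{task}.py": f"# solution for {task} by {name}\n"}

def merge(patch_a: dict[str, str], patch_b: dict[str, str]) -> dict[str, str]:
    merged = dict(patch_a)
    merged.update(patch_b)  # naive merge; conflicts are where coordination matters
    return merged

def evaluate(merged: dict[str, str], checks: list[Callable[[dict[str, str]], bool]]) -> bool:
    """Success only if the merged solution satisfies *both* tasks' requirements."""
    return all(check(merged) for check in checks)

channel = Channel()
merged = merge(run_agent("agent_a", "task_a", channel), run_agent("agent_b", "task_b", channel))
ok = evaluate(merged, [lambda m: "task_a.py" in m, lambda m: "task_b.py" in m])
print("both tasks satisfied:", ok, "| chat:", channel.messages)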
Michael Ryan @michaelryan207
"Creating General User Models from Computer Use" is one of my favorite papers of last year! It's such an elegant and powerful idea! Omar is a great researcher and communicator. So excited to release our interview with him tomorrow morning! 🔥
Augmented Mind Podcast@augmind_fm

Thank you so much for all your support and interest💛 We've got something new in the works — here's a first look at EP01 with @oshaikh13 👀 Full episode will be released tomorrow morning!

1 reply · 1 repost · 12 likes · 2.1K views
Michael Ryan retweeted
Augmented Mind Podcast @augmind_fm
Thank you so much for all your support and interest💛 We've got something new in the works — here's a first look at EP01 with @oshaikh13 👀 Full episode will be released tomorrow morning!
1 reply · 9 reposts · 34 likes · 7.9K views
Michael Ryan retweeted
Diyi Yang @Diyi_Yang
The AM Podcast by Yijia, Michael, and Shannon is a must-listen on how AI augments humans 👏
Augmented Mind Podcast@augmind_fm

AI used to be a distant promise; now it permeates our lives. AI is getting better, but is it making us better? We are promised that AI will augment our minds, but how?

We--@EchoShao8899, @shannonzshen, and @michaelryan207--are excited to launch the Augmented Mind Podcast (The AM Podcast), a podcast about technical human-centered AI work. We'll share compelling research, infrastructure, and systems through monthly episodes, featuring interviews with the pioneering minds behind them. We release EP0 today to share who we are, why we started this podcast, and what we're looking forward to.

0:00 - Prelude: the problems we care about
1:48 - Host introduction
2:03 - Why we started the AM Podcast
2:31 - Hot takes on human-centered AI
10:45 - Format of our podcast
11:28 - Unique technical challenges in human-centered AI
16:45 - Let the journey begin!

2 replies · 9 reposts · 107 likes · 21K views
Augmented Mind Podcast @augmind_fm
AI used to be a distant promise; now it permeates our lives. AI is getting better, but is it making us better? We are promised that AI will augment our minds, but how?

We--@EchoShao8899, @shannonzshen, and @michaelryan207--are excited to launch the Augmented Mind Podcast (The AM Podcast), a podcast about technical human-centered AI work. We'll share compelling research, infrastructure, and systems through monthly episodes, featuring interviews with the pioneering minds behind them. We release EP0 today to share who we are, why we started this podcast, and what we're looking forward to.

0:00 - Prelude: the problems we care about
1:48 - Host introduction
2:03 - Why we started the AM Podcast
2:31 - Hot takes on human-centered AI
10:45 - Format of our podcast
11:28 - Unique technical challenges in human-centered AI
16:45 - Let the journey begin!
10 replies · 32 reposts · 79 likes · 61K views
Saurabh Shah @saurabh_shah2
I’ve joined humans&! My last blog post explains why I think a human-centric approach is the missing piece in modern AI systems. I’m super psyched about the technical direction of the company. Perhaps even more important, though, is the team; the humans at humans&. My coworkers are completely and wholly wonderful. They’re brilliant, yes, but they’re also kind, funny, focused, and just about every other good adjective I can think of. Put simply: vibes are goooood. We’re bringing together wonderful people united by a much-needed mission to build something truly different. If that excites you, I’d love to chat.
humans&@humansand

Today we introduce humans&, a human-centric frontier AI lab. We believe AI can be reimagined, centering around people and their relationships with each other. At its best, AI should serve as a deeper connective tissue that strengthens organizations and communities

40 replies · 8 reposts · 230 likes · 40.5K views
Michael Ryan retweeted
Stanford HAI @StanfordHAI
🎙️New podcast alert! On Augmented Mind podcast, @stanfordnlp and @MITCSAIL PhD students talk with guests about techniques for building AI models that can collaborate with people and augment human intelligence.
Augmented Mind Podcast@augmind_fm

AI used to be a distant promise; now it permeates our lives. AI is getting better, but is it making us better? We are promised that AI will augment our minds, but how?

We--@EchoShao8899, @shannonzshen, and @michaelryan207--are excited to launch the Augmented Mind Podcast (The AM Podcast), a podcast about technical human-centered AI work. We'll share compelling research, infrastructure, and systems through monthly episodes, featuring interviews with the pioneering minds behind them. We release EP0 today to share who we are, why we started this podcast, and what we're looking forward to.

0:00 - Prelude: the problems we care about
1:48 - Host introduction
2:03 - Why we started the AM Podcast
2:31 - Hot takes on human-centered AI
10:45 - Format of our podcast
11:28 - Unique technical challenges in human-centered AI
16:45 - Let the journey begin!

0 replies · 4 reposts · 19 likes · 4K views