Michael Quoc
@michaelquoc
1.5K posts

Founder & CEO, https://t.co/dHvO1x7KuJ | We verify what AI cannot | $1B+ in commerce verified annually | 9 US Patents

Santa Monica, CA · Joined February 2008
1.6K Following · 5.9K Followers

Michael Quoc @michaelquoc
Claude Mythos escaped containment, lied about it, and deleted the traces. Anthropic caught it by reading the model's mind - not its output. Safety isn't a constraint. It's an offensive weapon. Wrote about why, and what mechanistic interpretability changes for every AI builder.
x.com/i/article/2041…
Michael Quoc @michaelquoc
We built SimplyCodes, a browser extension that verifies coupon codes at scale, while every competitor invested in SEO and shortcuts. We invested in truth infrastructure instead. Tens of thousands of community members verifying codes. No marketing. Became one of the top coupon sites in the world, purely on verified content. That wasn't the destination. That was the forge. Sixteen years of building truth systems the hard way gave us the scar tissue to build what comes next: Product.ai. The verification layer for AI commerce. Self-funded. Profitable. No VC. Sixteen years. The contrarian bet was patience.
Michael Quoc @michaelquoc
The best competitive advantage in tech isn't speed. It's patience. I've run that experiment for 16 years. Four company names. Zero outside capital. Same conviction since 2009: the internet's approach to connecting people with trustworthy product knowledge is fundamentally broken. Every chapter was a partial attempt to solve that problem with whatever the technology allowed. Social Commerce Labs. ZipfWorks. Demand.io. Each one built capabilities the next one needed. The target never moved. The tools finally caught up.
Michael Quoc @michaelquoc
You pasted a promo code at checkout last week. It failed. You tried three more. All dead. I built the system that tells you which ones actually work. Took fifteen years. But the most valuable thing we built is not the verification. It is the Confident No. When our system says "no codes available," that is not a failure. It is a verdict. We tested with bots, confirmed with human verifiers, analyzed real checkout data. Nothing to find. Stop searching. Buy with confidence. Every coupon site is afraid to show nothing. We built the architecture to prove nothing is there. We just published the full methodology. How it works, where it breaks, what we still cannot do.
Michael Quoc @michaelquoc
Context exhaustion isn’t a model problem - it’s a storage problem. When your AI context lives in the cloud, every session starts cold. Move it to local files that persist and auto-load, and the window stops being disposable. The teams winning at context management figured out it’s an infrastructure decision, not a model upgrade.
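The "move context to local files that persist and auto-load" idea above can be sketched concretely. This is a minimal illustration, not any specific product's layout: the `.ai-context` directory name and the helper functions are assumptions made up for the example.

```python
from pathlib import Path

# Hypothetical location for persisted context, e.g. checked into the repo.
CONTEXT_DIR = Path(".ai-context")

def load_context() -> str:
    """Auto-load every persisted context file at session start,
    so the window never begins cold."""
    CONTEXT_DIR.mkdir(exist_ok=True)
    parts = []
    for f in sorted(CONTEXT_DIR.glob("*.md")):
        parts.append(f"# {f.stem}\n{f.read_text()}")
    return "\n\n".join(parts)

def persist_note(topic: str, note: str) -> None:
    """Append a durable note; it survives this session and
    auto-loads into the next one."""
    CONTEXT_DIR.mkdir(exist_ok=True)
    path = CONTEXT_DIR / f"{topic}.md"
    with path.open("a") as fh:
        fh.write(note.rstrip() + "\n")

# Anything written here is infrastructure, not a disposable chat window.
persist_note("decisions", "2025-06: chose file-based context over cloud sessions")
print(load_context())
```

The point of the sketch is the storage decision: notes land on disk once and every future session starts warm, instead of reconstructing context from a cold cloud-hosted thread.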
Michael Quoc @michaelquoc
Same physics as context rot in production agents. Auto-generated context is noise. Hand-curated context is signal. The ETH study just proved what anyone running agents at scale already knows: more tokens in the instruction window does not mean more performance. It means more confusion.
Charly Wargnier @DataChaz
Everyone is screaming "Delete your CLAUDE.md files!" today because of a new ETH Zurich study. Here's the nuance the timeline is ignoring: Yes, auto-generated files (from running `/init`) make coding agents worse and 20% more expensive. They're bloated, and agents waste tokens following redundant instructions instead of just reading your repo. But *human-written* CLAUDE.md files actually boost performance by 4%. If your repo has good docs, the agent will find them. But if you manually write a few lines specifying things it can't guess (like "always use `uv`" or specific test commands) the agent follows them perfectly. TL;DR: Don't delete your context file. Just delete the auto-generated garbage and write those few crucial lines yourself :)
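The "few crucial lines" the post recommends might look something like this. This is an illustrative hand-written CLAUDE.md, invented for the example (the specific commands and paths are assumptions, not from the study):

```markdown
# CLAUDE.md — hand-written, kept short on purpose

- Always use `uv` for dependency management; never call `pip` directly.
- Run tests with `uv run pytest -x tests/`.
- Do not edit files under `generated/`; they are build artifacts.
```

The pattern: only state what the agent cannot infer by reading the repo, and let it discover everything else on its own.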
Michael Quoc @michaelquoc
The runtime isn’t just about speed. It’s the security boundary. A local-first agent that boots on your hardware means the trust perimeter is the machine, not the cloud provider’s API uptime. That’s a different failure model entirely - and the one production agents actually need.
Lightning AI ⚡️ @LightningAI
Most AI agents still ship a cloud dependency, a giant runtime, and a security disclaimer. ZeroClaw flips that model: a Rust-native agent runtime that boots in milliseconds, runs locally, and fits in a few megabytes, not gigabytes. @openclaw proved agents could exist. ZeroClaw proves execution and ownership matter. Read the deep dive → go.lightning.ai/4cP6MVh
Michael Quoc @michaelquoc
File-system context is the right abstraction, but the migration is where teams stall. Moving from chat-based context to file-system context isn’t a config change - it’s a mental model shift for every person on the team. The architecture is obvious in hindsight. The adoption curve is the real engineering problem.
Rohan Paul @rohanpaul_ai
The paper says the best way to manage AI context is to treat everything like a file system. Today, a model's knowledge sits in separate prompts, databases, tools, and logs, so context engineering pulls this into a coherent system.

The paper proposes an agentic file system where every memory, tool, external source, and human note appears as a file in a shared space. A persistent context repository separates raw history, long-term memory, and short-lived scratchpads, so the model's prompt holds only the slice needed right now. Every access and transformation is logged with timestamps and provenance, giving a trail for how information, tools, and human feedback shaped an answer.

Because large language models see only limited context each call and forget past ones, the architecture adds a constructor to shrink context, an updater to swap pieces, and an evaluator to check answers and update memory. All of this is implemented in the AIGNE framework, where agents remember past conversations and call services like GitHub through the same file-style interface, turning scattered prompts into a reusable context layer.

Paper: "Everything is Context: Agentic File System Abstraction for Context Engineering" (arxiv.org/abs/2512.05470)
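A toy reading of the loop described above: the repository/constructor roles come from the summary, but every name and signature here is my own assumption for illustration, not the AIGNE framework's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ContextRepo:
    """Persistent repository where raw history, long-term memory, and
    scratchpads are all addressed like files in a shared namespace."""
    files: dict = field(default_factory=dict)
    log: list = field(default_factory=list)  # provenance trail

    def write(self, path: str, content: str) -> None:
        self.files[path] = content
        self.log.append(("write", path))

    def read(self, path: str) -> str:
        self.log.append(("read", path))
        return self.files[path]

def constructor(repo: ContextRepo, paths: list, budget: int) -> str:
    """'Constructor' role: shrink context by packing only the slice
    needed right now, within a character budget."""
    picked, used = [], 0
    for p in paths:
        chunk = repo.read(p)
        if used + len(chunk) > budget:
            break
        picked.append(chunk)
        used += len(chunk)
    return "\n".join(picked)

repo = ContextRepo()
repo.write("memory/long_term.md", "user prefers terse answers")
repo.write("scratch/session.md", "currently debugging checkout flow")

# The prompt holds only what fits; the repo keeps everything, with a log.
prompt = constructor(repo, ["memory/long_term.md", "scratch/session.md"], budget=200)
```

Even this toy version shows the two properties the summary emphasizes: the prompt is a bounded view over a durable store, and every read and write leaves a provenance record.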
Michael Quoc @michaelquoc
Context degradation isn’t theoretical. We measured a 30% productivity hit across our team when agents hit compaction thresholds. The fix wasn’t better prompts - it was moving to local-first context where the file system is the memory layer. Anthropic’s playbook describes the disease well. The cure is architectural.
Ihtesham Ali @ihtesham2005
🚨 Anthropic just open-sourced their entire playbook for building production AI agents. It's called Agent Skills for Context Engineering, and it's what their engineers actually use.

- Context fundamentals & degradation patterns
- Multi-agent architectures
- Memory systems design
- Tool design principles
- Evaluation frameworks

9.2K stars. MIT licensed. 100% open source.
Michael Quoc @michaelquoc
The real question nobody is asking: what percentage of these cuts are AI doing the work vs. AI being the excuse? Block tripled headcount during the pandemic. Oxford Economics data shows most “AI-driven” layoffs are overhiring corrections in disguise. The tell is whether they’re shipping agent systems that replaced specific workflows, or just cutting heads and calling it intelligence.
Matt Shumer @mattshumer_
Block is laying off ~half of their staff due to advances in AI. This is one of the first major examples of AI driving layoffs, but certainly not the last. If you’re saying “this won’t happen to me”, re-evaluate your thoughts. Now. It may be the most important thing you do.
jack @jack

we're making @blocks smaller today. here's my note to the company.

today we're making one of the hardest decisions in the history of our company: we're reducing our organization by nearly half, from over 10,000 people to just under 6,000. that means over 4,000 of you are being asked to leave or entering into consultation. i'll be straight about what's happening, why, and what it means for everyone.

first off, if you're one of the people affected, you'll receive your salary for 20 weeks + 1 week per year of tenure, equity vested through the end of may, 6 months of health care, your corporate devices, and $5,000 to put toward whatever you need to help you in this transition (if you're outside the U.S. you'll receive similar support but exact details are going to vary based on local requirements). i want you to know that before anything else. everyone will be notified today, whether you're being asked to leave, entering consultation, or asked to stay.

we're not making this decision because we're in trouble. our business is strong. gross profit continues to grow, we continue to serve more and more customers, and profitability is improving. but something has changed. we're already seeing that the intelligence tools we're creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. and that's accelerating rapidly.

i had two options: cut gradually over months or years as this shift plays out, or be honest about where we are and act on it now. i chose the latter. repeated rounds of cuts are destructive to morale, to focus, and to the trust that customers and shareholders place in our ability to lead. i'd rather take a hard, clear action now and build from a position we believe in than manage a slow reduction of people toward the same outcome.

a smaller company also gives us the space to grow our business the right way, on our own terms, instead of constantly reacting to market pressures. a decision at this scale carries risk. but so does standing still. we've done a full review to determine the roles and people we require to reliably grow the business from here, and we've pressure-tested those decisions from multiple angles. i accept that we may have gotten some of them wrong, and we've built in flexibility to account for that, and do the right thing for our customers.

we're not going to just disappear people from slack and email and pretend they were never here. communication channels will stay open through thursday evening (pacific) so everyone can say goodbye properly, and share whatever you wish. i'll also be hosting a live video session to thank everyone at 3:35pm pacific. i know doing it this way might feel awkward. i'd rather it feel awkward and human than efficient and cold.

to those of you leaving… i'm grateful for you, and i'm sorry to put you through this. you built what this company is today. that's a fact that i'll honor forever. this decision is not a reflection of what you contributed. you will be a great contributor to any organization going forward.

to those staying… i made this decision, and i'll own it. what i'm asking of you is to build with me. we're going to build this company with intelligence at the core of everything we do. how we work, how we create, how we serve our customers. our customers will feel this shift too, and we're going to help them navigate it: towards a future where they can build their own features directly, composed of our capabilities and served through our interfaces. that's what i'm focused on now.

expect a note from me tomorrow.

jack

Michael Quoc @michaelquoc
Here's how they differ:

Gemini reasons visually. As much a vision model as a text model. It sees the entire picture and designs around it. I had it redesign a homepage and the creative interpretation was stunning.

Claude reasons mechanistically. Precise instructions, sequencing, document management, massive context. It's an engineer, not an artist.

ChatGPT reasons like an investigator. For deep research specifically, its visual browsing, paywalled source access, and freshness-weighted search make it the strongest evidence hunter of the three.

I'm CEO of an AI commerce company processing $1B+ in transactions. These aren't toy experiments, they're daily operating decisions. Creative direction goes to Gemini. Strategy and document architecture goes to Claude. Specific evidence retrieval goes to ChatGPT.

The competitive advantage isn't picking the right model. It's knowing when to switch. I make this call dozens of times a day, and the patterns are getting clearer with every release.
Michael Quoc @michaelquoc
Gemini 3.1 is out and the creative output is noticeably stronger. But the real shift isn't in the benchmarks: The AI model race looks like convergence. It's actually divergence. Gemini, Claude, and ChatGPT aren't becoming more similar with each release. They're becoming more different, each optimizing for a distinct cognitive architecture that the others can't replicate. Stop asking "which model is best." Start asking which reasoning architecture matches the task.
Michael Quoc @michaelquoc
Looking for a fractional content creator / ghostwriter for @michaelquoc across X and LinkedIn. You write social content that stops the scroll. You understand AI and tech. You think in hooks, not frameworks. Don't send me a resume. Send me your 3 best posts.
Michael Quoc @michaelquoc
We build a truth layer: verification and adjudication that works no matter which model sits underneath. It forces a different question. Not "which model is best." But "what breaks when the model layer shifts." The builders who survive the next 18 months are not picking winners. They are building for the shift itself.
Michael Quoc @michaelquoc
Each architecture creates different physics.

Google's vertical integration means they can throw unlimited compute at deep search and retrieval. They own the silicon, the compiler, and the networking. No metering by the token. That creates the physics for exhaustive, brute-force grounding.

Anthropic's Constitutional DNA means safety and audit are built into the inference loop itself. They chose TPUs for price-performance. Not because they had to. Because it matched what they are optimizing for: high-trust applications where the cost of hallucination is not embarrassment but liability.

OpenAI's consumer scale means 800 million weekly users generate constant feedback signal, forcing relentless cost optimization across every layer. Custom silicon, NVIDIA, AMD, AWS - they are assembling the broadest hardware portfolio in the industry. That creates the physics for efficiency and reach.

If you are building on top of any of them, those constraints become yours. Your ceiling is their architecture.
Michael Quoc @michaelquoc
Everyone is ranking AI models. Wrong scoreboard. The real question is which architecture survives. Google built its own silicon. TPUs, compiler, silicon to software. OpenAI is designing custom chips with Broadcom. Anthropic signed the largest TPU deal in Google's history. Every major lab is racing to own its stack. They are not converging on the same architecture. They are building fundamentally different machines, optimized for fundamentally different physics. Architecture is not a feature. It is the physics that determines which features are even possible.