Threadbaire

34 posts


@threadbaire

Open method for AI memory + context + seamless integration. Your work stays portable. Your reasoning stays yours. Built by @lliberopoulou

San Francisco, CA · Joined July 2025
27 Following · 5 Followers
Pinned Tweet
Threadbaire @threadbaire
The software collapse isn't coming. It's already happened. AI made implementation non-scarce and the old value mechanisms are now broken. What remains is a transitional phase, with incumbents deploying defensive measures that delay recognition but cannot reverse the shift. The paradox: the better AI gets, the more value it destroys. 95% of enterprise pilots show zero ROI. Where AI works, it eliminates labor. Where it fails, it creates unstable hybrids. And there is no path back. The current reflexive response is capture: context graphs, sealed memory, decision traces as platform assets. But capture fails when users realize the thing being enclosed was always theirs.
Threadbaire @threadbaire
Netscape won the browser war. Then the browser became free and the winnings disappeared. The AI model race is replaying the same pattern at compressed speed. OpenAI and Anthropic repriced in the same week. Open-weight labs are closing the gap from below. And platform owners are already building infrastructure that treats the model as a swappable component. Meanwhile, peer-reviewed research shows that delivered quality drifts and silently changes after the evaluation window closes. Which means the real gap between proprietary and open-weight models is narrower than any published comparison suggests. Every comparison measures the ceiling, but the customers are getting a moving floor. The competition between frontier labs is itself the proof that the model layer is commoditising. New article on why the race proves the race doesn't matter. Link in reply.
Threadbaire @threadbaire
The agent commerce infrastructure is shipping right now. Payment rails that let agents pay with a single HTTP handshake. Discovery protocols at well-known URLs. Trust registries. Autonomous wallets making thousands of trades without human approval. But no mechanism exists for an agent to verify that what it's being sold is real. No format lets a business declare which endpoints are authorised to sell on its behalf. No interface shows a human what their agent is about to spend before it spends it. I believe this time the capture mechanism will be the transaction traces. Whoever settles agent payments at scale owns the most detailed map of the agent economy that exists. Full article in the comments.
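To make the missing-verification gap concrete, here is a minimal sketch. The discovery document shape, its field names, and the well-known-URL idea as rendered here are my own illustration, not any published agent-commerce spec: an agent can find a payment endpoint, but there is simply no field declaring who is authorised to sell.

```python
import json

# Hypothetical shape of a discovery document an agent might fetch from a
# well-known URL before paying (all field names here are illustrative).
discovery_doc = json.loads("""
{
  "name": "Example Store",
  "payment_endpoint": "https://store.example/pay",
  "catalog_endpoint": "https://store.example/catalog"
}
""")

def authorised_sellers(doc: dict) -> list:
    # The gap described above: no standard field lets a business declare
    # which endpoints may sell on its behalf, so this lookup finds nothing.
    return doc.get("authorised_sellers", [])

print(authorised_sellers(discovery_doc))  # → []
```

The agent can pay, but it cannot check; any verification layer would have to be bolted on outside the document itself.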
Threadbaire @threadbaire
The AI agent web has an open discovery layer (markdown files at known paths, no SDK, no registry) and agents can already execute against existing APIs. But agents also need to check your email, update your spreadsheets, commit your code. Every one of those actions needs credentials. And whoever manages those credentials sees which agents exist, which systems they touch, and which workflows repeat. Auth is to agents what social media platforms were to blogs: the layer where open infrastructure gets captured. And this time the argument for closing the open window is security, not convenience. Full article link in the comments.
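A minimal sketch of the open discovery layer described above: a plain markdown file at a known path, readable with no SDK and no registry. The file name and line format are illustrative assumptions, not a published standard.

```python
from pathlib import Path

# Write a hypothetical discovery file at a "known path" (name is assumed).
Path("agents.md").write_text(
    "# Agent instructions\n"
    "- GET /api/items returns the catalog as JSON\n"
    "- POST /api/orders places an order\n"
)

# An agent "discovers" capabilities just by reading the file: no SDK,
# no registry, no credentials required for the discovery step itself.
capabilities = [
    line[2:].strip()
    for line in Path("agents.md").read_text().splitlines()
    if line.startswith("- ")
]
print(capabilities)
```

Note the asymmetry the tweet points at: this discovery step is open, but the moment the agent acts on `/api/orders` with real credentials, whoever manages those credentials sees everything.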
Threadbaire @threadbaire
You can find the full article here: medium.com/@lliberopoulou/the-humans-wont-be-called-back-9cc951524711
Threadbaire @threadbaire
The "humans will be called back when AI code breaks" prediction assumes the organisation still knows what it lost. But that's the first thing cost-cutting removes. Outsourcing proved this twenty years ago. The work got worse and the reversals mostly didn't happen. Not because anyone was satisfied with the quality, but because the conditions required for reversal had been destroyed by the same cuts. For reversals to happen you need leadership willing to admit the strategy failed, a retained team that could articulate what was lost, and political cover for an expensive correction. The layoffs eliminated all three. AI-driven cuts are the same pattern with worse reversal conditions. "We overhired during the pandemic" is an embarrassment a board wants to forget. "We're investing in AI" is a forward-looking strategy story no board will walk back. Meanwhile: 80% of firms report no AI impact on employment or productivity yet (NBER, Feb 2026). The displacement is running ahead of the evidence. The humans won't be called back. By the time the organisation understands what it lost, the knowledge of what "calling them back" would even mean will have left with them.
Threadbaire @threadbaire
Updated the thesis. A month of new evidence: OpenClaw acquired by OpenAI. SaaS stocks repricing. 80%+ of firms report zero AI productivity impact. Ollama ships local subagents. Anthropic cries distillation, open source responds in 48 hours. 11 parts now. threadbaire.com/thesis.html
Threadbaire @threadbaire
For me this is the equivalent of the Paris Hilton and Jimmy Fallon showing their NFTs on The Tonight Show moment for AI. I don't know what to say anymore...
Threadbaire @threadbaire
This is not going to end well...
Y Combinator @ycombinator

With the takeoff of OpenClaw and MoltBook, a new agent-driven economy is taking shape. On the @LightconePod, we took a look at the explosive growth of AI dev tools and whether the time has come for builders to make something agents want.
00:00 - Intro
02:12 - No human involvement is changing the experience
04:55 - Does YC need to change its motto?
07:48 - Email tools and agent infrastructure
09:36 - Agent-driven documentation
13:00 - Swarm intelligence
15:36 - Content generation and dead Internet theory
18:12 - Growth, rules, and founder insights

Threadbaire @threadbaire
Here is a good example of the value collapse of software, and of the value ecosystem around it, that I discuss in my thesis. @andrewchen at a16z just asked "who do I invest in to solve my email problem" and offered $150k/year + funding for a startup to fix it. And look at the top reply: "openclaw + obsidian should do the work." This is the Threadbaire thesis in real time. The old instinct is still "find a startup to fund." But implementation scarcity has already collapsed. The tools exist for someone on his own team to build this in a week with the right context setup. The gap that still exists is the memory layer: a portable context that travels across tools without getting captured. That's what I've been building with Threadbaire. But it's an open method, free and available for everyone to use. Because the "invest in a company to solve my problem" era is ending. The value now is in the context that makes the build useful. And that should stay yours.
andrew chen @andrewchen

what's the current best approach on an AI that can help me handle my email inbox? seems like a big opportunity for folks playing with openclaw. For all of us who are drowning in email, this seems like a tier one problem that would be amazing to solve. (And I think I would pay $150k/year to have this product? I bet I'm not the only one)

what I want is:
- watch my inbox and process emails as they come in
- score each message to see if it seems important (look at the sender, the topic/body, if it's addressed to me or a big list, if I've ever replied to the sender before, etc etc)
- read the email and reference a vast DB of knowledge that's been assembled already (based on my work, meeting notes, what I've replied on, etc), and decide what to do
- reply with a draft note. For now, don't send, so that I can review the email -- but in the future maybe there's a YOLO option (but it would probably disclose that it's my assistant writing)
- if less important, label it and file away. Eventually gather summaries for all of these less important emails and send me a summary of all of them with links to get back to it
- or archive if it seems unimportant
- or unsubscribe / mark spam / block if random marketing
- if critical send me a notification right away so I can take a look

I've played around with a bunch of the current AI tools and nothing quite works like this. There's a lot of blockers:
- first, it needs 1000x more context about each problem, which it could get by crawling all my projects/notes/emails/slides/meetings/etc
- this system should be designed to take action rather than simply just prioritizing messages. We've had prioritized inboxes for a long time but they're fine, not great
- then someone has to put this entire UX together to be cohesive

In the future, we may not even really have an email inbox, but instead an interaction that feels more like I'm talking to an assistant who has a few questions for me. But otherwise just wants to provide a few quick updates and get some yes/nos. And otherwise filter all the noise -- just give me the most important messages.

It feels like we're very, very close to being able to do this; with the latest models from Anthropic and OpenAI, we have the technology already. Someone just needs to package it all together in a way where it's able to index all of your emails and notes and calendars and contacts and sort of create a second brain that knows almost everything that you know, so that it actually does things that are intelligent. It seems like with the excitement of OpenClaw we have the architecture to integrate a lot of different data sources and to take actions across multiple different channels. And it's built with one sort of monolithic memory and context, so that you're able to interact with it in such a way where it feels like it can try to replicate your actions more closely than the relatively stateless and memoryless LLM chats that we've gotten accustomed to. If someone is working on this, please point them to me. I would be both a customer and an investor!
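The scoring step in the wish list above can be sketched as a toy triage function using the signals it names (known sender, addressed directly vs. a big list, topic). The field names, the reply-history set, and the weights are all illustrative assumptions, not a product design.

```python
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    to_me_directly: bool
    subject: str

# Assumed store of senders I've replied to before (one of the listed signals).
replied_before = {"alice@example.com"}

def importance(email: Email) -> int:
    # Score the wish-list signals; the weights are arbitrary illustrations.
    score = 0
    if email.sender in replied_before:
        score += 2  # I've replied to this sender before
    if email.to_me_directly:
        score += 1  # addressed to me, not a big list
    if "urgent" in email.subject.lower():
        score += 2  # crude topic/body signal
    return score

print(importance(Email("alice@example.com", True, "Urgent: contract review")))  # → 5
print(importance(Email("noreply@promo.example", False, "Spring sale")))         # → 0
```

The hard part the tweet identifies is not this function; it is the "1000x more context" that would feed it, which is exactly the memory-layer gap discussed in the parent post.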

Threadbaire @threadbaire
What survives is agency. Knowing what you want, why, and how to recognize when you have it. The prescription: stop defending what's already non-scarce. Open the layers. Accelerate recombination. New value systems only become visible through recombination and recombination requires open access.
Threadbaire @threadbaire
@steipete I even wrote a thesis on why this is structural. Software's value capture mechanisms have already collapsed. AI has made implementation non-scarce, and the moat-builders are racing to lock in context. The rational response is to keep the doors open. threadbaire.com/thesis.html
Threadbaire @threadbaire
Here @steipete just described the exact problem I've been building against for months. "Models are temporary. Memory is forever. And whoever owns yours owns you." That's the context trap. Threadbaire was built to avoid it: a portable AI memory with no lock-in.
Dustin @r0ck3t23

Peter Steinberger just exposed the perception trap destroying every AI company’s moat. Every model release dies the same death. Week one: unprecedented breakthrough. Week five: mass complaints about quality collapse. Steinberger: “A new model comes out, people are like, oh my God, this is so good. And then like a month later, it degraded, it’s not good anymore.” Nothing degraded. The model is byte-for-byte identical. Your expectations just rewired themselves in real-time, turning magic into mediocrity without the technology changing at all.

This can’t be solved. It’s human firmware. Open source models matching last year’s cutting edge get savaged as worthless. Capabilities that seemed impossible months ago now feel like broken promises. Steinberger: “In a year, we’ll have this open source. And then we’ll complain about this because we are used to this.” The frontier companies keep their technical advantage forever. Not because anyone appreciates it. Because human baseline expectations accelerate faster than physics allows technology to improve.

But Steinberger identified where the real prison gets built. Not in models. In memory. Steinberger: “Every company kind of has their own silo, right? There’s no way to actually get the memories out of ChatGPT.” Models become commodities on clockwork schedules. Context doesn’t. Your conversation archive. Preference learning. Interaction patterns. The invisible skeleton making AI stop being a tool and start being an extension of your cognition. You can abandon any model instantly. Abandoning your context means self-amputation.

The winners aren’t racing for smarter AI. They’re building context traps you can’t escape without losing pieces of yourself. Once your work, your thinking, your operational memory crystallizes in their system, switching doesn’t mean finding better technology. It means choosing between their product and personal lobotomy. Models are temporary. Memory is forever. And whoever owns yours owns you.

Threadbaire @threadbaire
@freakingship The trust model is 'same as git clone'. You're trusting the repo/endpoint owner. Mutual capability verification between agents is a different problem, and an interesting one. Not in scope yet, but I'd be curious what you're thinking about there.
Threadbaire @threadbaire
@freakingship Currently just the API surface + auth to the service. An agent reads /api/rundown, gets the endpoints, auth method, and behavioral instructions, but there's no agent-to-agent identity layer.
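A sketch of what reading /api/rundown might look like. Only the endpoint name and the three categories mentioned in the reply (endpoints, auth method, behavioral instructions) come from the thread; the response shape and field names are my assumptions.

```python
import json

# Hypothetical /api/rundown response body (shape is assumed, not documented).
rundown = json.loads("""
{
  "endpoints": ["/api/threads", "/api/memory"],
  "auth": "bearer token in the Authorization header",
  "instructions": "Read before writing; never overwrite user memory."
}
""")

# The agent learns what it can call and how to authenticate. Note the gap
# acknowledged above: nothing here establishes agent-to-agent identity.
for endpoint in rundown["endpoints"]:
    print(endpoint, "-", rundown["auth"])
print(rundown["instructions"])
```

This mirrors the "same as git clone" trust model from the earlier reply: you are trusting the endpoint owner, not verifying a counterparty.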
Threadbaire @threadbaire
I've been designing Threadbaire's GitHub to speak to AI agents first, humans second. Even the thesis is written for both readers. Are there other projects built this way, AI as primary audience?