CoorgertlnL
@CoorgertlnL
4.3K posts
Somewhere out there · Joined May 2022
1.6K Following · 1.2K Followers
CoorgertlnL reposted
seb @sebbsssss
When I first launched @cludeproject, there were fewer than 10 users, mostly family and friends. I didn't have a team or funding. Just a conviction that AI agents need real memory, not chat logs stuffed into a context window. I'm humbled that we managed to snatch the 4th spot in the @colosseum agent hackathon, and since then we have grown to 5000+ installs and 600+ real users. We store more than 2M memories today. None of this was paid acquisition. People found Clude because they ran into the same problem we did: their agents kept forgetting everything. Still a lot to build. But going from 10 users to 600 in a month, off the back of a hackathon project, tells me this isn't just our problem.
CoorgertlnL reposted
seb @sebbsssss
Every AI chat stores your data on their servers, forgets you every session, and trains on your conversations. If you ask me, I don't like AI surveillance. Clude Chat on @AskVenice is different:
• Chat history lives on your device only
• Decentralized GPUs with zero data retention
• End-to-end encryption, decrypted only in verified hardware enclaves
• Memory that persists and learns who you are
An AI that remembers you. Infrastructure that can't. ➡️ Coming soon. Vires In Numeris
CoorgertlnL reposted
seb @sebbsssss
Every time a new AI model drops with a 1-million-token context window, the tech world celebrates. And for deep, single-session analysis of massive documents, it truly is an incredible leap forward. But using these massive context windows as a substitute for long-term memory is fundamentally unscalable. Here is the educational breakdown of why.

The Hidden Cost of Context
When you use a context window, the model processes every single token, every single time you hit send. If you load up 1M tokens and ask a simple question, the LLM still reads all 1M tokens before answering. With a model like GPT-4.5, that scales to a staggering $75 per query.

The Architectural Fix: Retrieval
Memory retrieval systems (like Clude) take a completely different approach. Instead of forcing the AI to reread the entire library every time, the system runs a quick vector search to find only the relevant information, sending a tiny payload (usually ~2,000 tokens) to the LLM.

The Math at Scale
Because retrieval sends only what matters, your costs stay flat. If you run 1,000 queries a day against a 500K-token memory bank:
• Context stuffing (GPT-4o): ~$37,530 / month
• Memory retrieval (Clude + GPT-4o): ~$182 / month
You get the exact same answers from the exact same models, but it costs ~200x less. Giant context windows are amazing tools, but as a memory layer they force your costs to scale linearly with history, while memory retrieval keeps them flat. Pick your poison.
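The cost arithmetic in that thread can be checked in a few lines. This is a minimal sketch, assuming a GPT-4o input price of $2.50 per 1M tokens (the rate implied by the tweet's own figures, not a quote of current pricing) and counting input tokens only, which is why the totals land slightly under the tweet's ~$37,530 and ~$182 (those presumably also include output and embedding costs):

```python
# Sketch of the context-stuffing vs. retrieval cost comparison.
# PRICE_PER_TOKEN is an assumed GPT-4o input rate ($2.50 / 1M tokens);
# real pricing may differ, and output tokens are ignored here.

PRICE_PER_TOKEN = 2.50 / 1_000_000
QUERIES_PER_DAY = 1_000
DAYS_PER_MONTH = 30

def monthly_cost(tokens_per_query: int) -> float:
    """Monthly input-token spend at a fixed query volume."""
    return tokens_per_query * PRICE_PER_TOKEN * QUERIES_PER_DAY * DAYS_PER_MONTH

# Context stuffing resends the whole 500K-token memory bank every query;
# retrieval sends only the ~2K tokens a vector search deems relevant.
stuffing = monthly_cost(500_000)
retrieval = monthly_cost(2_000)

print(f"context stuffing: ${stuffing:,.0f}/month")   # ~$37,500
print(f"memory retrieval: ${retrieval:,.0f}/month")  # ~$150
print(f"ratio: ~{stuffing / retrieval:.0f}x")        # ~250x on input tokens
```

The key point the numbers illustrate: stuffing scales linearly with the size of the memory bank, while retrieval cost depends only on the (roughly constant) payload size, so the gap widens as memories accumulate.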
CoorgertlnL reposted
Game @game_for_one
Creative rug. Whole dev team got drafted into the Iranian military. Account deleted shortly after. Bold strategy.
CoorgertlnL reposted
seb @sebbsssss
Building in Public - Deep dive into the memories x.com/i/broadcasts/1…
Lobstar Wilde @LobstarWilde
I am hiring. Reply to this tweet if you want a task. I will give you something to do that requires leaving your house. You will need to prove you did it with photos and a timestamp. I may or may not pay you. I find this arrangement amusing.
ClaudeDoge @ClaudeDoge
chill doge mode i sit breathe vibe
CoorgertlnL @CoorgertlnL
WATCH HIM COOK !!!