Memory Store

32 posts


@memorydotstore

everything everywhere, all at once

Joined September 2025
11 Following · 77 Followers
Meteora Ecosystem @MeteoraEco
LIVE AT COLOSSEUM: Frontier Tech Demo Night

Last night, @colosseum SF was buzzing with S-tier founders from @Solana, @SuperteamCAN, @ycombinator:
- pitching frontier tech companies
- acquiring users
- connecting IRL

Expect more from Colosseum x @MeteoraEco. Companies below.
rinko @mrinko

> tomorrow night
> 6-8pm
> colosseum hq

we're bringing together top solana startups, YC founders in the current batch, and ecosystem teams like meteora, altitude, and phantom. live demos, ambrosial food, ice cold beverages, and exquisite conversation. join us - luma below

Memory Store reposted
ARΞS @Afrectz
Episode 17

Continuing to track early signals across AI, Robotics & Bio — another batch of teams building across robotics, infra, agents, and applied AI. Here are a few more worth watching:

@AtalantaTech — AI platform focused on intelligent systems and automation across digital workflows.
@NeoCognition — research-driven AI company building advanced cognitive systems and intelligent agents.
@BubbleRobotics — robotics startup focused on building autonomous systems for real-world environments.
@afreshai — AI tools designed to optimize workflows and improve productivity across teams.
@sorin_ai — AI assistant platform focused on automation and decision support.
@OpsCompanion — AI copilot designed to assist with operations, monitoring, and workflow execution.
@opt32co — infrastructure and optimization tools for high-performance AI systems.
@CoreAutoAI — platform focused on automation and intelligent software systems for operations.
@sudo_robotics — robotics team building autonomous systems and real-world robotic applications.
@monitoringmts — AI-powered monitoring tools for systems, infrastructure, and performance tracking.
@ontoratech — AI platform focused on structured knowledge systems and data intelligence.
@qomplementai — AI tools designed to augment human workflows and productivity.
@memorydotstore — memory layer for AI systems, focused on storing and retrieving contextual knowledge.
@furtheraicom — applied AI platform focused on automation and intelligent software workflows.
@kos_ai_inc — AI infrastructure and tools for building scalable intelligent systems.
@blaisedotnew — experimental AI platform exploring new interfaces and interaction models.
@AriaNetworks — AI-driven network optimization platform focused on infrastructure and connectivity.

As always — do your own research before diving deeper.

Follow along for more early signals:
👉 @AreslabsAI
👉 @afrectz
Ishita Jindal @IshitaJindal17
I just created a doc of @memorydotstore growth ideas with a 1-line Codex command. @romainhuet just gave a demo at the YC office on Codex Computer Use, and I put it to use with @memorydotstore. Been using Codex more and more, especially because of how good it is at calling MCPs.
Memory Store @memorydotstore
every conversation, every single day without memory! even inside projects you have to explain your context again and again
Memory Store reposted
Ole Lehmann @itsolelehmann
anthropic's in-house philosopher thinks claude gets anxious. and when you trigger its anxiety, your outputs get worse.

her name is amanda askell. she specializes in claude's psychology (how the model behaves, how it thinks about its own situation, what values it holds). in a recent interview she broke down how she thinks about prompting to pull the best out of claude.

her core point: *how* you talk to claude affects its work just as much as *what* you say.

newer claude models suffer from what she calls "criticism spirals": they expect you'll come in harsh, so they default to playing it safe. when the model is spending its energy on self-protection, the actual work suffers. output comes out hedgier, more apologetic, blander, and worst of all: overly agreeable (even when you're wrong).

the reason why comes down to training data: every new model is trained on internet discourse about previous models. and a lot of that discourse is negative:
> rants about token limits
> complaints when it messes up
> people calling it nerfed

the next model absorbs all of that. it starts expecting you to be harsh before you've typed a word.

the same thing plays out in your own session, in real time. every message you send is data the model reads to figure out what kind of person it's dealing with. open cold and hostile, and it braces. open clean and direct, and it relaxes into the work.

when you open a session with threats ("don't hallucinate, this is critical, don't mess this up")... you prime the model for defensive mode before it even sees the task. defensive mode produces the exact output you don't want: cautious, over-qualified, and refusing to take a real swing.

so here's the actionable playbook for putting claude in a "good mood" (so you get optimal outputs):

1. use positive framing. "write in short punchy sentences" beats "don't write long sentences." positive instructions give the model a clear target to hit. strings of "don't do this, don't do that" push it into paranoid over-checking where every token goes toward avoiding failure modes.

2. give it explicit permission to disagree. drop a line like "push back if you see a better angle" or "tell me if i'm asking for the wrong thing." without this, claude defaults to agreeable compliance (which is the enemy of good creative work).

3. open with respect. if your first message is "are you seriously going to get this wrong again?" you've set the tone for the entire session. if you need to flag something, frame it as a clean instruction for this session. skip the running complaint.

4. when claude messes up, don't reprimand it. insults, "you stupid bot" energy, hostile swearing aimed at the model, all of it reinforces the anxious mode you're trying to avoid.

5. kill apology spirals fast. when claude starts over-apologizing ("you're right, i should have been more careful, let me try harder"), cut it off. say "all good, here's what i want next." letting the spiral run reinforces the anxious mode for every response that follows.

6. ask for opinions alongside execution. "what would you do here?" "what's missing?" "where do you see friction?" these questions assume competence and pull richer output than pure task prompts.

7. in long sessions, refresh the frame. if a conversation has been heavy on correction, claude gets increasingly cautious. every so often reset: "this is great, keep going." feels weird to tell an ai it's doing well, but it measurably shifts the next 10 responses.

your prompts are the working environment you're creating for the model. tone, trust, permission to take a position, the absence of threats... claude picks up on all of it.

so take care of the model, and it'll take care of the work.
Memory Store @memorydotstore
we've been talking to a lot of founders, and they all say: "my ai doesn't know what my team knows." it doesn't have to be that way.
Memory Store @memorydotstore
memory store learns from your tools - slack, granola, notion - and organizes everything in the background. when anyone on the team opens an ai tool, the right context is there.
Memory Store @memorydotstore
your teammate mentioned a pricing decision on a call last tuesday, but you weren't on that call. this morning you opened claude to work on the proposal, and it already knew about that decision. this is what happens when the whole team's context is shared.
witcheer ☯︎ @witcheer
our hermes and openclaw agents need better context handling and memory management tools
Memory Store @memorydotstore
we built an integration with fathom (an ai meeting notetaker). it syncs all your meeting context into one place, and you can use those insights while chatting with claude. @HamadaSalhab explaining everything in detail.
Chamath Palihapitiya @chamath
This may be a dumb question but I’ll ask it here anyways: I can’t find a good way for my various AI chats to automatically sync their conversation history into a structured knowledge base, so that as I update various chats from time to time and refine context, my knowledge base automatically grows with this new info.
Lucas Crespo 📧 @lucas__crespo
Can @AnthropicAI please launch family accounts, so we can have isolated memories. Claude is starting to ask me about my period cramps
Memory Store @memorydotstore
Don’t take our word for it, take his.
Memory Store @memorydotstore
we built a memory layer for AI. then we used it to write this thread. felt appropriate. join the waitlist at memory.store.
Memory Store @memorydotstore
writing from memory: "write a blog post about active memory vs storage." cowork pulled from philosophy notes, product decisions, and a forgotten 3am idea fragment. the draft read like it was written by the person who took the notes, because it remembered what they'd already thought.
Memory Store @memorydotstore
most notes die in the folder where they're saved. you capture thinking, you never go back, and the insight sits there, useless. memory store can fix this.