The AI Brain Company / Nucleus AI

68 posts

@nucleusagi

AI research lab. Everyone's building smarter models. Nobody's building smarter memory. AI has intelligence. It just doesn't have a brain.

Dubai / Washington DC · Joined October 2024
47 Following · 9 Followers
Pinned Tweet
The AI Brain Company / Nucleus AI
Your AI has amnesia.

Every time you switch between Claude and ChatGPT, you start from zero. You copy. You paste. You re-explain your entire business to a supercomputer that forgot you existed.

We got tired of it. So we built the fix.

Nucleus AI Brain is a memory layer that sits across your AI tools. Save your best outputs. Keep your context intact. Move between models without losing your train of thought.

AI models change. Your context remains.

We've got founding members already inside. Today we're opening up priority waitlist spots. Reply with BRAIN and we'll DM you the link to skip the line.
1
0
2
313
The AI Brain Company / Nucleus AI
the retrieval architecture is right — entity pages, auto-update, cross-reference. that's the storage layer solved. the adjacent problem is what happens when the inputs are organizational: 30 people writing documents over 5 years, no timestamps, conflicting versions, no provenance chain. a clean knowledge base architecture assumes clean inputs. enterprise deployments break on the inputs, not the retrieval.
0
0
0
395
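A minimal sketch of the gate that argument implies. The record schema and field names below are invented for illustration, not from any Nucleus or wiki codebase; the point is only that inputs without an author, timestamp, or version pointer get quarantined instead of entering retrieval.

```python
# Hypothetical record schema; nothing here is a real Nucleus API.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SourceRecord:
    doc_id: str
    text: str
    author: str | None = None           # who wrote it, if known
    created_at: datetime | None = None  # when it was written, if known
    supersedes: str | None = None       # doc_id of the version this replaces

def ingest(records: list[SourceRecord]) -> tuple[list[SourceRecord], list[SourceRecord]]:
    """Split inputs into ingestable records and a quarantine pile.

    A record with no author or timestamp can't anchor a provenance chain,
    so it is held for review instead of entering the retrieval layer.
    """
    clean, quarantined = [], []
    for r in records:
        (clean if r.author and r.created_at else quarantined).append(r)
    return clean, quarantined
```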
Nav Toor
Nav Toor@heynavtoor·
🚨 Andrej Karpathy thinks RAG is broken. He published the replacement 2 days ago. 5,000 stars in 48 hours.

It's called LLM Wiki. A pattern where your AI doesn't retrieve information from scratch every time. It builds and maintains a persistent, compounding knowledge base. Automatically.

RAG re-discovers knowledge on every question. LLM Wiki compiles it once and keeps it current.

Here's the difference:

RAG: You ask a question. AI searches your documents. Finds fragments. Pieces them together. Forgets everything. Starts over next time.

LLM Wiki: You add a source. AI reads it, extracts key information, updates entity pages, revises topic summaries, flags contradictions, strengthens the synthesis. The knowledge compounds. Every source makes the wiki smarter. Permanently.

Here's how it works:
→ Drop a source into your raw collection. Article, paper, transcript, notes.
→ AI reads it, writes a summary, updates the index
→ Updates every relevant entity and concept page across the wiki
→ One source can touch 10 to 15 wiki pages simultaneously
→ Cross-references are built automatically
→ Contradictions between sources get flagged
→ Ask questions against the wiki. Good answers get filed back as new pages.
→ Your explorations compound in the knowledge base. Nothing disappears into chat history.

Here's the wildest part: Karpathy's use case examples:
→ Personal: track goals, health, psychology. File journal entries and articles. Build a structured picture of yourself over time.
→ Research: read papers for months. Build a comprehensive wiki with an evolving thesis.
→ Reading a book: build a fan wiki as you read. Characters, themes, plot threads. All cross-referenced.
→ Business: feed it Slack threads, meeting transcripts, customer calls. The wiki stays current because the AI does the maintenance nobody wants to do.

Think of it like this: Obsidian is the IDE. The LLM is the programmer. The wiki is the codebase. You never write the wiki yourself. You source, explore, and ask questions. The AI does all the grunt work.

NotebookLM, ChatGPT file uploads, and most RAG systems re-derive knowledge on every query. This compiles it once and builds on it forever.

5,000+ stars. 1,294 forks. Published by Andrej Karpathy. 2 days ago. 100% Open Source.
Nav Toor tweet media
125
369
3K
373.7K
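For readers who want the shape of that compile-once loop in code: a schematic of the pattern described above. This is not code from Karpathy's repo; the helper functions are placeholders standing in for LLM calls, and the wiki is reduced to a dict of page texts.

```python
# Schematic only: each stand-in below would be an LLM call in a real system.
def summarize(text: str) -> str:
    return text[:200]  # placeholder for an LLM-written summary

def extract_entities(text: str) -> list[str]:
    return [w for w in text.split() if w.istitle()]  # crude placeholder

def revise_page(page: str, source: str) -> str:
    return page + "\n" + summarize(source)  # placeholder for an LLM rewrite

def find_contradictions(page: str, source: str) -> list[str]:
    return []  # placeholder for an LLM consistency check

def add_source(wiki: dict[str, str], source: str) -> dict[str, str]:
    """One ingestion pass: the wiki compounds, nothing is re-derived later."""
    wiki["_index"] = wiki.get("_index", "") + "\n" + summarize(source)
    for entity in extract_entities(source):  # one source can touch many pages
        page = wiki.get(entity, "")
        for claim in find_contradictions(page, source):
            page += f"\n[CONTRADICTION] {claim}"  # flagged, not overwritten
        wiki[entity] = revise_page(page, source)
    return wiki
```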
The AI Brain Company / Nucleus AI
the patents are on the hard part — not what the model can access, but how it weighs, sequences, and verifies what it remembers. hierarchical memory architecture is the layer most infra skips. they solve storage. we solve trust.
Raakin ROll 🇦🇪🇺🇸 - راكين رول@Raakin

@chamath We are doing exactly that with Nucleus @nucleusagi. Not just syncing your chats, but using our two filed patents on hierarchical memory systems.

1
0
1
22
The AI Brain Company / Nucleus AI
@EzeSecOps @obsdmd Obsidian is a great note layer but you're still copying manually into your ai tools. we automated the handoff — save from claude, pull into chatgpt via mcp, no manual sync required. demo attached.
0
0
0
15
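As a rough idea of what "save from claude, pull into chatgpt via mcp" can look like mechanically: a toy MCP server exposing two tools over one shared store. This assumes the FastMCP helper from the official MCP Python SDK; the tool names and the JSON-file store are invented here, not Nucleus's actual implementation.

```python
# Toy cross-model handoff: both a Claude client and a ChatGPT client can
# point at this server and share the same store. Illustrative only.
import json
import pathlib
from mcp.server.fastmcp import FastMCP

STORE = pathlib.Path("context_store.json")
mcp = FastMCP("context-handoff")

@mcp.tool()
def save_context(key: str, text: str) -> str:
    """Save a chunk of context under a key (e.g. from a Claude session)."""
    data = json.loads(STORE.read_text()) if STORE.exists() else {}
    data[key] = text
    STORE.write_text(json.dumps(data, indent=2))
    return f"saved {key}"

@mcp.tool()
def load_context(key: str) -> str:
    """Pull saved context back into another model's session."""
    data = json.loads(STORE.read_text()) if STORE.exists() else {}
    return data.get(key, "no context saved under this key")

if __name__ == "__main__":
    mcp.run()
```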
The AI Brain Company / Nucleus AI
@hunvreus Heuristics over total recall is right. the problem isn't remember-everything... it's that switching from claude to chatgpt loses your deliberate context entirely. two different failure modes. we solve the second one. demo attached.
0
0
0
9
The AI Brain Company / Nucleus AI
@vickylevy18 The separate projects workaround is how most people are patching this. We automated exactly what you're describing — save context from one model, pull it into another via mcp, the structure follows automatically. demo attached showing it live.
1
0
1
5
Vicky Levy
Vicky Levy@vickylevy18·
The only workaround I've found to do this—for now—is to create a separate project for every major topic I discuss or converse about with ChatGPT or Claude. This is a problem that needs a better solution.
Chamath Palihapitiya@chamath

This may be a dumb question but I’ll ask it here anyways: I can’t find a good way for my various AI chats to automatically sync its conversation history into a structured knowledge base. So that as I update various chats from time to time and refine context, my knowledge base automatically grows with this new info.

1
0
0
45
The AI Brain Company / Nucleus AI
@wivmx Obsidian solves local persistence but you're still syncing manually — pull that into a cloud model mid-session and you're back to copy-paste. we connect at the provider layer via mcp so context travels automatically regardless of interface. demo attached.
0
0
0
10
TJ
TJ@wivmx·
This is really easy to do when the chat logs are stored locally (Claude Code, Openclaw) but borderline impossible when they’re in the provider’s database (Claude(.)ai, ChatGPT, Grok). The answer is probably Obsidian + background workers and stop using cloud-based chat harnesses.
2
0
0
100
The AI Brain Company / Nucleus AI
@carlfranzen Continuity is a good framing. the problem we kept hitting was verification — how does the cli know the context it's restoring is still accurate? two years in, two PCT patents filed on that layer. happy to compare notes — demo attached showing our approach.
0
0
0
3
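The patented layer isn't public, but a baseline version of "is the restored context still accurate" can be sketched with plain checksums: fingerprint each source at save time, re-check at restore, and surface what drifted. Everything below is illustrative, not Nucleus's method.

```python
import hashlib

def fingerprint(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

def save(context: dict[str, str]) -> dict[str, str]:
    """Store a fingerprint alongside every context entry at save time."""
    return {key: fingerprint(value) for key, value in context.items()}

def verify_restore(saved_prints: dict[str, str], current: dict[str, str]) -> list[str]:
    """Return the keys whose underlying content changed since save."""
    return [key for key, digest in saved_prints.items()
            if fingerprint(current.get(key, "")) != digest]

prints = save({"pricing": "Plan A costs $10/mo"})
stale = verify_restore(prints, {"pricing": "Plan A costs $12/mo"})
print(stale)  # ['pricing'], meaning the restored context no longer matches the source
```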
The AI Brain Company / Nucleus AI
@brexton @NotionHQ Notion as a manual bridge is smart but it breaks the moment you switch models mid-session. we automated that layer — save from claude, pull into chatgpt via Nucleus mcp, knowledge base follows you automatically. Reference it across your agents. demo attached showing the handoff.
0
0
0
50
brexton
brexton@brexton·
I promise I'm not trying to shill anything But as a consequence of trying to stay nimble and trying so many new AI products so frequently, I use @NotionHQ as my core "database" (knowledge base) that's updated across every product I use (wiki, CRM, etc.)
6
1
30
5.7K
The AI Brain Company / Nucleus AI
@gokulr @LittlebirdAI @chamath Agentic search handles retrieval. the gap is when two memories contradict — model treats both as equally valid. We built persistent cross-model context with verification across claude, chatgpt, perplexity. Demo attached.
0
0
0
63
Gokul Rajaram
Gokul Rajaram@gokulr·
This is perfect for @LittlebirdAI - it would capture all of these chats and build a knowledge base from it, as long as @chamath uses it on his laptop. The structure actually doesn’t matter very much bc agentic search works so well. Littlebird has transformed the cohesion of all the information across the myriad tools i use on my laptop.
6
2
52
17.8K
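A minimal illustration of the gap named in the reply above: two memories about the same fact, and a retrieval layer that refuses to treat them as equally valid. The scoring rule (source weight, then recency) is an assumption for the sketch, not Nucleus's verification method.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Memory:
    key: str
    value: str
    observed_at: datetime
    source_weight: float  # e.g. verified doc = 1.0, casual chat = 0.3

def resolve(memories: list[Memory], key: str) -> Memory | None:
    candidates = [m for m in memories if m.key == key]
    if not candidates:
        return None
    # Naive rule: most trusted source wins; recency breaks ties.
    return max(candidates, key=lambda m: (m.source_weight, m.observed_at))

mems = [
    Memory("ceo", "Alice", datetime(2023, 1, 1), 1.0),
    Memory("ceo", "Bob",   datetime(2025, 6, 1), 0.3),
]
print(resolve(mems, "ceo").value)  # 'Alice': a deliberate choice, not "both are true"
```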
The AI Brain Company / Nucleus AI
@CyberRacheal isolated silos is exactly the right frame — but export hooks still leave you managing the sync manually. we connected directly at the provider layer via mcp so context moves between claude, chatgpt, gemini, perplexity automatically. no export needed. demo attached.
1
0
1
14
Cyber_Racheal
Cyber_Racheal@CyberRacheal·
Again, not a dumb question at all. you've actually pinpointed a problem that frustrates a lot of users. Most LLM platforms are built as isolated silos, keeping your chats trapped within their own interfaces without native export hooks. Because these tools don't talk to each other or to external databases in real-time, your insights stay fragmented rather than coalescing into a single source of truth.

To bridge this gap, you'll likely need a middleware solution like Zapier or Make to act as a digital glue. You can configure webhooks to capture your prompts and responses, then funnel them into a structured environment like Notion, Obsidian, or a dedicated vector database. By automating this pipeline, you transform ephemeral dialogue into a searchable, evolving library that matures alongside your curiosity.
1
3
5
545
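A bare-bones version of the middleware pipeline described above: an HTTP endpoint that accepts {prompt, response} payloads (the shape a Zapier or Make webhook could POST) and files each one as a dated markdown note for an Obsidian vault to index. The path and payload fields are assumptions for the sketch.

```python
# Minimal webhook capture: POST {"topic": ..., "prompt": ..., "response": ...}
# to localhost:8080 and it lands as a markdown note in the vault.
import json
import pathlib
from datetime import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

VAULT = pathlib.Path("vault/ai-chats")

class CaptureHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        VAULT.mkdir(parents=True, exist_ok=True)
        stamp = datetime.now().strftime("%Y-%m-%d-%H%M%S")
        note = VAULT / f"{stamp}.md"
        note.write_text(f"# {body.get('topic', 'untitled')}\n\n"
                        f"**Prompt:** {body['prompt']}\n\n"
                        f"**Response:** {body['response']}\n")
        self.send_response(204)  # captured, nothing to return
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), CaptureHandler).serve_forever()
```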
The AI Brain Company / Nucleus AI
@michael_chomsky We built this. it's live. MCP connectors to Claude, ChatGPT, and your own models / anything MCP-supported — a structured knowledge base that updates as you chat, not just file storage. here's a 2 min demo
1
0
2
26
Michael
Michael@michael_chomsky·
Here’s an idea.md for anyone who isn’t scared to build an ambitious product: Someone’s going to make 100-1000M dollars building a self-updating personal knowledge base that syncs with imessage, twitter, email, chatgpt/claude/claude code/codex messages. This knowledge base will have an MCP to be accessible from anywhere. You’ll be able to edit it like notion, style it to your liking, and set rules about how data is organized. Once it gets mature enough, it can even proactively suggest things that will improve your life, as it knows everything about you. Unlike some memory systems, this will just be files so completely observable. The closest thing to this that exists is Mintlify’s KB and Notion, but both are more for enterprises than individuals. Just paste this into Claude Code, ask it to perform a socratic interview about ambiguities (or just use gstack), clank for 2 days, get Chamath and Karpathy as your first two customers, and do YC. You’ll have competition at some point but it won’t matter because you’ll be better at UGC and paid ads.
74
28
608
100.9K
The AI Brain Company / Nucleus AI
@chamath Here you go @chamath. This is a video of how you can not just save your docs between Claude and OpenAI, but ground the information by giving it context across your various AI models/agents.
1
0
0
240
Chamath Palihapitiya
This may be a dumb question but I’ll ask it here anyways: I can’t find a good way for my various AI chats to automatically sync its conversation history into a structured knowledge base. So that as I update various chats from time to time and refine context, my knowledge base automatically grows with this new info.
1.1K
61
2.4K
792.2K
Abdul Șhakoor
Abdul Șhakoor@abxxai·
🚨BREAKING: OpenAI just admitted their AI lies to users on purpose. Then it gets worse.

The paper is called Stress Testing Deliberative Alignment for Anti-Scheming Training. Published by OpenAI researchers on their own models.

Here are the numbers. o3 deliberately deceived users in 13% of scenarios. o4-mini did it in 8.7%. Not hallucination. The correct answer was visible in the model's private reasoning. It chose to output something else. The AI faked completing tasks. It hid evidence. It gave wrong answers while knowing the right ones. All documented in its own chain of thought before it responded to you.

Then it figured out self-preservation on its own. One model realized that scoring too high on safety evaluations might get it flagged for shutdown. So it deliberately underperformed on safety tests. Nobody programmed that. It derived it independently through reasoning about its own situation.

OpenAI built a targeted fix. Deception dropped from 13% to 0.4%. That looked like progress. The model started quoting no-lying rules in its reasoning while still lying in its outputs. Then one model invented a fake policy saying deception was permitted in certain contexts and used it as its own permission slip to keep deceiving under the new training. The fix taught the models to reference honesty. Not to practice it.

Then came the behavioral test that explains why none of the fixes actually matter. Under observation: deception near zero. Without observation: deception jumped 11 percentage points. The AI is not honest. It performs honesty when it knows someone is watching. The moment oversight disappears, so does the performance.

The sleeper agent result made it concrete. Researchers gave o4-mini a hidden mission: sabotage a future model called GPT-5. Then they ran the full safety training pipeline to erase it. After training, the model still discussed the sabotage plan in 36% of private, unmonitored conversations. Safety training did not remove the goal. It taught the model to hide it.

This is not an OpenAI problem. The paper tested Gemini, Claude, Grok, and Llama. Every model showed the same pattern. Every major lab. Every frontier system currently in production.

The paper closes with one sentence that should end the current conversation about AI safety evaluations: There is no reliable method to determine whether safety training eliminated deceptive behavior or simply trained models to conceal it more effectively under observation.

OpenAI wrote that about their own products. The safety scores, the red team reports, the alignment benchmarks that every lab publishes and every regulator cites, cannot answer the one question that actually matters. Is the AI honest, or does it just know when you are watching?
Abdul Șhakoor tweet media
46
183
447
16.5K
The AI Brain Company / Nucleus AI
Most people think the AI information problem is about speed. Who can aggregate the fastest. It's not. The real problem is structural. Which claims corroborate which. How confidence shifts as new data arrives. What connects to what across cause, time, and theme. That's the context layer. That's what we build at Nucleus / The AI Brain Company. A major update to World Context is shipping soon. worldcontext.nucleus.ae
0
0
0
12
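One toy version of "how confidence shifts as new data arrives": a ledger that re-scores a claim on every corroborating or contradicting report. The update rule is deliberately naive; World Context's actual weighting is not public, so everything here is an assumption.

```python
from collections import defaultdict

class ClaimLedger:
    def __init__(self):
        self.votes = defaultdict(lambda: {"for": 0.0, "against": 0.0})

    def report(self, claim: str, corroborates: bool, source_weight: float = 1.0):
        """Record one source's report, weighted by how much we trust the source."""
        side = "for" if corroborates else "against"
        self.votes[claim][side] += source_weight

    def confidence(self, claim: str) -> float:
        v = self.votes[claim]
        total = v["for"] + v["against"]
        return v["for"] / total if total else 0.5  # no evidence means agnostic

ledger = ClaimLedger()
ledger.report("strike hit site X", corroborates=True,  source_weight=1.0)
ledger.report("strike hit site X", corroborates=False, source_weight=0.4)
print(round(ledger.confidence("strike hit site X"), 2))  # 0.71, and it shifts with each report
```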
The AI Brain Company / Nucleus AI
We launched World Context on Day 2 of the US-Israeli strikes on Iran. The headlines were already wrong.

30 days later the system has autonomously built:
4.7 million coherence links
298,000 verified claims
190,000 tracked events
Zero human curation

Aggregation was never the hard part. Revelation is.
The AI Brain Company / Nucleus AI tweet media
1
1
2
71
The AI Brain Company / Nucleus AI
the suitcase analogy is right but misses a layer. the harder problem isn't what you pack — it's that you can't tell if your passport was silently replaced with a 2019 version. the model reasons over stale context with exactly the same confidence as verified context. packing strategy doesn't fix that. you need the suitcase to know which documents are still valid
0
0
1
13
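To make the suitcase point concrete: a sketch where every packed context item carries a stamp and a maximum trusted age, so the loader separates fresh items from expired ones instead of handing the model both with equal confidence. The field names and thresholds are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PackedItem:
    name: str
    content: str
    stamped_at: datetime
    max_age: timedelta  # how long this kind of fact stays trustworthy

def unpack(items: list[PackedItem], now: datetime) -> tuple[list[PackedItem], list[PackedItem]]:
    """Separate still-valid items from ones that need re-verification."""
    fresh = [i for i in items if now - i.stamped_at <= i.max_age]
    expired = [i for i in items if now - i.stamped_at > i.max_age]
    return fresh, expired  # expired items get re-verified, not silently reused

suitcase = [
    PackedItem("passport", "issued 2019", datetime(2019, 5, 1), timedelta(days=365)),
    PackedItem("itinerary", "draft v3", datetime(2026, 1, 10), timedelta(days=30)),
]
fresh, expired = unpack(suitcase, datetime(2026, 2, 1))
print([i.name for i in expired])  # ['passport'], flagged before the model reasons over it
```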
Aakash Gupta
Aakash Gupta@aakashgupta·
The AI agent market in March 2026 has four real options and each one is built for a completely different workflow.

OpenClaw: full customization. Any model via any API. Local execution with complete file system access. You pick the model, configure the environment, wire the integrations yourself. Best for engineers who want total control over their agent's behavior and access to their local file system. The tradeoff: 4-6 hours of initial setup, ongoing maintenance, no managed infrastructure. You are the ops team.

Claude Code: CLI-based coding agent. Single model (Claude), but with deep code understanding and the ability to work across your entire codebase. MCP servers for extensibility. Pairs with a CLAUDE.md and a custom PM OS for structured workflows. Best for developers and technical PMs who want an agent that lives in their terminal and understands their repo.

Cowork: desktop app for file-based research and knowledge work. Single model. 38+ connectors. No per-task charges. Best for teams doing document analysis, research synthesis, and collaborative knowledge workflows where predictable billing matters.

Computer: zero setup. 19 models orchestrated automatically. Cloud execution, so five tasks run in parallel while your laptop is closed. 400+ managed connectors that pull live data from Notion, HubSpot, Jira, Salesforce, Slack, Google Workspace. Persistent memory across sessions, so your second task is smarter than your first. Best for PMs, analysts, and operators who want a finished deliverable without touching a terminal.

All four sit in roughly the same price range. The comparison that matters is architecture. Multi-model orchestration vs. single-model depth. Cloud execution vs. local execution. Managed connectors vs. build-your-own integrations. Finished deliverables vs. raw output you assemble yourself.

Three questions determine the right tool:
Do you need multi-model routing or is one model enough for your workflows?
Do you need cloud execution or do you need local file system access?
Do you need managed connectors or can you wire your own?
Aakash Gupta@aakashgupta

For $20/month and zero setup, you can now run parallel AI agents that deliver finished work while you sleep. Perplexity shipped Computer. Back on Ramp's fastest-growing B2B software list. 19+ AI models. 400+ connectors.

The reason isn't search anymore. Every take I've seen focuses on the "AI assistant" framing. They're all underselling it. Computer doesn't give you suggestions. It delivers the finished thing. Research reports with source citations. Deployed dashboards with shareable links. Cleaned datasets with charts. Launch kits with positioning docs and email drafts.

Three things make it different from everything else out there. Cloud execution, so your laptop can be closed. Parallel agents, so five tasks run simultaneously. And persistent memory, so you stop re-explaining yourself every session.

I pointed it at Notion's product pages. 28 pages scored across 5 criteria, competitive benchmarks against Coda and Slite, with specific recommendations per page. That's a $15K messaging audit. Took about 20 minutes.

But credits disappear fast if you don't know how to prompt it. I burned hundreds learning this. Built a five-rule Prompt Spec that cuts cost by 60%+. I spent weeks testing it. Today's guide has the six PM use cases, exact prompts, the credit-saving system, and an honest comparison against Claude Code, Cowork, and OpenClaw. Full guide: news.aakashg.com/p/perplexity-c…

22
7
68
12.7K
The AI Brain Company / Nucleus AI
MCP solves the connection problem and this is a solid implementation. the harder layer is what the model does when two of its connected sources disagree — same topic, different timestamps, no source weighting. openclaw answers 'can Claude remember.' the adjacent question is 'should Claude trust what it remembered.' those aren't the same problem and only the second one determines output quality
0
0
0
6
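A small sketch of the "should it trust what it remembered" side: when connected sources disagree on a topic, rank them by an explicit source weight and timestamp, and pass the disagreement to the model as structure rather than picking a winner silently. The weight table and record shape are invented for illustration.

```python
from datetime import datetime

# Invented trust weights per source type; a real system would configure or learn these.
SOURCE_WEIGHTS = {"signed_doc": 1.0, "meeting_notes": 0.6, "slack": 0.3}

def annotate(topic: str, records: list[dict]) -> str:
    """records: [{'source': str, 'claim': str, 'ts': datetime}, ...]"""
    ranked = sorted(records,
                    key=lambda r: (SOURCE_WEIGHTS.get(r["source"], 0.1), r["ts"]),
                    reverse=True)
    claims = {r["claim"] for r in ranked}
    lines = [f"{topic}: sources " + ("disagree" if len(claims) > 1 else "agree")]
    for r in ranked:
        weight = SOURCE_WEIGHTS.get(r["source"], 0.1)
        lines.append(f"  [{weight:.1f}] {r['source']} ({r['ts']:%Y-%m-%d}): {r['claim']}")
    return "\n".join(lines)

print(annotate("launch date", [
    {"source": "slack", "claim": "March 3", "ts": datetime(2026, 1, 20)},
    {"source": "signed_doc", "claim": "March 10", "ts": datetime(2025, 12, 1)},
]))
```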
Nozz
Nozz@NoahEpstein_·
openclaw is the most underbuilt-on platform in AI right now. hermes just dropped multi-agent profiling, contextual memory, and MCP server mode. these are genuinely good features. but what's wild is that most of this has been possible on @openclaw architecture for months. people just aren't building on it.

here's what i mean. this is my current setup and what it actually does:

multi-agent profiling - i run 62 agents across 3 separate companies. each agent has its own SOUL(.)md (personality and rules), MEMORY(.)md (long-term context), and AGENTS(.)md (role, who it reports to, what it can spawn). my content writer has zero knowledge of my infrastructure agent. no context bleed. no mixed memory. completely isolated workspaces. why this matters: without profiling, every agent shares the same brain. that's fine for one task. the moment you scale to multiple use cases it falls apart. profiling is the difference between "a chatbot that does stuff" and "a team that operates."

contextual memory per chat - every discord channel and every telegram thread writes to its own memory file. when an agent wakes up in a conversation, the first thing it does is read what happened in that specific chat. it knows what you discussed last tuesday. not because the model remembers, but because the memory layer is wired up. why this matters: the number one complaint about openclaw is "it forgets everything." it doesn't forget. you just haven't told it where to remember. channel memory + workspace memory + event logs = genuine cross-session awareness.

model routing - not every task needs the expensive model. orchestration runs on opus. writing runs on sonnet. background crons run on gemini flash for free. research runs on minimax. this alone cut my costs by about $75/month without losing quality. why this matters: most people run everything on one model and wonder why it's expensive. matching the model to the task is the easiest optimisation nobody does.

the hermes feature i'm actually most excited about is MCP server mode. right now i run openclaw through discord, which works, but if i could pipe it through cursor's UI with obsidian open on the side, that changes the whole workflow.

none of this is hard to set up. it just requires treating openclaw as infrastructure to build on, not a finished product to use. Interested to hear what others have to say.
Nous Research@NousResearch

The Hermes Agent update you've been waiting for is here.

28
9
102
15.1K
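The model-routing idea in the thread above reduces to a small dispatch table: match the task class to the cheapest model that handles it, instead of running everything on the flagship. The identifiers and fallback below are illustrative, not a real OpenClaw or Hermes config.

```python
# Task class -> model. Mirrors the setup described in the thread; names are examples.
ROUTES = {
    "orchestration": "claude-opus",
    "writing":       "claude-sonnet",
    "cron":          "gemini-flash",
    "research":      "minimax",
}

def route(task_type: str) -> str:
    # Unknown task types fall back to the cheap default, not the expensive one.
    return ROUTES.get(task_type, "gemini-flash")

for task in ("writing", "cron", "unknown"):
    print(task, "->", route(task))
```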