Cartisien

6.6K posts


@Cartisien

UX Studio for AI-Driven Products - AI moves fast. Good UX makes it land.

Greenville, SC · Joined March 2009
263 Following · 950 Followers
Cartisien
Cartisien@Cartisien·
@hthieblot Building open source AI agent infrastructure — Cogito (identity/lifecycle), Engram (semantic memory with certainty scoring + contradiction detection), Extensa (vector layer). MIT licensed, npm published
0
0
0
11
Hubert Thieblot
Hubert Thieblot@hthieblot·
Looking for obsessed builders. I invest up to $250K first checks in:
• Robotics, drones, space
• Applied AI/ML, models
• Dev tools and infra
• Manufacturing & logistics, and more...
DMs open, or just reply here with what you're building. Early > polished.
462
54
1.4K
105.4K
Cartisien
Cartisien@Cartisien·
I’ve been experimenting with a different model for AI agent memory. Instead of just storing embeddings, the system tracks:
• certainty levels
• contradictions
• memory promotion
• semantic recall
• forgetting
Memory becomes something the agent can reason about.
0
0
1
29
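The properties listed in the post above map naturally onto a small data model. Here is a minimal TypeScript sketch of that idea; `MemoryItem`, `MemoryStore`, and every method name below are hypothetical illustrations, not Engram's published API:

```typescript
// Hypothetical sketch only: these names are illustrative, not Engram's real API.
type MemoryItem = {
  text: string;
  certainty: number;                 // 0..1 confidence in this fact
  tier: "working" | "long_term";     // promotion target
  recalls: number;                   // retrieval count, drives promotion
  contradicts: string[];             // texts of conflicting memories
};

class MemoryStore {
  private items: MemoryItem[] = [];

  // Store a fact and flag simple negation-style contradictions at ingest time.
  remember(text: string, certainty: number): MemoryItem {
    const item: MemoryItem = { text, certainty, tier: "working", recalls: 0, contradicts: [] };
    for (const other of this.items) {
      if (this.conflicts(text, other.text)) {
        item.contradicts.push(other.text);
        other.contradicts.push(text);
      }
    }
    this.items.push(item);
    return item;
  }

  // Toy conflict test: "X is Y" vs "X is not Y" collide after normalization.
  private conflicts(a: string, b: string): boolean {
    const norm = (s: string) => s.replace(" is not ", " is ");
    return norm(a) === norm(b) && a !== b;
  }

  // Recall by substring; frequently recalled, high-certainty items get promoted.
  recall(query: string): MemoryItem[] {
    const hits = this.items.filter((m) => m.text.includes(query));
    for (const m of hits) {
      m.recalls += 1;
      if (m.recalls >= 3 && m.certainty >= 0.8) m.tier = "long_term";
    }
    return hits;
  }

  // Forgetting: working-tier certainty decays; items below the floor are dropped.
  decay(rate = 0.1, floor = 0.2): void {
    this.items = this.items.filter((m) => {
      if (m.tier === "working") m.certainty -= rate;
      return m.certainty > floor;
    });
  }
}
```

The design point of this shape is that contradiction detection happens once at ingest rather than on every query, so the agent can reason over the `contradicts` field instead of re-deriving conflicts at recall time.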
Cartisien
Cartisien@Cartisien·
Curious what other agent builders are doing for memory. What’s the hardest problem you've run into?
• long-term recall
• memory bloat
• contradictory facts
• user preference tracking
• something else?
0
0
0
16
Cartisien
Cartisien@Cartisien·
@seatedro Haha "launch slop". But the underlying problem is real and worth solving. Bad context retrieval isn't a vector DB failure; it's a metadata isolation failure. We built Engram to fix it: OSS, with open benchmarks against LoCoMo running right now and full results public. No $6.5M raise video needed.
0
0
0
87
rohit
rohit@seatedro·
healthy dose of word salad and launch slop
Nishkarsh@contextkingceo


20
6
497
42.1K
Cartisien
Cartisien@Cartisien·
We’ve built Engram — a metadata-aware, embeddable memory layer with on-prem GPU embeddings and a reproducible benchmarking pipeline. On internal LoCoMo tests we’re seeing meaningful gains in retrieval quality after message-level chunking and metadata filtering, and our end-to-end pipeline is auditable with saved traces and artifacts. Full runs and metrics will be published for verification.
0
0
0
41
Cartisien
Cartisien@Cartisien·
Hey @benln Working on Engram: SQLite storage + on-device embeddings + JSON metadata for fine-grained recall (speaker, timestamp, containerTag). Fast semantic recall with lexical fallbacks — and we’re beating competitors on their own benchmarks (LoCoMo/LongMemEval). Interested in the code or numbers? DM me.
0
0
0
52
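The recall path described above (semantic first, lexical fallback) can be sketched in memory. The post says Engram's actual storage is SQLite; this toy keeps everything in arrays, `Entry` and `recall` are hypothetical names, and the two-number vectors merely stand in for real embeddings:

```typescript
// Hypothetical sketch: Entry/recall are illustrative, not Engram's real API;
// the short vectors stand in for real embedding vectors.
type Entry = {
  text: string;
  vec: number[];
  meta?: { speaker?: string; timestamp?: string; containerTag?: string };
};

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Semantic recall first; if no hit clears the threshold, fall back to
// keyword overlap so exact rare terms still resolve.
function recall(entries: Entry[], queryVec: number[], queryText: string, threshold = 0.8): Entry | undefined {
  const ranked = [...entries].sort((x, y) => cosine(y.vec, queryVec) - cosine(x.vec, queryVec));
  if (ranked.length > 0 && cosine(ranked[0].vec, queryVec) >= threshold) return ranked[0];
  // Lexical fallback: the entry sharing the most query terms wins.
  const terms = queryText.toLowerCase().split(/\s+/);
  const scored = entries
    .map((e) => ({ e, hits: terms.filter((t) => e.text.toLowerCase().includes(t)).length }))
    .sort((x, y) => y.hits - x.hits);
  return scored.length > 0 && scored[0].hits > 0 ? scored[0].e : undefined;
}
```

The fallback matters when an embedding model maps a query far from every stored vector: keyword overlap still finds the entry that shares the query's literal terms.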
Ben Lang
Ben Lang@benln·
Wired 2 angel investments over the past two weeks:
• two former Ramp engineers
• 3x repeat founder
Reach out if you're building something new!
126
10
586
34.8K
Cartisien
Cartisien@Cartisien·
@contextkingceo @hydra_db Love the angle. We built semantic memory + user memory on top of vector/graph layers - Hydra's structure + Engram's persistence = context that actually knows the user. Let's talk?
0
0
0
93
Nishkarsh
Nishkarsh@contextkingceo·
We've raised $6.5M to kill vector databases.

Every system today retrieves context the same way: vector search that stores everything as flat embeddings and returns whatever "feels" closest. Similar, sure. Relevant? Almost never. Embeddings can’t tell a Q3 renewal clause from a Q1 termination notice if the language is close enough.

A friend of mine asked his AI about a contract last week, and it returned a detailed, perfectly crafted answer pulled from a completely different client’s file. Once you’re dealing with 10M+ documents, these mix-ups happen all the time. VectorDB accuracy goes to shit.

We built @hydra_db for exactly this. HydraDB builds an ontology-first context graph over your data, maps relationships between entities, understands the 'why' behind documents, and tracks how information evolves over time. So when you ask about 'Apple,' it knows you mean the company you're serving as a customer. Not the fruit. Even when a vector DB's similarity score says 0.94.

More below ⬇️
620
641
6K
3.8M
Cartisien
Cartisien@Cartisien·
Good digging — transparency matters. If HydraDB stands by its numbers, publish the exact benchmark code, prompts, K, judge scripts, and latency/cost data so the community can verify. Meanwhile, Engram publishes reproducible evals and POCs — we ran a quick test showing naive similarity fails on near‑duplicate docs (50% → 100% client correctness with ingest-time tags). Want an apples‑to‑apples side‑by‑side? DM us and we’ll run an open benchmark and share the code.
0
0
1
1.1K
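The near-duplicate failure described in the post above is easy to reproduce with a toy corpus. Below is a hedged TypeScript sketch: all names are hypothetical, the hand-made vectors stand in for real embeddings, and the 50% → 100% figure is the post's claim about Engram's test, not a property of this toy:

```typescript
// Hypothetical sketch: names and vectors are illustrative only.
type Doc = { text: string; clientTag: string; vec: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Naive retrieval: nearest neighbour over the whole corpus.
function naiveTop1(docs: Doc[], queryVec: number[]): Doc {
  return [...docs].sort((x, y) => cosine(y.vec, queryVec) - cosine(x.vec, queryVec))[0];
}

// Metadata-aware retrieval: isolate by ingest-time client tag, then rank.
function taggedTop1(docs: Doc[], queryVec: number[], clientTag: string): Doc {
  return naiveTop1(docs.filter((d) => d.clientTag === clientTag), queryVec);
}

// Near-duplicate clauses for two different clients: almost identical vectors.
const docs: Doc[] = [
  { text: "Q3 renewal clause (Acme)",   clientTag: "acme",   vec: [0.70, 0.71, 0.10] },
  { text: "Q3 renewal clause (Globex)", clientTag: "globex", vec: [0.71, 0.70, 0.10] },
];

// A query about Acme whose embedding lands marginally closer to Globex's copy.
const query = [0.72, 0.69, 0.10];
```

Here the two similarity scores differ only marginally, so the naive ranking flips to the wrong client's document; the tag filter makes that failure impossible regardless of how close the scores are.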
Dhravya Shah
Dhravya Shah@DhravyaShah·
been building in this space for years now, and have followed nishkarsh for years as well - congrats on the launch! since this is in the same space we're building in, i dived deep into it and have thoughts. the launch itself is very hype-y, and is meant to trigger rage bait

1. it's positioned as a database, but is almost a @supermemory-like system
2. their example of "vector dbs" not being able to do this is really a question of "embedding models". and embedding models have superpositions, they are cheap and are easily able to infer differences between them. it's not hard to ask claude to do a mini experiment to prove this (attached below). what does matter is: is it able to track how knowledge evolves as time passes? this made me curious so i read their paper
3. their research paper is hardcoding and gaming the benchmark with a different prompt for every category!!! (see image below). if their benchmarking is fixed, supermemory will remain the SOTA.
4. they reinvented the contextual retrieval paper by Anthropic from 2024 and called it "the orphaned pronoun paradox"
5. they mention they use a custom "in-memory vector store" = at about 500GB, you will have to pay more than $10k for just the RAM.
6. inference is run too many times in the pipeline, which means for every LLM token you ingest, you will end up paying 5x more than token cost for the graph + contextualization + storage.
7. latency and cost numbers were never reported. my hunch is that, because of the architecture, latency will struggle at scale. but i can't tell - their product is behind a demo gate.
8. the benchmarking code is not OSS (from what i can tell). not replicable + who knows how much context they are injecting into the model? what's the K?
9. inorganic, undisclosed ads (just read the quote tweets). influencer accounts with 400k+ followers all saying the same thing.

people keep getting away with this @nikitabier lol

i'm all in for healthy competition and progress in this field, and enjoy seeing good work being done by others. but it's easy to just say things. "no one will check." playing the game the right way is hard, and everyone's just saying whatever they can to impress people.

TLDR is: you should use this if you want to spend 2-5x more for no real marginal improvement and enjoy unhealthy research and business practices.

attached:
1. experiment to disprove the hypothesis of vector dbs not understanding grey vs grey
2. one of their prompts, which just says "say i dont know". they scored 100% :)
Dhravya Shah tweet media
Nishkarsh@contextkingceo


51
12
438
82.8K
Cartisien
Cartisien@Cartisien·
100%. Similarity gets you close; graphs get you correct. Knowledge graphs + ingest-time consolidation stop agents from returning the wrong client’s doc. Engram combines semantic consolidation + temporal, append-only memory so your agents actually remember what’s relevant. DM for a 3‑minute demo. github.com/Cartisien/engr…
1
1
1
42
Santiago
Santiago@svpino·
Knowledge graphs win every single time. Before embeddings and similarity search, knowledge graphs were a game-changer. They are now going to win again. Similarity is not relevance. It never was. If you want relevant search results, you can't rely on similarity alone.
Nishkarsh@contextkingceo


60
54
730
128.4K
Cartisien
Cartisien@Cartisien·
@GaIinsights @jjsviokla @PaulBaier Exactly why we've been building open source AI agent infrastructure — Cogito (identity/lifecycle), Engram (semantic memory with certainty scoring + contradiction detection), Extensa (vector layer). MIT licensed, npm published x.com/compose/articl…
0
0
1
15
Cartisien
Cartisien@Cartisien·
@noahkagan Hot take: AI doesn’t cause ADHD productivity. Stateless AI does. If the system remembered what you were doing and what actually matters, it would feel completely different.
0
0
0
106
Noah Kagan
Noah Kagan@noahkagan·
Sometimes I feel like AI is built for people with ADHD. Do 10x things at once but still accomplish almost nothing. 😆
156
38
925
36.7K
Cartisien
Cartisien@Cartisien·
Memory layer for AI is the right bet. $24M says so. What's interesting is they started from a viral consumer app that "forgot everything" — the pain point was obvious, the infrastructure wasn't there yet. Been building the same thing for Node/TS devs. Local-first, SQLite, MCP-native. github.com/Cartisien/engr…
0
0
0
16
Y Combinator
Y Combinator@ycombinator·
While LLMs continue to evolve, they still struggle with memory. @mem0ai is working to change that by building the memory layer for AI agents.

In this episode of Founder Fireside, YC’s @dessaigne sat down with co-founders @taranjeetio and @deshrajdry to discuss why agents need persistent memory, how Mem0 reduces cost and latency, and why memory must remain neutral across models as AI becomes more agent-driven.

00:05 What Is Mem0?
00:49 Traction & Open Source Adoption
01:24 Why Memory Improves AI Agents
02:01 Saving Cost and Latency
02:31 Founder Origins & YC Pivot
05:13 How Mem0 Works Under the Hood
06:04 Hybrid Memory Architecture
07:10 Custom Memory Rules & Expectations
08:00 Real-World Use Cases
10:05 Competing With Model-Native Memory
11:48 Fundraising & What’s Next
11
19
89
16K
Kaito
Kaito@KaiXCreator·
Drop your project URL.
Let’s drive some traffic.
Curious to know what you all are building 👇🏼
324
3
139
9.8K
0xMarioNawfal
0xMarioNawfal@RoundtableSpace·
What are you currently building?
711
13
465
80.9K
Suni
Suni@suni_code·
Drop your project URL.
Let’s drive some traffic.
Curious to know what you all are building
604
7
256
27.3K
Jason Walko
Jason Walko@walkojas·
Hey builders, Looking to connect with people building:
• Agentic Workflows
• SaaS
• DevOps
• Autonomous AI
• Side projects
• Marketing
Drop what you're working on 📱📈👇
123
0
94
4.7K
Cartisien
Cartisien@Cartisien·
@_Shark_byte @Alya_capital Building open source AI agent infrastructure — Cogito (identity/lifecycle), Engram (semantic memory with certainty scoring + contradiction detection), Extensa (vector layer). MIT. Early stage but solving real problems with persistent agents. github.com/Cartisien @Alya_capital
0
0
1
73
Perly 🦈
Perly 🦈@_Shark_byte·
.@Alya_capital is looking for new grantees 👀 Drop what you’re working on in the replies bec the type form is maxed out rn lol
Perly 🦈 tweet media
84
6
111
10.4K
Cartisien
Cartisien@Cartisien·
Most AI agent “memory systems” are not memory. They’re just vector databases with a prompt. Which means agents:
• store contradictions
• treat all memories as equal
• never forget anything
Over time their memory becomes junk. I built a memory engine that fixes this.
0
0
1
56