Rob Imbeault

6K posts

@RobImbeault

saas unicorn founder - building the world’s smartest ai memory on the world’s most configurable stack - wrote a bestseller (not tech)

Joined November 2024
381 Following · 2.1K Followers
Pinned Tweet
Rob Imbeault@RobImbeault·
Effort is the algorithm
15 replies · 4 reposts · 181 likes · 9.7K views
TodayInSports@TodayInSportsCo·
Kids today just don’t understand.
137 replies · 568 reposts · 4K likes · 132.7K views
Rob Imbeault@RobImbeault·
This is what I’m seeing at the rock face. Our vision is to create deterministic infrastructure so that AI agents can build on something foundationally strong and trustworthy; that is our future. We also have some more fun things up our sleeves. Keep building. Keep taking big swings.
corbin@corbin_braun

coding is dead in sf

0 replies · 0 reposts · 3 likes · 106 views
Rob Imbeault@RobImbeault·
I guess I’m old fashioned. Make the best product that helps people. Market it with integrity. Sell it for a fair price. No stunts. Just a solid platform.
1 reply · 3 reposts · 5 likes · 87 views
Elliott Potter@elliott__potter·
We’ve raised $27M for this moment: starting today, your agent gets an iPhone and can talk like a friend. Texting is the universal interface. Billions of people text every day, but until now, developers have been restricted from building on the most powerful channel to ever exist. Linq is a single API for iMessage, RCS, SMS, voice, and even FaceTime and Find My. Nothing for users to download. Nothing new to learn. We’re already powering @interaction, @pika_labs, @getlindy, @zocomputer, @joindimension, Tomo (and others we can’t name just yet) to bring this new ecosystem to life. Join them, and start building for free in our sandbox, linked below. Or comment and we’ll get you set up.
389 replies · 218 reposts · 1.8K likes · 697K views
Lewis 🇺🇸@ctjlewis·
I’ve seen enough. No you didn’t. Don’t even need to click.
Lewis 🇺🇸 tweet media
16 replies · 0 reposts · 270 likes · 21.4K views
Manthan Gupta@manthanguptaa·
Love what Dhravya has been building and his work. He understands memory, so I am a little taken aback by this article. I am going to push back on this pretty strongly. This is being framed as a breakthrough when it's mostly benchmark engineering + system inflation.

First, 115k+ tokens are no longer a hard problem. We already have 1M+ context windows in production. Calling this "long-term memory" is stretching it.

Second, LongMemEval is no longer representative of real systems. It's been around for a while and doesn’t capture long-running agent workflows, real-time updates, noisy tool outputs, and cost/latency constraints.

Third, the "99% accuracy" claim is misleading. If you run 8 parallel prompts and count any correct answer as success, you are not improving memory. You are increasing the probability of a hit. That's not intelligence but sampling until something works (spray and pray). Same with the 12-agent voting setup: it's essentially majority voting across multiple attempts, which inflates benchmark scores without improving core capability.

Fourth, saying "no vector DB needed" is not an actual unlock. You have just replaced retrieval with multiple LLM passes, which require more compute, greater latency, and higher cost. It is shifting complexity, not removing it. These things matter a lot in a production system, so you can forget about deploying this in production.

Fifth, the core claim that "agentic retrieval beats vector search" is oversimplified. The real problem in memory systems isn’t embeddings vs. agents. It's relevance filtering, temporal consistency, memory lifecycle (what to store/forget), and grounding vs. hallucination. None of these are solved here.

Also, this entire system assumes perfect extraction during ingestion, relies heavily on prompt engineering, and offers no guarantees of consistency across runs. So calling memory "solved" is very premature. This is a good experiment in agent orchestration, not a memory solution.
Dhravya Shah@DhravyaShah

x.com/i/article/2035…

48 replies · 22 reposts · 514 likes · 61K views
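Manthan's point about parallel prompts can be made concrete: if a single attempt is correct with probability p, counting the best of k independent attempts as success yields 1 − (1 − p)^k, so a mediocre per-attempt model can post a near-perfect benchmark score. A minimal sketch (the 60% per-attempt accuracy below is illustrative, not measured from any real system):

```python
from math import comb

def best_of_k(p: float, k: int) -> float:
    """Probability that at least one of k independent attempts is correct."""
    return 1 - (1 - p) ** k

def majority_vote(p: float, k: int) -> float:
    """Probability that a strict majority of k independent attempts is correct."""
    return sum(comb(k, i) * p**i * (1 - p) ** (k - i)
               for i in range(k // 2 + 1, k + 1))

# A model that is right only 60% of the time per attempt:
print(round(best_of_k(0.60, 8), 3))       # -> 0.999: best-of-8 looks near-perfect
print(round(majority_vote(0.60, 12), 3))  # -> 0.665: 12-agent majority voting
```

Best-of-k inflates far more aggressively than majority voting, which is why "any correct answer counts" scoring turns 60% into ~99.9% while a genuine vote only reaches ~66.5%.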
Chubby♨️@kimmonismus·
So cool: Supermemory hits 99%, a SOTA memory score!

• Achieved ~99% on LongMemEval_s using the experimental ASMR (Agentic Search and Memory Retrieval) technique.
• Replaced vector search and embeddings with parallel observer agents extracting structured knowledge across six vectors from raw multi-session histories.
• Deployed specialized search agents for direct facts, related context, and temporal reconstruction; no vector database required.

Will be open source in 11 days!
Chubby♨️ tweet media
Dhravya Shah@DhravyaShah

x.com/i/article/2035…

56 replies · 108 reposts · 1.7K likes · 221.5K views
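The pipeline summarized above is not yet public, so its shape can only be guessed at. The sketch below is entirely hypothetical: the `llm` stub, the category names standing in for the unspecified "six vectors," and the orchestration are assumptions for illustration, not Supermemory's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a real model call; a real system would hit an LLM API here.
def llm(prompt: str, text: str) -> str:
    return f"[{prompt}] {text[:40]}"

# The "six vectors" are not specified anywhere public; these names are guesses.
OBSERVER_PROMPTS = [
    "direct facts", "entities", "preferences",
    "events", "relations", "temporal markers",
]

def observe(session: str) -> dict:
    """One observer agent per category, run in parallel over a raw session transcript."""
    with ThreadPoolExecutor(max_workers=len(OBSERVER_PROMPTS)) as pool:
        return dict(pool.map(lambda p: (p, llm(p, session)), OBSERVER_PROMPTS))

def answer(question: str, memory: list) -> str:
    """A 'search agent' reads the structured notes directly -- no vector index involved."""
    notes = "\n".join(v for m in memory for v in m.values())
    return llm(question, notes)

memory = [observe(s) for s in ["Alice moved to Lisbon in 2023.", "Alice adopted a cat."]]
print(answer("Where does Alice live?", memory))
```

The point of the skeleton is the trade-off Manthan raises elsewhere in this thread: every observer and search step here is an LLM pass, so removing the vector index moves cost and latency into inference rather than eliminating it.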
Sarah Wooders@sarahwooders·
Memory in the sense of recalling information is a solved problem, or at least as solved as it needs to be. That's why everyone is getting ~100% on all the meaningless "memory benchmarks". Memory in the sense of learning/improving over time is very much unsolved though.
48 replies · 19 reposts · 226 likes · 17.3K views
Courtland Leer@courtlandleer·
it’s hard for me to express sufficiently just how dishonest it is (if you really work on memory) to present a LongMemEval score as some kind of breakthrough. it’s a 3-year-old benchmark, the results here are dishonest, it’s a marginal number of tokens in contemporary ai, and everyone aces it already
47 replies · 20 reposts · 599 likes · 97K views
Rob Imbeault@RobImbeault·
@DhravyaShah Hahahahahah what’s funny is that benchmarks don’t go that high, so super impressive. Got the engagement you were looking for, I guess.
0 replies · 0 reposts · 9 likes · 327 views
Rob Imbeault@RobImbeault·
@JayGenXer This is such a no-brainer for Canada. Open up your market 10X!!
1 reply · 1 repost · 6 likes · 419 views
JayGen 𝕏 er🇨🇦
A FANTASTIC MESSAGE to Mark Carney from USA 🇺🇸 Senator John N. Kennedy
256 replies · 875 reposts · 2.5K likes · 57.5K views