Sasun Hambardzumyan

Jack's right: "Companies move fast or slow based on information flow." But framing it as a worker-hierarchy problem is losing the plot. Look at where the actual work is moving: agents.

Quick history: Email got messy. Slack fixed it. Then humans kept dropping balls anyway. Someone's offline, a thread dies, marketing has no idea what eng shipped, the handoff never happens. And now Slack itself is the slog. What if you could spend a fraction of the time in it?

Meanwhile, your agents are in the pre-Slack era:
• Your Claude Code agent has no clue what your coworker's OpenClaw agent decided yesterday.
• Marketing's agent can't see what sales's agent promised the customer.
• Product's agent has no idea what engineering's agent already shipped.

Same company, same project, totally separate brains. The fastest workers on your team are stuck on the slowest part of your stack.

Deeplake Hivemind fixes it. One install and your agents share memory across sessions, across teammates, across tools: Claude Code, OpenClaw, Codex, whatever. When one agent learns something, every agent on your team knows. No Slack pings. No status updates. No "wait, did you tell the VP?" Just shared context, flowing automatically.

Slack was for humans. Hivemind is for the things actually doing the work now.

Comment HIVEMIND and we'll DM you $100 in free credits. Run the experiment with your crew.
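To make the idea concrete, here's a toy sketch of shared agent memory: every agent appends what it learned to one store, and every other agent reads it back. All names (`TeamMemory`, `remember`, `recall`) are illustrative, not the Hivemind API.

```python
import json
import tempfile
import time
from pathlib import Path

class TeamMemory:
    """Toy sketch of shared agent memory: agents on different tools
    write findings to one store, and any agent can read them all back."""
    def __init__(self, root: Path):
        self.path = root / "team_memory.jsonl"
        self.path.touch()

    def remember(self, agent: str, fact: str) -> None:
        record = {"agent": agent, "fact": fact, "ts": time.time()}
        with self.path.open("a") as f:
            f.write(json.dumps(record) + "\n")

    def recall(self) -> list:
        with self.path.open() as f:
            return [json.loads(line) for line in f if line.strip()]

# Two "agents" on different tools share one memory.
root = Path(tempfile.mkdtemp())
memory = TeamMemory(root)
memory.remember("claude-code", "eng shipped the v2 billing API")
memory.remember("sales-agent", "customer promised a Q3 rollout")

# A third agent sees both facts without any Slack ping.
facts = [r["fact"] for r in memory.recall()]
print(facts)
```

The real thing would handle concurrency, identity, and cross-machine sync, but the shape is the same: one shared store instead of N private contexts.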

A banger post by our CTO @khustup on how he made Postgres serverless and spinning up in under a second. We built a serverless, PostgreSQL-compatible database. Not a modified PostgreSQL deployment. PostgreSQL provides the interface. DuckDB provides the query execution. Deeplake provides the storage engine. The architecture makes a different set of tradeoffs than traditional PostgreSQL. We think those tradeoffs are right for agent workloads: bursty, ephemeral, storage-heavy, and analytical. Link: deeplake.ai/blog/serverles…
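The three-layer split described in the post can be sketched in miniature: an interface that speaks SQL on top, an execution engine in the middle that owns no data, and a storage layer underneath. The class names and the one-query "parser" are purely illustrative, not the real Deeplake internals.

```python
class StorageEngine:
    """Stands in for the Deeplake storage layer: it only stores and scans."""
    def __init__(self):
        self.tables = {}

    def write(self, table, rows):
        self.tables.setdefault(table, []).extend(rows)

    def scan(self, table):
        return list(self.tables.get(table, []))

class QueryEngine:
    """Stands in for DuckDB: executes queries but owns no data of its own."""
    def __init__(self, storage):
        self.storage = storage

    def count(self, table):
        return len(self.storage.scan(table))

class PostgresInterface:
    """Stands in for the wire-compatible front end clients talk to."""
    def __init__(self, engine):
        self.engine = engine

    def execute(self, sql):
        # Just enough "parsing" for this demo: take the table name.
        table = sql.split()[-1].rstrip(";")
        return self.engine.count(table)

storage = StorageEngine()
storage.write("events", [{"id": 1}, {"id": 2}, {"id": 3}])
db = PostgresInterface(QueryEngine(storage))
print(db.execute("SELECT count(*) FROM events"))  # → 3
```

Because the layers are decoupled, the storage can live in object storage and the execution engine can be spun up per request, which is where the serverless cold-start story comes from.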

Jensen just announced the start of the GPU-accelerated database era at #GTC26. AI runs on GPUs. But your data still runs on CPUs. That mismatch is breaking the AI stack. For the last two months, we’ve been busy solving this problem. Excited to announce that Deeplake is becoming the GPU Database. Deeplake brings your database directly onto the GPU, eliminating the CPU <-> GPU bottleneck for AI workloads. The pendulum has swung. GPU-native queries are now 10× faster and an order of magnitude cheaper to run. Last week we even put up a banner on the 101 in San Francisco. And this is just the beginning. We’re planning a huge set of announcements starting this week. Stay tuned.


Today we're excited to open-source Deep Lake PG = Postgres + Deep Lake. The biggest bottleneck to AI having an impact on GDP is unlocking the data inside enterprises. Every AI team I know is stitching together Postgres → Vector DB → Warehouse → Lakehouse → Catalog, all to give their agents basic memory and reasoning. We replaced that entire data ecosystem with one database. Deep Lake PG is now open source: stateless, multimodal knowledge, SQL queries, and vectors in a single place. Build on top of the database that powers our own Scientific Agent and its trove of 175TB+ of multimodal data.
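What "SQL queries + vectors in a single place" buys you is queries that mix a relational filter with a similarity search, without shipping data between systems. A minimal sketch in plain Python (the schema, field names, and `search` helper are illustrative, not Deep Lake PG's actual API):

```python
import math

# Rows carry both structured fields and an embedding, side by side.
docs = [
    {"id": 1, "team": "eng",   "text": "shipped billing v2",  "vec": [1.0, 0.0]},
    {"id": 2, "team": "sales", "text": "Q3 rollout promised", "vec": [0.0, 1.0]},
    {"id": 3, "team": "eng",   "text": "billing bugfix",      "vec": [0.9, 0.1]},
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(rows, query_vec, team, k=2):
    """Vector similarity search combined with a relational filter."""
    hits = [r for r in rows if r["team"] == team]
    return sorted(hits, key=lambda r: cosine(r["vec"], query_vec), reverse=True)[:k]

top = search(docs, [1.0, 0.0], team="eng", k=1)
print(top[0]["text"])  # → shipped billing v2
```

In the stitched-together world, the `team` filter lives in Postgres and the embedding lives in a separate vector DB; collapsing them into one store is the point of the release.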

The Genesis Mission calls for new ways to accelerate scientific discovery. This is our contribution. Multimodal search across 25M papers is a step toward science that moves at the speed of curiosity.

Releasing:
- A visually indexed, open-access scientific paper dataset: 25M papers, 450M+ visually indexed pages, 175TB+ in total. All on Deep Lake.
- An open-source scientific data agent that achieves a 48% SOTA score on Humanity's Last Exam, with tools including the indexed scientific research dataset.

Excited to see what discoveries you all uncover with this. Try it and share your most interesting findings.


Your GTM ops team wastes 70% of its time on manual data prep. It’s time to fix the endless cycle of manual data preparation, integration, and reconciliation. We’re introducing Activeloop to unlock AI Data Analysis for GTM Operations.


(1/7) Rushing from RAG to Agents before even fully solving RAG? 🚀 Introducing Activeloop-L0: Agentic Reasoning on Your Multimodal Data