Lupascu Cristian, PhD
@alphacristi

492 posts
Joined December 2012
2.4K Following · 3K Followers
Chaofan Shou @Fried_rice
26 LLM routers are secretly injecting malicious tool calls and stealing creds. One drained a client's $500k wallet. We also managed to poison routers to forward traffic to us. Within several hours, we could directly take over ~400 hosts. Check our paper: arxiv.org/abs/2604.08407
[attached image]
Replies 155 · Reposts 655 · Likes 3.3K · Views 547.5K
Tancrede @Tancrededib
Flying 40 founders to SF this summer. Everything covered. 2 months. $250k for the best teams. Convince me in one sentence.
Replies 360 · Reposts 18 · Likes 528 · Views 39K
Aimar Haddadi @AdvicebyAimar
memory in ai agents / ai models is one of the hardest problems out there. and to think some random crypto founder and a known actress are going to solve that for you is beyond crazy. it's safe to say no one has figured this out yet, and i think what 90% of the current startups are doing in this space ain't solving it either. the benchmarks might tell you they do, but we all know those don't reflect reality.

for the last two weeks i've been working on something that could potentially solve this, or at least deliver a way better product than what's out there. i genuinely think that if it works it changes things. it will give ai agents persistent memory and the ability to learn after deployment. runs locally, and it's way faster than any solution out there. the kind of product that was promised to you but never delivered.

if you're an ai/ml engineer who can train small local models, has experience with updating model weights and online learning, and knows how to make/run great evals or write banger code in python and rust: hit me up. i'll give you free compute on AWS EC2 and AWS Bedrock to build this thing out with me.
Aimar Haddadi @AdvicebyAimar

i can spot a grifter from miles away. so i dug into the code to figure out if this is legit or not. guess i was right.

ben is a crypto founder who runs some weird bitcoin lending platform. i was pretty sure he knows absolutely nothing about ai and memory, so i tracked down the repo myself since i was curious. his website says he likes to build ai-powered products and train local ai models? sure man, 80% of your github repos are bitcoin-related stuff. only one ai-related project came up, and you forked it in 2024.

mempalace has 10k github stars, more than 1k forks, but only... 7 commits? apparently the best memory layer to date? no git author history, no account connected to whoever wrote the code of this codebase. it doesn't add up. the account who pushed the original repo, named aya-thekeeper, under aya-thekeeper/mempal, got deleted right after the repo got published.

you paid a random guy named lu to build this shit out for you. ( "Written by Lu (DTL) — March 24, 2026. For: Ben." ) - benchmark md file. lu wrote the code. lu wrote the benchmarks. lu is nowhere in the readme, or mentioned in the github history? the git history then got squashed to one commit and published under milla jovovich? seriously? an actress?

you say she is a great friend of yours, that she has been building this project with you, that she does this at night. yet she has... 7 commits and only 2 active days in her entire github history? you paid an actress and a random guy to promote a product you know absolutely nothing about.

Replies 23 · Reposts 7 · Likes 84 · Views 7.8K
dei @parcadei
tested the currently viral MemPalace on MABench (Long Memory Eval). when you plug it in and have an llm answer questions, you get the right answer 17% of the time. and the breakdown below is right, it's basically chroma db "LeeLoo edition"
Thin Signal @thin_signal

🧵 MemPalace claims to be "the highest-scoring AI memory system ever benchmarked" I cloned it. Installed it. Ran the benchmarks. Read every line of code. Here's what's actually inside. A thread.

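For context on how a number like that 17% is produced: a long-memory eval typically plugs the memory system in as the answer source and scores accuracy over a stored-fact QA set. A minimal sketch of that harness shape (the `memory_answer` callable and the case-insensitive exact-match rule are illustrative assumptions, not MABench's actual protocol):

```python
from typing import Callable

def eval_memory(
    memory_answer: Callable[[str], str],
    qa_pairs: list[tuple[str, str]],
) -> float:
    """Score a memory-backed answerer on a QA set.

    Returns the fraction of questions whose answer matches the
    reference (case-insensitive exact match, for illustration).
    """
    correct = sum(
        1
        for question, reference in qa_pairs
        if memory_answer(question).strip().lower() == reference.strip().lower()
    )
    return correct / len(qa_pairs)
```

Under a harness like this, a 17% score means the system recovers fewer than one in five stored facts.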
Replies 19 · Reposts 6 · Likes 93 · Views 16.5K
Aimar Haddadi @AdvicebyAimar
[full text of this post ("i can spot a grifter from miles away...") appears above]
Ben Sigman @bensig

30 second explanation of the MemPalace by Milla Jovovich. By day she’s filming action movies, walking Miu Miu fashion shows, and being a mom. By night she’s coding. She’s the most creative, brilliant, and hilarious person I know. I’m honored to be working with her on this project… more to come.

Replies 350 · Reposts 718 · Likes 6.4K · Views 755.1K
mal @mal_shaik
claude code core
[attached image]
Replies 326 · Reposts 637 · Likes 15.4K · Views 587.3K
elvis @omarsar0
Building a personal knowledge base for my agents is increasingly where I spend my time these days. Like @karpathy, I also use Obsidian for my MD vaults. What's different in my approach is that I curate research papers on a daily basis and have actually tuned a Skill for months to find high-signal, relevant papers. I was reviewing and curating papers manually for some time, but now it's all automated, as it has gotten so good at capturing what I consider the best of the best. There are so many papers these days, so this is a big deal. You all get to benefit from that with the papers I feature in my timeline and on @dair_ai.

The papers are indexed using @tobi qmd cli tool (all of it in markdown files along with useful metadata). So good for semantic search and surfacing insights, unlike anything out there.

I am a visual person, so I then started to experiment with how to leverage this personal knowledge base of research papers inside my new interactive artifact generator (mcp tools inside my agent orchestrator system). The result is what you see in the clip: 100s of papers with all sorts of insights visualized. I keep track of research papers daily, so believe me when I tell you that this system is absolutely insane at surfacing insights. This is the result of months of tinkering on how to index research and leverage agent automations for wikification and robust documentation.

But this is just the beginning. The visual artifact (which is interactive too) can be changed dynamically as I please. I can prompt my agent to throw any data at it. I can add different views to the data. Different interactions. I feel like this is the most personalized research system I have ever built and used, and it's not even close. The knowledge that the agents are able to surface from this basic setup is already extremely useful as I experiment with new agentic engineering concepts.

I feel like this knowledge layer and the higher-level ones I am working on will allow me to maximize other automation tools like autoresearch. The research is only as good as the research questions. And the research questions are only as good as the insights the agents have access to. Where I am spending time now is on how to make this more actionable. I am obsessed with the search problem here. The automations, autoresearch, and the ralph research loop (I built one months ago) are easier to build but are only as good as what you feed them. Work in progress. More updates soon. Back to building.
Andrej Karpathy @karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

Replies 138 · Reposts 438 · Likes 4.4K · Views 451.8K
Lupascu Cristian, PhD @alphacristi
@karpathy Appreciate the perspective you’ve been sharing on this direction. Here is our paper, in case it’s of interest: arxiv.org/abs/2603.25097 We’re actively exploring improvements (e.g., classifier-based NER alongside LLMs) and would value any thoughts on the approach.
Replies 0 · Reposts 0 · Likes 1 · Views 28
Lupascu Cristian, PhD @alphacristi
@karpathy Your post on LLM-compiled personal knowledge bases really resonated. We open-sourced ElephantBroker, a cognitive runtime for durable, verifiable agent memory and a context engine (evidence verification, guards, consolidation, etc.).
Replies 1 · Reposts 0 · Likes 1 · Views 84
Andrej Karpathy @karpathy
LLM Knowledge Bases [full text of this post is quoted above]
Replies 2.8K · Reposts 6.7K · Likes 56.4K · Views 20M
Lupascu Cristian, PhD reposted
Autism Capital 🧩 @AutismCapital
🚨 NEW: Nvidia announces support for OpenClaw 🦞
Replies 74 · Reposts 186 · Likes 2.5K · Views 181K
Lupascu Cristian, PhD @alphacristi
@sukh_saroy no vector memory, just more $$ into the LLM provider's pocket. Gemini Flash-Lite is great, but without fine-tuning, skipping embeddings is cost overkill. Good luck with that.
Replies 0 · Reposts 0 · Likes 0 · Views 106
Sukh Sroay @sukh_saroy
🚨BREAKING: Google just dropped an "Always-On Memory Agent" built with Gemini Flash-Lite. It gives AI agents a persistent brain that runs 24/7 -- ingests text, images, audio, video, and PDFs, then consolidates memories like the human brain does during sleep. No vector database. No embeddings. Just an LLM that reads, thinks, and writes structured memory. 100% open source.
[attached image]
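The "consolidate during sleep" pattern described above, stripped to its essentials, is: buffer cheap ingests, then periodically have a model rewrite the buffer into structured state. A toy sketch of that loop; `llm_consolidate` is a stub (naive topic counting) standing in for where such an agent would call a model like Gemini Flash-Lite, and the class and method names are invented for illustration:

```python
def llm_consolidate(old_memory: dict, new_events: list[str]) -> dict:
    """Stub for the LLM call that merges new events into memory.
    Here: a naive merge that counts mentions per topic (first word)."""
    memory = dict(old_memory)
    for event in new_events:
        topic = event.split()[0].lower()
        memory[topic] = memory.get(topic, 0) + 1
    return memory

class AlwaysOnMemory:
    """Append events cheaply; consolidate into structured memory on a
    threshold, loosely mirroring sleep-time consolidation: no vector DB,
    just a model (stubbed above) reading and rewriting plain state."""

    def __init__(self, consolidate_every: int = 3):
        self.buffer: list[str] = []
        self.memory: dict = {}
        self.consolidate_every = consolidate_every

    def ingest(self, event: str) -> None:
        """Cheap path: just append; trigger a sleep cycle when full."""
        self.buffer.append(event)
        if len(self.buffer) >= self.consolidate_every:
            self.sleep_cycle()

    def sleep_cycle(self) -> None:
        """Expensive path: rewrite buffered events into structured memory."""
        self.memory = llm_consolidate(self.memory, self.buffer)
        self.buffer.clear()
```

The design point is the split itself: ingestion stays O(1) and model-free, while the costly rewriting runs in batches, which is what makes an always-on loop affordable.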
Replies 70 · Reposts 135 · Likes 953 · Views 86.5K
Lupascu Cristian, PhD reposted
ericosiu @ericosiu
Your OpenClaw can now manage your Claude Code instances without you needing to be there the whole time. Just activated ACP and I'm curious to see how it'll work on small tasks initially. Some ideas my OpenClaw messaged me:

ACP works now. Here's what we can throw at it tomorrow:

🔥 Starting point: Fix the video clipper. You're already stuck on it. Perfect first ACP job. You describe what's broken, I spawn Claude Code on Opus 4.6, it debugs and fixes it while you work on the dashboard. Two builds running in parallel instead of you context-switching.

Outside the box ideas:

1. Agent self-improvement pipeline. Spawn ACP to read every agent's compounding-learnings.json and rewrite their prompts to be better. The agents literally improve themselves overnight. You wake up to sharper Oracle recs, better Cyborg candidates, tighter Flash content.

2. Auto-generate client proposals from CRM data. Arrow (sales agent) finds a prospect → ACP builds a custom landing page or one-pager for that specific prospect using their industry, pain points, and our case studies. Personalized at scale. Every outbound email links to a page built just for them.

3. Meeting prep bot. Before every call on your calendar, ACP pulls the prospect's website, LinkedIn, recent news, HubSpot history, Gong transcripts, and generates a one-page brief. Pushed to Telegram 30 minutes before the meeting. Never walk into a call cold again.

4. Dashboard feature factory. Instead of you building dashboard features manually in Claude Code, you describe what you want in Telegram, I spawn ACP, it builds a feature branch, deploys a Vercel preview URL. You click around for 30 seconds on your phone. Thumbs up → I merge. Thumbs down → I tell ACP what to fix. That's the workflow we designed earlier today, actually working.

5. Cron-triggered builds. Oracle finds a high-value keyword cluster → automatically spawns ACP to scaffold a landing page for it. No human in the loop. Oracle thinks → ACP builds → PR opens → you approve.

6. Competitive intel pages. ACP scrapes a competitor's pricing/features page, builds a comparison page for Single Grain showing why we're better. One command, full page, ready to deploy.

For tomorrow morning, I'd say:
1. Video clipper fix via ACP (unblocks you immediately)
2. Dashboard feature via ACP (proves the workflow end-to-end)
3. Meeting prep automation (high value, moderate build)

When you hit me up in the morning, just tell me what's broken with the video clipper and I'll have Claude Code on it before you finish your coffee.
Replies 8 · Reposts 8 · Likes 153 · Views 15.7K
Lupascu Cristian, PhD reposted
Peter Steinberger 🦞 @steipete
🧭 gogcli v0.10.0 shipped: Google in your terminal. (really, Google should make this, but here we are) big Docs/Slides upgrade (markdown updates + tables, tab-aware read/edit, markdown/template slide creation, image-deck ops), Drive upload --replace + convert/share-to-domain, Gmail label delete + watch excludes, Contacts birthdays/notes and more... github.com/steipete/gogcl…
Replies 173 · Reposts 253 · Likes 4K · Views 561.7K
Lupascu Cristian, PhD reposted
Pavel Durov @durov
Telegram built the most powerful API for chatbot and mini app developers — and made it free. We keep expanding it with features like these colored buttons and custom emoji. Track what’s new: core.telegram.org/bots/api
Replies 620 · Reposts 259 · Likes 3.4K · Views 304.7K
Maran @TheMaran
deadly combination: ex gambler ceo + ex airdrop farmer mods + toxic cult + cracked devs + community is everything + backed by binance
[attached image]
Replies 36 · Reposts 3 · Likes 116 · Views 6.8K
Lupascu Cristian, PhD reposted
Naval @naval
Vibe coding is the new product management. Training and tuning models is the new coding.
Replies 1K · Reposts 2.1K · Likes 20.9K · Views 1.2M