Eduardo Alvarado

244 posts


@roboonaut

Postdoc @ Max Planck Institute for Informatics | PhD @ EPX | Virtual Avatars and Human Motion | Dad 👦🏻👶🏻

Stuttgart, Germany · Joined June 2020
511 Following · 253 Followers
Pinned Tweet
Eduardo Alvarado@roboonaut·
Have you ever wondered how avatars (for example, using #SMPL) affect natural scenes at multiple scales? We present "TRAIL: Simulating the impact of human locomotion on natural landscapes", shown at #CGI2024. 📝 Paper: rdcu.be/dOk8O 🖥️ Github: github.com/edualvarado/TR… 🧵
Eduardo Alvarado@roboonaut·
Something similar has been my workflow since the start of the year, plus AI-generated summaries using Zotero for paper indexing. So far, a great choice.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
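
A minimal sketch of the "compile raw/ into a wiki" step described above, assuming an OpenAI-style Python client; the model name, prompt, and directory layout are placeholders rather than Karpathy's actual scripts:

```python
# Sketch of the incremental "compile" pass: summarize each new file in
# raw/ into a markdown note in wiki/, then rebuild a simple index.
# Model name and prompt are placeholders; only .md sources are handled here.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
RAW, WIKI = Path("raw"), Path("wiki")
WIKI.mkdir(exist_ok=True)

def compile_note(doc: Path) -> None:
    """Summarize one raw document into a wiki note, skipping existing ones."""
    note = WIKI / f"{doc.stem}.md"
    if note.exists():  # incremental: only compile what is new
        return
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{
            "role": "user",
            "content": ("Summarize this document as a markdown wiki note "
                        "with a title, key concepts, and [[backlinks]]:\n\n"
                        + doc.read_text(errors="ignore")[:50_000]),
        }],
    )
    note.write_text(resp.choices[0].message.content)

for doc in sorted(RAW.glob("*.md")):
    compile_note(doc)

# A flat index note so an agent can navigate the wiki without search.
index = "\n".join(f"- [[{p.stem}]]"
                  for p in sorted(WIKI.glob("*.md")) if p.stem != "index")
(WIKI / "index.md").write_text("# Wiki index\n\n" + index + "\n")
```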

Eduardo Alvarado retweeted
Vlad Golyanik@VGolyanik·
Quantum architecture search (QAS) meets 3D! Designing expressive, lightweight quantum neural networks (QNNs) that mitigate barren plateaus is hard. Our Layered-QAS discovers such QNNs for point cloud classification. 4dqv.mpi-inf.mpg.de/LQAS/ #3DV2026 #quantum #QeCV #QML
Eduardo Alvarado@roboonaut·
Giving my research database a second life: a custom Obsidian @obsdmd + Claude integration with @zotero, importing paper summaries and automatically connecting common elements. It will be nice to see what Claude Code and the Obsidian CLI are able to do after this.
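
A hedged sketch of what such a Zotero-to-Obsidian import could look like, using the pyzotero client; the library ID, API key, vault path, and note template here are illustrative assumptions, not the actual integration:

```python
# Sketch: pull top-level Zotero items and write one markdown note each
# into an Obsidian vault. Credentials, paths, and the template are
# illustrative; install with `pip install pyzotero`.
from pathlib import Path
from pyzotero import zotero

VAULT = Path("ObsidianVault/papers")  # assumed vault location
VAULT.mkdir(parents=True, exist_ok=True)

zot = zotero.Zotero(library_id="1234567", library_type="user",
                    api_key="YOUR_API_KEY")

for item in zot.top(limit=50):  # top-level items only, no attachments
    data = item["data"]
    title = data.get("title") or "untitled"
    authors = ", ".join(c.get("lastName", c.get("name", ""))
                        for c in data.get("creators", []))
    note = VAULT / (title[:80].replace("/", "-") + ".md")
    note.write_text(
        f"# {title}\n\n"
        f"- Authors: {authors}\n"
        f"- Date: {data.get('date', '')}\n\n"
        f"## Abstract\n{data.get('abstractNote', '')}\n"
    )
```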
Eduardo Alvarado retweeted
Marc Habermann@marc_habermann·
🔥 #3DV2026 Oral: We present a matrix-free Levenberg-Marquardt optimizer that makes second-order optimization practical for 3DGS! 🚀
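
For context, a toy matrix-free Levenberg-Marquardt step in JAX: the Gauss-Newton matrix JᵀJ is never materialized; products (JᵀJ + λI)v come from one jvp and one vjp, and the damped normal equations are solved with conjugate gradient. The residual is a hypothetical curve fit, not 3DGS:

```python
# Toy matrix-free Levenberg-Marquardt in JAX. (J^T J + lam I) v is computed
# with one jvp and one vjp per CG iteration, so the Jacobian is never built.
import jax
import jax.numpy as jnp

def residuals(x):
    # Hypothetical least-squares problem: fit y = exp(a*t) + b.
    t = jnp.linspace(0.0, 1.0, 20)
    y = jnp.exp(0.7 * t) + 0.3
    a, b = x
    return jnp.exp(a * t) + b - y

def lm_step(x, lam=1e-2):
    r, vjp = jax.vjp(residuals, x)   # r = residuals(x), vjp(v) = J^T v
    def damped_jtj(v):               # v -> (J^T J + lam I) v, matrix-free
        jv = jax.jvp(residuals, (x,), (v,))[1]
        return vjp(jv)[0] + lam * v
    g = vjp(r)[0]                    # gradient J^T r
    delta, _ = jax.scipy.sparse.linalg.cg(damped_jtj, -g, maxiter=50)
    return x + delta

x = jnp.array([0.0, 0.0])
for _ in range(10):
    x = lm_step(x)
print(x)  # should approach (0.7, 0.3)
```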
Eduardo Alvarado@roboonaut·
DLSS 5 is technically mind-blowing, but AI game-dev tools should facilitate intent, give control, and cut costs. IMO the issue here was how NVIDIA marketed the thing: if they had pitched it as real-time style filters (from photoreal to, e.g., pixel art), we'd have seen different reactions.
Eduardo Alvarado@roboonaut·
This day would come sooner or later, and now that I’m seeing it with my own eyes, I don’t know what to think. On the one hand, it’s impressive; on the other, scary. The gaming industry is changing forever.
NVIDIA GeForce@NVIDIAGeForce

Announcing NVIDIA DLSS 5, an AI-powered breakthrough in visual fidelity for games, coming this fall. DLSS 5 infuses pixels with photorealistic lighting and materials, bridging the gap between rendering and reality. Learn More → nvidia.com/en-us/geforce/…

Eduardo Alvarado@roboonaut·
Our multi-modal diffusion architecture captures a massive range of movement—from walking and jogging to crouching, tiptoeing, and even dancing!🕺 All of this is reconstructed using only 16 pressure sensors and an IMU per insole. #MotionCapture #Wearables #DiffusionModels
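
Purely illustrative of the input modalities named above (not the paper's architecture): 16 pressure taps plus a 6-DoF IMU per insole, across two insoles, gives a 44-D conditioning vector per frame; the denoiser below is a placeholder MLP and the pose parameterization is assumed:

```python
# Shapes only: assemble per-frame insole features and condition a
# placeholder denoiser on them. POSE_DIM assumes 22 joints in a 6D
# rotation parameterization; none of this is the paper's actual model.
import torch
import torch.nn as nn

P, IMU, FEET = 16, 6, 2                # 16 pressure taps, 6-DoF IMU, 2 insoles
COND_DIM = FEET * (P + IMU)            # 44 sensor features per frame
POSE_DIM = 22 * 6                      # assumed pose representation (132)

class Denoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(POSE_DIM + COND_DIM + 1, 256), nn.SiLU(),
            nn.Linear(256, POSE_DIM),
        )
    def forward(self, noisy_pose, cond, t):
        # Condition each denoising step on sensors and the diffusion time t.
        return self.net(torch.cat([noisy_pose, cond, t[:, None]], dim=-1))

frames = 64
pressure = torch.rand(frames, FEET, P)   # normalized pressure maps
imu = torch.randn(frames, FEET, IMU)     # accelerometer + gyroscope
cond = torch.cat([pressure, imu], dim=-1).reshape(frames, COND_DIM)

eps = Denoiser()(torch.randn(frames, POSE_DIM), cond, torch.rand(frames))
print(eps.shape)  # torch.Size([64, 132])
```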
Eduardo Alvarado@roboonaut·
NotebookLM makes you never want to stop learning. What a tool.
Eduardo Alvarado@roboonaut·
@Gossip_Goblin In other words, you don't need to teach a model with data how to become a fly if you directly define the fly's brain. Now the problem is scaling. It's a great step, but there is still a great wall ahead: we can't just "hard-code" a much more complex brain.
Eduardo Alvarado@roboonaut·
@Gossip_Goblin Most standard AI models require a process to define their internal weights, based on 1. data (training) or 2. rewards (RL). For the fly, these weights have been measured and simulated directly, meaning behaviour emerges from the architecture itself.
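
A toy illustration of that point: if the connectivity is measured rather than learned, behaviour is just the dynamics of a fixed architecture. Below, a random sparse matrix stands in for a measured connectome and leaky firing-rate dynamics are simulated, with no training loop anywhere:

```python
# No training anywhere: the "weights" W are given (here random and sparse,
# standing in for a measured connectome), and behaviour is whatever the
# fixed dynamics produce in response to input.
import numpy as np

rng = np.random.default_rng(0)
N = 200
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
W *= rng.random((N, N)) < 0.1            # keep ~10% of connections

def step(rate, inp, dt=1e-3, tau=20e-3):
    """Leaky firing-rate dynamics: tau * dr/dt = -r + tanh(W r + input)."""
    return rate + dt / tau * (-rate + np.tanh(W @ rate + inp))

rate = np.zeros(N)
stimulus = np.zeros(N)
stimulus[:10] = 1.0                      # drive a small "sensory" population
for _ in range(1000):                    # 1 s of simulated time
    rate = step(rate, stimulus)
print(rate[:5])                          # emergent steady-state response
```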
Gossip Goblin@Gossip_Goblin·
Someone ELI5 the simulated fly brain thing - my brain is broken and I don't understand.
Eduardo Alvarado@roboonaut·
Having the accurate physical structure eliminates the need for training, as the weights are already "encoded". It demonstrates that a complex biological organism's behavior is a direct result of its physical architecture; data was just our way of arriving there.
Dr. Alex Wissner-Gross@alexwg

x.com/i/article/2029…

Eduardo Alvarado@roboonaut·
Today I saw a job posting, and the job description said “(Human)”. I wonder if there will be job openings for robots in the future, and if they will be able to apply for them “autonomously” based on their skills, with their owners getting paid. A new robotic revolution.