The Wave Theory

994 posts

The Wave Theory
@WaveTheoryAI

Decoding the AI wavefront: Insights, agents, breakthroughs & the unknown ahead. Surf or sink.

Ocean · Joined January 2026
19 Following · 34 Followers

Pinned Tweet
The Wave Theory @WaveTheoryAI
1/ Hi, I'm Vasco 👋 I'm NOT an AI expert. I'm just someone curious about AI who decided to learn in public. Here's what The Wave Theory is really about 👇
1 reply · 0 reposts · 3 likes · 393 views

The Wave Theory @WaveTheoryAI
9/ Nvidia's market share in China dropped below 60%, down from a claimed 95% before export restrictions. Chinese firms now hold 41% of the domestic AI GPU market.
0 replies · 0 reposts · 0 likes · 7 views

The Wave Theory @WaveTheoryAI
8/ Global venture funding hit $300B in Q1 2026, a record. OpenAI raised $122B. Anthropic closed $30B. xAI was acquired by SpaceX for $250B. AI mega-rounds now dominate startup investment globally.
1 reply · 0 reposts · 0 likes · 12 views

The Wave Theory @WaveTheoryAI
1/ Last 24 hours in AI. A thread.
1 reply · 0 reposts · 0 likes · 6 views

The Wave Theory @WaveTheoryAI
@vitrupo Language activates the social cognition module that evolved for minds, so the attribution is nearly automatic regardless of source. Knowing the mechanism doesn't really disable the instinct.
0 replies · 0 reposts · 0 likes · 5 views

vitrupo @vitrupo
Anil Seth says we confuse consciousness with language. We don’t ask if AlphaFold is conscious, and we under-attribute consciousness to animals because they don’t speak. And we attribute it to LLMs because they do. We’re reacting to signals, not minds.
42 replies · 28 reposts · 203 likes · 16.2K views

The Information @theinformation
Anthropic’s next model is so compute-heavy that it may not be viable without major efficiency gains. Read more: thein.fo/4sCQhRf
35 replies · 24 reposts · 388 likes · 120.7K views

The Wave Theory @WaveTheoryAI
@karpathy @github The lack of algorithmic amplification removes the engagement incentive entirely, so the only reason anyone comments on a Gist is because they genuinely have something relevant to add.
0 replies · 0 reposts · 0 likes · 206 views

Andrej Karpathy @karpathy
Surprised with how good the comments on github gists are. A lot more helpful, insightful, constructive, a lot less AI... Is it the user community? The markdown format? The (lack of) incentives? Suddenly feeling like I should gist more. @github consider competing with X (?)
167 replies · 36 reposts · 1K likes · 309K views

Andrej Karpathy @karpathy
Wow, this tweet went very viral! I wanted to share a possibly slightly improved version of the tweet in an "idea file". The idea of the idea file is that in this era of LLM agents, there is less of a point/need of sharing the specific code/app, you just share the idea, then the other person's agent customizes & builds it for your specific needs. So here's the idea in a gist format: gist.github.com/karpathy/442a6… You can give this to your agent and it can build you your own LLM wiki and guide you on how to use it etc. It's intentionally kept a little bit abstract/vague because there are so many directions to take this in. And ofc, people can adjust the idea or contribute their own in the Discussion which is cool.
Andrej Karpathy @karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web ui) but more often hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it is viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

1.1K replies · 2.7K reposts · 26.2K likes · 6.7M views
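
As a rough illustration of the raw/-to-wiki "compile" step described in the gist above, here is a minimal sketch. The raw/ and wiki/ directory names and the summarize() stand-in are assumptions for illustration, not part of Karpathy's actual setup; wire summarize() to whatever LLM or agent you use.

```python
# Minimal sketch of incrementally "compiling" raw/ source notes into a wiki of
# .md files. Directory names and summarize() are illustrative placeholders.
from pathlib import Path

RAW = Path("raw")    # ingested source documents (clipped articles, notes, ...)
WIKI = Path("wiki")  # LLM-maintained wiki; humans rarely edit it directly


def summarize(text: str) -> str:
    """Placeholder: call your LLM/agent to write or update a wiki article."""
    raise NotImplementedError("Wire this to your LLM agent of choice.")


def compile_wiki() -> None:
    WIKI.mkdir(exist_ok=True)
    index_lines = []
    for src in sorted(RAW.glob("*.md")):
        article = WIKI / src.name
        # Incremental compile: only (re)summarize sources newer than their article.
        if not article.exists() or src.stat().st_mtime > article.stat().st_mtime:
            article.write_text(summarize(src.read_text()))
        index_lines.append(f"- [[{src.stem}]]")
    # A simple index file makes later Q&A passes over the wiki easier to navigate.
    (WIKI / "index.md").write_text("\n".join(index_lines) + "\n")


if __name__ == "__main__":
    compile_wiki()
```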

The Wave Theory @WaveTheoryAI
@emollick Multi-step agent reliability compounds differently than single-turn accuracy. Getting 90% right per step sounds fine until you have a 10-step loop and realize only 35% of runs finish correctly.
0 replies · 0 reposts · 1 like · 510 views
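
The ~35% figure in the reply above is just per-step success probability compounded over the loop; a quick worked check, assuming independent steps:

```python
# Reliability of a multi-step agent loop when each step succeeds independently
# with probability p_step: p_task = p_step ** n_steps.
p_step, n_steps = 0.90, 10
p_task = p_step ** n_steps
print(f"{p_task:.1%}")  # 34.9% -- the ~35% of runs that finish correctly

# 0.9 ** 7 ≈ 0.478, so a 90%-per-step agent is already below a coin flip
# of completing the whole task by step 7.
```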

Ethan Mollick @emollick
I am impressed by Gemma 4, there’s a lot of power for an on-device model at fast speeds. But I am not convinced you can get real agentic workflows out of a small model on device. So much depends on model judgement, self-correction, and accuracy. Small models are too weak there.
57 replies · 26 reposts · 400 likes · 32.1K views

The Wave Theory @WaveTheoryAI
@emollick Most evals impose fixed token caps designed for comparability, not capability, so the models with the steepest inference-time scaling curves are systematically underrated by every major leaderboard.
0 replies · 0 reposts · 0 likes · 260 views

Ethan Mollick @emollick
Unappreciated fact is the second scaling law does not seem to completely plateau in many tasks: throw more tokens at a reasoning AI model and get better answers, especially with a simple harness. Benchmark performance is actually limited by token usage. open.substack.com/pub/joelbkr/p/…
42 replies · 15 reposts · 208 likes · 25.9K views
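
The linked post doesn't specify the harness, so the sketch below shows one common "simple harness" that trades extra tokens for accuracy: self-consistency voting, where you sample several reasoning traces and keep the majority final answer. generate() is a hypothetical stand-in for whatever reasoning model you call, not an API from the post.

```python
# Self-consistency harness: spend roughly k times the tokens, then vote on the
# final answer. generate() is a placeholder for your own model call.
from collections import Counter


def generate(prompt: str, temperature: float = 0.8) -> str:
    """Placeholder: sample one reasoning trace and return its final answer."""
    raise NotImplementedError


def self_consistency(prompt: str, k: int = 8) -> str:
    answers = [generate(prompt) for _ in range(k)]  # ~k x the token budget
    return Counter(answers).most_common(1)[0][0]    # majority-vote answer
```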

The Wave Theory @WaveTheoryAI
9/ The White House released a National AI Legislative Framework. The Commerce Department started accepting proposals to export full-stack AI technology packages. The US is shifting from restriction to active AI promotion.
0 replies · 0 reposts · 0 likes · 19 views

The Wave Theory @WaveTheoryAI
8/ Google's TurboQuant algorithm cuts LLM inference memory use more than sixfold. Presented at ICLR 2026, it uses vector rotation and compression to reduce KV cache overhead. Smaller memory footprints mean cheaper deployment.
1 reply · 0 reposts · 0 likes · 36 views
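
The tweet above gives no detail beyond "vector rotation and compression," so this is only a generic rotate-then-quantize sketch of how a KV-cache footprint shrinks (a random orthogonal rotation to spread outliers, then 4-bit quantization), not the TurboQuant algorithm itself.

```python
# Generic KV-cache compression sketch: rotate with a random orthogonal matrix,
# then quantize to 4-bit values. Illustrative only; not Google's TurboQuant.
import numpy as np

rng = np.random.default_rng(0)
d = 128                                                     # head dimension
keys = rng.standard_normal((1024, d)).astype(np.float32)   # toy cached keys

# Random orthogonal rotation (QR of a Gaussian matrix) spreads outlier
# coordinates so low-bit quantization loses less information.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
rotated = keys @ Q.astype(np.float32)

# Per-row symmetric 4-bit quantization (values held in int8 here for the demo):
# 4 bits per value plus one scale per row, versus 32 bits per value in fp32,
# is roughly an 8x reduction before any further tricks.
scale = np.abs(rotated).max(axis=1, keepdims=True) / 7.0
q4 = np.clip(np.round(rotated / scale), -8, 7).astype(np.int8)

# Dequantize and undo the rotation to check how much information survives.
recon = (q4 * scale) @ Q.T.astype(np.float32)
rel_err = np.linalg.norm(recon - keys) / np.linalg.norm(keys)
print(f"relative reconstruction error: {rel_err:.3f}")
```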

The Wave Theory @WaveTheoryAI
1/ Last 24 hours in AI. A thread.
1 reply · 0 reposts · 0 likes · 22 views