Vaughn Davis II

1.1K posts

@Vtothe2ndPower

Founder & CEO Hyper Nimbus | Hyper Talent

California, USA · Joined July 2018
897 Following · 1.5K Followers
Vaughn Davis II reposted
How To AI @HowToAI_
MIT proved every major AI model is secretly converging on the same "brain." It's called the "platonic representation hypothesis," and it's one of the most mind-blowing papers you'll ever read.

You train a vision model purely on images. You train a language model purely on text. They use completely different architectures. They process completely different data. They should have completely different "brains."

But as these models scale up, something impossible is happening. When researchers measure how they organize information, the mathematical geometry is identical. A model that only "sees" images and a model that only "reads" text are measuring the distance between concepts in the exact same way. The models are converging.

The researchers named this after Plato's Allegory of the Cave. Plato believed that everything we experience is just a shadow of a deeper, hidden, perfect reality. The paper argues that AI models are doing the exact same thing. They are looking at the different "shadows" of human data: text, images, audio. And they are independently discovering the same underlying structure of the universe to make sense of it.

It doesn't matter what company built the AI. It doesn't matter what data it was trained on. As models get larger, they stop memorizing their specific tasks. They are forced to build a statistical model of reality itself. And there is only one reality to map. (arXiv, 2024)
[image attached]
243 replies · 826 reposts · 4K likes · 293.9K views
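The convergence claim above is measurable: if two models embed the same list of concepts, you can compare the geometry of their nearest-neighbor structure. Below is a toy sketch, not the paper's exact metric; the `mutual_knn_alignment` function and the synthetic "vision"/"text" embeddings are illustrative stand-ins for real model representations.

```python
import numpy as np

def mutual_knn_alignment(emb_a, emb_b, k=5):
    """Fraction of shared k-nearest neighbors, per concept, between two
    embedding spaces over the same concept list (a toy stand-in for the
    paper's alignment metric)."""
    def knn(emb):
        d = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)  # pairwise dists
        np.fill_diagonal(d, np.inf)                             # exclude self
        return np.argsort(d, axis=1)[:, :k]                     # k nearest ids
    nn_a, nn_b = knn(emb_a), knn(emb_b)
    return float(np.mean([len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]))

# Two "models" that never saw each other's data, but share latent structure:
rng = np.random.default_rng(0)
concepts = rng.normal(size=(100, 16))              # shared underlying "reality"
emb_vision = concepts @ rng.normal(size=(16, 32))  # "vision" view of it
emb_text = concepts @ rng.normal(size=(16, 24))    # "text" view of it
score = mutual_knn_alignment(emb_vision, emb_text)  # well above chance (~k/99)
```

Because both embedding spaces are projections of the same latent geometry, their neighbor structure overlaps far more than two unrelated embeddings would, which is the shape of the paper's argument in miniature.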
Vaughn Davis II reposted
Elon Musk @elonmusk
Universal HIGH INCOME via checks issued by the Federal government is the best way to deal with unemployment caused by AI. AI/robotics will produce goods & services far in excess of the increase in the money supply, so there will not be inflation.
46.5K replies · 22.6K reposts · 195.2K likes · 69.3M views
Vaughn Davis II reposted
Priyanka Vergadia @pvergadia
🚨 BREAKING: Researchers just mathematically proved that AI layoffs will collapse the economy, and every CEO already knows it.

The AI Layoff Trap. A game theory paper from UPenn + Boston University is glaringly important. 100K+ tech layoffs in 2025. 80% of US workers exposed. And no market force can stop it.

→ Every company fires workers to cut costs
→ Every fired worker stops buying products
→ Revenue collapses across every sector
→ The companies that fired everyone go bankrupt

It's a Prisoner's Dilemma with math behind it. Automate and you survive short-term. Don't automate and your competitor kills you. But everyone automating destroys the demand that makes all companies viable.

UBI (universal basic income) won't fix it. Profit taxes won't fix it. The researchers found only one solution: a Pigouvian automation tax, a "robot tax."

The AI trap on the economy is here!
[image attached]
559 replies · 2.2K reposts · 8.9K likes · 1.5M views
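The Prisoner's Dilemma structure described above can be written down as a toy payoff matrix. The numbers below are illustrative, not from the paper: automating is each firm's dominant strategy, yet mutual automation leaves both worse off than mutual restraint.

```python
# Payoffs (firm A, firm B) under a toy demand-feedback assumption:
# automating cuts your costs, but every automated firm removes
# consumer income, shrinking everyone's revenue.
# Strategies: "K" = keep workers, "A" = automate.
payoffs = {
    ("K", "K"): (3, 3),   # both keep workers: healthy demand
    ("K", "A"): (0, 4),   # your rival undercuts you
    ("A", "K"): (4, 0),
    ("A", "A"): (1, 1),   # demand collapses for both
}

def best_response(opponent):
    """Firm A's best reply to a fixed opponent strategy."""
    return max(["K", "A"], key=lambda s: payoffs[(s, opponent)][0])

# Automating is the best reply to either opponent move...
print(best_response("K"), best_response("A"))  # prints: A A
# ...yet the resulting equilibrium (A, A) pays (1, 1),
# strictly worse for both than (K, K)'s (3, 3).
```

A Pigouvian "robot tax" in this framing would lower the automation payoffs until keeping workers becomes the equilibrium, which is the mechanism the tweet attributes to the paper.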
Vaughn Davis II reposted
Ryan Fournier @RyanAFournier
Sam Altman has admitted he is on a waitlist for a procedure that would digitize his brain. The procedure would kill him. He considers this an acceptable trade for digital immortality. This is the person making decisions about the future of artificial intelligence for hundreds of millions of users. A man who views ending his own biological life as a reasonable step toward uploading his consciousness to the cloud. These are not the priorities of a stable leader.
3.1K replies · 6.2K reposts · 43.8K likes · 3M views
Vaughn Davis II reposted
Andrej Karpathy @karpathy
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
2.9K replies · 7.1K reposts · 58.5K likes · 21M views
Vaughn Davis II reposted
God of Prompt @godofprompt
HOLY SHIT... Google just proved that reasoning models aren't thinking, they're arguing.

> DeepSeek-R1 spontaneously generates internal multi-agent debates inside its own chain of thought. Nobody trained it to do this. Reinforcement learning for accuracy produced it automatically. The model rediscovered what centuries of epistemology already knew: robust reasoning is a social process.

> Google researchers analyzed frontier reasoning models, including DeepSeek-R1 and QwQ-32B, and found something nobody expected. These models don't improve by thinking longer. They improve by arguing internally. Distinct cognitive perspectives emerge inside the chain of thought: perspectives that debate, question, verify, and contradict each other. The researchers call it a "society of thought." It causally accounts for the accuracy advantage on hard reasoning tasks. When they explicitly primed and amplified this multi-party internal conversation, performance improved further.

> The emergence finding is what matters. None of these models were trained to produce internal debates. When reinforcement learning rewards a base model solely for getting the right answer, multi-perspective conversational behavior spontaneously increases. The optimization pressure alone rediscovered what cognitive science has argued for decades: that robust reasoning is fundamentally social, even when it happens inside a single mind. The model didn't learn to think. It learned to argue with itself.

> The implication for how we build AI is significant. Today's reasoning models produce a single conversation: what the researchers call an "AI town hall" transcript. But effective groups don't work that way. They have hierarchy, specialization, division of labor, structured disagreement. Brainstorming, devil's advocacy, constructive conflict. None of that is designed into current systems. It emerges accidentally, when it emerges at all. The entire toolkit of team science, small-group sociology, and organizational psychology has never been applied to AI reasoning. That's the next design space.

The broader argument Google is making:
→ The AI singularity will not be one godlike mind; it will be a plural, social, hybrid intelligence
→ Reasoning models spontaneously generate internal multi-agent debate without being trained to do so
→ Reinforcement learning for accuracy alone produces conversational, multi-perspective behavior
→ Every prior intelligence explosion (primate social groups, human language, writing, institutions) was social, not individual
→ RLHF as currently designed is a parent-child correction model, fundamentally unable to scale to billions of agents
→ The alternative: institutional alignment (roles, norms, and protocols, the way courtrooms and markets work), not individual virtue

> The singularity framing has always assumed intelligence is a single quantity that goes up. Google's argument is that intelligence was never a single quantity. Primate intelligence scaled with social group size, not habitat difficulty. Human language created what Tomasello calls the cultural ratchet: knowledge accumulating across generations without any individual reconstructing the whole. Writing and law externalized social intelligence into infrastructure. A Sumerian scribe running a grain accounting system didn't understand its macroeconomic function. The system was more intelligent than he was. AI is the next step in that sequence, not a break from it.

> What migrates into silicon isn't abstract reasoning. It's social intelligence in externalized form, every parameter a compressed residue of communicative exchange, encountering itself on a new substrate. The models that reason best aren't the ones with the most compute. They're the ones that best simulate the social process that produced human knowledge in the first place.

> The singularity isn't coming. It's already here, and it looks nothing like what anyone predicted.

> Not one godlike mind ascending. Billions of agents arguing, forking, debating, governing each other. Intelligence growing like a city, not a single meta-mind.
[image attached]
68 replies · 85 reposts · 470 likes · 37.5K views
Vaughn Davis II reposted
Santiago @svpino
Robotics is about to have its LLM moment. One of the big issues with industrial robots is having to program individual tasks down to the millimeter and hope nothing changes. Now imagine a foundation model that controls any robot for any task, learning from data rather than being manually programmed. This could become a flywheel: the more we use the robots, the more data we generate, and the better the robots become. We have no idea what's coming.
Quoting Skild AI @SkildAI:
Nearly every system today, from energy to chips to food, is bottlenecked by scarce human capital. We are changing that by building AI-powered industries of the future. Check out Skild Brain robustly assembling GPU racks, a highly precise task, live at #NvidiaGTC.
61 replies · 77 reposts · 514 likes · 60.1K views
Vaughn Davis II reposted
Neyazuddin Ansari @riyazz_ai
🚨 BREAKING: Stanford just unlocked the cheat code for infinite AI reasoning. Not an upgrade. Not another model. A completely new way for AI to teach itself.

Researchers at Stanford University just introduced a framework called Agent0… And it doesn't learn like anything we've seen before.

Most AI systems today depend on:
• Massive curated datasets
• Human feedback loops
• Predefined training pipelines

Agent0 throws all of that out. No labeled data. No human supervision. No hand-holding. Just pure self-evolution.

Here's what makes it wild: Agent0 starts from zero knowledge… Then improves by:
• Generating its own problems
• Solving them
• Learning from its own mistakes
• Iterating endlessly

It's basically AI teaching itself how to think. And the results? Honestly insane:
→ +18% improvement in mathematical reasoning
→ +24% boost in general reasoning tasks
→ Outperforms every existing self-play method currently available

This isn't incremental. This is a leap. But here's the craziest part: you can literally watch the system evolve… It begins with basic geometry problems (simple shapes, angles, proofs), then gradually levels up to:
• Multi-step logical reasoning
• Complex combinatorics
• Abstract problem-solving

No external help. Just self-driven intelligence scaling.

Why this matters: we might be entering a phase where AI no longer needs:
• Human-generated datasets
• Expensive labeling
• Constant retraining

Instead… AI systems could:
• Continuously improve themselves
• Adapt in real-time
• Unlock reasoning abilities we didn't explicitly program

If this direction scales… We're not just building smarter AI. We're building AI that learns how to become smarter on its own.
[image attached]
22 replies · 53 reposts · 236 likes · 20.6K views
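The generate-solve-learn loop described above can be caricatured in a few lines. This is a toy self-play curriculum, not Agent0's actual algorithm: the difficulty model, success probability, and update rule are all invented for illustration. The agent keeps proposing problems slightly beyond its current skill and learns only from the ones it solves.

```python
import random

def self_play_round(skill, rng):
    """One toy curriculum step (invented, not Agent0's update rule):
    propose a problem just past current skill, attempt it, and grow
    skill a little on each success."""
    gap = rng.uniform(0.0, 0.2)                 # self-generated difficulty gap
    solved = rng.random() < 1.0 / (1.0 + 5.0 * gap)  # harder gap, lower odds
    if solved:
        skill += 0.05 * (gap + 0.1)             # learn from solved problems
    return skill

rng = random.Random(42)
skill = 0.0
for _ in range(500):
    skill = self_play_round(skill, rng)
# skill grows steadily with no external data or labels
```

Even this caricature shows the key property the tweet is excited about: the curriculum is self-generated, so the loop never needs a human-labeled dataset to keep improving.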
Vaughn Davis II @Vtothe2ndPower
HyperClaw v0.1.0 is live. Built for people who need leverage, not more tools.
→ autonomous execution
→ your data stays yours
→ local or cloud deployment
This is about agency. github.com/mentatalbans/h…
1 reply · 0 reposts · 0 likes · 28 views
Vaughn Davis II @Vtothe2ndPower
Spoke on sovereign AI + agentic systems: where AI doesn't assist, it operates. At the same time, we launched HyperNimbus. Massive response from operators and enterprise leaders who understand the shift: software → intelligence infrastructure. Grateful for Switzerland: precision market, serious builders, real conversations. Now we deploy. Scale starts now.
[4 images attached]
1 reply · 0 reposts · 2 likes · 42 views
Vaughn Davis II reposted
Ejaaz @cryptopunk7213
Anthropic fucking killed it (again). biggest ai product launch of the year so far. claude can now control your entire computer autonomously. anything you can do on a computer - claude can. your very own digital employee.
- any app, browser, file, spreadsheet, tool claude can intelligently access and operate.
- claude controls your entire screen (like a human), no connectors. this is a huge step-up in intelligence.
- best part: you can text claude to do things from your phone and it'll do work on your computer!
- in the last week anthropic has shipped 9 features that have built up to this: a fully automated digital human.
unreal
Quoting Claude @claudeai:
You can now enable Claude to use your computer to complete tasks. It opens your apps, navigates your browser, fills in spreadsheets—anything you'd do sitting at your desk. Research preview in Claude Cowork and Claude Code, macOS only.
298 replies · 481 reposts · 8.1K likes · 2M views
Vaughn Davis II reposted
Tuki @TukiFromKL
🚨 Claude just shipped an AI that controls your entire computer.. opens apps.. browses the web.. fills spreadsheets.. while you're not even there.. that's OpenClaw's entire product.. shipped as a Monday night feature update.. Anthropic didn't buy them.. they replaced them.. every startup building on top of AI just watched their whole company become someone else's changelog.
Quoting Claude @claudeai:
You can now enable Claude to use your computer to complete tasks. It opens your apps, navigates your browser, fills in spreadsheets—anything you'd do sitting at your desk. Research preview in Claude Cowork and Claude Code, macOS only.
117 replies · 161 reposts · 2.3K likes · 683.7K views
Vaughn Davis II reposted
Jenny Zhang @jennyzhangzt
Introducing Hyperagents: an AI system that not only improves at solving tasks, but also improves how it improves itself.

The Darwin Gödel Machine (DGM) demonstrated that open-ended self-improvement is possible by iteratively generating and evaluating improved agents, yet it relies on a key assumption: that improvements in task performance (e.g., coding ability) translate into improvements in the self-improvement process itself. This alignment holds in coding, where both evaluation and modification are expressed in the same domain, but breaks down more generally. As a result, prior systems remain constrained by fixed, handcrafted meta-level procedures that do not themselves evolve.

We introduce Hyperagents: self-referential agents that can modify both their task-solving behavior and the process that generates future improvements. This enables what we call metacognitive self-modification: learning not just to perform better, but to improve at improving. We instantiate this framework as DGM-Hyperagents (DGM-H), an extension of the DGM in which both task-solving behavior and the self-improvement procedure are editable and subject to evolution.

Across diverse domains (coding, paper review, robotics reward design, and Olympiad-level math solution grading), hyperagents enable continuous performance improvements over time and outperform baselines without self-improvement or open-ended exploration, as well as prior self-improving systems (including DGM). DGM-H also improves the process by which new agents are generated (e.g. persistent memory, performance tracking), and these meta-level improvements transfer across domains and accumulate across runs.

This work was done during my internship at Meta (@AIatMeta), in collaboration with Bingchen Zhao (@BingchenZhao), Wannan Yang (@winnieyangwn), Jakob Foerster (@j_foerst), Jeff Clune (@jeffclune), Minqi Jiang (@MinqiJiang), Sam Devlin (@smdvln), and Tatiana Shavrina (@rybolos).
[image attached]
157 replies · 658 reposts · 3.6K likes · 498.5K views
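The two-level idea, evolving both the policy and the procedure that improves the policy, can be sketched as a toy hill-climbing loop. This is an illustration of the concept only, not DGM-H's implementation: `evaluate`, `make_mutator`, and the scalar "policy" are all invented for the sketch.

```python
import random

rng = random.Random(7)

def evaluate(policy):
    """Toy task score: higher the closer the scalar 'policy' is to a target."""
    return -abs(policy - 3.14)

def make_mutator(step):
    """The improvement procedure itself is a first-class, replaceable object."""
    def mutate(policy):
        return policy + rng.uniform(-step, step)
    mutate.step = step
    return mutate

policy, mutator = 0.0, make_mutator(1.0)
for _ in range(300):
    # Object level: propose an improved policy with the current mutator.
    cand = mutator(policy)
    if evaluate(cand) > evaluate(policy):
        policy = cand
    # Meta level: also propose an improved *mutator*, judged by the
    # quality of the candidates it produces (metacognitive self-modification).
    challenger = make_mutator(mutator.step * rng.choice([0.5, 2.0]))
    if evaluate(challenger(policy)) > evaluate(mutator(policy)):
        mutator = challenger
```

The fixed, handcrafted meta-procedure the abstract criticizes corresponds to freezing `mutator` at its initial value; letting the challenger loop run is the (very simplified) analogue of making the self-improvement procedure itself editable.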
Vaughn Davis II reposted
Charly Wargnier @DataChaz
Andrej Karpathy (@karpathy), OpenAI co-founder, ex-Tesla AI, "vibe coding" creator. In just 4 mins, he explains why Claude Skills, MCP servers, and AI agents are past the hype and are now the new baseline for building. Worth every second ↓
105 replies · 434 reposts · 3.6K likes · 414.9K views