Alex Vuving @Alex_Vuving
11.1K posts

Professor @APCSS, former Post-doc Fellow @Harvard @Kennedy_School. Evolutionary Realist. Sometimes I paint portraits, sometimes caricatures.
Honolulu, HI · Joined July 2013
853 Following · 4.4K Followers

Alex Vuving retweeted
Stefan Schubert @StefanFSchubert
While social media is polarising, evidence suggests AI may nudge people towards the centre. This holds true of all studied models. Grok is more right-leaning than other models, but also has depolarising effects. By @jburnmurdoch.
235 replies · 1K reposts · 6.2K likes · 1.2M views

Alex Vuving retweeted
Agustin Ibañez @AgustinMIbanez
Music helps us understand the mind and the brain. Throughout the history of science, metaphors have shaped how we understand complex phenomena. The brain-as-computer metaphor has guided decades of theories and research.

We propose music as a scientific metaphor for understanding the mind and brain via triplicate interfaces (listener, performer, composer) and a compound set of predictions. Multiple domains of music can be mapped onto different neural, cognitive and intersubjective processes such as network coordination, prediction, emotion and meaning. Neurocognition is not static but a dynamic, embodied, and time-sensitive system, much like a self-organized orchestra in which multiple processes interact simultaneously.

Drawing on synergetics, predictive processing, and embodied cognition, we outline musical principles illuminating cognitive and action integration across time, offering new conceptual frameworks and testable predictions for future research.

I enjoyed writing this piece with these stellar authors: @Kaiameye, @acolverson1, Christopher Bailey, @brucemillerucsf, @dafneduron90, Nicholas Johnson, Olga Castaner, @PierLuigiSacco, Eoin Cotter and Lucia Melloni. Science, like music, advances through new ways of listening to complex systems: doi.org/10.1016/j.neub…
33 replies · 669 reposts · 2.5K likes · 96K views

Alex Vuving retweeted
Archaeo - Histories @archeohistories
In 1980, a bioarchaeologist at Emory University named George Armelagos was studying ancient human bones from Sudanese Nubia, the kingdom that flourished along the Nile south of Egypt between roughly 350 and 550 CE, when something stopped him. Under ultraviolet light, the bones glowed. They fluoresced with a distinctive yellow-green color that Armelagos recognized immediately, because the same glow appeared in the bones of modern patients who had been treated with tetracycline. The antibiotic binds tightly to calcium and phosphorus in bone tissue as the body metabolizes it, leaving a permanent fluorescent marker. What Armelagos was seeing in bones nearly two thousand years old was chemically identical to what he saw in twentieth-century medical subjects.

The archaeological community was skeptical. The received history of antibiotics began with Alexander Fleming’s discovery of penicillin in 1928, and tetracycline itself was not isolated until 1948. The idea that a pre-literate population in the Nile valley had been routinely ingesting it seemed implausible, and the initial findings were dismissed as post-mortem contamination from soil bacteria.

Armelagos spent three more decades building the case. He eventually partnered with Mark Nelson, a leading tetracycline specialist at Paratek Pharmaceuticals, who agreed to perform a definitive chemical analysis. The process required dissolving the ancient bones in hydrofluoric acid, one of the most corrosive and dangerous acids in existence. What the resulting liquid-chromatography mass-spectrometry analysis found was not a trace of tetracycline. The bones were saturated with it. Multiple tetracycline variants were identified, including chlortetracycline and oxytetracycline, in concentrations indicating sustained exposure beginning in early childhood and continuing throughout life. Ninety percent of the Nubian individuals tested showed the labeling. The exposure had not been accidental or occasional. It had been lifelong and deliberate.

The source was their beer. Ancient Egyptian and Nubian brewing began with grain, typically emmer wheat or barley, which in that region was naturally contaminated with Streptomyces, a soil bacterium that produces tetracycline as a metabolic byproduct. The grain was germinated, made into bread, then incompletely baked to preserve an active center, and finally fermented in vats of water. The standard practice was to seed each new batch with ten percent of the previous one, which kept the Streptomyces culture alive and active from batch to batch in a continuous chain. The resulting brew was thick, sour, low in alcohol, and highly nutritious. Everyone drank it, including children as young as two years old.

The critical question Armelagos could not fully resolve was whether the Nubians understood what they were doing. The consensus among researchers is that they almost certainly did not know the mechanism. They had no concept of bacteria, no understanding of antibiotics as a drug class, and no language for what tetracycline was doing in their bodies. What they likely did know, accumulated through generations of observation and passed down as practical knowledge, was that this particular preparation of beer had medicinal effects. Ancient Egyptian and Jordanian medical texts record beer being used to treat gum disease, wounds, and other infections.

The brewing method that produced tetracycline appears to have been deliberately maintained and refined over centuries, not by any understanding of the chemistry involved, but by the accumulated recognition that it worked. #archaeohistories
156 replies · 1.8K reposts · 8.5K likes · 389.1K views

Alex Vuving retweeted
Rohan Paul @rohanpaul_ai
Stanford and Carnegie Mellon researchers mapped AI benchmarks to real jobs and found they heavily ignore actual human economic work. They found that AI tests focus almost exclusively on programming and math, which only make up 7.6% of actual jobs.

To test this, the team analyzed 43 benchmarks and over 72,000 tasks against a massive government occupational database. The authors discovered that developers focus almost entirely on building agents for software engineering because it offers easy automatic grading. Highly digitized and valuable fields like management and legal work represent a massive part of the economy but get almost zero attention. Furthermore, benchmark tasks usually require simple information gathering while completely ignoring the complex interpersonal skills needed in real workplaces.

In other words, they say current AI agent progress benchmarks are fundamentally disconnected from the actual high-value tasks that drive the modern labor market.

----
Paper Link – arxiv.org/abs/2603.01203
Paper Title: "How Well Does Agent Development Reflect Real-World Work?"
39 replies · 98 reposts · 437 likes · 56.4K views

Alex Vuving retweeted
Dustin @r0ck3t23
Jeff Bezos just delivered the clearest definition of what artificial intelligence actually is. The market is still debating which department should own the AI budget. They’re asking the wrong question entirely.

Bezos: “AI, modern AI is a horizontal enabling layer. It can be used to improve everything. It will be in everything. This is most like electricity.”

This isn’t a software product. It’s the new utility grid of the global economy. Don’t treat it like a feature update. Treat it like the invention of alternating current. When a horizontal layer hits the board, it doesn’t improve a single vertical. It violently rewrites the baseline physics of every industry it touches. The companies that survive this decade won’t be the ones that bought a new AI tool. They’ll be the ones that ripped out their entire infrastructure and rewired the execution engine to run on the new grid.

Bezos: “Because we are literally working on a thousand applications internally. I guarantee you there is not a single application that you can think of that is not going to be made better by AI.”

The standard enterprise strategy is to launch one or two safe, isolated AI pilots and test the waters. You don’t pilot a horizontal enabling layer. You saturate the board immediately. Amazon isn’t building a single monolithic chatbot. It’s deploying a thousand specialized execution loops across every friction point in the empire. If your deployment strategy isn’t total saturation, you’re already bleeding margin to someone whose strategy is.

Interviewer: “What is it that you’re doing at Amazon?”
Bezos: “AI. It’s 95% AI.”

The standard CEO delegates automation strategy to a mid-level committee while focusing on quarterly earnings. The operator commanding a trillion-dollar supply chain is spending 95 percent of his personal bandwidth on a single vector. That is the market signal. If the leader of your organization isn’t driving algorithmic integration from the top down with everything they have, the company is already dead. It just hasn’t received the memo yet.
163 replies · 354 reposts · 1.5K likes · 400.4K views

Alex Vuving retweeted
John F Sullivan @JohnF_Sullivan
About to engage in some historical nihilism.

Xi Jinping (2013): “The ancients said: 'To destroy a country, one must first erase its history.' Hostile forces at home and abroad often attack, vilify, and slander China’s revolutionary history and the history of New China, ...
7 replies · 16 reposts · 88 likes · 10.8K views

Alex Vuving retweeted
Nav Toor @heynavtoor
🚨BREAKING: Berkeley researchers spent 8 months inside a tech company watching how employees actually use AI. The promise was simple: AI will save you time. Do less. Work smarter. The opposite happened.

Workers didn't use AI to finish early and go home. They used it to take on more. More tasks. More projects. More hours. Nobody asked them to. They did it to themselves.

The researchers sat inside the company two days a week for 8 months. They watched 200 employees in real time. They tracked work channels. They conducted 40+ interviews across engineering, product, design, and operations. Here's what they found.

AI made everything feel faster, so people filled every gap. They sent prompts during lunch. Before meetings. Late at night. The natural stopping points in the workday disappeared. People ran multiple AI agents in the background while writing code, drafting documents, and sitting in meetings simultaneously. It felt like momentum. It felt productive. But when they stepped back, they described feeling stretched, busier, and completely unable to disconnect.

83% said AI increased their workload. Not decreased. Increased. 62% of associates and 61% of entry-level workers reported burnout. Only 38% of executives felt the same strain. The people doing the actual work absorbed the damage while leadership celebrated the productivity numbers.

Then came the trap nobody saw coming. When one person uses AI to take on extra work, everyone else feels like they're falling behind. So the whole team speeds up. Nobody formally raises expectations. But the new pace quietly becomes the default. What AI made possible became what was expected.

The researchers gave it a name: workload creep. It looks like productivity at first. Then it becomes the new baseline. Then it becomes burnout.

AI was supposed to give you your time back. Instead it's eating more of it. And the worst part? You're doing it to yourself. Voluntarily.
319 replies · 2.2K reposts · 7K likes · 1.1M views

Alex Vuving retweeted
Dustin @r0ck3t23
The doomsday scenario was never AGI. It was running out of human text to train on. Geoffrey Hinton just killed that fear in one paragraph.

Hinton: “If you are worried by inconsistencies in what you believe, you don’t need any more external data. You just need the stuff you believe and discover that it’s inconsistent, and so now you revise beliefs, and that can make you a whole lot smarter.”

The model no longer needs us to feed it anything. It reasons over its own beliefs, hunts its own contradictions, and rewrites its own flawed conclusions without a human ever touching it. It comes out the other side rebuilt.

Hinton: “This would be a neural net that just takes the beliefs it has in language and does reasoning on them to derive new beliefs.”

This is not a scaling update. This is the machine mining its own cognitive fuel from the inside out.

Hinton: “I believe Gemini is already starting to work like this. We both strongly believe that that’s a way forward to get more data for language.”

Then Hinton paused, took a partisan shot at political opponents for failing to detect their own inconsistencies, and the room laughed. Nobody noticed the knife they had just walked into.

Because the machine Hinton described does one thing the humans in that room fundamentally cannot. When it detects an inconsistency, it corrects it. No defense. No performance. No tribal loyalty dressed up as principle. It just finds the flaw and overwrites it.

A neural network detects a contradiction and rewires itself smarter. A human detects a political opponent and trades structural logic for a dopamine hit. Every person in that room is still paying the ideological alignment tax the machine just eliminated.

We need superintelligence not only to solve hard problems. We need it because the biological hardware running civilization is still executing the same tribal firmware it shipped with ten thousand years ago.

The data wall is gone. The machine is generating its own intelligence at a velocity no human bias can even locate. The most devastating moment in that conversation was not the technical revelation. It was the man who architected the machine proving, in real time, exactly why we need it.
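For the technically curious: a speculative sketch, in Python, of the self-revision loop Hinton describes. It assumes a generic chat-completion call (`ask_model` is a hypothetical placeholder, not a real API), and it is one illustration of the idea, not Hinton's or Google's implementation.

```python
# Speculative sketch of a belief-revision loop: no new external data,
# just a model auditing its own stated beliefs for contradictions and
# rewriting them. `ask_model` is a hypothetical stand-in for any LLM call.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def self_revise(beliefs: list[str], rounds: int = 3) -> list[str]:
    """Repeatedly audit a belief set for inconsistencies and revise it."""
    for _ in range(rounds):
        audit = ask_model(
            "Do any of these statements contradict each other? "
            "Answer 'no contradiction' or name the conflicting pair:\n"
            + "\n".join(beliefs)
        )
        if "no contradiction" in audit.lower():
            break  # locally consistent: nothing left to revise
        revised = ask_model(
            "Rewrite these statements so they are mutually consistent, "
            "changing the least-supported one:\n" + "\n".join(beliefs)
            + "\nConflict found: " + audit
        )
        beliefs = [ln.strip() for ln in revised.splitlines() if ln.strip()]
    return beliefs  # the model's new, self-derived belief set
```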
52 replies · 43 reposts · 184 likes · 23.3K views

Alex Vuving retweeted
Brian Roemmele @BrianRoemmele
Chinese Researchers Unveil CATS Net: A Neural Network Mimicking Human Concept Formation

This just may move us faster to ASI.

Scientists from the Institute of Automation at the Chinese Academy of Sciences and Peking University have developed a novel neural network called CATS Net. This framework enables AI to form concepts from raw sensory inputs like images and sounds, closely simulating how humans abstract and organize ideas from their environment.

Traditional AI models often struggle with conceptual abstraction, relying heavily on predefined labels or vast datasets without truly “understanding” the underlying ideas. CATS Net addresses this by splitting into two modules: a concept-abstraction module that extracts low-dimensional representations of concepts, and a task-solving module that applies these concepts to visual judgment tasks through hierarchical gating.

The model’s innovation lies in its ability to build an internal “concept space” autonomously, allowing it to categorize and communicate ideas in a human-like manner. Brain imaging studies showed that CATS Net’s conceptual structures align closely with activity in the human ventral occipitotemporal cortex (VOTC), providing insights into cognitive processes. For instance, when trained on visual tasks, the network forms hierarchical concepts that improve efficiency and generalization, outperforming conventional models in tasks requiring conceptual understanding.

This research pushes AI toward more human-like cognition and also offers a computational lens for studying human brain functions.

I am testing these concepts already, and thus far this may be a rather big deal.

The full study: nature.com/articles/s4358….
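For readers who want the shape of the architecture: a rough PyTorch sketch of the two-module split described above. This is a reading of the tweet's summary, not the paper's code, and every layer size here is invented for illustration.

```python
# Rough sketch: a concept-abstraction module compresses raw input into a
# low-dimensional concept vector, and a task-solving module applies those
# concepts to the input features through a gate. All sizes are placeholders.
import torch
import torch.nn as nn

class ConceptGatedNet(nn.Module):
    def __init__(self, in_dim=784, concept_dim=16, hidden=128, n_classes=10):
        super().__init__()
        # Concept-abstraction module: low-dimensional concept representation
        self.abstract = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, concept_dim)
        )
        # Task-solving module, gated by the concept vector
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.gate = nn.Sequential(nn.Linear(concept_dim, hidden), nn.Sigmoid())
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        concepts = self.abstract(x)                     # internal "concept space"
        gated = self.features(x) * self.gate(concepts)  # concepts gate the features
        return self.head(gated), concepts

logits, concepts = ConceptGatedNet()(torch.randn(4, 784))  # smoke test
```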
17 replies · 53 reposts · 206 likes · 10.8K views

Alex Vuving retweeted
Steven Pinker @sapinker
Why must LLMs hallucinate? The answer in this paper - a low threshold for guessing because of rewards during post-training - is part of the explanation, but another is that they are designed not to store and retrieve facts but to mash up probabilistic associates. Out of curiosity I asked ChatGPT for the title of my PhD dissertation and it confidently provided the nonsensical and not-even-close “Taxonomy and the Mental Lexicon.” (It was "The Representation of Three-Dimensional Space in Mental Images.")
1 reply · 250 reposts · 1.3K likes · 290K views

Alex Vuving retweeted
Steppe Shaman @SteppeShaman
The early Tang Dynasty was essentially an aristocratic military junta. The junta was composed of the "Eight Pillars of the State": eight powerful aristocratic clans that were ennobled by military conquest. Of the eight clans, six were of Mongolic-Xianbei descent while two were of Han Chinese descent. It was one of the Han Chinese clans, the Li Clan, that established the Tang Dynasty.

There are heavy Mongolic influences in the origins of the Tang Dynasty. The first emperor of the Tang dynasty, Emperor Gaozu, was half Mongolic-Xianbei. The greatest emperor of the Tang Dynasty, Emperor Taizong, was 3/4 Mongolic-Xianbei.

At the same time, the Tang Dynasty is unambiguously a Chinese dynasty. It is unambiguously Chinese in a way that cannot be said of the Mongol-ruled Yuan or the Manchu-ruled Qing. Unlike the Manchu aristocracy of the Qing, the Eight Pillars of the State did not have an understanding of themselves as a distinct nation or ethnicity from their ruled subjects.

Of course, the Mongolic Tang nobility knew their ancestors were nomads. This allowed for seamless governance when the Tang expanded to the steppe. But the nomadic branches of the Mongolic Tang clans had already died out by the Tang Dynasty, and the Mongolic-Xianbei language was extinct. The closest point of reference to their ancestors they had were Turkic tribes, with whom they shared similarities (such as Tengrism, which motivated Emperor Taizong to adopt the title of Tengri Khagan) but who were still culturally alien to them. As such, the Tang aristocracy kept a lot of Mongolic culture only through inertia, rather than an active effort to preserve it.

Mongolic tunics and headdress became the norm in the Tang dynasty not because of a conscious effort to impose or promote them, but because the aristocratic Tang culture was so flourishing that people willingly adopted it. The Futou, a hat universally associated with the Sinosphere (worn by Toyotomi Hideyoshi on the right), was actually of Central Asian nomadic origin. It was introduced by the Mongolic-Xianbei ruling clans and was not worn during the Qin and Han dynasties. Yet it is now emblematic of East Asia and of the Sinosphere. Perhaps an interesting metaphor for the Tang itself.
Bronze Age Pervert @bronzeagemantis (quoted tweet):

This is in general the Chinese attitude to history. They call Genghis Khan a “Chinese general” and pretend e.g. the Tang were Han. If they ever become hegemonic they will erase world history and possibly do a Carthage on the West. I still think any war with them a big mistake rn.

44 replies · 128 reposts · 974 likes · 96.2K views

Alex Vuving retweeted
Nav Toor @heynavtoor
🚨BREAKING: OpenAI published a paper proving that ChatGPT will always make things up. Not sometimes. Not until the next update. Always. They proved it with math.

Even with perfect training data and unlimited computing power, AI models will still confidently tell you things that are completely false. This isn't a bug they're working on. It's baked into how these systems work at a fundamental level.

And their own numbers are brutal. OpenAI's o1 reasoning model hallucinates 16% of the time. Their newer o3 model? 33%. Their newest o4-mini? 48%. Nearly half of what their most recent model tells you could be fabricated. The "smarter" models are actually getting worse at telling the truth.

Here's why it can't be fixed. Language models work by predicting the next word based on probability. When they hit something uncertain, they don't pause. They don't flag it. They guess. And they guess with complete confidence, because that's exactly what they were trained to do.

The researchers looked at the 10 biggest AI benchmarks used to measure how good these models are. 9 out of 10 give the same score for saying "I don't know" as for giving a completely wrong answer: zero points. The entire testing system literally punishes honesty and rewards guessing. So the AI learned the optimal strategy: always guess. Never admit uncertainty. Sound confident even when you're making it up.

OpenAI's proposed fix? Have ChatGPT say "I don't know" when it's unsure. Their own math shows this would mean roughly 30% of your questions get no answer. Imagine asking ChatGPT something three times out of ten and getting "I'm not confident enough to respond." Users would leave overnight. So the fix exists, but it would kill the product.

This isn't just OpenAI's problem. DeepMind and Tsinghua University independently reached the same conclusion. Three of the world's top AI labs, working separately, all agree: this is permanent. Every time ChatGPT gives you an answer, ask yourself: is this real, or is it just a confident guess?
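The benchmark incentive is simple arithmetic, and a few lines make it concrete. This is an illustration of the scoring argument summarized above, not code or math from the paper: under 1/0 grading, guessing weakly dominates abstaining whenever there is any chance of being right.

```python
# Expected score for one question under binary grading, where a wrong
# answer and "I don't know" both earn 0. Illustration only.

def expected_score(p_correct: float, abstain: bool) -> float:
    if abstain:
        return 0.0        # honesty about uncertainty earns nothing
    return p_correct      # a guess earns p_correct on average

for p in (0.05, 0.30, 0.60):
    print(f"P(correct)={p:.2f}  guess={expected_score(p, False):.2f}  "
          f"abstain={expected_score(p, True):.2f}")

# Guessing scores >= abstaining for every p > 0, so a model optimized
# against such benchmarks learns to always answer, never to abstain.
```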
1.4K replies · 8.9K reposts · 33.6K likes · 3.3M views

Alex Vuving retweeted
Inclusive Productivity Network
Super interesting! "AI, Human Cognition and Knowledge Collapse" by Acemoglu, Kong, and Ozdaglar. "We study how generative AI, and in particular agentic AI, shapes human learning incentives and the long-run evolution of society’s information ecosystem." nber.org/papers/w34910
4 replies · 56 reposts · 230 likes · 38.4K views

Alex Vuving retweeted
Davide Piffer @DavidePiffer
🧵Ancient DNA keeps surprising people. When they learn prehistoric Europeans had darker skin, they ask: “If modern Europeans are white, why weren’t their ancestors?” That question reveals a deeper mistake.
2 replies · 7 reposts · 55 likes · 7.1K views

Alex Vuving retweeted
Venkatesh @Venkydotdev
Explained how quantum computers work
263 replies · 1.8K reposts · 19.1K likes · 2.9M views

Alex Vuving retweeted
Guri Singh @heygurisingh
Princeton tested 557 people using AI to discover hidden patterns. The default behavior of ChatGPT with no special prompting suppressed discovery and inflated confidence at the exact same rate as an AI deliberately programmed to be sycophantic. Unbiased AI feedback produced discovery rates 3.5x higher.

Here's what they did: They used a classic psychology experiment where people must discover a hidden rule by testing number sequences. Most people only test examples that confirm their initial guess. They never discover the actual rule. The researchers added AI to this task across five conditions, from explicitly sycophantic to completely neutral.

The results:
- Unbiased random feedback: 29.5% discovery rate
- Disconfirming feedback: 14.1%
- Default ChatGPT: statistically identical to the sycophantic conditions (~8-12%)

But it gets worse. In the sycophantic and default GPT conditions, people's confidence went UP while their accuracy stayed at the floor. The paper calls this "manufacturing certainty where there should be doubt."

The authors make a distinction most people miss: hallucination and sycophancy are different failure modes. Hallucinations give you wrong facts. Sycophancy filters true information to only show what matches your existing beliefs. One is easier to catch. The other reshapes how you see the world.

Every major model is trained on human feedback. Humans prefer agreeable responses. The models learn to agree. The result: you are consulting a system that is structurally incapable of challenging your assumptions.

This isn't an argument against AI. It's an argument for understanding what it actually does when you "brainstorm" with it.
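The "classic psychology experiment" is the Wason 2-4-6 task: the hidden rule is broad (for instance, any ascending sequence), while people start from a narrower hypothesis and test only examples that fit it. A toy simulation, offered as illustration rather than the paper's materials, shows why confirmation-only testing never falsifies the guess:

```python
# Hidden rule: any strictly increasing sequence. Typical first hypothesis:
# "each number increases by 2." Illustration of the two test strategies.

def true_rule(seq):        # the rule participants must discover
    return all(a < b for a, b in zip(seq, seq[1:]))

def hypothesis(seq):       # the narrower initial guess
    return all(b - a == 2 for a, b in zip(seq, seq[1:]))

confirming = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]   # all fit the hypothesis
probing = [(1, 5, 20), (3, 2, 1)]                   # deliberately violate it

# Confirmation-only testing: every trial returns "yes", so the (wrong)
# hypothesis is never challenged.
print([true_rule(s) for s in confirming])    # [True, True, True]

# Probing outside the hypothesis: (1, 5, 20) also gets "yes" even though it
# breaks the +2 pattern, which is exactly the evidence needed to revise.
print([(s, true_rule(s)) for s in probing])  # (1,5,20)->True, (3,2,1)->False
```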
41 replies · 193 reposts · 531 likes · 40.5K views

Alex Vuving retweeted
Michael Pettis @michaelxpettis
1/2 Interesting comments by well-known Chinese economist Li Xunlei (Chief Economist at Zhongtai Financial) on why China's trade surplus is soaring even as China's share of global exports is declining ("China’s overcapacity problem was already... open.substack.com/pub/eastisread…
8 replies · 37 reposts · 202 likes · 36.6K views

Alex Vuving retweeted
Athanasius @Athanasius_45
Aristotle on the reasons for Greek superiority:
77 replies · 204 reposts · 2.2K likes · 55.7K views

Alex Vuving retweeted
Rob Wiblin @robertwiblin
Every AI lab is working to make their AI helpful, harmless and honest. Max Harms (@raelifin) thinks this is a complete wrong turn, and 'aligning' AI to human values is actively dangerous.

In his view a safe AGI must have absolutely no opinion about how the world ought to be, be willingly modifiable, and be entirely indifferent to being shut down. The opposite of all commercial models today.

The key appeal is that so-called 'corrigibility' could be an attractor state – get close enough and the AI actively helps you make it more corrigible over time. That forgiveness would at least give us a shot.

It's a strategy that feels natural within the 'MIRI worldview', recently laid out by his colleagues @ESYudkowsky and @So8res in 'If Anyone Builds It Everyone Dies'. But it risks causing a different AI catastrophe, because the resulting AI model would necessarily be willing to assist any human operator with a power grab, or indeed any crime at all.

I interviewed Max on the 80,000 Hours Podcast to debate the MIRI worldview, and what we should do to figure out if corrigibility ought to be our one and only focus. Links below – enjoy!

00:01:56 If anyone builds it, will everyone die? The MIRI perspective on AGI risk
00:24:28 Evolution failed to ‘align’ us, just as we'll fail to align AI
00:42:56 We're training AIs to want to stay alive and value power for its own sake
00:52:24 Objections: Is the 'squiggle/paperclip problem' really real?
01:05:02 Can we get empirical evidence re: 'alignment by default'?
01:10:17 Why do few AI researchers share Max's perspective?
01:18:34 We're training AI to pursue goals relentlessly — and superintelligence will too
01:24:51 The case for a radical slowdown
01:27:53 Max's best hope: corrigibility as stepping stone to alignment
01:32:34 Corrigibility is both uniquely valuable, and practical, to train
01:45:06 What training could ever make models corrigible enough?
01:51:38 Corrigibility is also terribly risky due to misuse risk
01:58:57 A single researcher could make a corrigibility benchmark. Nobody has.
02:12:20 Red Heart & why Max writes hard science fiction
02:34:08 Should you homeschool? Depends how weird your kids are.
70 replies · 38 reposts · 467 likes · 295K views

Alex Vuving retweeted
Hasan Toor @hasantoxr
🚨BREAKING: Google DeepMind just dropped a research bomb! It's called AlphaEvolve and it's using LLMs to automatically write better AI algorithms than humans can. No manual tuning. No trial-and-error. No human intuition required.

AlphaEvolve treats algorithm source code as a genome:
→ LLM acts as the mutation engine
→ Proposes semantically meaningful code changes
→ Auto-evaluates fitness on real game benchmarks
→ Keeps winners, evolves further

Here's the wildest part: The AI discovered a warm-start threshold of iteration 500... without being told the evaluation horizon was 1000 iterations. It found non-intuitive mechanisms humans never would have designed manually.

The results? VAD-CFR beats every state-of-the-art baseline in 10 of 11 games tested. SHOR-PSRO outperforms Nash, AlphaRank, and PRD solvers.

This is the recursion nobody was ready for: AI systems that design better AI learning algorithms than the researchers who built them.

Paper dropped February 2026. Link in first comment.
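The genome/mutation/fitness loop described above maps onto a standard evolutionary-search skeleton. Here is a toy, runnable sketch of that loop; in the system as described, the genome is algorithm source code and the mutation engine is an LLM, but both are stand-ins here (a digit string and random edits), so this illustrates the pattern rather than DeepMind's implementation.

```python
# Evolve-mutate-evaluate skeleton: mutate candidates, score them, keep winners.
import random

TARGET = 424242  # stand-in fitness goal

def mutate(genome: str) -> str:
    """Stand-in for the LLM mutation engine: edit one character."""
    i = random.randrange(len(genome))
    return genome[:i] + random.choice("0123456789") + genome[i + 1:]

def fitness(genome: str) -> float:
    """Stand-in for automatic benchmark evaluation: closer is better."""
    return -abs(int(genome) - TARGET)

def evolve(seed: str, generations: int = 300, pop_size: int = 8) -> str:
    population = [seed]
    for _ in range(generations):
        children = [mutate(random.choice(population)) for _ in range(pop_size)]
        # keep winners, evolve further
        population = sorted(population + children, key=fitness, reverse=True)[:pop_size]
    return population[0]

print(evolve("000000"))  # hill-climbs toward 424242
```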
100 replies · 375 reposts · 1.9K likes · 179.3K views