Charles Wang

13.6K posts


@charleswangb

Bio/Medicine/Health AI. Transform life and the world. Complexity—Universality—Regenerativity—Transformation—Progress

Silicon Valley, CA · Joined June 2009

738 Following · 2.1K Followers

Pinned Tweet
Charles Wang@charleswangb·
Proteins act as active matter, creating intelligibility by coming together and staying together dynamically to cultivate mutual constraints and mutual affordances, giving rise to biological structural-functional organizations (SFOs).
Charles Wang@charleswangb·
Deep respect for Terence Tao. Sincerely, I wish he were equipped with a good sense of the epistemology of mathematics. If one conceptualizes reality as multidimensional — mathematics being one dimension — others are beyond its reach. For example, computation is beyond mathematics. Look no further than simple cellular automata or the halting problem. So too with countless things in the living world — you can't formulate them in mathematics.
Prof. Brian Keating@DrBrianKeating

Terence Tao told me something that is both clarifying and unsettling about large language models. The mathematics underlying today’s LLMs is not especially exotic. At its core, training and inference mostly involve linear algebra, matrix multiplication, and some calculus. This is material a competent undergraduate could learn. In that sense, there is very little mystery about how these systems are constructed or how they run.

And yet the real mystery begins there. What we do not understand well is why these models perform so impressively on certain tasks while failing unexpectedly on others. Even more striking, we lack reliable principles that allow us to predict this behavior in advance. Progress in the field remains largely empirical. Researchers scale models, change datasets, run experiments, and observe what emerges.

Part of the difficulty lies in the nature of the data itself. Pure randomness is mathematically tractable. Perfectly structured systems are also tractable. But natural language, like most real-world phenomena, lives in an intermediate regime. And we humans hate that liminal space! It is neither noise nor order but a mixture of both. The mathematics for this middle ground remains comparatively underdeveloped.

So we find ourselves in a peculiar position. We understand the machinery, yet we cannot reliably explain its capabilities. We can describe the mechanisms that produce these systems, but we cannot predict when new abilities will appear or how performance will vary across tasks. That tension, between relatively simple mathematical tools and highly unpredictable behavior, is the central puzzle of modern AI. (Video link in comments)
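Both posts lean on the same fact: trivially simple rules can generate behavior that resists closed-form mathematical prediction. A minimal, self-contained sketch (an illustration, not from the thread) is Rule 110, an elementary cellular automaton whose update table fits in one byte yet which is Turing-complete, so questions about its long-run behavior inherit the undecidability of the halting problem:

```python
# Rule 110: each cell's next state depends only on itself and its two
# neighbors. The 8 possible 3-cell patterns index into the bits of 110,
# yet the resulting dynamics are Turing-complete (Cook, 2004), so their
# long-run behavior is undecidable in general.

RULE = 110  # bit k of this number is the next state for neighborhood k

def step(cells):
    """One synchronous update of the whole row; boundaries wrap around."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=64, steps=20):
    """Evolve from a single seeded cell and return the full history."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

The printed triangle of interacting structures is the "intermediate regime" in miniature: neither noise nor simple order, and not derivable from the one-line rule by any general closed-form analysis.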

Charles Wang@charleswangb·
$ARM is the latest example of 3D CANSizing. Any technology will eventually 3D-CANSize. The sooner, the greater the cumulative advantage. The later, the greater the risk of being 3D-CANSized by others.
Charles Wang@charleswangb

I'll write up 3D CANS this weekend. It has been through many iterations since I coined it last October. 3D CANS is a potent lens to envision, explain, and create the emerging themes and paradigms, the most generative asymmetries (the 1000Xs), the death and birth, the relevance and irrelevance of many things, the plain sights and blind spots. You can get the core idea from Grok and start ripping into real-world cases with the prompts listed below. x.com/i/grok/share/4… (@DeepwriterAI does it work with Grok? Interested to see what it can come up with.)

1. Use this as a guideline to distill all @charleswangb tweets on and related to "3D CANS" into an essay with a coherent flow and super clarity, succinctness, salience, and profound truth: Imagine a world where the distinction between hardware and software no longer exists — where the chip, the AI model, and the domain it serves (medicine, energy, biology) are designed as one unified, co-evolving system from the ground up. That's the three-dimensional Core AI-Native Substrate (3D CANS): Compute ⇌ AI ⇌ Domain Knowledge — not three separate layers bolted together, but a single substrate that thinks, adapts, and deepens as it operates. Just as life doesn't separate its "hardware" (cells) from its "software" (DNA) from its "domain" (environment) — they co-evolve as one — 3D CANS is the first engineering paradigm that mirrors that principle. Every vertical it touches — health, materials, climate, finance — doesn't just get automated. It gets reinvented from the physics up.
2. What are the emerging themes and paradigms that 3D CANS offers a potent lens to explain? List the links of the tweets.
3. When did @charleswangb first bring up 3D CANS? List all themes and paradigms that he mentioned, from now back to that date.
4. Let's test its potency on real-world cases: in what ways and aspects can it explain these companies? And what is fundamentally wrong, or what misconceptions exist, in these lines of thinking, as in x.com/jiahanjimliu/s…
5. What potency can it bring to explain this: x.com/charleswangb/s…

20 sample new themes and paradigms below. You can 10X it through the 3D CANS lens. Or if you have burning questions on new themes and paradigms, post them here and I'll 3D-CANS it.

Charles Wang@charleswangb·
@Scobleizer At the same time, on the Lex podcast, Jensen also said the chance of AI creating another Nvidia is zero. I think what he meant is that AI is now enabling superhumans at many tasks, not replacing them. Apparently this isn't the notion of AGI held by folks like @demishassabis or @bengoertzel.
Robert Scoble@Scobleizer·
So NVIDIA has solved AGI? I am going to sleep thinking about all the reasons why Jensen didn’t announce this last week in front of all the AI developers.
Charles Wang@charleswangb·
If the UTM is a multi-century startup, we're still in the MVP phase for the next few decades.
Charles Wang@charleswangb·
Make the CPU great (core performance & quantity) again. The UTM has no expiration date in this universe (that's what the U stands for). It ages like wine.
Charles Wang reposted
tae kim@firstadopter·
Fun fact. Amazon uses Nvidia GPUs and Nvidia Dynamo to power Amazon Ads $NVDA
Charles Wang@charleswangb·
AI factory illustrated in health, bio, and med: x.com/charleswangb/s…
Charles Wang@charleswangb

How the AI factory cultivates a 1000X superior business model.

In the example below, frontier model companies seek to be downstream users of $TEM's clinical database. The AI factory flips the positioning. It generates intelligence upstream — actively cultivating AI-native companies like $TEM. Upstream intelligence generation for AI-native startups: it helps create these startups in the first place. The same substrate (grok 3D CANS) naturally expands to hundreds of thousands of AI-native startups. The more startups the AIF produces, the lower the path of resistance for the next ones. These AI startups do specialization; the AIF does the generation. Their co-opt kicks off several self-reinforcing mega flywheels. (Grok these constructs.)

In fields like med, bio, and health, the scale and scope of specialization is unlimited. A niche field like sleep science can feed 1,000 AI startups — not an overstatement for anyone who understands the field — each a potential $10B company. Not bad for a 2–3 person startup made possible by the AIF. (Grok the tokenization theory of the firm.)

In AI for health, bio, and med, most value lies in fat tails. These fat tails are intractable to the economies of scale, distribution, or network effects of the industrial and Internet eras — presenting a number of challenges: large investment, longer investment cycles, higher risk. The AIF dissolves these fat-tail problems via interlocking mega flywheels. Addressing long tails is its second nature — what it does best and is designed to excel at.

What's strikingly counterintuitive yet profoundly true: the AIF can release its own AI startups as first party without conflict of interest with its third-party customers and partners. See this from two angles.

On one hand, the AIF can package its first-party services as API calls and pass a large portion of its margin to third-party AI startups — say, keeping only a few cents on the dollar. Because the success of these startups is the lifeblood of the AIF, their co-opt operates at a much greater scope (from physics to workloads and back — grok this). And these first-party services complement the startups, enabling more holistic offerings. Expandable TAM.

On the other hand, with the bird's-eye view of the AIF, third- and first-party services are dynamically combined to create, say, sleep-regime mixes adaptable to personal needs. Elastic TAM for both. Easily a $10T TAM: sleep science is a dragon head with a mile-long downstream industrial chain — supplements, meds, wearables, beds, hotels. When you sleep like a baby, you work like a horse and live like an eagle.

In sum, first- and third-party services can be engineered and positioned to be reciprocally opening and enhancing — jointly developing a regenerative guild. In the grand scheme, sleep science is a drop in the ocean of health, bio, and med. This expands to further specializations — as deep as life, as fast as the speed of physics. Extreme TAM expansion for both AI-native startup ecosystems and the AIF. The mega flywheels become ever more self-reinforcing.

The impact doesn't stop here. It reinvents cognition, ontology, creativity, engineering, science, world-making-and-remaking, and human organization and action:
• Friction-heavy cognition (manual layers, brittle abstractions) → friction-free cognition (electrons and workloads flowing seamlessly)
• Predefined ontology → dynamical ontology
• Predefined workflows → human-AI-world cocreation fabric
• Classical engineering (like bridge-building) → emergent engineering (like gardening)
• Legacy scientific method (the last 400 years) → the new science of the space of the possible — from what is, to what can be and could be
• Modeling the world → regenerating the world
• Coase's theory of the firm → the tokenization theory of self-organizing human action takes the stage.

x.com/HyperTechInves…

Charles Wang@charleswangb·
The engine of the AI factory is accelerated computing and extreme codesign — from physics to workloads and back — jumpstarting several reciprocal, self-reinforcing positive feedback loops that truly differentiate the AI factory:
• These aren't the Internet-era virtuous loops — like product-data-ML or Amazon flywheels. They are the mother of all flywheels.
• They are drivers of the Perez techno-economic paradigm leap (2021–2070).
x.com/charleswangb/s…
Charles Wang@charleswangb·
This is about the closest one can get from analysts in understanding AI factories. But it misses 99% of the alpha of the AI factory. Few people understand it. This is not Google in 2001.

Google search is a derivative business: search (x) → ads (y), dy/dx. The AI factory is the mothership of AI-native businesses: intelligence production at industrial scale (x) → AI-native businesses as powerhouses of intelligence generation themselves (y), x^y — where both x and y are co-shaped by extreme codesign, from physics to workloads and back.

x and y are unlike traditional businesses, where value is by units sold, with zero elasticity thereafter. Compute is valued not by units sold — but by the units it prints, indefinitely. Units recombining to print even more. Elastic and expandable TAM. Intelligence begetting intelligence.

This is a phase transition from what compute does to what compute is for — where the biggest alpha is, and what the AI factory cultivates. The AI factory is the culmination of NVIDIA's accelerated computing and extreme codesign — decades of engineering prowess. Nebius is the world's first AI factory — being co-developed with NVIDIA.
M. V. Cunha@mvcinvesting

I just finished going through BofA’s research on $NBIS. Here are the key competitive advantages they highlight: ⬇️

1) Differentiated virtualization layer and distributed compute fabric
Nebius’ most important differentiator is its virtualization layer, which allows GPUs across multiple data centers to function as a single unified cluster. This matters a lot more than it might seem at first. AI workloads are no longer just about raw compute; they’re about how efficiently that compute can be assembled, scaled, and allocated across geographies. With power constraints, GPU shortages, and long data center build timelines, the ability to pool distributed resources into one system becomes a major advantage. It allows for:
- Higher utilization
- Faster deployment
- Better ability to serve variable demand
GPU innovation is moving faster than infrastructure expansion, so software layers like this become increasingly valuable over time. Importantly, Nebius appears to be the only player currently enabling global GPU orchestration. Most neoclouds are limited to virtualization within a single data center, and hyperscalers like Oracle or Microsoft don’t yet offer this type of GPU-specific orchestration across locations. That makes this a potentially durable and hard-to-replicate edge.

2) Full-stack AIaaS cloud offering creates a defensible long-term moat
On top of its infrastructure, Nebius has built a full AI cloud platform that simplifies how companies build, train, and deploy models. The key value proposition here is abstraction: customers don’t need large internal AI/ML teams to operate complex workloads. As AI moves from experimentation to production, this becomes increasingly important.

Then there’s Token Factory, which is a big part of the differentiation:
- Enables large-scale inference and model serving out of the box
- Supports 60+ models (including open-source)
- Allows easy fine-tuning and deployment
- Offers predictable, transparent pricing ($/token)
- Claims significant cost savings vs proprietary models
- Provides enterprise-grade governance and security
- Uses an OpenAI-compatible API (lower switching friction)

From Nebius’ perspective, this does two important things:
- Improves utilization of idle compute
- Introduces a more recurring, usage-based revenue stream
It also increases platform stickiness, especially for enterprises that want a turnkey AI solution without building internal infrastructure. Long term, if inference becomes the dominant workload (which is the current direction), Token Factory could become a very meaningful moat.

3) Leadership team with proven hyperscale execution and deep technical DNA
Nebius is led by a team that previously built and operated Yandex’s infrastructure across multiple verticals (search, cloud, autonomous driving, etc.). That experience is highly relevant:
- Managing large-scale data centers
- Operating multi-generation hardware fleets
- Building distributed systems at scale
What stands out is not just the technical capability but the execution track record. Yandex was able to outperform global players like Google and Uber in its home market. In their view, this combination of deep technical expertise and proven execution is one of Nebius’ strongest assets.

Charles Wang@charleswangb·
Jevons paradox is peanuts compared to Perez's techno-economic paradigm leap: a profound, system-wide capability leap driven by a cluster of interrelated radical technologies — unleashing unprecedented productivity potential across the entire economy. Not merely by optimizing or cheapening existing activities (as in efficiency-driven rebounds like the Jevons paradox), but by creating entirely new categories of economic value, products, processes, organizational forms, and human behaviors that were previously impossible or unimaginable at scale.

This leap manifests as:
• A quantum jump in generic capabilities — often anchored in a low-cost, all-pervasive input or enabling infrastructure that dramatically expands the possibility space for innovation.
• The emergence of brand-new demand vectors and use cases that did not exist before.
• A cascading, self-reinforcing explosion of adoption once the new "common sense" takes hold: users experience the leap → productivity soars in unforeseen domains → more people and businesses join → revenue floods back to foundation layers → investment pours into expanding capacity → performance and cost curves improve further → even more novel applications become viable.

Unlike incremental or efficiency-only improvements, this is not a linear extension of the old economy — it's a non-reversible phase transition in how value is created. The old Jevons-style rebound (lower cost → more of the same) gets dwarfed by new-category creation (zero prior activity → massive new activity). The result is asymmetric — far beyond what efficiency models predict. Perez's historical surges show these leaps recur roughly every 50–60 years. It's the invention of new economic reality at industrial scale.
Charles Wang@charleswangb·
@Farshchi All of these are textbook equivalents of the current genre: a phase shift from what compute does to what compute is for — where the biggest alpha is.
Shahin Farshchi@Farshchi·
The biggest AI companies won't sell AI. Every technology cycle produces generational companies:
- light alloys brought Boeing,
- the assembly line brought Ford,
- solid-state electronics brought Texas Instruments,
- fabless enabled Nvidia.
AI is that next enabling layer, and we expect great companies to emerge redefining how we move, produce energy, and live longer, healthier lives. Right now, most startups are focused on building AI itself. The next wave will leverage it under the hood and build products — from airplanes to bridges to computers — that don't even resemble what's commonplace today.
Charles Wang@charleswangb·
FEP has its roots in physics. Its ideal home may be as a module in Omniverse — physics-informed simulation for workloads — using that as a baseline to augment Dynamo's orchestration. Just throwing this idea out here.
Charles Wang@charleswangb·
Framing LLMs as a collaboration of 5 core distinctive constructs: data, model, algorithm, compute, and interactions (human-AI-world, agent-to-agent). Compute needs to be adaptive to the other 4. Model and algorithm adaptability are mostly addressed by CUDA. But data (physical AI, multimodal) and interaction adaptability are where the real complexity is. My feeling is that this is where active inference can help.
Charles Wang@charleswangb·
Just a feeling: active inference can lift off Nvidia Dynamo, the operating system of the AI factory. @NVIDIAAI @NVIDIADC Just as traffic patterns != the street map because of self-organizing criticality (SOC), workloads will ratchet up complexity, which is what active inference is good at handling — as it does in the brain. An effective pathway to transfer neuroscience to the AI factory — the brain of AI.
Inês Hipólito@ineshipolito

A vital missing piece has arrived. 🧩 Huge congratulations to Sanjeev Namjoshi on the release of "Fundamentals of Active Inference" with @MITPress! The engineering-focused guide the community has needed to bridge the gap between theory and application. #ActiveInference #fep
