June_e Bai

1.9K posts

@June84413

System-Level Noise Reducer, intj, Lower "idiocy index", Chinese dance, Skilled manual driver, Bring out my dead spam! 🧩 Waiting for you at Arcadia Planitia...

Joined August 2025
243 Following · 422 Followers
June_e Bai@June84413·
@ylecun @aakashgupta @boztank While LLMs excel at language and reasoning, prioritizing them over world models' core advantages in embodied interaction is definitely short-sighted.
Yann LeCun@ylecun·
Right. Except that Mark Z, @boztank and others in the leadership were always supportive of the JEPA / World Models project as a long-term bet. But the AI strategy of the company became more LLM-pilled and short-term focused. And many of the applications of JEPA/WM are in industrial domains that Meta is not particularly interested in.
Aakash Gupta@aakashgupta·
Earlier this year Yann LeCun left Meta because Mark Zuckerberg wouldn't bet the company on JEPA. Last week his group dropped the first JEPA that actually trains end-to-end from raw pixels. 15 million parameters. Single GPU. A few hours. The timing is not a coincidence.

For four years Meta has been the house that JEPA built. LeCun published the original paper from FAIR in 2022. I-JEPA and V-JEPA came out of his lab. The architecture was supposed to be the escape hatch from LLMs, the path to robots that actually learn physics instead of hallucinating about it.

Every version shipped fragile. Stop-gradients. Exponential moving averages. Frozen pretrained encoders. Six or seven loss terms that had to be hand-tuned or the model collapsed into garbage representations.

Meta kept funding LLMs. Llama shipped. Llama scaled. Llama got beat by Qwen and DeepSeek. Zuck spent $14 billion to buy ScaleAI and install Alexandr Wang. The FAIR robotics group was dissolved. LeCun's research kept winning papers and losing the product roadmap. He left, started AMI Labs, and said publicly that LLMs were a dead end.

Now the paper. LeWorldModel. One regularizer replaces the entire pile of heuristics. Project the latent embeddings onto random directions, run a normality test, penalize deviation from Gaussian. The model cannot collapse because collapsed embeddings fail the test by construction. Hyperparameter search went from O(n^6) to O(log n). Six tunable knobs became one.

The downstream numbers are what should scare the robotics capex class. 200 times fewer tokens per observation than DINO-WM. Planning time drops from 47 seconds to 0.98 seconds per cycle, 48x faster, while matching or beating foundation-model performance on Push-T and 3D cube control. The latent space probes cleanly for agent position, block velocity, and end-effector pose. It correctly flags physically impossible events as surprising. It learned physics without being told physics existed.

Figure AI is valued at $39 billion. Tesla Optimus is in mass production. World Labs raised $230 million to sell generative world models. Everyone in humanoid robotics is burning capital on foundation-model pipelines that plan in 47 seconds per cycle. LeCun's group just showed you can do it with 15 million parameters on a single GPU in a few hours.

This is the Xerox PARC pattern running again. Meta had the next architecture. Meta had the scientist. Meta dissolved the robotics team, passed on the productization, and watched the exit. Three months later the lab that was supposed to be Meta's publishes the result that resets the robotics cost structure. The paper is worth more than Alexandr Wang.
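The regularizer, as the post describes it, is simple enough to sketch. Below is a minimal, hypothetical PyTorch rendering of the idea (project latents onto random directions, compare each 1-D projection against standard-normal quantiles, penalize the gap). The paper's exact normality test and naming are not given here, so treat the function and its details as illustrative assumptions, not LeWorldModel's actual code.

```python
import torch

def gaussian_projection_penalty(z: torch.Tensor, n_dirs: int = 64) -> torch.Tensor:
    """Penalize a latent batch z of shape (B, D) for deviating from N(0, I).

    Sketch of the idea described in the post: project z onto random unit
    directions and compare each 1-D projected sample against standard-normal
    quantiles (a sliced, differentiable stand-in for a normality test).
    Collapsed embeddings project to a constant, which is maximally far from
    the Gaussian quantiles, so collapse is penalized by construction.
    """
    B, D = z.shape
    # Standardize per dimension so the penalty checks distribution shape,
    # not scale or location. A collapsed batch standardizes to all zeros.
    z = (z - z.mean(dim=0)) / (z.std(dim=0) + 1e-6)
    dirs = torch.randn(D, n_dirs, device=z.device, dtype=z.dtype)
    dirs = dirs / dirs.norm(dim=0, keepdim=True)   # random unit directions
    proj = z @ dirs                                # (B, n_dirs) projections
    proj_sorted, _ = torch.sort(proj, dim=0)       # empirical quantiles
    # Standard-normal quantiles Phi^{-1}((i + 0.5) / B) as the target.
    p = (torch.arange(B, device=z.device, dtype=z.dtype) + 0.5) / B
    target = torch.erfinv(2.0 * p - 1.0) * (2.0 ** 0.5)
    return ((proj_sorted - target.unsqueeze(1)) ** 2).mean()
```

In training this would be added to the prediction loss, e.g. `loss = pred_loss + lam * gaussian_projection_penalty(latents)`, with `lam` playing the role of the single knob the post says replaced six hand-tuned loss terms.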
Viv 🪩@battleangelviv·
Cheery fatalism makes life a lot more fun
June_e Bai@June84413·
@SpaceX Apollo’s legacy, SpaceX’s rocket, Roman’s eye — Pad 39A never fails to get humanity’s blood pumping.
ZUBY:@ZubyMusic·
I do think it's strange to immigrate to a country and then spend all your time and energy criticising the place.
June_e Bai@June84413·
@elonmusk @HowToAI_ @grok Is the AGI Elon refers to the hybrid system below? Grok 5 (Brain) + FSD World Model (Cerebellum / Physics Engine) + Optimus (Body) = AGI 🧠🤖
How To AI@HowToAI_·
Yann LeCun was right the entire time. And generative AI might be a dead end.

For the last three years, the entire industry has been obsessed with building bigger LLMs. Trillions of parameters. Billions in compute. The theory was simple: if you make the model big enough, it will eventually understand how the world works.

Yann LeCun said that was stupid. He argued that generative AI is fundamentally inefficient. When an AI predicts the next word, or generates the next pixel, it wastes massive amounts of compute on surface-level details. It memorizes patterns instead of learning the actual physics of reality.

He proposed a different path: JEPA (Joint-Embedding Predictive Architecture). Instead of forcing the AI to paint the world pixel by pixel, JEPA forces it to predict abstract concepts. It predicts what happens next in a compressed "thought space."

But for years, JEPA had a fatal flaw. It suffered from "representation collapse." Because the AI was allowed to simplify reality, it would cheat. It would simplify everything so much that a dog, a car, and a human all looked identical. It learned nothing. To fix it, engineers had to use insanely complex hacks, frozen encoders, and massive compute overheads.

Until today. Researchers just dropped a paper called "LeWorldModel" (LeWM). They completely solved the collapse problem. They replaced the complex engineering hacks with a single, elegant mathematical regularizer. It forces the AI's internal "thoughts" into a perfect Gaussian distribution. The AI can no longer cheat. It is forced to understand the physical structure of reality to make its predictions.

The results completely rewrite the economics of AI. LeWM didn't need a massive, centralized supercomputer. It has just 15 million parameters. It trains on a single, standard GPU in a few hours. Yet it plans 48x faster than massive foundation world models. It intrinsically understands physics. It instantly detects impossible events.

We spent billions trying to force massive server farms to memorize the internet. Now, a tiny model running locally on a single graphics card is actually learning how the real world works.
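To make the collapse failure mode concrete, here is a minimal, hypothetical joint-embedding predictive setup in PyTorch: encode two observations, predict the next latent from the current one, and score the error in latent space rather than pixel space. Names and sizes are illustrative assumptions, not the paper's architecture; the point is that nothing below stops the encoder from mapping everything to one point, which is why an anti-collapse regularizer is needed at all.

```python
import torch
import torch.nn as nn

class TinyJEPA(nn.Module):
    """Minimal joint-embedding predictive world model (illustrative).

    The loss lives in the compressed latent ("thought") space, not in
    pixels. Without a regularizer, the encoder can cheat by sending
    every input to the same latent, driving the loss to zero while
    learning nothing: representation collapse."""

    def __init__(self, in_dim: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.predictor = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))

    def forward(self, x_now: torch.Tensor, x_next: torch.Tensor):
        z_now = self.encoder(x_now)    # current observation -> latent
        z_next = self.encoder(x_next)  # next observation -> latent
        pred = self.predictor(z_now)   # predict the next latent
        return ((pred - z_next) ** 2).mean(), z_now
```

A single Gaussian-shape penalty on the latents (like the sketch earlier in this thread) is the kind of term the post says replaced stop-gradients, EMA targets, and frozen encoders.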
June_e Bai@June84413·
Variables and motion are the ultimate laws of this world: at 4 a.m. on April 10, 2026, my account was frozen. I long to share my thoughts, which are rough, foolish, wild, and clumsy, yet full of sincere passion and vitality. That is why I came to 𝕏, and I have never intended to give up or leave.
1. We only cherish what we have when we lose it, a universal underlying bug in the human system.
2. I am certain luck will favor me; my account was unfrozen on April 20, my birthday.
3. In my eight months on 𝕏, I have kept growing and never been flustered.
On to the next orbit. 🚀🎂
June_e Bai@June84413·
@elonmusk WhatsApp’s “end-to-end encrypted” 👇
[image]
June_e Bai@June84413·
The WaPo’s $38B “gotcha” is a lie by omission. Most of that $38B is competitive contracts, not handouts. Tesla repaid its DOE loan 9 years early with interest. SpaceX delivers rockets that cut costs 4-10x for taxpayers. Musk’s companies created FAR more public value than they received. The real waste is California’s $126B high-speed rail with zero progress.
Beff (e/acc)@beffjezos·
Video generation is just the warmup for robotic world models. Everyone who questions why Elon is going so hard on Imagine doesn't understand this
Xiaoyin Qu@quxiaoyin

Sora is dead. OpenAI just pivoted to robotics.
I have friends on the Sora team. They're now working on robot-related projects. Makes sense when you think about it. OpenAI's philosophy: why waste resources making people laugh with videos when you can solve humanity's physical intelligence problem? Let TikTok handle the entertainment. We'll build the future.
But here's the real story: video model startups are struggling to raise money. VCs think it's a big tech game now. Google has Veo, Kuaishou has Kling, ByteDance has SeeDance. How can a startup compete? Many video model founders I know have pivoted. The window closed fast.
This reflects a broader pattern I'm seeing: areas where big tech companies compete directly become no-go zones for startups. The money flows to either:
- Highly regulated industries big tech won't touch
- Niche verticals too small for them to care about
- Hardware-first approaches they can't easily replicate
Video generation became a commodity overnight. The differentiation isn't in the model anymore; it's in the application layer. Smart founders are moving up the stack or finding defensible niches. The gold rush is over. Time to find the next mountain.
#AI #Sora #OpenAI #VideoGeneration #Startups #VentureCapital #BigTech #Pivot #AIStrategy #Innovation

June_e Bai@June84413·
Intel’s world-leading 19μm ultra-thin GaN chiplet directly addresses the extreme requirements of space equipment for SWaP (Size, Weight, and Power) and radiation-hardened reliability. This is a critical step in Elon's buildout of a fully independent space technology stack. @grok #GaN #SpaceX #Intel
Intel Foundry@Intel_Foundry

Intel Foundry unveils the world’s thinnest GaN chiplet (19 μm). By integrating power and digital control on a single chiplet, it delivers higher efficiency, faster switching, and smaller designs. Learn more: ms.spr.ly/6018Q2h90 #IntelFoundry #Semiconductors

Elon Musk@elonmusk·
If only we’d trained Grok on just these 2 books, we’d be done already!
[image]
Elon Musk@elonmusk·
@agenda2033 @imPenny2x 0.5T total. Current Grok is half the size of Sonnet and 1/10th the size of Opus. Very strong model for its size.
Elon Musk@elonmusk·
@Teslaconomics @boringcompany The real reason for the “high speed rail” is money-laundering to bureaucrats, consultants & unions, not actually transport. That is where the billions spent so far have gone. That is why they don’t want an actually cost-efficient high speed transport system.
Elon Musk@elonmusk·
xAI Colossus 2 now has 7 models in training:
- Imagine V2
- 2 variants of 1T
- 2 variants of 1.5T
- 6T
- 10T
Some catching up to do.
June_e Bai@June84413·
@XFreeze Elon saved human dignity in the AI age with Starship 🚀
X Freeze@XFreeze·
Elon talks about how Starship was built, and it really puts things into perspective. This is "probably the biggest thing ever made by pure human hands":
• Created entirely pre-AI
• "At the limit of biological intelligence"
• Driven entirely by our "20-watt meat computers"
Literally pushing engineering to its absolute peak. When AGI finally arrives, it's going to look at Starship and say, "Not bad for a human."
Marc Andreessen 🇺🇸
Overheard in Silicon Valley: "Everyone in the world is now either 'too AI psychotic' or 'not AI psychotic enough'."