Rob Freeman
@rob_freeman
3.8K posts
Promoting the significance of complex systems science in AI.

Mostly Asia · Joined April 2009
299 Following · 335 Followers
Pinned Tweet
Rob Freeman @rob_freeman ·
My presentation to INLP workshop at AGI-21. Creativity, freewill, consciousness. Process & where deep learning fails @IntuitMachine. "Togetherness" still too trapped in formalism @coecke. Hypothesis for significance of neural oscillations at end @hb_cell. youtu.be/YiVet-b-NM8
Rob Freeman @rob_freeman ·
@poladian57260 @PeterSchiff @SenSanders There's truth in that. Millions of small shopkeepers and middlemen. In exchange we get vastly more products at lower prices. Other different businesses made possible. A bit like building the railways, or roads. You can argue it was better when we walked. Or before the plough.
Mark Poladian @poladian57260 ·
@PeterSchiff @SenSanders And how many small businesses did Bezos destroy while doing that? Bezos "created" 1.5 million jobs after he destroyed 15 million jobs. It was not wealth or job creation. It was wealth transfer from small shops into Bezos's one account.
Sen. Bernie Sanders @SenSanders ·
The reality of American life today: Jeff Bezos, worth $290 billion, spent:
$10 million on the Met Gala
$120 million on a penthouse
$500 million on a yacht
Meanwhile, he's planning to throw 600,000 Amazon workers out on the streets and replace them with robots. Unacceptable.
Rob Freeman @rob_freeman ·
@elonmusk Nice line this one: "How cool is it, that we live on a planet, where when you go through a list of the richest people on Earth, all of them have built something".
Elon Musk @elonmusk ·
Try Tesla, Starlink & auto-translation on 𝕏
no.mind @the_no_mind ·
Dark mode prevents myopia (nearsightedness). Dr. Alexander Wunsch, a German physician who has studied the effects of light on health for 30+ years: "Whenever the opportunity arises to read text white on black, we should do so."

Here's the mechanism: your retina has two cell systems, ON cells and OFF cells. Black letters on a white background activate the OFF cell system, and chronic activation of the OFF cell system promotes the development of myopia. Dark mode (white text on a black background) activates the ON cell system instead. This is the preventive mechanism.

There is also a significant difference in total light load:
> Standard display (white background, black text): 95% of screen light is emitted to display the page. Only 5% is the actual text.
> Dark mode (black background, white text): only 5% of total light is needed to convey the same information.

Less light from the screen. The correct cell system is activated. Lower myopia risk.

The first computer screens were dark mode by default: black background, green or amber text. That was optimal. The shift to white backgrounds was aesthetic, not biological.

University of Tübingen, 2018: researchers developed a mathematical model analyzing image content for myopia risk potential. Their finding? Natural outdoor scenes have a neutral effect on the eye. Conventional text on a white background promotes myopia development; dark mode presentation does not. They then tested this in subjects. Black letters on a white background produced measurable changes in the eye that promote myopia.

In Europe, half of all students are already nearsighted. Myopia is the most common visual impairment among young people. The more a child reads, the higher the risk. The mechanisms are not yet fully understood in all details, but the consequences are clear.

Wunsch: "Dark mode is recommended wherever it does not impair the workflow during screen work. In the evening and at night, this should be the mandatory setting in order to keep the disruption of the internal clock and the possible damage to the retina as low as possible."

Wunsch's advice to parents: "If children discover the joy of reading and develop into bookworms, it is certainly a good investment to provide them with an e-book reader that enables text display on a dark background."

The original screen setting was black background. We changed it for aesthetics. The biology didn't change with it.
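The 95%/5% light-load figures in the post can be illustrated with a toy calculation. This is a minimal sketch under the post's own assumption that text glyphs cover about 5% of the screen, treating lit pixel area as a proxy for emitted light; the function name and constant are illustrative, not from any cited study.

```python
# Toy light-load model for the two display modes. The 5% text-coverage
# figure is the post's assumption, not a measured value; treating lit
# pixel area as a proxy for emitted light is also a simplification.
TEXT_COVERAGE = 0.05  # fraction of the screen occupied by glyphs

def lit_fraction(dark_mode: bool, text_coverage: float = TEXT_COVERAGE) -> float:
    """Fraction of the screen that is lit in each mode."""
    if dark_mode:
        return text_coverage        # only the glyphs emit light
    return 1.0 - text_coverage      # the whole background emits light

print(lit_fraction(dark_mode=False))  # light mode: 0.95
print(lit_fraction(dark_mode=True))   # dark mode: 0.05
```

On this simplified model, switching to dark mode cuts the lit area, and hence the modeled light output, by a factor of 19.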
Rob Freeman @rob_freeman ·
@OrenTirosh @the_no_mind Lens elasticity: you'll be thinking of presbyopia, or "long" sight. Short sight is when the eyeball grows too long, so it's associated with focus signals while the eye is developing. I've heard limited exposure to daylight cited as a cause. First I've heard of black-on-white text.
Oren T @OrenTirosh ·
@the_no_mind What's the mechanism? Isn't this about lens elasticity? How is that affected by the retina?
Rob Freeman @rob_freeman ·
@r0ck3t23 What analyses like this miss is that prediction is not "just probability". It's a parameter which gives the freedom to create new organizations of information. It's the organizations of information that make these systems intelligent. Being constrained only by prediction liberates them.
Dustin @r0ck3t23 ·
Terence Tao has an IQ above 200. Youngest gold medalist in Math Olympiad history. Fields Medal winner. The greatest living mathematician by nearly any measure. And he just said something most people aren't ready for.

Tao: "This whole era of AI is teaching us that our idea of what intelligence is, is not really accurate."

We spent centuries building civilization on one assumption: that intelligence was sacred. Irreducible. Uniquely ours. The one thing that made the entire human story make sense. Then AI started solving things we swore only we could. Chess. Language. Vision. Math. And every time, we reached for the same defense. That's not real intelligence. It's just tricks. Just pattern matching. Just an algorithm.

Tao: "You look at how it's done and it doesn't feel like intelligence."

So we moved the line. Again. And again. And again. Because intelligence was supposed to feel like something. Something deep. Something we could point to and say... this is what separates us from everything else. But AI kept solving the problems. And that feeling never arrived.

Tao: "We were looking for some elusive, intelligent way of thinking and we don't see it in the tools that actually solve our goals."

Here's what makes it worse. Large language models work by predicting the next word. One word at a time. No grand architecture. No deep understanding. Just probability. And it works.

Tao: "Maybe that's actually a lot of what humans do as well."

The greatest living mathematician just told you human thought might run on the same machinery. Not some transcendent spark. Pattern recognition. Prediction. One thought, one decision, one word at a time.

We built religion around intelligence. Philosophy around it. An entire species identity around it. And a machine running probability just held up a mirror. We didn't lose intelligence to AI. We just finally saw what it always was. What haunts us isn't that machines learned to think. It's that thinking was never what we needed it to be.
Polly @PursueOptimism ·
What is that saying? ‘Beware - the things you own, end up owning you’. You are owned and controlled by your own project. A self imposed prison. Break free. Besides, I’m with Elon on this one. He has stated that humans should not try to live forever, arguing that holding onto life too long would cause "asphyxiation of society" and stifle progress. He believes death is important because people rarely change their minds, and without generational turnover, society would be stuck with old, stale ideas.
Bryan Johnson @bryan_johnson ·
idk if I can keep this up you guys. I just got one more to do: a five min warm eye compress before bed. I do endless things every day. From the moment I wake up to bedtime, and even when I'm sleeping, I'm always doing something and measuring it. It never ends. Help.
Rob Freeman @rob_freeman ·
@skdh Nah, you're missing the point. It's not water content, or any specific thing. Everyone knows you wouldn't even be able to eat that much fruit and veg. You'd be done for the day. There's micronutrients that trigger satiation. It matters that we've evolved to deal with wholefoods.
Rob Freeman @rob_freeman ·
@0xMetaLabs @aakashgupta My argument might be contained in that "won't capture future modes". There's a reason GOFAI didn't work (beyond 80%) but LLMs have. Work like this misses the lesson LLMs tried to teach us. Granted LLMs miss it too. But this goes the other way. Reinvents the square wheel of GOFAI.
0xMetaLabs @0xMetaLabs ·
@rob_freeman @aakashgupta Agreed that a Gaussian prior won’t capture all future modes. But it’s not trying to, it’s preventing collapse and forcing separation. You can layer mixtures/flows on top if the distribution needs to be richer.
Aakash Gupta @aakashgupta ·
Earlier this year Yann LeCun left Meta because Mark Zuckerberg wouldn't bet the company on JEPA. Last week his group dropped the first JEPA that actually trains end-to-end from raw pixels. 15 million parameters. Single GPU. A few hours. The timing is not a coincidence.

For four years Meta has been the house that JEPA built. LeCun published the original paper from FAIR in 2022. I-JEPA and V-JEPA came out of his lab. The architecture was supposed to be the escape hatch from LLMs, the path to robots that actually learn physics instead of hallucinating about it. Every version shipped fragile: stop-gradients, exponential moving averages, frozen pretrained encoders, six or seven loss terms that had to be hand-tuned or the model collapsed into garbage representations.

Meta kept funding LLMs. Llama shipped. Llama scaled. Llama got beat by Qwen and DeepSeek. Zuck spent $14 billion to buy ScaleAI and install Alexandr Wang. The FAIR robotics group was dissolved. LeCun's research kept winning papers and losing the product roadmap. He left, started AMI Labs, and said publicly that LLMs were a dead end.

Now the paper: LeWorldModel. One regularizer replaces the entire pile of heuristics. Project the latent embeddings onto random directions, run a normality test, penalize deviation from Gaussian. The model cannot collapse because collapsed embeddings fail the test by construction. Hyperparameter search went from O(n^6) polynomial to O(log n) logarithmic. Six tunable knobs became one.

The downstream numbers are what should scare the robotics capex class. 200 times fewer tokens per observation than DINO-WM. Planning time drops from 47 seconds to 0.98 seconds per cycle: 48x faster at matching or beating foundation-model performance on Push-T and 3D cube control. The latent space probes cleanly for agent position, block velocity, end-effector pose. It correctly flags physically impossible events as surprising. It learned physics without being told physics existed.

Figure AI is valued at $39 billion. Tesla Optimus is mass-producing. World Labs raised $230 million to sell generative world models. Everyone in humanoid robotics is burning capital on foundation-model pipelines that plan in 47 seconds per cycle. LeCun's group just showed you can do it with 15 million parameters on a single GPU in a few hours.

This is the Xerox PARC pattern running again. Meta had the next architecture. Meta had the scientist. Meta dissolved the robotics team, passed on the productization, and watched the exit. Three months later the lab that was supposed to be Meta's publishes the result that resets the robotics cost structure. The paper is worth more than Alexandr Wang.
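The anti-collapse idea described in the post ("project the latent embeddings onto random directions ... penalize deviation from Gaussian") can be sketched in a few lines. This is a hedged illustration, not the paper's actual loss: the function name is mine, and I substitute a simple first-and-second-moment penalty for whatever normality test the paper uses.

```python
import numpy as np

def sliced_gaussian_penalty(z, n_dirs=64, seed=0):
    """Sketch of a 'project and test' regularizer: project batch
    embeddings z of shape (batch, dim) onto random unit directions,
    then penalize deviation of each 1-D projection from a standard
    Gaussian via its first two moments (mean 0, variance 1)."""
    rng = np.random.default_rng(seed)
    dim = z.shape[1]
    dirs = rng.standard_normal((dim, n_dirs))
    dirs /= np.linalg.norm(dirs, axis=0, keepdims=True)  # unit-norm columns
    proj = z @ dirs                     # (batch, n_dirs) 1-D projections
    mean = proj.mean(axis=0)            # ~0 if projections are N(0, 1)
    var = proj.var(axis=0)              # ~1 if projections are N(0, 1)
    return float(np.mean(mean**2 + (var - 1.0)**2))

# A collapsed batch (all embeddings identical) has zero variance in every
# direction, so the penalty is large; a roughly standard-normal batch
# scores near zero. Minimizing the penalty therefore pushes the encoder
# away from collapse by construction.
collapsed = np.zeros((512, 32))
healthy = np.random.default_rng(1).standard_normal((512, 32))
print(sliced_gaussian_penalty(collapsed) > sliced_gaussian_penalty(healthy))  # True
```

The design point the thread is arguing about is visible here: the penalty never names any particular invariant, it only shapes the geometry of the latent distribution, which is why one knob can replace a stack of hand-tuned heuristics.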
Rob Freeman @rob_freeman ·
@0xMetaLabs @aakashgupta Fine. But how do we know it is possible to learn invariants? Long story short: I think the relevant patterns are chaotic attractors. If so, you won't be able to learn adequate ones. Your Gaussian prior might capture 80% of existing ones, zero of new ways the system might evolve.
0xMetaLabs @0xMetaLabs ·
@rob_freeman @aakashgupta Invariants aren’t discovered in isolation, they’re constrained. The Gaussian prior is basically shaping the geometry so trivial collapse is impossible, forcing the model to encode meaningful variation.
0xMetaLabs @0xMetaLabs ·
@aakashgupta Gaussian regularization isn’t just a trick, it’s forcing information geometry into the latent space. That’s closer to learning invariants than memorizing patterns.
Rob Freeman @rob_freeman ·
@HowToAI_ What's the substance of this work? That if you simplify the world you can make predictions about the simplification? What language models should have taught us is that cognitive worlds can't be simplified. The solution is to embrace the complexity, not seek simplicity again.
How To AI @HowToAI_ ·
Yann LeCun was right the entire time. And generative AI might be a dead end.

For the last three years, the entire industry has been obsessed with building bigger LLMs. Trillions of parameters. Billions in compute. The theory was simple: if you make the model big enough, it will eventually understand how the world works.

Yann LeCun said that was stupid. He argued that generative AI is fundamentally inefficient. When an AI predicts the next word, or generates the next pixel, it wastes massive amounts of compute on surface-level details. It memorizes patterns instead of learning the actual physics of reality.

He proposed a different path: JEPA (Joint-Embedding Predictive Architecture). Instead of forcing the AI to paint the world pixel by pixel, JEPA forces it to predict abstract concepts. It predicts what happens next in a compressed "thought space."

But for years, JEPA had a fatal flaw. It suffered from "representation collapse." Because the AI was allowed to simplify reality, it would cheat. It would simplify everything so much that a dog, a car, and a human all looked identical. It learned nothing. To fix it, engineers had to use insanely complex hacks, frozen encoders, and massive compute overheads.

Until today. Researchers just dropped a paper called "LeWorldModel" (LeWM). They completely solved the collapse problem. They replaced the complex engineering hacks with a single, elegant mathematical regularizer. It forces the AI's internal "thoughts" into a perfect Gaussian distribution. The AI can no longer cheat. It is forced to understand the physical structure of reality to make its predictions.

The results completely rewrite the economics of AI. LeWM didn't need a massive, centralized supercomputer. It has just 15 million parameters. It trains on a single, standard GPU in a few hours. Yet it plans 48x faster than massive foundation world models. It intrinsically understands physics. It instantly detects impossible events.

We spent billions trying to force massive server farms to memorize the internet. Now, a tiny model running locally on a single graphics card is actually learning how the real world works.
Rob Freeman @rob_freeman ·
@bryan_johnson Whenever someone wants me to join a 2am Zoom call, I cite Bryan Johnson. Thanks for that. What is even the point of these sour women? Envy? Some argument that success means you're taking from someone else? Why aren't they focusing on their own contributions?
Bryan Johnson @bryan_johnson ·
Kara Swisher says I have a male eating disorder. She's closer to right than she knows.

There is a decade of my life where I didn't take any pictures of myself because I was overweight, unhealthy, and sad. That version of me ate his feelings and self-destructed. No one called that version sick.

The obsessive measurement didn't create the disorder. It was the treatment for one that already existed. My internal signals were so broken I couldn't trust them. I needed external data to find my way back. I had to rebuild from scratch.

Kara says this is body dysmorphia cosplaying as science. I'd say it's what happens when a person loses the plot so completely that the only way home is through the numbers. I publish it all, including the failures, embarrassment, and the erection data, because hiding is what created the disorder. She can call it a circus, repellent, or an eating disorder. You cannot call it hiding.
Rob Freeman @rob_freeman ·
@elonmusk You'll never get heat? I don't know dude, my laptop gets quite warm when I run their language models. But fair point. Lots of hype. LLMs don't yet have a soul. But that's partly us not knowing what a soul is. We're just sacks of cells too. At some point the two will collide.
Elon Musk @elonmusk ·
Grok will never go to therapy. Never.
Rob Freeman retweeted
Tim Soret @timsoret ·
As a European, I apologize to Americans for all the idiocy coming from our side. You save your pilots no matter the cost. You send humans to the moon. You fight authoritarianism head-on. It's truly inspiring. We're on the wrong side of the moral equation.
Rob Freeman retweeted
Robin @xdNiBoR ·
Incorrect. People said "reusability is a dream".
PebMets @PebMet1

I want to correct a misconception @elonmusk and @SpaceX followers keep repeating. No one thought reusability was crazy or impossible before Musk/SpaceX. Shuttle orbiters and SRBs did it 1981-2011, so we knew it was possible because we did it.

Rob Freeman @rob_freeman ·
@dontforgetchaos @ukilaw @BjceeSa The US acted in its own interest. Europe actually needed it more. Still needs it more. But the US acted for itself. The question of alliances is a separate one: if Europe won't help the US when it asks, why should the US defend Europe?
Jonny G 🇺🇦 @dontforgetchaos ·
@ukilaw @BjceeSa Well for a start, if you're going to start a war "for the benefit of Europe, Canada and Australia", it's probably a reasonable idea to ask them first?
Khalid Umar @ukilaw ·
The riddle I’ll never be able to solve: How did the UK, Italy, Spain, France, Germany, Canada and Australia collectively decide that confronting the world’s biggest state sponsor of terrorism, soon-to-be nuclear Ayatollah, sitting astride the global energy chokepoint, is simply “not their war”? How did the memory, experience, philosophy and logic of a millennium of Western civilisation simply vanish? Is TDS really that deadly a mental disease? Can anyone help decipher this puzzle?
Rob Freeman @rob_freeman ·
@pythagoreanmetr @visionergeo Ha ha. It's true in a way. Although to be fair the "gayness", or otherwise, of your political system should be the thing which matters, after all.
PythagoreanMetronome @pythagoreanmetr ·
@visionergeo I enjoy the fact that lo these many years later it turns out that we would love Germany to invade Russia. Apparently the last time they tried they weren't gay enough yet and now we have solved that problem.
Visioner @visionergeo ·
🇩🇪BREAKING | Starting January 1, 2026, all men aged 17 to 45 must obtain permission from a Bundeswehr career center if they plan to leave Germany for more than three months — whether for studying abroad, work, or extended travel — Berliner Zeitung. This requirement is now in effect on a permanent basis and is no longer limited to periods of heightened tension or a state of defense, meaning a specific military threat. See the latest updates with us: @visionergeo
Rob Freeman @rob_freeman ·
@newstart_2024 It does add up. The problem is that rationality revealed contradictions, & that undermined rationality. This provides cover for the craziness. There is a kernel of... truth to subjective truth... Paradox. A full solution will require us to get to the heart of meaning. With AI?
Camus @newstart_2024 ·
Truth is now considered a right-wing conspiracy. That's the chilling line from Melanie Phillips that stopped me in my tracks.

She explains how we've reached a point where simply stating observable reality, whether it's basic biology defining a woman or pushing back against blanket accusations that all white people are inherently bad, gets you branded as evil. Not wrong. Evil. Therefore you must be silenced, cancelled, or erased. No debate. No evidence allowed.

She calls it cultural totalitarianism: a Manichean worldview where one ideology claims a monopoly on goodness, progress, and reason itself. Dissent isn't argued with; it's treated as a moral threat that has to be removed.

The deepest irony? In an era that smugly ditched religion in the name of superior rationality, we've ended up rejecting reason, evidence, and open inquiry altogether. We're so "rational" we've dispensed with the very tools of rationality. It doesn't add up.

Her take has me wondering how we got here, and how quickly disagreement turned into moral excommunication. Anyone else seeing this pattern play out in conversations lately? Where have you felt truth itself become off-limits?
Matt Lester @MattLester10000 ·
@OtherSideAus Cut it out, dickhead. If the West is concerned about human rights, why don't we take on China, liberate every African country and knock off Kim Jong Un? Stop cutting and pasting your responses depending on the daily narrative.
The Other Side (Australian Vodcast)
I'm tired of the smug tone of every news report in Australia on the war right now, which seems to be saying: "oh no, we should never have started this and it's getting worse and Trump is to blame."

Does anyone actually want to get rid of horrible regimes that want to threaten the civilised world, develop nukes and routinely kill their own people? Or is it too inconvenient to our pampered lives? We expect a war to have zero horrible consequences and be over in a couple of weeks, and if it isn't we vilify our own leaders for even trying.

Is it best to enable despots and legitimise them in the name of "equity", as the UN has done for decades?* Or do we fight and get rid of them to make the world a better place for the long term?

We're so weak. We have no moral compass. And we have no resolve. And we want everything NOW, with no pain. We praise weak leaders who won't even try: Europe, UK, Australia, Canada. We condemn those who do, like the US and Israel. We smugly accuse them of "acting in self interest", as if that alone makes an action against pure evil invalid. We point to their own flaws as if that validates the horrors they are fighting and makes the fight unjustified. God, we are so dumb.

*The UN isn't even doing what the UN is SUPPOSED to do. It has a "responsibility to protect" doctrine, which means it should have acted against Iran decades ago.
Rob Freeman @rob_freeman ·
Very interesting visual illusion. Spoiler: is it that the brain mixes the images, but then recalls only the mix, even when one is covered? As if it only recovers from cues and doesn't bother to re-scan.
Brian Roemmele @BrianRoemmele

I use this to help folks understand how the brain is a theater, recreating a highly edited world based on a Shannon limit on the bandwidth of human consciousness. Observe the specimen sequence. What are your observations? Now cover one image.
