Jason Fox
@JasonGFox

4.7K posts

Founder at Noumenal Labs. Building Physical AI.

Dallas, TX · Joined May 2008
1.3K Following · 1.3K Followers
Jason Fox@JasonGFox·
@vytalow Just wanted to drop in and say that, first, awesome work! Second, as you have rightly pointed out in your work, neurons don't learn using gradient updates. Take a look at predictive coding and active inference. Would be happy to connect you with my team for a chat if you're interested.
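For anyone unfamiliar with the pointer: the appeal of predictive coding is that each layer learns from its own local prediction error rather than from a gradient backpropagated through the whole network. A minimal, purely illustrative sketch of that local-update idea in Python (not anyone's actual training code):

import numpy as np

rng = np.random.default_rng(0)

# One-layer linear predictive coding model: a latent estimate z predicts
# the observation x through generative weights W.
W = rng.normal(scale=0.1, size=(16, 8))   # observation_dim x latent_dim
z = np.zeros(8)                           # latent state estimate
x = rng.normal(size=16)                   # one observation

lr_state, lr_weights = 0.1, 0.01
for _ in range(50):
    eps = x - W @ z                       # prediction error, available locally
    z += lr_state * (W.T @ eps)           # inference: settle the state to reduce that error

# Learning: a Hebbian-style update from locally available error and activity;
# no error signal is propagated from other layers.
W += lr_weights * np.outer(eps, z)

Both updates use only quantities available at the layer itself, which is the contrast with backpropagated gradients being drawn in the reply above.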
vytal@vytalow·
Very early on in training, but it seems like the cells can pick up on the general features/some movement in the video. Will release the full code once I've worked out a way to improve the fidelity of the output. Currently, the main problem is that the spikes inherently do not contain sufficient information for video reconstruction. I'm exploring ways to extract more information from this limited spike data.
[GIF attached]
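As background on the "spikes don't contain enough information" problem, one generic trick (not necessarily what @vytalow is doing) is to convert sparse binary spike trains into smoothed firing-rate estimates before decoding, so the downstream video decoder sees a denser signal. A minimal Python sketch:

import numpy as np

rng = np.random.default_rng(0)

dt = 0.001                               # 1 ms bins
spikes = rng.random((8, 2000)) < 0.02    # 8 cells, 2 s of sparse binary spike trains

# Causal exponential kernel: each spike leaves a decaying trace, turning
# isolated 0/1 events into a rate-like feature for reconstruction.
tau = 0.05                               # 50 ms time constant
t = np.arange(0, 5 * tau, dt)
kernel = np.exp(-t / tau)
kernel /= kernel.sum() * dt              # rough normalization to spikes per second

rates = np.stack(
    [np.convolve(s.astype(float), kernel)[: spikes.shape[1]] for s in spikes]
)
print(rates.shape)                       # (8, 2000): smoother features than the raw spikes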
Jason Fox@JasonGFox·
@mcuban Bandwidth may be an issue, but I'm more worried about latency. I don't care how much data I can pass around if I can't do it in less than 20ms.
Mark Cuban@mcuban·
Will outdoor AI use cases, particularly when we get to “world view”-based AI, overwhelm 5G? Satellite uplinks? And make many of those use cases unusable? Is the bottleneck going to be bandwidth?
Jason Fox@JasonGFox·
@asimovinc Is that a transparent speaker I spy embedded in the chest?
Asimov@asimovinc·
This is Asimov v1 today. Full body assembly is almost complete. A few more weeks of work and we'll test the whole system together to make this guy walk! Exciting to see it taking shape.
[image attached]
Jason Fox@JasonGFox·
@JacklouisP Pick any humanoid company in the West. Vertical integration is a part of the raise narrative and one of the reasons why they are closing such large rounds.
Jack 🤖@JacklouisP·
Selling into robotics is hard. Roboticists don't want to buy tools. They want to build them. "Not invented here" syndrome kills more picks-and-shovels startups than bad tech does. I've watched teams reject better solutions purely because the code wasn't theirs.
Jason Fox@JasonGFox·
@asimovinc Honestly, if you can make a modular hand system that enables a user to choose along a spectrum of grip strength and dexterity for different tasks, that would be ideal. Right tool for the job, you know?
Asimov@asimovinc·
Early hand designs for Asimov. We're testing different approaches to see what works. Nothing final yet. What would you prioritize: grip strength, dexterity, or modularity?
[image attached]
Jason Fox@JasonGFox·
@cixliv I keep wanting to ask: what are you using for remote control? Any teleoperation?
CIX 🦾@cixliv·
2025 was the year I yoloed and got a 6 foot tall robot. It has been the most exciting year of my life professionally. Grateful for 2025. Happy new year everyone! 2026 is when we make Real Steel real!
Jason Fox@JasonGFox·
I think @cixliv has it right. Hard not to conclude that the near-term use of humanoid robots that makes the most commercial sense is entertainment. Especially since every demo video from Chinese manufacturers is literally robots performing martial arts moves.
CIX 🦾@cixliv

Why humanoid robots? Why not? A rant. The humanoid form is the right form if you want generalized “physical AI”: the method to train it will be human data, poured into a human-like robot. Then economies of scale will make humanoids cheaper than specialized robots through mass production, making generalized physical AI lucrative. But there are four main issues: economics, data, hardware, and distribution.

Economics: unlike ChatGPT, which can be a pocket lawyer (thus saving you $1,000 an hour), most blue-collar labor is much cheaper. Replacing a $15-an-hour job with a $50,000 robot (that will perform more poorly) is not economical.

Data: estimates put humanoid robot training data at roughly 0.000001% to 0.00000001% of LLM training data. The only way we bridge this gap is a massive data collection effort or very robust, lifelike simulators burning through GPU farms all day.

Hardware: humans have around 250 DOF while top-of-the-line humanoids have around 40 DOF, although we don't need all the degrees of freedom a human has to solve most tasks. Humanoids today only last about 1-2 hours on battery, most aren't waterproof, and they are still far from parity with the human body.

Distribution: say we solve all three issues above; we still need to mass-produce the robots. “Physical AI” isn't something we can simply access with a browser or a phone. You need to ship millions of robots in excess of 100-150 pounds, more complex than cars, all over the world, with huge raw material and rare earth requirements. This will take time.

So while everyone knows the future is humanoid, it will take longer than people realize to become disruptive. Don't worry about those blue-collar jobs being taken by robots for a while. But you know what will be lucrative in the meantime? Entertainment. That is why at @rek we are aware of the limitations of humanoids and will bridge that AI gap with entertainment. Entertainment that will set the framework for a product category that will forever change our world. To become the next F1, with humanoids.

Jason Fox@JasonGFox·
@theonlyAyo I agree that teleoperation is a fundamental part of the new robotics stack, but could one restate this as “human demonstration is the new programming language”?
Ayo@theonlyAyo·
In robotics, skills are the new apps, and teleoperation is the new programming language.
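One concrete way to read both framings above, offered as an illustrative sketch rather than a description of anyone's actual stack: a "skill" is specified by teleoperated (observation, action) pairs, and the "programming" step is fitting a policy to them. Hypothetical Python with made-up shapes:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical teleoperation log: each step pairs an observation (joint
# angles, gripper-camera features, ...) with the operator's commanded action.
obs = rng.normal(size=(1000, 32))                        # stand-in for logged observations
true_K = rng.normal(size=(32, 7))
act = obs @ true_K + 0.01 * rng.normal(size=(1000, 7))   # stand-in for logged 7-DoF actions

# Behavior cloning in its simplest form: fit a linear policy act ~ obs @ K
# by least squares. Real systems use richer models, but the interface is the
# same: demonstrations in, executable skill out.
K, *_ = np.linalg.lstsq(obs, act, rcond=None)

def policy(observation: np.ndarray) -> np.ndarray:
    """Replay the demonstrated skill on a new observation."""
    return observation @ K

print(policy(obs[:1]).shape)                             # (1, 7): one action command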
Jason Fox@JasonGFox·
@tkipf Sweet! I’ve had a similar idea for years, just never the team or the time to execute on it. Glad somebody is. Great work!
Thomas Kipf@tkipf·
The world doesn’t live on a pixel grid and neither should vision models! Excited to share Moving off-the-Grid (MooG): a video model w/o grid-based representations. MooG learns detached “off-the-grid tokens” that bind to (and track) scene elements as camera & content move. 🧵
Jason Fox@JasonGFox·
We are officially entering the humanoid robot “gladiator” phase. A natural evolution given where the technology is (strong locomotion, limited agency) and humanity’s enjoyment of sport and entertainment.
Humanoid Scott@GoingBallistic5·
What is this?
[image attached]
Jason Fox retweeted
Noumenal Labs@noumenal_labs·
During this year’s NeurIPS afterhours, we’re hosting an intimate gathering of researchers, founders, and investors exploring the intersection of computation, thermodynamics, and embodied intelligence. If you’re working at the edge of alternative architectures, stat-phys-inspired ML, or embodied intelligence, you’ll feel right at home.

What to expect:
• Thought-provoking conversations on alternative compute paradigms
• A curated group of technologists & builders
• Great food, great drinks, great company

Hosted by cyber•Fund, Noumenal & collaborators. Registration & approval required: luma.com/tqgzktjg

Looking forward to connecting with the people shaping the next chapter of intelligent systems. @mjdramstead @cyberfund
Jason Fox@JasonGFox·
@IntuitMachine @Scobleizer @kscalelabs This. There is also a broad belief that humanoids are not viable in any reasonable timeframe. The handful of VCs who do believe in them have already made their bets - Figure, Apptronik, etc.
Carlos E. Perez@IntuitMachine·
@Scobleizer @kscalelabs This just feels like a repeat of other technologies like drones, 3D printers, etc. US capital only invests in potential monopolies. It totally ignores ecosystem plays. Despite the democratization of the internet, investors only care about walled gardens.
Jason Fox@JasonGFox·
@jloganolson Awesome work! Be sure to burn it all with fire after Halloween.
Logan Olson@jloganolson·
Still keeping crawl at half-speed but it works with the costume on!
Jason Fox retweeted
Maxwell Ramstead@mjdramstead·
New blog post by @noumenal_labs: “WTF is the FEP? A short explainer on the free energy principle”: noumenal.ai/post/wtf-is-th… Really happy to share this one! We discuss the free energy principle: What it is, what it is not, what promise it holds, why it can be extremely useful, and why it has yet to live up to the hype.
Jason Fox retweeted
Maxwell Ramstead@mjdramstead·
New blog post by @noumenal_labs: “Grounded rewards in the era of experience: A commentary on ‘Welcome to the era of experience’”: noumenal.ai/post/grounded-…

Here’s the tl;dr:
• This post is a commentary on a new paper by Silver and Sutton, entitled “Welcome to the Era of Experience” (2025).
• Silver and Sutton (2025) provide a thought-provoking discussion of the last decade of research and development in the field of artificial intelligence (AI), and where the field is heading. The core idea is that we have reached a performance ceiling for AI agents trained via supervised learning from human data — and that we have entered a new epoch in the development of AI, which the authors call the “era of experience.”
• The era of experience, as the authors describe it, is a forthcoming phase in the development of AI that will be characterized by “grounding” in the real world, online action-perception loops, physical embodiment, environment-sourced reward signals, and online real-time experiential learning.
• In particular, Silver and Sutton argue that the era of experience heralds a shift from hand-crafted, user-specified reward functions and the heavy use of human expert feedback and supervision, towards “grounded rewards,” which are measured and evaluated by AI agents themselves by continually assessing the sensory consequences of their actions in real time.
• Here, we review and evaluate their argument. We enthusiastically embrace several aspects of their discussion and offer some constructive feedback pertaining to the learning of grounded reward functions.
Jason Fox@JasonGFox·
Physical AI is a new frontier that presents challenges beyond the scope of current approaches to AI. Deep Learning works great in use cases where data is abundant, but in the physical world data is sparse and ever-changing. This necessitates a new set of architectures. Let's go!
Maxwell Ramstead@mjdramstead

I’m thrilled to share a blog post by @noumenal_labs: “From Natural Intelligence to Physical AI”: noumenal.ai/post/from-natu…

Here’s the tl;dr:
• Physical AI is the next big wave of research and development in the field of artificial intelligence. Its proponents claim that Physical AI holds the promise of revolutionizing industry.
• But state of the art AI will not deliver on these promises, because it is not capable of understanding the structure, variability, and complexity of the physical world that we inhabit.
• Noumenal Labs is a newly formed deep tech company that is laser focused on building digital brains for Physical AI — so it can be deployed profitably, efficiently, safely, and at scale.
• We are using our unique, proprietary macroscopic physics discovery technology to build object centered world models that will power the brains of autonomous systems — unlocking machines that can act in intelligent and situationally appropriate ways in the real world, and that can adapt to a changing world in real time.
• Driven by key insights from statistical physics and cognitive science, in particular by Karl Friston’s active inference framework, the approach pioneered at Noumenal Labs unlocks the capability of machines to represent the physical world in the same way we do, enabling them to act safely and in alignment with human values — and thereby, to deliver on the promise of Physical AI.

Jason Fox retweeted
Machine Learning Street Talk@MLStreetTalk·
Neuroscientist Dr. Jeff Beck from @noumenal_labs discusses the fundamental nature of representation, understanding, and modelling, comparing biological intelligence with current artificial intelligence. Jeff argues that *how* information is represented dictates predictive ability and that LLMs, while impressive at symbol manipulation and pattern matching (like next-word prediction), lack the *grounded*, causal understanding of the world inherent in biological systems.

Timestamps:
00:00 - Cat visual cortex experiments & discovering orientation sensitivity (slide projector analogy)
01:49 - Representation choice and neural coding (orientation vs. feature intensity)
02:30 - Choice of representation impacts predictions; generative models
03:15 - Importance of choosing the right generative model for predictions
03:35 - The problem: We don't know the brain's true generative model
03:55 - Theory of Mind (ToM) in LLMs
04:05 - Jeff Beck's ToM tests on early ChatGPT (stapler example)
05:40 - ChatGPT recognizing the ToM test vs. passing it
06:32 - Analogy: LLMs recognizing known problems vs. generalizing (sum/product riddle)
07:25 - Do LLMs implicitly build world models? Vicarious experience analogy
07:59 - The difference: Grounding symbols in reality outside language
08:35 - AI Alignment: Difficulty in capturing human reward functions & belief formation
09:21 - Nightmare scenario: Humans as "complacent value function selectors"
09:44 - Hope: AI enhancing human understanding, not replacing thought
10:08 - Philosophy of science: Science realism vs. modeling pockets of regularity
10:39 - Noise in models as ignorance or deliberate exclusion (design choice)
11:00 - Design choices in science, controlled experiments, and induced bias
11:29 - Are there true, discoverable mathematical laws of the universe?
11:41 - Is there a "true" ground truth distribution (P)? Beck's answer: No (with nuance)
12:55 - Ontological vs. Epistemological divide: Perfect models vs. models of regularities
13:21 - Are scientific models "false by definition"? The Bayesian perspective
14:07 - "All knowledge is conditional"; Are foundational theories (e.g., FEP) true or just perspectives?
14:51 - FEP as a mathematical framework, not a theory; models are just models
15:54 - Legibility vs. Utility: Useful but illegible AI models
16:01 - Prediction vs. Explanation: Trusting black boxes can be unsatisfying
16:30 - Why understanding AI matters: Ensuring alignment with human decisions/values
17:08 - Line-of-sight legibility as an alignment approach
17:14 - Benefits of explainable AI: Human understanding and value alignment verification
18:21 - RL components: Prediction engine, reward function, policy; the alignment challenge
19:25 - Trusting AI = Trusting its policy aligns with our reward + its superior beliefs
19:57 - Language: Intrinsic representation vs. pointers between shared minds
20:21 - Why language works: Shared internal models and common grounding
21:00 - Basis of shared understanding: Not linguistic, but shared experience/intuitive physics
22:44 - Consciousness and language as lossy, simplified summaries of complex brain processes
23:22 - Evidence for simplification: Brain regions, perception vs. representation; limits of language models
24:32 - Counterpoint: Language captures complex/ambiguous human concepts
24:54 - Language as massive compression: The information bottleneck (Meister's paper)
26:22 - Implication: Language/actions are poor representations of internal understanding
27:03 - Can language models understand? The mimicry argument (Piantadosi)
27:33 - Beck's skepticism: LLMs excel at prediction/mimicry, not true understanding
28:09 - LLM explanations replicate structure but lack grounding
28:54 - Beck's test for LLM understanding: Genuine novelty beyond training data
29:19 - Summary: Symbol manipulation is not understanding; grounding is key
30:06 - Abstraction and Idealization in scientific modeling ("The Brain Abstracted")
30:45 - Revisiting Newton: Intuitive physics is correct for our world; idealizations are simplifications
32:01 - Sophistication & boundaries: Nested systems vs. one complex system?
32:32 - The boundary problem in FEP/Markov Blankets: Where to partition?
33:41 - Beck's research: Finding principled partitions based on interaction dynamics
35:44 - Beyond direct experience: Imagination, language, and learning
36:16 - Human creativity: Creating new *things* by combining modeled objects (Systems Engineering)
37:43 - Goal for AI: Automating systems engineering for creative combination
38:17 - Sutton's "Reward is Enough" paper
38:25 - The challenge of "Reward is Enough": Defining and obtaining the *right* reward function
39:02 - Difficulty of eliciting individual reward functions
39:52 - The core alignment problem: Accessing and representing individual reward functions
40:13 - Impossibility: Disentangling beliefs and rewards from observed actions
41:51 - Argument analogy: Disagreements stem from different beliefs or values
43:00 - Prerequisite for value inference: Understanding belief formation
43:13 - Building aligned systems: Sparsity of data, meta-models vs. base system modification
43:46 - Proposed solution: AI layer that models the human's belief formation system
44:40 - Alignment process: Align beliefs first, then address value differences
45:00 - Conclusion

CC @mjdramstead
Jason Fox@JasonGFox·
Build the world you want to see