Sentient Car

170 posts

@sentientcar

Shuffling numbers till the bot moves

San Jose, CA · Joined July 2025
1.1K Following · 63 Followers

Pinned Tweet
Sentient Car@sentientcar·
6-DoF tracking of a UMI-style hand, replicated on a robot. @GeneralistAI, you better watch out!

Sentient Car@sentientcar·
@MarwaEldiwiny It does seem like the unsaid part is: this is to get teleop data in actual homes. And what early adopters get is to mention they have a robot butler.

Sentient Car@sentientcar·
@k7agar @GeneralistAI The trick to getting it smooth is to hate wifi with everything you've got (the lag here is purely from motor safety constraints). Everything wired.

Sentient Car@sentientcar·
@k7agar @GeneralistAI Haven't documented it, but the tracking is quite simple:
* 6-DoF tracking using a Quest 3 controller.
* IK the 6-DoF position onto the hand.

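The two steps above (track a pose, then IK it onto the robot) can be sketched in miniature. This is a toy 2-link planar arm with a Jacobian-transpose IK step, not the author's actual 6-DoF pipeline; the link lengths, gain, and the "tracked" target pose are all made up for illustration, and the real Quest 3 tracking API is not shown.

```python
import math

# Toy 2-link planar arm; link lengths are invented stand-ins for the
# real robot model, which the thread does not specify.
L1, L2 = 0.3, 0.25

def fk(q1, q2):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def ik_step(q1, q2, target, gain=1.0):
    """One Jacobian-transpose IK step toward the tracked target position."""
    x, y = fk(q1, q2)
    ex, ey = target[0] - x, target[1] - y
    # Jacobian of the 2-link arm.
    j11 = -L1 * math.sin(q1) - L2 * math.sin(q1 + q2)
    j12 = -L2 * math.sin(q1 + q2)
    j21 = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    j22 = L2 * math.cos(q1 + q2)
    # Jacobian-transpose update: dq = gain * J^T e
    dq1 = gain * (j11 * ex + j21 * ey)
    dq2 = gain * (j12 * ex + j22 * ey)
    return q1 + dq1, q2 + dq2

# Pretend this position came from the Quest 3 controller tracker.
target = (0.35, 0.2)
q1, q2 = 0.1, 0.1
for _ in range(500):
    q1, q2 = ik_step(q1, q2, target)

x, y = fk(q1, q2)
print(round(x, 3), round(y, 3))  # converges close to the target
```

In the real 6-DoF version the error would include orientation and the Jacobian would come from the robot model, but the loop structure is the same: read the tracked pose each tick, run IK, send the joint targets.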
Sentient Car@sentientcar·
@andrewgwils Isn't math/chess a very easy counterexample to this? We are quite sample-efficient at learning math compared to LLMs, and there was no math/chess in the ancestral environment.

Andrew Gordon Wilson@andrewgwils·
There's a fourth possibility: humans only appear sample efficient because they've effectively seen a massive amount of data through evolution. Remember, there is a fluidity between the model and the data. The model is a representation of our understanding of data.

Dwarkesh Patel@dwarkesh_sp
There's a quadrillion-dollar question at the heart of AI: why are humans so much more sample efficient than LLMs? There are three possible answers:
1. Architecture and hyperparameters (aka transformer vs whatever ‘algo’ cortical columns are implementing)
2. Learning rule (backprop vs whatever the brain is doing)
3. Reward function
@AdamMarblestone believes the answer is the reward function. ML likes to use pretty simple loss functions, like cross-entropy. These are easy to work with. But they might be too simple for sample-efficient learning. Adam thinks that, in humans, the large number of highly specialised cells in the ‘lizard brain’ might actually be encoding information for sophisticated loss functions, used for ‘training’ the more sophisticated areas like the cortex and amygdala.
Like: the human genome is barely 3 gigabytes (compare that to the TBs of parameters that encode frontier LLM weights). So how can it include all the information necessary to build highly intelligent learners? Well, if the key to sample-efficient learning resides in the loss function, even very complicated loss functions can still be expressed in a couple hundred lines of Python code.

Sentient Car@sentientcar·
@roblee_rl Nice! Is there a reason you prefer this arm over something cheaper like the YAM arms?

Rob Lee@roblee_rl·
A good model trained on even a simple task with a tiny amount of data feels mesmerising, no matter how many times you see it.

Justin Strong@GPTJustin·
Nothing in robotics is built yet. Working with classical software or even LLMs is so comparatively easy. If you approach "I want to finetune pi0.5 on a new embodiment" with the old mindset, you'd assume the tooling (sim, finetune script, etc.) around this task exists. You'd be so wrong!
Case in point: I've just discovered that the LeRobot finetune script, which includes a train_expert_only flag and supports Pi0.5, does not implement Knowledge Insulation, which is absolutely required to only train the action expert in pi0.5. So this signals I need to switch mindsets here. I can't just "throw data at the model and let it learn"; I'm going to need to deeply study and understand this model before I'll be able to make it do anything.

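For intuition about what a flag like train_expert_only is expected to do, here is a minimal sketch of freezing everything except an action-expert submodule by parameter name. The module names and the `action_expert.` prefix are invented for illustration; they are not LeRobot's or pi0.5's actual parameter names.

```python
# Hypothetical sketch: freeze everything except parameters whose names mark
# them as part of the action expert. Names below are made up.
def freeze_all_but_action_expert(param_names, expert_prefix="action_expert."):
    """Return {name: requires_grad} with only the expert left trainable."""
    return {name: name.startswith(expert_prefix) for name in param_names}

params = [
    "vlm_backbone.layer0.weight",
    "vlm_backbone.layer1.weight",
    "action_expert.proj.weight",
    "action_expert.head.bias",
]
trainable = freeze_all_but_action_expert(params)
print(sum(trainable.values()))  # 2 parameters remain trainable
```

The tweet's point is exactly that a naive freeze like this is not sufficient for pi0.5: Knowledge Insulation additionally requires stopping the expert's gradients from flowing through the VLM pathway during training, which a name-based freeze alone does not do.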
Sentient Car@sentientcar·
@zephyr_z9 Is there an online version of this? Always love to hear Noam talk!

Zephyr@zephyr_z9·
Very interesting
Zephyr tweet media

Sentient Car@sentientcar·
@EMostaque I mean, they explicitly mentioned that it is a different "tech tree", so they didn't want to scale it. So: video diffusion vs. incorporating image VQVAE tokens + something else into the AR model to get gpt-image-2. Completely different directions of scaling.

Emad@EMostaque·
Given how good gpt-image-2 is, it’s an absolute puzzle that OpenAI shut down Sora.

Sentient Car@sentientcar·
@Norapom04 Are any of the models actually good at this? Haven't used any for this recently!

Aaron@Norapom04·
When you tell Claude to optimize a Blackwell kernel
Aaron tweet media

Sentient Car@sentientcar·
Unless you work with Django, of course!

Sentient Car@sentientcar·
I love the people still using benchmarks for testing coding ability. Use it for a week. Did you get more done? That is the benchmark now. AND THE ONLY ONE YOU SHOULD BE USING!

Sentient Car@sentientcar·
Best case: let me train a LoRA on a latest WAM in a privacy-preserving way (like how LTM does for video). I pay the GPU hourly cost plus some service charge (which can be high, like 100%), and I can call it at relatively low latency to get action chunks (200-500 ms is fine, as long as it's shorter than the action chunk). Otherwise, let me train a video-model LoRA where I have control over the masking. That lets me take video models to WAMs with minimal architecture change from a video training infra. These can both be a simple API call, .train(mask=x), and FlexAttention gives a good API design for the mask part.

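The FlexAttention-style API the tweet points at expresses a mask as a predicate over (query index, key index) rather than a materialized tensor, which is what would make a `.train(mask=x)` call ergonomic. A pure-Python sketch of that idea, without torch (the `.train(mask=x)` call itself is the author's proposed API, not an existing one):

```python
# Sketch of the FlexAttention mask idea: the mask is a predicate over
# (q_idx, kv_idx), so a hypothetical .train(mask=...) API could accept
# the predicate directly instead of a dense tensor.
def causal_mask(q_idx, kv_idx):
    """Standard causal mask: a query may attend to itself and the past."""
    return kv_idx <= q_idx

def build_mask(mask_mod, q_len, kv_len):
    """Materialize the predicate into a dense boolean grid (for illustration;
    FlexAttention itself keeps it in a sparse block format)."""
    return [[mask_mod(q, k) for k in range(kv_len)] for q in range(q_len)]

mask = build_mask(causal_mask, 4, 4)
for row in mask:
    print("".join("x" if allowed else "." for allowed in row))
```

A video-to-WAM finetune could swap in a different predicate, e.g. letting action-token queries attend only to past video frames, without touching the rest of the training stack.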
Sentient Car@sentientcar·
Does anyone know a company for video-model finetuning, like Tinker is for LLMs? I know @fal, but they don't have any ability to train world action models, for example. I would assume it would be trivial, as world-action-model finetuning is the same as video + audio model finetuning, which they do!

Sentient Car@sentientcar·
@ryanmhickman Yeah. Given how difficult it is to make it work for each use case, I think there will be a lot more small winners.

Sentient Car@sentientcar·
@gokulr In the extreme case the trend might even be the opposite, i.e. Simpson's paradox.
Sentient Car tweet media

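Simpson's paradox is easy to show with a toy example: every segment improves, yet the pooled number gets worse, because the mix of segments shifts. The numbers below are made up purely to exhibit the reversal.

```python
# Toy (made-up) numbers showing Simpson's paradox: each segment's
# conversion rate goes UP from A to B, yet the pooled rate goes DOWN.
segments = {
    # segment: (A_conversions, A_total, B_conversions, B_total)
    "enterprise": (90, 100, 19, 20),   # A: 90% -> B: 95%
    "smb":        (10, 20, 55, 100),   # A: 50% -> B: 55%
}

def rate(conv, total):
    return conv / total

for name, (ac, at, bc, bt) in segments.items():
    assert rate(bc, bt) > rate(ac, at)  # every segment improves in B

a_conv = sum(s[0] for s in segments.values())
a_tot  = sum(s[1] for s in segments.values())
b_conv = sum(s[2] for s in segments.values())
b_tot  = sum(s[3] for s in segments.values())
print(round(rate(a_conv, a_tot), 3), round(rate(b_conv, b_tot), 3))  # 0.833 0.617
```

The pooled rate drops from ~83% to ~62% even though both segments individually got better, which is exactly why the reply says the aggregate trend can be the opposite of every per-segment trend.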
Gokul Rajaram@gokulr·
SEGMENT, ALWAYS SEGMENT

Most confounding business problems have the same root cause: you haven't segmented your customers. You look at the top-line number. It's flat, or weird, or inconsistent with what your gut tells you. You poke at it and you can't figure out why. The answer is almost always that you're staring at an average that's hiding two or three very different stories.

A few places this shows up:

1. When your high-level metrics look wonky or divergent, break them out by segment. A flat retention curve often hides one cohort churning out violently and another expanding aggressively. A "meh" NPS usually has one segment of fanatics and one segment of detractors cancelling each other out. The average is a lie. The segments are the truth.

2. When your product is trying to be everything to everyone, you need to tailor it per segment. If your roadmap has SMB founders, mid-market IT buyers, and Fortune 500 procurement all fighting for features in the same backlog, that's three products in a trench coat pretending to be one. Pick the segment you're actually building for, and ship accordingly.

3. When your pricing or positioning feels wrong no matter where you set it, it's because one SKU or pitch is spanning segments with wildly different needs or willingness to pay. Enterprise will pay 10x what a startup will for the exact same thing. A single price point either leaves money on the table at the top or closes the door at the bottom. Segment the packaging. Segment the price.

The pattern holds every time. Whenever a business problem is hard to reason about, break the population into segments and look again. Nine times out of ten, the fog lifts. Importantly, you don't need to use standard gender or demographic segments. You can build your own! (And AI is a superpower here.)

One of the best segmentations in real life was done by @davidweiden at TellMe Networks in the early 2000s. TellMe was selling phone automation software into financial services: a half-billion dollar market, and they had almost no traction. David built a custom segmentation framework called Rifle, which scored every prospect on five weighted criteria. Where the customer was in their buying cycle (engage before the RFP, not after). Whether their long-distance carrier was compatible with TellMe's deployment model. Three more criteria with explicit weightings, including negative scores that disqualified prospects outright.

The whole company aligned on the scoring. Sales stopped chasing bad-fit accounts. Product stopped building features for customers who would never close. Marketing stopped spraying the market. Over two years, Rifle drove $20M in ARR inside the qualified segment and took TellMe from a loss to a profit. They literally would have failed without the segmentation.

Founders: when a metric confuses you, when your product feels scattered, when your sales pitch or pricing won't land, segment. Segment, always segment.

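The Rifle framework described above (weighted criteria plus outright disqualifiers) is easy to sketch. The criteria names, weights, and example prospects below are invented for illustration; the thread only names two of the five real criteria and gives no weights.

```python
# Hypothetical sketch of a Rifle-style prospect scorer: sum the weights of
# satisfied criteria, with a large negative weight acting as a disqualifier.
CRITERIA = {
    "pre_rfp":            3,   # engaged before the RFP was issued
    "carrier_compatible": 2,   # long-distance carrier fits deployment model
    "budget_confirmed":   2,   # invented criterion
    "exec_sponsor":       1,   # invented criterion
    "competitor_locked": -5,   # negative score: disqualifies outright
}

def score(prospect):
    """Sum the weights of the criteria a prospect satisfies; a negative
    total means the prospect is disqualified."""
    return sum(w for name, w in CRITERIA.items() if prospect.get(name))

good = {"pre_rfp": True, "carrier_compatible": True, "exec_sponsor": True}
bad  = {"pre_rfp": True, "competitor_locked": True}

print(score(good), score(bad))  # 6 -2
```

The point of the scheme is operational, not mathematical: once sales, product, and marketing all filter on the same score, bad-fit prospects stop consuming the whole company's attention.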
Sentient Car@sentientcar·
The most surprising lesson of moving from software to hardware has been how much waiting around there is for repair parts, quotes, etc. Really makes you realize why the GDP growth rate doesn't go to infinity in the singularity! @tylercowen proven right as usual!