Yash
@coldifl
Co-founder Intelligence Factory (YC P26). Tech/Robots/NFL
San Francisco, CA · Joined April 2026
65 Following · 8 Followers
13 posts
Yash @coldifl
This is a great watch! Scaling data (what type of data?) and building the infrastructure required for it is one of the biggest bottlenecks in robotics. Extra cool that egoscale was one of the first papers that validated our thesis. youtu.be/3Y8aq_ofEVs?si…
[YouTube video]
0 replies · 0 reposts · 0 likes · 24 views
Yash @coldifl
@thejesonlee Would love to join!! Building general-purpose robotics (current YC batch)
0 replies · 0 reposts · 0 likes · 10 views
Jeson Lee @thejesonlee
May 12th in SF: Where the Opportunity Sits in Physical AI, by @join_savant. For our next Choose Good Quests session, we are hosting @bznotes, managing partner at @redglassvc, and @KanuGulati, partner at @khoslaventures, to talk about opportunities in physical AI.

Physical AI is moving fast, but for founders the map is still being drawn. Bilal and Kanu have backed some of the most ambitious companies across AI, robotics, and autonomy. They'll share what they're seeing from the investor seat: where the biggest opportunities are, what makes a strong founding team, how founders should think about wedge, market timing, and defensibility, and what kinds of physical AI companies they'd be excited to fund. Come join us. Link in the thread 🧵
[image]
3 replies · 1 repost · 31 likes · 7.4K views
Yash @coldifl
Awesome thesis! Super excited to launch Intelligence Factory soon
Lukas Ziegler @lukas_m_ziegler

🚨 BREAKING: Genesis AI has just launched GENE-26.5, what the company is calling the first robotic brain to give robots human-level physical manipulation capabilities. The demo video alone is extraordinary: robots cooking 20-step meals, solving Rubik's Cubes mid-air, playing piano, and conducting lab experiments with delicate instrumentation, all with human-level dexterity.

@gs_ai_ has built a dexterous robotic hand that exactly mirrors the human hand, paired with a data collection glove with tactile-sensing electronic skin. When a human wears the glove, every movement maps 1:1 to the robotic hand. This closes the embodiment gap: human skill transfers DIRECTLY to a robot at scale.

The economics are game-changing too. The glove is 100x cheaper than typical options and delivers 5x greater data collection efficiency vs traditional teleoperation. That makes continuous large-scale robotics training viable for the first time.

The company has raised $105M in seed funding backed by Eclipse, Khosla Ventures, and Bpifrance, with Eric Schmidt and Xavier Niel among the strategic angels. And the first general-purpose robot is coming soon.

Co-founded by @zhou_xian_ and @theo_gervet, this is a full-stack robotics company controlling every layer: AI, hardware, simulation, and data. That's a serious moat.

♻️ Join the weekly robotics newsletter, and never miss any news → ziegler.substack.com

0 replies · 0 reposts · 0 likes · 36 views
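The 1:1 glove-to-hand mapping described above is, at its core, pose retargeting. A minimal sketch of the idea, assuming a hypothetical joint layout and sensor ranges (Genesis AI's actual pipeline is not public):

```python
import numpy as np

# Hypothetical setup: the glove reports one flexion angle per human
# finger joint, and the robot hand exposes a matching joint for each.
# A hand that "exactly mirrors" human kinematics means retargeting
# reduces to per-joint calibration rather than full inverse kinematics.
HUMAN_JOINT_RANGE = (0.0, 1.6)   # radians, assumed glove sensor range
ROBOT_JOINT_RANGE = (0.0, 1.6)   # radians, assumed actuator range

def retarget(glove_angles: np.ndarray) -> np.ndarray:
    """Map glove joint angles to robot joint commands.

    With matched kinematics this is just a clamp plus an affine
    rescale per joint; mismatched embodiments would need a full
    optimization-based retargeting step instead.
    """
    lo_h, hi_h = HUMAN_JOINT_RANGE
    lo_r, hi_r = ROBOT_JOINT_RANGE
    normalized = np.clip((glove_angles - lo_h) / (hi_h - lo_h), 0.0, 1.0)
    return lo_r + normalized * (hi_r - lo_r)

# Each glove frame becomes a (state, action) training pair, which is
# why glove capture can beat teleoperation on data throughput.
frame = np.random.uniform(0.0, 1.6, size=20)  # 20 joints, fake reading
print(retarget(frame))
```

Taking the tweet's claims at face value, a glove 100x cheaper than a teleop rig that collects data 5x faster puts the cost per recorded demonstration at roughly 1/500th of teleoperation, which is the "viable at scale" argument.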
Yash @coldifl
@chris_j_paxton @gs_ai_ This makes sense for pre-trained models, but post-training is still unsolved. I imagine that when they begin deploying, they'll hit many edge cases, and the data-flywheel approach takes months to reach the metrics a human would hit.
0 replies · 0 reposts · 1 like · 73 views
Chris Paxton @chris_j_paxton
Pipetting, cracking an egg, using tape -- we have seen these from other teams now, but they're all very bleeding-edge capabilities. Exciting stuff; huge congrats to the @gs_ai_ team.

Another entry into the 'data and good controls is all you need' camp of robotics learning. Interesting that they seem to have moved away from the simulation angle and gone for this glove instead.
Humanoids daily @humanoidsdaily

Genesis AI has exited stealth for real with the announcement of GENE-26.5, a foundation model designed to achieve human-level physical manipulation. The system uses a proprietary dexterous hand and a tactile-sensing glove that is reportedly 100 times cheaper and 5 times more efficient at data collection than legacy teleoperation methods. Watch the system autonomously master tasks ranging from 20-step meal prep and wire harnessing to precision lab experiments and piano performance at 1x speed. 📽️🤖

7 replies · 15 reposts · 137 likes · 14.5K views
Ayman Saleh @sir_aymansaleh
Excited to announce the launch of my company Chronicle Labs (YC P26): the staging environment for enterprise AI agents. Just like trading teams backtest algorithms before deploying capital, companies should be able to backtest agents before deploying them into real workflows. We turn operational history into replayable sandboxes, so teams can test, debug, and safely ship better agents.
36 replies · 19 reposts · 214 likes · 613.9K views
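The backtesting analogy maps naturally onto replaying logged decisions against a candidate agent. A minimal sketch of the pattern, with a made-up log schema and agreement score (not Chronicle Labs' actual API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class LoggedStep:
    """One recorded step of operational history (hypothetical schema)."""
    context: str        # what the agent would have seen at the time
    human_action: str   # what actually happened in the real workflow

def backtest(agent: Callable[[str], str], history: list[LoggedStep]) -> float:
    """Replay history through the agent and score agreement.

    Like backtesting a trading strategy: the agent never touches the
    live workflow, it only re-decides past cases for comparison.
    """
    matches = sum(agent(step.context) == step.human_action for step in history)
    return matches / len(history)

# Usage: a trivial rule-based agent against a two-step synthetic history.
history = [
    LoggedStep("refund request under $50", "approve"),
    LoggedStep("refund request over $500", "escalate"),
]
agent = lambda ctx: "approve" if "under" in ctx else "escalate"
print(f"agreement: {backtest(agent, history):.0%}")
```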
Yash @coldifl
@chris_j_paxton Egocentric data is not enough. It's only one of the modalities; you cannot solve generalized robotics without action and tactile information.
0 replies · 0 reposts · 1 like · 28 views
Chris Paxton @chris_j_paxton
It's not the only thing holding general-purpose robots back! But it's one of the big things. It's also one that, to be honest, seems like it will be solved in short order, given how much egocentric data is being collected.
Rohan Paul @rohanpaul_ai

Robotic data is insanely expensive and brutal to collect. It's the only thing holding back general-purpose robots right now. Figure CEO @adcock_brett: "If we could get a pile of data in the Helix stack, we would solve general robotics right now."

9 replies · 1 repost · 34 likes · 3.9K views
Yash @coldifl
The way we learn any task - by seeing, doing, and feeling - is how we train robots. We're building general intelligence that works on any robot and fits seamlessly within our world. In the coming weeks, I will share more about our approach as we deploy across verticals. (2/2)
0 replies · 0 reposts · 0 likes · 27 views
Yash @coldifl
Intelligence Factory is backed by @ycombinator and @CommaCapital. With hardware converging, intelligence is the biggest bottleneck. Our world was built for humans, and we're giving robots the ability to reason and act like us. (1/2)
[image]
1 reply · 0 reposts · 0 likes · 44 views
Yash @coldifl
@pliang279 Looks sick! We are also developing hardware for tactile data collection, although how to incorporate tactile feedback into current model architectures is still an open question. Would love to chat.
0 replies · 0 reposts · 0 likes · 202 views
Paul Liang @pliang279
Most of today's AI can see the world, but it doesn't **feel** it. Capturing the sense of touch is crucial for dexterous robotic manipulation, user modeling, and understanding physical interactions.

Introducing OpenTouch: bringing full-hand tactile sensing into real-world AI🖐️

OpenTouch is collected in-the-wild using tactile sensing gloves, hand pose tracking gloves, and egocentric glasses. It includes:
• 5 hours of real-world data
• 3 hours of densely annotated contact-rich interactions
• 2,900 curated interaction clips
• across 800 objects, 14 environments, and 29 grasp types

All open at: opentouch-tactile.github.io
10 replies · 37 reposts · 220 likes · 46.4K views
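A dataset like this pairs synchronized per-clip streams: egocentric video, hand pose, and tactile frames. A minimal sketch of working with such data, with assumed array shapes (the real OpenTouch format is documented at opentouch-tactile.github.io):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TouchClip:
    """One interaction clip with synchronized modalities (shapes assumed)."""
    video: np.ndarray      # (T, H, W, 3) egocentric RGB frames
    hand_pose: np.ndarray  # (T, 21, 3) per-frame 3D hand keypoints
    tactile: np.ndarray    # (T, N_taxels) per-frame pressure readings
    grasp_type: str        # one of the 29 annotated grasp types

def contact_frames(clip: TouchClip, threshold: float = 0.1) -> np.ndarray:
    """Indices of frames where any taxel registers contact.

    Tactile signals let you label contact directly instead of
    inferring it from vision, which is the point of the dataset.
    """
    return np.where(clip.tactile.max(axis=1) > threshold)[0]

# Usage on a synthetic clip: 100 frames, 300 taxels.
clip = TouchClip(
    video=np.zeros((100, 224, 224, 3), dtype=np.uint8),
    hand_pose=np.zeros((100, 21, 3)),
    tactile=np.random.rand(100, 300) * 0.2,
    grasp_type="power",
)
print(len(contact_frames(clip)), "frames with contact")
```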
Yash @coldifl
@oyhsu @chris_j_paxton The only certainty is that high-quality data is a defensible moat (even though everyone has a subjective opinion of what counts as high quality). After that, model architecture and hardware are open questions.
0 replies · 0 reposts · 0 likes · 112 views
Oliver Hsu @oyhsu
Robotics is in an interesting place right now where there is space for a variety of methods and approaches, even within the scaling paradigm. The ecosystem is evolving in a Darwinian fashion with regular breakthroughs, pivots, and emergent behaviors. This is a dynamic that rewards adaptability, comprehensiveness, and an exploration-first approach. The thing to do is to do everything. The age of research indeed.
20 replies · 18 reposts · 249 likes · 16.9K views
Yash @coldifl
@xiao_ted @peteflorence Couldn't agree with this more. I think people underestimate how big an infra challenge robotics is, on top of the technical one.
0 replies · 0 reposts · 1 like · 309 views
Ted Xiao @xiao_ted
Robotics pre-training *from scratch* has been a heretical idea for the last two years. That "there's no internet of robotics data" has led to two prevailing conclusions: 1) we need to use pretrained model backbones, and 2) we need to scale robotics data.

The first conclusion in particular is painful. We want the generality and scaling properties that have transformed the rest of AI; but we aren't sure if the designs and models that were appropriate for text, vision, and audio will also be appropriate for robot actions as a first-class modality. So is this trade-off worth it? All leading robotics labs have believed so for the last 2 years. Every serious player has relied on rich pretrained priors resulting from large-scale internet training (via VLMs, LLMs, or video models).

The correctness of the second conclusion seems more obvious, but the execution is not. The devil is in the details, and *what* robot data you scale and *how* you scale it matters immensely.

I really appreciate that from the start, @GeneralistAI has taken an unusually considered and unique approach to both conclusions above. Eschewing pretrained representations and co-designing novel hardware/data collection/model from scratch is not an obvious path to take. This requires immense conviction and necessitates long-term pain tolerance. The team at Generalist has displayed both with clear-eyed thoughtfulness.

Like all things in Physical AI today, there are still so many open research questions. I deeply respect and enjoy the rapid progress that @peteflorence @andyzengineer @coolboi95 @ColinearDevin are making on a very differentiated direction. The GEN-* reports have been some of my favorite recent robotics works, and I eagerly await the next updates! 👏
Pete Florence @peteflorence

x.com/i/article/2041…

6 replies · 19 reposts · 237 likes · 25.7K views