Chase Brignac 🚀
@chasebrignac

9K posts
ex-NASA/Apple/Amazon AGI Lab/YC S22 Founder

1 AU · Joined December 2009
375 Following · 7.5K Followers

Pinned Tweet
Chase Brignac 🚀@chasebrignac·
As 2023 wraps up I think it’s time to look at next year and a little beyond 2024. Let’s ask the most consequential question of our lives: what will AGI literally be? I think we actually know. This timeline fits with the @elonmusk timeline of AGI by 2029.
[image]
1 reply · 0 reposts · 5 likes · 2.5K views
Chase Brignac 🚀 retweeted
Pamela Ortiz@Paamee_Ortiz·
For a few minutes each year, sunlight makes this waterfall in Yosemite look like a river of fire.
195 replies · 8.7K reposts · 76.8K likes · 1.3M views
Chase Brignac 🚀 retweeted
Elon Musk@elonmusk·
We are improving the 0.5T Grok foundation model V8 (public version 4.3) every few days. The 1.5T V9 just finished training (incorrectly called pre-training) and is a major upgrade. Next, we are adding the Cursor data in supplemental training (others call this mid-training), then SFT and RL. About 3 or 4 weeks to release. This will be a banger.
551 replies · 458 reposts · 5.4K likes · 302.1K views
Chase Brignac 🚀 retweeted
Joanne Jang@joannejang·
pov you're a cat that worked on interpretability research in 2020
[image]
4 replies · 5 reposts · 117 likes · 8K views
Chase Brignac 🚀 retweeted
Core Automation@CoreAutoAI·
Deep learning is still not solved
18 replies · 10 reposts · 184 likes · 12.7K views
Chase Brignac 🚀 retweeted
Gabriel Jarrosson@GJarrosson·
A PhD student just got into YC building a spy drone that looks exactly like a bird. Counter-drone systems ignore birds. Too many false positives. So the drone flies completely undetected. To stop it, you'd have to shoot every bird out of the sky.

That's not a product insight. That's first principles thinking at its sharpest.

It reminded me of Elon spotting a toy car with a single-cast chassis. That observation became Tesla's gigacasting advantage. The outsider sees what the expert stopped questioning years ago.

YC is backing more of these founders now. Domain expertise helps. But it's not the ticket. A clear problem, a defensible insight, and proof you've done the work. That's what gets you in.
[image]
62 replies · 21 reposts · 338 likes · 61.9K views
Chase Brignac 🚀 retweeted
Nick Khami@skeptrune·
visual metaphor for the CTO transitioning into a member of technical staff and 50x’ing all the existing ICs with AI
44 replies · 74 reposts · 2.2K likes · 322.6K views
Chase Brignac 🚀 retweeted
Mathelirium@mathelirium·
A Schrödinger quantum wave packet moves into a double barrier, swells inside the resonant chamber, and then tunnels through.
7 replies · 85 reposts · 486 likes · 24.1K views
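The animation described above can be reproduced in a few lines with the split-step Fourier method. This is a minimal 1D sketch (units ℏ = m = 1; grid size, barrier height, and packet parameters are arbitrary choices of mine, not taken from the original video):

```python
import numpy as np

# Minimal 1D split-step Fourier simulation of a Gaussian wave packet
# hitting a double barrier (resonant tunneling), with hbar = m = 1.
N, L = 1024, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)  # angular wavenumbers

# Initial Gaussian packet at x0 moving right with momentum k0
k0, sigma, x0 = 1.0, 5.0, -50.0
psi = np.exp(-(x - x0) ** 2 / (2 * sigma**2) + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # normalize

# Double barrier: two thin walls enclosing a "resonant chamber"
V = np.zeros(N)
V[np.abs(x - 10) < 1.0] = 1.0
V[np.abs(x + 10) < 1.0] = 1.0

dt = 0.05
expV = np.exp(-1j * V * dt / 2)       # half-step in the potential
expK = np.exp(-1j * (k**2 / 2) * dt)  # full step in the kinetic term

for _ in range(2000):  # evolve to t = 100
    psi = expV * np.fft.ifft(expK * np.fft.fft(expV * psi))

# Fraction of probability transmitted past the right barrier
T = np.sum(np.abs(psi[x > 11]) ** 2) * dx
print(f"transmitted fraction ~ {T:.3f}")
```

The split-step scheme is unitary, so the total probability stays 1 while part of the packet tunnels through and the rest reflects.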
Chase Brignac 🚀 retweeted
sasha@lobotosasha·
HAHAHHAHAHAHAHA THE ENDING
667 replies · 8.4K reposts · 69.1K likes · 2.6M views
Chase Brignac 🚀 retweeted
Martin Valigursky@ValigurskyM·
You can now walk through a real-world Gaussian Splat scene in @playcanvas 🚶🕺

New first-person and third-person demos, powered by a brand-new reusable third-person camera controller: mouse orbit, wall-collision avoidance, scroll-wheel zoom, animations & shadow catcher 🧵
23 replies · 54 reposts · 696 likes · 60.2K views
Chase Brignac 🚀 retweeted
Chris Hayduk@ChrisHayduk·
GPT 5.5 is an effective autoresearcher in structural biology! I've had goal mode running for over 150 hours straight, looking for topologically inspired architectural changes to improve the performance of AlphaFold2. Performance is strong and improving!
[image]
34 replies · 107 reposts · 1.1K likes · 82.8K views
Chase Brignac 🚀 retweeted
Ravid Shwartz Ziv@ziv_ravid·
I must admit that 1-2 years ago, I was sure that LLMs would be much better at predicting the future. It makes sense that if you can search and aggregate information from different sources, you can predict events. But so far, all the models have failed quite badly. I'm not sure what the missing parts are here. It might be a good memory system, but it might be something more fundamental, such as a missing internal world model. Anyway, very interesting problem to (try) to solve
Lisan al Gaib@scaling01

new forecasting benchmark: FutureSim

GPT-5.5 performs the best at 25%, but Mythos, Gemini 3.1 Pro and Opus 4.7 are not included. Based on their Brier Skill Score the models don't seem to be much better than just assigning equal probabilities to all outcomes

10 replies · 5 reposts · 47 likes · 12.1K views
Chase Brignac 🚀 retweeted
Arvindh Arun@arvindh__a·
Introducing FutureSim: where we replay a temporal slice of the web and let agents forecast real-world events over time 🔮🌎

FutureSim replays the web day by day. Agents start on Jan 1, 2026 (past their knowledge cutoffs) with date-gated access to real news articles and forecast on real-world events resolving over the next 90 days. Around 244K new articles stream in during the simulation. Agents decide which questions to answer, what to search for, and when to advance to the next day 🤔

We evaluate frontier models in their native harness. GPT 5.5 (Codex) leads at 25% acc, followed by Opus 4.6 (Claude Code) at 20% 📈 Open weight frontier models have a significant gap to catch up, with DeepSeek V4 pro at 13%, GLM 5.1 at 10%, and Qwen3.6 Plus at 5%

On some questions that have a parallel @Polymarket market, we find that GPT 5.5 in our simulation sometimes beats the crowd aggregate, like in the Super Bowl LX ($704M traded) market 💰💸

FutureSim serves as a test bed for evaluating a lot of important agentic capabilities:
> Adaptation: how agents adapt beliefs over time, and handle new incoming information and environment feedback
> Memory: how agents make the best use of external memory to store persistent insights and handle context limitations over a thousand tool calls
> Search: how agents find relevant information over thousands of articles streaming in
> Inference scaling: how agents benefit from scaling inference compute

More cool insights and deep dives in our paper 👇
10 replies · 38 reposts · 276 likes · 67.4K views
Chase Brignac 🚀 retweeted
Elon Musk@elonmusk·
Beware the empathy exploit. Empathy is good and right when thought through (deep), but can be deadly to civilization when simply stimulus-response (shallow). For example, releasing a repeat violent offender may feel good at first (shallow empathy for the criminal), but it is wrong to do so when that person will go on to hurt or murder innocent victims, as there should be deep empathy for future victims.
Gad Saad@GadSaad

Oh my! timesnownews.com/lifestyle/book…

9.2K replies · 30K reposts · 171.7K likes · 30M views
Chase Brignac 🚀 retweeted
David Dobáš@DDobas·
Good news: @nvidia GEAR-SONIC can be quickly finetuned for extreme motions. This took 30 minutes on a 5080 (500 iterations, 4096 parallel envs).
4 replies · 25 reposts · 229 likes · 10.1K views
Chase Brignac 🚀 retweeted
Ilir Aliu@IlirAliu_·
Most dexterous robot demos fail when small errors compound over time. Especially in long-horizon, contact-rich manipulation.

Researchers from ByteDance Seed, Shanghai Jiao Tong University, and The University of Tokyo just released “Hand-in-the-Loop”, a system that lets humans intervene during VLA execution without causing destructive “gesture jumps.” Instead of abruptly switching from policy control to teleoperation, the system blends both together in real time.

- 99.8% reduction in takeover jitter
- 87.5% fewer grasp failures
- 19.1% faster task completion
- Better policy improvement than standard teleoperation data

Instead of forcing the robot hand to match the human hand pose during takeover, HandITL tracks relative fingertip motion while preserving the robot’s existing grasp. That means smoother corrections without destabilizing manipulation.

❗️Shared “copilot” corrections worked better than full human takeover. Keeping the policy active while injecting local corrections produced more stable learning and stronger long-horizon performance.

📍Paper: arxiv.org/abs/2605.15157

——
Weekly robotics and AI insights. Subscribe free: 22astronauts.com
[images]
3 replies · 21 reposts · 116 likes · 8.9K views
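The "copilot" blending idea (keep the policy active, inject the human's relative correction, ramp authority smoothly instead of hard-switching) can be caricatured in a few lines. Everything here — the function names, the exponential ramp, the toy 3-DoF rollout — is an illustrative assumption of mine, not the paper's actual method or API:

```python
import numpy as np

def blended_action(policy_action: np.ndarray,
                   human_delta: np.ndarray,
                   alpha: float) -> np.ndarray:
    """Keep the policy's action and add the human's *relative* motion,
    scaled by authority alpha (0 = pure policy, 1 = full correction)."""
    return policy_action + alpha * human_delta

def ramp(t: float, t_takeover: float, tau: float = 0.5) -> float:
    """Smoothly ramp human authority after takeover at t_takeover,
    avoiding the 'gesture jump' of a hard policy->teleop switch."""
    if t < t_takeover:
        return 0.0
    return 1.0 - np.exp(-(t - t_takeover) / tau)

# Toy rollout: a 3-DoF fingertip target; the human starts nudging +y
# at t = 1.0 while the policy keeps moving +x throughout.
state = np.zeros(3)
for step in range(200):
    t = step * 0.02
    policy_action = np.array([0.01, 0.0, 0.0])
    human_delta = np.array([0.0, 0.005, 0.0])
    state = state + blended_action(policy_action, human_delta, ramp(t, 1.0))
```

The point of the ramp is that the commanded motion is continuous at the takeover instant: the correction starts at zero authority and grows, rather than snapping the robot to the human's absolute pose.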
Chase Brignac 🚀 retweeted
Andrew Carr 🤸@andrew_n_carr·
The term "markerless" gets thrown around a lot in motion capture. What does it mean?

Well...surprise! There are still markers (we put them on you instead of you getting in a suit). When we capture motion from your videos, we still need to estimate important landmarks; we've just built a way to do it on a web camera instead of a $100k mocap stage.

Multi person + scale aware with extremely good camera pose estimation. Try it out and let me know what you think
23 replies · 86 reposts · 840 likes · 70.4K views
Chase Brignac 🚀@chasebrignac·
Eat the rich? No, become the rich. You wanna do something about wealth inequality? Open up the fucking markets. It’s insane my mom couldn’t save $5000, invest in OpenAI, and make millions. No wonder Americans are always losing their shit about the rich getting richer.
Cathie Wood@CathieDWood

For too long, many wealth creation opportunities in innovation have been gated. At @ARKInvest, we believe that should change. The team at @RobinhoodApp agrees. Different structures, same mission: expand access for everyday investors. Thank you @ShivVerma for a wonderful conversation about access to private markets! ARK Venture Fund info: ark-funds.com/funds/arkvx

0 replies · 0 reposts · 0 likes · 90 views
Chase Brignac 🚀 retweeted
Bilawal Sidhu@bilawalsidhu·
Semantically annotating 3D gaussian splats on the fly using gemini 3.1 + sparkjs

1. Load any 3D scene and hit scan
2. Get 2D detections from VLM
3. Cluster outputs & project into 3D world space
4. Save as a persistent 3D semantic layer

Inspired by @alexanderchen's experiments with gemini visual intelligence. Just had to try to lift it from 2D to 3D!
22 replies · 118 reposts · 949 likes · 52K views
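The "project into 3D world space" step above boils down to a pinhole unprojection: take a detection's pixel center, look up its depth, and back-project through the camera pose. A minimal sketch — the intrinsics, depth value, and identity pose are illustrative assumptions (the real pipeline would render depth from the splat scene):

```python
import numpy as np

def unproject(u: float, v: float, depth: float,
              K: np.ndarray, cam_to_world: np.ndarray) -> np.ndarray:
    """Back-project pixel (u, v) at the given depth into world space."""
    # Pixel -> normalized camera-space ray
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    p_cam = ray * depth                              # camera coordinates
    p_world = cam_to_world @ np.append(p_cam, 1.0)   # homogeneous transform
    return p_world[:3]

# Toy setup: 640x480 camera, f = 500 px, identity pose
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
cam_to_world = np.eye(4)

# A detection centered on the principal point, 2 m away, should land
# on the optical axis.
p = unproject(320.0, 240.0, 2.0, K, cam_to_world)
print(p)  # → [0. 0. 2.]
```

Doing this per detection and then clustering nearby 3D points is one plausible reading of steps 2–3 in the thread.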
Chase Brignac 🚀 retweeted
Adithya Murali@Adithya_Murali_·
The much-awaited cuRoboV2 release is finally here and open sourced! From whole-body humanoid IK/trajectory optimization to real-time TSDF mapping, this is the new SOTA for GPU-accelerated motion generation.
Balakumar Sundaralingam@balakumar_

cuRobo is now open source under Apache 2.0. cuRoboV2 adds <1 ms GPU-native TSDF/ESDF semantic mapping, whole-body IK/MPC and trajectory optimization for humanoids, ~50 ms torque-limited planning.

Report: arxiv.org/abs/2603.05493
Code: nvlabs.github.io/curobo

2 replies · 11 reposts · 77 likes · 9.8K views
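For context on what an IK layer like this is solving: given a target pose, iteratively adjust joint angles until the end effector reaches it. Here is a toy damped-least-squares loop for a 2-link planar arm — my own minimal example of the generic technique, not cuRobo's GPU-accelerated, whole-body implementation:

```python
import numpy as np

L1, L2 = 1.0, 1.0  # link lengths of a toy 2-link planar arm

def fk(q: np.ndarray) -> np.ndarray:
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    return np.array([
        L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
        L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1]),
    ])

def jacobian(q: np.ndarray) -> np.ndarray:
    """Analytic 2x2 Jacobian of fk with respect to q."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def solve_ik(target, q0, iters=200, damping=1e-2, max_step=0.2):
    """Damped least-squares IK with a step-size clip for stability."""
    q = np.array(q0, float)
    for _ in range(iters):
        err = target - fk(q)
        J = jacobian(q)
        dq = np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ err)
        scale = min(1.0, max_step / (np.linalg.norm(dq) + 1e-12))
        q = q + scale * dq
    return q

q = solve_ik(np.array([1.2, 0.8]), q0=[0.2, 1.0])
print(fk(q))  # close to the target [1.2, 0.8]
```

Whole-body humanoid IK is the same idea scaled up: many more joints, extra cost terms (limits, collisions, torques), and thousands of such problems solved in parallel on the GPU.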
Chase Brignac 🚀 retweeted
Will Eastcott@willeastcott·
Some say you can't use 3D Gaussian splats to make videogames. Oh yeah? Put a few zombies in this lab! 🧟‍♂️🧟‍♀️ SuperSplat link 👇
47 replies · 52 reposts · 819 likes · 65.1K views