Ray_Dimension

134 posts

@dlfpdl

23 y/o 🇰🇷🇺🇸 3x founder since high school 📈 AI engineer 🌠 Co-founder & CEO at Dimension. Customers: Samsung, Hyundai, and more

Joined May 2016
413 Following · 150 Followers
Pinned Tweet
Ray_Dimension@dlfpdl·
The missing piece of AGI is memory. So we built memory that compounds. Available now: dimension.company
14 replies · 30 reposts · 67 likes · 17.5K views
Joshua Park@JoshuaIPark·
@dlfpdl Can't wait for the era where I can vibe code both hardware and software.
1 reply · 0 reposts · 0 likes · 24 views
Ray_Dimension@dlfpdl·
Asked CoBrA to generate and simulate a robotic arm assembly. AI-native engineering workflows are coming faster than people realize.
3 replies · 2 reposts · 6 likes · 64 views
Philipp Berner@philippberner·
@dlfpdl I'm putting all my effort into that and guardrails. What do you use for memory?
1 reply · 0 reposts · 0 likes · 12 views
Ray_Dimension@dlfpdl·
Once agents remember, workflows become compounding systems instead of isolated prompts.
0 replies · 0 reposts · 3 likes · 41 views
Ray_Dimension@dlfpdl·
GPT-5.5-Pro + CoBrA generated a functional turbocharger assembly. This is still early.
1 reply · 6 reposts · 11 likes · 131 views
Ray_Dimension reposted
Ray_Dimension@dlfpdl·
The next generation of CAD software won’t just be used — it’ll collaborate with you. AI should amplify human creativity, not replace it. Welcome to @dimensionagent
4 replies · 7 reposts · 13 likes · 601 views
Ray_Dimension@dlfpdl·
The interface after chat is execution. The interface after execution is memory.
0 replies · 1 repost · 4 likes · 38 views
Ray_Dimension@dlfpdl·
Asked CoBrA to generate a quadrotor drone assembly from a single prompt. Persistent agents are going to change engineering workflows forever.
2 replies · 4 reposts · 11 likes · 109 views
Ray_Dimension@dlfpdl·
Most AI agents fail for the same reason: they wake up every morning with amnesia. Memory changes the entire architecture.
0 replies · 1 repost · 7 likes · 75 views
Igor Kulakov@ihorbeaver·
We have news! We created a new robotics model called Loop Model 1. On the zip-tie insertion task, it achieves 20x more throughput per unit of data than "Pi06 + RLT" from Physical Intelligence, a top model for such tasks. It’s the missing piece that makes MicroFactory work, because now deployment becomes so simple and fast that our users can do it themselves.
21 replies · 41 reposts · 422 likes · 72.3K views
Charly Wargnier@DataChaz·
🚨 Karpathy was right. He warned that 90% of AI advice dies in 6 months. Spoiler: most tools won't even survive 90 days. This guy is literally giving away the exact 2026 playbook for AI agents. He covers exactly what to learn, build, and ignore entirely 👀 ↓ read this today
Rohit@rohit4verse

x.com/i/article/2048…

36 replies · 120 reposts · 839 likes · 188.1K views
Ray_Dimension@dlfpdl·
@karpathy As interfaces evolve, memory becomes the foundation underneath all of them.
0 replies · 0 reposts · 0 likes · 19 views
Andrej Karpathy@karpathy·
This works really well btw: at the end of your query, ask your LLM to "structure your response as HTML", then view the generated file in your browser. I've also had some success asking the LLM to present its output as slideshows, etc.

More generally, imo audio is the human-preferred input to AIs, but vision (images/animations/video) is the preferred output from them. Around a third of our brains are a massively parallel processor dedicated to vision; it is the 10-lane superhighway of information into the brain. As AI improves, I think we'll see a progression that takes advantage:

1) raw text (hard/effortful to read)
2) markdown (bold, italic, headings, tables, a bit easier on the eyes) <-- current default
3) HTML (still procedural with underlying code, but a lot more flexibility on the graphics, layout, even interactivity) <-- early but forming a new good default
...4, 5, 6, ... n) interactive neural videos/simulations

Imo the extrapolation (though the technology doesn't exist just yet) ends in some kind of interactive videos generated directly by a diffusion neural net. Many open questions as to how exact/procedural "Software 1.0" artifacts (e.g. interactive simulations) may be woven together with neural artifacts (diffusion grids), but generally something in the direction of the recently viral x.com/zan2434/status…

There are also improvements necessary and pending at the input. Neither audio nor text nor video alone is enough; e.g. I feel a need to point/gesture to things on the screen, similar to all the things you would do with a person physically next to you and your computer screen.

TLDR: The input/output mind meld between humans and AIs is ongoing, and there is a lot of work to do and significant progress to be made, way before jumping all the way into Neuralink-esque BCIs and all that. For what it's worth, at the current stage, hot tip: try asking for HTML.
Thariq@trq212

x.com/i/article/2052…

878 replies · 1.8K reposts · 17.3K likes · 2.8M views
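Karpathy's tip above is easy to wire into any scripted LLM workflow. A minimal sketch, assuming you already have some client call for your model (the commented-out `send_to_llm` is a placeholder, not a real API): append the instruction to the prompt, write the reply to a file, and open it in the browser.

```python
import pathlib
import tempfile
import webbrowser

# The instruction Karpathy suggests appending to a query.
HTML_SUFFIX = "\n\nStructure your response as HTML."


def build_html_query(prompt: str) -> str:
    """Append the HTML instruction to an ordinary prompt so the
    model emits a standalone HTML document instead of markdown."""
    return prompt.rstrip() + HTML_SUFFIX


def view_response(html_text: str) -> str:
    """Write the model's HTML reply to a temp file, open it in the
    default browser, and return the file path."""
    with tempfile.NamedTemporaryFile(
        mode="w", suffix=".html", delete=False, encoding="utf-8"
    ) as f:
        f.write(html_text)
        path = f.name
    webbrowser.open(pathlib.Path(path).as_uri())
    return path


# send_to_llm stands in for whichever client you use (OpenAI,
# Anthropic, a local model) and is not defined here:
# html = send_to_llm(build_html_query("Explain TCP slow start"))
# view_response(html)
```

The same pattern extends to the slideshow variant he mentions: only the appended instruction changes.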
Ray_Dimension@dlfpdl·
Asked GPT-5.5-Pro + CoBrA to generate a fully parametric mechanical keyboard. AI-generated CAD is getting interesting.
2 replies · 4 reposts · 9 likes · 167 views
Ray_Dimension@dlfpdl·
The most powerful use of AI isn’t replacing human creativity. It’s amplifying it.
0 replies · 2 reposts · 5 likes · 91 views