David Mack

1.2K posts

@DavidHHMack

Robotics AI @ultraroboticsco, @SketchDeck Co-founder (YCW14, exited). Writing: https://t.co/hp3zs2i9gw

Truckee, CA · Joined June 2012
200 Following · 599 Followers
David Mack@DavidHHMack·
@benjamin_bolte Most of our large mature model markets have many open and closed source players (e.g. LLMs: OpenAI, Anthropic, Qwen, Llama, etc.)
0 replies · 0 reposts · 3 likes · 943 views
Benjamin Bolte@benjamin_bolte·
I mean, I know this is just some astroturfing thing and I should just ignore it. But seriously, don't fall for it anon, you're gonna get rugged. The day Alibaba or Minimax or whoever open-sources their video action model, Pi will fade into obscurity and everyone will collectively remember that startups are supposed to try and make money.

I would be shocked if Alibaba doesn't already have a Tesla / xAI-level real-time video model release planned for the next 12 months. I have multiple friends at Pi, they're super smart and hard-working. And they've done great with their secondaries. I still cannot fathom how someone can look at this situation and not see the glaring sectoral risk. You're taking business and machine learning advice from the geniuses behind Everyday Robots.

Advice for anyone trying to invest in robotics: just buy Unitree on Hiive, it's still a huge discount and it's an objectively great business.

Write-up on Unitree's IPO filing: therobotreport.com/unitree-ipo-sh…

Key details:
- 60% (!) gross margins
- 300% YoY growth, $250m in revenue
- Humanoids at > 50% of core revenue
Y Combinator@ycombinator

Physical Intelligence (@physical_int) is building a foundation model that can control any robot to do any task — what the team describes as the GPT moment for robotics. The company's cross-embodiment approach trains across many different robot platforms, and recent results show tasks being performed zero-shot that last year required hundreds of hours of data collection.

In this episode of the @LightconePod, co-founder Quan Vuong (@QuanVng) sat down with @garrytan, @snowmaker, @sdianahu, and @harjtaggar to talk about why robotics is finally ready for its scaling moment, how PI runs its models in the cloud rather than on-device, and the playbook for what Quan sees as a Cambrian explosion of vertical robotics companies.

00:00 — Robotics just got cheaper
00:41 — The GPT moment for robotics
02:24 — Why robots didn’t work before
05:30 — The breakthrough that changed everything
09:12 — The data problem
13:33 — Robots learning without data
15:05 — Robots folding laundry (for real)
22:18 — From engineering problem → ops problem
29:12 — The startup playbook
38:46 — Thousands of robotics startups are coming

21 replies · 9 reposts · 225 likes · 48.4K views
David Mack reposted
Ultra@Ultraroboticsco·
A closer look at Operator. It reaches across the entire work cell, up to 10 feet high and all the way down to the floor. Operator handles bags, mailers, and boxes with the dexterity to adapt to an endless variety of items on the fly. [1/4]
7 replies · 12 reposts · 102 likes · 10.4K views
David Mack@DavidHHMack·
🥰thanks for the @Ultraroboticsco shout-out
Y Combinator@ycombinator
(quoted post, duplicated above)

0 replies · 0 reposts · 0 likes · 59 views
David Mack reposted
Andrew Jefferson@EastlondonDev·
Presenting Meridian: a line to connect deterministic compute and language model AI.

From Neural Turing Machines and Differentiable Transformers to The Neural Computer, there’s a rich history of trying to combine traditional deterministic computation with the wildly different architecture of Artificial Intelligence.

I’ve spent the last 4 weeks creating a single neural network that has the combined capabilities of a 4B-param language model and a deterministic computation engine based on WebAssembly. It allows the AI deterministic integer computations up to 2^32, control flow (while loops and if statements) and a basic filesystem - all implemented as part of the transformer neural network, no external tool calls.

With this architecture adding fewer than 1 million parameters to an existing 4B-param language model, I can take it from <20% accuracy on arithmetic with 4-digit numbers to 100% accuracy on 4-digit numbers and 99% accuracy on arithmetic up to 2^32, without adversely affecting the language model’s performance on non-mathematical tasks. The combined model can precisely execute a range of algorithms, including checking a number for primality, finding the GCD of two integers, and sorting arrays.
[image attached]
15 replies · 42 reposts · 188 likes · 35.8K views
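To give a concrete sense of what "deterministic integer computations up to 2^32" means, here is a toy WASM-style stack machine in plain Python. This is an illustrative stand-in, not Meridian's actual mechanism (which runs these ops as part of the transformer itself); the instruction names and encoding are assumptions.

```python
# Toy WASM-style stack machine: exact 32-bit integer arithmetic,
# the kind of computation the post says was fused into the network.
# Illustrative analogue only - Meridian implements this neurally.

MASK = 2**32 - 1  # wrap arithmetic at 2^32, as in the post

def run(program, stack=None):
    """Execute a list of (op, *args) instructions against an integer stack."""
    stack = list(stack or [])
    for op, *args in program:
        if op == "push":
            stack.append(args[0] & MASK)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append((a + b) & MASK)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append((a * b) & MASK)
        elif op == "mod":
            b, a = stack.pop(), stack.pop()
            stack.append(a % b)
        else:
            raise ValueError(f"unknown op {op}")
    return stack

# Exact 4-digit multiplication, where the post says a plain 4B LM
# scores under 20%:
result = run([("push", 9876), ("push", 5432), ("mul",)])
```

Because every op is ordinary integer arithmetic masked to 32 bits, the result is exact by construction rather than approximated by learned weights, which is the whole point of routing these tokens to a deterministic engine.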
Can Vardar@icanvardar·
why does it feel like everyone is just building the same thing right now
238 replies · 7 reposts · 257 likes · 22.9K views
David Mack@DavidHHMack·
@_joe_harris_ We definitely generate huge amounts of data and have very early stage infrastructure, but that doesn’t stop us training on all of the data we want to
1 reply · 0 reposts · 0 likes · 288 views
Joe Harris@_joe_harris_·
Every robotics company we work with has more data than they can use and less infrastructure than they need
13 replies · 12 reposts · 151 likes · 47.3K views
Siddarth Venkatraman@siddarthv66·
Robotics will be solved by AGI companies (Ant/OAI/GDM) before robotics companies (PI, Figure, Skild)
30 replies · 5 reposts · 141 likes · 25.4K views
David Mack reposted
Andrew Jefferson@EastlondonDev·
I’ve been working on combining a language model and a basic computer (based on WebAssembly) into a single AI model. One outcome of that is if the model generates programs or compute instructions, I don’t have to do round trips to the CPU, start a process, run the program/calculation and serialize and tokenize it before feeding the answer into the AI to get its response. I can do it in a continuous loop on the GPU. That’s already a pretty interesting performance characteristic for a certain kind of tool use.

BUT I realised I can do something even more interesting. When the machine outputs a compute instruction token, “multiply the two numbers at the top of the stack” for example (that’s a single token), I don’t need to wait for the compute to happen, print a response and enter it into the input before I can generate the response token. I can just start the network generating the next token immediately after the “multiply” token was generated.

Since the stack machine and the language model are part of the same neural network, running on a single GPU, I can start the model generating the next token right away and the language model’s first layers will run in parallel with the multiply computation (or whatever instruction it is). No waiting at all for it to compute. The output of the compute subnetwork goes into the mid layers of the language model, allowing it to steer the next token generation even before it’s been emitted from the GPU and converted to human-readable output.

Concretely, in my setup the neural WASM implementation runs in parallel with the first 10 layers of the language model, and it’s working pretty well.
4 replies · 5 reposts · 30 likes · 2K views
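The latency-hiding idea in the post can be sketched with ordinary threads: kick off the deterministic compute for the emitted instruction token, run the early layers concurrently, and join the result where the mid layers would consume it. In Meridian both halves live in one GPU graph; the thread pool and the stub functions below are purely illustrative assumptions.

```python
# Toy model of overlapping an instruction's compute with the next
# forward pass. Threads stand in for the single-GPU parallelism the
# post describes; the "layer" functions are illustrative stubs.
from concurrent.futures import ThreadPoolExecutor

def execute_instruction(op, a, b):
    # stands in for the neural stack-machine subnetwork
    return (a * b) & (2**32 - 1) if op == "mul" else None

def early_layers(token):
    # stands in for the first ~10 transformer layers
    return f"hidden({token})"

def next_token(hidden, compute_result):
    # mid layers consume the compute result to steer generation
    return f"{hidden}->{compute_result}"

with ThreadPoolExecutor() as pool:
    # start the compute the moment the "mul" token is emitted...
    fut = pool.submit(execute_instruction, "mul", 1234, 5678)
    # ...while the early layers of the next forward pass run in parallel
    hidden = early_layers("mul")
    # join at the mid layers: no serialize/tokenize round trip
    token = next_token(hidden, fut.result())
```

The structural point is that `fut.result()` is only awaited where the mid layers need it, so the compute's latency is hidden behind the early-layer work rather than added to it.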
David Mack@DavidHHMack·
@sama I often think back to that, and think how little I'd get done in a day
0 replies · 0 reposts · 1 like · 21 views
Sam Altman@sama·
I have so much gratitude to people who wrote extremely complex software character-by-character. It already feels difficult to remember how much effort it really took. Thank you for getting us to this point.
4.7K replies · 2.2K reposts · 35.9K likes · 5.6M views
David Mack@DavidHHMack·
@chris_j_paxton I was streaming policy inference to the robot and then the Wi-Fi crapped out. Chaos!
0 replies · 0 reposts · 0 likes · 42 views
David Mack@DavidHHMack·
@toddsaunders I think that smart companies will come up with a more startup-like version of planning, where you make sure that people have really clear high-level goals and a lot of autonomy, and then there are strong enough metrics to cull features
0 replies · 0 reposts · 0 likes · 45 views
Todd Saunders@toddsaunders·
The token cost to build a production feature is now lower than the meeting cost to discuss building that feature.

Let me rephrase. It is literally cheaper to build the thing and see if it works than to have a 30 minute planning meeting about whether you should build it. It’s wild when you think about it.

This completely inverts how you should run a software organization. The planning layer becomes the bottleneck because the building layer is essentially free. The cost of code has dropped to essentially 0. The rational response is to eliminate planning for anything that can be tested empirically. Don’t debate whether a feature will work. Just build it in 2 hours, measure it with a group of customers, and then decide to kill or keep it.

I saw a startup operating this way and their build velocity is up 20x. Decision quality is up because every decision is informed by a real prototype, not a slide deck and an expensive meeting.

We went from “move fast and break things” to “move fast and build everything.” The planning industrial complex is dead. Thank god.
368 replies · 565 reposts · 5.5K likes · 470.3K views
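The headline claim is an arithmetic one, so it can be sanity-checked with a back-of-envelope calculation. Every number below (token volume, token price, salaries, headcount) is an assumed illustration, not a figure from the post:

```python
# Back-of-envelope check of "tokens are cheaper than the meeting".
# All inputs are assumptions chosen for illustration.

tokens_per_feature = 2_000_000       # assumed agent token usage for a small feature
price_per_million = 10.00            # assumed blended $/M tokens
build_cost = tokens_per_feature / 1_000_000 * price_per_million

attendees = 4                        # assumed meeting size
hourly_cost = 150.00                 # assumed loaded cost per engineer-hour
meeting_cost = attendees * hourly_cost * 0.5   # the 30-minute planning meeting
```

Under these assumptions the build costs $20 against a $300 meeting, a 15x gap; the comparison obviously flips if the feature needs far more tokens or the meeting is smaller, which is why the inputs matter more than the slogan.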
David Mack@DavidHHMack·
@andrewchen Spreadsheets are an easy, fast way to make a nice grid
0 replies · 0 reposts · 0 likes · 23 views
andrew chen@andrewchen·
prediction re the end of spreadsheets

AI code gen means that anything that is currently modeled as a spreadsheet is better modeled in code. You get all the advantages of software - libraries, open source, AI, all the complexity and expressiveness.

think about what spreadsheets actually are: they're business logic that's trapped in a grid. Pricing models, financial forecasts, inventory trackers, marketing attribution - these are all fundamentally *programs* that we've been writing in the worst possible IDE. No version control, no testing, no modularity. Just a fragile web of cell references that breaks when someone inserts a row.

The only reason spreadsheets won is that the barrier to writing real software was too high. A finance analyst could learn =VLOOKUP in an afternoon but couldn't learn Python in a month. AI code gen flips that equation completely. Now the same analyst describes what they want in plain English, and gets a real application - with a database, a UI, error handling, the works. The marginal effort to go from "spreadsheet" to "software" just collapsed to near zero.

this is a massive unlock. There are ~1 billion spreadsheet users worldwide. Most of them are building janky software without realizing it. When even 10% of those use cases migrate to actual code, you get an explosion of new micro-applications that look nothing like traditional software. Internal tools that used to live in a shared Google Sheet now become real products. The "shadow IT" spreadsheet that runs half the company's operations finally gets proper infrastructure.

The interesting second-order effect: the spreadsheet was the great equalizer that let non-technical people build things. AI code gen is the *next* great equalizer, but the ceiling is 100x higher. We're about to see what happens when a billion knowledge workers can build real software.
438 replies · 290 reposts · 3.1K likes · 1.3M views
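The =VLOOKUP the post mentions translates directly into the kind of code it argues AI can now generate for the same analyst. A minimal sketch, using a hypothetical price table (the column names and data are invented for illustration):

```python
# =VLOOKUP("B200", table, 2, FALSE) rewritten as ordinary code:
# parse the table once, then do exact-match lookups against a dict.
import csv
import io

price_table = """sku,price
A100,19.99
B200,4.50
C300,7.25"""

# Build the lookup: the spreadsheet's "grid" becomes a data structure.
prices = {row["sku"]: float(row["price"])
          for row in csv.DictReader(io.StringIO(price_table))}

assert prices["B200"] == 4.50       # exact-match lookup
assert prices.get("Z999") is None   # graceful miss instead of #N/A
```

Unlike the cell-reference version, this lookup is testable, versionable, and doesn't break when someone inserts a row, which is the post's point about business logic trapped in a grid.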
David Mack@DavidHHMack·
@samhogan Real-time collaboration systems always have to have a mechanism for dealing with write conflicts. Git does this very well. What if we just made branching and merging faster? We don’t have to have pull requests right now for agents.
0 replies · 0 reposts · 0 likes · 57 views
Sam Hogan 🇺🇸@samhogan·
What if a codebase was actually stored in Postgres and agents directly modified files by reading/writing to the DB?

Code velocity has increased 3-5x. This will undoubtedly continue. PR review has already become a bottleneck for high output teams. A codebase checked out on a filesystem seems like a terrible primitive when you have 10-100-1000 agents writing code. Code is now high velocity data and should be modeled as such.

Bare minimum, we need write-level atomicity and better coordination across agents, better synchronization primitives for subscribing to codebase state changes, and real-time file-level code lint/fmt/review.

The current ~20 year old paradigm of git checkout/branch/push/pr/review/rebase ended Jan 2026. We need an entirely new foundational system for writing code if we’re really going to keep pace with scaling laws.
468 replies · 104 reposts · 2.1K likes · 941.1K views
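The "codebase as database" idea can be sketched in a few lines: files as rows, each write as one transaction, and a per-file version counter that agents could poll or subscribe to for state changes. The post proposes Postgres; sqlite3 is used here only to keep the sketch self-contained, and the schema is an assumption.

```python
# Minimal sketch of a DB-backed codebase: write-level atomicity via
# transactions, plus a version counter for change subscriptions.
# Hypothetical schema; the post's proposal is Postgres, not SQLite.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE files (path TEXT PRIMARY KEY, content TEXT, version INTEGER)"
)

def write_file(path, content):
    # The UPSERT and the version bump commit together: another agent
    # never observes new content with a stale version.
    with db:
        db.execute(
            """INSERT INTO files VALUES (?, ?, 1)
               ON CONFLICT(path) DO UPDATE
               SET content = excluded.content, version = version + 1""",
            (path, content),
        )

write_file("src/main.py", "print('v1')")
write_file("src/main.py", "print('v2')")  # a second agent overwrites atomically
content, version = db.execute(
    "SELECT content, version FROM files WHERE path = ?", ("src/main.py",)
).fetchone()
```

An agent watching for changes would compare `version` against the last value it saw, which is a crude stand-in for the real-time subscription primitives the post asks for.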
David Mack@DavidHHMack·
@oh_that_hat @eonsys What you write is incorrect. The behavior didn’t emerge. This is the equivalent of getting the weights for a model online and then running it.
0 replies · 0 reposts · 0 likes · 89 views
Hattie Zhou@oh_that_hat·
There's a fruit fly walking around right now that was never born.

@eonsys just released a video where they took a real fly's connectome — the wiring diagram of its brain — and simulated it. Dropped it into a virtual body. It started walking. Grooming. Feeding. Doing what flies do.

Nobody taught it to walk. No training data, no gradient descent toward fly-like behavior. This is the opposite of how AI works. They rebuilt the mind from the inside, neuron by neuron, and behavior just... emerged. It's the first time a biological organism has been recreated not by modeling what it does, but by modeling what it is.

A human brain has 6 OOM more neurons. That's a scaling problem, something we've gotten very good at solving. So what happens when we have a working copy of the human mind?
709 replies · 2.4K reposts · 25.4K likes · 9.3M views
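The core idea being argued over, behavior coming from wiring alone with no training signal, can be shown at toy scale: a hand-wired loop of threshold neurons whose connection weights alone produce a steady left/right alternation, a crude gait rhythm. This is only an illustration of the concept, bearing no resemblance to @eonsys's actual fly-connectome simulation; the neuron names and weights are invented.

```python
# Toy "behavior from wiring": mutual inhibition plus fatigue loops
# make two threshold neurons alternate firing, with no learning anywhere.
# Invented mini-connectome for illustration only.
from collections import defaultdict

connectome = {
    ("drive", "left"): 1, ("drive", "right"): 1,          # tonic drive
    ("left", "right"): -2, ("right", "left"): -2,         # mutual inhibition
    ("left", "l_fatigue"): 1, ("l_fatigue", "left"): -2,  # self-limiting loop
    ("right", "r_fatigue"): 1, ("r_fatigue", "right"): -2,
}

def step(state):
    """One synchronous update: a neuron fires iff its weighted input > 0."""
    inputs = defaultdict(int)
    for (pre, post), w in connectome.items():
        inputs[post] += w * state[pre]
    new = {n: int(inputs[n] > 0)
           for n in ("left", "right", "l_fatigue", "r_fatigue")}
    new["drive"] = 1  # tonic drive neuron is always on
    return new

state = {"drive": 1, "left": 1, "right": 0, "l_fatigue": 0, "r_fatigue": 0}
trace = []
for _ in range(30):
    trace.append((state["left"], state["right"]))
    state = step(state)
```

Running it, `trace` shows left and right bursts alternating and never overlapping: a rhythmic pattern that was specified nowhere except in the synapse weights, which is the (much weaker) toy analogue of the claim under dispute.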