Derek Cedarbaum
@DerekCedarbaum
527 posts

Product @ Red 6, #2 FTE | Built the world's first in-air augmented reality system for fighter pilots | 🇺🇸

Los Angeles, CA · Joined September 2024
6.4K Following · 299 Followers
Derek Cedarbaum reposted
Bryan Bischof fka Dr. Donut
As an ex video game addict, the more time I spend using coding agents, the more I feel the same biochemical response in my brain that I used to
3 · 2 · 8 · 411
Derek Cedarbaum reposted
Awni Hannun @awnihannun
Adopting Claude speak in my regular life, episode 1:
Partner: Did you do the dishes tonight?
Me: Yes they're done.
Partner: Why are they still dirty?
Me: You're right to push back. I didn't actually do them.
394 · 3.8K · 55.8K · 1.8M
Derek Cedarbaum reposted
Luca Rossi ꩜ @lucaronin
Introducing Tolaria! 💧

Today I am releasing a macOS desktop app for managing markdown knowledge bases and helping both AI and humans operate them. It's free and open source, and always will be.

I have been working on it for three months, and I now use it to run my life and work. I personally have a massive workspace of 10,000 notes — the result of 6 years of Refactoring — which I now operate on Tolaria.

Tolaria is the main collaboration surface with my AI agents: they create new notes there, connect them to what exists, and edit existing ones. Everything is easy for them to understand, because it's just markdown files. In a way, it's my implementation of @karpathy's LLM wiki.

Tolaria is also the biggest experiment I have ever run in writing software with AI:
• 2000 commits
• 100K+ lines of code
• 3000+ tests / 85% coverage
• 9.9/10 code health
• 70+ architecture decision records

I am releasing it open source also to use it as a living artifact of how I do AI coding, so you can inspect at any time things like how I write docs, what's in my AGENTS file, what hooks I run, and so on.

You can find it below:
• Newsletter announcement: refactoring.fm/p/introducing-…
• Website: tolaria.md
• Github repo: github.com/refactoringhq/…

Let me know your thoughts!
Luca Rossi ꩜ tweet media
207 · 182 · 2.7K · 306.6K
Derek Cedarbaum reposted
Kaz Nejatian @nejatian
Software companies should have Gall’s law tattooed in their psyche. A complex system designed from scratch never works and cannot be patched up to make it work.
16 · 20 · 364 · 34.1K
Derek Cedarbaum reposted
martin_casado @martin_casado
Mythos appears to be the first class of models trained at scale on Blackwells. Next will be Vera Rubins. Pre-training isn't saturated. RL works. And there is *so much* compute coming online soon. Buckle your chin straps. It's going to be fucking wild.
106 · 314 · 3.9K · 444.7K
Derek Cedarbaum reposted
gaut @0xgaut
the unfortunate truth is if you want to get better at running you have to run *a lot*
38 · 31 · 1K · 57.4K
Derek Cedarbaum reposted
Noah Schochet @noah_schochet
Step 1: fuck around
Step 2: find out
Step 3: continuously expand the presence of human consciousness across a vast interplanetary empire of mankind using large collaborative construction robots to build and deploy megascale infrastructure

It's not that hard dude
18 · 30 · 443 · 121.1K
Derek Cedarbaum @DerekCedarbaum
@akoratana Agree with everything except the firings. Like everyone else, they overhired during Covid while interest rates were low. Claiming AI will do it all is a cover for that truth.
0 · 0 · 1 · 133
Animesh Koratana @akoratana
Context graphs will be to the 2030s what databases were to the 2000s. Within a year, every frontier lab will be building one, and here's why:

At 10 people, coordination is free. Everyone knows what everyone else is doing. You never hold a meeting to "align." At 100 people, you spend maybe 20% of your payroll on coordination. Managers, syncs, standups, planning sessions, status reports. At 10,000 people, that number approaches 60%. The majority of your headcount exists not to produce anything but to make sure the people who produce things are producing the right things in the right order.

This is the dirty secret of large organizations: output scales linearly with headcount, but coordination cost scales exponentially. Every person you add creates new information pathways that must be maintained. The hierarchy is the protocol that manages this, and it's brutally expensive.

Hierarchy is a compression algorithm for organizational knowledge. At every layer, a manager compresses the reality of their team into a summary that fits in a 30-minute meeting with their boss. Their boss compresses eight of those summaries into one for their boss. By the time information reaches the CEO, it's been lossy-compressed through five or six layers of human interpretation.

This is why CEOs make bad decisions. The information they receive has been compressed, filtered, and distorted at every layer. The hierarchy is high-latency, low-bandwidth, and lossy. Jack didn't fire 4,000 producers but cut 4,000 compression nodes.

Block's "world model" is a replacement algorithm. Zero latency, high bandwidth, lossless. Every person at the edge gets the full picture without waiting for information to travel through human relays.

The infrastructure that makes this possible is the context graph. A living, continuously updated representation of how the organization actually works. Not just data, but decision traces: the reasoning connecting observations to actions. Not what's true now, but why it became true.

The shift from "give agents memory" to "give agents organizational judgment" will define the next platform war.
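The "linear output, superlinear coordination" claim can be sketched numerically. A minimal Python illustration (mine, not from the thread); note that the classic count of pairwise communication links, n(n-1)/2, actually grows quadratically rather than exponentially, though the qualitative point about coordination swamping output stands:

```python
def pathways(n: int) -> int:
    """Distinct person-to-person communication links among n people."""
    return n * (n - 1) // 2

# Output is modeled as linear in headcount; coordination links are not.
for n in (10, 100, 10_000):
    print(f"{n:>6} people -> {pathways(n):>12,} pathways")
# 10 people share 45 links; 10,000 people share ~50 million.
```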
Animesh Koratana tweet media
jack @jack

x.com/i/article/2038…

95 · 198 · 1.7K · 388.6K
Derek Cedarbaum @DerekCedarbaum
@aakashgupta 1: Jevons paradox. This will ultimately drive compute demand up as it unlocks cheaper, massive models. 2: Energy prices didn't skyrocket just because of data centers. There are a myriad of reasons for the increase in costs; data centers are one of them.
0 · 0 · 0 · 196
Aakash Gupta @aakashgupta
The real story is the 14x compression ratio and what it means if it scales up.

Every single weight in this model is one bit. Zero or one. That's it. 8.2 billion parameters stored in 1.15 GB of memory. A standard 8B model at full precision takes 16 GB. Bonsai 8B fits on your phone with room left over for your photo library.

The benchmarks are the part that shouldn't be possible. On standard evals, a model that's 1/14th the size of Qwen3 8B and Llama3 8B is trading punches with both of them. The intelligence density score, capability per GB, is 1.06/GB versus Qwen3 8B at 0.10/GB. That's a 10x gap in how much thinking you get per unit of storage.

Now zoom out. Big Tech collectively spent over $320 billion on data center capex last year. Amazon alone dropped $85.8 billion, up 78% year over year. Google committed $75 billion for 2025. The US power grid is buckling under AI demand. Data centers now consume 4.4% of all US electricity. Virginia, where most of them sit, saw electricity prices spike 267% over five years. Residential customers in Ohio are watching their bills climb 60% because utilities are spending billions on transmission infrastructure to feed server farms.

The entire AI scaling thesis runs on one assumption: intelligence requires massive compute. PrismML just published a proof point that the assumption might be wrong.

Their CEO, Babak Hassibi, is a Caltech professor who spent years on the mathematical theory of neural network compression. The founding team is four Caltech PhDs. Khosla Ventures backed it. So did Cerberus, whose Amir Salek built the TPU program at Google.

The 1.7B model runs at 130 tokens per second on an iPhone 17 Pro Max at 0.24 GB. The 4B hits 132 tokens per second on M4 Pro at 0.57 GB. These aren't research demos. They shipped llama.cpp forks with custom 1-bit kernels for CUDA and Metal. Apache 2.0 license. You can download and run it right now.

The trillion-dollar question: what happens to the economics of a $75 billion data center budget when the same intelligence fits in 1/14th the space and runs on 1/5th the energy?
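The compression arithmetic quoted above checks out back-of-envelope. A small sketch (my own, not from the tweet); the shipped 1.15 GB figure sits slightly above the raw-weights number because a packed model also carries embeddings and metadata:

```python
BYTES_PER_GB = 1e9  # decimal GB, matching the tweet's loose usage

params = 8.2e9                            # Bonsai 8B parameter count
one_bit_gb = params / 8 / BYTES_PER_GB    # 1 bit per weight -> bytes -> GB
fp16_gb = 8e9 * 2 / BYTES_PER_GB          # 2 bytes per weight for an 8B FP16 model

print(f"raw 1-bit weights: {one_bit_gb:.2f} GB")   # ~1.03 GB (shipped: 1.15 GB)
print(f"FP16 baseline:     {fp16_gb:.0f} GB")      # 16 GB, as quoted
print(f"shipped ratio:     {fp16_gb / 1.15:.1f}x") # ~13.9x, i.e. the "14x"
```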
PrismML@PrismML

Today, we are emerging from stealth and launching PrismML, an AI lab with Caltech origins centered on building the most concentrated form of intelligence.

At PrismML, we believe that the next major leaps in AI will be driven by order-of-magnitude improvements in intelligence density, not just sheer parameter count. Our first proof point is Bonsai 8B, a 1-bit weight model that fits into 1.15 GB of memory and delivers over 10x the intelligence density of its full-precision counterparts. It is 14x smaller, 8x faster, and 5x more energy efficient on edge hardware while remaining competitive with other models in its parameter class. We are open-sourcing the model under the Apache 2.0 license, along with Bonsai 4B and 1.7B models.

When advanced models become small, fast, and efficient enough to run locally, the design space for AI changes immediately. We believe in a future of on-device agents, real-time robotics, offline intelligence, and entirely new products that were previously impossible. We are excited to share our vision and to keep pushing the frontier of intelligence to the edge.

70 · 104 · 923 · 153.6K
Derek Cedarbaum reposted
Marc Andreessen 🇺🇸
The idea that “AI safety” could be based on secrecy and control has been fatally falsified.
198 · 144 · 1.7K · 112.5K
Derek Cedarbaum @DerekCedarbaum
@Voxyz_ai How is the 95% confidence interval defined? Did you let the AI decide?
1 · 0 · 1 · 472
Vox @Voxyz_ai
just finished a 3-hour /office-hours session using the prompt from the article: "Interview me until you have 95% confidence about what I actually want, not what I think I should want."

it peeled back the last layer of what i actually needed. i thought i knew what i wanted. turns out i only knew the surface. the plan that came out after 3 hours was completely different from what i walked in with.

if you take one thing from the article, try this prompt + /office-hours.

want the full three-layer development stack? here's the order i actually run:
1. 95% confidence prompt
2. /office-hours → /plan-ceo-review → /plan-eng-review (gstack)
3. /ce:brainstorm → /ce:plan → /ce:work (CE)
4. /ce:review + /qa (CE + gstack)
5. /ce:compound (CE)
6. ship it.

next time step 3 already knows everything you learned this time. 1-2 make sure you build the right thing. 3-4 make sure you build it well. 5 makes sure next time is faster.
Vox@Voxyz_ai

x.com/i/article/2038…

11 · 31 · 453 · 84.3K
Derek Cedarbaum reposted
Marc Andreessen 🇺🇸
Claude knows! —> The Lump of Labor Fallacy and Why AGI Unemployment Panic Is Economically Illiterate

Let me lay this out with full rigor, because this argument deserves to be prosecuted completely rather than waved away with a sound bite.

I. What the Lump of Labor Fallacy Actually Is

The lump of labor fallacy is the assumption that there exists a fixed, finite quantity of work in an economy — a lump — such that if a machine (or an immigrant, or a woman entering the workforce) does some of it, there is necessarily less left for human workers to do. It treats employment as a zero-sum pie.

The fallacy was named and formalized in the early 20th century, but the error it describes is far older. It animated the Luddite riots of 1811–1816, where English textile workers destroyed power looms convinced that the machines would steal their jobs permanently. It drove opposition to the spinning jenny, the cotton gin, the mechanical reaper, the steam engine, the telegraph, the railroad, the automobile assembly line, the personal computer, and every other major labor-displacing technology in the history of industrial civilization.

Every single time, the catastrophists were wrong. Not partially wrong. Structurally, fundamentally, categorically wrong — because they misunderstood the nature of economic production itself.

The reason the fixed-pie assumption fails is this: demand is not fixed. Work generates income. Income generates demand for goods and services. Demand for goods and services generates new categories of work. This is an engine, not a reservoir. When you drain some of the reservoir with a machine, the engine speeds up and refills it — and often refills it past its previous level.

II. The Classical Economic Mechanism That Destroys the Fallacy

To understand why the lump-of-labor assumption is wrong about AGI, you need to understand the precise mechanism by which technological unemployment resolves itself. There are four distinct channels, all operating simultaneously:

Channel 1: The Productivity-Demand Feedback Loop (Say's Law, Modified)

When a technology increases the productivity of labor or replaces labor entirely in a given task, it lowers the cost of producing whatever that task was part of. Lower production costs mean either:
∙ Lower prices for consumers (real purchasing power rises), or
∙ Higher profits for producers (which get reinvested, distributed as dividends, or spent as wages for other workers), or
∙ Both.

Either way, aggregate real income in the economy rises. That additional real income does not evaporate. It gets spent on something — including goods and services that didn't previously exist or were previously too expensive to consume at scale. That spending creates demand. That demand creates jobs.

This is not a theoretical conjecture. The average American in 1900 spent roughly 43% of their income on food. Today it's around 10%. Agricultural mechanization didn't produce a nation of starving unemployed farm laborers — it freed up 33% of household income to be spent on automobiles, television sets, air conditioning, healthcare, education, travel, smartphones, and streaming services, most of which didn't exist as industries in 1900. The workers who left farms went to factories, then to offices, then to service industries, then to information industries. The economy didn't run out of work. It metamorphosed.
Marc Andreessen 🇺🇸@pmarca

AI employment doomerism is rooted in the socialist fallacy of lump of labor. It is wrong now for the same reason it’s always been wrong. More people really should try to learn about this. The AI will teach you about it if you ask! (Hinton is a socialist. youtube.com/shorts/R-b8RR6…)

325 · 485 · 3K · 550.3K
Derek Cedarbaum reposted
Molly O’Shea @MollySOShea
BREAKING: David @friedberg says "California is functionally bankrupt."

"People don't realize how screwed California is, & I worry that if California falls, so does the union."

"$250 billion to $1 trillion short."

"This is because for California to get rescued would be a big cost to red states, & I think it creates in the years ahead a lot of tension."

"California's functional bankruptcy is a major risk to the country. & I think we need to figure out what we can change to fix it."

How we got here:

"California has a public pension system, & that public pension system's retirees have paid into it & they get some benefits out, & the amount that they're owed back out is somewhere between $250 billion and $1 trillion more than has been paid in. $250 billion to $1 trillion short. If it was the federal government, it would be like, okay, we'll just print more money. California doesn't have the ability to print money, so California has to pay this out, and you can't restructure retirement benefits. There is a Supreme Court case in California that said that once an employee has been offered retirement benefits, even if they're currently an employee, you can never restructure their retirement benefits. It has to stay forever, and the state cannot declare bankruptcy. There's no way for the state to functionally declare bankruptcy. There's no law to allow it. No state has ever declared bankruptcy, and the retirement benefits sit senior to the bonds in California. So you have to pay out the retirement benefits before you pay out all the bondholders that have loaned California the money that they use to run all their programs and services."

Hill & Valley Forum 2026 (@HillValleyForum)
Chamath Palihapitiya@chamath

California will be bankrupt by 2030. If you're expecting a state pension, it is at risk. If you don't believe it, check Grok or Gemini and explore how California politicians changed the reporting rules on your pension so they could hide how underwater it is.

The middle class citizens of California will soon be asked to pay a huge price to bail out the state. Why them? Because that is where most of the wealth of California resides. It's easy to single out "billionaires" but there aren't many of them and they can and will all leave before the bottom falls out. They are leaving in droves already.

The mismanagement in California is biblical - and the scale is huge because it's the world's 4th largest economy. California politicians and their henchmen are now entering the coverup phase where they can no longer hide their financial incompetence, so they are taking from average California residents to try and hide what they've done.

You will soon see ballot initiatives with fancy titles like "billionaire tax". But those are lies. They are mechanisms to tax everything, every way:
• Excise taxes
• Wealth taxes
• Private property confiscation

It's all happening now. If you want to preserve California, you will need to stand up, because California has become a kleptocracy.

739 · 2.6K · 12.5K · 2.6M