broadfield-dev
@broadfield_dev
builder

We’re partnering with the Gates Foundation, committing $200 million in grants, Claude credits, and technical support to programs in global health, life sciences, education, agriculture, and economic mobility. Read more: anthropic.com/news/gates-fou…


🇺🇸🇨🇳 The U.S. has 4,000 data centers, while China has 365. But 24 months ago, AI training required 100 megawatts. Today the minimum is 1 gigawatt. The U.S., Canadian, and Mexican grids can't deliver that. China's can. The AI race was never about who built more, but who built bigger, and right now, America's own power grid is the bottleneck.




Imitation is flattery, but Luel copying Kled’s entire site design (fonts, colors, illustrations, and all) shows an abject lack of taste and ethics. Not who I’d want holding my personal data.





The Codex team is aware of reports of GPT-5.5 performing worse for some users and is investigating. We don't have anything conclusive yet, and systems are healthy, but we will share updates as we go.



People freaking out over my AI spend. What nobody sees: part of what excites me so much about working on OpenClaw is that I'm trying to answer the question: how would we build software in the future if tokens don't matter?

We constantly run ~100 codex instances in the cloud, reviewing every PR and every issue. If a fix lands on main, @clawsweeper will eventually find that six-month-old issue and close it with an exact reference. We run codex on every commit to review for security issues (it's far too easy to miss something). We run codex to de-duplicate issues, find clusters, and send reports on the most pressing ones.

We have agents that can recreate complex setups, spin up ephemeral crabbox.sh machines, log into e.g. Telegram, make a video, and post a before/after of the fix on the PR. There's a codex that watches new issues and, if one fits our documented vision well, automatically creates a PR for it (which another codex then reviews). We have codex running that scans comments for spam and blocks people. We have codex instances running that verify performance benchmarks and report regressions into Discord. We have agents that listen in on our meetings and proactively start work, e.g. creating PRs for new features while we're still discussing them.

We built clawpatch.ai to split all our projects into functional units to review and find bugs and regressions. We do the same split for security with Vercel's deepsec and Codex Security to find regressions and vulnerabilities. All that automation allows us to run this project extremely lean.
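The issue de-duplication step above could be sketched like this. The post doesn't say how the real pipeline matches duplicates (presumably an LLM or embeddings); as a toy stand-in, this greedily clusters issues whose titles share enough words. All names and issue titles here are hypothetical, not actual OpenClaw tooling.

```python
# Toy issue de-duplication: cluster titles by word-overlap (Jaccard)
# similarity. A stand-in for whatever model-based matching the real
# agents use; threshold and titles are made up for illustration.

def tokens(title: str) -> set[str]:
    return set(title.lower().split())

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_duplicates(titles: list[str], threshold: float = 0.5) -> list[list[str]]:
    """Greedy single-pass clustering: each title joins the first
    existing cluster whose representative is similar enough."""
    clusters: list[list[str]] = []
    for t in titles:
        for c in clusters:
            if jaccard(tokens(t), tokens(c[0])) >= threshold:
                c.append(t)
                break
        else:
            clusters.append([t])
    return clusters

issues = [
    "crash on startup when config missing",
    "app crash on startup when config file missing",
    "dark mode toggle broken",
]
clusters = cluster_duplicates(issues)
# The first two issues land in one cluster; the third stands alone.
```

A real version would swap `jaccard` for embedding similarity and post a cross-reference comment instead of just grouping strings, but the clustering skeleton stays the same.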

We've gone even further: Nemotron 3 Super is 120B and pretrained on 25T tokens in NVFP4. Nemotron 3 Ultra is ~500B and also pretrained in NVFP4. Accelerated computing means we rethink every aspect of the AI stack looking for new opportunities to improve efficiency.
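For intuition on what 4-bit float training formats involve, here is a toy sketch of E2M1 quantization with a per-block scale, the general idea behind formats like NVFP4. This is an illustration only, not NVIDIA's actual implementation: real NVFP4 packs two 4-bit values per byte and stores FP8 block scales, among other details.

```python
# Toy 4-bit float (E2M1) quantization with one scale per block.
# An E2M1 value has a sign bit plus these representable magnitudes:
E2M1 = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(block: list[float]) -> tuple[float, list[float]]:
    """Scale the block so its max magnitude maps to 6.0 (the largest
    E2M1 magnitude), then round each element to the nearest code."""
    amax = max(abs(x) for x in block)
    scale = amax / 6.0 if amax > 0 else 1.0
    codes = []
    for x in block:
        mag = min(E2M1, key=lambda c: abs(abs(x) / scale - c))
        codes.append(mag if x >= 0 else -mag)
    return scale, codes

def dequantize_block(scale: float, codes: list[float]) -> list[float]:
    return [scale * c for c in codes]

block = [0.1, -0.4, 1.2, 0.06]
scale, codes = quantize_block(block)
approx = dequantize_block(scale, codes)
# Values near the block max round-trip well; small values like 0.06
# absorb most of the quantization error.
```

The per-block scale is what makes such narrow formats workable: outliers only distort their own block, not the whole tensor.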

270m model finetuned on opus 4.6 reasoning, because



hm, is anthropic deliberately pissing off only the type of user it's happy to lose?


Sort of shocking that the model can understand code files as key-value-paired JSON strings as easily as it understands code as plain text.
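Concretely, "code files as kv-paired JSON strings" could mean something like this: the repository serialized as a single JSON object mapping file paths to file contents, handed to the model as one flat string. The file names and contents here are hypothetical.

```python
# Serialize a (hypothetical) repo as one JSON object of path -> contents.
import json

files = {
    "src/main.py": "def main():\n    print('hello')\n",
    "src/util.py": "def add(a, b):\n    return a + b\n",
}

# json.dumps escapes newlines and quotes, so the whole repo becomes one
# flat string; the model sees '\n' escapes instead of real line breaks.
blob = json.dumps(files, indent=2)

# The mapping round-trips losslessly back to the original files.
restored = json.loads(blob)
```

The surprising part the post is pointing at: the model reads the escaped `\n`-laden JSON values about as fluently as it reads normally formatted source.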
