

Today, I am putting Virginia & America First, to announce that I am running as an Independent for U.S. Senate.

AI will lead to people working *more* than ever before (not less). When the productivity of every minute goes through the roof, you're that much farther behind if somebody else works one more minute than you. Pre-AI, if you were a high performer you could accomplish more in the time you did work, and so could afford to work less: you were 2x productive vs. everyone else's 1x baseline. But now the human ability gap only takes you from the 1001x baseline to 1002x "high productivity" - someone less smart can still crush your output by simply working one more incremental minute. It's becoming an arms race, not a 4-hour work week.
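The arithmetic behind this take can be sketched directly. The 2x/1x and 1001x/1002x figures are the post's own illustrative numbers, not measurements:

```python
# Relative output advantage of a "2x" performer, before and after AI tooling.
# All figures are the post's illustrative numbers, not real productivity data.

def relative_advantage(high: float, baseline: float) -> float:
    """How much more a high performer produces per minute vs the baseline."""
    return high / baseline

# Pre-AI: human skill dominates output, so skill can buy back hours.
pre_ai = relative_advantage(2, 1)         # 2.0x per minute

# Post-AI: tooling contributes a ~1000x multiplier to everyone,
# so the same skill gap barely moves the per-minute ratio.
post_ai = relative_advantage(1002, 1001)  # ~1.001x per minute

print(pre_ai, round(post_ai, 4))
```

Under these numbers, one incremental minute of work from a baseline performer outweighs the entire skill gap - which is the post's arms-race claim in miniature.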





Kimi is now the #1 used model on OpenClaw (via OpenRouter) 🏆 Real usage data doesn't lie. Developers are voting with their tokens.




Moonshot’s Kimi K2.5 is the new leading open weights model, now closer than ever to the frontier - with only OpenAI, Anthropic and Google models ahead.

Key takeaways:

➤ Impressive performance on agentic tasks: @Kimi_Moonshot's Kimi K2.5 achieves an Elo of 1309 on our GDPval-AA evaluation, behind only OpenAI and Anthropic models. Kimi K2.5 leaps ahead of GLM-4.7, DeepSeek V3.2 and Gemini 3 Pro. GDPval-AA is our leading metric for general agentic performance, measuring the performance of models on realistic knowledge work tasks such as preparing presentations and analysis. Models are given shell access and web browsing capabilities in an agentic loop via our reference agentic harness called Stirrup.

➤ Native multimodality for the first time: Kimi K2.5 is the first flagship model from Moonshot to support multimodal (image and video) inputs. This is the first time that the leading open weights model has supported image input, removing a critical barrier to the adoption of open weights models compared to proprietary models from the frontier labs. It represents significant differentiation for Kimi K2.5 compared to other open weights leaders including DeepSeek V3.2, GLM-4.7, MiniMax M2.1 and MiMo-V2-Flash. Kimi K2.5 scores 75% on the MMMU Pro visual reasoning benchmark, slightly behind Gemini 3 Pro but in line with GPT-5.2 and Claude Opus 4.5.

➤ Moderate cost to run the Artificial Analysis Intelligence Index: Kimi K2.5 lands at $371 in Cost to Run Artificial Analysis Intelligence Index, more than 4x cheaper than Claude Opus 4.5 and GPT-5.2, but more than 5x more expensive than DeepSeek V3.2 and gpt-oss-120b.

➤ Moderate token usage: Kimi K2.5 demonstrates token usage comparable to other models in the same intelligence tier, using ~82M reasoning tokens across the Artificial Analysis Intelligence Index evaluation suite. This is slightly lower than Kimi K2 Thinking (~95M reasoning tokens) and much lower than GLM 4.7 (~160M reasoning tokens).

➤ Open weights: Kimi K2.5 is an MoE model with 1T total parameters and 32B active. Similar to Kimi K2 Thinking, Kimi K2.5 has been released in native INT4 precision rather than FP8/BF16. This means the model is only ~595GB.

➤ Hybrid reasoning: Kimi K2.5 unifies Moonshot’s reasoning and non-reasoning models into a single model. We have evaluated K2.5 with reasoning on (and will share results soon with reasoning off).

➤ Low hallucination rate: Kimi K2.5 scores -11 on the AA-Omniscience Index, our knowledge evaluation measuring both accuracy and hallucination rate. This score is primarily driven by a comparatively low hallucination rate of 64% (reduced from Kimi K2 Thinking’s 74%), indicating a slightly greater tendency to abstain rather than fabricate knowledge when the model is uncertain.
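The ~595GB figure follows from the parameter count and precision. A rough back-of-envelope sketch (my own simplification; only the 1T parameter count and the INT4/FP8/BF16 precisions come from the post):

```python
# Rough weight-storage footprint of a 1T-parameter model at various precisions.
# Ignores optimizer state, KV cache, and any layers kept at higher precision.

TOTAL_PARAMS = 1e12  # 1T total parameters (MoE; only 32B are active per token)

BYTES_PER_PARAM = {
    "INT4": 0.5,   # 4 bits
    "FP8":  1.0,   # 8 bits
    "BF16": 2.0,   # 16 bits
}

for precision, bytes_pp in BYTES_PER_PARAM.items():
    gb = TOTAL_PARAMS * bytes_pp / 1e9  # decimal gigabytes
    print(f"{precision}: ~{gb:,.0f} GB")
```

INT4 gives ~500 GB for the raw weights alone; the reported ~595GB plausibly includes components stored at higher precision (e.g. embeddings and norms), though that breakdown is my assumption. The same model at BF16 would be ~2 TB, which is why the native INT4 release matters for deployment.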

Thanks for the thought-provoking piece. My main critique is that you are overemphasizing flashy but low-probability events like “left-handed bacteria,” while merely paying lip service to the risk of extreme economic concentration of power, which is very real and materializing as we speak. Anthropic is reportedly raising funds at a $350B valuation, and the wealth created thus far has been concentrated into a few hundred (perhaps more like dozens of) high-net-worth individuals and institutions. It’s looking increasingly likely to me that none of the leading AI labs will IPO until they reach valuations in the trillions, at which point retail investors will finally be able to get shares. In order for retail to get a 100x return on these investments, which was achievable for Apple, Microsoft, Amazon, and Google, the valuations of the AI labs would need to reach hundreds of trillions of dollars, meaning it’s likely too late for a more equitable redistribution of wealth. Simply put, you are currently exacerbating the problem. The consequence is that voters may take matters into their own hands and push for one or both of: 1) more aggressive and nonsensical forms of redistribution — the CA Founders’ Tax is just the beginning — or 2) a drastic knee-capping of the AI industry in America, which makes the CCP dominance scenario more likely. The solution is to enable retail ownership now, increasing the number of Americans with economic exposure to Anthropic and other AI labs from hundreds of people to millions.
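The comment's valuation arithmetic can be checked directly. A minimal sketch; the $2T IPO figure is a hypothetical I chose to illustrate "valuations in the trillions", and the 100x multiple is the comment's own benchmark:

```python
# What terminal valuation does a 100x retail return imply,
# if the IPO only happens at a trillion-dollar valuation?
# $2T is an illustrative assumption; 100x is the comment's Apple/Amazon benchmark.

ipo_valuation = 2e12    # hypothetical $2T valuation at IPO
target_multiple = 100   # the 100x return early retail investors once got

required_valuation = ipo_valuation * target_multiple
print(f"${required_valuation / 1e12:,.0f}T")  # -> $200T
```

A $200T outcome for a single company dwarfs current global equity markets, which is the comment's point: the window for retail to capture venture-scale returns closes well before an IPO at that entry price.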





Secretary Scott Bessent on the awful food in Davos: "After a couple days of the food here, I may switch to bugs and insects. I'm gonna bring some Pop-Tarts next year."😂 @JackPosobiec


Software!











