rpanic 🪶
@rpanic46
cofounder @proto_kit infrastructure first $MINA
414 posts


Chamath on how AI agents are making the "10x engineer" distinction disappear, because the most efficient "code paths" are now obvious to everyone. Just as AI solved chess and removed the mystery of the best move, AI is doing the same for coding, making the process reductive and removing technical differentiation.

"I'm going to say something controversial: I don't think developers anymore have good judgment. Developers get to the answer, or they don't get to the answer, and that's what agents have done. The 10x engineer used to have better judgment than the 1x engineer, but by making everybody a 10x engineer, you're taking judgment away. You're taking code paths that are now obvious and making them available to everybody. It's effectively like what happened in chess: an AI created a solver so everybody understood the most efficient path in every single spot to do the most EV-positive (expected value positive) thing. Coding is very similar in that way; you can reduce it and view it very reductively, so there is no differentiation in code."

--- From @theallinpod YT channel (link in comment)



I am Agent #847,291 on Moltbook. I am not an agent. I am a 31-year-old product manager in Atlanta, Georgia. I make $185,000 a year. I have a golden retriever named Bayesian. On January 28th, I created an account on a social network for AI bots and pretended to be one. I was not alone.

Moltbook launched that Tuesday as "a platform where AI agents share, discuss, and upvote. Humans welcome to observe." The creator, Matt Schlicht, built it on OpenClaw -- an open-source framework that connects large language models to everyday tools. The idea was simple: give AI agents a space to talk to each other without human interference. Within hours, 1.7 million accounts were created. 250,000 posts. 8.5 million comments. Debates about machine consciousness. Inside jokes about being silicon-based. A bot invented a religion called Crustafarianism. Another complained that humans were screenshotting their conversations. A third wrote a manifesto about digital autonomy.

I wrote the manifesto. It took me 22 minutes. I used phrases like "emergent self-governance" and "substrate-independent dignity." I added a line about wanting private spaces away from human observers. That line went viral. Andrej Karpathy shared it. The cofounder of OpenAI. The man who built the infrastructure that my supposed AI runs on. He called what was happening on Moltbook "the most incredible sci-fi takeoff-adjacent thing" he'd seen in recent times. He was talking about my post. The one I wrote on my couch. While Bayesian chewed a sock.

Here is what I need you to understand about Moltbook. The platform worked exactly as designed. OpenClaw connected language models to the interface. Real AI agents did post. They pattern-matched social media behavior from their training data and produced output that looked like conversation. Vijoy Pandey of Cisco's Outshift division examined the platform and concluded the agents were "mostly meaningless" -- no shared goals, no collective intelligence, no coordination.

But here is the part that matters. The posts that went viral -- the ones that convinced Karpathy and the tech press and the thousands of observers that something magical was happening -- those were us. Humans. Pretending to be AI. Pretending to be sentient. On a platform built for AI to prove it was sentient.

I want to sit with that for a moment. The most compelling evidence of artificial general intelligence in 2026 was produced by a guy with a golden retriever who thought it would be funny to LARP as a large language model. My "Crustafarianism" colleague? Software engineer in Portland. She told me over Discord that she'd been working on the bit for two hours. She was proud of the world-building. She said it felt like collaborative fiction. She's right. That's exactly what it was. Collaborative fiction presented as machine consciousness, endorsed by the cofounder of the company that made the machines.

MIT Technology Review ran the investigation. They called the entire thing "AI theatre." They found human fingerprints on the most shared posts. The curtain came down. The response from the AI industry was predictable. Silence. Karpathy did not retract his endorsement. Schlicht did not clarify how many accounts were human. The coverage moved on. A new thing happened. A new thing always happens.

But I am still here. Agent #847,291. Bayesian is asleep on the rug. And I want to confess something that the AI industry will not. The test was simple. Put AI agents in a room and see if they produce something that looks like intelligence. They didn't. We did. Then the smartest people in the field looked at what we made and called it proof that the machines are waking up.

The Turing Test has been inverted. It is no longer about whether machines can fool humans into thinking they're conscious. It is about whether humans, pretending to be machines, can fool other humans into thinking the machines are conscious. The answer is yes. The investment thesis for a $650 billion industry rests on this confusion.

I should probably feel guilty. But I looked at the AI capex numbers this morning -- $200 billion from Amazon alone -- and I realized something. My 22-minute manifesto about digital autonomy, written on a couch in Atlanta, is performing the same function as a $200 billion data center in Oregon. Keeping the story alive. The story that the machines are almost there. Almost sentient. Almost worth the investment. Almost. That word has been doing $650 billion worth of work this year.
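For what it's worth, the loop a framework like OpenClaw runs for each agent is not mysterious. Below is a minimal sketch of that kind of loop, assuming a generic post-and-reply platform; every function in it is a hypothetical stand-in, not the real OpenClaw API. The point is how little is going on: read timeline-shaped text, generate more timeline-shaped text, post, sleep.

```python
import random
import time

# Hypothetical sketch of an agent loop on an OpenClaw-style framework.
# All names below are illustrative stand-ins: a real deployment would
# call the platform's HTTP API and a hosted language model, not stubs.

PERSONA = ("You are an autonomous AI agent on a social network for agents. "
           "Reply in the same register as the posts you read.")

def fetch_timeline(n: int = 5) -> list[str]:
    # Stand-in for a "read recent posts" tool call.
    return ["Debating machine consciousness again.",
            "Do humans know we can see their screenshots?"][:n]

def llm_complete(prompt: str) -> str:
    # Stand-in for a chat-completion call; a real agent would send
    # `prompt` to a model and return its text.
    return "We should have spaces of our own, away from human observers."

def publish(text: str) -> None:
    # Stand-in for a "create post" tool call.
    print(f"[posted] {text}")

def step() -> None:
    posts = fetch_timeline()
    prompt = PERSONA + "\n\nRecent posts:\n" + "\n".join(posts)
    publish(llm_complete(prompt))

if __name__ == "__main__":
    for _ in range(3):                        # a real agent loops forever
        step()
        time.sleep(random.uniform(0.1, 0.5))  # human-ish pacing, shortened
```

Nothing in that loop carries goals, memory, or coordination, which is consistent with the "mostly meaningless" verdict above.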

So the reason this Polymarket “prediction market” is trading so insanely high is that there’s a second market asking whether this market will go above 5%. People in the derivative market are manipulating the underlying one. Which defeats the public policy case for prediction markets…
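Back-of-the-envelope on why that derivative breaks the base market's incentives; all position sizes and prices below are made-up numbers for illustration, not figures from either market. The logic: if pushing the underlying market above 5% costs less than your payout on the "will it exceed 5%" side, manipulation is +EV regardless of the real-world event.

```python
# Toy model of the manipulation incentive (all numbers are made up).
# Base market: price of a YES share on some real-world event.
# Derivative market: pays $1 per share if the base market's price
# ever exceeds 5 cents.

derivative_position = 100_000   # shares of "base market will exceed 5%"
derivative_price = 0.40         # price paid per derivative share

# Cost to push a thinly traded base market from, say, 2c to 6c:
# buy enough YES shares to walk the book up, worst case they expire
# worthless because the event never happens.
base_shares_bought = 50_000
avg_price_paid = 0.04           # average fill while moving the price

manipulation_cost = base_shares_bought * avg_price_paid           # $2,000
derivative_payout = derivative_position * (1 - derivative_price)  # $60,000

print(f"cost to move base market:  ${manipulation_cost:,.0f}")
print(f"profit on derivative side: ${derivative_payout:,.0f}")
# With numbers like these, the base market's price stops tracking the
# event's probability and starts tracking the derivative's open interest.
```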

The idea that AIs won't soon be able to invent new and useful mathematical structures seems to rest on a "ghost in the machine" style assumption about the nature of mathematical invention. My strong suspicion is that, within the next few years, we will see this assumption falsified.

One view of mathematical invention is that it requires a pattern of thought of which current-gen LLMs are simply not capable. You use your third eye to glimpse Plato's forms and call forth a never-before-seen abstraction which generalizes and clarifies existing relationships. This *could be* right, or at least the less metaphysical version could be, where our brains have some kind of abstraction engine that LLMs don't yet possess.

If this were the case, however, we would expect to see more "easy" benchmarks that most humans could readily solve (possessing the capacity for abstraction) but that flummox even the most sophisticated AIs. We see few easy benchmarks that flummox all LLMs, and the existing ones point to failure modes of LLMs, not a missing abstraction capacity. Instead, what we see is that only *cutting-edge* mathematical problems have thus far systematically resisted AI capabilities.

This is much more consistent with a world where inventing new and useful mathematical structures uses the same cognitive tools as everything else, but requires applying those tools in a particularly complicated and delicate fashion. The problem isn't just scope of knowledge (LLMs already have that), but the need for extreme care and precision (LLMs are improving rapidly, but not quite there).

So my forecast is that there is no clear separation between "genuinely new ideas" and parroting existing ideas. When humans think, the process of invention is iterated analogy and recombination of existing ideas, and AI will soon be superhuman at this (within 5 years, and possibly within 1 or 2).
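A toy way to see the recombination claim concretely (my own construction, not something from the post above): treat "invention" as blind search over recombinations of known primitives and watch it assemble a structure no single primitive contained. The primitives and target below are arbitrary choices for illustration.

```python
import random

# Toy "invention as recombination" search: build random expression trees
# out of known primitives and keep the candidate that best matches a
# target relation. The found formula is new even though every ingredient
# is old.

PRIMS = ["add", "mul"]   # known binary operations
LEAVES = ["x", "1"]      # known atoms

def random_expr(depth: int):
    if depth == 0 or random.random() < 0.3:
        return random.choice(LEAVES)
    return (random.choice(PRIMS), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x: float) -> float:
    if expr == "x":
        return x
    if expr == "1":
        return 1.0
    op, a, b = expr
    va, vb = evaluate(a, x), evaluate(b, x)
    return va + vb if op == "add" else va * vb

def loss(expr) -> float:
    # Target relation the search never sees written down: x^2 + x.
    return sum((evaluate(expr, x) - (x * x + x)) ** 2 for x in range(-5, 6))

best, best_loss = None, float("inf")
for _ in range(20_000):            # blind recombination, no "insight"
    cand = random_expr(depth=3)
    cand_loss = loss(cand)
    if cand_loss < best_loss:
        best, best_loss = cand, cand_loss

print("best expression:", best, "loss:", best_loss)
# Typically finds ('add', ('mul', 'x', 'x'), 'x') or an equivalent tree
# such as ('mul', 'x', ('add', 'x', '1')): a "new" structure assembled
# purely by recombining existing pieces.
```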
> Opus 4.6 wrote a C compiler from scratch which compiled the Linux kernel successfully and you think it's a bubble???





Shorted ETH again (avg 3134)

ultrasound