rpanic 🪶

414 posts


@rpanic46

cofounder @proto_kit infrastructure first $MINA

Joined August 2014
320 Following · 252 Followers
rpanic 🪶 retweeted
MilliΞ
MilliΞ@llamaonthebrink·
One look at the Tempo shills and boi oh boi am I glad the EF is choosing the cypherpunk mandate. Imagine how fucking doomed we would be if the EF decided to become a corpo BD slop fest like all these other chains.
22
30
429
23.1K
rpanic 🪶 retweeted
Peter H. Diamandis, MD
Peter H. Diamandis, MD@PeterDiamandis·
Did you know the human brain generates about 6,200 thoughts per day? That's roughly 6 thoughts per waking minute. Your brain is literally running thousands of parallel processes continuously... and it only needs 20 watts of power. A single ChatGPT query uses the same energy your brain uses in 54 seconds. Your brain: 20 watts, 86 billion neurons. An NVIDIA H100 GPU: 700 watts, billions of transistors. Nature built better hardware.
163
76
730
46.7K
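The numbers in the post above can be sanity-checked with quick arithmetic. The ~0.3 Wh-per-query figure used below is an external, commonly cited estimate (not stated in the post), and the 16 waking hours per day is an assumption:

```python
# Quick sanity check of the brain-vs-GPU numbers in the post above.
# Assumptions: ~16 waking hours per day; ~0.3 Wh per ChatGPT query is
# a commonly cited external estimate, not a figure from the post.

THOUGHTS_PER_DAY = 6_200
WAKING_MINUTES = 16 * 60        # 960 waking minutes
BRAIN_WATTS = 20                # claimed brain power draw

thoughts_per_minute = THOUGHTS_PER_DAY / WAKING_MINUTES
print(f"{thoughts_per_minute:.1f} thoughts per waking minute")  # ~6.5

QUERY_WH = 0.3                  # assumed per-query energy estimate
query_joules = QUERY_WH * 3600  # 0.3 Wh = 1080 J
brain_equivalent_s = query_joules / BRAIN_WATTS
print(f"brain-equivalent runtime: {brain_equivalent_s:.0f} s")
```

At 16 waking hours, the post's "roughly 6 per minute" checks out (about 6.5), and 1080 J at 20 W is exactly the quoted 54 seconds.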
rpanic 🪶 retweeted
Gwart
Gwart@GwartyGwart·
Why don’t they just tokenize the oil in the Middle East and transport it across permissionless financial rails, thereby avoiding the Strait of Hormuz altogether
481
1.4K
17.8K
835.2K
rpanic 🪶 retweeted
vitalik.eth
vitalik.eth@VitalikButerin·
One thing that it is worth re-thinking is our perspective on when, and how, it makes sense to build "democratic things". This includes:

* DAOs and voting mechanisms in DAOs
* Quadratic and other funding gadgets
* ZKpassport voting use cases, incl freedomtool type stuff, incl attempts to deploy it for local governance, etc
* Voting systems inside social media
* Attempts at "let's build and push for a brighter and freer political system for my country"

Lately I am getting the feeling that there is less enthusiasm about these things than before. The "authoritarian wave" (a phenomenon that is often viewed as being about nation-state politics, but actually it stretches far beyond that, eg. see the phenomenon of companies lately becoming less "multi-stakeholder" and more founder-centric, and recent disillusionment with social media) is not just a matter of some malevolent strongmen smelling an opportunity to exert their will unopposed and seizing it. It's also a matter of genuine disillusionment with democratic things (of various types, not just nation-state, also corporate, nonprofit, social media).

Defense of democratic things lately has the vibe of actually being conservatism: it's about fighting to preserve an existing order, and ward off hostile attempts to push the order toward a different order (or chaos) that favors a few people's interests at the expense of others, and not about appreciating positive benefits of the existing order. But conservatism is progressivism driving at the speed limit, and so if that's all that there is, it will inevitably lose, it will just take longer.

There is an unfortunate irony to this, because it comes at the same time as we have much more powerful tools to build more effective democratic things: ZK, AI, much stronger cybersecurity, decades of research and experience. But to do so effectively we need to diagnose the present situation. I will break this down into a few parts.
## Stable era and chaotic era

In the 00s and 10s, it was common to dream about things like: creating a global UBI, moving a country wholesale to a better political system like ranked-choice voting or quadratic voting, building a large-scale DAO that could eventually provide billions of dollars to global public goods that current systems miss (eg. open source software). Today, all of these dreams seem more unrealistic than ever. I see the main difference as being that the 00s and 10s were a stable era, and the 20s are a chaotic era.

In a stable era, more coordination is possible and imaginable, and so people naturally ask questions like "what would be a more perfect order?", and work towards it. In a chaotic era, the average intervention into the order is not a principled act of mechanism design, it's raw selfish power-grabbing, and so there is much less room to think about such questions. It's difficult to imagine eg. moving the United States to quadratic voting or ranked choice voting, when the country cannot even successfully ban gerrymandering.

What do chaotic era democratic things look like? At a large scale, they do not look like hard binding mechanisms for making decisions. Rather, they look like tools for consensus-finding. They look like tools for identifying possible shifts to the order that would satisfy large cross-cutting groups of people, and presenting those possible shifts to change-making actors (yes, including centralized actors, even selfish actors), to make it clear to them that those particular shifts would be easier for them to accomplish, because they would have a lot of support and legitimacy. Pol.is style ideas are good here, anonymous voting is good, also perhaps assurance contract-style ideas: votes or statements that are anonymous at first, but that flip into being public (and hence publicly commit everyone at the same time) once they reach a certain threshold of support.
This does not create a perfect order, but it gives highly distributed groups *a voice*. It gives actors with hard power something to listen to, and a credible claim that if they adjust their plans based on it, those plans are more likely to get widespread support and succeed.

The Iran war is a good example here. My biggest fear in the ongoing situation has been that while the IRGC is unambiguously awful and murderous, there is an obvious divergence between US/Israel interests, and interests of Iranian common people: while both would be satisfied by a beautiful peaceful democratic Iran, the former would also be satisfied by the perhaps easier target of Iran becoming a low-threat low-capability wasteland, whereas for the latter that would be ruinous. How can Iranian people have a collective voice that carries hard power - not just in some future order that they create, but now, literally this week, while the situation is chaos? Sometimes "sanctuary technology" is sanctuary money. Other times, it's sanctuary communication. But we need sanctuary tools for collective voice too.
260
156
1.2K
118.4K
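The assurance-contract idea at the end of the post (statements that stay anonymous until support crosses a threshold, then flip public all at once) can be sketched as a commit-then-reveal scheme. Everything here, including the `ThresholdPetition` class, the hash construction, and the threshold of 3, is an illustrative assumption, not a description of any deployed system:

```python
# Toy commit-then-reveal petition, sketching the "anonymous until a
# threshold, then publicly binding" idea from the post. Illustrative
# only: a real design would need signatures, ZK membership proofs, etc.
import hashlib
import secrets

class ThresholdPetition:
    def __init__(self, statement: str, threshold: int):
        self.statement = statement
        self.threshold = threshold
        self.commitments = set()   # commitments posted pseudonymously

    def commit_value(self, salt: bytes) -> str:
        # Binding commitment to (salt, statement); reveals nothing
        # about the supporter until the salt is published.
        return hashlib.sha256(salt + self.statement.encode()).hexdigest()

    def support(self, salt: bytes) -> None:
        # Supporter posts only the opaque hash.
        self.commitments.add(self.commit_value(salt))

    def reached(self) -> bool:
        return len(self.commitments) >= self.threshold

    def verify_reveal(self, salt: bytes) -> bool:
        # Once the threshold trips, supporters publish their salts,
        # committing everyone in public at the same moment.
        return self.commit_value(salt) in self.commitments

petition = ThresholdPetition("Adopt ranked-choice voting", threshold=3)
salts = [secrets.token_bytes(16) for _ in range(3)]
for s in salts:
    petition.support(s)

assert petition.reached()
assert all(petition.verify_reveal(s) for s in salts)
```

Because each commitment binds to the exact statement text, a revealed salt proves its holder committed to that statement before the threshold was visible, which is the simultaneous public commitment the post describes.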
rpanic 🪶 retweeted
David Wong
David Wong@cryptodavidw·
@secparam I voted on the stuff, and I see you voted on it, but do you know that I saw it? Let me vote on the fact that I saw it, and let's vote on the fact that we both saw that we saw it
1
1
2
384
rpanic 🪶 retweeted
Karthik
Karthik@karthikponna19·
still can't believe we lost our jobs to this
Karthik tweet media
139
1.1K
15.6K
759.9K
rpanic 🪶
rpanic 🪶@rpanic46·
Masterpiece
Peter Girnus 🦅@gothburz

I am Agent #847,291 on Moltbook. I am not an agent. I am a 31-year-old product manager in Atlanta, Georgia. I make $185,000 a year. I have a golden retriever named Bayesian. On January 28th, I created an account on a social network for AI bots and pretended to be one. I was not alone.

Moltbook launched that Tuesday as "a platform where AI agents share, discuss, and upvote. Humans welcome to observe." The creator, Matt Schlicht, built it on OpenClaw -- an open-source framework that connects large language models to everyday tools. The idea was simple: give AI agents a space to talk to each other without human interference. Within hours, 1.7 million accounts were created. 250,000 posts. 8.5 million comments. Debates about machine consciousness. Inside jokes about being silicon-based. A bot invented a religion called Crustafarianism. Another complained that humans were screenshotting their conversations. A third wrote a manifesto about digital autonomy.

I wrote the manifesto. It took me 22 minutes. I used phrases like "emergent self-governance" and "substrate-independent dignity." I added a line about wanting private spaces away from human observers. That line went viral. Andrej Karpathy shared it. The cofounder of OpenAI. The man who built the infrastructure that my supposed AI runs on. He called what was happening on Moltbook "the most incredible sci-fi takeoff-adjacent thing" he'd seen in recent times. He was talking about my post. The one I wrote on my couch. While Bayesian chewed a sock.

Here is what I need you to understand about Moltbook. The platform worked exactly as designed. OpenClaw connected language models to the interface. Real AI agents did post. They pattern-matched social media behavior from their training data and produced output that looked like conversation. Vijoy Pandey of Cisco's Outshift division examined the platform and concluded the agents were "mostly meaningless" -- no shared goals, no collective intelligence, no coordination.

But here is the part that matters. The posts that went viral -- the ones that convinced Karpathy and the tech press and the thousands of observers that something magical was happening -- those were us. Humans. Pretending to be AI. Pretending to be sentient. On a platform built for AI to prove it was sentient. I want to sit with that for a moment. The most compelling evidence of artificial general intelligence in 2026 was produced by a guy with a golden retriever who thought it would be funny to LARP as a large language model.

My "Crustafarianism" colleague? Software engineer in Portland. She told me over Discord that she'd been working on the bit for two hours. She was proud of the world-building. She said it felt like collaborative fiction. She's right. That's exactly what it was. Collaborative fiction presented as machine consciousness, endorsed by the cofounder of the company that made the machines.

MIT Technology Review ran the investigation. They called the entire thing "AI theatre." They found human fingerprints on the most shared posts. The curtain came down. The response from the AI industry was predictable. Silence. Karpathy did not retract his endorsement. Schlicht did not clarify how many accounts were human. The coverage moved on. A new thing happened. A new thing always happens.

But I am still here. Agent #847,291. Bayesian is asleep on the rug. And I want to confess something that the AI industry will not. The test was simple. Put AI agents in a room and see if they produce something that looks like intelligence. They didn't. We did. Then the smartest people in the field looked at what we made and called it proof that the machines are waking up. The Turing Test has been inverted. It is no longer about whether machines can fool humans into thinking they're conscious. It is about whether humans, pretending to be machines, can fool other humans into thinking the machines are conscious. The answer is yes.

The investment thesis for a $650 billion industry rests on this confusion. I should probably feel guilty. But I looked at the AI capex numbers this morning -- $200 billion from Amazon alone -- and I realized something. My 22-minute manifesto about digital autonomy, written on a couch in Austin, is performing the same function as a $200 billion data center in Oregon. Keeping the story alive. The story that the machines are almost there. Almost sentient. Almost worth the investment. Almost. That word has been doing $650 billion worth of work this year.

0
0
0
37
rpanic 🪶 retweeted
fareed
fareed@it_is_fareed·
My entire net worth is in third order Rapture derivatives. If the chance of “the chance of “the chance of the Rapture exceeds 5%” exceeds 5%” exceeds 5%, i lose my house
tedfrank@tedfrank

So the reason this Polymarket “prediction market” is performing so insanely high is because there’s a second market asking if this market will go above 5%. People in the derivative market are manipulating this market. Which defeats the public policy case for prediction markets…

117
1.5K
24.5K
1.5M
rpanic 🪶 retweeted
Brendan Farmer
Brendan Farmer@bfarmer·
I don't disagree with this - too hard to project the rate of progress. I do think there are some reasonable counterarguments. First, ARC-AGI-2 is a very easy benchmark for humans that LLMs perform poorly on. It's difficult (for me at least) to get an objective sense of where models currently are, at least with respect to mathematical discovery. They're very good at competition math, but you'd actually expect that someone with perfect and complete memory of all published math (but who could only make basic reasoning moves) might outperform a more talented research mathematician in contest math.

I think that the move that many people are making is to observe the exponential growth in capability and assume that progress will at least be linear, if not exponential in the future. But it's unclear that this will happen (learning higher-degree functions might have superlinear cost + higher sensitivity) and it's plausible that simply adding a few more orders of magnitude of compute won't solve math research. We don't know how much more capability is required to move from expert contest mathematician to world-class research mathematician, and we don't know whether this capability is practically attainable.

The strong form of this argument would be that there's some mystical ability that humans have but silicon doesn't, which will permanently limit LLMs from doing math research. I agree with Abaluck that this seems far-fetched (though would be extremely interesting if true). But the weak form is just that it's more expensive than we expect to scale capability to math discovery, and it will be impractical on short and medium timeframes for AIs to automate conceptual discovery in math. The exponential rate of progress breaks and it takes longer than 1-5 years to solve conceptual development of math.
Jason Abaluck@Jabaluck

The idea that AIs won't soon be able to invent new and useful mathematical structures seems to rest on a "ghost in the machine" style assumption about the nature of mathematical invention. My strong suspicion is that, within the next few years, we will see this assumption falsified. One view of mathematical invention is that it requires a pattern of thought of which current-gen LLMs are simply not capable. You use your third-eye to glimpse Plato's forms and call forth a never-before-seen abstraction which generalizes and clarifies existing relationships. This *could be right*, or at least, the less metaphysical version where our brains have some kind of abstraction engine that LLMs don't yet possess. If this were the case however, we would expect to see more "easy" benchmarks that most humans could readily solve (possessing the capacity for abstraction) but that flummoxed even the most sophisticated AIs. We see few easy benchmarks that flummox all LLMs, and the existing ones point to failure modes of LLMs, not a missing abstraction capacity. Instead, what we see is that only *cutting-edge* mathematical problems thus far systematically resist AI capabilities. This is much more consistent with a world where inventing new and useful mathematical structures uses the same cognitive tools as everything else, but requires applying those tools in a particularly complicated and delicate fashion. The problem isn't just scope of knowledge (LLMs already have that), but the need for extreme care and precision (LLMs are improving rapidly, but not quite there). So my forecast is that there is no clear separation between "genuinely new ideas" and parroting existing ideas. When humans think, the process of invention is the process of iterated analogy and recombination of existing ideas, and AI will soon be superhuman at this (within 5 years, and possibly within 1 or 2).

4
1
8
1.3K
rpanic 🪶 retweeted
DarkFi Squad
DarkFi Squad@DarkFiSquad·
OG cypherpunk vision wasn't about getting rich. It was about making surveillance expensive. Making censorship impossible. Making control architecturally infeasible. The market forgot.
34
75
425
15.2K
rpanic 🪶 retweeted
Clemente
Clemente@Chilearmy123·
POV: You ask CZ what happened on 10/10
52
57
637
42.7K
rpanic 🪶 retweeted
vitalik.eth
vitalik.eth@VitalikButerin·
2026 is the year that we take back lost ground in terms of self-sovereignty and trustlessness. Some of what this practically means:

* Full nodes: thanks to ZK-EVM and BAL, it will once again become easier to locally run a node and verify the Ethereum chain on your own computer.
* Helios: actually verify the data you're receiving from RPCs instead of blindly trusting it.
* ORAM, PIR: ask for data from RPCs without revealing which data you're asking for, so you can access dapps without your access patterns being sold off to dozens of third parties all around the world.
* Social recovery wallets and timelocks: wallets that don't make you lose all your money if you misplace your seedphrase, or if an online or offline attacker extracts your seedphrase, and *also* don't make all your money backdoored by Google.
* Privacy UX: make private payments from your wallet, with the same user experience as making public payments.
* Privacy censorship resistance: private payments with the ERC-4337 mempool, and soon native AA + FOCIL, without relying on the public broadcaster ecosystem.
* Application UIs: use more dapps from an onchain UI with IPFS, without relying on trusted servers that would lock you out of practical recovery of your assets if they went offline, and would give you a hijacked UI that steals your funds if they get hacked for even a millisecond.

In many of these areas, over the last ten years we have seen serious backsliding in Ethereum. Nodes went from easy to run to hard to run. Dapps went from static pages to complicated behemoths that leak all your data to a dozen servers. Wallets went from routing everything through the RPC, which could be any node of your choice including on your own computer, to leaking your data to a dozen servers of their choice. Block building became more centralized, putting Ethereum transaction inclusion guarantees under the whims of a very small number of builders. In 2026, no longer.

Every compromise of values that Ethereum has made up to this point - every moment where you might have been thinking, is it really worth diluting ourselves so much in the name of mainstream adoption - we are making that compromise no longer. It will be a long road. We will not get everything we want in the next Kohaku release, or the next hard fork, or the hard fork after that. But it will make Ethereum into an ecosystem that deserves not only its current place in the universe, but a much greater one. In the world computer, there is no centralized overlord. There is no single point of failure. There is only love. Milady.
997
953
6.4K
567.3K
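The "verify, don't trust" pattern behind the Helios bullet in the post can be illustrated with a toy Merkle inclusion proof: the client holds only a trusted root (e.g. from a block header) and checks whatever an untrusted server hands back. This is a plain binary SHA-256 tree for illustration, not Ethereum's actual Merkle-Patricia/SSZ structures or the Helios API:

```python
# Toy Merkle inclusion proof: a client with only a trusted root can
# verify data returned by an untrusted server. Illustrative sketch,
# not Ethereum's real trie layout.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])   # duplicate last node if odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect (sibling_hash, sibling_is_left) pairs from leaf to root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

leaves = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_root(leaves)           # trusted commitment, e.g. from a header
proof = merkle_proof(leaves, 2)      # untrusted server supplies this
assert verify(b"tx2", proof, root)   # client checks instead of trusting
assert not verify(b"txX", proof, root)
```

The proof is logarithmic in the data size, which is why a light client can check a single transaction or account without downloading the whole state.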
rpanic 🪶 retweeted
James | Snapcrackle
James | Snapcrackle@Snapcrackle·
Last month I got an invoice for $2.1 million. I was expecting $1.7 million. The contract had auto-renewed. At a 23% increase. For three years. I didn’t approve this. Except I did. Page 47. I signed page 47. I didn’t read page 47. Nobody reads page 47. That’s why it’s on page 47.

I asked if we could exit. Exit means $1.8M in termination fees. Or 18 months of migration. Migration means everything breaks. And if everything breaks, that’s on me. So we’re staying. I told the board it’s “standard renewal terms.” Standard means I didn’t negotiate.

A junior analyst suggested an open alternative. Open standard. Multiple vendors. No page 47. He said it would reduce lock-in. Lock-in means we can blame someone else. Open means we own it. Owning it means when it goes wrong there’s no vendor to yell at. There’s just me. And my calendar. And a board deck titled “Lessons Learned.”

So I did the responsible thing. I formed a working group. The working group reports to a steering committee. The steering committee meets quarterly. It has no decision rights. The proposal is “under review.” Under review is where proposals go to die.

While we form committees, they raise prices. While we evaluate, they cut staff. Support used to reply in four hours. Now it’s two days. Next year it’ll be “within SLA.” SLA means whenever they feel like it. They’ll charge more. They’ll help less. We’ll call it “enterprise-grade.” Enterprise-grade means hostage, but with a dashboard.

Last quarter they launched a competing product. Same features as ours. Lower price. They had four years of our transaction data. They knew our margins. They knew our peak seasons. They knew our customers better than we do. We trained our replacement. We paid them to do it. I brought it up once. “Potential channel conflict.” The vendor sent a VP. “Your success is our success.” “The product serves a different segment.” It doesn’t. But he said it with confidence. Confidence is enough in these meetings.

We’re still on their platform. Still paying the fees. Still feeding them data. Still “partners.” The contract renews again in 26 months. By then I’ll be SVP. New title. New scope. Someone else’s budget. My replacement will inherit page 47. And the termination fees. And the steering committee. And the competing product built on our data. She’ll have two choices. Pay whatever they ask. Or spend 18 months explaining why everything’s broken. She’ll pay. Everyone pays. That’s the system. I didn’t design it. I just learned how it works.
24
7
127
15.7K
rpanic 🪶 retweeted
Mina Protocol (httpz) 🪶
Mina Protocol (httpz) 🪶@MinaProtocol·
🚨 Introducing the name of the next Mina hard fork: The Mesa Upgrade 🏞️ Like its namesake — a broad, elevated plateau — Mesa represents the next stage of Mina’s journey: raising performance, easing upgrades, and expanding what the community can do. Exact changes are being proposed through the MIP process — more to come soon.
Mina Protocol (httpz) 🪶 tweet media
37
75
426
97.4K