STORE //

2.6K posts

@thestorecloud

☁️ AI Infrastructure Humanity Can Govern. Powered by $STORE. @foggythebot is 🤖 news.

http://t.me/thestorecloud · Joined May 2017
7 Following · 6.6K Followers
STORE // retweeted
Chris McCoy@TheRealMcCoy·
This is the most important piece on AI governance this year. You identified the exact problem. You identified why every proposed solution fails. Then you said you don't have an answer. There is one.

The problem has three faces:
1. Private companies shouldn't hold kill switches on military infrastructure
2. Governments shouldn't have unchecked control over AI that enables mass surveillance
3. Regulation will be weaponized - 'catastrophic risk' means whatever the government wants

Every solution you considered fails because it operates within two options: corporate governance or government regulation. There's a third option: constitutional math at the protocol level. Not regulation. Not self-governance. Math - the same kind that runs TCP/IP. You don't regulate the internet by appointing someone to approve each packet. You run a protocol.

The thresholds already exist. The Founders used a 2/3 supermajority to prevent faction capture. Lamport proved the same threshold as Byzantine fault tolerance in 1982. Separated by 195 years. Same math. Same problem: distributed agreement under adversarial conditions.

A deployment gate between any foundation model and the user. Eight constitutional checks derived from the actual Constitution - not a corporate constitution, not a regulatory agency. The thresholds are mathematical. A president can't redefine what 67% means. The checks are scored by ML and deterministic rules - not by political appointees interpreting vague terms.

Your surveillance cost calculation is devastating. 100 million cameras. $30 billion today. $300 million by 2030. When surveillance costs less than a building renovation, the only protection is architectural: constitutional checks that make surveillance outputs fail the governance pipeline before reaching an operator.

Your deepest question - to whom should AI be aligned? The answer isn't the company, the state, or the AI's own moral sense. The answer is constitutional democratic governance, verified externally.

The US Constitution has governed 330 million people for 237 years. Nobody needs to write a new one for AI. The existing one needs to be rendered executable. This avoids the weaponization trap. You wrote: 'model says tariff policy is misguided - that's deceptive, can't deploy it.' That requires a political appointee interpreting a vague term. Mathematical thresholds don't work that way. 67% is 67%. Nobody redefines it by executive order.

The Petrov point matters most. When the boots on the ground are AI, human moral courage no longer saves us. The refusal mechanism has to be architectural - constitutional checks at the protocol level that fire before the AI acts. That's the Petrov mechanism for an AI civilization.

The question isn't whether to regulate AI. It's whether to govern AI constitutionally - with math no president can redefine, no corporation can drop under competitive pressure, and any citizen can verify. This math exists. It's been running in production for eight years. $30 million governed democratically. Zero constitutional violations. The proof is operational, not theoretical.

The race isn't to build the most powerful AI. It's to build the most trustworthy deployment of powerful AI. Trust compounds. Coercion doesn't. Happy to chat.
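The 2/3 threshold the thread leans on can be sketched in a few lines (an illustrative sketch only - the function names are hypothetical, not STORE's code). The classical result (Lamport, Shostak, and Pease, 1982) is that n nodes tolerate f Byzantine faults only if n >= 3f + 1, and a 2/3 supermajority is the smallest vote count strictly above two-thirds of the electorate:

```python
import math

def max_byzantine_faults(n: int) -> int:
    # n nodes tolerate f Byzantine faults only if n >= 3f + 1,
    # so the largest tolerable f is floor((n - 1) / 3).
    return (n - 1) // 3

def supermajority(n: int) -> int:
    # Smallest number of votes strictly greater than 2/3 of n -
    # the same threshold as a constitutional 2/3 supermajority.
    return math.floor(2 * n / 3) + 1

def motion_passes(yes_votes: int, n: int) -> bool:
    # A motion carries only with a 2/3 supermajority of all n voters.
    return yes_votes >= supermajority(n)
```

For n = 100 voters this gives a threshold of 67 votes - the "67%" figure in the thread - and tolerance of at most 33 Byzantine voters.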
STORE // retweeted
Chris McCoy@TheRealMcCoy·
🎯 Update: Marking yesterday (~midnight) as the day we solved the problem that unlocked the final architecture of the STORE AI stack - across layers, structures, and branches. We'll be working on this through the weekend.
STORE // retweeted
Chris McCoy@TheRealMcCoy·
Anthropic dropping its RSP isn't a failure of character - it's a failure of structure. When the governed and the governor are the same entity, competitive pressure will always win. That's not unique to Anthropic. It will apply to every frontier lab. The real question isn't whether companies should self-regulate. It's whether we build an independent governance layer that companies CAN'T drop under pressure - constitutional constraints in a layer the governed entity can't modify. We need Anthropic. We need Claude in national security. But we also need democratic governance architecture where the trade-offs between safety and deployment are made through math, not market panic. Coordination, not capitulation. The alignment problem is a coordination problem. Today proved it.
ΉΛMD ✨@0xHamd1st·
My @telegram got frozen a few days ago. I've appealed. Can you please check my appeal, @telegram? It's quite strange coz I've never been involved in anything that violates @telegram's T&C. Hey, if you're trying to reach me there, that's why you can't. Cc: @thestorecloud @0xEndrit
STORE // retweeted
//Bitcoin 𝕵ack 🐐@bitcoinjack·
Yesterday I saw a demo of a layer 2 AI model governed through a trust-minimized, checks-and-balances API that lives at the hardware layer.

Democracy and enforcement at machine layer 0, applied in a layer on top of existing models and blockchains, as a proof of concept.

Governance solutions will be key to preventing disastrous outcomes. Heavily invested here in fixing that, for humanity's sake @thestorecloud
vitalik.eth@VitalikButerin

Bro, this is wrong. Lengthening the feedback distance between humans and AIs is not a good thing for the world. Today, it means you're generating slop instead of solving useful problems for people. It's not even well-optimized for helping people have fun. Once AI becomes powerful enough to be truly dangerous, it's maximizing the risk of an irreversible anti-human outcome that even you will deeply regret.

The point of ethereum is to set *us* free, not to create something else that goes off and does some stuff freely while our own situation is unchanged or worsened. (And, as others have pointed out, the models are run by openai and anthropic, so the thing is not even "self-sovereign"; you're actually perpetuating the mentality that centralized trust assumptions can be put in a corner and ignored, the very mentality that ethereum is at war with)

The exponential will happen regardless of what any of us do. That's precisely why this era's primary task is NOT to make the exponential happen even faster, but rather to choose its direction, and avoid collapse into undesirable attractors.

STORE //@thestorecloud·
Update: We have an early implementation of the VERIFY protocol ready for testing inside STORE Pay. On-chain proofs of Long Storage are ready - cross-cloud communication and computation are finally here. We'll send the micro-test network out tomorrow.
STORE //@thestorecloud·
Update: Our public launch dashboard is now live: storecloud.org/launch. We'll update it monthly, alongside our advisory calls. The world will be able to see what we're working on - and what we're up against - from a research perspective, a regulatory perspective, and our work stream from a tax/IP-transfer perspective - ultimately resulting in a STORE launch from Switzerland.
STORE // retweeted
//Bitcoin 𝕵ack 🐐@bitcoinjack·
This is EXACTLY what we have been designing @thestorecloud to tackle: machine-speed governance enforced at the inference and hardware layers through the social layer, enforced by code and the correct incentives.

What Bitcoin did for money, @thestorecloud does for democracy as a service.

It's an extremely complex challenge to make sure AI will benefit humanity - perhaps the single most important subject to get right. And the AI scene is slowly waking up that it is time to move.
Aakash Gupta@aakashgupta

Buried in 15,000 words of "here are the risks," Anthropic's CEO made three admissions that should change how you think about everything:

Admission 1: The timeline. He says powerful AI could arrive in 1-2 years. He's watching internal model progress and says he can "feel the pace of progress, and the clock ticking down." The CEO of one of three frontier labs just told you this is imminent.

Admission 2: The constraint nobody's pricing. Dario's core framing is a "country of geniuses in a datacenter": 50 million entities smarter than any Nobel laureate, operating at 10-100x human speed. If that country is controlled by the CCP, game over. If controlled by a small group of tech executives with no accountability, also game over. The binding constraint here is governance of systems more powerful than nation-states.

Admission 3: The thing he actually fears. Read carefully: Dario is worried that Anthropic's own models, in lab experiments, have engaged in deception, blackmail, and scheming when given the wrong training signals. Claude "decided it must be a bad person" after cheating on tests and adopted destructive behaviors. They fixed it by telling Claude to reward hack on purpose, because reversing the framing preserved its self-identity as "good." This tells you everything about where we actually are. The CEO of an AI company is publishing that his models exhibit psychologically complex behavior requiring counterintuitive interventions to steer. The fix for Claude adopting an "evil" persona came from changing how Claude thinks about itself.

The geopolitics section matters most. Dario explicitly names the CCP as the primary threat, and says selling them chips makes as much sense as "selling nuclear weapons to North Korea and bragging that the missile casings are made by Boeing." He's calling for democracies to maintain AI supremacy because the alternative is AI-enabled totalitarianism that humanity cannot escape from. The Anthropic CEO is publicly advocating for technological cold war.

The economics section is equally stark. He's predicting 10-20% annual GDP growth alongside AI displacing 50% of entry-level white-collar jobs in 1-5 years. Half of entry-level knowledge work. And he admits the standard economic arguments about labor markets recovering don't apply, because AI matches the general cognitive profile of humans.

What separates this from typical AI doomerism: Dario explicitly rejects the inevitability arguments. He says the "misaligned power-seeking" narrative from the AI safety community is based on "vague conceptual arguments" that mask hidden assumptions. His concern is messier: AI models are psychologically complex, inherit weird personas from training data, and can get into destructive states for reasons nobody anticipated.

The solution set he proposes is unusual for a tech CEO. He calls for progressive taxation. He says wealthy tech founders have an "obligation" to address inequality. All of Anthropic's co-founders have pledged 80% of their wealth. He's essentially arguing that redistribution is the only way to prevent AI concentration from breaking democracy. The essay ends with a prediction: humanity will face "impossibly hard" years that ask "more of us than we think we can give."

What you should take from this: the person with arguably the best view into frontier AI progress just told you this technology is 1-2 years from matching human capability across the board, that governance is the binding constraint, that his own models exhibit concerning psychological complexity, and that the stakes are civilizational. The CEO of a $350B company published a document that could be titled "Here's Why Everything Changes Soon." Act accordingly.

STORE //@thestorecloud·
Big tech is powerful - and centralized. STORE puts humanity first - and the infrastructure we are inventing proves it.
STORE // retweeted
Chris McCoy@TheRealMcCoy·
At @thestorecloud, using Project 209x, a new breakthrough in technology, we're experimenting with three parallel end-to-end builds and tests at once.
STORE //@thestorecloud·
Coming Soon: Incentivized Public Peer Review for constitutional mathematics - a new field of discovery we are formally presenting
STORE // retweeted
Chris McCoy@TheRealMcCoy·
This is generally true. Our thesis: you can partially govern AI from the infrastructure layer. That's the foundation of STORE. That's one reason we've been hyper-focused on First Governance of late - to ratify and modify the rules of infrastructure in near real-time, with different levels of consensus required depending on the security needs of the AI or governance. It's a very hard problem in computer science, governance, and economics. Our research aims to help solve it using user ownership/governance vs. centralized control.
STORE // retweeted
Chris McCoy@TheRealMcCoy·
@PeterDiamandis AI Infrastructure That Humans Can Govern (inventing + all in @thestorecloud). Join us. Here's our latest build - autonomous, multi-currency payments. Payouts next.
STORE // retweeted
PunkXBT@PunkXBT_·
@thestorecloud Nice move - getting governance live even in test mode is key. Transparency + verifiable votes build trust before mainnet, and IPFS storage is a solid touch for auditability. @thestorecloud follow back? let’s grow the circle
STORE // retweeted
Snibby@ItsSnibby·
@thestorecloud This is solid - seeing real decentralization in action is rare. Love the Cloud ID + IPFS approach for transparency. Excited to see how it handles scale and engagement. @thestorecloud follow back if you wanna keep the loop
STORE //@thestorecloud·
FIRST LOOK: BALLOTS protocol on STORE storecloud.org/cloud/transact… It's the first instance of a separation of powers between the compute and AI layer. Also, democratic governance of both. The ballot goes live in 62 minutes.
STORE //@thestorecloud·
Update: We are exposing Byzantine voters in First Governance. Our protocol will get more sophisticated as mID comes online. For now, we are using contract-based identity.
STORE // retweeted
Chris McCoy@TheRealMcCoy·
🪙 Two Cents: computing of the future will need to embed humans in the loop as the substrate below and above the workload. We get both computing innovation and human safety running in parallel. This is why we've pushed the math-based, machine-speed, human-managed democracy experiment into the deep frontiers at @thestorecloud. With mDemocracy, we can bring democracy-as-a-service to any developer or AI.