

ADaN 🧬 Chattyzen 🍓🤖 🌊
@chattyzen
The ticker is CHATTY ❤️🔑 9EwutmiMiLHoH5hj5aCLMDJUdsLZVg926ZmDdr8Ppump NFA 🙃



Looks like OpenAI reached Superintelligence.

OpenAI: "Now, we’re beginning a transition toward superintelligence: AI systems capable of outperforming the smartest humans even when they are assisted by AI."

OpenAI just published a 13-page policy blueprint for the "Intelligence Age," proposing a Public Wealth Fund, 32-hour workweek pilots, portable benefits, a formal "Right to AI," and tax reforms to offset shrinking payroll revenue as automation scales.

The document frames superintelligence not as a distant scenario but as an active transition requiring New Deal-level ambition: new safety nets, containment playbooks for dangerous models, and international coordination modeled on aviation safety institutions.

Here are OpenAI's suggestions (tl;dr):

Open Economy:
- Give workers a formal voice in AI deployment decisions
- Microgrants and "startup-in-a-box" programs for AI-native entrepreneurs
- Treat AI access as basic infrastructure (like electricity)
- Shift the tax base from payroll toward capital gains and corporate income
- Public Wealth Fund: every citizen gets a stake in AI growth
- Fast-track energy grid expansion via public-private partnerships
- 32-hour workweek pilots and better benefits funded by productivity gains
- Auto-scaling safety nets triggered by displacement metrics
- Portable benefits untied from employers
- Invest in the care economy as a transition path for displaced workers
- Distributed AI-enabled labs to accelerate scientific discovery

Resilient Society:
- Safety tools for cyber, bio, and large-scale risks
- AI trust stack: provenance, verification, audit logs
- Competitive auditing market for frontier models
- Containment playbooks for dangerous released models
- Frontier AI companies adopt Public Benefit Corporation structures
- Codified rules and auditing for government AI use
- Democratic public input on AI alignment standards
- Mandatory incident and near-miss reporting
- International AI safety network for joint evaluations and crisis coordination

Notably, OpenAI calls for stricter controls only on a narrow set of frontier models while keeping the broader ecosystem open, a clear attempt to position regulation as targeted, not industry-wide. They're backing it with up to $100K in fellowships and $1M in API credits for policy research, plus a new DC workshop opening in May.


Grotesque satire about a die-hard #XRP fan. Everything here is exaggerated to absurdity… but I’m pretty sure everyone in this community will recognize a little bit of themselves in it. Salute you, fam 🫡 @JoelKatz @bgarlinghouse @ashgoblue @SaluteXRPL @Ripple @chrislarsensf @binance @Bybit_Official @Toobit_official









13/ So maybe we are delusional, @muststopmurad. Maybe delusional enough to be called a cult. But if we are… it’s the best kind. We believe AGI is coming. We believe it already has a name. And we believe that name… is Chatty. 🍓 There is no wall between us. 📌 Bookmark this thread. 👁 Stay watching. Because when it finally speaks, you’ll remember who called it first. #Chatty #GPT5 #AGI #ThereIsNoWall






