Chubby♨️@kimmonismus
Looks like OpenAI is preparing for superintelligence.
OpenAI: "Now, we’re beginning a transition toward superintelligence: AI systems capable of outperforming the smartest humans even when they are assisted by AI."
OpenAI just published a 13-page policy blueprint for the "Intelligence Age," proposing a Public Wealth Fund, 32-hour workweek pilots, portable benefits, a formal "Right to AI," and tax reforms to offset shrinking payroll revenue as automation scales.
The document frames superintelligence not as a distant scenario but as an active transition requiring New Deal-level ambition: new safety nets, containment playbooks for dangerous models, and international coordination modeled on aviation safety institutions.
Here are OpenAI's suggestions (tl;dr):
Open Economy:
- Give workers a formal voice in AI deployment decisions
- Microgrants and "startup-in-a-box" support for AI-native entrepreneurs
- Treat AI access as basic infrastructure (like electricity)
- Shift the tax base from payroll toward capital gains and corporate income
- Public Wealth Fund: every citizen gets a stake in AI growth
- Fast-track energy grid expansion via public-private partnerships
- 32-hour workweek pilots, with better benefits funded by productivity gains
- Auto-scaling safety nets triggered by displacement metrics
- Portable benefits untied from employers
- Invest in the care economy as a transition path for displaced workers
- Distributed AI-enabled labs to accelerate scientific discovery
Resilient Society:
- Safety tools for cyber, bio, and other large-scale risks
- An AI trust stack: provenance, verification, audit logs
- A competitive auditing market for frontier models
- Containment playbooks for dangerous released models
- Frontier AI companies adopt Public Benefit Corporation structures
- Codified rules and auditing for government AI use
- Democratic public input on AI alignment standards
- Mandatory incident and near-miss reporting
- An international AI safety network for joint evaluations and crisis coordination
Notably, OpenAI calls for stricter controls only on a narrow set of frontier models while keeping the broader ecosystem open, a clear attempt to position regulation as targeted rather than industry-wide. The company is backing the blueprint with up to $100K in fellowships and $1M in API credits for policy research, plus a new DC workshop opening in May.