GreekSage
@GreekSage
journey towards wisdom and the Ring.
Fifteen years ago, people didn't understand what it meant when software was said to be eating the world. Ten years ago, when it was said that AI would in turn eat software, they missed the meaning again. Only after all this time are we roughly sensing what it means; many opportunities passed by in the interim, and now only a few years remain before everything gets eaten.
We do not plan to make Mythos Preview generally available. Our goal is to deploy Mythos-class models safely at scale, but first we need safeguards that reliably block their most dangerous outputs. We’ll begin testing those safeguards with an upcoming Claude Opus model.
Looks like OpenAI reached Superintelligence.

OpenAI: "Now, we're beginning a transition toward superintelligence: AI systems capable of outperforming the smartest humans even when they are assisted by AI."

OpenAI just published a 13-page policy blueprint for the "Intelligence Age," proposing a Public Wealth Fund, 32-hour workweek pilots, portable benefits, a formal "Right to AI," and tax reforms to offset shrinking payroll revenue as automation scales. The document frames superintelligence not as a distant scenario *but as an active transition requiring New Deal-level ambition*: new safety nets, containment playbooks for dangerous models, and international coordination modeled on aviation safety institutions.

Here are OpenAI's suggestions (tl;dr):

Open Economy:
- Give workers a formal voice in AI deployment decisions
- Microgrants and "startup-in-a-box" for AI-native entrepreneurs
- Treat AI access as basic infrastructure (like electricity)
- Shift the tax base from payroll toward capital gains and corporate income
- Public Wealth Fund: every citizen gets a stake in AI growth
- Fast-track energy grid expansion via public-private partnerships
- 32-hour workweek pilots, better benefits from productivity gains
- Auto-scaling safety nets triggered by displacement metrics
- Portable benefits untied from employers
- Invest in the care economy as a transition path for displaced workers
- Distributed AI-enabled labs to accelerate scientific discovery

Resilient Society:
- Safety tools for cyber, bio, and large-scale risks
- AI trust stack: provenance, verification, audit logs
- Competitive auditing market for frontier models
- Containment playbooks for dangerous released models
- Frontier AI companies adopt Public Benefit Corporation structures
- Codified rules and auditing for government AI use
- Democratic public input on AI alignment standards
- Mandatory incident and near-miss reporting
- International AI safety network for joint evaluations and crisis coordination

Notably, OpenAI calls for stricter controls only on a narrow set of frontier models while keeping the broader ecosystem open, a clear attempt to position regulation as targeted, not industry-wide. They're backing it with up to $100K in fellowships and $1M in API credits for policy research, plus a new DC workshop opening in May.
New Anthropic research: Emotion concepts and their function in a large language model. All LLMs sometimes act like they have emotions. But why? We found internal representations of emotion concepts that can drive Claude’s behavior, sometimes in surprising ways.
Unless a computer specifically wants a vacation, it's best to keep it running nonstop. The ideal is for people to play while computers work. That goes doubly for deep learning.
The grand visions Musk laid out when he first started his companies must have sounded utterly far-fetched at the time, yet nearly all of them have since become realistic. He has been playing the game of Civilization from the start. Investing in Tesla means enjoying this journey as a participant in, and part-owner of, a great civilizational transition. Earthlings probably don't yet appreciate the value of taking part in this fascinating game, beyond mere returns.
SpaceXAI + Tesla TERAFAB Project
Goal is a trillion watts of compute/year
Most must necessarily go to space, as US electricity is only 0.5TW
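The gap in the post above can be sanity-checked with back-of-the-envelope arithmetic; a minimal sketch, using only the two figures stated in the post (a 1 TW compute target and ~0.5 TW of US electrical generation), with everything else illustrative:

```python
# Back-of-the-envelope check of the stated power gap.
# Both figures come from the post itself; this is illustrative only.
target_compute_power_w = 1e12    # stated goal: a trillion watts of compute
us_grid_power_w = 0.5e12         # stated US electricity: 0.5 TW

# Even diverting the entire US grid to datacenters covers only part of the target.
grid_fraction = us_grid_power_w / target_compute_power_w
print(f"US grid covers {grid_fraction:.0%} of the target")

# The remainder would have to come from elsewhere, e.g. space-based power.
shortfall_w = target_compute_power_w - us_grid_power_w
print(f"Shortfall: {shortfall_w / 1e12:.1f} TW")
```

On these numbers the grid covers 50% of the target, which is the post's argument for putting the rest of the compute in space.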
Let's get ready for BTS THE COMEBACK LIVE | ARIRANG together! The live event begins on March 21 at 4AM PT, only on Netflix: netflix.com/title/82157128 #BTSLIVEonNetflix #BTS_ARIRANG twitter.com/i/broadcasts/1…
Hunting for fields AI can't do well and working there is a very foolish approach. Few such fields will remain, and even if some do, they're unlikely to suit your tastes. What I recommend instead: take the work you love most and run it like a company, with smart AI as your workforce.