Building Government Capacity: Strengthen the federal government’s ability to prevent and respond to AI-enabled crises; empower CAISI to meet the national security demands of frontier AI.
New IAPS policy memo from @__J0E___, @rosen_br, and @covinstantinop: a national security playbook for frontier AI after Mythos. The White House is weighing executive actions to address national security risks from AI; we set out what a durable response should look like.
On the Deterring American AI Model Theft Act, the memo recommends amendments to: provide options for addressing legal barriers to industry cooperation; enable cross-government info sharing; restrict agency access to certain models; and clarify that these attacks constitute trade secret theft.
New IAPS policy memo from @theobearman: The White House and Congress have begun acting on AI distillation attacks—but gaps remain in both the OSTP National Security and Technology Memorandum (NSTM) and the proposed Deterring American AI Model Theft Act of 2026.
While initiatives like Project Glasswing provide tech companies with access to cyber-capable models, "access is only the starting point".
A timely policy memo on AI cyber defence, by @covinstantinop, @JKraprayoon, & @MattInThemittel @iapsAI
Our report offers a harmonized standard compatible with all 3 frameworks. Developers should build a safety case across 3 risk factors for each threat vector:
Means: can they cause harm?
Motive: would they?
Opportunity: would they get the chance?
iaps.ai/research/risk-…
California's SB 53, New York's RAISE Act, and the EU Code of Practice now require frontier developers to report on internal use risks. But each law leaves significant discretion over what reports should contain.
Before a frontier AI model is released publicly, it runs inside the company for weeks or months.
New from IAPS: a harmonized reporting standard for internal model use risks, compatible w/ California's SB 53, New York's RAISE Act, & EU's Code of Practice: iaps.ai/research/risk-…