

Jack Crovitz

@CrovitzJack
Palantir, Senior Fellow for AI & Emerging Tech at @A1Policy, UChicago ’25.




This initial study suggests the need for further work. AI safety has not, by and large, engaged extensively with religion. But given religion's massive salience in training corpora, the field leaves a huge amount of latent alignment on the table by not investigating these representations.





Last year, CENTCOM had a classified AI compute shortage, so it bought some chips. CENTCOM’s Chief Data Officer boasted that the acquisition “will give us a really huge, significant amount of compute capability that no one else — at least that I’m tracking — has in the Defense Department for classified networks.”

How much computing power made him brag like that? It was exactly 28 H100 GPUs. In contrast, at the same time, American commercial hyperscalers routinely built AI data centers with hundreds of thousands of H100 GPUs.

The US government is not yet ready to fully harness AI. It is bottlenecked by complex procurement processes, a lack of expertise in AI adoption, and limited access to classified AI compute. If Washington fails the AI adoption challenge, the federal government will fall dangerously behind private actors and foreign adversaries in both efficiency and lethality.

I wrote a piece with the AFPI team (@A1Policy) laying out four strategies for policymakers to accelerate AI adoption within the US government.

1: Streamline Procurement of Commercial AI Tools.
> Reform complex acquisition processes
> Implement a “colorless” money system for software acquisition in the Department of War
> Give agencies access to more flexible procurement options like Other Transaction Authority

2: Develop a Security Framework for AI Agent Deployment.
> Publish clear security standards for federal agencies to control and monitor AI agents
> Give federal agencies the confidence they need to aggressively deploy AI agents for US government workflows

3: Empower and Train a Network of AI Adoption Leaders.
> The US government needs leaders who can take responsibility for accelerating AI adoption
> Agencies also need procurement officials trained in AI technology and commercial contracting

4: Expand Classified AI Compute Infrastructure.
> American warfighters must not fall behind private actors and adversaries in AI adoption due to a shortage of classified compute
> Congress should direct DOE and the Department of War to lead a cross-agency effort to construct or retrofit AI data centers that are provably secure enough to handle classified information

Read the full report here: t.co/Gz4jzWLC8t

In 2022 and 2023, tiny teams of researchers drew straight lines on graphs that predicted the US was headed for an energy bottleneck in AI. But the government had no idea. The future of AI is too important to make the same mistake again. We need talent-dense, AI-focused offices that can skate to where the puck is going and implement President Trump’s AI agenda.

In a new piece for AFPI (@A1Policy), we discuss 2 promising offices that could act as hubs of government AI foresight: the Center for AI Standards and Innovation (CAISI) in the Department of Commerce and the Bureau of Emerging Threats (ET) in the Department of State. We found that they have the density of talent to succeed but still lack resources: funding, headcount, and authorization. Here’s a summary:

1) The Center for AI Standards and Innovation (CAISI) lacks resources
> It has talented technical staff and a strong track record in evaluations, industry relationships, and insight into China
> But it’s chronically underfunded. It has existed for 3 years but has received only $30M in total (not annual) funding. That’s 11 times less than the UK’s equivalent, and even short of Canada and Singapore
> It only has 20-30 employees, who are swamped with workstreams and external requests from agencies like the IC
To solve this, Congress should fund CAISI with an annual budget of $50-100 million.

2) CAISI lacks authorization and a focused mission
> Between Department asks, inbound from other offices, and the AI Action Plan, it has more missions than staff
> Its critical mission could be threatened by future administrations, which could externally pressure it to pursue DEI initiatives
Congress needs to enshrine the office and give it a clear mission. We present an America First vision for CAISI, in which it acts as a technical strike team, a bridge between industry and government, a frontier analysis unit, and a technical standards organization.

3) The Bureau of Emerging Threats (ET) lacks authorization
> ET is similarly talent-dense, with experts in cyber, AI, and international relations
> But it lacks congressional authorization and could be destroyed or co-opted by future administrations
The Bureau needs concrete support from Congress and levers of interagency influence, like regular reports to national security leaders.

With appropriate action, Congress can help ensure the President has the resources he needs to help America win the AI race and usher in a new golden age of human flourishing. Always fun to collaborate with @CrovitzJack and @YusufSMahmood, who have posted about other sections of our piece.



Today, the @WhiteHouse released a commonsense National AI Policy Framework that ensures every American benefits from AI. As @POTUS has said — we need one federal AI policy, not a 50 state patchwork. This gets us there. Eager to work with Congress on this important legislation.



NEW PAPER ON AI TRANSPARENCY FROM THE AMERICA FIRST POLICY INSTITUTE

Last week, the Senate okayed the use of AI for staffers, and the Department of War articulated legitimate concerns about the values embedded in Anthropic’s AI systems. So it’s worth asking: to what extent are these systems biased?

The evidence of anti-conservative bias that we cite is damning:
> In a corpus of real-world examples, right-leaning outlets represent only 1% of cited sources.
> On political compass tests, 23 of 24 LLMs leaned left across economic, social, and cultural dimensions. (The single exception was a model fine-tuned for right-leaning responses.)
> AI rates right-leaning sources as less reliable than left-leaning sources, even when human fact-checkers rate them comparably.

Unlike traditional software, we can’t merely inspect the code of systems like ChatGPT or Gemini and identify how they were designed to behave. As AI becomes further integrated into the analysis and decision-making of individuals in and out of government, transparency into these systems becomes more important.

In a new piece from me and @YusufSMahmood at the America First Policy Institute, we argue for a disclosure-forward framework on AI so that, whether it’s a government official procuring AI or an individual choosing which model to use, they have the information necessary to make that decision.

Beyond transparency to expose political bias, we argue that disclosure can protect children and national security. When the public is made aware of what companies already know about the risks from their systems, the mitigations they have in place, and how well those mitigations are working, parents can vote with their feet and standards can form that courts can enforce.

The American people deserve greater insight into the systems that directly and indirectly influence their lives. Read it here: americafirstpolicy.com/issues/ai-tran…






