Jack Crovitz

13 posts


@CrovitzJack

Palantir, Senior Fellow for AI & Emerging Tech at @A1Policy, UChicago ’25.

New York, NY · Joined October 2013
164 Following · 76 Followers
Pinned Tweet
Jack Crovitz@CrovitzJack·
Last year, CENTCOM had a classified AI compute shortage, so it bought some chips. CENTCOM’s Chief Data Officer boasted that the acquisition “will give us a really huge, significant amount of compute capability that no one else — at least that I’m tracking — has in the Defense Department for classified networks.”

How much computing power made him brag like that? It was exactly 28 H100 GPUs. In contrast, at the same time, American commercial hyperscalers routinely built AI data centers with hundreds of thousands of H100 GPUs.

The US government is not yet ready to fully harness AI. It is bottlenecked by complex procurement processes, a lack of expertise in AI adoption, and limited access to classified AI compute. If Washington fails the AI adoption challenge, the federal government will fall dangerously behind private actors and foreign adversaries in both efficiency and lethality.

I wrote a piece with the AFPI team (@A1Policy) laying out four strategies for policymakers to accelerate AI adoption within the US government.

1: Streamline Procurement of Commercial AI Tools.
> Reform complex acquisition processes
> Implement a “colorless” money system for software acquisition in the Department of War
> Give agencies access to more flexible procurement options like Other Transaction Authority

2: Develop a Security Framework for AI Agent Deployment.
> Publish clear security standards for federal agencies to control and monitor AI agents
> Give federal agencies the confidence they need to aggressively deploy AI agents for US government workflows

3: Empower and Train a Network of AI Adoption Leaders.
> The US government needs leaders who can take responsibility for accelerating AI adoption
> Agencies also need procurement officials to be trained in AI technology and commercial contracting

4: Expand Classified AI Compute Infrastructure.
> American warfighters must not fall behind private actors and adversaries in AI adoption due to a shortage of classified compute
> Congress should direct DOE and the Department of War to lead a cross-agency effort to construct or retrofit AI data centers that are provably secure enough to handle classified information

Read the full report here: t.co/Gz4jzWLC8t
Jack Crovitz reposted
Yusuf Mahmood@YusufSMahmood·
We need world-class AI engineers to join government. But small salaries, months of paperwork, and large bureaucracies stand in the way. That has to change. At AFPI (@A1policy) we wrote a new paper to discuss how.

The talent problem:
> Federal hiring processes are burdensome, requiring rigid ranking, interview quotas, and inflexible recruitment that all discourage and slow hiring
> AI experts make 3 to 10 times as much in salary in the private sector. The maximum federal salary is $195k, but AI experts in industry are making millions.
> Technical staff see the federal bureaucracy as a career graveyard full of complacency and DEI

Fortunately, the Trump Administration is already working on solutions. Most notably, @skupor has created U.S. Tech Force, which is great because it centralizes hiring, focuses on quality over quantity, and has a media blitz.

We recommend 6 additional actions to promote hiring of talented experts into AI-specific roles.

Congress could:
1) Authorize and fund the Tech Force to preserve it and incentivize hiring via cost sharing with agencies
2) Create a “Tech Force Reserve” so that patriotic technical experts can contribute to critical missions without interrupting their private sector careers
3) Expand DOW’s “PPTE” program to AI-specific roles across government so that industry experts can flexibly lend their expertise on term-limited appointments
4) Establish a “Highly Qualified AI Experts” program that allows agency heads to hire up to 20 AI experts without pay scale or procedural limitations

The Office of Personnel Management could also:
5) Boost technical salaries by as much as 25-50% through awards and bonuses to attract and retain experts
6) Expand and promote the use of flexible hiring authorities like DHA, IPA, and excepted service to recruit better candidates faster

The piece also discusses accelerating federal AI adoption and how to build hubs of government AI foresight, which my colleagues @CrovitzJack and @ColeSalvador31 also worked on. If the federal government is going to ensure AI serves the American people, it needs to build in-house expertise to understand, deregulate, procure, and deploy the technology.

Link here: americafirstpolicy.com/issues/buildin…
Jack Crovitz@CrovitzJack·
Thanks, Sam! The section on "Accelerating AI Adoption in the Federal Government" was strongly inspired by AI and Leviathan. x.com/CrovitzJack/st…
Jack Crovitz@CrovitzJack
(Quoted tweet: the pinned thread above.)
Samuel Hammond 🦉@hamandcheese·
Great piece with contributions from @CrovitzJack, new alum of FAI's Conservative AI Policy Fellowship
Cole Salvador@ColeSalvador31

In 2022 and 2023, tiny teams of researchers drew straight lines on graphs that predicted the US was headed for an energy bottleneck in AI. But the government had no idea. The future of AI is too important to make the same mistake again. We need talent-dense, AI-focused offices that can skate to where the puck is going and implement President Trump’s AI agenda.

In a new piece for AFPI (@A1Policy), we discuss 2 promising offices that could act as hubs of government AI foresight: the Center for AI Standards and Innovation (CAISI) in the Department of Commerce and the Bureau of Emerging Threats (ET) in the Department of State. We found that they have the density of talent to succeed but still lack resources: funding, headcount, and authorization. Here’s a summary:

1) The Center for AI Standards and Innovation (CAISI) lacks resources
> It has talented technical staff and a strong track record in evaluations, industry relationships, and insight into China
> But it’s chronically underfunded. It’s been around for 3 years but has only received $30M in total, not annual, funds. That’s 11 times less than the UK’s equivalent. (It’s even short of Canada and Singapore)
> It only has 20-30 employees, who are swamped with workstreams and external requests from agencies like the IC

To solve this, Congress should fund CAISI with an annual budget of $50-100 million.

2) CAISI lacks authorization or a focused mission
> Between Department asks, inbound from other offices, and the AI Action Plan, it has more missions than staff
> Its critical mission could be threatened by future administrations, who could externally pressure it to pursue DEI initiatives

Congress needs to enshrine the office and give it a clear mission. We present an America First vision for CAISI, in which it acts as a technical strike team, a bridge between industry and government, a frontier analysis unit, and a technical standards organization.

3) The Bureau of Emerging Threats (ET) lacks authorization
> ET is similarly talent-dense, with experts in cyber, AI, and international relations
> But it lacks congressional authorization and could be destroyed or co-opted by future administrations

The Bureau needs concrete support from Congress and levers of interagency influence, like regular reports to national security leaders.

With appropriate action, Congress can help ensure the President has the resources he needs to help America win the AI race and usher in a new golden age of human flourishing. Always fun to collaborate with @CrovitzJack and @YusufSMahmood, who have posted about other sections of our piece.

Jack Crovitz@CrovitzJack·
Always a pleasure to work with @ColeSalvador31! His arguments for empowering CAISI and the Bureau of Emerging Threats are very compelling.
Cole Salvador@ColeSalvador31
(Quoted tweet: the thread quoted above.)
Jack Crovitz@CrovitzJack·
This is a powerful and compelling plan for American AI policy. Our leaders are thinking seriously about the right steps forward! I especially appreciate this passage about the need for our national security agencies to be ready for AI:
Director Michael Kratsios@mkratsios47

Today, the @WhiteHouse released a commonsense National AI Policy Framework that ensures every American benefits from AI. As @POTUS has said — we need one federal AI policy, not a 50 state patchwork. This gets us there. Eager to work with Congress on this important legislation.

Jack Crovitz reposted
Yusuf Mahmood@YusufSMahmood·
"Data centers are draining our water" is the new "plastic straws are destroying the ocean." It's a hoax, and many people pushing it know it's not true. At AFPI (@A1policy) we wrote a piece breaking down the numbers:

1) Data centers use very little water
> Somewhere between 0.2% and 0.5% of U.S. freshwater consumption
> 15x less water than we lose each year to leaky pipes
> The biggest data center of 2024 uses less water than 3 square miles of farmland (America has 1.3 million)

2) Local water impacts are small, too
> In one of the country’s most “water stressed” counties, data centers are 0.12% of its water use (golf courses are 3.8%)

3) This hasn’t stopped lawmakers from fearmongering about data centers
> 5 senators, including Bernie Sanders and Ed Markey, wrote a letter to the admin complaining about data center water use
> Lawmakers have introduced legislation and called for data center moratoriums because of fake water use claims. Denver might enact one soon

4) Data centers are one of America’s greatest strengths
> Huge local tax revenues
> The AI data center boom has created tremendous economic growth
> Wages in construction and the trades have skyrocketed (construction up >30% because of data centers)

We end by suggesting some ways to accelerate the data center buildout while protecting local communities' interests.

Full piece here: americafirstpolicy.com/issues/the-dat…
Jack Crovitz@CrovitzJack·
Exciting new piece from @mattburtell and @yusufsmahmood on the need for transparency in AI development 🙌
Matt Burtell@MattBurtell

NEW PAPER ON AI TRANSPARENCY FROM THE AMERICA FIRST POLICY INSTITUTE

Last week, the Senate okayed the use of AI for staffers, and the Department of War articulated legitimate concerns about the values embedded in Anthropic’s AI systems. So it’s worth asking: to what extent are these systems biased?

The evidence of anti-conservative bias that we cite is damning:
> In a corpus of real-world examples, right-leaning outlets represent only 1% of cited sources.
> On political compass tests, 23 of 24 LLMs leaned left across economic, social, and cultural dimensions. (The single exception was a model fine-tuned for right-leaning responses.)
> AI rates right-leaning sources as less reliable than left-leaning sources, even when human fact-checkers rate them comparably.

Unlike traditional software, we can’t merely inspect the code of systems like ChatGPT or Gemini and identify how they were designed to behave. As AI becomes further integrated into the analysis and decision-making of individuals in and out of government, transparency into the AI becomes more important.

In a new piece from me and @YusufSMahmood at America First Policy Institute, we argue for a disclosure-forward framework on AI so that, whether it's a government official procuring AI or an individual choosing which model to use, they have the information necessary to make that decision.

Beyond transparency to expose political bias, we argue that disclosure can protect children and national security. When the public is made aware of what companies already know about the risks from their systems, the mitigations they have in place, and how well those mitigations are working, parents can vote with their feet and standards form that courts can enforce.

The American people deserve greater insight into the systems that indirectly and directly influence their lives.

Read it here: americafirstpolicy.com/issues/ai-tran…

Jack Crovitz reposted
Matt Burtell@MattBurtell·
(Original of the thread quoted above.)
Jack Crovitz@CrovitzJack·
@cremieuxrecueil Question: You say that statehood emerged in response to a need for hydraulic management. But then you also say that the Greeks’ society was different from Near Eastern “hydraulic” ones because they had little need for hydraulic management. So how did Greek statehood emerge?
Crémieux@cremieuxrecueil·
China and Rome were both massive centralized regimes. In the popular mind, their capitals were Beijing and Rome, respectively. But this wasn't always true.

China first. This map shows the Han Dynasty (206 BC-220 AD), and it has labels for China's two historic capitals. The lesser-known one is modern-day Xi'an, then known as Chang'an or Changan.

When different Chinese dynasties placed their capital there, it was not because it was the biggest city. In fact, Guanzhong province contained 4% of the population during the Han dynasty, versus 60% in Guandong. If capitals were placed in population centers, the capital would have gone there instead of to the Guanzhong city of Changan. The choice of Beijing was similarly not based on picking the most populous area to plop the capital down in.

Chinese capitals were, instead, placed where they were because of their proximity to external threats. Take a look at this wonderful relief map of China (obtained from @Alethios3): The only people who would enter China in significant numbers were people in the north, steppe nomads. At the time when Changan was more often the capital, Manchuria also wasn't really an invasion threat, but people coming in from the northwest and flooding over the central plains most definitely were. And thus, Changan tended to be the capital in those years.

Why it was the capital has to do with how force can be projected in a past without much state capacity. Technology just wasn't that great for administering states in the past, so the power of the state declined majorly with distance from administrative centers. Accordingly, China tended to put its capital near its borders, and once they walled up those borders and the locations of the threats moved, the new go-to capital became Beijing!

What about Rome? Does anyone even know Rome's non-Rome capitals? My guess is that this isn't common knowledge, even among Roman Republic and Empire fanboys.

Take a look at this map of the Tetrarchy-era Roman Empire: Rome moved its capital seemingly for the same reasons China did: to get administrative centers closer to the frontiers that invaders were at. Trier, Sirmium, Milan, and Nicomedia were not the largest cities in the Empire, much as Changan and Beijing weren't the largest cities and weren't located in the most populous provinces in China. If the capital had been placed by population, it would have gone to Alexandria, Carthage, or Antioch.

The logic of imperial capital placement seemingly works based on a simple threat model, in both East and West. To learn more about this, go check out my latest article in which I talk about the invention of agriculture and states, the unifications of China and Rome, and why China had to fall behind the West when it came to industrializing first.

Link: cremieux.xyz/p/from-caveman…
Daragh Grant@daraghjgrant·
Looking like that total weirdo who brings Leviathan on a transatlantic flight. Then again it’s apt enough; this voyage is almost certain to be solitary, poor, nasty and brutish, if, alas, not short.