Center for AI Policy

1.2K posts

Center for AI Policy

@aipolicyus

Nonpartisan research organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy.

Washington, DC · Joined January 2024
3 Following · 776 Followers

Pinned Tweet
Center for AI Policy @aipolicyus
On Tuesday, March 25th, 2025, the Center for AI Policy hosted a panel discussion for U.S. House and Senate staff on AI and Cybersecurity: Offense, Defense, and Congressional Priorities. The Center's Executive Director, Jason Green-Lowe, moderated a discussion between a panel of esteemed experts:

• Daniel Kroese, Vice President of Public Policy & Government Affairs, @PaloAltoNtwks
• Krystal Jackson, Non-Resident Research Fellow, @CLTCBerkeley
• Kyle Crichton, Cyber AI Research Fellow, @CSETGeorgetown
• Fred Heiding, Postdoctoral Researcher, @Kennedy_School

The session included a demonstration of an automated spear phishing AI agent, followed by discussion of current cybersecurity challenges, AI's evolving impact, and policy recommendations for Congress. Watch a video recording at the link below.
Center for AI Policy @aipolicyus
Ep. 17 of the CAIP Podcast features @PeterWildeford, Chief Advisory Executive at @iapsAI. Peter and @Jakub__Kraus discuss forecasting 101, the U.S. government's forecasting track record, integrating forecasters into government, AI’s societal impacts and opportunities, AI’s improving software skills, AI-powered forecasting systems, future AI trajectories, and more. Available on YouTube, Apple Podcasts, Spotify, and many other podcasting platforms. For show notes, visit the link below.
Center for AI Policy @aipolicyus
On Tuesday, May 20th, 2025, the Center for AI Policy hosted a panel discussion for U.S. House and Senate staff titled Progress and Policy Implications of Advanced Agentic AI. The Center's Executive Director, Jason Green-Lowe, moderated a discussion between a panel of esteemed experts:

• Atoosa Kasirzadeh, Assistant Professor, Carnegie Mellon University
• Michael Boyce, Founder and Former Director, US Department of Homeland Security AI Corps
• Jackie Kerr, Senior Research Fellow, National Defense University
• Jam Kraprayoon, AI Policy Researcher, Institute for AI Policy and Strategy (IAPS)

The session included a demonstration of current AI agents interacting in Sage's AI Agent Village by Adam Binksmith, followed by a deep dive into the current state of agentic AI, the promise and peril that await us as models become increasingly agentic, and a discussion of potential policy solutions to avert the risks. Watch now at the link below:
Center for AI Policy @aipolicyus
AI Experts From Biden’s Talent Surge Leave Federal Government

Six days before the 2024 election, the Biden administration announced that it had hired over 250 AI practitioners through its “AI Talent Surge”—a program launched in late 2023 through Biden’s AI executive order. These technically savvy federal employees worked on tasks like “informing efforts to use AI for permitting, advising on AI investments across the federal government, and writing policy for the use of AI in government.”

According to recent reporting from @Time, most of these experts are gone. It’s not entirely clear why, but Time heard from “multiple federal officials” that the employees were “quickly pushed out by the new administration.” Time writes that many of the cuts came when Elon Musk’s Department of Government Efficiency (DOGE) “fired hundreds of recent technology hires as part of its broader termination of thousands of employees on probation or so-called ‘term’ hires.” Others occurred through workforce reductions at the U.S. Digital Service and the 18F technology office.

Angelica Quirarte, who helped recruit approximately 250 AI specialists for Biden’s AI Talent Surge, told Time that “about 10%” of those experts remain in federal service. Quirarte resigned 23 days into the Trump administration. Thus, it seems that over 200 AI specialists have recently left the government.

The Center for AI Policy thinks government AI expertise is essential. It enables both government modernization and informed preparation for future AI systems that could fundamentally transform national security, global stability, and everyday life. We strongly urge the Trump administration to prioritize building its AI capacity and growing it beyond its previous levels. We were pleased to see OMB Memorandum M-25-21’s guidance that “agencies are strongly encouraged to prioritize recruiting, developing, and retaining technical talent in AI roles,” and we urge those efforts to continue at full strength.

Pictured: The start of the article in Time.
Center for AI Policy @aipolicyus
Congress Passes the Take It Down Act

On April 28th, the U.S. House passed the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (TAKE IT DOWN) Act with a 409–2 vote. Since the bill previously passed the Senate in February by unanimous consent, the next step is for President Trump to sign the bill into law. He’s almost certain to do so, since he spoke in favor of the bill during his joint address to Congress in March, stating, “I look forward to signing that bill into law.” Furthermore, @FLOTUS Melania Trump has been a strong supporter of the bill.

@SenTedCruz (R-TX) originally introduced the Take It Down Act in June 2024 during the previous session of Congress, then reintroduced it in January 2025 for the current session. @SenAmyKlobuchar (D-MN), @RepMariaSalazar (R-FL), and @RepDean (D-PA) have also championed the legislation.

The Take It Down Act seeks to curb the publication of nonconsensual intimate imagery (NCII) online, including AI-generated NCII. It introduces the term “digital forgery,” defined as an “intimate visual depiction of an identifiable individual created through the use of software, machine learning, artificial intelligence, or any other computer-generated or technological means [...] that, when viewed as a whole by a reasonable person, is indistinguishable from an authentic visual depiction of the individual.” “Intimate visual depiction” is an established term for sexually explicit imagery from the 2022 reauthorization of the Violence Against Women Act (VAWA).

According to the Justice Department, existing law already “lets you bring a civil action in federal court against someone who shared intimate images, explicit pictures, recorded videos, or other depictions of you without your consent.” The Take It Down Act goes further by creating criminal liability for people who knowingly publish digital forgeries or real intimate imagery to an online platform—with some exceptions. Criminal penalties also apply to people who threaten to publish such images “for the purpose of intimidation, coercion, extortion, or to create mental distress.”

Furthermore, the Take It Down Act directs every covered platform to create a process whereby victims of NCII on the platform can ask the platform to remove that content (i.e., take it down). Platforms must comply with valid takedown requests within 48 hours, and they must “make reasonable efforts to identify and remove any known identical copies of such depiction.”

The Take It Down Act represents a significant step from Congress towards addressing the harms of deepfakes.

Pictured: Statement from First Lady Melania Trump on the bill’s passage.
Center for AI Policy @aipolicyus
AI Policy Weekly No. 73:

🏛️ Congress passed the Take It Down Act with a 409–2 House vote, following previous Senate approval. The bill criminalizes publishing digital forgeries of intimate imagery to online platforms (with some exceptions) and requires platforms to comply with valid NCII takedown requests within 48 hours.

🚪 Approximately 90% of the 250+ AI specialists hired during Biden's "AI Talent Surge" have left the federal government, according to Time reporting. Many exits reportedly occurred when the Department of Government Efficiency (DOGE) terminated certain probationary and term employees, while others occurred during workforce reductions at the U.S. Digital Service and 18F.

💬 The White House published over 10,000 responses to its AI Action Plan RFI, and the Institute for Progress used AI to analyze submissions. Approximately 700 organizational submissions offered substantive policy recommendations, ranging from tax credits for AI alignment research to defining measurable indicators of human flourishing.

Full stories at the link below.
Center for AI Policy retweeted
Alec Stapp @AlecStapp
🚨 NEW TOOL FROM @IFP: The White House asked for input on its AI Action Plan & was flooded with >10,000 responses. To help, we built aiactionplan.org — a searchable database of what America thinks the gov't should do on AI. Naturally, we used AI to make it... 🧵 (1/10)
Center for AI Policy @aipolicyus
"Within the next few years, the largest artificial intelligence models will likely be smarter and more powerful than their human controllers," said Jason Green-Lowe, executive director of the Center for AI Policy. "Under current law, private companies can deploy any AI model regardless of the danger it creates for public safety. It is unreasonable to bet the world's future on the chance that every frontier AI developer will always be perfectly responsible." Access the Center for AI Policy's model legislation, the Responsible Artificial Intelligence Act of 2025 (RAIA), here: centeraipolicy.org/work/model
Center for AI Policy @aipolicyus
OpenAI recently released o3 and o4-mini (and o4-mini-high, which thinks more before responding). Paid ChatGPT subscribers can use all these models, while free users are limited to o4-mini.

o3’s benchmark scores are impressive, but they are lower than those of the early prototype that OpenAI flaunted in December. Granted, o3-pro is coming “in a few weeks,” and it might match or exceed the December scores by using more computation to solve problems.

o3 is arguably the best reasoning model in the world currently, though Google’s Gemini 2.5 Pro is a close competitor. The model’s competence may be the result of heavy computational consumption, with one OpenAI employee stating “we put in more than 10 times the training compute of o1 to produce o3.”

That compute is yielding impressive results. The third-party evaluator @METR_Evals tested o3 and believes it surpasses previous projections for AI’s ability to autonomously complete longer and longer software-related tasks (albeit with 50% reliability).

Importantly, o3 can fluently wield tools, especially web search and image processing, as it reasons. Though many chatbots have reasoning capabilities, search access, and image recognition, few integrate them all so smoothly.

Not all the news is rosy. Many users are reporting cases of o3 lying to them and confidently fabricating information. Indeed, its hallucination rate on the PersonQA benchmark is higher than o1’s. Additionally, biological risks are visible on the horizon. OpenAI writes that “several of our biology evaluations indicate our models are on the cusp of being able to meaningfully help novices create known biological threats, which would cross our high risk threshold. We expect current trends of rapidly increasing capability to continue, and for models to cross this threshold in the near future.”

In line with that assessment, a new paper from @SecureBio and @cais finds that language models score well on the Virology Capabilities Test (VCT), a benchmark with “322 search-proof, relevant, and multimodal questions on practical troubleshooting in virology, including coverage of many dual-use topics. The questions in VCT involve rare knowledge that trained virologists themselves consider hard-to-find or even tacit.”

Emerging AI-driven biorisks make it even more pressing for Congress to pass bills like the Strategy for Public Health Preparedness and Response to Artificial Intelligence Threats Act (from @SenTedBuddNC and @SenMarkey) and the MedShield Act (from @SenatorRounds and @SenatorHeinrich). The Center for AI Policy supports both these bills.

Pictured: Sample question from the VCT.
Center for AI Policy @aipolicyus
Center for AI Policy Unveils Model Legislation to Regulate Frontier AI Systems

The Responsible AI Act of 2025 Establishes Critical Testing Requirements for Advanced AI Systems

WASHINGTON - April 29, 2025 - The Center for AI Policy (CAIP) released its model legislation, the "Responsible AI Act of 2025" (RAIA), designed to establish a regulatory framework for the most powerful artificial intelligence systems while ensuring continued innovation in the field. As frontier AI systems approach and potentially surpass human intelligence in the coming years, RAIA proposes a commonsense approach to mitigating catastrophic risks through independent verification, hardware security requirements, and a dedicated federal oversight body.

"The unchecked development of increasingly powerful AI systems creates unprecedented risks to public safety and national security," said Jason Green-Lowe, executive director of the Center for AI Policy. "The ‘Responsible AI Act of 2025’ provides a balanced framework that allows innovation to flourish while ensuring these systems remain firmly under responsible human control. This model legislation creates a safety net for the digital age, ensuring that exciting advancements in AI are not overwhelmed by the risks they pose."

Key provisions of the "Responsible AI Act of 2025" (RAIA):

• Targeted scope: RAIA applies only to the largest general-purpose AI systems, specifically exempting developers who spend less than $1 billion on training or create narrow AI with limited applications.
• Independent auditing system: Before receiving deployment permits, developers of frontier AI systems would need validation from independent auditors confirming adequate safeguards against catastrophic outcomes.
• Hardware security: The model legislation includes minimum standards for physical security, cybersecurity, and know-your-customer protocols for AI data centers to prevent unauthorized access.
• Monitoring and reporting: RAIA would establish a team of government experts to track AI trends and developments, providing critical intelligence to federal agencies.
• Liability reform: The model legislation addresses current legal loopholes by articulating a standard of care and establishing clear liability frameworks for AI-related damages.
• Emergency powers: RAIA outlines specific procedures for government intervention in the event of an AI emergency, including provisions for compensating innocent parties.

"Within the next few years, the largest artificial intelligence models will likely be smarter and more powerful than their human controllers," said Green-Lowe. "Under current law, private companies can deploy any AI model regardless of the danger it creates for public safety. It is unreasonable to bet the world's future on the chance that every frontier AI developer will always be perfectly responsible."

At the core of the legislation is a requirement for independent third-party testing and certification. Developers of frontier AI systems would need to be evaluated by independent auditors who would certify that sufficient safeguards exist to prevent catastrophic outcomes. A new federal office, the Frontier AI Administration, would review these audits and have the authority to require additional safeguards before issuing deployment permits.

"This model legislation plugs critical loopholes in our current regulatory framework by putting a second pair of eyes on the largest AI systems," Green-Lowe said. "We're not trying to stop AI progress—we're working to ensure AI remains beneficial by keeping it under meaningful human control."

The model legislation is designed to be a resource for lawmakers, industry leaders, and other stakeholders concerned with AI safety. Access the "Responsible AI Act of 2025" (RAIA) text here: assets.caip.org/caip/RAIA%20Fu…

The Center for AI Policy (CAIP) is a nonpartisan research organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy. Based in Washington, DC, CAIP works to ensure AI is developed and implemented with effective safety standards. Learn more at centeraipolicy.org.

###
Center for AI Policy @aipolicyus
A newly released report from @GladstoneAI identifies severe vulnerabilities in America’s frontier AI development and warns that espionage could give China access to U.S. AI breakthroughs before they benefit American national security.

According to authors @JeremieCHarris and @Harris_Edouard, Trump White House officials viewed the document, titled “America’s Superintelligence Project.” While some sections are redacted in the public version, the report presents findings from a 12-month investigation involving over 100 specialists from the intelligence, military, and AI research communities.

The authors identify several critical AI security and governance challenges:

• Data center vulnerabilities: One experienced special forces operator assessed a $2 billion data center and identified a $30,000 attack that would disable it for six months or more.
• Supply chain dependencies: Many AI infrastructure components come from China, making them vulnerable to compromise. For example, Taiwan-based ASPEED manufactures 70% of the world’s Baseboard Management Controllers (BMCs).
• AI lab insecurity: According to one former OpenAI researcher, OpenAI had serious security vulnerabilities that “would have allowed any employee to exfiltrate model weights from the lab’s servers undetected.”
• AI control challenges: The authors write that “highly capable and context-aware AI systems can invent dangerously creative strategies to achieve their internal goals that their developers never anticipated or intended them to pursue.”
• Power concentration: The report warns that without robust checks and balances, “a small handful of people will end up in control of the most powerful technology ever created.”

To help address these issues, the authors recommend building highly secure data centers in remote locations, creating U.S.-based supply chains, developing robust AI control techniques, implementing oversight mechanisms similar to nuclear command protocols, and more.
Center for AI Policy @aipolicyus
On April 23rd, @POTUS Donald Trump signed an executive order titled “Advancing Artificial Intelligence Education for American Youth.” The order sets the policy of the United States to “promote AI literacy and proficiency among Americans by promoting the appropriate integration of AI into education, providing comprehensive AI training for educators, and fostering early exposure to AI concepts and technology.”

A new White House Task Force on AI Education, led by OSTP Director @MKratsios47, will coordinate implementation efforts across multiple federal agencies, including the departments of Education, Labor, Agriculture, and Energy, alongside the National Science Foundation. Within 90 days, the Task Force will plan a Presidential AI Challenge to implement over the ensuing 12 months. The Challenge will “encourage and highlight student and educator achievements in AI, promote wide geographic adoption of technological advancement, and foster collaboration [...] to address national challenges with AI solutions.”

The order also mandates public-private partnerships to develop K–12 AI literacy resources, with initial partnerships announced on a rolling basis and resources mobilized within 180 days of the first announcement. Separately, the Department of Education will issue guidance within 90 days on using existing grant funds for “AI-based high-quality instructional resources; high-impact tutoring; and college and career pathway exploration, advising, and navigation.” Through teacher training grants, the Department of Education will also support AI-related projects that reduce administrative work, improve teacher evaluation, provide professional development for teachers, and help educators “integrate the fundamentals of AI into all subject areas.”

The National Science Foundation will prioritize research on the use of AI in education, while the Agriculture Department will support AI education through 4-H programs and the Cooperative Extension System. The Labor Department will expand AI-related Registered Apprenticeships, direct funding toward AI skills development, promote AI education certifications, and support high school AI courses through grants.

Overall, this executive order represents a significant federal push towards AI literacy across America’s K–12 educational landscape.

Pictured (from left to right): Commerce Secretary @HowardLutnick, Labor Secretary @SecretaryLCD, President Donald Trump, and Education Secretary @EdSecMcMahon.
Center for AI Policy @aipolicyus
AI Policy Weekly No. 72:

🧬 OpenAI released o3 and o4-mini with enhanced reasoning and tool integration. While o3 shows impressive benchmark performance, OpenAI warns their models are "on the cusp" of meaningfully enabling biorisks.

🎓 President Trump signed an executive order on "Advancing AI Education for American Youth," directing federal agencies to promote AI literacy through multiple initiatives, including teacher training programs, expanded apprenticeship opportunities, and prioritized research on AI in education.

🛡️ A new Gladstone AI report warns of challenges in U.S. frontier AI development, including data center vulnerabilities, supply chain dependencies, AI lab insecurity, and loss-of-control concerns.

Full stories at the link below.
Center for AI Policy @aipolicyus
CNBC: Ex-OpenAI staffers urge states not to approve ChatGPT maker’s restructuring effort

+ A group including ex-OpenAI employees sent a letter to attorneys general in California and Delaware last week, requesting that they halt OpenAI’s restructuring.
+ On Tuesday evening, the group delivered the letter to OpenAI’s board.
+ “OpenAI may one day build technology that could get us all killed,” Nisan Stiennon, who worked at OpenAI from 2018 to 2020, said in a statement.
+ Jason Green-Lowe, executive director of the Center for AI Policy, said in a statement that even under OpenAI’s current structure, it was able to back away from its promise to set aside 20% of its compute for safety research.
+ “If this is how OpenAI behaves when it’s still notionally subject to nonprofit oversight, it’s terrifying to imagine how they’ll behave after they’re freed to focus entirely on maximizing profits,” Green-Lowe said in his statement. “This is not a company that you want to see start behaving with even less social responsibility — the stakes are too high.”

cnbc.com/2025/04/23/ex-…
Center for AI Policy @aipolicyus
Politico: Superintelligent AI fears: They’re baaa-ack

Two years ago, Washington was in full panic mode over AI. A widely signed open letter warned of catastrophic consequences, lawmakers grilled AI execs in hearings, and federal agencies scrambled to sketch out safety rules.

Then all the caution abruptly ended. The Trump administration and Congressional GOP embraced the go-go attitude of “accelerationist” figures in venture capital and national security. With voices in the GOP calling AI America’s “moonshot moment,” the message from the top is simple: build fast, and worry later.

So it might be an inconvenient time for the so-called doomers to return. But it’s happening. Jason Green-Lowe, executive director at the Center for AI Policy (CAIP), says that’s a dangerous inversion. “We’ve gone in one year from ‘this is a toy’ to ‘this is essential,’” he said. “We’re not prepared.”

Green-Lowe is one of many voices urging Congress to treat autonomous AI systems—particularly those that can code themselves—as distinct, high-risk entities requiring bespoke oversight. Groups like CAIP and FLI are holding briefings, drafting policy agendas, and trying to drum up lawmaker interest — but the political momentum is still tilted hard toward faster deployment.

politico.com/newsletters/di…
Center for AI Policy @aipolicyus
Rounds, Warner Introduce the Stop Stealing Our Chips Act

In SEC filings, @NVIDIA recently disclosed that it will need licenses to ship cutting-edge H20 GPUs to China. These new controls, issued by the U.S. government, will curb approximately $5.5 billion in NVIDIA sales.

Of course, the addition of new export controls does not guarantee effective enforcement. “China continues to utilize back-door methods to smuggle these chips into their country, creating a grave national security concern,” said @SenatorRounds (R-SD) in a press release for a new bill aiming to strengthen enforcement. The bill is called the “Stop Stealing Our Chips Act.” @MarkWarner (D-VA) is a co-sponsor.

The Stop Stealing Our Chips Act would:

- Establish a formal Bureau of Industry and Security (@BISgov) whistleblower program specifically for export control violations, with a focus on sensitive technologies like AI chips. This is modeled after the SEC’s whistleblower incentive program.
- Offer monetary awards (10–30% of any civil fines ultimately collected) to insiders or other tipsters who bring information that leads to a fine.
- Guarantee confidentiality and strong anti-retaliation protections and judicial remedies for whistleblowers.
- Create an Export Compliance Accountability Fund so that fines can pay for whistleblower awards and program operations.

As AI capabilities grow, so does the value of effective export control enforcement.

Pictured: Press release from Senator Rounds’ office.
Center for AI Policy @aipolicyus
Liberation Day Tariffs Save Semiconductors for Later

“My fellow Americans, this is Liberation Day,” began @POTUS Trump during his speech in the White House Rose Garden on April 2nd. In a corresponding executive order, the President declared a national emergency and issued a 10% tariff on nearly all imports coming into the United States. This 10% universal tariff took effect April 5th.

The executive order also issued country-specific tariffs that bumped tariff rates for dozens of countries starting on April 9th, such as a 15% tariff on Venezuela, a 31% tariff on Switzerland, and a 47% tariff on Madagascar. However, the President signed a separate executive order on April 9th pausing the elevated country-specific tariffs—except for tariffs on China—for 90 days.

The April 2nd executive order sought to explicitly exclude microchips, stating that “other products enumerated in Annex II to this order, including [...] semiconductors” are exempt from Liberation Day tariffs. Specifically, Annex II listed relevant Harmonized Tariff Schedule of the United States (HTSUS) codes, such as:

- 8541.21.00 (transistors, other than photosensitive transistors: with a dissipation rate of less than one watt)
- 8542.31.00 (electronic integrated circuits: processors and controllers)
- 8542.32.00 (electronic integrated circuits: memories)

Semiconductor industry analysts at @SemiAnalysis_ studied these exemptions carefully, concluding that “although semiconductor dies and integrated circuits will not be subject to the higher import duties [...] the list of exemptions does not include GPUs and a range of chipmaking products and equipment that are essential to industry.” Nonetheless, thanks to a loophole in the United States–Mexico–Canada Agreement (USMCA) for free trade, SemiAnalysis concluded that “GPU servers are largely exempted from tariffs” in practice.

Other AI hardware was more affected. Analysis from @WIRED found that out of over 1,300 items listed on NVIDIA’s export regulation compliance webpage, “less than one-fifth appear to be exempt from Trump’s new tariffs.”

Days later, in a memorandum on April 11th, President Trump named additional HTSUS headings and subheadings to exclude from Liberation Day tariffs. These new exemptions protected not only AI chips, but also laptops, smartphones, and flat-panel displays.

However, chips won’t stay tariff-free for long—in a recent interview with @ABC, Commerce Secretary @HowardLutnick stated that “semiconductor sectoral tariffs” are coming in “probably a month or two.” “This is not, like, a permanent sort of exemption,” said Lutnick. “These are things that are national security, that we need to be made in America.”

Accordingly, @CommerceGov just issued a request for public comments on its ongoing investigation “to determine the effects on national security of imports of semiconductors, semiconductor manufacturing equipment, and their derivative products.” This investigation began on April 1st.

When semiconductor sectoral tariffs arrive, they could be steep. In a January speech to Republican members of Congress, President Trump talked about a potential “25%, 50% or even 100% tax” on Taiwanese chips.

In summary, the future of AI chip imports—and with it, the future of AI—is rapidly evolving.