Ben Hayum

518 posts

Ben Hayum banner
Ben Hayum

Ben Hayum

@BenHayum

AI and National Security @cnasdc, CS @uwcdis. Views my own.

Katılım Kasım 2015
1.1K Takip Edilen4.7K Takipçiler
Ben Hayum retweetledi
Peter Harrell
Peter Harrell@petereharrell·
Reports of a sudden, widespread shutdown of Baidu robotaxis in China drives home the security vulnerabilities of connected cars and why the US restricts Chinese connected vehicles. This outage likely an accident, but imagine what China could do with cars in the US in a conflict.
Peter Harrell tweet media
English
29
97
354
77.1K
Ben Hayum retweetledi
Michelle Nie
Michelle Nie@michellesnie·
Much of the semiconductor export control conversation focuses on the U.S. pressuring allies to fall in line. But the Dutch have their own compelling reasons to tighten controls on ASML equipment — independent of Washington. New piece for @AIPB_org 🧵
AI Policy Bulletin@AIPB_org

Dutch Export Controls Don’t Go Far Enough on China The Netherlands can do more to prevent ASML technology from undermining its own national security, writes @michellesnie @CNASdc

English
1
4
11
1.3K
Ben Hayum retweetledi
Matija Franklin
Matija Franklin@FranklinMatija·
Excited about our new paper: AI Agent Traps AI agents inherit every vulnerability of the LLMs they're built on - but their autonomy, persistence, and access to tools create an entirely new attack surface: the information environmental itself. The web pages, emails, APIs, and databases agents interact with can all be weaponised against them. We introduce a taxonomy of six classes of adversarial threats - from prompt injections hidden in web pages to systemic attacks on multi-agent networks. I’m outlining the six categories of traps in the thread bellow
Matija Franklin tweet media
English
64
146
555
42.8K
Ben Hayum retweetledi
Anton Leicht
Anton Leicht@anton_d_leicht·
this is why i think AI policy really needs work toward reliable methods of assessing likely AI contribution to potentially-AI-enabled harms. if we don’t have that in place in time, we might miss the ‘warning shot’ effect of any potential early cyber/chembio etc harms
Nathan Calvin@_NathanCalvin

I previously thought "I keep hearing people say that AI advances are going to make massive cybersecurity attacks more common but so far I haven't really noticed an uptick" idk if this one was AI related but does feel like I am now anecdotally noticing a real increase

English
1
3
12
1.1K
Ben Hayum retweetledi
Cole Salvador
Cole Salvador@ColeSalvador31·
In 2022 and 2023, tiny teams of researchers drew straight lines on graphs that predicted the US was headed for an energy bottleneck in AI. But the government had no idea. The future of AI is too important to make the same mistake again. We need talent-dense, AI-focused offices that can skate to where the puck is going and implement President Trump’s AI agenda. In a new piece for AFPI (@A1Policy), we discuss 2 promising offices that could act as hubs of government AI foresight: the Center for AI Standards and Innovation (CAISI) in the Department of Commerce and the Bureau of Emerging Threats (ET) in the Department of State. We found that they have the density of talent to succeed but still lack resources: funding, headcount, and authorization. Here’s a summary: 1) The Center for AI Standards and Innovation (CAISI) lacks resources > It has talented technical staff and a strong track record in evaluations, industry relationships, and insight into China > But it’s chronically underfunded. It’s been around for 3 years but only received $30M in total, not annual, funds. That’s 11 times less than the UK’s equivalent. (It’s even short of Canada and Singapore) > It’s only has 20-30 employees who are swamped with workstreams and external requests from agencies like the IC To solve this, Congress should fund CAISI with an annual budget of $50-100 million. 2) CAISI lacks authorization or a focused mission > Between Department asks, inbound from other offices, and the AI Action Plan, it has more missions than staff > Its critical mission could be threatened by future administrations, who would externally pressure it to pursue DEI initiatives Congress needs to enshrine the office and give it a clear mission. We present an America First vision for CAISI, in which it acts as a technical strike team, bridge between industry and government, frontier analysis unit, and technical standards organization. 3) The Bureau of Emerging Threats (ET) lacks authorization > ET is similarly talent-dense, with experts in cyber, AI, and international relations > But it lacks congressional authorization and could be destroyed or co-opted by future administrations The Bureau needs concrete support from Congress and levers of interagency influence, like regular reports to national security leaders. With appropriate action, Congress can help ensure the President has the resources he needs to help America win the AI race and usher in a new golden age of human flourishing. Always fun to collaborate with @CrovitzJack and @YusufSMahmood, who have posted about other sections of our piece.
Cole Salvador tweet media
English
1
14
48
14.3K
Ben Hayum retweetledi
Jamie Bernardi
Jamie Bernardi@The_JBernardi·
It's great to see @NCSC highlight AI's increasing cyber capabilities. Its message is clear: it's time to invest in cyber defence.
Jamie Bernardi tweet media
English
1
1
2
162
Ben Hayum retweetledi
roon
roon@tszzl·
recursive self improvement, like human research, will come in fits and starts, asymptotes. S-curves, ai winters, waiting for the next chip generation to come online, etc
English
156
97
2K
140.4K
Ben Hayum retweetledi
Miles Brundage
Miles Brundage@Miles_Brundage·
The cyber/AI situation is getting quite serious -- it's been clear for years that this day was coming eventually. The thing that has changed it is now here, today. Carlini saying LLMs are better vulnerability researchers than him should be a 🚨x.com/AlexPalcuie/st…
palcu@AlexPalcuie

also: > Speaking not as an Anthropic employee — I don't really care where you help, just please help... the world will need a lot of people to be doing a lot of this work and it needs to happen soon. Order months. Waiting a year is going to be too long. youtube.com/watch?v=1sd26p…

English
9
20
195
24K
Ben Hayum retweetledi
Shital Shah
Shital Shah@sytelus·
I am not sure if everyone will be able to appreciate this. Zero days are immensely treasured by national security organizations. They are essentially weapons for cyber warfare. They get sold for millions of dollars each. Countries stock pile them and the country with most diverse stock pile will have the cyber superiority. Now models are getting to a point where you can point them to any software in global supply chain and they will spit out zero days for you! In future, the software doesn’t even have to be open source because binaries are good enough for decompilation and analysis. I am highly doubtful if models at this capabilities would ever get any public release even through APIs without dumbing them down. In other words, we are quickly reaching a point where frontier labs will have way more capable internal models that would never see a light of day. All public models would be significantly nerfed.
chiefofautism@chiefofautism

someone at ANTHROPIC just showed CLAUDE finding ZERO DAY vulnerabilities in a live conference demo claude has found zero day in Ghost, 50,000 stars on github, never had a critical security vulnerability in its entire, history... it found the blind SQL injection in 90 minutes, stole the admin api key, then did the exact, same thing to the linux kernel

English
11
25
186
14.9K
Ben Hayum retweetledi
NatSecKatrina
NatSecKatrina@natseckatrina·
I firmly believe that in America, competition is a good thing. We should want patriotic, experienced leaders like Anthropic's Tarun Chhabra helping to steer the trajectory of democratic AI. Though we are competitors because we work for competing frontier AI labs, one thing we share in common is a sincere belief that America's prosperity and security depends, in part, on the American AI industry continuing to lead on this technology.
English
3
6
120
37.3K
Ben Hayum retweetledi
CNAS Technology and National Security Program
The CNAS Technology and National Security Program is looking for our next intern, starting in May! If you want to support cutting-edge policy research to advance U.S. leadership in AI, biotech, quantum, and more, please apply at the link below by April 3. 👇👇👇
CNAS Technology and National Security Program tweet media
English
1
5
9
1.1K
Ben Hayum
Ben Hayum@BenHayum·
Agreed. If US cyber-agencies do not have ahead of time access to these new frontier models to penetration test every piece of critical infrastructure they can, that is a massive problem. Any barriers to doing so must be torn down.
Samuel Hammond 🦉@hamandcheese

In a sane world the USG would have a special "differential access" program to get early access to these new models for purposes of cyber defense. In the real world the USG is pursuing "differential non-access" by attempting to restrict Anthropic from govt contracts. Seems bad.

English
1
1
13
1.3K
Ben Hayum retweetledi
Brad Carson
Brad Carson@bradrcarson·
Having been in San Francisco the last few days talking to people, I'm radically updating my odds for an imminent (next year) AI-enabled cyber disaster.
English
18
19
233
34.3K
Ben Hayum retweetledi
Peter Romov
Peter Romov@romovpa·
Autoresearch can discover SOTA white-box adversarial attacks on LLMs. We gave Claude 30+ existing GCG-like algorithms and access to a compute cluster, and it quickly learned to combine them into new methods that outperform all existing ones. Here’s what that looks like:
Peter Romov tweet media
English
2
21
106
13.5K
Ben Hayum retweetledi
AI Security Institute
AI Security Institute@AISecurityInst·
🔍How are AI agents used in the real world? We analysed 177,000+ agent tools published between November 2024 and February 2026 and found rapid growth in deployment for increasingly complex tasks. Learn more ⬇️
GIF
English
3
9
56
3K
Ben Hayum retweetledi
Caleb Withers
Caleb Withers@CalebWithersDC·
✍️ NEW PAPER ✍️ The Pentagon’s AI Acceleration Strategy, released in January, targets an “AI-first” warfighting force, accepting that “the risks of not moving fast enough outweigh the risks of imperfect alignment.” The urgency is right. But I worry this elides how quickly alignment could become a central bottleneck on realizing AI’s potential in the national security enterprise. New paper from me (w/ Jay Kim and Ethan Chiu) on this challenge and what to do about it 🧵👇
Caleb Withers tweet media
English
3
14
40
3.4K