JD Work

@HostileSpectrum
Former intel, now academic @NDU_CIC, @TheKrulakCenter, @SIWPSColumbia @ColumbiaSIPA, @CyberStatecraft, @ElliottSchoolGW, @PAISWarwick. Apolitical, views=own
This is an excellent paper from the folks at @AISecurityInst and worth reading. I will have to read it again, but this particular point is a good one, and I think the takeaway is important. Cyber attack chains across a set of enterprise systems (simulated or real) have a finite number of states that, at a high level, are all well represented in training data, so the more tokens you spend on a frontier reasoning model, the more of the state space between those chains you can explore. The finding that gains were log-linear in token spend (linear gains require exponentially more tokens) might improve through model architecture alone, especially if newer architectures require fewer context compactions overall. Still, the cost for these outcomes is extremely low, and that is a very relevant takeaway for policymakers. The ICS example is less well represented in the training data, which explains why the model made less progress overall. With the right expert prompting this is likely not a hurdle in practice. But expert prompting falls back on human expertise.
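The log-linear point above can be sketched numerically. A minimal illustration, with made-up coefficients (not taken from the paper): if success grows linearly in log2 of tokens spent, then each doubling of token spend buys the same constant increment, i.e. linear gains cost exponentially more tokens.

```python
import math

# Hypothetical log-linear scaling curve. A (intercept) and B (slope per
# doubling) are illustrative assumptions, not values from the AISI paper.
A, B = 0.10, 0.05

def success_rate(tokens: int) -> float:
    """Assumed success rate as a linear function of log2(token spend)."""
    return A + B * math.log2(tokens)

# Evaluate at successive doublings of token budget: 2^10 .. 2^13 tokens.
gains = [success_rate(2 ** k) for k in (10, 11, 12, 13)]

# Each doubling adds the same constant gain B, so a fixed absolute
# improvement requires exponentially more tokens.
deltas = [gains[i + 1] - gains[i] for i in range(3)]
print(deltas)
```

Under this toy curve, every doubling yields the same +0.05 gain, which is the sense in which low marginal cost per token still translates into steep total cost for large absolute improvements.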


Most of the DC arms control community is reflexively anti-military action, under any circumstances. To them, U.S. diplomatic failure under Republicans is always to blame, regardless of the facts or of Iran's posture and threatening nuclear advances. This goes back further than the early 2010s, when the Ploughshares Fund asked grantees to sign on to talking points about how we can "live with" a nuclear Iran, which I personally witnessed. No serious analyst can rule out military force as a means of stopping adversaries from developing nuclear weapons. Ruling it out is simply ideology, and a dangerous one.


