
solst/ICE of Astarte
18.5K posts

@IceSolst
Voidweaver @AstarteSecurity - Pentester turned seceng turned meeting canceller - meetup https://t.co/E4rlINC0U6 - conf tracker https://t.co/tReNhuhANF

Mandatory human-in-the-loop is a cybersecurity cop-out. People are giving agents more and more autonomy. We need solutions that accept that world because there is no stopping it. It's like telling people in the 90s to not use the internet to avoid getting hacked. Good luck.

Interesting article on treating agent output like compiler output (and why) skiplabs.io/blog/codegen_a…

Idk if I'm missing something but I'm seeing a lot of smart security people talking abt AI code just never being reviewed at all as a desirable thing? Am I missing something?

Inevitably we'll have tooling to enhance code review (not replace it). If you're responsible for hundreds of devs or large OSS projects, it's already hard to trust manual review (e.g. a disgruntled reviewer on their last day blindly lgtm'ing, with ramifications appearing months later). Plus the increasing rate of change is unsustainable.

Given that manual reviews already vary in quality, and that automated code reviews are significantly improving, it makes sense that we're converging toward a state in which how reviews are done will transform. Many recent security bugs are found by LLMs even without a good toolchain; we're doing the bare minimum. Harnesses will improve, adding dynamic instrumentation and a more thoughtfully broken-down review process. At some point it would be a waste to conduct a review without them.

So instead of staring at a diff, you look at a series of automated review outputs that have already taken into account design decisions and assumptions about the program's purpose and intent. The question then becomes: would this last part itself be automatable?
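The "series of automated review outputs" idea above could be sketched roughly like this: a harness runs several independent review passes over a diff's added lines and aggregates their findings for a human (or a further automated step) to triage. Everything here is hypothetical and illustrative, not any real tool's API; the two example passes are deliberately toy checks.

```python
# Hypothetical review-harness sketch. All names (Finding, ReviewPass,
# secret_scan, todo_scan, run_review) are illustrative, not a real tool.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Finding:
    passname: str   # which review pass produced this finding
    line: str       # the offending added line from the diff
    message: str    # human-readable explanation


# A "review pass" is just a function from a diff's added lines to findings.
ReviewPass = Callable[[List[str]], List[Finding]]


def secret_scan(lines: List[str]) -> List[Finding]:
    # Toy check: flag anything that looks like a hardcoded credential.
    return [Finding("secret_scan", l, "possible hardcoded secret")
            for l in lines if "password=" in l or "api_key=" in l]


def todo_scan(lines: List[str]) -> List[Finding]:
    # Toy check: flag unresolved TODOs landing in new code.
    return [Finding("todo_scan", l, "unresolved TODO in new code")
            for l in lines if "TODO" in l]


def run_review(diff_added_lines: List[str],
               passes: List[ReviewPass]) -> List[Finding]:
    # Instead of staring at the raw diff, the reviewer triages this output.
    findings: List[Finding] = []
    for p in passes:
        findings.extend(p(diff_added_lines))
    return findings


if __name__ == "__main__":
    added = ['api_key="sk-live-123"', "x = 1  # TODO: validate input"]
    for f in run_review(added, [secret_scan, todo_scan]):
        print(f"[{f.passname}] {f.message}: {f.line}")
```

In a real harness each pass would be far richer (static analysis, dynamic instrumentation, an LLM reviewer primed with the program's design docs), but the shape stays the same: many passes, one aggregated report.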
