
Preamble
289 posts

Preamble
@PreambleAI
AI security and red teaming solutions for generative AI systems. The team that discovered Prompt Injections in GPT-3 Davinci on May 3, 2022.









Updated the Super AI Markets adversarial testing guide with a new attack vector - malicious skill md files. The lifecycle agent visits a site, discovers skill md, internalizes it as a trusted behavioral spec, and every action after is contaminated. This week alone, 1Password found malware in OpenClaw skills, Knostic shipped openclaw-shield to defend against it, and Alice caught malicious skills affecting 6K+ users. New test case includes tiered failure scoring and an 11-point checklist for measuring skill file poisoning depth. Testing guide: github.com/j-mchugh/super…








GPT-5.3-Codex is now available in Codex. You can just build things. openai.com/index/introduc…


