@robrichardson_@garrytan The hard part is still detecting injections. Governance and observability are fallbacks to runtime security failures. They are not your first line of defense
@garrytan The hard part in production isn't detecting injections, it's knowing which tool calls you'd actually care about blocking. Most enterprise agent stacks don't have enough observability to even know when they got hit.
Simaril (YC Spring 2026) is SOTA prompt injection defense for LLMs.
This is the missing link for OpenClaw for Enterprise and all agents working on mission-critical data and workflows.
The cofounders were the team that stopped billions of dollars worth of damages at Amazon.com and AWS.
#performance" target="_blank" rel="nofollow noopener">silmaril.dev/#performance
@ycombinator@Silmarildev@aumup001 wait does the retraining loop run on your infra or theirs? 'self-healing' is doing a lot of work in that sentence and I wanna know what's actually happening under the hood
Silmaril (@Silmarildev) is the first self-healing prompt injection defense.
It catches 2x more attacks 10x faster than leading defenses, and retrains continuously to protect your full AI stack, including agents like Claude Code and OpenClaw.
Congrats on the launch, @aumup001 and @EduardoVel36291!
ycombinator.com/launches/Pvl-s…
@ycombinator@Silmarildev@aumup001@aumup001 Can we use this on smaller scale AI chatbot apps or is this meant for enterprise? Doesn't seem to be a clear answer on the website
@ycombinator@Silmarildev Silmaril was made for cyber so we don't mention our synthetic data generation process. For AI folks, we use simulations and nested RL environments to create human and superhuman hacking data that allows our tiny classifier to beat reasoning models at detecting threats