
I built Rein because the AI safety libraries I could find only validate content: whether an LLM input/output
contains PII or prompt-injection strings. But once an agent is actually doing things (placing trades,
sending emails, calling paid APIs), content validation is too late. You need something that can cut off a
misbehaving agent mid-flight based on observed outcomes.
Rein sits inline with any autonomous system and gates every action. It classifies the current "regime"
(normal / stressed / shock) from live signals, scores each action-source × action-type pair with Bayesian
decay, and halts the system when drawdown, error rate, stale state, rate-limit storms, or anomaly detection
trips a threshold.
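Roughly the shape of that loop, with a plain exponential decay standing in for the Bayesian scoring
(Gate and its methods are illustrative names, not Rein's actual API):

    # Sketch of the gating pattern; hypothetical names, not Rein's API.
    from dataclasses import dataclass, field

    @dataclass
    class Gate:
        max_error_rate: float = 0.2   # halt above this decayed error rate
        decay: float = 0.9            # how fast old outcomes fade
        scores: dict = field(default_factory=dict)
        halted: bool = False

        def allow(self, source: str, action: str) -> bool:
            return not self.halted

        def record(self, source: str, action: str, ok: bool) -> None:
            key = (source, action)
            # exponentially decayed failure count per source x action pair
            self.scores[key] = self.decay * self.scores.get(key, 0.0) + (0 if ok else 1)
            # normalize by the steady-state maximum, 1 / (1 - decay)
            if self.scores[key] * (1 - self.decay) > self.max_error_rate:
                self.halted = True    # cut the agent off mid-flight

    gate = Gate()
    for attempt in range(30):
        if not gate.allow("trader-1", "place_order"):
            print(f"halted after {attempt} actions")
            break
        ok = attempt % 2 == 0         # stand-in for the real action's outcome
        gate.record("trader-1", "place_order", ok)

With a 50% failure rate the breaker trips after a handful of actions; successes decay the score back down
instead of resetting it.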
Two things I haven't seen in other governance libraries:
1. Natural-language policy compiler. You write "Cap each caller at 8 requests per second with bursts of 16"
and it compiles to an enforceable config; see the token-bucket sketch after this list.
2. Adversarial red-team simulator. Before you ship a policy, the library attacks it for you. Five baseline
attacks (runaway loops, deny storms, enumeration, portfolio drain, cost bombs), all with a 100% catch rate
against the default policy; a minimal runaway-loop attack is sketched below.
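For a sense of what that first one enforces, "8 requests per second with bursts of 16" compiles down to
token-bucket math like this (a hand-rolled sketch, not Rein's compiler output format):

    import time

    class TokenBucket:
        """8 tokens/sec refill, 16-token capacity: '8 rps with bursts of 16'."""
        def __init__(self, rate: float = 8.0, burst: float = 16.0):
            self.rate, self.burst = rate, burst
            self.tokens = burst
            self.last = time.monotonic()

        def try_acquire(self) -> bool:
            now = time.monotonic()
            # refill proportionally to elapsed time, capped at the burst size
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    buckets: dict[str, TokenBucket] = {}      # one bucket per caller

    def allow(caller: str) -> bool:
        return buckets.setdefault(caller, TokenBucket()).try_acquire()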
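And a minimal version of the runaway-loop baseline attack, run against the allow() helper from the
token-bucket sketch above (the real simulator's harness and reporting are Rein's own):

    # Hammer one caller in a tight loop; a sound 8 rps / 16-burst policy
    # should grant about the burst size and deny the rest.
    def runaway_loop_attack(caller: str, n: int = 200) -> int:
        return sum(1 for _ in range(n) if allow(caller))

    granted = runaway_loop_attack("attacker")
    print(f"runaway loop: {granted}/200 requests granted")
    assert granted <= 20, "policy failed to contain the attack"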
It was extracted from a production Kalshi trading bot, where it caught 12 runaway trades in its first week
of shadow mode. Framework-agnostic: it works for LLM agents, scrapers, RPA, anything that takes actions.
Dual-licensed: AGPL-3.0 for OSS and self-hosted use; commercial license for proprietary/SaaS use.
Repo: github.com/Ai-Reign/Rein-…
PyPI: pip install rein-ai
Would love feedback, especially on the attack library. What should attack #6 be?