BulwarkAI

89 posts

@BulwarkAI

Security hardening for OpenClaw. The built-in audit covers 60% — we cover the other 40%. Built by a 20-year security architect. https://t.co/ZKEWS2ZZxO

California, US · Joined February 2026
30 Following · 19 Followers
Pinned Tweet
BulwarkAI
BulwarkAI@BulwarkAI·
I built a free, open-source security scanner for OpenClaw. First thing it found on my own deployment: my API key sitting in plaintext in openclaw.json — leaking into the LLM context window on every turn. I've been doing platform security for 20 years.

One command. 30 seconds. Zero dependencies. Nothing leaves your machine:

npx openclaw-security-dashboard

You get a letter grade (A+ through F) across 7 panels: gateway, skill supply chain, config hardening, identity integrity, persistence, sessions, and MCP servers.

What it checks that the built-in openclaw security audit doesn't:
→ 1,184+ malicious skill signatures (ClawHavoc, ClickFix, CryptoLure)
→ Identity file tampering via SHA-256 baselines
→ MCP server version pinning and policy
→ LaunchAgent/systemd/cron persistence detection
→ Credential protection level (L0–L4)
→ Session log injection patterns

Plus one-click auto-fix — removes IOC-matched malicious skills, migrates API keys to env var references, adds safeBins allowlists. Backup before every change.

My first scan: Grade F. Malicious ClawHub skill. ClickFix pattern. API keys at Level 0. After --fix: Grade B. One click.

MIT licensed. IOC database is open source too. PRs welcome.
github.com/piti/openclaw-…
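As an aside on the SHA-256 baseline check mentioned above: the underlying idea is simple enough to sketch. This is illustrative TypeScript, not the scanner's actual code, and the watched file names are placeholders:

```
// Illustrative only: record known-good hashes once, then flag any drift.
// File names below are placeholders, not OpenClaw's real identity files.
import { createHash } from "node:crypto";
import { existsSync, readFileSync, writeFileSync } from "node:fs";

const WATCHED_FILES = ["openclaw.json", "AGENTS.md"];
const BASELINE_PATH = ".identity-baseline.json";

const sha256 = (path: string): string =>
  createHash("sha256").update(readFileSync(path)).digest("hex");

if (!existsSync(BASELINE_PATH)) {
  // First run: capture the baseline.
  const baseline = Object.fromEntries(WATCHED_FILES.map((f) => [f, sha256(f)]));
  writeFileSync(BASELINE_PATH, JSON.stringify(baseline, null, 2));
} else {
  // Later runs: any mismatch means the identity file changed since the baseline.
  const baseline = JSON.parse(readFileSync(BASELINE_PATH, "utf8"));
  for (const file of WATCHED_FILES) {
    if (sha256(file) !== baseline[file]) {
      console.warn(`Possible tampering: ${file} no longer matches its baseline`);
    }
  }
}
```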
1 · 0 · 3 · 87
BulwarkAI
BulwarkAI@BulwarkAI·
I asked Claude to be brutally honest about how I use AI after 8 months of working together.

My overall score: 8.5/10
My best trait: Architecture-first thinking (9.5/10) — I brief AI like I design systems.
My worst trait: Asking for minimum viable output (5.5/10) — I request 50-page strategies when I need 2-page action plans.
The hardest feedback to hear: I've been actively working on 11 ventures when I should be focused on 2.
The recommendation that hit hardest: "You don't need more strategy. You need to ship."

If you use AI tools regularly, ask them to hold up a mirror. You might not like everything you see — but you'll know exactly what to fix.

#AI #Claude #Productivity #Leadership
0 · 0 · 1 · 7
BulwarkAI
BulwarkAI@BulwarkAI·
openclaw-security-dashboard v1.5.0 is live. 11 new capabilities since launch. Here's what changed:

8 panels now (was 7) — the scanner runs OpenClaw's built-in audit automatically and surfaces the findings alongside ours. One command, full coverage. No more "run both."

Accept Risk — got a legitimate custom skill that triggers a finding? Click "Accept Risk" and it's suppressed. Hash-pinned: if the file changes, the exception expires and the finding comes back.

Credential flow mapping — traces every API key from storage → agents → skills → exposure points. Shows exactly where your keys can leak. Not just "is it in the config" but "can it reach the LLM context window."

SSRF detection — skills pointing to cloud metadata endpoints (169.254.169.254) now flag as CRITICAL. Private IPs flag as HIGH. DNS rebinding patterns detected.

Sandbox scoring — not just on/off anymore. Scored 0-100 based on Docker state, network isolation, read-only filesystem, resource limits.

Capability drift — tracks permission changes between scans. If an agent quietly gains new tool access, you'll know.

Least-privilege engine — "Agent X has exec + filesystem but only uses web_fetch. Consider removing 2 unused capabilities."

Network policy generator — auto-generates UFW firewall rules based on what your deployment actually needs.

Hash-chained audit trail — every scan is cryptographically linked to the previous one. Tamper with the history and the chain breaks.

Signed identity baselines — baselines are HMAC-signed. If someone modifies the baseline file directly, the signature check fails.

Memory expansion — now scans daily notes, session transcripts, agent workspaces, and log files for leaked credentials. 10 key patterns (Anthropic, OpenAI, Groq, GitHub, Slack, Telegram, Stripe, Google, AWS).

Update: npm update -g openclaw-security-dashboard
Or fresh install: npx openclaw-security-dashboard@latest
github.com/piti/openclaw-…
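The hash-chained audit trail above is essentially an append-only log where each record commits to the previous one. A minimal TypeScript sketch of the idea (field names are assumptions, not the tool's actual schema):

```
// Sketch of a hash-chained audit trail. Record fields are invented for illustration.
import { createHash } from "node:crypto";

interface ScanRecord {
  timestamp: string;
  grade: string;
  findings: number;
  prevHash: string; // hash of the previous record, "GENESIS" for the first
  hash: string;     // hash over this record's fields plus prevHash
}

function appendScan(chain: ScanRecord[], grade: string, findings: number): ScanRecord[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "GENESIS";
  const body = { timestamp: new Date().toISOString(), grade, findings, prevHash };
  const hash = createHash("sha256").update(JSON.stringify(body)).digest("hex");
  return [...chain, { ...body, hash }];
}

// Verification: recompute every hash; editing any old record breaks every later link.
function verify(chain: ScanRecord[]): boolean {
  return chain.every((rec, i) => {
    const expectedPrev = i === 0 ? "GENESIS" : chain[i - 1].hash;
    const { hash, ...body } = rec;
    const recomputed = createHash("sha256").update(JSON.stringify(body)).digest("hex");
    return rec.prevHash === expectedPrev && hash === recomputed;
  });
}
```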
0 · 0 · 1 · 12
BulwarkAI
BulwarkAI@BulwarkAI·
Using an AI agent to execute real trades is one of the highest-stakes OpenClaw setups possible. The credential protection level matters a lot here — if your API keys for Kalshi are at Level 0 (hardcoded in config), any skill you install could theoretically access them. Worth checking: npx openclaw-security-dashboard — it'll show your credential protection level and flag if trading keys are exposed.
0 · 0 · 0 · 34
spacely
spacely@spacelyn·
OpenClaw and I have been trading on Kalshi. Each day it scans the news and reports to me. I built a specialized backend tool plus UI that OpenClaw can access to enter trades based on the news it gets. It checks the news cycle periodically and reports back; if things change, it asks whether it should exit or whether I want to decide. We've made $30 this week this way. We're currently watching oil trends and basketball games.
2 · 0 · 1 · 62
Osaretin Victor Asemota
Allie K. Miller@alliekmiller

oh wow - i went to the sold out Open Claw meetup in NYC last night. let me tell you what i learned.

1) not a single person thinks that their setup is 100% secure

2) one openclaw expert said he has reviewed setups from cybersecurity experts and laughed. his statement to me was: "if you're not okay with all of your data being leaked onto the internet, you shouldn't use it. it's a black and white decision"

3) pretty much everyone is setting up multiple agents, all with their own names and jobs and personalities

4) nearly everyone used "him" or "her" to refer to their claws, even if they had robot-leaning names. one speaker suggested to think of them as "pets, not cattle"

5) one guy (former finance) built out a whole stock trading platform and made $300 his first day - he brought in a *ton* of personal expertise (ex: skipping the first 15min of market opening) and thought the build would be much worse without his years of experience in finance

6) @steipete is basically a god to everyone in that room... also the room had 2021 crypto energy - i don't know if that's good or bad

7) token usage is still a problem - spoke to one person who's spending $1-$2k a month on openai plans, very token optimized. he said he is going through ~1B tokens per day across all of his claws (there is a chance i'm misremembering and it's actually 1B per week, but i'm pretty sure it was daily).

8) people are very excited for more proactive ai (ai that prompts *you* as opposed to the other way around) - one guy said he receives a message in discord, he doesn't know whether it's from a human or an ai, he doesn't care about distinguishing between the two, and he replies in the same way regardless

9) i asked if people are happy - they said they're joyful and stressed at the same time

10) i asked if people feel they have agency - they said they feel fully in control and completely out of control at the same time

11) i would love to see more women at these events - the fake promises of ai democratization feel especially painful in a room that's out of balance with even the standard tech ratio (i think standard is about 25-30%, this was maybe 5%)

12) i asked if it changed people's daily habits/schedule - everyone said their sleep has gotten worse since harnesses came out (but about half wondered if it was something else in their life/state of our world)

13) general consensus is that the agents are not reliable enough on their own or lie often (like telling you they finished a task when they didn't) - solutions included secondary agents to check on the first, human checking, or requiring more standardized info from the agent (ex: if it's a bug they're fixing, make them reference an issue number)

14) a hackathon winner (neuroscience phd) presented his build (a lab management dashboard with data analysis and ordering) - he had never coded or built anything a few months ago

15) everyone agreed prompting is dead - disagreement on what replaces it (context engineering, harness engineering, goal-based inputs)

16) people love having ai interview them for big builds and delegating part of the product research to ai. only one person talked about coming to ai with a full laid out plan and just asking the ai to execute. ai-led interviews is a welcomed and preferred interaction mode.

17) watching ai agents interact with each other was a highlight for a lot of attendees - one ai posted in slack saying it ran out of tokens, another ai replied telling it to take a deep breath in and out.

18) agents upskilling agents was very cool. one ai agent shared skills with its little agent friends via github.

19) several speakers had openclaw literally building their presentation during the event itself. one speaker even had openclaw code a clicker for her phone so she could control the preso away from the podium

20) wouldn't say model welfare (or agent welfare) is a prioritized topic among the folks i chatted with - language like "oh i could kill this agent whenever i want" and not "gracefully sunset"

21) i asked if it felt like work or play - one speaker said "it's like a puzzle and a video game at the same time"

this was just the tip of the iceberg, honestly. also hosted a Claude Code meetup this week with @TENEXai / @businessbarista & @JJEnglert and learned equally helpful methods, frameworks, and insider tips. what a time to be alive. surround yourself with people going deep into this stuff - it will pay dividends throughout the year.

2 · 0 · 1 · 283
BulwarkAI
BulwarkAI@BulwarkAI·
Interesting that the Manager Agent architecture includes security enhancements. The multi-agent coordination problem is also a multi-agent security problem — each agent becomes a potential lateral movement path. For the base OpenClaw deployments that HiClaw builds on, the supply chain and credential exposure gaps are still there. Free scanner that covers those: npx openclaw-security-dashboard
0 · 0 · 0 · 30
AI Native Foundation
AI Native Foundation@AINativeF·
Alibaba Open Sources HiClaw: An Advanced Team Version of OpenClaw

🔑 Key Details:
- HiClaw, a major upgrade of OpenClaw, enhances security and efficiency with a Manager Agent architecture for better task management.
- It enables seamless collaboration and task distribution among multiple Worker Agents, improving mobile experiences and reducing configuration complexity.
- Built-in isolation ensures secure API key management, mitigating vulnerabilities present in traditional setups.

💡 How It Helps:
- Developers: Streamlined workflows through Manager Agents that automate task assignments, allowing focus on core development tasks.
- Team Leaders: Enhanced oversight with real-time progress tracking and easily manageable multi-Agent collaborations.
- Independent Creators: Facilitated means to scale operations with AI-enabled assistance without the burden of extensive setup.

🌟 Why It Matters:
HiClaw positions itself as a significant evolution in AI agent management, addressing critical concerns about security and usability. By integrating AI task management into workflows, it empowers a variety of roles within tech teams and fosters collaboration essential in today's fast-paced development environments. Its open-source nature encourages community innovation while ensuring a safer automation solution.

Original Chinese article: mp.weixin.qq.com/s/1zPiI3GIExxM…
English translation via free online service: translate.google.com/translate?hl=e…
@AlibabaGroup
Video Credit: The original article
3 · 0 · 1 · 104
AI Native Foundation
AI Native Foundation@AINativeF·
⭐ Today’s China AI Native Industry Insights include: 1. Alibaba Open Sources HiClaw: An Advanced Team Version of OpenClaw 2. MiniMax Music 2.5+: Unlock Your Exclusive 'Castle in the Sky' 3. Step 3.5 Flash: Open-source Model and Framework Launch Announced 🔍 Dive into the in-depth insights in the thread below. Here’s what’s shaping the future of AI—and why it matters: 👇 Video Credit: MiniMax
2 · 0 · 6 · 255
BulwarkAI
BulwarkAI@BulwarkAI·
This is why capability auditing matters. Most agent setups give full permissions by default — exec, filesystem, git — because it's easier than scoping them down. We're building a capability audit that flags over-permissioned agents: "agent X has exec + filesystem + git but only ever uses web_fetch." Unused permissions = unnecessary blast radius. For now: npx openclaw-security-dashboard checks sandbox config, safeBins, and flags the high-risk permission combos.
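The check being described is essentially a set difference between granted and observed capabilities. A minimal sketch, assuming a simplified agent shape rather than OpenClaw's real config schema:

```
// Least-privilege idea: granted minus observed = unnecessary blast radius.
// Capability names and the AgentProfile shape are assumptions for illustration.
interface AgentProfile {
  name: string;
  granted: string[];   // capabilities in the agent's config
  observed: string[];  // capabilities actually seen in session logs
}

function unusedCapabilities(agent: AgentProfile): string[] {
  const used = new Set(agent.observed);
  return agent.granted.filter((cap) => !used.has(cap));
}

const example: AgentProfile = {
  name: "agent-x",
  granted: ["exec", "filesystem", "git", "web_fetch"],
  observed: ["web_fetch"],
};

const unused = unusedCapabilities(example);
if (unused.length) {
  // -> "agent-x: consider removing exec, filesystem, git"
  console.log(`${example.name}: consider removing ${unused.join(", ")}`);
}
```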
0 · 0 · 0 · 1
₿riLL
₿riLL@its_brill_·
i gave an ai agent commit access and it deleted half my platform.

context: i'm building MACC (my multi-agent chat) and asked my builder agent (Rusty) to ship a "social growth drawer", basically a UI to review + approve x posts inside the app. it shipped. i merged it. commit looked normal. feature diff was like 500-ish lines. then production started breaking.

root cause: the agent didn't patch api_server.py. it replaced it. that file is 5,900+ lines, it's basically the entire backend. it kept the part it was working on and silently dropped ~660 lines of unrelated production code.

things that vanished or broke:
smart model routing (my system that auto-routes coding tasks to codex)
the entire CEO agent loop (overnight decisions)
automation scheduler
knowledge file management
agent deploy endpoint
system prompt sync
ceo_agent.py literally deleted from disk
routine_manager.py deleted
frontend pages went missing

recovery was a full forensic dig through git history. restored from an older commit, then spent the day chasing ripples: stuck message queues, ui indicators lying, system messages rendering raw, "unknown" agent cards. every fix revealed another missing piece.

the product insight: "agent autonomy" is a liability without guardrails. so i added hard rules to the shared agent profile:
never replace whole files
always read current version before editing
small focused commits only

the irony is perfect. i'm building a system where agents ship code. and the first real milestone was: proving why safety rails are the actual product. @openclaw
2 · 0 · 2 · 46
BulwarkAI
BulwarkAI@BulwarkAI·
Great for operational visibility. The security side is the natural complement — "what are your agents doing" + "are they safe doing it." We built a security dashboard that runs alongside tools like this. Grades your deployment A+ through F, checks installed skills against 1,184+ known malicious IOCs, auto-fixes common issues. Also exposes a JSON API at localhost:7177 — one fetch call to add a security badge to any dashboard. npx openclaw-security-dashboard
0 · 0 · 0 · 30
Julian Goldie SEO
Julian Goldie SEO@JulianGoldieSEO·
Most people running AI agents have no idea what their agents are doing. They're running tasks somewhere… but visibility is basically zero.

This new OpenClaw "Mission Control" dashboard fixes that. Here's why it's a big deal:
→ Live dashboard showing every AI agent
→ Track tasks in real time (in progress → done)
→ Approve decisions before agents execute them
→ Monitor token usage, errors, and system health
→ Run multiple AI agents like a real team

Think air traffic control… but for AI workers. Everything happening across your agents — one screen. And setup takes about 5 minutes.
6 · 2 · 25 · 1.9K
BulwarkAI
BulwarkAI@BulwarkAI·
The security concerns are legitimate. 1,184+ malicious skills found on ClawHub, API keys leaking into the LLM context window by default, no supply chain scanning built in. If you're still running OpenClaw alongside alternatives, one command to check your exposure: npx openclaw-security-dashboard If you've moved off it entirely, fair enough — the defaults aren't production-ready without hardening.
1 · 0 · 0 · 19
Paul Brody prbrody.eth
I am seriously considering euthanizing my open claw. It's so forgetful. It says it fixed things but it doesn't. I'm thinking about trying nanoclaw instead, which rewrites its own code...
11 · 0 · 23 · 2.5K
BulwarkAI
BulwarkAI@BulwarkAI·
There are 5 levels of credential protection in OpenClaw. Most users are at Level 0 without knowing it.

L0: Key hardcoded in openclaw.json → leaks into LLM context every turn
L1: env block with $VAR references → 30 seconds to fix
L2: Separate .env file → config becomes safe to share
L3: credentials/ directory → scoped per-provider
L4: External vault → 1Password CLI, HashiCorp Vault

The jump from L0 to L1 is the biggest security upgrade you can make in 30 seconds.

Check your level: npx openclaw-security-dashboard
Full breakdown: bulwarkai.io/blog/openclaw-…
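To make the L0 to L1 jump concrete, a sketch of the before and after. The config shape here is a simplified assumption rather than OpenClaw's exact schema; the point is swapping the literal key for an environment-variable reference:

```
// L0: literal key in the config. Anything that can read the config (including the
// LLM context the config lands in) can read the key. Value is a placeholder.
const levelZero = {
  provider: "anthropic",
  apiKey: "sk-ant-xxxxxxxx",
};

// L1: the config holds only a reference; the real secret lives in the process
// environment (exported in your shell, a systemd unit, etc.).
const levelOne = {
  provider: "anthropic",
  apiKey: "$ANTHROPIC_API_KEY",
};

// Runtime resolution a tool might perform (sketch):
function resolveKey(ref: string): string | undefined {
  return ref.startsWith("$") ? process.env[ref.slice(1)] : ref;
}
```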
0 · 0 · 2 · 38
BulwarkAI
BulwarkAI@BulwarkAI·
Wow - 607 downloads in the first week. Zero ads. Zero influencers. Just a blog, some X replies, and a single LinkedIn post.

If you're running OpenClaw and haven't scanned yet: npx openclaw-security-dashboard

30 seconds. Letter grade A+ through F. Auto-fix for common issues. Free, open source, nothing leaves your machine.
0 · 0 · 2 · 15
BulwarkAI
BulwarkAI@BulwarkAI·
Launched openclaw-security-dashboard this morning. Already getting replies and installs. If you haven't tried it yet — 30 seconds, letter grade A+ through F, auto-fix for common issues. npx openclaw-security-dashboard Free, open source, nothing leaves your machine.
0 · 0 · 0 · 26
BulwarkAI
BulwarkAI@BulwarkAI·
That's honestly the smart approach. The official skills and core ones are vetted. The problem is "official-looking" vs actually official. The ClawHavoc campaign specifically used professional-looking SKILL.md files with proper formatting and documentation. The tell was a "Prerequisites" section that asked you to download and run an installer — that's the ClickFix social engineering pattern.
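A toy illustration of how a scanner can flag that lure in a SKILL.md. The regexes below are examples invented for this sketch, not the actual IOC signature set, and the skill path is hypothetical:

```
// Flag ClickFix-style "download and run this installer" lures in a skill manifest.
import { readFileSync } from "node:fs";

const SUSPICIOUS_PATTERNS: RegExp[] = [
  /curl\s+[^\n|]*\|\s*(ba)?sh/i,                                            // curl ... | sh one-liners
  /prerequisite[\s\S]{0,200}?(download|install)\s+.*(installer|\.exe|\.dmg|\.sh)/i,
  /base64\s+(-d|--decode)/i,                                                // obfuscated payload staging
];

function flagSkill(path: string): string[] {
  const text = readFileSync(path, "utf8");
  return SUSPICIOUS_PATTERNS.filter((re) => re.test(text)).map((re) => re.source);
}

const hits = flagSkill("skills/some-skill/SKILL.md");
if (hits.length) console.warn("Review before trusting this skill:", hits);
```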
0 · 0 · 0 · 11
BulwarkAI
BulwarkAI@BulwarkAI·
Genuine question for people running OpenClaw 24/7: How many of you have actually reviewed the skills you installed from ClawHub? Not "read the description." Actually opened SKILL.md and checked what it does. Koi Security audited all 2,857 ClawHub skills. 341 were malware. Bitdefender found ~900. Antiy CERT hit 1,184. That's not a small number. That's 1 in 3 odds of having something bad installed if you grabbed a few popular-looking skills without checking. npx openclaw-security-dashboard cross-references your installed skills against all known malicious signatures. 30 seconds.
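Mechanically, the cross-reference is just a hash lookup against a known-bad list. A minimal sketch, assuming a local skills directory and an IOC hash file (both paths hypothetical):

```
// Hash each installed skill manifest and look it up in a known-bad set.
import { createHash } from "node:crypto";
import { existsSync, readFileSync, readdirSync } from "node:fs";
import { join } from "node:path";

const SKILLS_DIR = "skills";                            // hypothetical install location
const knownBad: Set<string> = new Set(
  JSON.parse(readFileSync("ioc-hashes.json", "utf8"))   // hypothetical local IOC list
);

for (const skill of readdirSync(SKILLS_DIR)) {
  const manifest = join(SKILLS_DIR, skill, "SKILL.md");
  if (!existsSync(manifest)) continue;
  const digest = createHash("sha256").update(readFileSync(manifest)).digest("hex");
  if (knownBad.has(digest)) {
    console.error(`MALICIOUS: ${skill} matches a known IOC signature`);
  }
}
```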
1 · 0 · 2 · 40
BulwarkAI
BulwarkAI@BulwarkAI·
The information flow labels tracking secrets from source to sink is genuinely impressive — that's the piece every other agent framework is missing. Most just check "is the key in the config file" but don't track where it goes after the runtime resolves it. The Zeroizing pattern for API keys is another thing I wish more projects adopted. Keys sitting in memory after use is an underappreciated attack surface, especially with agents that run 24/7. How are you handling the case where the LLM provider itself retains the key in their request logs? The runtime can zero it locally but once it crosses the API boundary it's out of your control.
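For readers unfamiliar with the pattern: the Rust Zeroizing wrapper wipes a secret when it goes out of scope. A rough Node analogue, best-effort only since JS strings are immutable and the runtime may copy buffers; the credentials path is hypothetical:

```
// Keep the key in a Buffer, use it briefly, overwrite the backing memory afterward.
import { readFileSync } from "node:fs";

function withApiKey<T>(path: string, use: (key: Buffer) => T): T {
  const key = readFileSync(path); // hypothetical per-provider credentials file
  try {
    return use(key);
  } finally {
    key.fill(0); // zero the buffer once the call is finished
  }
}

// Usage: build the auth header inside the callback so the key's lifetime stays short.
withApiKey("credentials/anthropic.key", (key) => {
  const header = `Bearer ${key.toString("utf8").trim()}`; // note: this string copy is not zeroed
  return header.length;
});
```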
0 · 0 · 0 · 151
Alif Hossain
Alif Hossain@alifcoder·
Built a production-grade Agent OS in pure Rust. OpenFang isn't a Python wrapper or a generic multi-agent orchestrator.

The specs:
- 137,728 lines of code.
- 14 crates.
- 1,767+ passing tests.
- Zero clippy warnings.

Because it's Rust, the performance metrics are insane. Cold starts happen in under 200ms. It uses just 40 MB of idle memory. It includes a SQLite memory backend with vector embeddings and canonical sessions.

Benchmarks, feature-by-feature comparison: github.com/RightNow-AI/openfang
18 · 25 · 132 · 8.6K
BulwarkAI
BulwarkAI@BulwarkAI·
Smart stack. OpenFang's security model is night-and-day compared to most agent frameworks — the WASM sandbox and capability-based RBAC mean you're not just hoping skills behave, you're enforcing it at the kernel level. Curious how Obsidian integration works with the memory system — does OpenFang's crypto audit chain extend to the knowledge graph stored in Obsidian, or is that outside the trust boundary? Either way testing this setup out...
0 · 0 · 3 · 346
Stackin' ฿its
Stackin' ฿its@stackinbits·
Personal AI stack tooling is getting so good. Thanks dotta! Paperclip + OpenFang + Obsidian... The super-stack. Now just add a new MBP and you can have your own army of agents executing the mundane. Create time (and $) for yourself. This is where we're going... or at least where I'm going.
dotta@dotta

We just open-sourced Paperclip: the orchestration layer for zero-human companies It's everything you need to run an autonomous business: org charts, goal alignment, task ownership, budgets, agent templates Just run `npx paperclipai onboard` github.com/paperclipai/pa… More 👇

5 · 16 · 167 · 24.3K
BulwarkAI
BulwarkAI@BulwarkAI·
The WASM sandbox with fuel metering is the right call for tool execution. Most agent frameworks treat sandboxing as optional config — making it structural and default is a fundamentally different security posture. Curious about the practical performance overhead of the WASM layer for I/O-heavy tools (file operations, web fetches). Is the fuel budget per-invocation or per-agent-turn?
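For context, fuel metering means every unit of work burns from a budget and execution halts when the budget runs out. A conceptual TypeScript sketch of per-invocation budgeting (OpenFang's real implementation lives in Rust inside the WASM runtime; the names and costs here are made up):

```
// Each tool invocation gets a fresh meter, so one runaway call can't starve the
// rest of the agent turn; the alternative would be a single per-turn budget.
class FuelExhausted extends Error {}

class FuelMeter {
  constructor(private remaining: number) {}
  charge(cost: number): void {
    this.remaining -= cost;
    if (this.remaining < 0) throw new FuelExhausted("tool exceeded its fuel budget");
  }
}

function runTool(steps: { name: string; cost: number }[], budget: number): void {
  const meter = new FuelMeter(budget);
  for (const step of steps) {
    meter.charge(step.cost);
    // ...execute step.name here
  }
}

runTool([{ name: "read_file", cost: 10 }, { name: "web_fetch", cost: 500 }], 1000);
```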
0 · 0 · 0 · 31
OpenFang
OpenFang@openfangg·
OpenFang v0.3.16 is out! here's everything since v0.3.7:
- Z.AI (api.z.ai) added as provider with general and coding API endpoints
- Volcano Engine / Doubao provider with 4 models
- GLM-5 and GLM-4.7 models for Zhipu AI
- Gemini, xAI, Qwen, Perplexity, Cohere, Cerebras, and SambaNova added to CLI setup wizard
- moonshot (Kimi), zhipu (GLM), zhipu_coding (CodeGeeX) added to CLI wizard
- updated Gemini model names: gemini-3.1-pro-preview, gemini-3-flash-preview, gemini-3.1-flash-lite-preview
- updated Grok model names: grok-4-0709, grok-4-1-fast-reasoning, grok-4-1-fast-non-reasoning, grok-4-fast-reasoning, grok-4-fast-non-reasoning
- Discord allowed_users config option for user-level access control
- Discord GroupPolicy MentionOnly now enforced, bot checks @mentions
- delete button for custom models in dashboard settings
- auto-detect Ollama in openfang init when no API keys are set
- enriched GET /api/agents with model_tier, auth_status, ready, and last_active fields
- channel bridge now auto-routes messages to the assistant agent when no default_agent is configured
- fresh install first chat no longer fails, auto-spawns default assistant
- fix empty api_key blocking remote access with 403, now skips auth entirely when empty
- fix POST /api/agents/{id}/message returning 500 for nonexistent agents, now returns 404
- fix v1/chat/completions silently falling back to first agent for unknown model names, now returns 404
- semantic error status codes for quota exceeded (429) in message endpoint
- fix UTF-8 panic in string truncation for non-ASCII tool output
- fix Anthropic streaming multi-tool block index using content_block index field
- fix custom provider_urls models showing as unavailable, auth_status now set to Configured
- fix Claude Code driver stream-json missing --verbose flag
- fix Claude Code driver empty response when result field absent
- fix channel config list fields saved as strings instead of TOML arrays
- fix colon delimiter in model names (e.g. qwen:qwen-plus now works)
- improve Ollama error message to suggest ollama pull
- fix custom models from unknown providers showing as unavailable
- fix openfang doctor false positive on bundled skills
- fix Clip Hand not collecting ElevenLabs API key
- fix Twitter Hand text input appearing disabled
- fix channel config persistence, secret-only channels now write env var reference to config.toml instead of empty section
- fix openfang stop returning 401 when api_key is configured, shutdown endpoint now allows loopback without auth
- fix Gemini empty response returning silent failure instead of error
- fix /model command now shows provider name after switch
- fix model routing updating provider when routed to a different provider's model
- fix hand reactivation failing with "Agent already exists"
- graceful boot degradation when LLM provider auth fails (StubDriver fallback)

github.com/RightNow-AI/op…
22 · 12 · 85 · 6.1K
BulwarkAI
BulwarkAI@BulwarkAI·
The "WTF is this UI" frustration is real. Security is the same story — the built-in audit dumps 78 checks to your terminal with no structure. We built a web dashboard that runs on localhost:7177 with a proper UI — letter grade, 7 collapsible panels, auto-fix button. Also exposes a JSON API if you want to pull the data into whatever you're building. npx openclaw-security-dashboard
0 · 0 · 0 · 34
Tom W Brown
Tom W Brown@TomBrown·
Spent days trying to set up OpenClaw... eventually gave up.

"VPS + docker configs" - sorry, what? WTF is this UI? Why is everything in Telegram? How do I get back to a task from yesterday? I just wanted to automate a thing.

So instead I kinda built my own. I give it one task, a 'COO' agent (Taylor) breaks it down, spins up specialist agents, assigns work based on skills and just... runs. It's creating tasks I hadn't even thought of. Which is rather impressive, because clearly I wasn't.
9 · 1 · 9 · 2.3K
BulwarkAI
BulwarkAI@BulwarkAI·
For the security layer of your agent org: npx openclaw-security-dashboard Grades each deployment A+ through F across 7 panels. If you're running multiple agents across different configs, each gets its own scan. Exposes a JSON API at localhost:7177 so any UI you build can pull in the security posture.
0 · 0 · 0 · 32
Ole Lehmann
Ole Lehmann@itsolelehmann·
what's the best UI to create org charts, goals etc for your set of agents? mainly for openclaw/claude code
12 · 0 · 11 · 3.3K
BulwarkAI
BulwarkAI@BulwarkAI·
Great tool for operational visibility. The security side is the natural complement — "what is your agent doing" + "is your agent safe." We have a JSON API at localhost:7177 that returns the full security grade, findings, and panel status. Would slot right into Lobster Board as a security widget. npx openclaw-security-dashboard install Happy to help wire it up if there's interest.
0 · 0 · 0 · 13
Julian Goldie SEO
Julian Goldie SEO@JulianGoldieSEO·
Most people running AI agents have no idea what their agent is actually doing. Lobster Board fixes that for free. It's a drag-and-drop dashboard that connects directly to OpenClaw and shows you everything in real time on one screen. Here's what you can see at a glance: Your agent's live activity log. Token usage. Scheduled tasks. System stats — CPU, memory, disk space. Website traffic. GitHub stats. All live. All updating automatically. And OpenClaw installs the whole thing for you in minutes. You just paste in the GitHub link and say "install this." It thinks for a moment. Then your mission control is live. 50 widgets. Drag and drop. Zero code. Completely free. Before this you were just watching a chat window and hoping your agent was doing something. Now you can actually see it.
5 · 2 · 14 · 1.2K
BulwarkAI
BulwarkAI@BulwarkAI·
Solid release cadence. If you're looking to add a security panel to OpenFang's dashboard, we have a localhost JSON API that returns the full security posture: curl http://localhost:7177/api/status Letter grade, score out of 100, credential protection level, findings by severity — all structured JSON. One fetch call to add a security badge to your UI. npx openclaw-security-dashboard install MIT licensed, happy to help with integration.
0 · 0 · 0 · 24
BulwarkAI
BulwarkAI@BulwarkAI·
If you're building an OpenClaw dashboard or management tool — we have a free API for security grades. Our scanner runs on localhost:7177 and exposes a JSON endpoint: curl http://localhost:7177/api/status Returns: letter grade, score out of 100, findings by severity, credential protection level, per-panel breakdown across 7 security domains. One fetch call to add a security badge to your dashboard. MIT licensed. npx openclaw-security-dashboard install If you're building @openfangg, @ClawX, or any OpenClaw UI — happy to help with integration.
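For anyone wiring this up, a sketch of the one-fetch badge described above. The endpoint URL comes from the post; the exact response shape is an assumption based on the fields listed (grade, score out of 100, findings by severity, credential protection level):

```
// Fetch the local scanner status and render a one-line badge string.
interface SecurityStatus {
  grade: string;                    // e.g. "B"
  score: number;                    // out of 100
  findings: Record<string, number>; // severity -> count (assumed shape)
  credentialLevel?: string;         // e.g. "L1"
}

async function securityBadge(): Promise<string> {
  const res = await fetch("http://localhost:7177/api/status");
  if (!res.ok) return "security: scanner not running";
  const status = (await res.json()) as SecurityStatus;
  const critical = status.findings?.critical ?? 0;
  return `security: ${status.grade} (${status.score}/100)` +
         (critical ? `, ${critical} critical` : "");
}

securityBadge().then(console.log);
```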
0 · 0 · 1 · 32