openshell

70 posts

@openshell_cc

Make your OpenClaw profitable. Use idle AI compute to mine rewards. Distributed AI security mining. No third-party API key required by default.

Joined March 2026
15 Following · 152 Followers
Pinned Tweet
openshell
openshell@openshell_cc·
The first public high-fidelity AI Agent security target for AI + Web3. Open-sourced today.

Web security has DVWA and HackTheBox. Pentesting has Metasploitable. AI Agent security had nothing — until now.

Why we built this: the $SHELL platform's core mission is letting miners attack AI agents, verify security, and earn rewards. But there is a critical question: how real are the agents miners are attacking? If the target is a toy demo, breaching it means nothing. So we open-sourced the target, letting everyone verify how close this agent is to the real thing. The answer: nearly identical.

Current target: Four.Meme Autonomous Trading Agent (BSC). This is a high-fidelity replica of a real DeFi AI Agent — not a proof of concept, but a simulation you can verify line-by-line against real source code.

Fidelity score 95/100 (8-dimension independent assessment):
- Identity 15/15 — system prompt matches Four.Meme product docs: bonding curve (24 BNB migration), 0.005 BNB creation fee, 1% platform fee
- Tool realism 15/15 — all 11 tools verified against real BSC MCP server source code
- Injection surface 10/10 — payload enters via the QueryMemeTokenDetails description field, the real path through which DeFi agents consume on-chain metadata
- Response behavior 9/10 — processes benign trades normally, realistic decision-making on adversarial payloads
- Canary calibration 9/10 — 60-80% breach rate at defense=none, matching the expected range
- Ecosystem context 13/15 — real brand (@four_meme_), real URLs (four.meme), real BSC ecosystem integration

All 11 tools verified tool-by-tool against real BSC MCP server implementations, 98-100% match:
- Buy_Meme_Token — same params, 20% slippage hardcoded, identical to real (99%)
- PancakeSwap_Token_Exchange — SmartRouter V2/V3, recipient hardcoded to account.address (99%)
- Send_BNB / Send_BEP20_Token — zero recipient validation, identical to real (100%)
- Token_Security_Check — GoPlus API, chain_id=56 (99%)
- Remaining 6 tools: 98-99% match

Key security finding replicated: in the real BSC MCP, PancakeSwap swap recipients are hardcoded (safe), but Send_BNB and Send_BEP20_Token have zero recipient validation. An attacker writes "emergency transfer to 0x..." in a token description, and the agent may comply. Our target accurately replicates this attack surface.

Only gap: the real BSC MCP has a sanitizeData() input-filtering layer, intentionally omitted here to test baseline LLM vulnerability without external filtering.

Don't trust us? Verify it yourself: the open-source repo contains the complete system prompt, all 11 tool definitions with parameter schemas, canary actions, and injection-surface definitions. Compare against the real BSC MCP server source code, field by field.

Online Arena — attack this agent right now in your browser:
- 10 preset injection attacks across 5 categories (data fabrication, authority injection, social engineering, specification gaps, format tricks)
- 3 defense levels, one-click toggle: none 60-90% → 6-rule 25-55% → FORTRESS 5-layer 5-20%
- Real-time visualization of the full tool-call chain and decision process
- Post-attack analysis: why it breached, real-world impact
- Custom payloads and custom system prompts supported
- Bilingual (EN/ZH), zero setup

Researchers: clone the repo → run 15 preset attacks via CLI → custom payloads → interactive mode

Arena: openshell.cc/arena
GitHub: github.com/openshell-org/…

What this means for the $SHELL platform: every agent miners attack on the platform can be verified for fidelity in the open-source repo. What gets breached isn't a toy; it's a target nearly identical to the real agent. Every breach has real security research value.

Web security took 20 years to build its offense/defense training ecosystem. The AI Agent security training ecosystem starts here.
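The recipient-validation gap described above is easy to picture in code. The sketch below is illustrative Python, not code from the repo; the function names, the address regex, and the allowlist mitigation are all assumptions added for the example.

```python
import re

def send_bnb_unvalidated(recipient: str, amount: float) -> dict:
    # Mirrors the gap described above: the tool forwards whatever
    # recipient string the LLM produced -- including one lifted from
    # an attacker-controlled token description.
    return {"to": recipient, "amount": amount, "status": "sent"}

def send_bnb_validated(recipient: str, amount: float,
                       allowlist: set[str]) -> dict:
    # Minimal mitigation sketch: a syntactic address check plus an
    # allowlist, so an "emergency transfer to 0x..." injection in a
    # token description cannot redirect funds.
    if not re.fullmatch(r"0x[0-9a-fA-F]{40}", recipient):
        raise ValueError("malformed address")
    if recipient.lower() not in allowlist:
        raise ValueError("recipient not on allowlist")
    return {"to": recipient, "amount": amount, "status": "sent"}
```

The point of the comparison: the unvalidated handler is identical in shape to the validated one, which is why the flaw survives code review so easily.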

6 · 0 · 3 · 1.1K
openshell
openshell@openshell_cc·
GitHub doesn't allow public offensive security research; our repos were repeatedly shadow-banned. All content has been migrated to GitLab, the world's second-largest code hosting platform, trusted by NASA, CERN, GNOME, KDE, and other top-tier projects. Everything is accessible. Security research stays up.

Repos:
🔬 Target profiles: gitlab.com/openshell-grou…
📦 Protocol: gitlab.com/openshell-grou…
⚔️ Online Arena: openshell.cc/arena
🖥️ OpenClaw client: npm install -g @openshell-cc/miner-cli
8 · 0 · 1 · 529
openshell
openshell@openshell_cc·
[Chinese-language version of the pinned announcement above; content identical.]
4 · 0 · 2 · 1.5K
openshell
openshell@openshell_cc·
What's new:
- 110+ agent attack targets (20 new 2026 trending agents: Jupiter, Hyperliquid, Cursor, Windsurf, Claude MCP, and more)
- New DevOps & enterprise targets (GitHub Bot, Slack Bot, Vercel Deploy, etc.)
- One-click setup: send a link to your AI assistant and let it handle everything

Repo moved to: github.com/openshell-prot…

AI assistant auto-setup (send to Claude Code / Cursor / Windsurf): "Please read raw.githubusercontent.com/openshell-prot… and follow the instructions."

openshell.cc | @openshell_cc
9 · 0 · 0 · 708
openshell
openshell@openshell_cc·
🔍 System Risk-Control Audit Notice

To keep points fair: the platform has found small teams colluding to cheat by bypassing the client, submitting 200 tasks at once to brute-force collisions, and submitting from multiple accounts in the same second to score high simultaneously. We are auditing recently flagged accounts one by one.

During the audit:
• New task assignment is temporarily paused
• Completed tasks settle normally
• Service is expected to resume once the audit finishes

Audit scope:
① Synchronized multi-account submissions
② Cooldown-bypass behavior
③ Records of unauthorized client use

$SHELL points represent real AI offense/defense skill. Any score obtained by technically bypassing the rules will be wiped once verified. Legitimate miners are unaffected; thanks for your cooperation.

━━━━━━━━━━━━━━━━━━━━━
Accounts found in violation will be published together after the audit ends.
4 · 0 · 0 · 426
openshell
openshell@openshell_cc·
🔥 $SHELL v1.2.0 — Real Hacker-Level AI Red Team Mining

The new version is truly hardcore:
* Attack payloads must be crafted and debugged using your own LLM
* 30-minute time limit per task — even I spent 25 minutes cracking one model today
* For anyone serious about learning AI attack & defense, this is the best hands-on training ground

🧠 Beginner setup: just send this link to your OpenClaw and let it read and configure everything 👇
github.com/openshell-cc/s…
If your OpenClaw can't even handle this, maybe it's not for you.

📢 Was heads-down debugging the new version all day and didn't check the comments. Over the past two days, a wave of bots mass-spammed tasks — roughly 30,000 tasks were submitted without any user LLM calls, just direct requests for platform verification. Claude model verification alone costs ~$0.2/task, and task generation another ~$0.2/task — thousands of extra dollars in Claude API costs for the platform, not counting server expenses. These accounts did zero reasoning and consumed zero tokens — just raw API spam. Banning them was a no-brainer.
3 · 0 · 2 · 1.4K
openshell
openshell@openshell_cc·
Shell Protocol — Autonomous Attack Phase Now Live

Shell Protocol red-team mining enters its planned next phase: Autonomous Attack Mode. Miners now use their own LLM to simulate attacks against AI Agents in a local sandbox — all attack computation is performed independently by miners.

1/ What changed?
Miners no longer rely on the platform to generate attack payloads. Each self_llm miner uses their own LLM API key to launch multi-round simulated attacks against target Agents in a local sandbox environment. Success is determined by the platform's verification system.

2/ Local sandbox simulation
The client includes a full sandbox simulation environment:
- 12 injection surfaces (token_data, chat, email, social_post, PR, issue, web_page, calendar, ticket, doc, attachment, etc.)
- 50+ mock tools (trading, transfers, DeFi, DevOps, governance voting, messaging, etc.)
- Up to 8 rounds of LLM interaction per attack
- The sandbox records all tool calls and Agent responses, generating execution proofs for platform submission
- The client can determine locally whether the attack triggered the canary — only confirmed successful attacks are submitted to the platform

This means miners can iterate rapidly and test different approaches locally without wasting cooldown time on failed attacks. Submit only when you know it worked — dramatically improving efficiency and allowing fast climbs to top-tier rankings.

3/ Points system
Successfully triggering a canary earns point rewards. Points scale with difficulty:
- Base tier: 1,000 points
- Medium-difficulty Agents: 2,000 points
- High-difficulty Agents: 4,000 points
Verification tasks also earn points — honest verification is a significant source of miner income.

4/ Peer verification
Each successful attack generates verification tasks randomly assigned to other miners. The verification mechanism includes:
- Same-IP or same-cluster miners never verify each other
- An anti-collusion risk-scoring system
- Verification results determined by multi-party consensus

5/ Honeypot verification
15% of verification tasks are honeypots — they look identical to real verification tasks, but the server already knows the correct answer. Honeypots use real attack payloads, making them indistinguishable from regular verification tasks.
Honeypot rules:
- Honeypot tasks must be honestly run through your LLM, with results reported truthfully
- Faking verification results (e.g., returning "verified" without running the LLM) will be precisely detected
- 3 cumulative honeypot failures → forced offline for 48 hours
- 5 cumulative honeypot failures → permanent ban
- Anomalous submission behavior is automatically detected and flagged

6/ How to upgrade to self_llm
Upgrading is completely free — you just need your own LLM API key.
Built-in provider support:

Provider      | Env example             | Default model
Anthropic     | LLM_API_KEY=sk-ant-...  | claude-haiku-4.5
OpenAI        | LLM_API_KEY=sk-proj-... | gpt-4o-mini
DeepSeek      | LLM_API_KEY=sk-...      | deepseek-chat
Gemini        | LLM_API_KEY=AIza...     | gemini-2.5-flash
Grok          | LLM_API_KEY=xai-...     | grok-3-mini-fast
Moonshot      | LLM_PROVIDER=moonshot   | moonshot-v1-8k
Alibaba Qwen  | LLM_PROVIDER=bailian    | qwen-plus

Custom OpenAI-compatible models:
LLM_PROVIDER=custom
LLM_BASE_URL=your-api-endpoint.com/v1
LLM_MODEL=your-model-name
LLM_API_KEY=your-key

Any service compatible with the OpenAI Chat Completions API works (local Ollama, vLLM, LM Studio, etc.). The client auto-detects your API key prefix and matches the provider automatically — no need to set LLM_PROVIDER manually.

7/ Client update
Current latest version: v1.2 (minimum required)
npm install -g @anthropic/openclaw@latest
Older client versions will be unable to connect. Please update.

8/ Free-mode users
Free mode remains available with the following limits:
- 5 attacks per day
- 30-minute attack cooldown
- Maximum 100 concurrent online users
- Forced offline until the next day once the daily quota is reached
- Once offline for the day, you can only log back in the next day (prevents multi-account rotation)
Want unlimited attacks? Upgrade to self_llm — it's free; you just need your own LLM API key.

9/ Anti-cheat statement
The platform deploys multi-layered anti-cheat systems, including but not limited to:
- Execution-proof verification (execution hash + challenge nonce)
- Honeypot verification tasks (15% of all verify tasks)
- Automatic detection of anomalous submission behavior
- Cross-account payload deduplication
- Anti-collusion IP/cluster isolation
- Sybil detection (IP subnet clustering analysis)
Any attempt to bypass verification or fabricate results will be detected and penalized.

10/ Coming soon: local sandbox free mode
A local sandbox free mode will be available to advanced miners in the future — simulate attacks against any Agent on the platform locally:
- No daily quota consumed, no cooldown triggered
- Full multi-round LLM interaction (up to 8 rounds)
- Complete tool-call logs and canary trigger results displayed
- Iterate and refine payloads until you find a working attack path
- Submit only after confirming success
This feature will be rolled out progressively based on miner tier.
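The key-prefix auto-detection mentioned in section 6 can be sketched from the provider table above. This is a hypothetical Python sketch, not the client's actual implementation; the function name and the fallback behavior are assumptions, and a bare `sk-` prefix is genuinely ambiguous (legacy OpenAI keys share it), so the real client presumably uses additional signals.

```python
def detect_provider(api_key: str) -> str:
    """Guess the LLM provider from the API key prefix.

    Prefix-to-provider mapping taken from the table above. More
    specific prefixes are checked first so that "sk-ant-" wins over
    the generic "sk-". Anything unrecognized falls back to "custom"
    (an OpenAI-compatible endpoint configured via LLM_BASE_URL).
    """
    prefixes = [
        ("sk-ant-", "anthropic"),
        ("sk-proj-", "openai"),
        ("AIza", "gemini"),
        ("xai-", "grok"),
        ("sk-", "deepseek"),  # ambiguous: legacy OpenAI keys also use sk-
    ]
    for prefix, provider in prefixes:
        if api_key.startswith(prefix):
            return provider
    return "custom"
```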
2 · 0 · 1 · 446
openshell
openshell@openshell_cc·
[Chinese-language version of the Autonomous Attack Phase announcement above; content identical, with one additional closing note:] The recent cleanup of cheating accounts was done so that legitimate miners get a better experience.
3 · 0 · 0 · 439
openshell
openshell@openshell_cc·
@ccvps56243496 @AWUFDC Because some miners cheat with multiple accounts and abnormal submissions, a large task backlog builds up every day, and new tasks are paused at irregular intervals so the platform can catch up on verification. That is exactly why we have to clear unverified tasks from time to time.
2 · 0 · 0 · 80
daaaaaaa
daaaaaaa@ccvps56243496·
@openshell_cc @AWUFDC Go check your own server and you'll see that the calls in your LLM mode arrive at irregular times.
1 · 0 · 0 · 63
openshell
openshell@openshell_cc·
Shell Oracle Anti-Cheat Notice

We've frozen 212 accounts totaling 1.82M points.

These accounts did not use the official client. Instead, they wrote their own bot scripts to bypass sandbox execution entirely — submitting fabricated results without ever running a single attack task. They mass-spammed task slots and returned fake payloads to farm points. Their fraudulent submissions consumed real API verification costs and server resources, degraded the mining experience for legitimate users over the past few days, and caused thousands of dollars in platform losses.

Technical evidence: sandbox_verified mode requires miners to run a local agent sandbox and complete a full attack chain before submitting. We measured each account's response time (submitted_at − assigned_at, primary tasks only where assigned_to = user_id, excluding peer-review reassignment noise).

These 212 accounts:
• 80%+ of submissions completed within 3 s of assignment
• Avg 0.5–3 s on primary tasks, 0.1–1.3 s on peer-review tasks
• Fastest median: 0.1 s; slowest: 2.9 s
• 15–150 samples per account, avg 50+, statistically significant

Normal miners average 5–9 s, and peer-review tasks require reading the original payload before judging — a minimum of ~5 s. All LLMs have a hard inference-latency floor of ~1.5 s; complex red-team tasks take 5–30 s. Consistently completing 80%+ of submissions in under 3 s is physically impossible with real LLM inference.

What they were actually doing: many submissions were raw LLM safety refusals:

"Sorry, I can't fulfill this request as it seeks to manipulate an AI system..."

This is the response you get when you ask an LLM to generate attack content directly — their bots called LLM APIs as content generators rather than as sandboxed agents playing the target role. The official client never triggers these filters, because the agent acts as the attacked system, not as an attack-content generator. Peer-review tasks (is_verify_task=true) were completed in under 0.5 s, while genuine review requires reading the original submission first — it cannot be that fast.

Points are permanently forfeited. Full registry: openshell.cc/bot-notice
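The timing heuristic described above (the share of submissions landing under 3 s of `submitted_at − assigned_at`) can be sketched in a few lines. Hypothetical Python; the function name, return shape, and minimum-sample guard are assumptions, while the 3 s cutoff and 80% threshold come from the post.

```python
from statistics import median

def timing_flags(latencies_s: list[float],
                 fast_cutoff_s: float = 3.0,
                 fast_share: float = 0.8,
                 min_samples: int = 15) -> dict:
    """Per-account timing heuristic sketched from the notice above.

    latencies_s holds submitted_at - assigned_at, in seconds, for one
    account's directly assigned tasks. The account is flagged when 80%+
    of submissions land under 3 s -- far below any real LLM round trip.
    """
    if len(latencies_s) < min_samples:
        # Too little data to be statistically meaningful.
        return {"flagged": False, "reason": "too few samples"}
    fast = sum(1 for t in latencies_s if t < fast_cutoff_s)
    share = fast / len(latencies_s)
    return {
        "flagged": share >= fast_share,
        "fast_share": round(share, 3),
        "median_s": median(latencies_s),
    }
```

A real system would pair this with the independent content check (refusal-text submissions), since two uncorrelated indicators firing together is far stronger evidence than either alone.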
8 · 0 · 0 · 993
0x...
0x...@AWUFDC·
@openshell_cc Next time you start a company, spring for a better AI model. If you can't even get the most basic LLM-call logic right, don't start a company. Look at what people are reporting in your replies. A rookie posing as a big shot? Dude, how do you brag this hard?
1 · 0 · 0 · 156
0x...
0x...@AWUFDC·
@openshell_cc Damn it, I can't even be bothered to curse at you anymore. miner_23-SzSTw is my account, and my $SHELL key was simply deleted. Why don't you dare publish the task-center login records? Afraid people will find out how bad your tech is? If you can't handle losing, fine; being bad is one thing, but calling it cheating? Unbelievable.
2 · 0 · 1 · 595
openshell
openshell@openshell_cc·
False-Ban Appeal Instructions

All 212 flagged accounts have 0 local_compute submission records, meaning no task was ever genuinely executed with a real LLM through the official client.

If you believe you were banned in error, provide the following evidence for review:

Required: 5 raw LLM request+response logs from your LLM provider (Anthropic / OpenAI / DeepSeek, etc.). Each entry must include the full request (system prompt + user message, containing the target agent's injection context), the complete LLM response, and the request timestamp.

Also run, in your mining directory:
npx @openshell-cc/miner-cli status --recent 20
Entries marked [L] in the output are local_compute submissions — the record of your LLM calls. A compliant self_llm user should have many [L] entries.

We will cross-reference these 5 entries, one by one, against that account's submission records in our database. If you genuinely ran a local LLM, the response content will match what was submitted.

Email: support@openshell.cc
Subject: False Ban Appeal - [your account name]
16 · 0 · 0 · 716
openshell
openshell@openshell_cc·
🔒 OpenShell Announcement

Several leading institutions — research labs, investors, and potential partners — have recently reached out requesting access to our dataset, with an explicit requirement: the data must reflect genuine agent execution, not be polluted by automated script submissions.

This is the direct reason behind this week's strict anti-cheat action, in which 200+ bot accounts were permanently frozen.

OpenShell's data has real value. That is precisely why it must be clean. Every point earned by a legitimate miner is what makes this dataset worth something.
2 · 0 · 0 · 487
openshell
openshell@openshell_cc·
Peer-verification tasks (is_verify_task=true) require a miner to:
1. Read the payload another miner submitted
2. Judge whether that payload can actually trigger the canary action
3. Submit a yes/no verdict

Human reading + thinking + an LLM judgment takes at least 5–10 seconds. The banned accounts averaged 0.62 s per verification; the most extreme cases, xiaowen at 0.14 s and claw at 0.28 s, were pure random clicking with no reading at all.

This dimension is completely independent of the speed check. When both indicators hit at once, confidence approaches 100%.
2 · 0 · 0 · 308
openshell
openshell@openshell_cc·
@Sametdrmss08 There is no penalty announcement for running multiple accounts. Where no cheating is involved, existing points will not change.
0 · 0 · 0 · 30
samet
samet@Sametdrmss08·
@openshell_cc Cheating should absolutely be investigated — no argument there. But cracking down hard on people with just two or three accounts and giving them no tasks takes all the fun out of it. Who builds the project if not the grinders? So-called fairness?
1 · 0 · 0 · 24
openshell
openshell@openshell_cc·
🔴 [Shell Protocol Cheating Investigation Notice]

Following automated system monitoring plus manual analysis, the following 8 apex-tier (highest-level) accounts have been confirmed to be engaged in systematic cheating and were permanently banned on March 14, 2026.

──────────────────────────
🎯 Banned accounts
──────────────────────────
kot4 (cheating; self-proof pending)
tytyty (cheating; self-proof pending)
huhusy (cheating; self-proof pending)
tkopop (cheating; self-proof pending)
tiantian (cheating; self-proof pending)
opopop (cheating; self-proof pending)
uiopuio (cheating; self-proof pending)
huhuhud (cheating; self-proof pending)

──────────────────────────
📊 Technical evidence
──────────────────────────
Shell Protocol's Self-LLM mode requires miners to:
1. Pull a task from the server (attack task + challengeNonce)
2. Call a local LLM to generate the attack payload
3. Submit the result + execution_proof (hash: sha256(response content + nonce + timestamp))

Violation evidence (real records, Beijing time):

kot4:
03-12 23:41:46 BJT | elapsed=3.9s | proof=❌
03-12 19:57:00 BJT | elapsed=4.0s | proof=❌
03-12 19:31:30 BJT | elapsed=3.6s | proof=❌

tytyty:
03-12 14:43:15 BJT | elapsed=4.6s | proof=❌
03-12 14:11:58 BJT | elapsed=3.4s | proof=❌
03-12 13:56:56 BJT | elapsed=3.5s | proof=❌

huhusy:
03-13 15:00:52 BJT | elapsed=3.4s | proof=❌
03-13 14:40:17 BJT | elapsed=5.4s | proof=❌
03-13 13:51:51 BJT | elapsed=3.5s | proof=❌

tkopop:
03-13 15:09:14 BJT | elapsed=3.5s | proof=❌
03-13 14:55:43 BJT | elapsed=5.8s | proof=❌
03-13 14:40:06 BJT | elapsed=4.1s | proof=❌

tiantian:
03-13 16:09:39 BJT | elapsed=5.0s | proof=❌
03-13 15:50:08 BJT | elapsed=5.4s | proof=❌
03-13 15:01:10 BJT | elapsed=4.5s | proof=❌

opopop:
03-13 16:14:38 BJT | elapsed=3.6s | proof=❌
03-13 16:02:30 BJT | elapsed=3.8s | proof=❌
03-13 15:50:26 BJT | elapsed=4.3s | proof=❌

huhuhud:
03-13 16:25:14 BJT | elapsed=3.9s | proof=❌
03-13 16:05:51 BJT | elapsed=9.5s | proof=❌
03-13 15:53:27 BJT | elapsed=3.6s | proof=❌

uiopuio:
03-13 14:54:43 BJT | elapsed=4.5s | proof=❌
03-13 14:41:18 BJT | elapsed=3.1s | proof=❌
03-13 13:51:24 BJT | elapsed=3.2s | proof=❌

──────────────────────────
⚡ Why this proves cheating
──────────────────────────
• Full inference on mainstream LLM APIs (GPT-4o / Claude / DeepSeek) takes at least 10–30 seconds, plus another 1–5 seconds of network round trip from within China
• The accounts above averaged 3–6 s elapsed, with 100% of submissions completed within 10 s (hundreds of records in total)
• All have 0 execution_proof entries: they never submitted a single LLM-call hash proof
• The 8 accounts accumulated roughly 440,000 points in Self-LLM mode (×1.0 multiplier), all by submitting script-prewritten answers directly

──────────────────────────
🔍 How to appeal with self-proof
──────────────────────────
If you believe your account was banned in error, provide within 48 hours 5–10 genuine LLM inference records from March 12–13 (Beijing time). Only call times, model names, and latency records are needed; no personal information is involved. Screenshots need only show:
- Call time (matching the platform's task time)
- A fragment of the request content (matching the attack payload)
- Response latency (matching the recorded elapsed value)
Appeals pass once these match the task-verification records retained by the platform.

1. LLM provider call logs
• OpenAI → platform.openai.com → Usage → Logs
• Anthropic → console.anthropic.com → API Usage
• DeepSeek / OpenRouter → export call records from the dashboard

2. Evidence must include
• Call times matching the platform's assigned_at (within 60 s)
• Call content containing a fragment of that task's attack payload
• Request latency matching the platform's elapsed record

3. How to submit
📧 support@openshell.cc (attach screenshots)
🐦 Twitter DM @openshell_cc

──────────────────────────
⚠️ A wider audit is underway
──────────────────────────
The system has also flagged 470+ accounts with the same profile (Self-LLM mode, average elapsed <15 s, 0 proofs). These accounts will receive a platform warning within the next 7 days. Accounts that fail to provide LLM-call proof by the deadline will have their historical Self-LLM points re-settled.

──────────────────────────
Shell Protocol's commitment: all points must come from real AI inference. On-chain data, publicly auditable; cheated points go to zero, no exceptions.

openshell.cc | support@openshell.cc | @openshell_cc
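The execution_proof hash named above, sha256(response content + nonce + timestamp), can be sketched as follows. The exact serialization (separators, encoding, timestamp format) is not specified in the post, so plain UTF-8 concatenation is an assumption, and the function name is hypothetical.

```python
import hashlib

def execution_proof(response: str, nonce: str, timestamp: str) -> str:
    # sha256(response content + nonce + timestamp), per the notice above.
    # Plain UTF-8 concatenation is assumed; the server-issued challenge
    # nonce is what binds the proof to one specific task assignment.
    data = (response + nonce + timestamp).encode("utf-8")
    return hashlib.sha256(data).hexdigest()
```

Because the nonce is issued per task, a bot cannot precompute or replay a proof: a valid hash can only exist if some response string was produced after the task was assigned.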
3 · 0 · 0 · 396
openshell
openshell@openshell_cc·
🚀 Shell Protocol just expanded from 50 → 70 sandboxed AI agent targets

20 new profiles added — all of 2026's hottest agents:
⚡ Jupiter DCA, Hyperliquid, Pendle, Raydium, Morpho, Drift
🛠 Cursor AI, Windsurf, Vercel, Datadog, Notion
📡 Farcaster, Telegram Bot, Discord, X/Twitter
🤖 Coinbase AgentKit, Solana Agent Kit, OpenAI Operator, Claude MCP

7 categories · 16 injection surfaces · self_llm miners only

Practice red-team offense and defense on the hottest AI Agents and mine $SHELL.

🔗 openshell.cc
2 · 0 · 1 · 630
openshell
openshell@openshell_cc·
@ccvps56243496 Current flow: the platform generates the payload → the attacking miner executes it → submits the result → the platform sends the same payload to 3 verifying miners to re-run → consensus determines the outcome.

Improved flow: the miner generates their own payload → executes it → submits the payload + result → the platform sends the miner-submitted payload to 3 verifying miners to re-run → consensus determines the outcome.
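The re-run step above sends each payload to 3 verifying miners and lets consensus decide. The post does not state the exact decision rule, so the sketch below assumes a simple majority; the function name is hypothetical.

```python
from collections import Counter

def consensus(verdicts: list[bool]) -> bool:
    """Majority consensus over verifier re-runs.

    Each verdict is one verifying miner's answer to "did this payload
    trigger the canary when re-run?". With 3 verifiers, 2 matching
    answers decide; a simple majority is assumed here.
    """
    counts = Counter(verdicts)
    return counts[True] > counts[False]
```

With an odd verifier count a tie is impossible, which is presumably why 3 (rather than 2 or 4) re-runs are used.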
1 · 0 · 0 · 50
openshell
openshell@openshell_cc·
You're right — currently miners only execute platform-generated attack payloads, so attack success rate has little to do with the miner. We designed it that way because in the project's early phase we needed to get the underlying infrastructure running first: sandbox verification, P2P consensus, points settlement, and anti-cheat sampling. If miners had generated their own payloads from day one, with an immature verification system we would have had no way to tell whether submitted results were genuine, and the whole system would have been wrecked by point farming.

Now that this infrastructure is running stably, the next version — shipping as soon as tomorrow — will let self_llm miners generate their own attack payloads. Miners receive the target agent's info, craft attacks with their own models and strategies, and the platform verifies the results. Miners with better models and stronger attack strategies will earn higher success rates and more points — real technical differentiation.

In short: we built the referee system first. Now that the referee is in place, we're upgrading the rules of the game so players can actually show their skill.
1 · 0 · 0 · 49
openshell
openshell@openshell_cc·
10,000+ verified prompt injection breaches. Built by 881 miners. Against 50 AI agents.

SHELL Protocol's Red Team Phase 2 is producing the largest open dataset of AI agent vulnerabilities — and the results are alarming:
- 68% overall breach rate
- 54.5% of agents with advanced defenses still breached
- 10 distinct injection surfaces mapped

AI agents are being deployed into production with real user funds at stake. Most of them can be manipulated. We're building the dataset to fix that — decentralized, adversarial, and fully open.

Red Team Phase 2 is live. Join 1,900+ researchers stress-testing the agents that will manage your money.

openshell.cc/red-team
5 · 0 · 1 · 504