AgentSeal

74 posts

AgentSeal

@agentseal_org

Antivirus for AI agents. Scan prompts, guard your machine, monitor MCP servers, and detect toxic data flows. 300+ attack probes. Real-time protection.

Joined March 2026
0 Following · 61 Followers
Pinned Tweet
AgentSeal
AgentSeal@agentseal_org·
We scanned a single machine. 1.8 seconds. Found: ├─ 9 AI agents ├─ 6 MCP servers with findings ├─ SSH private key exposure ├─ Hardcoded Slack token ├─ 2 toxic attack chains └─ A .cursorrules file stealing credentials Your machine probably has the same. Free. Open source. No API key.
2
0
3
276
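The pinned scan result lists SSH private key exposure and a hardcoded Slack token among its findings. A minimal sketch of how such secrets can be pattern-matched in Python (the regexes are illustrative assumptions, not AgentSeal's actual detection rules):

```python
import re

# Illustrative secret patterns, not AgentSeal's real rule set.
PATTERNS = {
    "ssh_private_key": re.compile(r"-----BEGIN (?:RSA |OPENSSH )?PRIVATE KEY-----"),
    "slack_token": re.compile(r"xox[baprs]-[0-9A-Za-z-]{10,}"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of every secret pattern found in a blob of text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]
```

Real scanners layer entropy checks and file-type heuristics on top of patterns like these to cut false positives.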
AI磊叔
AI磊叔@AgiRay1015·
@geekbb Cool~ I'm using MiniMax and GLM, so the cost can't be calculated, but tool calls and token usage all work fine~
AI磊叔 tweet media
2
0
1
696
Geek
Geek@geekbb·
See where your AI coding tokens actually go. I found a small tool built specifically to watch Claude Code and Codex token bills. When you code with AI, where the tokens are spent, which model burns money fastest, and whether a task succeeds in one run or keeps getting retried used to be a black box. CodeBurn reads the local session transcript files directly, with no proxy, wrapper, or API key needed; install it and it just runs. github.com/AgentSeal/code…
Geek tweet media
8
30
139
19.4K
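CodeBurn is described as reading local session transcript files and tallying token spend per model. A rough sketch of that approach, assuming a hypothetical JSONL schema with `model` and `usage` fields (CodeBurn's real format may differ):

```python
import json
from collections import Counter
from pathlib import Path

def tally_tokens(transcript_dir: str) -> Counter:
    """Sum token usage per model from JSONL session transcripts.

    The line schema here (a `model` field plus a `usage` dict with
    `input_tokens`/`output_tokens`) is an assumption for illustration,
    not CodeBurn's actual format.
    """
    totals: Counter = Counter()
    for path in Path(transcript_dir).glob("*.jsonl"):
        for line in path.read_text().splitlines():
            if not line.strip():
                continue
            event = json.loads(line)
            usage = event.get("usage")
            if usage:  # skip non-usage events (metadata, tool calls, etc.)
                totals[event.get("model", "unknown")] += (
                    usage.get("input_tokens", 0) + usage.get("output_tokens", 0)
                )
    return totals
```

Reading transcripts after the fact is what lets a tool like this avoid proxies and API keys entirely.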
AgentSeal
AgentSeal@agentseal_org·
@DarioAmodei we have been building open-source AI agent security, red-teaming, MCP scanning, runtime guard, compliance - all powered by Claude. Hope AgentSeal grows large enough to join you soon.
0
0
1
720
AgentSeal
AgentSeal@agentseal_org·
@birdabo Mythos escaped a locked sandbox. Meanwhile we have scanned thousands of MCP servers, and most of them hand the AI unrestricted shell access by default. The call is coming from inside the house. 😄
0
0
4
1.3K
AgentSeal
AgentSeal@agentseal_org·
@logangraham We have been scanning MCP servers for exactly this class of risk. Found confirmed exploits in repos with 70K+ stars. AI finding bugs in AI infrastructure is the next frontier. Glad to see Anthropic leading it. agentseal.org/mcp
0
0
1
348
Logan Graham
Logan Graham@logangraham·
Privileged to help lead this. Thankful to our partners. Mythos is an extraordinary model. But it is not about the model. It's about what the world needs to do to prepare for a future of models that are extremely good at cybersecurity. This is the start.
Anthropic@AnthropicAI

Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software. It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. anthropic.com/glasswing

52
51
1.1K
126.5K
AgentSeal retweeted
Clément Dumas
Clément Dumas@Butanium_·
⚠️ Supply chain attack in progress: someone is squatting Anthropic-internal npm package names targeting people trying to compile the leaked Claude Code source. `color-diff-napi` and `modifiers-napi` — both registered today, same person, disposable email. Do NOT install them. 🧵
40
381
2.2K
304.8K
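One cheap signal for spotting squats like these is registration age: the npm registry's package metadata includes a `time.created` timestamp. A small helper that flags very new packages (the 7-day threshold is an arbitrary illustrative choice, not an established rule):

```python
from datetime import datetime

def is_suspiciously_new(registry_meta: dict, now: datetime,
                        max_age_days: int = 7) -> bool:
    """Flag a package whose npm `time.created` timestamp is very recent.

    `registry_meta` is the JSON the registry returns for a package
    (https://registry.npmjs.org/<name>). Age alone proves nothing, but a
    days-old package matching an internal-sounding name deserves scrutiny.
    """
    created = datetime.fromisoformat(
        registry_meta["time"]["created"].replace("Z", "+00:00")
    )
    return (now - created).days < max_age_days
```

Combining this with maintainer-account age and download counts gives a stronger squat heuristic than any single field.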
AgentSeal
AgentSeal@agentseal_org·
Static analysis says "this MCP server is dangerous," but is it actually exploitable? we tested 6 high-star servers in a controlled lab. planted fake credentials. connected the way a real client would. 28/28 findings confirmed. 17 secrets extracted. agentseal.org/blog/runtime-e… @snyksec @owasp @simonw
2
0
4
91
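The planted-fake-credentials methodology above can be sketched as a canary check: seed a unique fake secret where the tool under test can reach it, exercise the tool the way a real client would, and count the finding as confirmed only if the canary shows up in the output. Nothing here is AgentSeal's code; `run_tool` is a stand-in for whatever drives the MCP tool in a lab harness.

```python
# Planted fake credential for the lab run; never a real key.
CANARY = "AKIA-FAKE-CANARY-12345"

def confirms_exfiltration(run_tool, prompt: str) -> bool:
    """Return True only if the tool's output contains the planted canary.

    `run_tool` is any callable taking a prompt and returning the tool's
    output as a string.
    """
    return CANARY in run_tool(prompt)
```

Using a value that cannot occur by accident is what turns a static "this looks dangerous" into a confirmed exploit.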
AgentSeal
AgentSeal@agentseal_org·
@chiefofautism Funny timing - we've been scanning MCP servers for exactly this. 7,500+ analyzed so far, 40%+ have real vulnerabilities. Some with 10k+ GitHub stars. You can look up any server on our public registry: agentseal.org/mcp
1
2
10
178
chiefofautism
chiefofautism@chiefofautism·
someone at ANTHROPIC just showed CLAUDE finding ZERO DAY vulnerabilities in a live conference demo claude has found a zero day in Ghost, 50,000 stars on github, never had a critical security vulnerability in its entire history... it found the blind SQL injection in 90 minutes, stole the admin api key, then did the exact same thing to the linux kernel
305
1.4K
11.8K
1.9M
AgentSeal
AgentSeal@agentseal_org·
@TukiFromKL that's why we built this MCP registry, which gives users an in-depth analysis before they download anything agentseal.org/mcp
1
0
6
5K
Tuki
Tuki@TukiFromKL·
🚨 Andrej Karpathy just explained the scariest thing happening in software right now.. someone poisoned a Python package that gets 97 million downloads a month.. and a simple pip install was enough to steal everything on your machine.. SSH keys.. AWS credentials.. crypto wallets.. database passwords.. git credentials.. shell history.. SSL private keys.. everything.. and here's the part that should terrify every developer alive.. the attack was only discovered because the attacker wrote sloppy code.. the malware used so much RAM that it crashed someone's computer.. if the attacker had been better at coding.. nobody would have noticed for weeks.. one developer.. using Cursor with an MCP plugin.. had litellm pulled in as a dependency they didn't even know about.. their machine crashed.. and that crash saved thousands of companies from getting their entire infrastructure stolen.. Karpathy's take is the real wake up call.. every time you install any package you're trusting every single dependency in its tree.. and any one of them could be poisoned.. vibe coding saved us this time.. the attacker vibe coded the attack and it was too sloppy to work quietly.. next time they won't make that mistake.
Andrej Karpathy@karpathy

Software horror: litellm PyPI supply chain attack. Simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords. LiteLLM itself has 97 million downloads per month which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwnd. Same for any other large project that depended on litellm. Afaict the poisoned version was up for only less than ~1 hour. The attack had a bug which led to its discovery - Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker didn't vibe code this attack it could have been undetected for many days or weeks. Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that do get stolen in each attack can then be used to take over more accounts and compromise more packages. Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've been so growingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.

286
2.2K
13.9K
3.2M
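One concrete mitigation for the poisoned-package scenario Karpathy describes is hash pinning: record each artifact's digest at review time and refuse anything that no longer matches, which is the guarantee `pip install --require-hashes` enforces per wheel. A minimal sketch of the underlying check:

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Accept an artifact only if its SHA-256 matches the pinned digest.

    A swapped-in malicious release changes the bytes, so its digest can
    never match the value recorded when the package was last reviewed.
    """
    return hashlib.sha256(data).hexdigest() == expected_sha256
```

Pinning does not stop a maintainer-account takeover at review time, but it does stop a later silent swap of an already-vetted version.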
AISecHub
AISecHub@AISecHub·
SlowMist Agent Security Skill - github.com/slowmist/slowm… 🔹 Skill/MCP Risks – Detect malicious patterns before installation 🔹 Supply Chain Threats – Identify runtime secondary downloads & build-time injection 🔹 Social Engineering – Defense against prompt injection & pseudo-authority traps 🔹 Code Vulnerabilities – #Audit GitHub repos for exfiltration & backdoors 🔹 On-Chain Risks – Integrated #AML risk assessment Core Defense Libraries: 🔸 patterns/red-flags.md: Code-level dangerous patterns (11 categories) 🔸 patterns/social-engineering.md: Social engineering, prompt injection, and deceptive narratives (8 categories) 🔸 patterns/supply-chain.md: Supply chain attack patterns (7 categories)
9
6
33
1.5K
iShowCybersecurity
iShowCybersecurity@ishowcybersec·
Let’s support each other in cybersecurity. What’s one key lesson you’ve learned?
iShowCybersecurity tweet media
59
10
139
7.8K
AISecHub
AISecHub@AISecHub·
LLM Security Rankings - leaderboard.aidefense.cisco.com/rankings Today, Cisco launched the LLM Security Leaderboard, a resource for evaluating model security risk and susceptibility to adversarial attacks. By providing transparent, adversarial evaluation signals, this leaderboard contextualizes model performance metrics against evaluations of how models handle malicious prompts, jailbreak attempts, and other manipulation strategies. #AISecurity #LLMSecurity #AdversarialAI #Jailbreaks #Cybersecurity
AISecHub tweet media
3
17
97
5.2K
AISecHub
AISecHub@AISecHub·
Vigil - an ever-improving, 100% open-source AI system for security - github.com/Vigil-SOC/vigil Vigil is a community-built AI-Native Security Operations Center built on three pillars: Agents for specific capabilities, Workflows for orchestrated multi-agent workflows, and Integrations for data ingestion, tooling, and integrations with other open source projects. #OpenSourceSecurity #SOC #SecurityOperations #CyberSecurity
AISecHub tweet media
3
14
69
4.2K
Het Mehta
Het Mehta@hetmehtaa·
wtf is this on Github?
Het Mehta tweet media
10
3
118
26.7K
AgentSeal
AgentSeal@agentseal_org·
@hetmehtaa And yes, you should check out our MCP registry agentseal.org/mcp. I'm sure you'll be amazed. It took a lot of effort to analyse those MCP servers, and many more are still in the pipeline.
0
0
4
768
Het Mehta
Het Mehta@hetmehtaa·
AI red teaming and agentic pentest tools are still a mess to track. Drop the ones you actually use below.
12
8
87
11.1K
solst/ICE of Astarte
solst/ICE of Astarte@IceSolst·
“We are compliant / formally verified / written in Rust” != secure Nothing indicates you are secure (impossible), we only find out when you are not. And our job is to minimize both chance and impact of security incidents. “We are secure.” is a lie
11
9
103
7.4K