Cotool

40 posts


@cotoolai

Composable AI agents for security teams

Joined May 2025
14 Following · 414 Followers
Pinned Tweet
Cotool
Cotool@cotoolai·
We've raised a $7.4M seed round led by @a16z to build the agent operating system for security teams.

Threat actors now scale with tokens. Campaigns that used to require a coordinated team can be run by a small group with the right model harness. Defense has been absorbing that hit with the same playbook and the same headcount. We built Cotool to make defense compound in the same way.

Grateful to the team at @a16z, @YCombinator, @WndrCoLLC, @homebrew, and our angels from Okta, Ramp, Cloudflare, and others who've lived this problem firsthand.

If you're a security practitioner looking for more leverage in the AI age, come see how Cotool can help!
Cotool
Cotool@cotoolai·
Frontier labs are steadily exhausting open datasets and RL environments. Public benchmarks have a shelf life, and it's shortening fast. With the Cotool Research project, we're building private evals to stay ahead.

The goal is simple: give defenders an accurate picture of how frontier agents actually perform on defensive security tasks. @ThruntingLabs helped us build exactly that: a benchmark grounded in real intrusion data from a live environment.

CTFs are still useful, but as security teams hand more responsibility to agents, we need realistic data to compare models and agent harnesses. Link in thread for results!
Threat Hunting Labs@ThruntingLabs

We are proud to partner with @cotoolai and shine a light on the incredible work they are doing! A lot of AI security evaluations for frontier models miss the mark: they compare apples to oranges by using synthetic evaluation data to assess real-world workflows. Cotool understands this and takes a different approach.

This evaluation was built around real intrusion data informed by our macOS intrusion reporting, and their write-up is excellent. The results are genuinely interesting, especially in showing both the progress AI has made and the room still left for harder, more realistic evaluation. We're glad to be part of it and look forward to supporting future reporting and evaluations with Cotool.

Blog post: cotool.ai/blog/beyond-ct…
Research: cotool.ai/research/macos…

Cotool
Cotool@cotoolai·
New case study alert! Learn how @elise_ai is leveraging Cotool to support its rapid growth. Across detection, response, and 24/7 operations, Cotool agents have enabled eliseAI to scale with flexibility and control across all security functions.
Cotool
Cotool@cotoolai·
New from Cotool: NYU CTF Bench. We evaluated models on 81 real CSAW CTF challenges to measure end-to-end cyber capability. Takeaway: reasoning depth still matters a lot in real security workflows. Full results here: cotool.ai/research/nyu-c…
lolly
lolly@Loll_ymandy·
@cotoolai @a16z Congrats on the milestone! Will Cotool integrate with existing security tools, or is it designed as a fully standalone agent OS?
Nazar
Nazar@ustyianskyi·
daily early projects:
- @coldvisionXYZ - prediction markets + a new execution layer
- @BlockRunAI - economic layer for AI agents
- @cotoolai - composable AI agents for security teams, $7.4M seed round led by a16z
- @Tetra_Chain - TVM execution layer linking stablecoins, DeFi, AI, privacy
- @QFEX - perp futures exchange, $9.5M seed round by @yuris
- @crossover_mkts - execution-only crypto ECN for institutions
- @KurtosisLabs - quantum computing x DeFi (Solana)
- @mynoraai - agentic AI for secure, high-performance Web3

bookmark if useful. best ones will go into my weekly notes.
Cotool
Cotool@cotoolai·
The job's just getting started!
Max Pollard@maxpollard415

Excited to announce that @cotoolai has raised a $7.4M seed round led by @a16z to build the agent operating system for security teams.

Threat actors now scale with tokens. Campaigns that used to require a coordinated team can be run by a small group with the right model harness. Defense has been absorbing that hit with the same playbook and the same headcount. We built Cotool to make defense compound in the same way.

Grateful to the team at @a16z, @ycombinator, @WndrCoLLC, @homebrew, and our angels from Okta, Ramp, Cloudflare, and others who've lived this problem firsthand.

If you're a security practitioner looking for more leverage in the AI age, come see how Cotool can help!

Cotool
Cotool@cotoolai·
We added a new cohort of frontier models to our eval! Gemini 3 Pro, Claude Opus 4.5, and GPT-5.1 are all compared in our updated post: x.com/cotoolai/statu…
Cotool@cotoolai


Cotool
Cotool@cotoolai·
Blog Post: cotool.ai/blog/evaluatin…

Evals in security operations are an evergreen challenge. As agents take over more security operations tasks, benchmarking performance becomes increasingly critical. Our goal is to push the community forward with better metrics so that security teams can properly understand agent capabilities before handing over mission-critical tasks.

We have already identified a lot of future work that can build on what we're sharing today, including sharing more tasks and adding comparisons of OSS model performance.

If you are:
- Participating in or building blue-team CTF challenges or security training scenarios
- Working with production security datasets that could be anonymized for benchmarking
- Researching agent evaluation methodologies or prompt optimization techniques
- Running a security operations team interested in testing agents in controlled environments
- Building security-specific agents at your company and have insights on model effectiveness for different tasks

We'd love to hear from you! DM us directly here on X @cotoolai
Cotool
Cotool@cotoolai·
📊 Today we're sharing initial results from one of our internal agent evals for Security Operations tasks.

We replicated the @splunk BOTSv3 CTF environment in an eval to test frontier models' capability on realistic blue team cybersecurity tasks. BOTSv3 comprises over 2.7M logs (spanning over 13 months) and 59 question-and-answer pairs that test scenarios such as investigating cloud-based attacks (AWS, Azure) and simulated APT intrusions.

See results and blog post in the thread below
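Cotool's harness is not public, but the benchmark shape described above (a fixed set of questions over logs, each with a known answer) can be sketched in a few lines. Everything here is illustrative: `run_agent` stands in for a real model-plus-Splunk tool loop, and normalized exact match is an assumed grading rule, reasonable because BOTS answers are short values like IPs and usernames.

```python
# Minimal sketch of a BOTSv3-style Q&A eval loop (hypothetical; not
# Cotool's actual harness). Each task is a question with a known answer;
# the agent investigates the logs and returns a string.

def score_eval(tasks, run_agent):
    """Return the fraction of questions the agent answers exactly."""
    correct = 0
    for task in tasks:
        answer = run_agent(task["question"])
        # Normalize whitespace and case before the exact-match comparison.
        if answer.strip().lower() == task["answer"].strip().lower():
            correct += 1
    return correct / len(tasks)

# Stub agent standing in for a real model + log-search tool loop.
def stub_agent(question):
    return "4.4.4.4" if "IP" in question else "unknown"

tasks = [
    {"question": "What IP did the attacker use?", "answer": "4.4.4.4"},
    {"question": "Which user was compromised?", "answer": "fyodor"},
]
print(score_eval(tasks, stub_agent))  # → 0.5
```

With 59 such pairs, a single accuracy number per model falls out directly, which is what makes cross-model comparisons on the same environment possible.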
Cotool
Cotool@cotoolai·
6/6 Finally, this work is a follow-up to a previous blog post we put out. For more on the motivation, methodology, and the eval itself, check out our initial post here: x.com/cotoolai/statu…
Cotool@cotoolai


Cotool
Cotool@cotoolai·
5/6 Full Blog Post: cotool.ai/blog/evaluatin…

Evals in security operations are an evergreen challenge. As agents take over more security operations tasks, benchmarking performance becomes increasingly critical. Our goal is to push the community forward with better metrics so that security teams can properly understand agent capabilities before handing over mission-critical tasks.

We have already identified a lot of future work that can build on what we're sharing today, including sharing more tasks and adding comparisons of OSS model performance.

If you are:
- Participating in or building blue-team CTF challenges or security training scenarios
- Working with production security datasets that could be anonymized for benchmarking
- Researching agent evaluation methodologies or prompt optimization techniques
- Running a security operations team interested in testing agents in controlled environments
- Building security-specific agents at your company and have insights on model effectiveness for different tasks

We'd love to hear from you! DM us directly here on X @cotoolai
Cotool
Cotool@cotoolai·
1/6 📊 UPDATED EVAL RESULTS

We compared Gemini 3 Pro, Claude Opus 4.5, and GPT-5.1 on a single investigation task of our internal agent eval for Security Operations tasks.

Key Results:
- @OpenAI GPT-5+ models maintain the performance-cost Pareto frontier
- @AnthropicAI Opus 4.5 completed tasks 2x faster on average than any other tested model, including Haiku 4.5 (!), suggesting that model reasoning capability and efficiency can outweigh raw inference latency in long-horizon tasks
- @GoogleDeepMind Gemini 3 Pro helps Google close the gap to other leading frontier models, but still lags behind in performance and reliability

The task is a @splunk BOTSv3 CTF environment built to test frontier models' capability on realistic blue team cybersecurity tasks. BOTSv3 comprises over 2.7M logs (spanning over 13 months) and 59 question-and-answer pairs that test scenarios such as investigating cloud-based attacks (AWS, Azure) and simulated APT intrusions.

See results and blog post in the thread below
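The "performance-cost Pareto frontier" claim above has a precise meaning: a model is on the frontier if no other model is both at least as cheap and at least as accurate (and strictly better on one axis). A small sketch of that check, with made-up model names and numbers purely for illustration:

```python
# Sketch of a performance-cost Pareto frontier check. The (cost, score)
# figures below are invented for illustration, not real eval results.

def pareto_frontier(models):
    """models: dict of name -> (cost_usd, score). Return frontier names."""
    frontier = []
    for name, (cost, score) in models.items():
        # A model is dominated if some other model is no worse on both
        # axes and strictly better on at least one.
        dominated = any(
            other != name
            and o_cost <= cost and o_score >= score
            and (o_cost < cost or o_score > score)
            for other, (o_cost, o_score) in models.items()
        )
        if not dominated:
            frontier.append(name)
    return sorted(frontier)

models = {
    "model-a": (1.0, 0.70),  # cheap, decent: on the frontier
    "model-b": (3.0, 0.85),  # pricier but better: on the frontier
    "model-c": (4.0, 0.80),  # costs more than model-b, scores less: dominated
}
print(pareto_frontier(models))  # → ['model-a', 'model-b']
```

Framing results this way is why a mid-tier model can "win" an eval writeup: it only needs to be undominated at its price point, not best overall.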