TestMachine @testmachine_ai

617 posts
Smart contract security at unmatched scale, speed and accuracy. Try the TestMachine App Now - https://t.co/vjMgyTrdbi

United States · Joined December 2022
263 Following · 1.7K Followers
Pinned Tweet
TestMachine @testmachine_ai
🚀 New on the TestMachine blog: We sat down with Coinbase's security team to unpack how TestMachine powers token safety behind Coinbase’s new DEX expansion, enabling trading across millions of ERC-20 tokens without compromising trust. From hidden token privileges to continuous AI-driven monitoring, here’s the inside story on how @coinbase scales token listings securely with TestMachine. Read the full interview 👉 testmachine.ai/blog-posts/coi…
4 replies · 5 reposts · 18 likes · 5K views
Martin Marchev @MartinMarchev
There is no single place that lists all AI tools for web3 security. So I made one. 50 tools. AI auditors, agent toolkits, AI-powered on-chain monitoring, benchmarks, datasets. Every link verified by hand. It's all yours now. 👇
6 replies · 8 reposts · 96 likes · 3K views
TestMachine @testmachine_ai
@chrisdior777 If auditors had high-signal, validated outputs, spam wouldn’t be the problem in the first place. That is why auditors are turning to TestMachine.
0 replies · 0 reposts · 0 likes · 26 views
chrisdior.eth @chrisdior777
HackenProof introduces $1–5 submission fees to reduce AI-driven spam in bug bounty reports. Projects would be able to enable paid submissions per program. (it's optional) This should cut spam and help valid findings get reviewed faster. I personally support that👍
sashko.eth🇺🇦 @d0rsky

Paid submissions? Let's talk.

We need to be honest about what's happening to bug bounty right now. We live in the AI era, where submission volume is growing fast but signal is not. A lot of reports are getting lost, delayed, or stuck in review loops. And this hurts everyone, especially professional whitehats with real findings.

Over the last months, we've been trying to fix this step by step. The reputation points system came first: you submit spam → you get penalty points → you lose the ability to submit. A simple incentive for quality. Then came MCP, which helps teams triage faster, identify duplicates, and reduce review time. Many companies are already using it.

And now we are introducing a new option: submission fees. We've been hearing this request from many companies, and honestly, it feels like the next logical step to make the game fairer for everyone. This is optional, not the default, and not something every company will enable. Fees are going to be small ($1–$5), so this is not about monetization either. This is about adding a bit of friction so people think twice before submitting something they are not confident in. Because today there is almost no downside to spam: with a $20 subscription, any user can generate thousands of reports without even understanding them.

At the same time, we fully understand the concerns. Whitehats are our biggest asset, and we still want new researchers to join the space, so we added:
• free credits for new users (via coupons)
• support for high-signal researchers

The goal is very simple: improve signal without losing important reports. I will keep you in the loop once any HackenProof client enables it. Let's fix bug bounty together.

4 replies · 3 reposts · 35 likes · 2.5K views
TestMachine @testmachine_ai
If you need to add fees to stop AI noise, auditors are using the wrong tooling. Security should be about validated findings, not volume. That’s the approach we’re taking at TestMachine.
sashko.eth🇺🇦 @d0rsky

[same quoted tweet as above]
1 reply · 1 repost · 5 likes · 130 views
TestMachine @testmachine_ai
@blckhv That’s the idea behind Azimuth, an AI companion for auditors focused on signal over noise, verifying every finding so you’re working with real, exploitable issues, not just raw output.
0 replies · 0 reposts · 0 likes · 59 views
Blckhv @blckhv
Done 16 audits for Q1. And saw 2 very different coding patterns: 1/ Vibecodemaxxed master prompt projects: easy, boring bugs, weak architecture. 2/ Measured delegation: basics cleared, clean code, only the interesting bugs left. AI is a companion, not your replacement.🫡
1 reply · 0 reposts · 20 likes · 567 views
TestMachine @testmachine_ai
@MartinMarchev @p_tsanev That’s the key shift from pattern matching to structured, auditor-like workflows. The real question is: are the findings actually validated and exploitable, or just well-packaged signals?
1 reply · 0 reposts · 1 like · 33 views
Martin Marchev @MartinMarchev
Most AI agents throw patterns at code and hope something sticks. This one runs an 8-phase pipeline like an auditor would. 92 skills. 4 chains. Auto PoC generation. Auto fuzz tests. Open-source. Free. No excuses left, anon. This is serious work, @p_tsanev 🫡
Plamen Tsanev @p_tsanev

🚀Dear builders and auditors, your Claude Code sub just became a 100x audit team. Up to 95 specialized AI security agents running in one orchestrated autonomous pipeline. Fully open-source. "Plamen" is live 🔥🐉

2 replies · 2 reposts · 33 likes · 2.8K views
TestMachine reposted
Pandit | Ξ🦇🔊 @panditdhamdhere
AI is dangerously good at smart contract security.
11 replies · 2 reposts · 61 likes · 3.7K views
TestMachine @testmachine_ai
@ShieldifySec Exactly. The best SRs won’t be replaced by AI, they’ll be the ones using it better than everyone else.
0 replies · 0 reposts · 1 like · 33 views
Shieldify Security @ShieldifySec
Many Web3 security researchers feel anxious about AI. Don’t. Do what you’ve always done—learn it, use it, make it work for you. AI is leverage, not competition. Real security talent will be needed more than ever 🫡
6 replies · 1 repost · 31 likes · 1.1K views
TestMachine @testmachine_ai
@ChanniGreenwall Exactly. Audits are a point-in-time check, while risk is continuous. If security isn’t integrated into the pipeline, you’re just hoping nothing breaks after launch.
0 replies · 0 reposts · 2 likes · 15 views
Channi Greenwall @ChanniGreenwall
Continuous security monitoring integrated into a development pipeline costs a fraction of what a single significant exploit costs, and yet most protocols still treat the pre-launch audit as their primary security investment. The math on this has never been complicated, but the industry spent a decade pretending audits were sufficient anyway.
1 reply · 0 reposts · 2 likes · 57 views
TestMachine @testmachine_ai
An auditor recently told us: "I currently use Claude's model as part of my auditing workflow, but the results are not very satisfying. Sometimes I spend a lot of time verifying issues that ultimately do not exist."

That's the exact problem we built Azimuth to solve. Instead of just flagging potential issues, Azimuth runs contracts through a real execution environment and tests whether a vulnerability can actually be exploited. That is real security. Contact us for a 30-min walkthrough: testmachine.ai/#Contact-Us
0 replies · 0 reposts · 1 like · 85 views
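The distinction drawn above — validating a finding by actually executing an exploit, rather than pattern-matching and leaving the triage to a human — can be illustrated with a toy sketch. Azimuth's real internals are not public; `ToyVault` and `validate_reentrancy` are hypothetical names, and the Python model stands in for a real execution environment. The vault's `withdraw` performs its external call before clearing the balance, the classic reentrancy ordering bug, and the validator confirms the finding only if a concrete exploit drains funds.

```python
# Hypothetical sketch: confirm a reentrancy finding by execution, not by pattern.
# ToyVault models a contract whose withdraw() sends funds before zeroing
# the caller's balance -- the classic checks-effects-interactions violation.

class ToyVault:
    def __init__(self):
        self.balances = {}
        self.pot = 0  # total funds held by the contract

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.pot += amount

    def withdraw(self, who, receive_hook):
        amount = self.balances.get(who, 0)
        if amount == 0:
            return
        self.pot -= amount          # funds leave first...
        receive_hook()              # ...external call (attacker re-enters here)
        self.balances[who] = 0      # ...state cleared too late

def validate_reentrancy(vault_factory):
    """Return True only if a concrete exploit actually drains extra funds."""
    vault = vault_factory()
    vault.deposit("victim", 100)
    vault.deposit("attacker", 10)
    state = {"reentered": False}

    def reenter():
        # Re-enter withdraw once while the first call is still in flight.
        if not state["reentered"]:
            state["reentered"] = True
            vault.withdraw("attacker", reenter)

    vault.withdraw("attacker", reenter)
    # Attacker deposited 10, but the pot dropped by 20: exploit confirmed.
    return vault.pot < 100

print(validate_reentrancy(ToyVault))  # True: the finding is exploitable
```

If `withdraw` cleared the balance before the external call, the inner call would see a zero balance, the pot would only drop by the attacker's own 10, and the validator would report the finding as not exploitable — which is the filtering step the quoted auditor was missing.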
TestMachine @testmachine_ai
@asen_sec Most are optimizing for number of findings, not whether those findings are actually validated and exploitable. That is what we do differently at TestMachine.
0 replies · 0 reposts · 3 likes · 95 views
0xasen @asen_sec
Everyone I've seen building AI security tools is optimizing for the wrong metric. And it's leading to the wrong architecture. 🧵
5 replies · 6 reposts · 31 likes · 2.6K views
TestMachine @testmachine_ai
@MartinMarchev Comfort kills curiosity, and in security, that’s where the real bugs hide.
0 replies · 0 reposts · 1 like · 19 views
Martin Marchev @MartinMarchev
Unpopular opinion: the biggest risk AI poses to security researchers is not replacing them. It's making them comfortable.
10 replies · 3 reposts · 76 likes · 2.9K views
TestMachine @testmachine_ai
@0xcuriousapple The real edge is combining both: AI for reasoning, fuzzing for coverage.
1 reply · 0 reposts · 0 likes · 104 views
curiousapple @0xcuriousapple
ai this, ai that. i promise your ai can't even think of 70%+ of scenarios. fuzzer fuzzes. fuzzing remains chad for bigger codebases
4 replies · 0 reposts · 17 likes · 1.4K views
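The "fuzzing for coverage" side of this exchange is easy to demonstrate: a fuzzer checks an invariant against thousands of random inputs and surfaces counterexamples a reasoning pass can miss. This is a minimal sketch, not any particular tool's implementation; `buggy_clamp` and `fuzz` are illustrative names, and the invariant is the standard clamp specification.

```python
# Hypothetical sketch of coverage via fuzzing: exercise a function against
# its stated invariant with random inputs instead of reasoning about it.
import random

def buggy_clamp(x, lo, hi):
    # Intended: clamp x into [lo, hi]. Bug: the upper branch returns lo.
    if x < lo:
        return lo
    if x > hi:
        return lo   # should be hi
    return x

def fuzz(fn, trials=10_000, seed=0):
    """Return a counterexample (x, lo, hi) violating the clamp spec, or None."""
    rng = random.Random(seed)   # fixed seed keeps the run reproducible
    for _ in range(trials):
        lo = rng.randint(-100, 100)
        hi = lo + rng.randint(0, 100)        # guarantees lo <= hi
        x = rng.randint(-1000, 1000)
        expected = min(max(x, lo), hi)       # the clamp invariant
        if fn(x, lo, hi) != expected:
            return (x, lo, hi)               # counterexample found
    return None

print(fuzz(buggy_clamp) is not None)   # True: a violating input was found
```

A correct clamp passes all trials and `fuzz` returns `None`; the buggy version fails almost immediately on any `x > hi`. The same loop, pointed at contract state transitions instead of a pure function, is the coverage complement to AI-driven reasoning that the reply above describes.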
TestMachine @testmachine_ai
@InfectedCrypto You need systems like TestMachine that prove exploitability or refute it, otherwise you’re just arbitrating opinions.
1 reply · 0 reposts · 1 like · 34 views
InfectedCrypto @InfectedCrypto
After having two LLMs argue about the validity of a finding (one saying it's valid, the other that it isn't), I finally understand how it feels to be a judge during contest escalations. Both are so convinced they're right, or they don't care and just want to push their version. Man, that's exhausting. I can't imagine doing this for tens of issues.
3 replies · 0 reposts · 8 likes · 544 views
TestMachine @testmachine_ai
@pashov AI Web3 Security with TestMachine>
0 replies · 0 reposts · 1 like · 50 views
pashov @pashov
AI Web3 Security AI Web3 Security AI Web3 Security AI Web3 Security AI Web3 Security AI Web3 Security AI Web3 Security
9 replies · 3 reposts · 109 likes · 4.9K views
TestMachine @testmachine_ai
Completely agree. AI can surface signals, but validation is critical. That's the idea behind TestMachine's Azimuth: AI to accelerate discovery, with Azimuth's validation engine confirming real exploitability and reducing the false positives that usually require manual triage.
0 replies · 0 reposts · 0 likes · 35 views
0xFrankCastle🦀 @0xcastle_chain
Every audit should incorporate a hybrid approach of AI and SR. Don't you agree that the role of SR is crucial in eliminating false-positives, interacting with the team, and ensuring that the tool is working effectively? The SR's function is akin to the era before AI, meticulously examining details and delving into the implementation process. Additionally, they filter the outcomes of AI scanners and provide direction for them. Remember to stay focused on your primary responsibilities! #Audit #AI
1 reply · 0 reposts · 20 likes · 980 views
TestMachine @testmachine_ai
@RoundtableSpace Not surprising. Autonomous AI red teaming is going to accelerate both offense and defense in cybersecurity, and the pace of vulnerability discovery is about to increase dramatically. That's what we do best at TestMachine.
0 replies · 0 reposts · 0 likes · 37 views
0xMarioNawfal @RoundtableSpace
CYBERSECURITY IS ABOUT TO CHANGE FAST. Someone just open sourced an autonomous AI red team made of multiple agents that coordinate with almost no human input.
84 replies · 123 reposts · 895 likes · 221.4K views
TestMachine @testmachine_ai
@0x15_eth True. AI in security didn’t start with the hype cycle. TestMachine was founded in 2022 with the goal of helping auditors use AI during the audit process. The difference now is simply that the rest of the industry is catching up.
0 replies · 0 reposts · 2 likes · 186 views
0x15.eth @0x15_eth
It’s funny when people who’ve never really used AI for auditing start using an AI agent from some company shilling it and suddenly they’re amazed. AI has been catching bugs since last year.. you just didn’t pay attention until the hype. People have been using AI to find bugs and win bounties and contests since way back, they just kept it low-key because of the stigma around AI back then. It’s not new. It’s just new to you. New is just old happening to new people.
8 replies · 0 reposts · 48 likes · 3.2K views