Paul Price
@darkp0rt

3.2K posts

Cyber & AI | Founder @CodeWall_AI | Helping teams ship securely

London · Joined July 2008
636 Following · 4.6K Followers
Paul Price @darkp0rt ·
“Non-technical teams are now shipping production code” We can hack like it’s 1999 again. Thank you for your future business, @brian_armstrong
Brian Armstrong@brian_armstrong

This is an email I sent earlier today to all employees at Coinbase:

Team,

Today I’ve made the difficult decision to reduce the size of Coinbase by ~14%. I want to walk you through why we're doing this now, what it means for those affected, and how this positions us for the future.

Why now

Two forces are converging at the same time. We need to be front footed to respond to both.

First, the market. Coinbase is well-capitalized, has diversified revenue streams, and is well-positioned to weather any storm. Crypto is also on the verge of the next wave of adoption, with stablecoins, prediction markets, tokenization, and more taking off. However, our business is still volatile from quarter to quarter. While we've managed through that cyclicality many times before and come out stronger on the other side, we’re currently in a down market and need to adjust our cost structure now so that we emerge from this period leaner, faster, and more efficient for our next phase of growth.

Second, AI is changing how we work. Over the past year, I’ve watched engineers use AI to ship in days what used to take a team weeks. Non-technical teams are now shipping production code and many of our workflows are being automated. The pace of what's possible with a small, focused team has changed dramatically, and it's accelerating every day.

All of this has led us to an inflection point, not just for Coinbase, but for every company. The biggest risk now is not taking action. We are adjusting early and deliberately to rebuild Coinbase to be lean, fast, and AI-native. We need to return to the speed and focus of our startup founding, with AI at our core.

What this means

To get there, we are not just reducing headcount and cutting costs, we’re fundamentally changing how we operate: rebuilding Coinbase as an intelligence, with humans around the edge aligning it. What does this mean in practice?

- Fewer layers, faster decisions: We are flattening our org structure to 5 layers max below CEO/COO. Layers slow things down and create coordination tax. The future is small, high-context teams that can move quickly. Leaders will own much more, with as many as 15+ direct reports. Fewer layers also means a leaner cost structure that is built to perform through all market cycles.
- No pure managers: Every leader at Coinbase must also be a strong and active individual contributor. Managers should be like player-coaches, getting their hands dirty alongside their teams.
- AI-native pods: We’ll be concentrating around AI-native talent who can manage fleets of agents to drive outsized impact. We’ll also be experimenting with reduced pod sizes, including “one person teams” with engineers, designers, and product managers all in one role.

In short: AI is bringing a profound shift in how companies operate, and we’re reshaping Coinbase to lead in this new era. This is a new way of working, and we need to leverage AI across every facet of our jobs.

To those who are affected

I know there are real people behind these decisions — talented colleagues who have poured themselves into this company and our mission. To those of you who will be leaving: thank you. You’ve helped build Coinbase into what it is today, and I am sincerely grateful for everything you've done.

All impacted team members will receive an email to their personal account in the next hour with more information, and an invitation to meet with an HRBP and a senior leader in your organization. Coinbase system access has been removed today. I know this feels sudden and harsh, but it is the only responsible choice given our duty to protect customer information.

To those affected, we will be providing a comprehensive package to support you through this transition. US employees will receive a minimum of 16 weeks base pay (plus 2 weeks per year worked), their next equity vest, and 6 months of COBRA. Employees on a work visa will get extra transition support. Those outside of the US will receive similar support, based on local factors and subject to any consultation requirements.

Coinbase prides itself on talent density. Our employees are among the most talented people in the world, and I have no doubt that your skills and experience will be highly sought after as you pursue your next chapters.

How we move forward

To the team that is staying, I know this is a difficult day. We’re saying goodbye to colleagues and friends you've been in the trenches with. But here’s what I want you to know as we move forward together: Over the past 13 years, we have weathered four crypto winters, gone public, and built the most trusted platform in our industry. We’ve made it this far by making hard decisions and by always staying focused on our mission. This time will be no different – nothing has changed about the long-term outlook of our company or industry. And most importantly, our mission has never been more important for the world. Increasing economic freedom requires a new financial system, and we’re building it.

The Coinbase that emerges from this will be more capable than ever to achieve our mission.

Brian

0 replies · 0 reposts · 1 like · 161 views
Paul Price @darkp0rt ·
We built this internally at CodeWall; darkport.co.uk/blog/on-buildi… cc @t_blom @sdianahu
Y Combinator@ycombinator

Company Brain @t_blom

Every company has critical know-how scattered across people's heads, old Slack threads, support tickets, and databases, and AI agents can't operate like that. We think every company in the world is going to need a new primitive: a living map of how the company works that turns its own artifacts into an executable skills file for AI.

0 replies · 0 reposts · 2 likes · 173 views
Newton Cheng @newton_cheng ·
We're looking for people with real offensive security experience (vuln research, rev, pentesting etc.) who've started pulling frontier models into their workflow and want to go deeper. This will be scrappy, iterative, hands-on-keyboard research.
10 replies · 24 reposts · 222 likes · 74.5K views
Paul Price @darkp0rt ·
@trq212 @Miles_Brundage This happens to me but only in sub agents. Is that by design? Is there a way I can force it to use Opus for all sub agents?
0 replies · 0 reposts · 0 likes · 318 views
Thariq @trq212 ·
@Miles_Brundage like the model in /model is just sonnet 4.6 now and you don't remember changing it? definitely a really weird bug, we would see it at scale if it were affecting a lot of people but possible it's a more specific bug that happens rarely... will look into it
33 replies · 0 reposts · 7 likes · 3.6K views
Miles Brundage @Miles_Brundage ·
Lately, Claude has been defaulting to Sonnet in a way that I don't think it ever did before. PLEASE STOP THIS, IT'S REALLY ANNOYING
23 replies · 2 reposts · 309 likes · 64K views
Paul Price @darkp0rt ·
Still won’t make me fly BA
Sawyer Merritt@SawyerMerritt

NEWS: British Airways to launch first @Starlink Wi-Fi flight this month. Starlink Wi-Fi will be free for all passengers. BA currently charges up to £22 on long-haul flights for speeds of up to 5 Mbps. Starlink will deliver over 20X that speed at no additional cost to passengers. With Starlink, nobody will have to enter their credit card details or even be a member of the British Airways Club loyalty program to log on. Travellers will simply connect to the network through the plane's hotspot and access the Internet without a login or payment portal, due to Starlink’s insistence on a frictionless experience. BA's first Starlink-equipped flight will be on a Boeing 787.

0 replies · 0 reposts · 0 likes · 680 views
Paul Price @darkp0rt ·
Got my first CVE and multiple CVSS 10.0s in huge Tier 1 companies by developing a fully autonomous agent system. Write-ups soon!
Joseph Thacker@rez0__

It is hard to communicate how much bug bounty has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks, but imo coding agents basically didn't work for security research before December and basically work since - the models have significantly higher quality, long-term coherence and tenacity, and they can power through large and long hacking tasks, well past enough that it is extremely disruptive to the default bug bounty workflow.

Just to give an example, over the weekend I pointed Claude Code at a new program's scope and wrote: "Here are the target domains. Enumerate subdomains, grab all the JavaScript bundles, run the full analysis pipeline (endpoints, secrets, source-sink tracing, postMessage handlers), fuzz the discovered paths, spider the authenticated surface, check for IDORs on user APIs, test any interesting GraphQL endpoints, and write up an HTML report of everything you find." The agent went off for ~30 minutes, ran into multiple issues (auth failures, WAF blocks, malformed responses), researched solutions, resolved them one by one, analyzed the JS, fuzzed endpoints, tested access controls, and came back with the report. Two confirmed vulnerabilities and a handful of interesting leads. I didn't touch anything. All of this could easily have been a full weekend of manual work just 3 months ago, but today it's something you kick off and forget about for 30 minutes.

As a result, bug bounty hunting is becoming unrecognizable. You're not manually clicking through Burp Suite and hand-testing parameters one by one, the way things have been since this industry started; that era is over. You're spinning up AI agents, giving them targets *in English*, and managing and reviewing their output in parallel.

The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrator agents with all the right skills, memory and instructions that productively manage multiple parallel hacking instances for you. The leverage achievable via top-tier "agentic engineering" for security research feels very high right now.

My friends and I have been building out custom skill libraries for Claude Code - things like JS static analysis pipelines, authenticated fuzzing, IDOR testing frameworks, GraphQL introspection - and sharing them with each other. Each person's agent gets better as the collective skill set grows. We're finding more bugs in a week than we used to find in a month.

It's not perfect: it needs high-level direction, judgement, hacker intuition, oversight, iteration, and hints and ideas. It works a lot better in some scenarios than others (e.g. especially for targets with thick JavaScript clients where you can verify findings with a curl command). The key is to build intuition to decompose the target just right, hand off the recon and testing parts that work, and help out around the edges with the creative exploitation.

But imo, this is nowhere near "business as usual" time in bug bounty.

0 replies · 0 reposts · 1 like · 666 views
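The fan-out workflow described above — a parent loop dispatching recon subtasks to specialized sub-agents and merging their findings into one report — can be sketched minimally. This is a toy under loud assumptions: the `SubAgent` class, task names, and placeholder skill functions are all hypothetical; a real setup would shell out to a coding-agent CLI rather than call local functions.

```python
"""Minimal sketch of an agent fan-out loop (hypothetical names throughout).
A real orchestrator would invoke agent processes, not local functions."""
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Finding:
    task: str       # which pipeline stage produced this
    severity: str   # info / low / high ...
    detail: str


@dataclass
class SubAgent:
    name: str
    skill: Callable[[str], List[Finding]]  # the recon skill this agent wraps

    def run(self, target: str) -> List[Finding]:
        return self.skill(target)


def enumerate_subdomains(target: str) -> List[Finding]:
    # Placeholder: a real agent would drive subdomain-enumeration tooling here.
    return [Finding("subdomains", "info", f"api.{target}")]


def scan_js_bundles(target: str) -> List[Finding]:
    # Placeholder: a real agent would fetch bundles and trace sources to sinks.
    return [Finding("js-analysis", "high", f"hardcoded key in {target}/app.js")]


def orchestrate(target: str, agents: List[SubAgent]) -> List[Finding]:
    report: List[Finding] = []
    for agent in agents:
        report.extend(agent.run(target))  # real setups run these in parallel
    return report


agents = [SubAgent("recon", enumerate_subdomains),
          SubAgent("js", scan_js_bundles)]
report = orchestrate("example.com", agents)
for f in report:
    print(f"[{f.severity}] {f.task}: {f.detail}")
```

The point of the structure is the review step at the end: the human reads the merged report, not the individual agent transcripts.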
Paul Price @darkp0rt ·
@thedawgyg @slinafirinne This is only a harness and prompting issue, easily fixable. You create specialized sub agents that have access to a test environment and go through the kill chain—highly adaptable and accurate
0 replies · 0 reposts · 0 likes · 23 views
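The reply above frames this as a harness-and-prompting problem: one specialized sub-agent per kill-chain stage, each result validated against a test environment before the next stage runs. A minimal sketch of that gating loop; the stage names and the `run_stage`/`verify` stubs are illustrative, not a real framework (a real `verify` would replay the candidate finding in a sandboxed copy of the target).

```python
"""Sketch of sub-agents gated per kill-chain stage (illustrative stubs)."""

KILL_CHAIN = ["recon", "weaponize", "deliver", "exploit"]


def run_stage(stage: str, context: dict) -> dict:
    # Placeholder: each stage would invoke a sub-agent prompt tuned for it,
    # with access to the test environment.
    return {**context, stage: f"{stage}-output"}


def verify(stage: str, context: dict) -> bool:
    # Placeholder: replay the candidate finding in a disposable sandbox;
    # only verified results feed the next stage.
    return stage in context


def run_chain(target: str) -> dict:
    context = {"target": target}
    for stage in KILL_CHAIN:
        context = run_stage(stage, context)
        if not verify(stage, context):
            raise RuntimeError(f"stage {stage} failed verification")
    return context


result = run_chain("staging.example.com")
print(sorted(result))
```

The verification gate is what makes the loop "highly adaptable and accurate" in the reply's terms: unverified output never propagates downstream.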
dawgyg - WoH @thedawgyg ·
@slinafirinne Yea, I am seeing the same. The downside is when it's wrong, it's very confident in the wrong answer as well and will try to convince you it's right, unless you can quickly set it straight. So it still needs someone very knowledgeable in the loop to make it not suck.
2 replies · 0 reposts · 6 likes · 581 views
dawgyg - WoH @thedawgyg ·
Hot take: Everyone that is worried that ClaudeAI now doing 'code security' is going to end security jobs has obviously never used an SCA/SAST tool, and it shows... if you're worried they are gonna replace you, then you probably aren't very good to begin with lol
24 replies · 23 reposts · 355 likes · 15.8K views
Paul Price @darkp0rt ·
@TheBlockChainer Interested to hear more about how you’re using AI because this is not my experience at all. In the right harness, agent driven loops are more than capable of finding business logic issues and new bugs not discovered before.
0 replies · 0 reposts · 0 likes · 88 views
Bloqarl | Zealynx @TheBlockChainer ·
I've been using AI for smart contract audits longer than most people in this space. And the more I use it, the more confident I am that human auditors aren't going anywhere.

Not because AI is bad. It's actually useful. I use it on every single audit I run. But there's a massive gap between "useful" and "ready to replace humans" — and I don't think enough people who actually do this work are talking about it honestly.

---

Here's what AI is genuinely good at:

Finding known patterns. Common misconfigs. Simple reentrancy. Integer overflow in obvious places. If a bug has been documented before and looks similar enough to the training data, AI will catch it fast. Think of it as a tireless junior auditor who's read every public audit report ever written. That's valuable. I'm not dismissing it.

---

Here's where it completely falls apart:

Business logic bugs. These are the vulnerabilities that come from understanding *why* the code was written — not just *what* it does. When a protocol's incentive design creates an edge case that only makes sense if you understand the economics, AI doesn't see it. It's not pattern-matching against known exploits anymore. It's modeling human intent. That's a different skill entirely.

Novel attack vectors. The most expensive hacks in DeFi history weren't reentrancy. They were things nobody had seen before. AI can't find what it hasn't been trained on. Human auditors think like attackers — they ask "how would I break this?" That adversarial mindset isn't something you can replicate with a language model. Not yet.

Composability risks. DeFi protocols don't exist in isolation. A vulnerability might only appear when Protocol A interacts with Protocol B under a specific market condition. Understanding those interactions requires deep context about the entire ecosystem. AI looks at one codebase at a time. Attackers don't.

---

The irony is this: The more I use AI in my audits, the more clearly I can see the ceiling. It handles the surface. Humans still have to handle everything beneath it.

Fear that AI will replace auditors assumes auditing is mostly about finding known bugs in isolated codebases. It's not. It's modeling systems, understanding intent, thinking like an attacker, and knowing the ecosystem well enough to spot what doesn't fit. That work is still very human.

---

Will that change? Maybe. Probably, eventually, in some form. But "eventually" is doing a lot of work in that sentence.

Right now, if you're an auditor worried about your job, the threat isn't AI. The threat is auditors who use AI better than you do. Learn the tools. Use them. Just don't confuse the tool for the skill.
13 replies · 9 reposts · 89 likes · 5K views
Paul Price @darkp0rt ·
@Ehsan1579 You realize this is the worst it will ever be? In a few months it will outperform the most cracked hackers, finding complex multichain vulnerabilities.
0 replies · 0 reposts · 0 likes · 368 views
Paul Price @darkp0rt ·
@Ehsan1579 “Will it find bugs that can cause multi-million dollar losses, very unlikely” You realize this has already happened? Multiple times.
3 replies · 0 reposts · 2 likes · 1K views
Ehsan @Ehsan1579 ·
It honestly baffles me how naive some people are in this field. Cybersecurity is massive; it’s not one lane. You’ve got web2, web3, reverse engineering, appsec, infra, hardware, and a dozen other deep specializations. The idea that one generic tool can be “specialized” to cover every level of security research just screams inexperience. If you’ve spent any real time doing this work, consistently finding real issues, you’d know better. A lot of people think I’m against AI in general for auditing; that’s not true, I have high hopes for it. But no, this tool isn’t going to revolutionize an industry this complex. Will it find regular bugs on a day-to-day basis in a codebase? Yes. Will it find bugs that can cause multi-million dollar losses? Very unlikely. Be for real now. And let me be clear, this tool will specifically be absolutely terrible in web3 security research lmao. Smart contract auditing is extremely complex because the types of vulnerabilities in Solidity are completely different.
Ehsan@Ehsan1579

If you seriously think this is gonna find some live complex vulnerability, you’re tripping lmao.

12 replies · 9 reposts · 175 likes · 24K views
omnipotent @omnipotentblock ·
I will be scared of LLM-based security when they reach this level. The timeline thinks AI is replacing security researchers because it can spot basic linting errors. Meanwhile, @hunter0x7 just executed an absolute masterclass:

> Reverse-engineered a custom AES+RSA client-side encryption scheme.
> Extracted Azure AD secrets to forge a high-privilege JWT "super token".
> Hooked JSON.stringify and XHR to steal plaintexts in the browser runtime.
> Bypassed rate-limited MFA via IP rotation.
> Chained it all together for a mass account takeover.

An LLM cannot map a cross-context exploit chain like this. It requires actual deterministic intuition. The human threat actor remains undefeated.
Ahsan Khan@hunter0x7

Critical: Client-Side Encryption Collapse

site.com
↓ some_javascript.js
↓ Line no 80519 → encObj + base64 key
↓ atob(val) → "Encoded_Password"
↓ CryptoJS.AES.decrypt(encObj, passphrase)
↓ 55 configuration properties → 107 operational secrets exposed
  → Azure AD client_secret → OAuth client_credentials flow
  → RSA public keys → Forge encrypted /enc/ API requests
  → HMAC key → Backend-accepted payload signing
  → Direct Line token → Production chatbot access
  → Monitoring / RUM keys → Telemetry manipulation
  → Auth0 + reCAPTCHA config → Auth flow manipulation
  → 31+ encrypted authentication endpoints mapped
↓ Use extracted Azure AD credentials
↓ Request token from Microsoft OAuth endpoint (client_credentials)
↓ Receive valid JWT with high-privilege role (e.g., AllAccess)
↓ “Super token” accepted by backend across protected API routes (no user interaction required, role-based authorization granted)
↓ All sensitive authentication and account endpoints were wrapped in client-side hybrid encryption
  → Every request payload encrypted in browser
  → AES-256-CBC used for body encryption
  → RSA-OAEP used to wrap per-request AES key
  → Server accepts any request that decrypts successfully
  → Decryption success treated as implicit authorization
↓ Reverse-engineer encryption module (@**6246)
  → Algorithm: AES-256-CBC + RSA-OAEP (SHA-512)
  → Random 32-byte AES key per request
  → IV derived client-side
  → AES key wrapped with embedded RSA public key (promocode_pem)
  → Final format: { "key": base64(RSA_key), "body": hex(AES_ciphertext) }
↓ Hook JSON.stringify + XMLHttpRequest
↓ Capture plaintext BEFORE encryption (credentials, OTPs, tokens)
  Capture encrypted wrapper AFTER encryption
  Capture correlated server responses
↓ Analyze MFA implementation
↓ IP-based rate limiting only (lockout resets on IP change)
  OTP expiration not strictly enforced server-side
  Encrypted payload fields trusted after decryption
↓ Mass takeover method
↓ 1. Trigger MFA or password reset
  2. Rotate IP to bypass rate limiting
  3. Reuse or brute-force OTP under weak enforcement
  4. Complete password reset flow
  5. Authenticate as victim
  6. Capture decrypted OTP and auth tokens via runtime hook
  7. Reuse valid 2FA tokens for subsequent authenticated requests
↓ Full attack chain achieved:
  → Extract secrets from client bundle
  → Generate high-privilege JWT (“super token”)
  → Read any plaintext request (credentials, PII, tokens)
  → Forge any encrypted request the server will accept
  → Bypass MFA protections via IP rotation
  → Reset victim passwords
  → Decrypt authentication flows in runtime
  → Mass account takeover

1 reply · 1 repost · 20 likes · 2.9K views
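The IP-rotation step in the chain above exploits a common flaw: OTP attempts throttled per source IP, so the lockout counter resets whenever the attacker changes address. A toy simulation of that weakness; the class name and limits are illustrative, not taken from the target.

```python
"""Toy model of a per-IP OTP throttle and why IP rotation defeats it."""


class PerIpOtpThrottle:
    MAX_ATTEMPTS = 5  # failed attempts allowed per IP before lockout

    def __init__(self):
        self.attempts = {}  # ip -> failed attempt count

    def try_otp(self, ip: str, otp: str, correct: str) -> str:
        if self.attempts.get(ip, 0) >= self.MAX_ATTEMPTS:
            return "locked"
        if otp == correct:
            return "ok"
        self.attempts[ip] = self.attempts.get(ip, 0) + 1
        return "wrong"


throttle = PerIpOtpThrottle()
correct = "4821"

# A single IP is locked out after MAX_ATTEMPTS failures...
for guess in range(5):
    throttle.try_otp("1.1.1.1", f"{guess:04d}", correct)
assert throttle.try_otp("1.1.1.1", correct, correct) == "locked"

# ...but rotating the source IP every few guesses never hits the limit,
# so a 4-digit OTP can be brute-forced despite the throttle.
result = None
for guess in range(10000):
    ip = f"10.0.{guess // 4}.1"  # fresh counter every 4 attempts
    result = throttle.try_otp(ip, f"{guess:04d}", correct)
    if result == "ok":
        break
print(result)  # "ok"
```

The fix implied by the write-up is server-side: throttle per account (and enforce OTP expiry), not per source address.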