jdeleonsqa

328 posts

@jdeleonsqa

VP at SQA². QA leader with 10+ years' experience in automation, data-driven QA, and leading teams. Expertise in e-commerce, healthcare, media, and blockchain.

Joined September 2024
12 Following · 9 Followers
jdeleonsqa@jdeleonsqa·
Most teams bring QA in at the end. Then wonder why releases are painful. Here's what actually changes when QA is embedded from the start:

1. Bugs get caught in design, not deployment. My team reviews requirements and user stories before a single line of code is written. That's where the real shift happens.

2. The feedback loop shrinks from weeks to hours. When QA sits inside the sprint, engineers hear about issues the same day, not after a release candidate is already cut.

3. Release gates stop being last-minute scrambles. A pattern I see often: teams without embedded QA spend their final week before launch in pure triage mode. Embedded QA makes that final week boring, in the best way.

4. Engineers start thinking in edge cases earlier. When QA has a voice in backlog grooming and standups, the whole team's quality instincts sharpen.

5. "Done" actually means done. Not "done pending QA review." Done.

The question I ask every engineering leader I meet: at what point in your cycle does QA have a voice? The answer tells me almost everything about why your releases feel the way they do.
jdeleonsqa@jdeleonsqa·
AI writes code faster than any engineer I know. It also breaks in production faster than most. Here is what I keep seeing:

1. AI-generated code solves for what the prompt says, not what the user actually needs. The "why" behind a requirement never makes it into the output.

2. Edge cases get ignored. The happy path works. Everything outside it is a gamble.

3. Tests are optional unless you ask for them explicitly, and even then, they often validate what the AI wrote, not what the system requires (see the sketch after this post).

4. When something fails at 2am, "the AI wrote it" is not a root cause your on-call engineer can act on.

5. Speed of generation does not equal quality of delivery. Shipping faster without testing is just failing faster.

I use these tools. My team uses these tools. They are genuinely useful for moving quickly. But AI is not a QA strategy. Someone still has to own the coverage, the edge cases, and the accountability when it breaks.

Are you treating AI-generated code the same way you would treat any other untested code hitting production?
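A minimal sketch of the difference in point 3, in Python with pytest. `apply_discount` is a hypothetical stand-in for AI-generated code, and the $50 rule is an invented requirement; the point is that the second suite encodes the spec, not the implementation.

```python
# Hypothetical requirement: "discounts apply only to orders over $50,
# and never reduce the total below $0".

import pytest

def apply_discount(total: float, discount: float) -> float:
    """Stand-in for AI-generated code under test (hypothetical)."""
    if total > 50:
        return max(total - discount, 0.0)
    return total

# Happy path: what a generated suite typically covers.
def test_discount_applies_over_threshold():
    assert apply_discount(100.0, 20.0) == 80.0

# Requirement-driven edge cases: the part someone still has to own.
@pytest.mark.parametrize("total,discount,expected", [
    (50.0, 10.0, 50.0),   # exactly at threshold: no discount per spec
    (60.0, 80.0, 0.0),    # discount larger than total: floor at zero
    (0.0, 5.0, 0.0),      # empty cart: unchanged
])
def test_requirement_edge_cases(total, discount, expected):
    assert apply_discount(total, discount) == expected
```

The happy-path test would pass against almost any implementation; the parametrized cases are the ones that encode what the system actually requires.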
jdeleonsqa@jdeleonsqa·
A single payment processing bug in production doesn't just break a transaction. It breaks trust. Here's what I've seen it actually cost a fintech:

1. Failed transactions at scale. Even a short window of payment failures can affect thousands of users simultaneously. The dollar amount of declined or lost transactions adds up fast.

2. Regulatory exposure. Payment bugs in fintech aren't just a UX problem. Depending on the failure type, you're looking at potential compliance flags, audit trail gaps, and conversations with your banking partners you didn't want to have.

3. Support queue explosion. Customer support tickets spike immediately. Each one costs agent hours and goodwill you can't get back.

4. Engineering all-hands. Your best engineers drop everything. That's lost sprint capacity, delayed roadmap items, and a team now in reactive mode instead of building.

5. Churn you can't see coming. Users who hit a failed payment often don't complain. They just leave. Quietly.

The bug itself might take a few hours to fix. The aftermath takes weeks. I've seen teams treat QA as the thing they'll get to later. In fintech, later is expensive.

What's your team doing to catch payment edge cases before they reach production?
jdeleonsqa@jdeleonsqa·
Speed and quality are not opposites. The teams that ship fast AND safely treat QA as part of the build cycle, not a gate at the end. Here is how my team approaches it:

1. Define "done" before you start. If your definition of done does not include test coverage, you are setting up a fire drill at the end of every sprint (see the sketch after this post).

2. Shift testing left. The earlier you catch a defect, the cheaper it is to fix. This is not a new idea, but most teams still treat QA as the last step.

3. Automate the repetitive layer. Regression testing does not need a human every time. Free your QA engineers to focus on edge cases, integrations, the things automation misses.

4. Keep feedback loops short. When QA is in the same timezone as your engineers, questions get answered in minutes, not the next morning.

5. Make the cost of bugs visible. When a team can see what a production incident actually costs in engineering hours and customer trust, they make different decisions earlier.

The question I always ask: does your team have an actual answer to how you ship fast AND safely, or is that just something you hope happens?
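One way to make point 1 concrete is to encode the coverage part of "done" as a build gate. A minimal sketch, assuming coverage.py has already written a `coverage.json` report (via `coverage run -m pytest && coverage json`); the 80% threshold is an illustrative choice, not a universal rule.

```python
# Fail the CI job when coverage drops below the team-agreed minimum.

import json
import sys

THRESHOLD = 80.0  # assumed team-agreed minimum, adjust per project

def main() -> int:
    with open("coverage.json") as f:
        report = json.load(f)
    percent = report["totals"]["percent_covered"]
    if percent < THRESHOLD:
        print(f"Coverage {percent:.1f}% is below the agreed {THRESHOLD:.0f}% gate.")
        return 1  # nonzero exit fails the CI job
    print(f"Coverage {percent:.1f}% meets the definition of done.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```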
jdeleonsqa@jdeleonsqa·
Most QA reports I see are technically accurate and completely useless.

1. A 98% pass rate sounds great until the failures are all in your checkout flow. The number is real. The context is missing.

2. Your product owner has one question: is this safe to ship? A defect density chart does not answer that.

3. Your delivery lead needs to know if the timeline is at risk, not how many test cases ran this sprint.

4. Your stakeholders are not QA engineers. If they have to decode your report, you have already lost them.

5. Every metric in a QA report should point to a decision someone can make. If it does not, cut it (see the sketch after this post).

A report that looks good but drives no action is just noise with better formatting.

What question is your QA reporting actually built to answer?
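A minimal sketch of point 5 in Python: raw results reduced to the one decision stakeholders need. `TestResult`, the `risk_area` tags, and the blocking areas are all hypothetical; the shape of the output is the point.

```python
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    passed: bool
    risk_area: str  # e.g. "checkout", "search" (assumed tagging scheme)

BLOCKING_AREAS = {"checkout", "payments"}  # assumed ship-blocking flows

def release_summary(results: list[TestResult]) -> str:
    """Answer 'is this safe to ship?' instead of listing counts."""
    failures = [r for r in results if not r.passed]
    blocking = [r for r in failures if r.risk_area in BLOCKING_AREAS]
    if blocking:
        names = ", ".join(r.name for r in blocking)
        return f"NO-GO: {len(blocking)} failure(s) in ship-blocking flows ({names})."
    if failures:
        return f"GO with caution: {len(failures)} failure(s), none in blocking flows."
    return "GO: all tests passed."

if __name__ == "__main__":
    demo = [
        TestResult("test_checkout_total", False, "checkout"),
        TestResult("test_search_filters", True, "search"),
    ]
    print(release_summary(demo))  # NO-GO: 1 failure(s) in ship-blocking flows ...
```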
jdeleonsqa@jdeleonsqa·
Most people think QA is just clicking through a UI. The engineers who work with my team quickly realize it goes much deeper than that. Here is what knowing the full stack actually means in practice:

1. Front end testing means thinking like a user. You find the friction, the broken flows, the things that make someone abandon a checkout before they ever write a support ticket.

2. Back end testing means understanding data. A field that looks fine on screen can be silently wrong at the database layer. That gap is where real damage hides (see the sketch after this post).

3. API testing reveals what the product actually does, not what the UI pretends it does. That distinction matters more than most teams expect.

4. Performance testing shows how a system behaves under pressure. Load, concurrency, degradation. You stop assuming and start knowing.

5. Putting it all together means I can trace a bug from a user click to a failed query without handing it off to four different people.

A pattern I see often: teams don't realize how much cross-stack context a strong QA engineer holds until they have one embedded full-time.

What layer of the stack do you think QA engineers most often overlook?
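A minimal sketch of point 2, using an in-memory SQLite table as a stand-in for the real database. The schema and the integer-cents rule are hypothetical; the idea is asserting what is actually stored, not what the screen renders.

```python
import sqlite3

def test_price_stored_exactly():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total_cents INTEGER)")
    # Simulate the application writing an order. If the app stored
    # dollars as floats instead of integer cents, 19.99 could become
    # 19.989999... at the data layer while the UI still shows $19.99.
    db.execute("INSERT INTO orders (total_cents) VALUES (?)", (1999,))
    (stored,) = db.execute("SELECT total_cents FROM orders").fetchone()
    assert stored == 1999           # exact integer cents at the data layer
    assert isinstance(stored, int)  # no silent float coercion

if __name__ == "__main__":
    test_price_stored_exactly()
    print("data-layer check passed")
```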
jdeleonsqa@jdeleonsqa·
AI is the best tool on a QA engineer's bench right now. It is not a substitute for the engineer holding it. Here is what I have seen working with testing teams:

1. AI can generate test cases fast, but it generates them against what it is told. Deciding what to tell it, and what to leave out, requires judgment built from real experience.

2. AI does not attend sprint planning. It does not hear why a scope change happened mid-week. My team carries that context. AI cannot.

3. Exploratory testing is still human territory. Finding the edge case nobody thought to specify is pattern recognition trained on years of breaking things intentionally.

4. AI output needs review. If the engineer cannot evaluate whether a generated test covers the right risk, the noise will bury the signal.

5. The engineers who thrive with AI are the ones who were already strong without it. The tool amplifies skill. It does not install it.

AI is making experienced QA engineers faster and more thorough. It is not producing QA engineers from scratch.

If your team is adopting AI-assisted testing, what skills are you prioritizing in new hires?
jdeleonsqa@jdeleonsqa·
In fintech, a defect isn't just a bug. It's a compliance event. I've seen what happens when a payment processing flaw slips past QA into production. The cost isn't just engineering time to fix it. Consider what actually accumulates:

1. Regulatory exposure. A failed transaction or miscalculated fee can trigger audit review. That's legal hours, documentation, and potential fines, not just a hotfix.

2. Customer trust, once broken. Fintech users move fast. A frozen account or failed transfer sends them to a competitor before your team even knows there's an issue.

3. The 30x rule. Catching a defect in QA typically costs a fraction of what it costs in production. In fintech, that multiplier often runs higher because of downstream dependencies.

4. Incident response lag. If your QA partner is offshore, a Saturday morning payment outage means waiting for Monday. Same-timezone coverage is not a luxury in this industry.

5. Remediation scope creep. One bug in a payment flow rarely stays contained. It touches fraud detection, reconciliation, reporting. Each layer adds hours.

The real question isn't whether you can afford QA. It's whether you can afford what happens without it.
jdeleonsqa@jdeleonsqa·
Most teams don't have a hotfix process. They have a hotfix reaction.

I've seen this play out the same way across engineering teams: something breaks in production, everyone drops what they're doing, and the fix ships under pressure with almost no testing. Then they wonder why the next incident happens two days later.

Here's what a real hotfix process looks like:

1. A defined severity threshold. Not every production issue is a hotfix. Knowing which ones are changes everything (see the sketch after this post).

2. A pre-approved test scope. My team keeps a short, maintained list of smoke tests that run on every hotfix, no discussion required.

3. A dedicated deployment path. Hotfixes shouldn't ride the same queue as sprint releases. Separate path, separate approvals.

4. A post-mortem trigger. Every hotfix should automatically kick off a lightweight root-cause review, not a blame session, a process check.

5. A documented rollback plan. If the fix makes things worse, you need a decision tree, not a debate.

The difference between panic and process is preparation. My team built our hotfix protocol before we needed it, and it has saved us more than once.

Does your team have a written hotfix process, or is it just the people who happen to be online?
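A minimal sketch of points 1 and 5: the severity threshold and the rollback rule written down as code instead of settled in the moment. The severity taxonomy and thresholds are hypothetical placeholders for whatever your incident process defines.

```python
from enum import IntEnum

class Severity(IntEnum):
    LOW = 1       # cosmetic, rides the next sprint release
    MEDIUM = 2    # degraded UX, schedule a fix
    HIGH = 3      # core flow broken for some users
    CRITICAL = 4  # data loss, payments down, security

HOTFIX_THRESHOLD = Severity.HIGH  # assumed team agreement

def is_hotfix(severity: Severity) -> bool:
    """Not every production issue is a hotfix; this line decides which are."""
    return severity >= HOTFIX_THRESHOLD

def should_roll_back(error_rate_after: float, error_rate_before: float) -> bool:
    """Rollback rule: if the fix made things worse, revert, don't debate."""
    return error_rate_after > error_rate_before

if __name__ == "__main__":
    print(is_hotfix(Severity.MEDIUM))    # False: normal release queue
    print(is_hotfix(Severity.CRITICAL))  # True: dedicated hotfix path
    print(should_roll_back(0.08, 0.02))  # True: the fix regressed things
```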
jdeleonsqa@jdeleonsqa·
Quarterly business reviews happen without fail. Quarterly QA reviews? Almost never. Here's the checklist I keep coming back to:

1. Test coverage gaps. Not line coverage. Are your highest-risk user paths actually being tested? Start there.

2. Flaky test count. If your automation suite has persistent intermittent failures, your team is already tuning out signals it shouldn't be (see the sketch after this post).

3. Escaped defects from the last 90 days. Every bug that reached production tells you something about your process. Are you reading those patterns?

4. QA involvement in sprint planning. If testers are receiving tickets the day before release, the process isn't working regardless of how good your engineers are.

5. Documentation freshness. Outdated test plans create false confidence. A stale checklist is almost worse than no checklist.

None of this requires a big initiative. A focused 60-minute review each quarter, with real follow-through, moves the needle more than most tooling investments.

What's the one item your team keeps deferring to next quarter?
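A minimal sketch of point 2, assuming you can export per-test outcomes from repeated CI runs of the same commit. A test that both passed and failed on unchanged code counts as flaky under this deliberately simplified definition; the history data here is invented.

```python
# test name -> outcomes across recent runs of the same commit (assumed export)
history: dict[str, list[bool]] = {
    "test_checkout_total": [True, False, True, True],   # flaky
    "test_login": [True, True, True, True],             # stable
    "test_export_csv": [False, False, False, False],    # broken, not flaky
}

def flaky_tests(runs: dict[str, list[bool]]) -> list[str]:
    """Flaky = at least one pass AND at least one failure on the same code."""
    return [name for name, outcomes in runs.items()
            if any(outcomes) and not all(outcomes)]

if __name__ == "__main__":
    print(flaky_tests(history))  # ['test_checkout_total']
```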
jdeleonsqa@jdeleonsqa·
AI is writing code faster than most teams can review it. That's exactly why shift left matters more now, not less.

A pattern I see often: teams adopt AI coding tools, velocity spikes, and QA gets pushed later in the cycle because "we'll catch it in testing." Then the bug load grows.

Here's what I tell engineering leaders:

1. AI tools amplify output, not quality. If your requirements are vague, the AI generates code against vague requirements. QA needs to be in that conversation early.

2. AI-generated code is harder to reason about at speed. When developers ship significantly more code per day, quick reviews miss what careful ones would catch.

3. The cost of a bug in production hasn't changed. What has changed is how fast you can create one.

4. Shift left means QA is involved at the requirements and design stage, not just at the end of the sprint. That's where it actually matters.

5. Teams that integrate QA early with AI-assisted workflows see fewer surprises at release. The ones that don't see more.

The AI era is a velocity era. Velocity without quality gates is just a faster way to ship problems.

What does your QA process look like now that AI is writing more of your code?
jdeleonsqa@jdeleonsqa·
AI didn't eliminate QA roles. It split them.

I've watched teams hand AI tools to junior engineers and expect magic. What they get instead is a flood of auto-generated test cases that look impressive but miss the business logic that actually breaks in production.

Here's the shift I'm seeing across every team we embed with.

Junior QA engineers should now focus on:
- Learning to evaluate AI output, not just write test scripts from scratch
- Understanding product context deeply enough to spot what AI misses
- Building judgment, because AI can generate 200 test cases in minutes but can't tell you which 15 matter most

Senior QA engineers should now focus on:
- Designing the coverage strategy that AI executes against
- Validating AI-driven triage so critical bugs don't get buried under noise
- Training junior engineers to question AI output instead of trusting it blindly

On my team, we use AI for test case generation and coverage gap analysis on every engagement. But a senior engineer reviews and prunes every output before it touches a test plan. That step alone has caught misclassified P1 bugs that AI labeled as low priority more times than I can count.

The real risk isn't AI replacing your QA team. It's your QA team trusting AI without the experience to challenge it.

What's the biggest skill gap you're seeing on your QA team right now?
jdeleonsqa retweeted
SQA²
SQA²@SQAsquared·
AI in QA is a force multiplier for the expertise you already have. If your QA foundation is solid, AI can help your team move faster, cover more ground, and catch more defects. But it can't replace the people who know what good testing actually looks like.
jdeleonsqa@jdeleonsqa·
A bug caught in dev costs you an hour. The same bug in production costs you a week, a customer, and your team's momentum.

I've seen this pattern repeat across dozens of engineering orgs. A defect slips past staging, hits production on a Friday afternoon, and suddenly three senior engineers are pulled off sprint work for incident response. The direct fix might take two hours. The total cost, including rollback, customer communication, regression testing, and lost velocity, is 5 to 10x higher. The math is not complicated. It's just uncomfortable.

Most teams I talk to don't have a QA problem. They have a timing problem. Testing happens too late in the cycle, coverage gaps go unnoticed until something breaks, and regression suites aren't prioritized against what actually changed in the build.

My team uses AI to flag coverage gaps and prioritize regression tests by code change impact. But the AI doesn't make the call on what matters. Our senior engineers do. That combination, speed from automation plus judgment from experience, is what moves defect detection earlier without slowing delivery.

Shifting left on quality isn't a philosophy. It's a budget decision.

If you tracked the true cost of your last three production incidents, including engineer hours, customer impact, and delayed roadmap items, what number would you land on?
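The change-impact idea above reduces to a simple selection step. A minimal sketch, assuming a precomputed map from each test to the source files it exercises (in practice derived from per-test coverage data) and a changed-file list from something like `git diff --name-only`; all names here are invented.

```python
coverage_map = {  # test -> source files it exercises (assumed known)
    "test_checkout_total": {"cart.py", "pricing.py"},
    "test_login": {"auth.py"},
    "test_search_filters": {"search.py"},
}

def impacted_tests(changed_files: set[str]) -> list[str]:
    """Select only the regression tests touched by this build's changes."""
    return sorted(
        test for test, files in coverage_map.items()
        if files & changed_files  # any overlap makes the test relevant
    )

if __name__ == "__main__":
    print(impacted_tests({"pricing.py"}))  # ['test_checkout_total']
```

The judgment call the post describes lives in the inputs: deciding which tests map to which risk, and when the map is stale, is still a human decision.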
jdeleonsqa@jdeleonsqa·
Everyone knows about the engineering hours to hotfix it. The sprint disruption. The rollback. But here is what quietly bleeds your business dry: the trust tax.

One critical bug in production does not just cost you 5 to 10x what it would have cost to catch in QA. It costs you the confidence of your sales team mid-demo. It costs you the enterprise prospect who saw your status page go red during their evaluation window. It costs you the senior engineer who is tired of firefighting and starts interviewing somewhere else. None of that shows up in a Jira ticket.

I have led QA for teams that learned this lesson the hard way. The pattern is always the same. Testing gets compressed to hit a release date, coverage gaps go undocumented, and the first week in production becomes an unplanned QA cycle run by your customers.

My team now uses AI to map coverage gaps against every build before it ships, but the AI is not the fix. The fix is having senior engineers who know which gaps actually matter and which ones are noise.

The business case for quality is not about preventing every bug. It is about making sure the expensive ones never reach your users.

What is the most expensive production bug your team has dealt with this year, and how far upstream could it have been caught?
jdeleonsqa@jdeleonsqa·
What keeps you up at night about quality?

I ask this in almost every discovery call, and the answers have shifted dramatically over the past year. It used to be "we don't have enough test coverage." Now it's "we have tests, but we don't trust them."

That gap between having tests and trusting them is where most teams are stuck. They shipped fast, automated what they could, and now they're sitting on a regression suite that passes every build but still lets bugs slip into production.

My team sees this pattern constantly. The fix is never "more tests." It's better signal from the tests you already have. We've started using AI to map test coverage against actual code changes per build, so engineers stop wasting cycles running 4,000 tests when only 300 matter. But the AI doesn't decide what matters. A senior QA engineer does.

So here's my Friday question for engineering leaders, QA managers, and anyone who's ever been paged at 2am over a bug that "passed all tests":

What's your single biggest quality concern heading into Q2? Drop it in the comments. I read every one.
jdeleonsqa@jdeleonsqa·
Most QA investment pitches fail because they lead with headcount instead of risk.

I have sat in dozens of budget conversations where engineering leaders open with "we need more testers" and watch the CFO's eyes glaze over. The problem is not that leadership does not value quality. It is that they value it in dollars, not in test coverage percentages.

Here is the framework I use with my team when advising engineering directors on these conversations.

Start with a single production incident from the last 90 days. Calculate the actual cost, including engineering hours to fix, customer support tickets, revenue impact, and any SLA credits. That number is your opening line. Not "we need three more QA engineers." Instead, "this one bug cost us $140K, and here is how we catch it earlier next time."

Then connect the investment to a specific, measurable outcome. "Adding regression coverage to our payments flow reduces escaped defects by 60% based on what we saw after covering the onboarding module last quarter."

Leadership does not fund vague quality improvements. They fund risk reduction with a clear before and after.

The final piece most people skip is the cost of doing nothing. Frame the status quo as a choice with its own price tag. Every sprint without adequate QA coverage is a bet that nothing breaks in production. Eventually that bet loses.

What is the one production incident you would use to open your next QA budget conversation?
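The opening-line arithmetic is just a sum over cost buckets. A minimal worked sketch in Python; every figure below is invented for illustration, so substitute your own incident data.

```python
# Hypothetical incident-cost worksheet; all numbers are placeholders.
ENG_HOURS = 120            # hours spent on fix, rollback, regression
ENG_RATE = 150             # fully loaded hourly engineering cost, USD (assumed)
SUPPORT_TICKETS = 900      # tickets attributable to the incident
COST_PER_TICKET = 12       # assumed blended support cost per ticket, USD
REVENUE_IMPACT = 85_000    # failed transactions during the outage, USD
SLA_CREDITS = 15_000       # contractual credits issued, USD

incident_cost = (
    ENG_HOURS * ENG_RATE          # 18,000
    + SUPPORT_TICKETS * COST_PER_TICKET  # 10,800
    + REVENUE_IMPACT
    + SLA_CREDITS
)
print(f"One incident: ${incident_cost:,}")  # One incident: $128,800
```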
jdeleonsqa@jdeleonsqa·
Every client my team has onboarded in the last two years had one thing in common. They were already paying for QA. Offshore, lower cost on paper, and consistently missing critical bugs before release.

One pattern I see over and over: a Series B SaaS company scales fast, hires an offshore QA vendor to keep costs down, then spends 3 to 6 months dealing with timezone lag, miscommunication on edge cases, and regression bugs that slip into production. The math looks good on a spreadsheet. The reality is a different story. One team I worked with was burning 20+ engineering hours a week just triaging bugs that should have been caught before staging. That is not savings. That is hidden cost.

When my team stepped in, we embedded directly with their engineers, same timezone, same standup, same Slack channels. Within the first sprint we identified 40% more regression paths using AI coverage analysis, validated by our senior engineers.

Not because offshore teams lack talent. They do not. But real-time collaboration changes what is possible. When a deploy breaks at 2pm ET, my team is already in the codebase, not waking up to a Jira ticket 8 hours later.

I am not here to knock anyone's vendor. But if your QA team feels like a separate company instead of part of your engineering org, that gap has a price tag.

What would change for your team if QA could respond in minutes instead of hours?
jdeleonsqa@jdeleonsqa·
AI testing tools will not tell you when they are confidently wrong. That is the part no vendor demo covers.

I have watched AI generate 200 test cases in minutes, and roughly 30% of them tested nothing meaningful. They looked right. They passed. But they validated logic that did not actually match the product's business rules.

The problem is not that AI is bad at testing. It is shockingly good at pattern recognition and coverage mapping. My team uses it daily to flag gaps in regression suites and prioritize which tests to run based on what changed in the latest build.

But AI has no context for what matters most to your users. It cannot distinguish between a low-risk UI tweak and a payment flow that, if broken, costs you six figures by morning. That distinction requires someone who has triaged production incidents at 2 a.m. and knows which failures trigger churn.

On my team, every AI-generated test case gets reviewed by a senior engineer before it touches a pipeline. Every coverage gap analysis gets validated against real user behavior, not just code paths. AI does the heavy lifting. Engineers make the judgment calls.

The companies getting this right are not choosing between AI and human QA. They are pairing AI speed with senior engineering oversight. The ones getting it wrong are trusting the output without questioning it.

What is the most confidently wrong result you have seen from an AI testing tool?
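A minimal sketch of what "tested nothing meaningful" can look like, in Python. `shipping_fee`, its boundary bug, and the $50 rule are all invented; the first test mirrors the implementation and passes, while the spec-driven boundary test is the one that would expose the bug.

```python
def shipping_fee(order_total: float) -> float:
    # Hypothetical implementation with a business-rule bug: the spec
    # says free shipping at $50 AND ABOVE, but the code uses strictly
    # greater than, so an exactly-$50 order is charged.
    return 0.0 if order_total > 50 else 5.99

def test_looks_right_but_validates_the_code():
    # Mirrors the implementation's own logic, so it can never disagree
    # with it. Passes, and tells you nothing.
    assert shipping_fee(60) == 0.0
    assert shipping_fee(40) == 5.99

def test_validates_the_business_rule():
    # Encodes the spec's boundary. This one fails and exposes the bug.
    assert shipping_fee(50) == 0.0

if __name__ == "__main__":
    test_looks_right_but_validates_the_code()
    print("vacuous test passed; the boundary test would fail")
```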
jdeleonsqa@jdeleonsqa·
Most IT leaders carry stress they rarely talk about. Will this release blow up? Will we miss something obvious? Will I get surprised in front of the business? Good QA reduces that anxiety quietly. Not by promising perfection, but by creating predictability. Risks are visible. Tradeoffs are explicit. Bad news travels early. When leaders trust their quality process, they stop bracing for impact. That confidence changes how teams plan, communicate, and execute. Quality is not just a technical function. It is a leadership stabilizer.