

Gabriela Moreira
1.6K posts

@bugarela
Building @quint_lang at @informalinc | Born and living in Brazil 🇧🇷
.@TheLeadDev just published a piece featuring Quint's @bugarela on engineers who aren't letting AI push them out. On why AI makes her work more relevant, not less: "It actually feels like the industry is now catching up to the problem I've spent the past several years working on."

St. Patrick's Day may be over but we're still thinking about fake gold 🍀 New post: we spec'd the Zerocash faerie-gold vulnerability in Quint and could clearly see the scenario where it gets exploited. Then we saw how a Zcash-like fix solves it.
Software development is converging on three primitives: specification, verification, and optimization. Current stack: @quint_lang, deterministic simulation testing, auto research
A peek at how it feels to write a Quint spec from a blank file. Writing a new spec for Tendermint consensus, now with all the learnings and techniques we developed over the last 3 years. My favorite part is the run descriptions. Writing them made everything click so well!
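For flavor, here's a minimal sketch of what a run description looks like, on a toy counter module rather than the actual Tendermint spec (the `then`/`expect` run operators are from Quint's run syntax as I understand it; double-check against the Quint docs):

```quint
module counter {
  var n: int

  action init = n' = 0
  action incr = n' = n + 1

  // A run description: a named, step-by-step execution that documents
  // one concrete way the system is expected to evolve.
  run incrTwiceTest = init.then(incr).then(incr).expect(n == 2)
}
```

Runs like this double as executable documentation: they name a scenario, and the simulator checks that the scenario actually plays out.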

Ever find yourself asking "how do I even get to the state I want to test?" Witnesses in Quint answer that question. Define the state you're looking for and Quint finds an execution path that gets you there. No manual trace construction required.
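As a rough illustration of the idea (toy module, hypothetical names; the exact CLI flag is an assumption, check `quint run --help`): you declare the state you care about as a plain predicate, and ask the simulator to hunt for it.

```quint
module bank {
  var balance: int

  action init = balance' = 0
  action deposit = balance' = balance + 10
  action withdraw = all { balance >= 10, balance' = balance - 10 }
  action step = any { deposit, withdraw }

  // The state we want to reach, written as an ordinary predicate.
  // A witness asks: "show me an execution where this becomes true."
  val richEnough = balance >= 30
}
```

Then something like `quint run bank.qnt --witnesses=richEnough` asks the simulator to report executions that hit that state, instead of you constructing the deposit/deposit/deposit trace by hand.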

41% of all code shipped in 2025 was AI-generated or AI-assisted. The defect rate on that code is 1.7x higher than human-written code. And a randomized controlled trial found that experienced developers using AI tools were actually 19% slower than developers working without them.

Devs have always written slop. The entire software industry is built on infrastructure designed to catch slop before it ships. Code review, linting, type checking, CI/CD pipelines, staging environments. All of it assumes one thing: the person who wrote the code can walk you through what it does when the reviewer asks. That assumption held for 50 years. It broke in about 18 months.

When 41% of your codebase was generated by a machine and approved by a human who skimmed it because the tests passed, the review process becomes theater. The reviewer is checking code neither of them wrote. The linter catches syntax, not intent. The tests verify behavior, not understanding.

The old slop had an owner. Someone could explain why temp_fix_v3_FINAL existed, what edge case it handled, and what would break if you removed it. The new slop has an approver. Different relationship entirely.

Arvid’s right that devs wrote bad code before AI. The part he’s missing: the entire quality infrastructure of software engineering was designed around a world where the author and the debugger were the same person. That world ended last year and nothing has replaced it yet.



HOLY FUK I JUST LEARNED ABOUT TLA+ AND IT'S SO GOOD FOR AGENTIC CODING ur telling ME that i can mathematically fact check every possible scenario of my design STATE to prevent bugs and crashes AND IF IT FINDS SOMETHING THE AGENTS GET INSTANT FEEDBACK AND LOOP FIXING IT TILL ALL POSSIBLE BUGS IN THE DESIGN ARE PATCHED LOL THIS IS OP



A Quint Connect refresher: did you know you can automatically test that your Rust implementation matches your Quint spec? That's what Quint Connect is for: model-based testing designed to work well with LLMs.
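This is not the actual Quint Connect API (all names below are hypothetical), but the core model-based-testing idea it builds on can be sketched in plain Rust: replay a spec-level trace against the implementation and check that the two agree at every step.

```rust
// Hypothetical sketch of trace-replay model-based testing, the idea
// behind spec/implementation conformance tools like Quint Connect.
// None of these names come from the real Quint Connect API.

/// The actions our toy spec can take.
enum Action {
    Incr,
    Decr,
}

/// One step of a spec-level trace: the action taken, and the state
/// the spec says we should be in afterwards.
struct TraceStep {
    action: Action,
    expected_counter: i64,
}

/// The "real" implementation under test.
struct Counter {
    value: i64,
}

impl Counter {
    fn new() -> Self {
        Counter { value: 0 }
    }

    fn apply(&mut self, a: &Action) {
        match a {
            Action::Incr => self.value += 1,
            Action::Decr => self.value -= 1,
        }
    }
}

/// Replay a spec trace against the implementation, failing at the
/// first step where implementation state diverges from the spec.
fn replay(trace: &[TraceStep]) -> Result<(), String> {
    let mut counter = Counter::new();
    for (i, step) in trace.iter().enumerate() {
        counter.apply(&step.action);
        if counter.value != step.expected_counter {
            return Err(format!(
                "divergence at step {}: impl={} spec={}",
                i, counter.value, step.expected_counter
            ));
        }
    }
    Ok(())
}
```

The point of the pattern: traces come from the spec's simulator, so the implementation gets exercised on scenarios the spec already explored, and any divergence pinpoints the exact step where the two disagree.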