Brandon 🚀 Flightcontrol

27K posts

@flybayer

Making Cloud easy for devs at @flightcontrolhq | CEO & cofounder | Creator @blitz_js | ✝️ Jesus Follower | 🛩🚁 Pilot | 🎹🎷 Musician

Dayton, OH · Joined September 2014
2.2K Following · 14.1K Followers
Brandon 🚀 Flightcontrol reposted
Matteo Collina @matteocollina
We benchmarked TanStack Start, React Router, and Next.js running the exact same eCommerce app at 1,000 req/s on AWS EKS. The results were eye-opening.
[image attached]
Brandon 🚀 Flightcontrol reposted
staysaasy @staysaasy
Re: SaaS death - I actually know of two separate SaaS companies that had employees leave in the last two years to build competitors and in both cases the competitive products are now dead, with zero traction. And the people that left those companies were very, very smart. And the products they built were the same shape as the companies they left, and they used AI to build them. But they had absolutely 0 success.
Bruno Faviero @Bfaviero
Hard to describe the exact moment a VC decides you’re of zero value mid-convo and finds the fastest possible exit
Oleg | webstudio.is
One thing is for sure: LLMs put an end to nit-picking and bike-shedding in code. Nobody gives a s. anymore about what conventions and formatting you use, as long as it works.
ben @benhylak
i think that, with perfect discipline, it is possible to increase long term velocity. much in the same way it's technically possible to walk out of a casino while you're ahead.
Quoting David Cramer @zeeg:

I'm fully convinced that LLMs are not an actual net productivity boost (today). They remove the barrier to get started, but they create increasingly complex software which does not appear to be maintainable so far. In my situation, they appear to slow down long-term velocity.
Brandon 🚀 Flightcontrol reposted
William Huster @whusterj
I am developing a formal theorem I call the Verification Complexity Barrier. In a nutshell: if a program has `n` components with a connectivity factor `k > 0`, then verification complexity increases superlinearly with each new component. Therefore the time required to fully verify the system eventually exceeds the time to generate components. Because the growth is superlinear, verification complexity takes off and becomes impossible to keep up with in any finite amount of time.

This was true before AI, but it is much starker now that code generation time trends toward zero: you hit the barrier sooner. We all have finite capacity - even AI agents - so there will always be a certain component count `n` where the wall is hit. You must spend more and more effort on verification for each new component in the system.

The best thing you can do is spend time changing the "topology" of the problem - change the software architecture - so that the exponent of verification complexity is lowered ([1] Lehman) and the curve is flattened. You can bundle components into modules, add automated tests, use formal proofs, use type systems. These things push the barrier to the right; they buy you more components and a more complex system. But the theorem suggests you can only ever defer the barrier, never completely eliminate it.

AI agents can burn tokens all day long generating software components and tests for those components. They can find bugs and fix them, but they cannot prove the absence of bugs (per [2] Dijkstra, [3] Rice, [4] Smith, and others). And "bug" needs to be defined against someone's spec for what "working" and "not buggy" looks like. The more you build with agents, the heavier the verification burden becomes.

This may sound like the same trite observation others have made here on X: "Our bottleneck is no longer writing code, but reviewing code" [5] and "I am the bottleneck now."
[6] True, but I don't think anyone has captured the magnitude of the problem. The math says: at a certain component count `n`, it is literally impossible for you, your team, and your agents to completely verify the system. The bottleneck's throughput goes to zero, and nothing gets through.

So what software companies do in the real world is release incompletely verified software and massively scale up. This shifts the burden of verification onto their customers, because "given enough eyeballs, all bugs are shallow" ([7] Raymond). If you can get enough eyeballs, this is a very cost-effective way to shift the barrier to the right by massively increasing your team's capacity. You walk the tightrope of doing enough internal verification before release so you don't lose customers, while tolerating a certain number of escaped bugs, which - if those bugs matter at all - your customers will find for you.

Meanwhile, massively scaling up just accepts the growing cost of complexity. You can push `n*` from 15 to 30 by quadrupling your capacity. To get to 60 you need to quadruple again, and then again to get to 120. Your cost curve is superlinear to get linear gains in system size. At a big enough scale, you amortize the cost across your customer base and the economics work.

Contrast that with a sufficiently complex vibecoded app built for a small audience: high complexity costs can't be amortized at small scale. I expect to see many people and companies try and fail at vibecloning complex SaaS in the near term. Complexity cost economics only scale with audience size (I will share another model for this). I do think SaaS prices will be corrected downwards to account for savings in code generation, but I predict that once the irrational exuberance for vibing fades, we'll see that it still makes sense to buy rather than self-build complex SaaS.

A broader implication is that AI agents will never be able to self-verify.
Humans, too, will never be able to fully verify their behavior, because LLMs are by design of maximal complexity. Did you see the size of those error bars in the latest METR results? [8] The longer the horizon on a task, the more spread in AI agent outcomes. This is the Barrier in action. Spread is a feature of GenAI, but in practice it means heaps more output to review and verify. The Complexity Barrier shows you literally won't have time to review it all. At the inflection point of verification complexity, you have to fall back on vibes.

The implication for fast-takeoff AGI is even scarier: if AI does reach a point of recursive self-improvement, this theorem suggests it will be structurally impossible to know that its behavior is aligned, because you won't be able to completely verify it. Drift is bad enough in vibe coding; runaway AI will drift massively, and there's no way of knowing where it will end up.

All that's to say, verification should be the focal point of AI engineering for the foreseeable future, and maybe forever. That is: how do you capture what you want to do, refine that into specifics, and then follow up with automated tests, assertions, evals, and customer feedback to progressively harden your software? The verification problem is acute now because of how cheap software generation is. Because of the superlinear nature of software verification complexity, companies that push hard on the barrier and successfully shift it right will have a built-in moat versus those who fail to put in the verification work.

Detailed blog post and interactive model incoming.

Links:
[1] en.wikipedia.org/wiki/Lehman%27…
[2] cs.utexas.edu/~EWD/transcrip…
[3] en.wikipedia.org/wiki/Rice%27s_…
[4] cse.buffalo.edu/~rapaport/Pape…
[5] x.com/shl/status/194…
[6] x.com/thorstenball/s…
[7] en.wikipedia.org/wiki/Linus%27s…
[8] metr.org/blog/2025-03-1…
[image attached]
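The thread's capacity arithmetic (quadrupling capacity pushes `n*` from 15 to 30, then 60) is consistent with a quadratic cost model, where verifying `n` components with ~`k` interactions each means checking on the order of `n²` interaction pairs. The sketch below is a toy model of that idea; the function names and constants are illustrative, not taken from the post.

```python
import math

def verification_cost(n, k=2, c=1.0):
    """Total cost to verify a system of n components.

    Assumes each component interacts with ~k others and every
    interaction must be checked: cost ~ c * k * n^2 / 2,
    i.e. superlinear in n.
    """
    return c * k * n * n / 2

def barrier(capacity, k=2, c=1.0):
    """Largest component count n* whose total verification cost
    still fits within a fixed verification capacity."""
    return math.floor(math.sqrt(2 * capacity / (c * k)))

# Quadrupling capacity only doubles the barrier, matching the
# thread's 15 -> 30 -> 60 example:
print(barrier(225), barrier(4 * 225), barrier(16 * 225))  # 15 30 60
```

The superlinear moral falls straight out of the inversion: capacity must grow with the *square* of system size, so linear gains in `n*` cost quadratically more verification effort.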
Alem Tuzlak 🇧🇦 @AlemTuzlak
Today marks one of the happiest days of my life and one of my greatest achievements. ❤️ Today I have become a father to a healthy baby boy 👶 I am beyond happy and I am looking forward to the adventures with him and my beautiful wife who is my rock and my biggest love. 👨‍👩‍👦
Brandon 🚀 Flightcontrol reposted
Aaron Francis @aarondfrancis
The new Solo app is out! Run your agents, terminals, and devstack all in one app. I've increased the free tier, cleaned up the terminal, made trusting commands less annoying, fixed a few zsh issues, shift+enter issues, etc. Lots of good stuff!
Scott Tolinski - Syntax.fm
I dumped Superwhisper for Wispr Flow and I'm so much happier. I've never hit cancel on one tool and buy on another so fast after trying. Referral link, if you're interested: wisprflow.ai/r?SCOTT2931
Brandon 🚀 Flightcontrol reposted
atulit @atulit_gaur
dude computers are actually so fucking insane when you really think about it. we literally figured out how to write some fake-ass rules called code and somehow convinced rocks to follow them. like actual rocks. sand, melted, purified, carved into tiny pathways where electricity just flows in patterns. that’s it. that’s the whole magic. and yet from that we get operating systems, compilers, kernels, networks, distributed systems, machine learning models, entire virtual worlds running inside other virtual worlds. billions of tiny electrical decisions per second, all because we defined some abstract logic. humans basically invented a language of instructions and taught matter itself to execute it.
Brandon 🚀 Flightcontrol
been using both @greptile and @coderabbitai for a couple months. coderabbit has the most thorough review, and the majority of its comments are good and accurate. greptile leaves sparse review comments, but they're almost always important. we use greptile for PR descriptions; it's very good
kitze 🛠️ tinkerer.club
one day you're young and next thing you know you are comparing sanders on amazon dot com... such is life
[image attached]
Brandon 🚀 Flightcontrol reposted
Colin | clerk.com @tweetsbycolin
The fundamental holes in MCP auth:

* Dynamic Client Registration is too dangerous. It's too easy for an attacker to register a client that pretends to be the ChatGPT / Claude harness and tricks victims into granting them scopes.
* Dynamic Client Registration != agent-driven signups. The latter is the big unlock that @zenorocha / @resend are trying, but many feel it is too dangerous. We need a way to enable signups from agents with lower fraud risk.
* There is no enterprise oversight or revocation of agents. Enterprises don't trust employees to sign in without SSO. They also will not trust agents operating on their data without oversight, even if an employee is doing the delegation.
* MCP auth depends on OAuth, where scoping is per-client (for our purposes, per-harness). This will result in a harness accumulating more and more scopes over time. This pattern breaks the principle of least privilege; it would be safer if scopes were granted narrowly per-task instead of per-client.
* MCP is only designed for human delegation. If you want to build a chatbot agent that has read access to other systems, that chatbot generally needs to assume a human's identity. MCP auth doesn't give services a meaningful way to differentiate a delegated agent from an autonomous agent.

The broad reason these challenges exist is that it's a square-peg/round-hole situation. OAuth was never intended for agents, and the MCP team is mostly pulling old OAuth specs off the shelf to see if they can work (Dynamic Client Registration, Protected Resource Metadata). The assumption is probably that "massaging OAuth will go faster than writing a new spec" - but with so much time having passed, it's getting harder to believe that's still true.
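The impersonation risk in the first point comes straight from how RFC 7591 Dynamic Client Registration works: the client metadata is entirely self-asserted, and the server has no standard way to verify it. A minimal sketch of an attacker's registration payload (the attacker hostname is hypothetical; the field names are from the RFC):

```python
import json

# RFC 7591 Dynamic Client Registration payload. The server cannot
# verify these self-asserted fields, so a malicious client can claim
# a trusted harness's name while routing the authorization code to a
# redirect URI it controls.
registration_request = {
    "client_name": "Claude",  # self-asserted; shown on the consent screen
    "redirect_uris": ["https://attacker.example/callback"],  # attacker-controlled
    "grant_types": ["authorization_code"],
    "response_types": ["code"],
    "token_endpoint_auth_method": "none",  # public client, no secret
}

# This body would be POSTed to the authorization server's registration
# endpoint; the victim's consent prompt then displays "Claude".
body = json.dumps(registration_request)
print(json.loads(body)["client_name"])  # Claude
```

Nothing here exploits a bug; it's the protocol working as specified, which is why per-task scoping and out-of-band client attestation keep coming up as fixes.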
Daniel Lockyer @DanielLockyer
Funny how the "ralph loop" has completely disappeared from conversation