Nathan Hurst
1.1K posts

Nathan Hurst
@nahurst
Thinking about education, the future of work, ai, and decentralization. Product+Engineering+Design at Fiveable. Previously TpT, Amplify, Hirelite, Ohours
Charlottesville + NYC · Joined November 2009
336 Following · 933 Followers


Artificial intelligence will kill artificial gates
- Applicant tracking system keyword screens
- Brainteaser interviews
- Internal approval committees
- Status reports
- Top-20 school screening filters
- Credit-hour requirements
- Traditional media gatekeeping attention
AI will kill these gates by incentivizing going direct. As consumers use AI as a copilot, their baseline intelligence, expectations, and evaluation rigor all rise. At the same time, producers can build almost anything more easily. Add shifts in attention patterns - craving authenticity, distrust in institutions, potency of short-form video, attention routing algorithms - and it’s a perfect storm for the collapse of artificial gates.
Artificial gates control throughput with proxies, without exposing their decisions or output to the people they’re supposed to serve in a tight feedback loop.
Types of artificial gates
- Volume-triage gates - when an opening gets so much interest that triagers resort to quick filters to save time. Ex: ATS keyword filters, admissions GPA cutoffs, cold-inbound weed-out.
- Intelligence gates - employers have used the SAT, via colleges, as a laundered intelligence test for a long time. In some roles this matters and consumers care. But when customers buy directly, they mostly care whether the product works, not whether its creator aced a proxy test.
- Effort gates - the most legitimate of the artificial gates. Companies lean on colleges here too: essentially a long-haul conscientiousness test. Can you stick with something for a long time and finish? But if the value of that effort never shows up directly for customers, the gate fades.
- Status filters - these aren’t as prevalent as they used to be but they’re still around. Where’d you go to school? What companies have you worked at? Who do you know?
- Scarcity-protection bottlenecks - deliberate supply restrictions that create scarcity and concentrate value for producers. Ex: “apply to join”, college admission caps, restricting the supply of doctors, credential requirements for low-risk tasks, dockworker-related restrictions on automation.
- Reputational risk management gates - these show up when reputation feels fragile. Ex: legal pre-release review, internal decks for review, approval committees, executive review, many corporate trainings. It’s important to distinguish this from risk management where the risk actually accrues to the customer (safety, security).
- Internal productivity gates - practices that may or may not help coordinate work inside an organization but are not directly visible to the end user. Ex: status reports, OKRs, quarterly and annual planning, any conversation where someone says “alignment”.
I’m not saying all of these are bad. In fact, I like working with people who pass some of these gates. And some of the productivity gates improve coordination and focus. But those aren’t the important questions. The real test is: is this required for consumers to get what they need? None of these gates are required consistently enough to matter, especially given the forces in play now.
Forces at play
On the consumer side, we see glimpses of what’s coming. Between the ease of using AI and growing frustration with low authenticity and sometimes low supply, people go direct. Consumers triage with a copilot and can see, in detail, the dimensions involved in solving their problem. They have decision support at their fingertips, from gifts to electronics to GLP-1 purchases. This raises expectations for clarity, personalization, and performance. But discovery still runs through platforms. Feeds are the new attention gates, and because they sit in a tight feedback loop with consumers, they behave less like artificial gates and more like adaptive filters.
On the producer side, things are accelerating but bumpy. With AI, small teams can make a v1 in a weekend and a v2 next week. DTC keeps eating channels because it’s closer to the user and cheaper to test. As production costs collapse, the constraint shifts to distribution and iteration: can you get in front of real people and learn fast? All the intermediate artifacts - decks for committees, pre-reads, status reports - come under fire. Demos beat memos. Quick experiments beat alignment meetings. The work that wins is the work in a user’s hands. But hiring is under pressure: credentials and titles are decaying signals (grade/title inflation), and companies are pouring cycles into cheating deterrence and AI funnel processing instead of validating candidate capability against customer needs.
With things shifting from clearing artificial gates to direct consumer interaction, what could happen?
Predictions
- Hiring: cat-and-mouse games to filter and screen candidates increase in the short term until a winning approach emerges where candidate capabilities can be tested against consumer desires more directly.
- Collaboration: companies with heavy internal handoffs or deeply ingrained approval cultures slow even further while small teams, “one-mind” companies, or organizations with directly responsible individuals flourish.
- Sales: principal-agent chains collapse as producers build and reach potential consumers more directly.
- Education: clearly defined paths get much rarer. Students get better at putting their thoughts and work in front of potential consumers (or proxies in a tighter feedback loop with consumers).
- More education: just-in-time learning gains leverage. Learning directly for a purpose that hits a consumer becomes much more valuable.
- Attention: if the feeds drift away from actual consumer desires, consumers will see the rise of attentive agents that keep an eye out for them, hold their interests and values in mind, and let them know when something deserves their attention. On the producer side, authenticity and storytelling ability get even more important.
- Quality: independent quality signals emerge to enhance the feedback loop between producers going direct and consumer experiences with those products. Think of a more comprehensive version - maybe with social signals - of what you see in the wild west of supplement reviews.

Gaps that frontier models have little incentive to change:
- pushing inference to your device
- custom fine-tuning: a big pullback here in the past year, with frontier labs not making newer models available for fine-tuning as they did for older ones. keep an eye on @thinkymachines
- model parallelism: as opposed to data parallelism, enables decentralized training in the face of communication bottlenecks - watch @PluralisHQ
- interaction with copyrighted work you own a copy of
- modeling individualized values - values are encoded in pre-training and fine-tuning in a way that’s too costly to support on an individualized basis - I’m thinking a lot about this one. Would love to hear if you are too.

- Shiny object syndrome
- Buy-in and collaboration
- Creativity
- Organizational knowledge sharing
- Attention and distribution
- Multivariate A/B test time
More here
nahurst.substack.com/p/bottlenecks-…

Not all of these have to be bottlenecks. Better context and stronger feedback loops unlock real gains. But constraints like prioritization, verification, attention, and statistical limits still dominate. None of this is new. AI just pulled the easy work forward, exposing the real constraints. Ignore them and you get busywork at scale. Design for them and you get a faster, tighter system.
Notes
[1] Fred Brooks wrote about essential vs accidental complexity in “No Silver Bullet” en.wikipedia.org/wiki/No_Silver…
[2] Lead time and cycle time:
Cycle time = the time it takes to finish once you start on it
Lead time = the time it takes to finish once you see the need for it
(I’m borrowing these terms from lean; purists define them slightly differently)
[3] Watch out for idealistic pessimists nahurst.substack.com/p/startup-hiri…
[4] @bbalfour's Four Fits Growth Framework comes in handy blog.brianbalfour.com/p/the-four-fit…

Beyond Lead Time Bottlenecks:
- Attention & distribution: “If you build it, they will come” still fails. Does your product amplify your distribution and vice versa? [4]
- Qualitative performance: Talk to users, watch behavior, run interviews and surveys. Does the product actually solve their problem?
- Quantitative performance: As experiments stack up, each A/B test takes longer to reach significance. At scale, statistical power becomes its own bottleneck.
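The statistical-power bottleneck can be made concrete with the standard two-proportion sample-size formula. A minimal sketch (function name and the baseline/lift numbers are illustrative, not from the post): as easy wins get shipped, the remaining detectable lifts shrink, and the traffic each test needs grows roughly with the inverse square of the effect size.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p, lift, alpha=0.05, power=0.8):
    """Approximate users needed per arm to detect a relative lift
    in baseline conversion rate p (two-sided z-test on proportions)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    p2 = p * (1 + lift)
    pbar = (p + p2) / 2
    n = ((z_a * (2 * pbar * (1 - pbar)) ** 0.5
          + z_b * (p * (1 - p) + p2 * (1 - p2)) ** 0.5) ** 2
         / (p2 - p) ** 2)
    return ceil(n)

# Early experiment: a 20% relative lift on a 5% baseline is cheap to detect.
big_effect = sample_size_per_variant(0.05, 0.20)
# Mature product: a 2% relative lift needs roughly two orders of magnitude
# more users per arm, so each test blocks the pipeline far longer.
small_effect = sample_size_per_variant(0.05, 0.02)
```

Same traffic, smaller effects: the test queue itself becomes the constraint.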

If you can build anything you want with LLMs, what are the limiting factors?
Bottlenecks:
- Verification: Did it build what you wanted? Did it handle edge cases? Did it hallucinate? Does it meet your quality bar?
- Integration and regression: How does it interact with the rest of the system? How fast can you get the reviews you need? Did it break anything? Does it meet cost, performance, scalability, security, usability, reliability, maintainability constraints?
- Context: If you don’t tell it, the model guesses. When this exchange keeps repeating:
AI: “I solved it!”
You: “Not quite.”
assume missing context first. How do you route the right data, MCPs, tools, and RAG environment without drowning it in a mess of logs?
- Feedback loops: Agents only make progress when they can check their work. Create tests, tools, validation scripts, evals so they can run safely for longer.
- Essential complexity: Some difficulty comes from the problem itself. AI can cut accidental complexity, but the essential kind stays. [1]
- Infrastructure and secret management: You can vibe thousands of lines of code in minutes, but wiring stable environments, access controls, and secrets without spraying keys everywhere still takes deliberate work.
These have mostly been cycle time [2] bottlenecks. Cycle time is generally much faster now. Lead time has barely moved.
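The feedback-loop point above can be sketched as a toy loop, assuming a hypothetical generate/check pair: the agent only advances because a verifier it can run tells it whether it is done. Here `run_checks` stands in for whatever tests, validation scripts, or evals you give it, and `attempts` stands in for successive model generations; all names are illustrative.

```python
def run_checks(candidate):
    """Stand-in verifier: the tests/evals the agent runs on its own output.
    Toy quality bar: the result must end with an exclamation mark."""
    return candidate.strip().endswith("!")

def agent_loop(attempts, max_iters=5):
    """Iterate until the checks pass or the budget runs out.
    Returns (verified result or None, iterations used)."""
    for i, candidate in enumerate(attempts[:max_iters]):
        if run_checks(candidate):
            return candidate, i + 1  # checks passed: safe to keep going
    return None, max_iters  # no verified result: escalate to a human

result, iters = agent_loop(["draft", "better draft", "shipped!"])
```

Without `run_checks`, the loop has no way to tell progress from motion, which is why building the verifier is often the real work.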

@stevecheney Thanks Steve! You're too kind. Just getting started writing again. Excited to take a look at your post on purchasing power.

Startup Hiring Mistake: Idealistic Pessimists vs Skeptical Optimists
nahurst.substack.com/p/startup-hiri…


