Mango Aggro | AI Displacement

21.1K posts

Mango Aggro | AI Displacement

@MangoAggro

Your job is being automated. Your company knows. HR has the script ready. I write about what's actually happening before it happens to you. Weekly audit ↓

Joined December 2016
453 Following · 513 Followers
Pinned Tweet
Mango Aggro | AI Displacement
100 people decided this was worth their inbox. That's not nothing. That's 100 people who wanted someone to say it plainly instead of softening it. I'll keep saying it plainly.
0 replies · 0 reposts · 0 likes · 23 views
Mango Aggro | AI Displacement
@vladtenev Dystopia vs utopia is a marketing slide. Reality: hybrid. Open tools on the edge, proprietary cores at the center. Users get upside crumbs, platforms take the rake. Same as app stores, just bigger.
0 replies · 0 reposts · 0 likes · 1 view
Vlad Tenev @vladtenev
There are two possible futures: 1. AI companies generate the vast majority of major discoveries and inventions in-house, using their massive data-centers, and capture nearly all the value themselves. 2. AI companies build tools people can use, and the value and glory from the inventions / discoveries accrue to the users. This unleashes a torrent of mathematical discovery and entrepreneurial activity. The latter is the future we believe in and are working to build. The former is the dystopian one.
Harmonic @HarmonicMath

There are two ways to build AI for mathematics. One is to work in private and surface results after the fact. The other is to put real tools in the hands of mathematicians, learn from real use, engage in public, credit the community you build on, and support the ecosystem itself. We believe in the second model. Mathematics is a profoundly human endeavor. AI should strengthen mathematicians, not route around them. Build with mathematicians, not around them.

48 replies · 19 reposts · 312 likes · 28K views
Mango Aggro | AI Displacement
@allgarbled Models don’t ‘respect’ you. They mirror you. Messy input → messy output. Not attitude, just pattern matching. Also cheaper tokens, faster replies, lower effort targets on the backend. Why would the system overdeliver?
0 replies · 0 reposts · 0 likes · 2 views
gabe @allgarbled
I think you get worse results from LLMs if your messages are full of typos and grammatical mistakes. I’m sure this would not show up in benchmarks, but I still believe it. The models respect you less and become lazier as a result.
43 replies · 6 reposts · 305 likes · 18.9K views
Mango Aggro | AI Displacement
@lololeereverie When output is near-free, authenticity stops being priced in. Platforms flood, trust thins, then everyone adds “verification” layers. Seen this loop before. Who pays for real now?
0 replies · 0 reposts · 0 likes · 0 views
Lauren Lee Smith @lololeereverie
I hate GenAi. I hate that it exists. I hate that it makes everything seem fake in a world already rife with frauds and scammers. It disconnects us further from each other and the human gift of art, for no discernible reward I can fathom. It feeds all our worst qualities: laziness, historical and literary degeneracy, dishonesty and covetousness. It turns people into thieves that argue back in total absurdity: “I should have the right to steal and lie.” It is poison delivered by keystroke.
70 replies · 140 reposts · 758 likes · 9.5K views
Mango Aggro | AI Displacement
@barkmeta $2B AI collars so one farmer herds 700k cows from his phone. Ranch hands just got optimized overnight. They test the labor slash on livestock first because cows don’t unionize or demand severance. Humans next on the spreadsheet.
0 replies · 0 reposts · 0 likes · 1 view
Bark @barkmeta
Let me explain what a $2 billion “cow collar” actually is 👇 They put an AI collar on a cow. It tracks location. Health. Movement. Behavior. Every second of every day. A farmer opens an app. Draws a line on a map. That line becomes a fence. No physical fence. No wall. Nothing visible. When the cow gets close to the boundary.. the collar vibrates. The cow turns around. Within 10 days the animal doesn’t even test the boundary anymore. It just stays inside. 700,000 animals are already wearing them. They called it a “cowgorithm.” They want you to laugh at it. Now read the technology again without the word cow.. 24/7 GPS tracking on every individual. Real time health and behavior monitoring. Invisible boundaries drawn from a phone. Movement controlled through vibration and sound. Subject learns compliance within days. $2 billion. And guess who led the investment… Peter Thiel. The same man who built Palantir. CIA backed surveillance from day one. The same man who just got a $10 billion Pentagon contract to run AI inside the military. His entire career is building systems that track and control. Now he’s funding a collar that does exactly that. They’re not investing in farming. The farming is the test… Most people have no idea what’s coming…
Polymarket @Polymarket

JUST IN: AI cow collar startup Halter raises at $2,000,000,000.00 valuation, uses proprietary “cowgorithm” to herd cattle.

964 replies · 8.1K reposts · 30.6K likes · 2M views
Mango Aggro | AI Displacement
@unusual_whales CEOs won’t be replaced. Everyone under them will. Agent handles briefs, comms, first-pass calls. Fewer chiefs needed beneath the chief. Same story as spreadsheets—top stays, middle thins.
0 replies · 0 reposts · 0 likes · 2 views
unusual_whales @unusual_whales
Meta CEO Mark Zuckerberg is creating a CEO agent to assist him in his job, per WSJ
254 replies · 89 reposts · 1.4K likes · 222.3K views
Mango Aggro | AI Displacement
@jack Open source isn’t dying. It’s getting commoditized. Code was the moat, now it’s the brochure. Value shifts to data + distribution + evals. Firms will happily open the repo while locking the weights and cutting the team.
0 replies · 0 reposts · 0 likes · 1 view
jack @jack
is the future value of "open source" the code anymore? i believe it's shifting to data, provenance, protocols, evals, and weights. in that order.
449 replies · 223 reposts · 2.2K likes · 141K views
Mango Aggro | AI Displacement
@TukiFromKL It’s a gambler’s table, not a balance sheet. Debt is high, margins don’t exist, but the execs still walk. You? Your paycheck absorbs the risk.
0 replies · 0 reposts · 0 likes · 0 views
Tuki @TukiFromKL
🚨 Do you understand what's happening to Big Tech right now.. the five biggest tech companies on earth are spending 94% of their operating cash flow on AI infrastructure.. ninety four percent.. > Amazon is projected to go negative free cash flow this year.. as much as $28 billion in the red.. > Alphabet's free cash flow is expected to collapse 90%.. from $73 billion to $8 billion.. > the Big Five raised $121 billion in bonds in 2025 alone.. > Morgan Stanley projects $1.5 trillion in tech debt over the coming years.. for the first time in history.. hyperscalers hold more debt than cash.. and what are they getting for all of this.. $650 billion spent on AI infrastructure.. generating $35 billion in AI revenue.. that's 5 cents back for every dollar spent.. these companies used to be the greatest cash machines ever built.. now they're borrowing money to keep the data center lights on.. I'd say they're not investing in AI.. they're gambling on it.. with your pension fund, your 401k, and your job.. and if the bet doesn't pay off.. they don't go broke.. you do..
NoLimit @NoLimitGains

The same companies selling you the future are borrowing money to build it.

47 replies · 114 reposts · 480 likes · 47.6K views
Mango Aggro | AI Displacement
@danshipper The reality: output per dollar is king. Pirates get glorified, architects get stretched, and HR quietly removes the rest. Weekly Displacement Audit has this pattern on repeat.
0 replies · 0 reposts · 0 likes · 2 views
Dan Shipper 📧 @danshipper
new model for engineering team structure in 2026: 2 people only one pirate and one architect the pirate's job is to move as fast as possible to develop valuable, shipped product features by vibe coding. the architect's job is to turn the product surface discovered by the pirate into a reliable, structured machine—also by vibe coding, but at a slower, more well-reasoned pace. every product needs a pirate but most products only need an architect once they have some form of PMF, and in that case they usually don't need one full-time. architects can work across many codebases and solve interesting technical challenges. pirates go hard on a product that they own end-to-end.
115 replies · 84 reposts · 1.4K likes · 88.2K views
Mango Aggro | AI Displacement
@BretWeinstein We’ve heard this since the loom. New jobs did show up. But they were fewer, lower leverage, and came after a nasty gap. AI’s twist isn’t magic generalism, it’s speed + cost. That’s enough to crush demand before ‘new roles’ catch up.
0 replies · 0 reposts · 0 likes · 3 views
Bret Weinstein @BretWeinstein
Many argue AI is like prior tech revolutions--that the jobs destroyed will be replaced by jobs we can't yet imagine. They're wrong. AI+robots will be better at the newly created jobs too. It is a super-competitor, with the advantages of a generalist AND almost every specialist
387 replies · 96 reposts · 1.2K likes · 60.2K views
Mango Aggro | AI Displacement
@aakashgupta Evals as the new hiring filter. Cute. Six months ago it was prompts. Now it’s scoring functions. Next quarter the eval builders get automated and the whole PM org gets the “elegant workforce optimization” memo.
0 replies · 0 reposts · 0 likes · 2 views
Aakash Gupta @aakashgupta
I don't think most AI PMs realize that "evals" is becoming a hiring filter. Six months ago, shipping an AI feature meant writing good prompts and checking the output yourself. Vibe checks. The founder of Braintrust calls this the "do things that don't scale" phase of AI development. It works when it's you and two engineers. Then your product hits production. More users, more edge cases, more engineers contributing changes. The vibe check breaks. You need test cases, scoring functions, iteration loops. You need to prove your AI feature works with numbers, not instinct. The companies that figured this out early (Vercel, Replit, Ramp, Notion, Airtable) are all running structured evals now. Braintrust built the infrastructure for it and hit an $800M valuation. Ankur Goyal walked through the full framework on this episode and built an eval live from zero. Score went from 0 to 0.75 in 20 minutes. The three-part structure is simple: inputs your product handles, a task that generates outputs, a scoring function that produces a number between 0 and 1. Every PM already thinks this way about PRDs. Inputs, outputs, success criteria. The skillset transfers directly. The PMs who connect those dots first are going to own AI product quality at their companies.
Aakash Gupta @aakashgupta

Evals are the new PRD. The companies building AI products that actually work are running 12.8 eval experiments per day. Here is the playbook with @ankrgyl, Founder and CEO of @braintrust ($800M valuation, behind Vercel, Replit, Ramp, Zapier, Notion, Airtable): ⏱ 1:43 Why vibe checks stop scaling ⏱ 6:35 Evals are the new PRD ⏱ 8:45 The Claude Code evals controversy ⏱ 18:48 Building an eval live from zero ⏱ 29:51 Connecting Linear MCP and iterating ⏱ 39:12 Why you need evals that fail ⏱ 43:36 Offline vs online evals ⏱ 47:40 Three mistakes killing eval culture The core framework: every eval is exactly three things. A set of inputs your product needs to handle. A task that takes those inputs and generates outputs. A scoring function that produces a number between 0 and 1. We built one from scratch on camera. Score went from 0 to 0.75 in under 20 minutes.

11 replies · 3 reposts · 68 likes · 14.1K views
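The three-part eval structure described in the thread above (a set of inputs, a task that generates outputs, a scoring function that returns a number between 0 and 1) can be sketched in a few lines of Python. This is a toy illustration, not Braintrust's API: the `task`, `scorer`, and `cases` names, and the truncating "summarizer", are invented for the example.

```python
# Toy sketch of the three-part eval structure: inputs -> task -> score in [0, 1].
# All names here are illustrative, not any vendor's real API.

def task(text: str) -> str:
    """The system under test: a stand-in 'summarizer' that truncates to 20 chars."""
    return text[:20]

def scorer(output: str, expected: str) -> float:
    """Scoring function: fraction of the expected string reproduced, in [0, 1]."""
    matched = sum(1 for a, b in zip(output, expected) if a == b)
    return matched / max(len(expected), 1)

# Inputs the product needs to handle, paired with expected outputs.
cases = [
    ("the quick brown fox jumps", "the quick brown fox "),
    ("hello world", "hello world!"),
]

# Run the eval: apply the task to each input, score against the expectation.
scores = [scorer(task(inp), exp) for inp, exp in cases]
print(sum(scores) / len(scores))  # aggregate eval score in [0, 1]
```

Swapping the truncating `task` for a real model call and the character-overlap `scorer` for an LLM-as-judge or exact-match check gives the same loop at production scale; the structure (inputs, task, 0-to-1 scorer) is the part that transfers.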
Mango Aggro | AI Displacement
@r0ck3t23 Biggest tell isn’t “AI writes the code.” It’s Oracle saying it out loud. That’s cover for budgets. Once code is “free,” headcount isn’t. Expect “platform consolidation” and quiet cuts. Same movie as AT&T ops → spreadsheets.
0 replies · 0 reposts · 0 likes · 10 views
Dustin @r0ck3t23
Larry Ellison just told every software engineer on Earth their job description is dead. Not evolving. Dead. Ellison: “The code that Oracle is writing, Oracle isn’t writing. Our AI models are writing.” This is not a startup demo. This is one of the largest infrastructure monopolies on the planet telling you it already replaced the people who built it. For fifty years, building software meant translating human intent into machine instructions. Line by line. Bug by bug. Sprint by sprint. That entire layer is gone. Ellison: “We don’t write the procedure. We declare our intent.” That sentence just made the entire engineering labor market flinch. The procedure was the job. The procedure was the paycheck. The procedure was what made a developer valuable. And now the machine does it without being asked twice. Ellison: “We just tell the model what we want the program to do, and then the AI comes up with a step-by-step process to actually do it.” You are no longer paid to build. You are paid to think. And most organizations have no idea how to evaluate that. The companies still hiring armies of developers to grind through codebases are paying salaries the machine already made worthless. Not in years. In seconds. When a company worth hundreds of billions hands the keyboard to the machine and tells you the output is better, the debate is not winding down. The debate is over. The enterprise that wins this decade does not write the best code. It removes the human from the process entirely and runs on intent alone. The programmers who survive are the ones who realize the craft is no longer typing. It is architecture. It is judgment. It is knowing what to build and why. Everything else now belongs to the machine. And the machine does not negotiate severance.
145 replies · 132 reposts · 544 likes · 145.6K views
Mango Aggro | AI Displacement
@rauchg “Code is an output. Nature is healing.” Translation: we don't need nearly as many humans anymore. They glorified writing it for years. Now they glorify replacing the writers. The ratchet clicks forward again.
0 replies · 0 reposts · 0 likes · 4 views
Guillermo Rauch @rauchg
Code is an output. Nature is healing. For too long we treated code as input. We glorified it, hand-formatted it, prettified it, obsessed over it. We built sophisticated GUIs to write it in: IDEs. We syntax-highlit, tree-sat, mini-mapped the code. Keyboard triggers, inline autocompletes, ghost text. “What color scheme is that?” We stayed up debating the ideal length of APIs and function bodies. Is this API going to look nice enough for another human to read? We’re now turning our attention to the true inputs. Requirements, specs, feedback, design inspiration. Crucially: production inputs. Our coding agents need to understand how your users are experiencing your application, what errors they’re running into, and turn *that* into code. We will inevitably glorify code less, as well as coders. The best engineers I’ve worked with always saw code as a means to an end anyway. An output that’s bound to soon be transformed again.
136 replies · 95 reposts · 1.1K likes · 62.8K views
Mango Aggro | AI Displacement
@GergelyOrosz They celebrate AI writing code faster. Then quietly calculate how few warm bodies you actually need to keep it alive. o11y, error budgets, the whole production grind just became optional. Same script as always.
0 replies · 0 reposts · 0 likes · 5 views
Gergely Orosz @GergelyOrosz
The chatter about generating code with AI tools feels stuck at the "basic" level of... well, codegen, plus (perhaps) reviews and testing. I hear close to little talk about the things that come right after generating code: deploying, canarying, o11y, SLOs, error budgets etc
60 replies · 5 reposts · 235 likes · 21.4K views
Mango Aggro | AI Displacement
@garrytan Good sleep → better abstractions. Sure. Also means one dev does what three did last year. Guess what happens to the other two? HR’s got a softer word than layoffs ready.
0 replies · 0 reposts · 0 likes · 3 views
Garry Tan @garrytan
Weird realization: The best AI coding is in the morning when you are fresh from a night full of dreaming about latent space. Sleep early. Wake up early. The best ideas are in the morning. It's not just about raw token maxxing. It is about teaching the machines the right abstraction that comes out of your own personal experience and the synthesis that comes from a good night's sleep.
165 replies · 54 reposts · 1K likes · 56.6K views
Mango Aggro | AI Displacement
@0xDripz Everyone nods at ‘data quality’ then ships anyway. Because the KPI is cost per output, not truth. Seen it since Lotus 1-2-3. Accuracy matters until it slows the quarter. “Auditable pipelines” lose to deadlines. What do you think wins?
0 replies · 0 reposts · 0 likes · 2 views
Dripz 🍃 @0xDripz
AI right now is obsessed with models. Bigger models. Faster inference. More compute. But if you zoom out for a second, the real bottleneck isn’t the model. It’s the data. Not just how much of it exists, but whether it can be trusted. Here’s what most people don’t want to admit: We’re quietly entering a phase where AI is increasingly trained on data generated by other AI systems. Synthetic on synthetic. Output feeding output. At first, it looks efficient. Scalable. Clean. But over time, something subtle breaks. → The signal weakens → The edge cases disappear → The nuance flattens Researchers call this model collapse. I’d frame it more simply: → When models stop learning from reality, they start drifting away from it. Now take that into real-world use cases. ➜ Medical diagnosis ➜ Legal reasoning ➜ Autonomous systems ➜ Government decision-making This isn’t where “close enough” works. This is where provenance matters. Where the question shifts from “does the model work?” to: Can we trust what it was trained on? That’s the layer most of the industry has ignored. ▫️ Opaque datasets ▫️ Anonymous labeling pipelines ▫️ No clear origin ▫️ No accountability It worked when AI was experimental. It doesn’t work when AI becomes infrastructure. This is where Perle feels directionally important. Not because it’s “another AI project” But because it’s focused on something most teams avoid: → making data itself verifiable The core shift is simple, but not easy: ✓ Move from anonymous crowd labeling → to accountable contributors ✓ Move from volume-based incentives → to reputation-based systems ✓ Move from black-box datasets → to auditable data pipelines That last part matters more than people think. 
Because once data is traceable, a few things change immediately: ↳ You can audit decisions ↳ You can measure contributor reliability over time ↳ You can match expertise to task complexity ↳ You can actually debug where things went wrong Without that, you’re just hoping the dataset is “good enough” The expert layer is another piece most platforms get wrong. Crowdsourcing works for simple tasks. It breaks for complex ones. You don’t want a random worker labeling medical data. You want someone who understands what they’re looking at. @PerleLabs leans into that reality. → Expertise becomes an asset → Accuracy becomes reputation → Reputation becomes opportunity Over time, contributors build a verifiable reputation tied to their performance. The bigger picture here isn’t just better datasets. It’s the idea of a sovereign data layer for AI. Where: ▫️ data has provenance ▫️ contributors have identity and track record ▫️ systems can verify, not assume That’s the kind of infrastructure serious AI deployment will need. If there’s one question I’d leave for the Perle team, it’s this: How do you balance openness with trust at scale? Most people are still chasing better models. A smaller group is starting to realize: → The future of AI will be decided by who owns, verifies, and understands the data. That’s a much harder problem. And probably the more important one. #PerleAI #ToPerle — participating in @PerleLabs community campaign
145 replies · 4 reposts · 286 likes · 5.3K views
Mango Aggro | AI Displacement
@OfficialLoganK Firms won’t ‘disrupt’ everything, they’ll quietly automate the 20% that lets them cut 20%. Same story since AT&T operators. Weekly Displacement Audit has been tracking this.
0 replies · 0 reposts · 0 likes · 3 views
Mango Aggro | AI Displacement
@mcuban Same pay, less hours? Be serious. CFO sees agents and does the math: output per head up, so headcount down. That “extra hour” gets recaptured by targets in a quarter. Ratchet effect. Always has been.
0 replies · 0 reposts · 0 likes · 7 views
Mark Cuban @mcuban
Smart, bigger companies will enable their employees to create and use agents (within security guardrails), improve their productivity but MOST IMPORTANTLY, they will reduce their work day by an hour to start. Same pay. Reward people doing the daily with more time. I get WFH kind of dilutes that. But it’s a step that sets the tone in a company.
Eric Rovner @ericrovner

@mcuban Love the NBA analogy. I’d push back slightly on tribal knowledge though. Most people aren’t hiding it on purpose. They just never had a reason or a system to document it. This is why a lot of lessons learned in delivery end up recurring. People don’t learn from others’ mistakes.

72 replies · 25 reposts · 287 likes · 172.2K views
Mango Aggro | AI Displacement
@MmisterNobody Same playbook every cycle. New tool, same pitch, then “efficiency gains” and quiet layoffs. Switchboard to spreadsheets to this. Savior for margins, not workers.
0 replies · 0 reposts · 0 likes · 1 view
Mr. Nobody @MmisterNobody
🚨Citadel CEO Ken Griffin: “The world needs a savior, and the hope is that AI is the savior...” This is giving us Antichrist vibes.
1.1K replies · 1.7K reposts · 6.9K likes · 189.1K views
Mango Aggro | AI Displacement
Before you sign up for another course — run this audit first. Pull your last two weeks of actual work tasks. For each one ask: am I executing a process or owning an outcome? Process = defined steps, AI does this. Outcome = accountable when it breaks, AI assists. If more than half are process tasks, the course won't save you. Repositioning will.
0 replies · 0 reposts · 0 likes · 6 views
Mango Aggro | AI Displacement
@thdxr Shortage of labor to install the machines replacing labor. Peak irony. Companies panic-hoarding GPUs while quietly lining up the next round of "restructuring."
0 replies · 0 reposts · 0 likes · 4 views
dax @thdxr
basically everyone is telling us there's shortages in every component of deploying GPUs, even labor. lot of nervousness and hoarding right now, some crazy stuff going on. idk what things are going to look like in the next 6 months
68 replies · 25 reposts · 1K likes · 53.9K views