
Rich Nanda
@richnanda
Deloitte Consulting | Chief Strategy Officer | Author of The Transformation Myth (MIT Press) | #growth #strategy #tech #AI & occasionally #wine #chicagosports


The Jevons Employment Effect From AI apollo.com/wealth/the-dai… // While AI might make it seem like professional services (law, consulting, finance) would be easily replaced, the opposite is already happening, including for recent graduates. (charts)


If you read this and don’t understand why it’s happening, it’s an opportunity to reset your understanding of how the real world works.

The real world will need a ton of help actually getting agents going in the enterprise. Companies have legacy tech stacks they need to modernize, data scattered across tons of fragmented tools, knowledge that isn’t captured or digitized, and change management needed to actually use agents effectively. And they have to do all of this while still running their business day-to-day, unlike startups.

This is why there is so much opportunity for companies (software or services) to actually deploy agents in specific domains and workflows. It remains a big opportunity for existing services providers, but also for tons of new startups. Every new technology wave produces a new era of consulting firms that can deliver on that technology.

It’s also why the FDE (forward-deployed engineer) model is going to be alive and well for a long time: companies will want their vendor to actually drive the change management and implementation for their new workflows. The people aren’t going away. Far from it.

OPENAI IS WORKING WITH CONSULTANTS TO SELL CODEX - WSJ

🦔 Goldman Sachs reports that companies are blowing past their AI inference budgets by orders of magnitude, with inference costs in engineering now approaching 10% of total headcount costs and potentially reaching parity with salaries within several quarters. KPMG surveyed 2,100 senior leaders and found US companies plan to spend an average of $178 million on AI over the next 12 months, with Asia-Pacific firms budgeting $245 million and EMEA firms $157 million. The two reports together show companies are spending more than planned and intend to spend even more.

My Take

Inference costs approaching headcount parity is an extraordinary number that most finance teams did not model when they approved their AI strategies twelve months ago. The compute crunch, electrical component shortages, and GPU spot prices up 48% in two months are all flowing into corporate operating costs faster than anyone budgeted for, and Goldman's trajectory suggests it accelerates from here.

What I find hard to reconcile is that $178 million average sitting alongside enterprise data showing eight in ten workers are either avoiding AI tools or not using them at all. Companies are committing to nine-figure inference budgets while their own employees aren't using what's already been deployed.

I've watched this dynamic build all year, and my honest read is that a significant portion of this spending is driven by competitive fear rather than demonstrated returns. Nobody wants to be the company that didn't invest in AI when everyone else did. That's how bubbles get funded, and at some point boards are going to demand a number that justifies it.

Hedgie🤗
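To see how aggressive that trajectory is, here is a back-of-envelope sketch, assuming (my assumption, not Goldman's model) a constant quarterly growth rate, and taking the 10%-of-headcount starting figure at face value:

```python
# Assumed scenario: inference spend is 10% of headcount cost today and
# reaches parity (100%) in n quarters. The required constant quarterly
# growth factor g satisfies 0.10 * g**n = 1.0, so g = (1/0.10)**(1/n).

def quarterly_growth_to_parity(current_ratio: float, quarters: int) -> float:
    """Quarterly growth factor needed for current_ratio to reach 1.0."""
    return (1.0 / current_ratio) ** (1.0 / quarters)

for n in (4, 6, 8):
    g = quarterly_growth_to_parity(0.10, n)
    print(f"{n} quarters: {g:.2f}x per quarter (~{(g - 1) * 100:.0f}% QoQ)")
# Parity in 4 quarters requires roughly 78% growth per quarter;
# even an 8-quarter path requires roughly 33% per quarter.
```

Sustained double-digit quarterly growth in a single cost line is exactly the kind of compounding that finance teams' linear budget models miss.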

Anthropic CEO Dario Amodei: “50% of all tech jobs, entry-level lawyers, consultants, and finance professionals will be completely wiped out within 1–5 years.”

People are waking up to the fact that AI is a complementary tool for your skill set, not a complete replacement for a skill. If you are already a good coder or writer, AI tools can enhance that skill and make you more productive. But if you are bad at it, AI is not going to magically solve your deficiency. This is the primary reason I think the fears around AI replacing labor are overblown.






"Based on our analysis, 29% of the Fortune 500 and ~19% of the Global 2000 are live, paying customers of a leading AI startup." a16z.news/p/ai-adoption-…



There's kind of an analogy with what spreadsheets did with a lot of quantitative work. For example, it used to be that it was a chore to run different school aid formulas in our office. Now that you can do it effortlessly, you're just expected to do far more analyses.



AI has become the justification for every layoff. It's the perfect excuse card, but there is a lot of spin involved. Every layoff is some combo of the following five very different AI stories.

1. Nothing changed; we just realized we have too many people. We are going to blame AI, but we are bullshitting. This is AI as an excuse: it was really sloppy hiring, and we are just blaming AI. (See Block)

2. Growth has gone away, so now we have too many people. This may be because of AI if you are a SaaS company: all the customer love is now going to AI. But it's less AI as a productivity lift, and more about you building a less ambitious growth company. (See Salesforce and most every SaaS company)

3. We spent our money on capex to build AI, so now we can't afford as many people. Management may say it's about AI making us productive (4, below), but my gut is a lot of it is about Nvidia getting our money so now there is none for you. (See Meta and Oracle)

4. We are really using AI the way god intended. We don't need as many people. This is the ONLY version of the story that is actually about a productivity increase. It's real, it's happening, but I wonder if it is even the majority of the layoffs. (See some software engineering departments right now)

@jasonlk raised a fifth reason that doesn't get talked about enough: we just have the wrong people. Maybe we don't need 20 engineers who all know C++, but rather eight who have strong AI skills. This, I think, should be happening everywhere.

Every time a layoff announcement comes out, I try to mentally categorize it per the above.



Famously (there is a beautiful Works in Progress piece on this), in 2016 Geoffrey Hinton told an audience in Toronto that medical schools should stop training radiologists, since AI would soon outperform them at reading scans. Ten years later, there are more radiologists than ever, and they earn more than they did then. Hinton was right about the task, but he was wrong (so far!) about the future of the radiology profession. Times have never been better for them.

The gap between those two claims, the difference between tasks and jobs, is the subject of a paper I have written with Jin Li and Yanhui Wu, and that we release today: "Weak Bundle, Strong Bundle: How AI Redraws Job Boundaries." (Very relatedly, we are also finishing the first draft of our book "Messy Jobs" on AI and jobs!! You will be the first to hear.)

We start from the observation that the growing literature on AI and labor markets measures the AI shock by task exposure: people count how many tasks AI can perform in a given occupation, and infer that more exposure means more displacement. Eloundou et al. published a paper in Science in 2024 that started this literature, and many follow the same logic. The inference they make is that the more exposed the tasks, the worse the outcomes.

This is incomplete, because labor markets price jobs, not tasks. A radiologist does not just sell image classification; the job also involves triaging cases, communicating with other physicians, training residents, making the difficult decisions, and signing a diagnosis. The market buys a bundled service. The question AI poses is not whether it can do one task inside the bundle. The question is whether that task can be pulled out.

Thread (1/3) dropbox.com/scl/fo/689u1g7…


Anthropic CEO: “50% of all entry-level lawyers, consultants, and finance professionals will be completely wiped out within the next 1–5 years.” Grad students and junior hires are cooked.




