Juan Manuel Barreto

3.6K posts

@Barreto7Jm

Creating value for people | G. Orgánico SAS | Tradición 1915 | Inv: @BlackRockMX

Joined September 2013
1.2K Following · 652 Followers
Juan Manuel Barreto retweeted
Peter Girnus 🦅 @gothburz
Last quarter I rolled out Microsoft Copilot to 4,000 employees. $30 per seat per month. $1.4 million annually. I called it "digital transformation." The board loved that phrase. They approved it in eleven minutes. No one asked what it would actually do. Including me. I told everyone it would "10x productivity." That's not a real number. But it sounds like one. HR asked how we'd measure the 10x. I said we'd "leverage analytics dashboards." They stopped asking. Three months later I checked the usage reports. 47 people had opened it. 12 had used it more than once. One of them was me. I used it to summarize an email I could have read in 30 seconds. It took 45 seconds. Plus the time it took to fix the hallucinations. But I called it a "pilot success." Success means the pilot didn't visibly fail. The CFO asked about ROI. I showed him a graph. The graph went up and to the right. It measured "AI enablement." I made that metric up. He nodded approvingly. We're "AI-enabled" now. I don't know what that means. But it's in our investor deck. A senior developer asked why we didn't use Claude or ChatGPT. I said we needed "enterprise-grade security." He asked what that meant. I said "compliance." He asked which compliance. I said "all of them." He looked skeptical. I scheduled him for a "career development conversation." He stopped asking questions. Microsoft sent a case study team. They wanted to feature us as a success story. I told them we "saved 40,000 hours." I calculated that number by multiplying employees by a number I made up. They didn't verify it. They never do. Now we're on Microsoft's website. "Global enterprise achieves 40,000 hours of productivity gains with Copilot." The CEO shared it on LinkedIn. He got 3,000 likes. He's never used Copilot. None of the executives have. We have an exemption. "Strategic focus requires minimal digital distraction." I wrote that policy. The licenses renew next month. I'm requesting an expansion. 5,000 more seats. 
We haven't used the first 4,000. But this time we'll "drive adoption." Adoption means mandatory training. Training means a 45-minute webinar no one watches. But completion will be tracked. Completion is a metric. Metrics go in dashboards. Dashboards go in board presentations. Board presentations get me promoted. I'll be SVP by Q3. I still don't know what Copilot does. But I know what it's for. It's for showing we're "investing in AI." Investment means spending. Spending means commitment. Commitment means we're serious about the future. The future is whatever I say it is. As long as the graph goes up and to the right.
5K
25.4K
169.9K
24.8M
Juan Manuel Barreto retweeted
Aakash Gupta @aakashgupta
The layoff wave tells two stories, not one. Tech giants like Amazon, Meta, and Microsoft are cutting to fund GPU purchases. Their revenues are growing. Their stock prices are climbing. They're firing people to free up cash for compute. This isn't cost-cutting during a downturn. It's a forced reallocation from payroll to datacenter capacity. The math is brutal: every percentage point of headcount reduction funds another batch of H100s. Meanwhile, UPS, Nestle, Ford, and Target are cutting for the opposite reason. They've already deployed AI tools that work. Customer service automation, supply chain optimization, generative design systems. The productivity gains are real and compounding. These companies don't need to buy massive GPU clusters. They're renting inference from hyperscalers and cutting headcount because the math finally works. Both sides are feeding the same beast. Tech companies are buying the shovels. Everyone else is buying the gold those shovels dig up. Semiconductor companies sit in the middle, collecting rent from the entire value chain. TSMC, NVIDIA, and ASML are printing money while employment craters on both ends. The timing matters. We're at 10% enterprise AI adoption, heading toward 50%. History says this phase moves fastest and generates the most wealth. But that wealth is concentrating in compute, not labor. The gap between market cap growth and wage growth has never been wider. This isn't a recession. It's a rebalancing. And most workers are on the wrong side of it.
The Kobeissi Letter @KobeissiLetter

Recent Layoff Announcements:
1. UPS: 48,000 employees
2. Amazon: Up to 30,000 employees
3. Intel: 24,000 employees
4. Nestle: 16,000 employees
5. Accenture: 11,000 employees
6. Ford: 11,000 employees
7. Novo Nordisk: 9,000 employees
8. Microsoft: 7,000 employees
9. PwC: 5,600 employees
10. Salesforce: 4,000 employees
11. Paramount: 2,000 employees
12. Target: 1,800 employees
13. Kroger: 1,000 employees
14. Applied Materials: 1,444 employees
15. Meta: 600 employees
The labor market is clearly weakening.
320
2K
11.9K
2.6M
Juan Manuel Barreto retweeted
Lucas Lopatin @llopatin
Every bootstrapper's manifesto: We choose to grow with what we have. We choose freedom over capital. We choose reality over narrative.
1
5
31
1.6K
Juan Manuel Barreto retweeted
Codie Sanchez @Codie_Sanchez
Only 6% of Americans own a business. And of those, fewer than 1% ever hit $1,000,000 in revenue. After investing in businesses for 15 years, here are three reasons why 99% of businesses never scale past 7 figures:
53
112
1.5K
215.3K
Juan Manuel Barreto retweeted
Freddy Vega @freddier
Fewer than 20% remember what they wrote using ChatGPT. Neural connections are reduced by 47%. And after a few months of using ChatGPT, most people lose the ability to write well. An MIT study on ChatGPT's impact on the capacity to think ☹️
Freddy Vega tweet media
87
468
2K
121.6K
Juan Manuel Barreto retweeted
Ruben Hassid @rubenhassid
BREAKING: Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. Here's what Apple discovered: (hint: we're not as close to AGI as the hype suggests)
Ruben Hassid tweet media
2.6K
9.1K
63.2K
14.2M
Juan Manuel Barreto retweeted
Adam Oxsen @AdamOxsen
Billionaire investor Chamath Palihapitiya’s interview with Andrew Schulz just went viral. He exposed the truth about Trump’s tariffs: It's more than a trade war...he's completely redistributing American wealth. Here are the untold secrets of the President's $750B gamble: 🧵
Adam Oxsen tweet media
1.3K
12.4K
64K
14.3M
Juan Manuel Barreto retweeted
GREG ISENBERG @gregisenberg
SaaS is being dismantled as we speak! We're witnessing the slow-motion collapse of an entire business model that dominated tech for two decades. The $1.3 trillion SaaS industry is being quietly hollowed out from within by AI agents. Here's how I see it playing out:

Phase 1 (Now): AI as co-pilot. We're seeing this everywhere: Copilot for developers, Gamma for presentations, Harvey for legal research, etc. These AI layers sit atop existing software, making it more efficient. The SaaS companies feel safe, even excited, as AI seems to make their products more valuable. They're bringing knives to what they think is a knife fight.

Phase 2 (Next 12-18 months): The agent invasion. AI moves from co-pilot to autonomous operator. Agents are replacement workers that can fully operate existing software on your behalf. The dam breaks when someone can say "analyze our Q2 performance" rather than clicking through Tableau, or "optimize our ad campaigns" instead of navigating Meta's ad manager. The expertise previously bundled with the software gets unbundled by agents.

Phase 3 (2-3 years): Software invisibility. The final phase happens when agents bypass the human interfaces altogether. Why render dashboards, buttons, and menus when AI can just access the APIs directly? The value proposition of SaaS (bundling software, workflow, and expertise into user-friendly interfaces) unravels completely. The interfaces were designed for humans, but agents don't need them.

Most SaaS incumbents don't see it coming because this isn't a classic disruption pattern. It's not about competing products with better features. It's about the evaporation of the core assumption that humans will operate software. What's more, the barrier to creating custom, internal software is collapsing simultaneously. Companies that once had to choose between expensive custom development or off-the-shelf SaaS can now spin up bespoke solutions in days instead of months.

Why pay Hubspot $1,500/month for a CRM when your team can build 'HubspotForUs' with an AI coding assistant over a weekend? The same features, perfectly tailored to your workflow, with no ongoing subscription costs. This democratization of software creation means every company becomes a potential software producer rather than just a consumer. The specialized knowledge that SaaS companies monopolized is now available to anyone with access to an AI coding agent and domain expertise. It went from $1M to build an MVP for a SaaS to basically free, overnight.

I bet the metrics will be puzzling at first: DAUs remain strong while feature usage mysteriously declines. The power users who drive revenue suddenly need fewer seats. Customer success calls shift from "how do I use this feature?" to "can your software work with my AI agent?" Or worse: "we built our own version that better fits our workflow."

The survivors won't be those with the best features or even those who add AI features fastest (from no AI to "AI-assisted"). The winners will be companies that expose their software's capabilities through agent-friendly APIs and position themselves as the most trustworthy information sources and execution engines in their domain. There's also the shift from monthly subscriptions to outcome-based software (pay per outcome, pay per task, etc.), but that's a tweet for another day!

The $1T question: Will Microsoft, Atlassian, Adobe, etc. successfully navigate this transition, or will they be the Digital Equipment Corporation of our era, too invested in the previous paradigm to adapt to the new one? All I know is this will be a golden era for startups in the space. SaaS is being dismantled, piece by piece, workflow by workflow, interface by interface. Am I wrong?
572
805
5.9K
1.1M
Juan Manuel Barreto retweeted
John Rush @johnrushx
If you’re 18, pls read this: This whole flood of success stories is the reason an entire generation is depressed. “18-year-old started an AI startup valued at billions in just 12 months.” The media and algorithms glorify such stories… but what's the reality? Out of the 100 million who tried, less than 0.1% made it on their first attempt in their first 2 years. The reality is different. The absolute majority of founders will fail for many years, have setbacks and struggles, and maybe never overcome them. There is no proven recipe for success. If anyone says there is, he or she is a liar and a charlatan. If the sole purpose of starting a business is to make money, I promise you that working for a corporation will have better odds and outcomes for such a goal. However, if you still wanna be a business founder, and you love the freedom & challenge, then here is the advice I wanna give you: See it as a lifelong journey where you must optimize your whole life for it.
- Make sure you can pay your basic bills, cuz the business might be unprofitable for a long while, maybe forever.
- Make sure you have a place to go when things are sad and your mood is down. Your family and friends mean everything in such moments. I’ve seen so many founders totally crash and even get ill from severe depression, just because they had no family and friends who could give them love regardless of their business trophies.
TLDR:
- Only enter this path if the goal isn't just money.
- Expect a long journey of 10+ years at least.
- Make sure you can make money outside of your biz for the first 3-10 years.
- Build up a strong family and friendships around you to get the love that heals all pain.
128
65
732
72K
Juan Manuel Barreto retweeted
Science girl @sciencegirl
Perspective
94
1.3K
7.6K
531.2K
Juan Manuel Barreto retweeted
GREG ISENBERG @gregisenberg
This chart is nuts. Software developer jobs down 70% from peak. People will blame the end of free money. But something way more interesting is happening. The middle class engineer is dying. And it's dying because they're not needed anymore. One good dev with Github Copilot ships what entire teams did five years ago. Microsoft just reported the highest revenue per employee in history. The "entry-level engineer" doesn't exist anymore. Instead, we have product builders who happen to code. Armed with AI, they ship entire products in days. Meanwhile, the truly elite engineers are making more money than ever. And they've shifted to working mostly on frontier tech. I mean the stuff that's really hard. AGI at OpenAI. Designing rockets at SpaceX. Self-driving car tech at Tesla. Product builders are becoming solopreneurs and creators. Frontier engineers are making hedge fund money. In 2025, "software engineer" doesn't mean what it meant in 2020. And that's what this chart really shows. The middle is gone. The top is elite status. And everyone else is becoming a builder.
GREG ISENBERG tweet media
1.2K
2K
15K
4.2M
Juan Manuel Barreto retweeted
Aaron Levie @levie
AI Agents will dramatically expand the size of the software market. Here's how that will work. Traditionally, software companies are stuck within the constraints of existing IT budgets, with IT expenses running in most companies somewhere between 3-7% of revenue (with tech and banking often a bit higher). This has always introduced a natural ceiling on the amount of spend for most categories of software. But in an era where the software, because of AI, is *solving* the problem for the customer and not just *enabling* a solution to the problem, the ceiling gets blown up. In an AI-first enterprise, AI Agents will: help marketing teams spin up campaigns faster in all regions; code and test software for engineering; answer and triage the first layer of support tickets; scale outbound campaigns and generate leads; automatically review and work through contracts; and so on. None of these outcomes traditionally came from IT spend. This then directly leads to software TAM growth. Just take a micro example of legal use-cases as an easy case in point. In the US, the contract management and ediscovery software categories are a few billion dollars each, give or take. However, the size of the legal services market in the US is somewhere around $400B, nearly 100X larger than the related software categories. If AI made legal services operations even 20% more efficient (which is likely an understatement in the medium run), the software spend in this space could very easily grow by 5-10X. You can apply this logic to basically any category of work, and the math is similar. Importantly, the spend is not inherently going to be zero sum with what we spent on before.
Net new dollars for AI (not replacing labor) will appear in many areas: startups and small businesses will go after problems they couldn't afford before; teams in large companies can scale out an operation far more than they would've otherwise; and teams that maybe had a business requirement but not enough budget or weren't prioritized before can now solve a problem more quickly. Going forward, a company will simply decide how much productivity they want to spin up in the form of AI Agents, and they can modulate quickly based on the ROI of whatever Agents they're using. Because of this flexibility for scaling out work, the new use cases it now solves, and the ability to go past typically limited IT budgets, AI Agents will make software markets much, much larger.
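The legal-services example above is a back-of-the-envelope calculation that can be sketched in a few lines of Python. The market size, current software spend, and 20% efficiency figure come from the post; the `capture_rate` (the share of unlocked services value that ends up as software spend) is an illustrative assumption of mine, not a number the author gives:

```python
# Sketch of the legal-services TAM argument, under assumptions noted above.
legal_services_market = 400e9   # US legal services, ~$400B (from the post)
current_software_spend = 4e9    # contract mgmt + ediscovery, "a few billion each"
efficiency_gain = 0.20          # AI makes legal operations 20% more efficient

# Value unlocked on the services side.
value_unlocked = legal_services_market * efficiency_gain  # $80B

# Assume software vendors capture some fraction of that value as spend.
for capture_rate in (0.25, 0.50):
    new_software_spend = value_unlocked * capture_rate
    growth = new_software_spend / current_software_spend
    print(f"capture {capture_rate:.0%}: ~${new_software_spend / 1e9:.0f}B, "
          f"{growth:.0f}x current software spend")
```

With a 25-50% capture rate, the arithmetic lands in the 5-10X growth range the post cites; the real debate is what capture rate is plausible.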
233
127
755
151.2K
Juan Manuel Barreto retweeted
Alex Cheema @alexocheema
Market close: $NVDA: -16.91% | $AAPL: +3.21% Why is DeepSeek great for Apple? Here's a breakdown of the chips that can run DeepSeek V3 and R1 on the market now: NVIDIA H100: 80GB @ 3TB/s, $25,000, $312.50 per GB AMD MI300X: 192GB @ 5.3TB/s, $20,000, $104.17 per GB Apple M2 Ultra: 192GB @ 800GB/s, $5,000, $26.04(!!) per GB Apple's M2 Ultra (released in June 2023) is 4x more cost efficient per unit of memory than AMD MI300X and 12x more cost efficient than NVIDIA H100! Why is this relevant to DeepSeek? DeepSeek V3/R1 are MoE models with 671B total parameters, but only 37B are active each time a token is generated. We don't know exactly which 37B will be active when we generate a token, so they all need to be ready in high-speed GPU memory. We can't use normal system RAM because it's too slow to load the 37B active parameters (we'd get <1 tok/sec). On the other hand GPUs have fast memory but GPU memory is expensive. Apple Silicon, however, uses Unified Memory and UltraFusion to fuse dies - a tradeoff that favors a large amount of medium-fast memory at a cheaper cost. Unified memory shares a single pool of memory between the CPU and GPU rather than having separate memory for each. There's no need to have separate memory and copy data between the CPU and GPU. UltraFusion is Apple's proprietary interconnect technology for connecting two dies with a super high speed, low latency connection (2.5TB/s). Apple's M2 Ultra is literally two Apple M2 Max dies fused together with UltraFusion. This is what enables Apple to achieve such a high amount of memory (192GB) and memory-bandwidth (800GB/s). Apple M4 Ultra is rumored to use the same UltraFusion technology to fuse together two M4 Max dies. This would give the M4 Ultra 256GB(!!) of unified memory @ 1146GB/s. Two of these could run DeepSeek V3/R1 (4-bit) at 57 tok/sec. 
All of this and Apple has managed to package this in a small form-factor for consumers with great power efficiency and great open-source (uncharacteristic of Apple!) software. MLX (h/t @awnihannun) has made it possible to leverage Apple Silicon for ML workloads and @exolabs has made it possible to cluster together multiple Apple Silicon devices to run large models, demonstrating DeepSeek R1 (671B) running on 7 M4 Mac Minis. It's unclear who will build the best AI models, but it seems likely that AI will run on American hardware, on Apple Silicon.
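The cost-per-GB comparison and the decode-speed claim above are easy to reproduce. Prices and specs are the ones quoted in the post; the tokens/second figure is a simple bandwidth-bound upper estimate (memory traffic only, ignoring compute, KV cache, and multi-device interconnect overhead), so real throughput will be lower:

```python
# Reproduce the $/GB table and a rough bandwidth-bound decode estimate.
chips = {
    "NVIDIA H100":    {"mem_gb": 80,  "bw_gbps": 3000, "price": 25_000},
    "AMD MI300X":     {"mem_gb": 192, "bw_gbps": 5300, "price": 20_000},
    "Apple M2 Ultra": {"mem_gb": 192, "bw_gbps": 800,  "price": 5_000},
}
for name, c in chips.items():
    print(f"{name}: ${c['price'] / c['mem_gb']:.2f} per GB")

# DeepSeek V3/R1 activate 37B parameters per token; at 4-bit that is
# ~18.5 GB read from memory for every generated token.
active_params = 37e9
bytes_per_param = 0.5                       # 4-bit quantization
bytes_per_token = active_params * bytes_per_param

m2_ultra_tok_s = 800e9 / bytes_per_token    # ~43 tok/s bound per M2 Ultra
m4_ultra_tok_s = 1146e9 / bytes_per_token   # ~62 tok/s at the rumored M4 Ultra bandwidth
```

The per-GB figures match the post exactly ($312.50, $104.17, $26.04), and the M4 Ultra bandwidth bound lands in the same ballpark as the 57 tok/sec quoted for a two-device setup.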
Alex Cheema tweet media
215
1.1K
7.1K
1.2M
Juan Manuel Barreto retweeted
Shay Boloor @StockSavvyShay
The Ultimate Cheat Sheet for 25 Companies Shaping the Semiconductor Value Chain
1. $NVDA -- Designer of AI GPUs & CUDA platform for model training
2. $TSM -- Manufacturer of advanced 3nm & 5nm chips for AI workloads
3. $AVGO -- Supplier of networking ASICs & high-speed connectivity components
4. $ASML -- Exclusive supplier of EUV lithography machines for chip manufacturing
5. $QCOM -- Designer of Snapdragon AI processors for mobile & edge devices
6. $ARM -- Developer of energy-efficient CPU architectures for AI & IoT
7. $AMD -- Provider of Ryzen & EPYC processors for high-performance computing
8. $MU -- Manufacturer of HBM3 memory for AI accelerators
9. $INTC -- Integrated producer of Xeon processors for data centers
10. $MRVL -- Developer of OCTEON processors & networking chips for AI infrastructure
11. $AMAT -- Innovator of deposition & etch tools for semiconductor fabrication
12. $DELL -- Provider of PowerEdge servers for AI data centers
13. $TXN -- Supplier of analog chips & embedded processors for industrial AI
14. $KLAC -- Developer of wafer inspection & metrology tools
15. $WDC -- Leader in enterprise HDDs & SSDs for AI and cloud storage
16. $ANET -- Provider of CloudVision software and 400G switches for AI data centers
17. $SNPS -- Provider of EDA tools for chip design & verification
18. $CDNS -- Leader in chip simulation software for AI & automotive applications
19. $HPE -- Provider of HPE Apollo systems for AI & HPC workloads
20. $AEHR -- Supplier of burn-in and test systems for silicon carbide chips
21. $SMCI -- Custom server solutions optimized for AI & cloud computing
22. $ADI -- Provider of precision signal processing ICs for industrial AI
23. $ON -- Manufacturer of SiC power modules for EVs & ADAS systems
24. $ALAB -- Developer of PCIe connectivity chips for AI & data center workloads
25. $NVTS -- Supplier of GaN power ICs for fast charging & AI hardware
Shay Boloor tweet media
46
295
1.4K
259.1K
Juan Manuel Barreto retweeted
Gavin Baker @GavinSBaker
1) DeepSeek r1 is real, with important nuances. Most important is the fact that r1 is so much cheaper and more efficient to inference than o1, not the $6m training figure. r1 costs 93% less to *use* than o1 per API call, can be run locally on a high-end workstation, and does not seem to have hit any rate limits, which is wild. Simple math: every 1b active parameters requires 1 gb of RAM in FP8, so r1 requires 37 gb of RAM. Batching massively lowers costs and more compute increases tokens/second, so there are still advantages to inference in the cloud. Would also note that there are real geopolitical dynamics at play here and I don’t think it is a coincidence that this came out right after “Stargate.” RIP, $500 billion - we hardly even knew you.
Real:
1) It is/was the #1 download in the relevant App Store category. Obviously ahead of ChatGPT; something neither Gemini nor Claude was able to accomplish.
2) It is comparable to o1 from a quality perspective, although it lags o3.
3) There were real algorithmic breakthroughs that led to it being dramatically more efficient both to train and inference. Training in FP8, MLA and multi-token prediction are significant.
4) It is easy to verify that the r1 training run only cost $6m. While this is literally true, it is also *deeply* misleading.
5) Even their hardware architecture is novel and I will note that they use PCI-Express for scale up.
Nuance:
1) The $6m does not include “costs associated with prior research and ablation experiments on architectures, algorithms and data” per the technical paper. “Other than that, Mrs. Lincoln, how was the play?” This means that it is possible to train an r1-quality model with a $6m run *if* a lab has already spent hundreds of millions of dollars on prior research and has access to much larger clusters. DeepSeek obviously has way more than 2048 H800s; one of their earlier papers referenced a cluster of 10k A100s.
An equivalently smart team can’t just spin up a 2000 GPU cluster and train r1 from scratch with $6m. Roughly 20% of Nvidia’s revenue goes through Singapore. 20% of Nvidia’s GPUs are probably not in Singapore despite their best efforts. 2) There was a lot of distillation - i.e. it is unlikely they could have trained this without unhindered access to GPT-4o and o1. As @altcap pointed out to me yesterday, kinda funny to restrict access to leading edge GPUs and not do anything about China’s ability to distill leading edge American models - obviously defeats the purpose of the export restrictions. Why buy the cow when you can get the milk for free?
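The "1 gb of RAM per 1b active parameters in FP8" rule of thumb from the post is just bytes-per-parameter arithmetic. A minimal sketch (the alternate precisions and the KV-cache caveat are my additions, not the post's):

```python
# Rule of thumb: at FP8, one parameter = one byte, so N billion active
# parameters need ~N GB of RAM just for those weights.
# Ignores KV cache and activations, which add memory on top; for an MoE
# model the full expert set may also need to be resident (see the
# discussion of DeepSeek's 671B total parameters elsewhere in this feed).
def weight_ram_gb(active_params_b: float, bytes_per_param: float = 1.0) -> float:
    """GB of RAM needed for the active weights at a given precision."""
    return active_params_b * bytes_per_param

print(weight_ram_gb(37))        # FP8: 37 GB, matching the post
print(weight_ram_gb(37, 0.5))   # 4-bit: 18.5 GB
print(weight_ram_gb(37, 2.0))   # FP16/BF16: 74 GB
```

This is why a single high-end workstation with tens of GB of fast memory can serve r1's active set, even though the total parameter count is far larger.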
224
1.4K
9K
3.3M
Juan Manuel Barreto retweeted
Robert Sterling @RobertMSterling
Might be a dumb question, but can’t OpenAI, Anthropic, and other AI companies just incorporate the best parts of DeepSeek’s source code into their code, then use the massive GPU clusters at their disposal to train models even more powerful than DeepSeek? Am I missing something?
1.6K
654
20K
2.7M
Juan Manuel Barreto retweeted
Business Insider @BusinessInsider
Why banning red no. 3 in America took decades
2
9
19
45.8K
Juan Manuel Barreto retweeted
Joko @jokowords
I bet my life savings OpenAI will go bankrupt by 2026-28. Not because Elon Musk could win his lawsuit against OpenAI, but because the vital figures are even more worrying. Here’s what you need to know, whether you have a ChatGPT account or not: 🧵
Joko tweet media
540
689
8K
5M
Juan Manuel Barreto retweeted
Javier Lacort @lacort
While most are betting on cloud models, Apple insists on running AI on-device whenever possible. That is consistent with its focus on privacy. Private Cloud Compute serves as a cloud extension.
1
8
134
38.6K