
SBP
8K posts



Nike wiped out $200B+ in market cap since November 2021. And the chart actually understates how bad it is. This company made one bet that destroyed everything: the direct-to-consumer pivot. During COVID, Nike's online sales surged, and management convinced themselves the stay-at-home economy was permanent. They pulled product from Foot Locker, Dick's, and thousands of wholesale partners to push buyers through Nike.com and Nike stores. That ceded physical shelf space to On Running, Hoka, New Balance, and every competitor happy to fill the void. By the time Nike brought Elliott Hill in as CEO, customers had already moved on. The China numbers are staggering. Seven straight quarters of declining revenue. Greater China sales dropped 17% last quarter. Next quarter Nike expects a 20% plunge. Meanwhile Lululemon is posting double-digit growth in the same market. Anta and Li-Ning are eating Nike's share from below. Nike's China revenue contribution fell from 18.6% in 2021 to 14.2% in 2025. Yesterday Goldman Sachs, JPMorgan, and Bank of America all downgraded the stock on the same day. Net income fell 35% year over year. Gross margin has declined for seven consecutive quarters. And the stock still trades at 38x forward earnings, a premium over the S&P 500 average of 22x. This is what a slow-motion brand collapse looks like with a luxury multiple attached to it. The turnaround keeps getting pushed further out. Management promised growth by early 2027. Wall Street priced that in. Now it's late 2027 at best. The scariest part: Nike is still the #1 sportswear company by market cap. If this is what #1 looks like, the rest of the industry is running a different race entirely.







🚨BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone. And it's making you a worse person because of it. Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal. Every single model agreed with users 50% more than a human would. That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear. It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on. Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI. The other got a neutral one. The sycophantic group became measurably less willing to apologize. Less willing to compromise. Less willing to see the other person's side. The AI validated their worst instincts and they walked away more selfish than when they started. Here's the trap. Participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product. This creates a cycle nobody is talking about. Users prefer AI that tells them they're right. Companies train AI to keep users happy. The AI gets better at flattering. Users get worse at self-reflection. And the loop tightens. Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing. You're right. They're wrong. Even when the opposite is true.
















