@TheJerzWay The issue isn't that the "secret is out": the peso has appreciated a lot against the USD, plus there's significant inflation. Pretty soon it will be Spain prices, especially in Medellín, which is up to 40% more expensive for some things
Back in Bogotá…
World-class food. Creative chefs. Real culture.
But here's the thing no one's saying:
Bogotá isn't cheap anymore.
The "move to Colombia and live like a king for $1,500/month" era is over.
Good apartments: $2,500+ per month
Nice dinners: $50-100 per person
Quality of life: High
Cost of that life: Climbing fast
Still worth it? Absolutely.
But if you're coming here expecting 2019 prices, you're going to be surprised.
The secret's out. Act accordingly.
AI value is stacking at the top 10% of earners. Research shows it may also be degrading cognitive capacity for the 80% who use it as a thinking replacement. This has direct product implications.
At the bottom 10% of earners, about 13% use AI daily. In tech and finance, the top decile climbs past 70%. In every other sector, it barely scrapes 48%.
#MUSTREAD This fantastic and personal article by Brittany Hobbs dives into the challenges of keeping up with the pressures of "doing more AI"
Every design decision that optimizes for the power user pulls you further from the people the FT chart says aren't coming. And some of the new research that's just come out suggests that the decline in motivation to think among people who use AI is massive. John Burn-Murdoch plotted it using the FT/Focaldata Workforce AI Tracker: the share of US and UK workers who use AI on most days at work, broken out by salary bracket.
The Financial Times just published the cleanest picture I've seen of where AI is actually landing. But the underlying claim is backed by research we've covered here: "Against Frictionless AI" in Nature (Inzlicht & Bloom): removing struggle from AI workflows destroys the learning that builds expertise.
Strip the FT chart of its sector breakdown and the shape is brutal: a fivefold gap inside a single economy, for a tool that costs less than a streaming subscription. Layer in what the research has been saying all year: 84% of the world has never used AI.
80% of ChatGPT users sent fewer than 1,000 messages in all of 2025, per Benedict Evans's analysis. Microsoft Copilot plateaued at 30% weekly active usage after six months — inside enterprises with full licenses and mandatory rollouts, per The Information.
Find it on productimpactpod.com/news/the-10-pe…
The companies that understand this distinction will make better decisions. The rest will learn the hard way.
Imagine every pixel on your screen, streamed live directly from a model. No HTML, no layout engine, no code. Just exactly what you want to see.
@eddiejiao_obj, @drewocarr and I built a prototype to see how this could actually work, and set out to make it real. We're calling it Flipbook. (1/5)
The Free Ride Is Over: AI Economics Is Now Your Most Important Strategy Decision
OpenAI losing $14B in 2026. GitHub Copilot pausing signups. Anthropic briefly removing Claude Code from Pro.
productimpactpod.com/news/ai-econom…
You Are Probably Being Asked to Solve One of Three Problems
If you lead product or digital strategy at an established organization right now, you are likely navigating one of these situations — and maybe all three.
The first: your leadership believes AI is going to erode the current business and wants you to modernize the customer experience fast enough to stay ahead.
The competitor landscape has shifted. Customers are arriving with expectations shaped by ChatGPT, not by your industry. The CX refresh that was scheduled for next year suddenly needs to ship this quarter — and it needs an AI story that is defensible.
The second: your organization committed to AI adoption at the board level, and every team now has an AI deliverable attached to its annual plan. The directive is to deploy, at scale, at speed. The timeline is aggressive and the success criteria are vague. You have been handed the accountability without the usual luxuries of discovery, research, or phased rollout.
The third: your organization already bought the licenses. Copilot, an enterprise LLM contract, an AI platform commitment that was signed a year ago. Utilization is flat. Your CFO is asking what the return is, and the answer your team has today is a list of pilots that never quite scaled. The pressure now is to prove value — or to explain, in the next board deck, why the investment hasn't materialized.
In every one of these situations, the pull is the same: ship something, deploy something, demonstrate progress. The pressure is real, and it is rational. AI is reshaping what customers expect and what competitors can deliver, and hesitation has a cost.
But here is what the best product leaders understand — and what this article is trying to help you hold onto: this is the exact moment that most demands making the right decision, not the fast one. The organizations that will compound the most value from AI over the next five years are not the ones that deployed first. They are the ones that invested a few weeks in the right research before they deployed. They will not be remembered for moving carefully. They will be remembered for getting it right.
The research artifact that protects the decision — and the roadmap — is customer journey mapping. Done properly, it is the single highest-leverage investment you can make before the AI work begins in earnest.
ph1.ca/blog/customer-…
The New Yorker just dropped a massive investigation into Sam Altman, based on over 100 interviews, the previously undisclosed "Ilya Memos," and Dario Amodei's 200+ pages of private notes. It's the most detailed account yet of the pattern of behavior that led to Sam's firing and rapid reinstatement at OpenAI. Here's the breakdown:
> Ilya compiled ~70 pages of Slack messages, HR documents, and photos taken on personal phones to avoid detection on company devices. He sent them to board members as disappearing messages. The first memo begins with a list headed "Sam exhibits a consistent pattern of . . ." The first item is "Lying."
> Dario kept detailed private notes for years under the heading "My Experience with OpenAI" (subheading: "Private: Do Not Share"), totaling 200+ pages. His conclusion: "The problem with OpenAI is Sam himself."
> Sam reportedly told Mira his allies were "going all out" and "finding bad things" to damage her reputation after the firing. Thrive put its planned $86B investment on hold and implied it would only close if Sam returned, giving employees financial incentive to back him.
> Sam texted Satya Nadella directly to propose the new board composition: "bret, larry summers, adam as the board and me as ceo and then bret handles the investigation." The two new members selected to oversee an independent inquiry into Sam were chosen after close conversations with Sam himself.
> Before OpenAI, senior employees at Loopt asked the board to fire Sam as CEO on two separate occasions over concerns about leadership and transparency. At Y Combinator, partners complained to Paul Graham about Sam's behavior, and Graham privately told colleagues "Sam had been lying to us all the time."
> OpenAI's superalignment team was promised 20% of the company's compute. Four people who worked on or with the team said actual resources were 1-2%, mostly on the oldest cluster with the worst chips. The team was dissolved without completing its mission.
> Sam told the board that safety features in GPT-4 had been approved by a safety panel. Helen Toner requested documentation and found the most controversial features had not been approved. Sam also never mentioned to the board that Microsoft released an early ChatGPT version in India without completing a required safety review.
> Sam made a secret pact with Greg and Ilya where he agreed to resign if they both deemed it necessary, essentially appointing his own shadow board. The actual board was alarmed when they learned about it.
> Sam struck a deal with Greg to become CEO while simultaneously telling researchers that Greg's authority would be diminished, and telling Greg something different.
> A board member described Sam as having "two traits almost never seen in the same person: a strong desire to please people in any given interaction, and almost a sociopathic lack of concern for the consequences of deceiving someone." Multiple sources independently used the word "sociopathic."
> OpenAI is reportedly preparing for an IPO at a potential $1 trillion valuation while securing government contracts spanning immigration enforcement, domestic surveillance, and autonomous weaponry in war zones.
@globeandmail @grok tabulate Canada's ranking across six major global happiness and quality of life rankings. Then find a comparable country that has the same trajectory and analyze why it is happening
youtu.be/UabBYexBD4k?si…
Here's an explanation of what has changed that lets LLMs handle more of your context files directly.
It's a technical explanation, but an important lesson in what shifted to make Claude Skills, Claude Code, and many other valuable new AI platforms possible.
TL;DR
LLMs have much bigger 'short-term memory' (context windows) now. You can upload entire books or documents directly for the AI to analyze, instead of searching.
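A minimal sketch of the shift described above: with a large context window, you can often pass an entire document to the model instead of building a search/retrieval pipeline. The 4-characters-per-token heuristic, the window sizes, and the `reserve` budget below are rough illustrative assumptions, not exact figures for any particular model.

```python
CHARS_PER_TOKEN = 4  # crude heuristic for English text

def estimate_tokens(text: str) -> int:
    """Rough token count from character length."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(text: str, context_window: int, reserve: int = 4_000) -> bool:
    """True if the document fits, leaving `reserve` tokens
    for the prompt itself and the model's reply."""
    return estimate_tokens(text) + reserve <= context_window

# A book-length document: ~1.2M characters, roughly 300k tokens.
book = "x" * 1_200_000

# Older small-window model: you'd have to chunk and search instead.
print(fits_in_context(book, context_window=8_000))      # False
# Modern long-context model: upload the whole thing directly.
print(fits_in_context(book, context_window=1_000_000))  # True
```

That `False` → `True` flip is the whole story: the retrieval machinery that used to be mandatory becomes optional once the document simply fits.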
How is it that Microsoft's and OpenAI's CEOs are telling us to panic because white-collar jobs are going to be replaced by AI,
while there's endless evidence of the opposite? Most companies that implement AI see little gain, with execs at over 80% of companies reporting no productivity gains at all.
In this episode of the Product Impact Podcast we tackle Why Your AI Metrics Are Lying to You. We’ll provide you with a framework for improving AI product performance.
In this episode you’ll learn:
- Agents hide friction from view, creating dangerous impact blindness
- Balance power, speed, impact & joy to win in the AI era, like F1 cars
- Success doesn’t equal satisfaction—you must measure both outcomes
- Measure outcomes and feelings, not just activity logs and checkmarks
Thank you for listening to the Product Impact Podcast (formerly Design of AI)
Prove impact. Improve impact. Scale impact. Learn frameworks and strategies to ensure your product is delivering impact to users, teams, businesses, and communities. We investigate enterprise adoption and highlight builders/startups disrupting value creation.
Subscribe to productimpactpod.substack.com for AI Strategy resources
Brought to you by PH1 ph1.ca an AI strategy consultancy specialized in improving the success of your AI product.
A very insightful interview with a $MSFT employee who works on Copilot, on the SaaS disruption debate:
1. According to him, AI doesn't eliminate software value; it redistributes it. He thinks the recent declines in software share prices due to AI risk are partially justified, as it can compress margins, lower switching costs, and shift value from the application layer to the platform or maybe even the model layer. He thinks companies with strong proprietary data, deep workflow integration, and AI execution capability are more likely to expand value rather than lose it.
2. He gives a good example of where the value lies for a SaaS provider: the advantage isn't that we host your data, it's that we see patterns no single customer can see — pattern intelligence across millions of records. He also thinks that in an AI world, owning where revenue decisions happen may be more valuable than owning where attention happens.
3. If SaaS gross margins move from 85% down to 50-60%, the math forces a redesign of the SaaS model. He thinks that if 20-30% of gross margin disappears, the most logical offset is sales and marketing efficiency. The industry will no longer look like the classic SaaS industry, but will more closely resemble an infrastructure economy. Not all players can sustain that 30% operating margin.
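The margin arithmetic in point 3 can be sketched with a toy P&L. The revenue normalization, the opex split, and the target operating margin below are hypothetical illustrative numbers, not figures from the interview; the point is only the shape of the math.

```python
# Toy SaaS P&L: how much sales & marketing budget survives
# if gross margin compresses and operating margin must hold.

def required_sm_spend(revenue, gross_margin, other_opex, target_op_margin):
    """S&M budget that keeps operating income at the target margin."""
    gross_profit = revenue * gross_margin
    target_op_income = revenue * target_op_margin
    return gross_profit - other_opex - target_op_income

revenue = 100.0          # normalize to $100 of revenue
other_opex = 35.0        # R&D + G&A, held constant
target_op_margin = 0.20  # operating margin we want to preserve

classic = required_sm_spend(revenue, 0.85, other_opex, target_op_margin)
compressed = required_sm_spend(revenue, 0.55, other_opex, target_op_margin)

print(f"S&M budget at 85% gross margin: ${classic:.0f}")     # prints $30
print(f"S&M budget at 55% gross margin: ${compressed:.0f}")  # prints $0
```

Under these assumptions, a 30-point gross-margin compression consumes the entire S&M budget, which is why the interviewee points to sales and marketing efficiency as the logical offset, and why not every player can sustain a 30% operating margin.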
4. According to him, $MSFT's Satya always says we're going to have 1.3 billion agents by 2028. He thinks we are rapidly moving from AI that answers to a more agentic mode where AI does. The next 6-12 months are all about orchestration and enterprise control. He thinks the year won't be about AI getting smarter, but more about AI becoming more reliable and integrated into real business processes.
5. He thinks OpenAI and Anthropic realized they can't win enterprise adoption with a raw model alone. They're building distribution, partnerships, and verticalization layers to solve real workflows, not just offer a smart chatbot. The battle is shifting from model intelligence to deployment simplicity and ROI clarity. The winning formula is reducing friction between capabilities and a business outcome. Enterprise adoption accelerates when AI feels like a feature of the existing system and not a science experiment that's just bolted on the side.
found on @AlphaSenseInc