arpy
@arpyd
3.4K posts

Product Strategy for AI Era | Product Impact Podcast | Founder of PH1

Vancouver · Joined March 2009
1K Following · 399 Followers
arpy
arpy@arpyd·
Yay there's a new Claude model. Oh shit, now it seems Sonnet has been throttled to be useless.
arpy
arpy@arpyd·
The Financial Times just published the cleanest picture I've seen of where AI is actually landing. John Burn-Murdoch plotted it using the FT/Focaldata Workforce AI Tracker: the share of US and UK workers who use AI on most days at work, broken out by salary bracket.

At the bottom 10% of earners, about 13% use AI daily. In tech and finance, the top decile climbs past 70%. In every other sector, it barely scrapes 48%. Strip the FT chart of its sector breakdown and the shape is brutal: a fivefold gap inside a single economy, for a tool that costs less than a streaming subscription.

AI value is stacking at the top 10% of earners. Research shows it may also be degrading cognitive capacity for the 80% who use it as a thinking replacement. This has direct product implications: every design decision that optimizes for the power user pulls you further from the people the FT chart says aren't coming.

The underlying claim is backed by research we've covered here. "Against Frictionless AI" in Nature (Inzlicht & Bloom) argues that removing struggle from AI workflows destroys the learning that builds expertise. And some of the new research that's just come out suggests that the decline in motivation to think among people who use AI is massive.

Layer in what the research has been saying all year: 84% of the world has never used AI. 80% of ChatGPT users sent fewer than 1,000 messages in all of 2025, per Benedict Evans's analysis. Microsoft Copilot plateaued at 30% weekly active usage after six months, inside enterprises with full licenses and mandatory rollouts, per The Information.

#MUSTREAD: this fantastic and personal article by Brittany Hobbs dives into the challenges of keeping up with the pressures of "doing more AI".

Find it on productimpactpod.com/news/the-10-pe… The companies that understand this distinction will make better decisions. The rest will learn the hard way.
arpy tweet media
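The "fivefold gap" follows directly from the two figures quoted. A quick back-of-the-envelope check (the rates are the tweet's approximations of the FT/Focaldata numbers, not my own data):

```python
# Approximate daily-AI-use rates quoted from the FT/Focaldata tracker.
bottom_decile = 0.13  # bottom 10% of earners using AI daily
top_decile = 0.70     # top decile in tech and finance

gap = top_decile / bottom_decile
print(f"adoption gap: {gap:.1f}x")  # → adoption gap: 5.4x
```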
arpy reposted
Zain Shah
Zain Shah@zan2434·
Imagine every pixel on your screen, streamed live directly from a model. No HTML, no layout engine, no code. Just exactly what you want to see. @eddiejiao_obj, @drewocarr and I built a prototype to see how this could actually work, and set out to make it real. We're calling it Flipbook. (1/5)
arpy
arpy@arpyd·
The Free Ride Is Over: AI Economics Is Now Your Most Important Strategy Decision OpenAI losing $14B in 2026. GitHub Copilot pausing signups. Anthropic briefly removing Claude Code from Pro. productimpactpod.com/news/ai-econom…
arpy
arpy@arpyd·
You Are Probably Being Asked to Solve One of Three Problems

If you lead product or digital strategy at an established organization right now, you are likely navigating one of these situations, and maybe all three.

The first: your leadership believes AI is going to erode the current business and wants you to modernize the customer experience fast enough to stay ahead. The competitor landscape has shifted. Customers are arriving with expectations shaped by ChatGPT, not by your industry. The CX refresh that was scheduled for next year suddenly needs to ship this quarter, and it needs an AI story that is defensible.

The second: your organization committed to AI adoption at the board level, and every team now has an AI deliverable attached to its annual plan. The directive is to deploy, at scale, at speed. The timeline is aggressive and the success criteria are vague. You have been handed the accountability without the usual luxuries of discovery, research, or phased rollout.

The third: your organization already bought the licenses. Copilot, an enterprise LLM contract, an AI platform commitment that was signed a year ago. Utilization is flat. Your CFO is asking what the return is, and the answer your team has today is a list of pilots that never quite scaled. The pressure now is to prove value, or to explain, in the next board deck, why the investment hasn't materialized.

In every one of these situations, the pull is the same: ship something, deploy something, demonstrate progress. The pressure is real, and it is rational. AI is reshaping what customers expect and what competitors can deliver, and hesitation has a cost. But here is what the best product leaders understand, and what this article is trying to help you hold onto: this is the exact moment that most demands making the right decision, not the fast one.

The organizations that will compound the most value from AI over the next five years are not the ones that deployed first. They are the ones that invested a few weeks in the right research before they deployed. They will not be remembered for moving carefully. They will be remembered for getting it right.

The research artifact that protects the decision, and the roadmap, is customer journey mapping. Done properly, it is the single highest-leverage investment you can make before the AI work begins in earnest. ph1.ca/blog/customer-…
arpy reposted
Stopa
Stopa@stopachka·
After 4 years, we’re announcing Instant 1.0. Instant is the best backend for AI-coded apps. Let us tell you why.
arpy reposted
Ryan
Ryan@ohryansbelt·
The New Yorker just dropped a massive investigation into Sam Altman, based on over 100 interviews, the previously undisclosed "Ilya Memos," and Dario Amodei's 200+ pages of private notes. It's the most detailed account yet of the pattern of behavior that led to Sam's firing and rapid reinstatement at OpenAI. Here's the breakdown:

> Ilya compiled ~70 pages of Slack messages, HR documents, and photos taken on personal phones to avoid detection on company devices. He sent them to board members as disappearing messages. The first memo begins with a list headed "Sam exhibits a consistent pattern of . . ." The first item is "Lying."

> Dario kept detailed private notes for years under the heading "My Experience with OpenAI" (subheading: "Private: Do Not Share"), totaling 200+ pages. His conclusion: "The problem with OpenAI is Sam himself."

> Sam reportedly told Mira his allies were "going all out" and "finding bad things" to damage her reputation after the firing. Thrive put its planned $86B investment on hold and implied it would only close if Sam returned, giving employees financial incentive to back him.

> Sam texted Satya Nadella directly to propose the new board composition: "bret, larry summers, adam as the board and me as ceo and then bret handles the investigation." The two new members selected to oversee an independent inquiry into Sam were chosen after close conversations with Sam himself.

> Before OpenAI, senior employees at Loopt asked the board to fire Sam as CEO on two separate occasions over concerns about leadership and transparency. At Y Combinator, partners complained to Paul Graham about Sam's behavior, and Graham privately told colleagues "Sam had been lying to us all the time."

> OpenAI's superalignment team was promised 20% of the company's compute. Four people who worked on or with the team said actual resources were 1-2%, mostly on the oldest cluster with the worst chips. The team was dissolved without completing its mission.

> Sam told the board that safety features in GPT-4 had been approved by a safety panel. Helen Toner requested documentation and found the most controversial features had not been approved. Sam also never mentioned to the board that Microsoft released an early ChatGPT version in India without completing a required safety review.

> Sam made a secret pact with Greg and Ilya where he agreed to resign if they both deemed it necessary, essentially appointing his own shadow board. The actual board was alarmed when they learned about it.

> Sam struck a deal with Greg to become CEO while simultaneously telling researchers that Greg's authority would be diminished, and telling Greg something different.

> A board member described Sam as having "two traits almost never seen in the same person: a strong desire to please people in any given interaction, and almost a sociopathic lack of concern for the consequences of deceiving someone." Multiple sources independently used the word "sociopathic."

> OpenAI is reportedly preparing for an IPO at a potential $1 trillion valuation while securing government contracts spanning immigration enforcement, domestic surveillance, and autonomous weaponry in war zones.
Ryan tweet media
arpy
arpy@arpyd·
Bigger models alone can't deliver better outcomes. LLMs fail by design without a knowledge graph or orchestrated context.
Robert Youssef@rryssf

Your AI has been quietly forgetting everything you told it. Not randomly. Not loudly. Systematically. Starting with the decisions that matter most.

> The constraint you set three months ago, "never use Redis, the client vetoed it after a production incident." Gone. The GDPR deployment region restriction. Gone. The retry limit you tested empirically after the cascade failure. Gone.

> The model never told you. It just started using defaults.

> This is called context rot. And Cambridge and independent researchers just quantified exactly how bad it is.

> Every production AI system that runs long enough will eventually compress its context to make room for new information. That compression is catastrophically lossy. They tested it directly: 2,000 facts compressed at 36.7× left 60% of the knowledge base permanently irrecoverable. Not hallucinated. Not wrong. Just gone. The model honestly reported it didn't have the information anymore.

> Then they tested something worse. They embedded 20 real project constraints into an 88-turn conversation, the kind of constraints that emerge naturally in any long-running project, then applied cascading compression exactly like production systems do. After one round: 91% preserved. After two rounds: 62%. After three rounds: 46%.

> The model kept working with full confidence the entire time. Generating outputs that violated the forgotten constraints. No error signal. No warning. Just silent reversion to reasonable defaults that happened to be wrong for your specific situation.

> They tested this across four frontier models: Claude Sonnet 4.5, Claude Sonnet 4.6, Opus, GPT-5.4. Every single one collapsed under compression. This isn't a model problem. It's architectural.

→ 60% of facts permanently lost after a single compression pass
→ 54% of project constraints gone after three rounds of cascading compression
→ GPT-5.4 dropped to 0% accuracy at just 2× compression
→ Even Opus retained only 5% of facts at 20× compression
→ In-context memory costs $14,201/year at 7,000 facts vs $56/year for the alternative

The AI labs know this. Their solution is bigger context windows. A 10M-token window is a larger bucket. It's still a bucket. Compaction is inevitable for any long-running system. The window size only determines when the forgetting starts, not whether it happens.
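The cascading-compression effect described above can be sketched as a toy simulation. This is purely illustrative and not the cited study's methodology; the per-round survival probability of 0.8 is an assumption chosen to echo the shape of the reported retention curve:

```python
import random

def compress(facts, keep_rate, rng):
    """One lossy compaction pass: each fact independently survives
    with probability keep_rate (a stand-in for summarization loss)."""
    return [f for f in facts if rng.random() < keep_rate]

rng = random.Random(42)
facts = [f"constraint-{i}" for i in range(1_000)]

surviving = facts
for round_no in (1, 2, 3):
    surviving = compress(surviving, keep_rate=0.8, rng=rng)
    print(f"after round {round_no}: {len(surviving) / len(facts):.0%} preserved")
```

Retention decays multiplicatively (roughly 0.8^n after n rounds), which is why the reported numbers fall off a cliff by round three even though each individual pass looks tolerable.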

arpy
arpy@arpyd·
@globeandmail @grok tabulate Canada's ranking across six major global happiness and quality of life rankings. Then find a comparable country that has the same trajectory and analyze why it is happening
arpy
arpy@arpyd·
youtu.be/UabBYexBD4k?si… Here's the explanation of what has changed in how LLMs can handle more of your context files directly. It's a technical explanation but an important lesson in what shifted to make Claude Skills, Claude Code, and many other valuable new AI platforms possible. TL;DR: LLMs have much bigger 'short-term memory' (context windows) now. You can upload entire books or documents directly for the AI to analyze, instead of searching.
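A minimal sketch of the shift the video describes, from retrieval to direct context loading. The 4-characters-per-token heuristic and the 200k-token window are illustrative assumptions, not any specific model's limits:

```python
def estimate_tokens(text: str) -> int:
    """Crude heuristic: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_in_context(text: str, window_tokens: int = 200_000) -> bool:
    """With large context windows, the first question is no longer
    'which snippets do I retrieve?' but 'does the whole document fit?'."""
    return estimate_tokens(text) <= window_tokens

book = "lorem ipsum " * 50_000  # ~600k characters, book-length
print(fits_in_context(book))    # → True
```

When the answer is True, the whole document can be passed straight to the model; the search-and-retrieve pipeline only becomes necessary for corpora that overflow the window.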
arpy
arpy@arpyd·
Some frightening data
arpy tweet media
arpy
arpy@arpyd·
but all failing haha
arpy
arpy@arpyd·
building 8 Lovable projects at once cause yeah
arpy
arpy@arpyd·
How is it that Microsoft's and OpenAI's CEOs are telling us to panic because white-collar jobs are going to be replaced by AI, when there's endless evidence of the opposite? Most companies that implement AI see little gain, with execs from over 80% of companies reporting no productivity gains at all.

In this episode of the Product Impact Podcast we tackle Why Your AI Metrics Are Lying to You. We'll provide you with a framework for improving AI product performance. In this episode you'll learn:

- Agents hide friction from view, creating dangerous impact blindness
- Balance power, speed, impact & joy to win in the AI era, like F1 cars
- Success doesn't equal satisfaction; you must measure both outcomes
- Measure outcomes and feelings, not just activity logs and checkmarks

Thank you for listening to the Product Impact Podcast (formerly Design of AI). Prove impact. Improve impact. Scale impact. Learn frameworks and strategies to ensure your product is delivering impact to users, teams, businesses, and communities. We investigate enterprise adoption and highlight builders/startups disrupting value creation.

Subscribe to productimpactpod.substack.com for AI Strategy resources. Brought to you by PH1 (ph1.ca), an AI strategy consultancy specialized in improving the success of your AI product.
arpy reposted
Rihard Jarc
Rihard Jarc@RihardJarc·
A very insightful interview with an $MSFT employee who works on Copilot, on the SaaS disruption debate:

1. According to him, AI doesn't eliminate software value; it redistributes it. He thinks the recent declines in software share prices due to AI risk are partially justified, as AI can compress margins, lower switching costs, and shift value from the application layer to the platform or maybe even the model layer. He thinks companies with strong proprietary data, deep workflow integration, and AI execution capability are more likely to expand value rather than lose it.

2. He gives a good example of value for a SaaS provider: the advantage isn't "we host your data," it's "we see patterns no single customer can see," explaining the value of pattern intelligence across millions of records. He also thinks that in an AI world, owning where revenue decisions happen may be more valuable than owning where attention happens.

3. If SaaS gross margins move from 85% down to 50-60%, the math forces a redesign of the SaaS model. He thinks that if 20-30% of gross margin disappears, the most logical offset is sales and marketing efficiency. The industry will no longer look like the classic SaaS industry, but will more closely resemble an infrastructure economy. Not all players can sustain that 30% operating margin.

4. According to him, $MSFT's Satya always says we're going to have 1.3 billion agents by 2028. He thinks we are rapidly moving from AI that answers to a more agentic mode where AI does. The next 6-12 months are all about orchestration and enterprise control. He thinks the year won't be about AI getting smarter, but more about AI becoming more reliable and integrated into real business processes.

5. He thinks OpenAI and Anthropic realized they can't win enterprise adoption with a raw model alone. They're building distribution, partnerships, and verticalization layers to solve real workflows, not just offer a smart chatbot. The battle is shifting from model intelligence to deployment simplicity and ROI clarity. The winning formula is reducing friction between capabilities and a business outcome. Enterprise adoption accelerates when AI feels like a feature of the existing system and not a science experiment that's just bolted on the side.

found on @AlphaSenseInc
Rihard Jarc tweet media (4 images)
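The margin arithmetic in point 3 can be made concrete. A hedged sketch using the interview's rough figures; the 55% value is my assumed midpoint of the cited 50-60% range:

```python
classic_gross_margin = 0.85
ai_era_gross_margin = 0.55   # assumed midpoint of the cited 50-60% range

# Gross margin points lost to AI-driven cost structure.
lost_points = classic_gross_margin - ai_era_gross_margin
print(f"gross margin lost: {lost_points:.0%}")

# Opex the business can still afford while holding the classic ~30%
# SaaS operating margin, which is why sales & marketing efficiency
# is the logical place to find the offset.
target_operating_margin = 0.30
affordable_opex = ai_era_gross_margin - target_operating_margin
print(f"affordable opex as share of revenue: {affordable_opex:.0%}")
```

At an 85% gross margin a SaaS business could spend over half of revenue on opex and still clear 30% operating margin; at 55% the opex budget shrinks to roughly a quarter of revenue, which is the redesign the interviewee is pointing at.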
arpy
arpy@arpyd·
All of us on the frontlines have spent years debating whether AI is a force for good. We've debated its capacity to reason and make decisions. We're now at the point of no return when it comes to the inevitability of its impact. We now have to decide whether we want the next 5 years to be a disaster movie, where we watch a catastrophic tsunami smash our realities apart, or whether we use the kinetic energy of that wave to rebuild the foundations of our communities and industries.

6 years ago we squandered another opportunity like this when the world was shut down for COVID. Rather than using that time to rebuild community structures and transform healthcare delivery, nations focused on propping up monopolies and padding polls.

If we treat this as a disaster movie we will see many roles and industries decimated and replaced by offshore products and services, further draining the equity from our economies. We've already watched as main streets everywhere have been wrecked by successive waves of platforms consolidating power over consumers. This is going to happen now to B2B. What do communities and inequity look like when, a decade from now, we see that half of all lawyers, architects, journalists, bookkeepers, payroll officers, etc. are slashed?

Early adopters working in tech are seeing the value of their expertise and capabilities commoditized and turned into skill-file mulch by LLMs. Even the most technical, like Matt Shumer, are seeing quarterly leaps and bounds that defy logic. His essay captures his thoughts and his roadmap.

Anyone who listens to the Design of AI podcast knows that I am not an AI hype man. I believe in the cautious and effective use of the technology in ways that deliver positive impact to businesses and communities. This isn't an alarm to say "everything you've read about AI is true and all the doubters were wrong." It is an acknowledgement that we're past the point of debating whether Oppenheimer should build the nuclear weapon; we're at the point where we need to build defensive strategies to protect economies and the livelihood of citizens.

My heritage is Canadian-Hungarian-Colombian: three countries that are middling middle-powers in the world's grand decision-making. They're countries that fell out of the grand world order through various self-inflicted and geopolitical struggles. Can you imagine that Hungary was one of the most powerful kingdoms 600 years ago? Now its greatest superpower is being a spoiler or a tie-breaking vote. That's what will happen to the businesses and workers who fail to turn AI from a weakness into an advantage.

Everyone will gain from developing AI strategies to guide the responsible and targeted use of AI to deliver positive impact. We're entering an era where the unscrupulous use of AI will be celebrated as case studies of capitalism. The collective "we" can't let that happen.
Matt Shumer@mattshumer_

x.com/i/article/2021…
