

Michael Heraghty

@UserJourneys
🤓 UXer | ✍️ Lazy blogger @ https://t.co/7Qdg8jAHO4 | 🐒 Status-seeking monkey | 🌟 Thoughts on UX, AI, and random



Voice notes are massive in some countries but not the UK. This is why. bbc.in/4t0zffb


🚨 OpenAI is reportedly building a phone designed to replace the iPhone. And it's further along than anyone realized. Analyst Ming-Chi Kuo, the same man who predicted every major Apple product cycle for 20 years, just dropped this.

Important details:

1. OpenAI is partnering with Qualcomm AND MediaTek to develop custom smartphone processors: not one chip partner, but two competing giants simultaneously.
2. Luxshare, the same company that assembles Apple products, has been named the exclusive system co-design and manufacturing partner.
3. Mass production is targeted for 2028; the hardware roadmap is already in motion.
4. The phone will run OpenAI's own OS, replacing traditional apps entirely with AI agents that complete tasks autonomously, without you ever opening a single app.
5. The processor is being designed around on-device AI performance, with complex tasks offloaded to OpenAI's cloud infrastructure for seamless integration.
6. OpenAI's core thesis: users don't want apps, they want results. The phone will continuously understand context, habits, and preferences in real time.

This isn't a gadget. It's a direct attempt to replace the operating-system layer that Apple and Google have owned for 20 years.

I'm doing more research, and what I'm about to post will blow your mind. You'll wish you followed me sooner, trust me.

Can water intake prevent Alzheimer's disease? No. This figure is fully AI-generated, but the data below could easily pass as real.

The new ChatGPT image model is truly impressive, but I think it poses a real risk to scientific integrity in future. For example, I could generate, with a single prompt, a dataset that appears to show something like water preventing Alzheimer's disease. Ironically, we used to laugh at obvious "AI slop" (like those weird generated mice), but that's changing pretty fast. If I were reviewing this fake figure today, I'm not sure I could reliably tell whether it is real or AI-generated.

The bigger issue is that the usual signals we rely on (e.g., how realistic or plausible something looks) are no longer enough. I think we really need more comprehensive AI detection and, more importantly, stronger verification standards for scientific submissions going forward. We'll probably also need better ways to digitize lab notebooks and ensure access to raw data, something closer to how code and version history are tracked...
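The last point, tracking raw data the way code and version history are tracked, could be as simple as recording content hashes of data files in a manifest at collection time, so any later edit or substitution is detectable. A minimal sketch (the file names and manifest format here are hypothetical illustrations, not an existing standard):

```python
# Sketch: record a content hash for each raw data file in a manifest,
# so a reviewer can later verify the submitted data matches what was
# registered. File names and layout below are illustrative only.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(data_dir: Path) -> dict:
    """Map each data file to its content hash; any edit changes the hash."""
    return {p.name: sha256_of(p) for p in sorted(data_dir.glob("*.csv"))}

if __name__ == "__main__":
    data_dir = Path("raw_data")
    data_dir.mkdir(exist_ok=True)
    # Hypothetical raw data file, stood in for a real lab export.
    (data_dir / "measurements.csv").write_text("subject,intake_ml\n1,1500\n")
    print(json.dumps(build_manifest(data_dir), indent=2))
```

Committing such a manifest alongside a submission gives the same tamper-evidence that version control gives code: the data can still be wrong, but it can't be silently swapped after the fact.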







