Jim Lanzone

4.2K posts

Jim Lanzone

@jlanzone

CEO @Yahoo!

Joined February 2009
747 Following · 14K Followers
Pinned Tweet
Jim Lanzone@jlanzone·
Appreciate the deep dive, and as @rustybrick, @mgsiegler, and others have noted, these were all core values of the product from the start. Also important to note that the quality (and distinctiveness) of our answers stems in large part from the unique, structured Yahoo data that we package up and send to the LLM. This will amp up even more as we add personalization for our logged-in users.
Aleyda Solis 🕊️@aleyda

🤖 Yahoo! released Scout, its new AI search engine, a few days ago, and it has, hands down, the most “open web”-friendly interface compared to ChatGPT or AI Mode… and honestly, the best interface overall 👇

I’ve been comparing Yahoo! Scout vs. ChatGPT & AI Mode across answer quality, formats, and link inclusion, testing different search intents and verticals, and I couldn’t believe how consistently Yahoo! Scout delivers a more user- and open-web-friendly answer experience:

* Across most answers, it features a prominent, highlighted “Read More” button linking directly to the most relevant source behind the answer.
* It includes links to external sites directly within the answers, starting from the very first paragraph, often showing dozens of links to original sources (in addition to linking to all sources together at the end in a sidebar). In contrast, other AI platforms include far fewer links in the answer itself and mostly push users to a sidebar to access external sources.
* It’s honestly crazy that, after so many complaints from website owners, publishers, and the SEO community, Yahoo! ends up releasing a far more “open web”-friendly interface in its AI platform than… Google, which has supposedly been committed to improving theirs.

Although I was skeptical when I first saw the announcement, I’m more than happy to eat my words, and I want to congratulate Yahoo! for designing not only an easy-to-use interface, but one that genuinely incentivizes users to keep browsing the sites where the answers come from 👏 I really hope product managers and designers at @Google and @OpenAI take note.

7 replies · 2 reposts · 14 likes · 2.6K views
Jim Lanzone retweeted
Chris Long@chris_nectiv·
Interesting: This SEO study found that 85% of the sources that ChatGPT retrieves NEVER get cited.
4 replies · 5 reposts · 26 likes · 2.2K views
Jim Lanzone retweeted
Brian Sozzi@BrianSozzi·
👋 Friends. Try some of the hot new features on @Yahoo Scout. It's so crazy awesome. Check out the new 'add a tile' feature. Been using the platform hardcore this week for insights on oil prices, Operation Epic Fury, the Strait of Hormuz, and the stock market. So impactful to your daily life and work life: scout.yahoo.com I know, surprise surprise, @YahooFinance is my biggest tile on here.
1 reply · 5 reposts · 10 likes · 1.3K views
Jim Lanzone@jlanzone·
One of the most frequent product requests I get is from people who miss MyYahoo. Today we did them one better: we launched the AI version, MyScout. It's pretty slick. Add tiles on any topic by prompting Yahoo Scout, and they will automatically update going forward. Just the start of the personalization roadmap for Scout.
Axios@axios

Yahoo debuts personalized AI homepage MyScout trib.al/mcM5Hmc

4 replies · 3 reposts · 20 likes · 6K views
Jim Lanzone retweeted
Alex Prompter@alex_prompter·
🚨 BREAKING: Researchers at UW Allen School and Stanford just ran the largest study ever on AI creative diversity. 70+ AI models were given the same open-ended questions, and they all gave the same answers.

They asked over 70 different LLMs the exact same open-ended questions: "Write a poem about time." "Suggest startup ideas." "Give me life advice." Questions where there is no single right answer. Questions where 10 different humans would give you 10 completely different responses. Instead, 70+ models from every major AI company converged on almost identical outputs. Different architectures. Different training data. Different companies. Same ideas. Same structures. Same metaphors.

They named this phenomenon the "Artificial Hivemind." And the paper won the NeurIPS 2025 Best Paper Award, which is the highest recognition in AI research, handed to a small number of papers out of thousands of submissions. This is not a blog post or a hot take. This is award-winning, peer-reviewed science confirming something massive is broken.

The team built a dataset called Infinity-Chat with 26,000 real-world, open-ended queries and over 31,000 human preference annotations. Not toy benchmarks. Not math problems. Real questions people actually ask chatbots every single day, organized into 6 categories and 17 subcategories covering creative writing, brainstorming, speculative scenarios, and more. They ran all of these across 70+ open and closed-source models and measured the diversity of what came back.

Two findings hit hard.

First, intra-model repetition. Ask the same model the same open-ended question five times and you get almost the same answer five times. The "creativity" you think you're getting is the same output wearing a slightly different outfit. You ask ChatGPT, Claude, or Gemini to write you a poem about time and you keep getting the same river metaphor, the same hourglass imagery, the same reflection on mortality. Over and over. The model isn't thinking. It's defaulting to whatever scored highest during alignment training.

Second, and this is the one that should really alarm you, inter-model homogeneity. Ask GPT, Claude, Gemini, DeepSeek, Qwen, Llama, and dozens of other models the same creative question, and they all converge on strikingly similar responses. These are models built by completely different companies with different architectures and different training pipelines. They should be producing wildly different outputs. They're not. 70+ models all thinking inside the same invisible box, producing the same safe, consensus-approved content that blends together into one indistinguishable voice.

So why is this happening? The researchers point directly at RLHF and current alignment techniques. The process we use to make AI "helpful and harmless" is also making it generic and boring. When every model gets trained to optimize for human preference scores, and those preference datasets converge on a narrow definition of what "good" looks like, every model learns to produce the same safe, agreeable output. The weird answers get penalized. The original takes get shaved off. The genuinely creative responses get killed during training because they didn't match what the average annotator rated highly.

And it gets even worse. The study found that reward models and LLM-as-judge systems are actively miscalibrated when evaluating diverse outputs. When a response is genuinely different from the mainstream but still high quality, these automated systems rate it LOWER. The very tools we built to evaluate AI quality are punishing originality and rewarding sameness.

Think about what this means if you use AI for brainstorming, content creation, business strategy, or literally any task where you need multiple perspectives. You're getting the illusion of diversity, not the real thing. You ask for 10 startup ideas and you get 10 variations of the same 3 ideas the model learned were "safe" during training. You ask for creative writing and you get the same therapeutic, perfectly balanced, utterly forgettable tone that every other model gives.

The researchers flagged direct implications for AI in science, medicine, education, and decision support, all domains where diverse reasoning is not a nice-to-have but a requirement. Correlated errors across models mean that if one AI gets something wrong, they might ALL get it wrong the same way. Shared blind spots at massive scale.

And the long-term risk is even scarier. If billions of people interact with AI systems that all think identically, and those interactions shape how people write, brainstorm, and make decisions every day, we risk a slow, invisible homogenization of human thought itself. Not because AI replaced creativity, but because it quietly narrowed what we were exposed to until we all started thinking the same way too.

Here's what you can actually do about it right now:
→ Stop accepting first-draft AI output as creative or diverse. If you need 10 ideas, generate 30 and throw away the obvious ones.
→ Use temperature and sampling parameters aggressively to push models out of their comfort zone.
→ Cross-reference multiple models AND multiple prompting strategies, because the same model with different prompts often beats different models with the same prompt.
→ Add constraints that force novelty, like "give me ideas that a traditional investor would hate" instead of "give me creative ideas."
→ Use structured prompting techniques like Verbalized Sampling to force the model to explore low-probability outputs instead of defaulting to consensus.
→ Layer your own taste and judgment on top of everything AI gives you. The model gets you raw material. Your weirdness and experience make it original.

This paper puts hard data behind something a lot of us have been feeling for a while. AI is getting more capable and more homogeneous at the same time. The models are smarter, but they're all smart in the exact same way.

The Artificial Hivemind is not a bug in one model. It's a systemic feature of how the entire industry builds, aligns, and evaluates language models right now. The fix requires rethinking alignment itself, moving toward what the researchers call "pluralistic alignment," where models get rewarded for producing diverse distributions of valid answers instead of collapsing to a single consensus mode. Until that happens, your best defense is awareness and better prompting.
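The "generate 30 and throw away the obvious ones" advice above can be sketched as a simple near-duplicate filter. This is a hypothetical illustration, not code from the thread or the paper: the `jaccard` and `distinct_ideas` helpers and the sample ideas are invented for the example, and word-set overlap is a crude stand-in for the semantic-similarity measures a real pipeline would use.

```python
def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two texts (0.0 = disjoint, 1.0 = identical)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 1.0
    return len(wa & wb) / len(wa | wb)

def distinct_ideas(candidates: list[str], max_sim: float = 0.5) -> list[str]:
    """Greedily keep only candidates that differ enough from everything
    already kept; near-duplicates of earlier candidates are dropped."""
    kept: list[str] = []
    for text in candidates:
        if all(jaccard(text, k) < max_sim for k in kept):
            kept.append(text)
    return kept

# Sample over-generated "startup ideas" (hypothetical model outputs).
ideas = [
    "an app that recommends books using AI",
    "an AI app that recommends books",        # near-duplicate of the first
    "a marketplace for renting farm equipment",
]
print(distinct_ideas(ideas))  # the near-duplicate is filtered out
```

Tightening `max_sim` discards more lookalikes; a real setup would sample the candidates at varied temperatures and from multiple models before filtering.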
333 replies · 905 reposts · 3K likes · 481.1K views
Jim Lanzone retweeted
Nick Meacham@SportsProNick·
🏎️ Apple are not going to repeat with Formula 1 the mistakes they made with Major League Soccer. And they’ve already made two major announcements that prove it.

Before their new deal with F1 in the US even begins, Apple has announced one deal with Netflix and another with Yahoo that will significantly enhance reach and visibility in their key media market. With Netflix, the Canadian GP will be available live in the US, and Apple TV will get access to Drive to Survive on their own platform. With Yahoo, they will live stream F1 practice and qualifying sessions, starting with the F1 Miami Grand Prix in May.

❓ Why are they doing it? With MLS, they tried to create an exclusive destination for the league behind a paywall on their platform. It didn’t work. ⚽ They quickly began sublicensing rights to other broadcasters, followed by recent announcements about moving MLS in front of the paywall and cancelling their agreement early. This approach clearly shows that Apple learned from their mistakes (yes, even Apple makes mistakes) and are now ramping up their #F1 exposure through key media partnerships from very early in the rights cycle.

💷 What looks smart to me is that these two deals don’t start until a few weeks into the season, allowing them to focus first on driving engagement and access on their own platform before widening the net for the North American races. I’m curious how far they go with this approach. I’d still expect them to talk to major outlets like ESPN or NBC Sports about sublicensing the live US races. The other races aren’t real needle-movers if you look at the ratings they generate, so I’d expect Apple to keep those on their own channels unless someone comes in with a big offer.

⏰ Summary: This is all about Apple getting more value from the rights, but it's also critical for F1 to ensure they gain wider visibility and regain audience growth and momentum; Apple's ratings will get nowhere close to what they pulled with ESPN for obvious reasons. If audience interest and ratings do dip under this new agreement, the next rights cycle could lead to a nasty shock to the system when offers start coming in below expectations. #streaming #sportsbiz
2 replies · 4 reposts · 25 likes · 4.2K views
Jim Lanzone retweeted
Kendall Baker@kendallbaker·
Nine years ago, I had an idea: What if I put SportsCenter in an email? That idea became Sports Internet (2017–18). Then Axios Sports (2019–23). Then Yahoo Sports AM. And along the way, it grew into something far bigger than I ever imagined. Now, it's time for the next chapter.
50 replies · 11 reposts · 392 likes · 31K views
Jim Lanzone@jlanzone·
Incredibly meaningful first book by my longtime friend and former investor @bgurley. Equally relevant whether you’re just starting out or find yourself at a life/career crossroads. The message is the same: don’t keep waiting for the right moment to pursue your true passion (even if your 10,000 hours were spent perfecting the art of the mixtape, like @ajsfour in chapter 7). Run, don’t walk, for this one.
Bill Gurley@bgurley

Today is launch day. 🚀🚀🚀 We find ourselves in a time with much fear/anxiety about careers. I hope this book can be a positive antidote. The permission, motivation, and methodology to do what you truly love. a.co/d/02q0aD2N

1 reply · 1 repost · 10 likes · 1.5K views
Jim Lanzone retweeted
Ryan Spoon@ryanspoon·
Very excited to announce a meaningful partnership and integration between @YahooFinance x @coinbase. Beginning today, Coinbase will be connected to thousands of crypto tickers and equities on Yahoo Finance - helping users move from research to action with just one click. And over time, Coinbase data will be integrated across Yahoo Finance to make a richer experience around crypto, data, and more.
18 replies · 18 reposts · 144 likes · 34.4K views
Josh Elman@joshelman·
So hyped for tomorrow Go Seahawks!!!
5 replies · 0 reposts · 28 likes · 2.2K views
Gianluca Fiorelli@gfiorelli1·
@aleyda @rustybrick @jlanzone Yes! Google could do it by mixing the best of Web Guide with the best of AI Mode. On the contrary, I don’t have that much hope for classic LLMs (e.g., ChatGPT), especially when they use only training data. Congrats Yahoo! You were the first search engine I used. I will use you again.
1 reply · 0 reposts · 2 likes · 105 views
Aleyda Solis 🕊️@aleyda·
🤖 Yahoo! released Scout, its new AI search engine, a few days ago, and it has, hands down, the most “open web”-friendly interface compared to ChatGPT or AI Mode… and honestly, the best interface overall 👇

I’ve been comparing Yahoo! Scout vs. ChatGPT & AI Mode across answer quality, formats, and link inclusion, testing different search intents and verticals, and I couldn’t believe how consistently Yahoo! Scout delivers a more user- and open-web-friendly answer experience:

* Across most answers, it features a prominent, highlighted “Read More” button linking directly to the most relevant source behind the answer.
* It includes links to external sites directly within the answers, starting from the very first paragraph, often showing dozens of links to original sources (in addition to linking to all sources together at the end in a sidebar). In contrast, other AI platforms include far fewer links in the answer itself and mostly push users to a sidebar to access external sources.
* It’s honestly crazy that, after so many complaints from website owners, publishers, and the SEO community, Yahoo! ends up releasing a far more “open web”-friendly interface in its AI platform than… Google, which has supposedly been committed to improving theirs.

Although I was skeptical when I first saw the announcement, I’m more than happy to eat my words, and I want to congratulate Yahoo! for designing not only an easy-to-use interface, but one that genuinely incentivizes users to keep browsing the sites where the answers come from 👏 I really hope product managers and designers at @Google and @OpenAI take note.
10 replies · 21 reposts · 77 likes · 7.8K views