
GIANCARLO STANTON DOES IT AGAIN FOUR STRAIGHT GAMES ARE YOU KIDDING?!
Shanthi

@ssc627
@MLB Research. @SyracuseU ‘20. Inquire within for baseball analogies. Inflammatory opinions my own. She/her


🚨BREAKING: OpenAI published a paper proving that ChatGPT will always make things up. Not sometimes. Not until the next update. Always. They proved it with math. Even with perfect training data and unlimited computing power, AI models will still confidently tell you things that are completely false. This isn't a bug they're working on. It's baked into how these systems work at a fundamental level.

And their own numbers are brutal. On OpenAI's own PersonQA benchmark, the o1 reasoning model hallucinates 16% of the time. The newer o3 model? 33%. The newest o4-mini? 48%. Nearly half of what their most recent model tells you could be fabricated. The "smarter" models are actually getting worse at telling the truth.

Here's why it can't be fixed. Language models work by predicting the next word based on probability. When they hit something uncertain, they don't pause. They don't flag it. They guess. And they guess with complete confidence, because that's exactly what they were trained to do.

The researchers looked at the 10 biggest AI benchmarks used to measure how good these models are. 9 out of 10 give the same score for saying "I don't know" as for giving a completely wrong answer: zero points. The entire testing system literally punishes honesty and rewards guessing. So the AI learned the optimal strategy: always guess. Never admit uncertainty. Sound confident even when you're making it up.

OpenAI's proposed fix? Have ChatGPT say "I don't know" when it's unsure. Their own math shows this would mean roughly 30% of your questions get no answer. Imagine asking ChatGPT something and getting "I'm not confident enough to respond" three times out of ten. Users would leave overnight. So the fix exists, but it would kill the product.

This isn't just OpenAI's problem. Researchers at DeepMind and Tsinghua University independently reached the same conclusion. Three of the world's top AI research groups, working separately, all agree: this is permanent.
Every time ChatGPT gives you an answer, ask yourself: is this real, or is it just a confident guess?
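The scoring incentive the post describes is easy to check yourself. Here's a minimal Python sketch (the confidence numbers are hypothetical, not from the paper) showing why binary 0/1 grading makes guessing the dominant strategy, and how penalizing wrong answers flips that incentive:

```python
# Expected score under binary grading: 1 point for a correct answer,
# 0 for a wrong answer, and 0 for "I don't know".
def expected_score_guess(confidence):
    # Guessing earns 1 with probability `confidence`, else 0.
    return confidence * 1 + (1 - confidence) * 0

def expected_score_abstain():
    # "I don't know" scores 0 regardless of how honest it is.
    return 0.0

# Even a wild 10%-confidence guess beats admitting uncertainty:
for p in (0.9, 0.5, 0.1):
    assert expected_score_guess(p) > expected_score_abstain()

# A grader that penalizes wrong answers changes the optimal strategy.
# With -1 for a wrong answer, guessing only pays above 50% confidence:
def penalized_score_guess(confidence, wrong_penalty=-1.0):
    return confidence * 1 + (1 - confidence) * wrong_penalty

assert penalized_score_guess(0.1) < expected_score_abstain()  # low-confidence guess now loses
assert penalized_score_guess(0.9) > expected_score_abstain()  # confident answers still win
```

Under the binary scheme, "always guess" maximizes expected score no matter how unsure the model is, which is exactly the behavior the benchmarks end up training for.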

The 17-year-old Joseph Contreras gets the reigning MVP Aaron Judge to roll into a double play to end the inning! 😳 What a moment.

Most WBC home runs before turning 24:
Harry Ford: 3
Carlos Correa: 3

"Three-batter minimum, get rid of it. Bigger bases, limiting throwovers, get rid of it. Runner on second base, get rid of it." Joe Maddon wants to "get the real game back" but does believe the pitch clock and PitchCom were necessary changes.

I’ll never forget when I was on the Hill in 2019 when the Notre Dame fire happened. Every member of Congress was talking about it and posting on their socials. Now UNESCO sites in Iran were BOMBED but because they’re not in the west, it’s not viewed as important.

Stephen Miller: "What you're seeing right now is a military under President Trump's leadership that's not fighting politically correct"

No it’s not. This is what’s wrong with media. It’s not just a picture of a woman. It’s a picture of a public official when she made a fashion choice that she is no longer choosing. That look is common in narcissistic women in government bureaucracy. The original post was being flippant, and doesn’t want to explain exactly what the problem is because they have never tried to explain it. When a female government employee has short hair, it’s because she “doesn’t have time” for it. It’s not a good or a bad thing, but it’s the reason for the choice. If they are a selfish leader, they’re a nightmare. If they are a giving person, they are the best people you have. Our governor is selfish, and with that haircut, you know she is no fun behind closed doors. It’s basic pattern recognition. The look piques your attention. The behavior then drives how you feel about the individual. To simply say it’s a picture of a woman is virtue signaling, which is grey goo journalism. Tell the real story. Why is @katiehobbs disliked? She’s selfish, and men are boorish on the internet for attention. Let’s try and be adults please.

🚨 SAM ALTMAN: “People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.”

Welcome to baseball season, Aaron Judge homered AGAIN

>Sell morons on taxing the billionaires
>Immediately raise property taxes on every middle-class family in New York

want to think about so far in the future, because you're trying to get through the day. So I haven't let myself get there."

“AI wiLL RePlAcE eVeRy WhItE cOllAr JoB”
