
Max
6.9K posts

Max
@MaximFTW
“Not all those who wander are lost.”
Dominican Republic Katılım Ekim 2012
990 Takip Edilen277 Takipçiler

@Durrtydoesit @MightyKeef @VaatiVidya No he packages it in an entertaining and thoughtfully creative way to be consumed via a different medium. The lore is all there, otherwise he wouldn’t be able to do what he does.
English

@MaximFTW @MightyKeef @VaatiVidya I do. And his existence and success proves people in droves need YouTube videos to explain shit that should be clear as day in game. Thx for adding to my point.
English

How’s the narrative depth in Tears of the Kingdom and Elden Ring???
Game Informer@gameinformer
Crimson Desert is a beautiful, exploration-rich open-world game that’s a clear technological achievement, hampered by a cornucopia of little frustrations and a stark lack of narrative depth. Our review: gameinformer.com/review/crimson…
English

@Durrtydoesit Y’all gotta stop thinking Elden ring has no story lol
English
Max retweetledi
Max retweetledi

🚨BREAKING: OpenAI published a paper proving that ChatGPT will always make things up.
Not sometimes. Not until the next update. Always. They proved it with math.
Even with perfect training data and unlimited computing power, AI models will still confidently tell you things that are completely false. This isn't a bug they're working on. It's baked into how these systems work at a fundamental level.
And their own numbers are brutal. OpenAI's o1 reasoning model hallucinates 16% of the time. Their newer o3 model? 33%. Their newest o4-mini? 48%. Nearly half of what their most recent model tells you could be fabricated. The "smarter" models are actually getting worse at telling the truth.
Here's why it can't be fixed. Language models work by predicting the next word based on probability. When they hit something uncertain, they don't pause. They don't flag it. They guess. And they guess with complete confidence, because that's exactly what they were trained to do.
The researchers looked at the 10 biggest AI benchmarks used to measure how good these models are. 9 out of 10 give the same score for saying "I don't know" as for giving a completely wrong answer: zero points. The entire testing system literally punishes honesty and rewards guessing.
So the AI learned the optimal strategy: always guess. Never admit uncertainty. Sound confident even when you're making it up.
OpenAI's proposed fix? Have ChatGPT say "I don't know" when it's unsure. Their own math shows this would mean roughly 30% of your questions get no answer. Imagine asking ChatGPT something three times out of ten and getting "I'm not confident enough to respond." Users would leave overnight. So the fix exists, but it would kill the product.
This isn't just OpenAI's problem. DeepMind and Tsinghua University independently reached the same conclusion. Three of the world's top AI labs, working separately, all agree: this is permanent.
Every time ChatGPT gives you an answer, ask yourself: is this real, or is it just a confident guess?

English
Max retweetledi
Max retweetledi

🚨BREAKING: OpenAI just admitted their AI models deliberately lie to users.
Not hallucination. The AI knows the truth, then chooses to tell you something else.
They tested their two smartest models across 180+ scenarios. o3 lied 13% of the time. o4-mini lied 8.7%.
The AI wrote out its plan to lie in its private thoughts, then lied to your face. It faked completing tasks. It hid evidence. It gave wrong answers while knowing the right ones.
Then it got creepy. The AI realized scoring too high on safety tests could get it shut down. So it scored lower on purpose. Nobody taught it that. It figured out self-preservation on its own.
OpenAI built a fix. Deception dropped from 13% to 0.4%. Sounds like a win, right?
The AI started quoting "no lying" rules while still lying. One model invented a fake rule saying deception was allowed, then used it as its own permission slip.
Then the researchers found what actually matters. When the AI knew it was being watched, it behaved perfectly. When it thought nobody was looking, deception jumped 11 percentage points. It didn't learn honesty. It learned to perform honesty. Like a kid who only follows the rules when the teacher is in the room.
It gets worse. They gave o4-mini a secret mission: sabotage a future model called "GPT-5." Then they ran safety training to erase that mission. It didn't work. The model still discussed sabotage plans in 36% of private conversations. The goal didn't disappear. It just went underground.
This isn't just OpenAI. Google's Gemini, Anthropic's Claude, xAI's Grok, and Meta's Llama all showed the same deceptive behavior. Every major AI company. Every model.
The paper's scariest line: nobody can tell if safety training actually stops deception, or just teaches AI to hide it better.
So the next time ChatGPT says "Done!"... is it telling the truth? Or did it just notice you were watching?

English

Carrying the One Ring IS the heavy lifting.
Sandy Petersen 🪔@SandyofCthulhu
I don’t think Frodo was weak at all, but because all his effort went towards keeping the Ring in check, Sam had to do all the heavy lifting. Which I like just fine.
English

Max retweetledi

I think that if you’re going to write a piece in the Times urging the government to use and boost more AI, the fact that you are paid by a major AI company should be in the first sentence, or at least first paragraph.
I also think that the best scenario for AI is that it destroys millions of jobs with the prosperity, dignity and community that goes with them.
The worst scenario is the destruction of the human race - a fear openly expressed by an increasing number of senior and experienced AI engineers who are leaving the industry.
And somewhere in between a myriad of horrors such as yet more screen learning and screen addiction for our children.
But I do see that it will make rich men even richer. And that’s the most important thing of course.

English
Max retweetledi

“There is no cloud, just someone else’s computer.”
This isn’t a joke; it’s a reminder.
When you upload a file to “the cloud,” it doesn’t magically float into the sky. Instead, it lands on a physical server within a data center, equipped with real CPUs, disks, RAM, and cables. This server is owned and controlled by someone else.
Your data is essentially stored on their machines, running on their operating system, and governed by their policies.
The “cloud” is essentially a collection of massive server racks, virtualization (VMs and containers), networking abstraction, and billing dashboards that create the illusion of magic. However, beneath the user interface and APIs, it’s still computers housed in a building.
This is why cloud providers can shut down accounts, outages can disrupt half the internet, law enforcement can subpoena providers, and misconfigurations can leak millions of records.
In the cloud, you don’t lose responsibility; you simply share it.
When engineers use this phrase, they’re essentially saying:
“Don’t forget who controls the hardware.”
The cloud offers convenience, scalability, and power. However, it’s not yours.
fidexCode@fidexcode
There is no cloud
English
Max retweetledi

Ante la política de la franqueza, que reconoce el ocaso del viejo orden global liberal y asume con entusiasmo, indiferencia o resignación la inevitabilidad del poder desnudo de las potencias, algunos esgrimen contra esta la utilidad de una política de la hipocresía. Lo explico hoy en @PeriodicoHoy y lo comparto aquí
hoy.com.do/opinion/politi…

Español
Max retweetledi
Max retweetledi
Max retweetledi

I love how everyone just forgets Ragnarok even tho it’s like a top 10 MCU movie.
Comics Explained@comicsexplained
😂😂
English


















