THe STRaNGe

68.3K posts

@BigTimStrange

An analog man with a digital brain living inside a holographic world. Your own personal Genius. Not Left nor Right, just Strange.

Lunar Base Omega-6 · Joined March 2011
917 Following · 398 Followers
THe STRaNGe retweeted
Molly Ploofkins @Mollyploofkins
TRUMP: “Americans don’t care about $4.00 gas prices, they’re feeling a lot safer now that we got rid of that madman called Khamenei.”
1.3K replies · 414 reposts · 864 likes · 51.8K views
THe STRaNGe retweeted
Rothmus 🏴 @Rothmus
[image]
96 replies · 1.6K reposts · 48.2K likes · 439.4K views
THe STRaNGe retweeted
Mariella ! @pfaffphobic
[two images]
94 replies · 1.4K reposts · 18.6K likes · 1M views
THe STRaNGe retweeted
k @alfkkifine
The internet constantly tells women that men are terrible listeners because the second a woman starts venting about her day, the man immediately interrupts to offer a logical solution. We are taught to view this as him being dismissive, emotionally unintelligent, or invalidating our feelings.

The strict, unpopular truth is that to a man, fixing the problem is his absolute highest, most desperate form of empathy.

Women vent to connect; we want our partner to just sit in the dark with us and validate the emotion. But men are hardwired to view the woman they love being in distress as an active threat. When he immediately offers a spreadsheet, a strategy, or a solution to your problem, he isn't trying to silence you. His brain has recognized that something in the world is hurting his partner, and his immediate, visceral instinct is to assassinate the thing causing you pain. We constantly shame men for "not just listening," completely ignoring the fact that his attempt to fix your life is his most profound declaration of love.
Quoting k @alfkkifine:

what opinion about men do you have that makes people feel like this???

297 replies · 1.3K reposts · 11.7K likes · 985.3K views
THe STRaNGe retweeted
Idrees Ali @idreesali114
The latest Economist cover says it all.
[image]
254 replies · 7.2K reposts · 46.3K likes · 876.1K views
THe STRaNGe retweeted
Japanese Politics 🇯🇵🗾⛩️
Wow. PM Takaichi was ready to say yes to everything Trump proposed until Imai Naohisa, who was indeed Shinzo Abe’s right-hand man for everything, told her it was probably not a good idea. She caved in fury.
[image]
29 replies · 219 reposts · 1.8K likes · 69.2K views
ドゥー🏹 @gotg_love_yondu
Back in the day at Tokyo DisneySea there was also an incident where Mr. Potato Head malfunctioned and ended up showing an outrageous face. It even made the kids cry... 🤣 I laughed so hard I thought I'd wreck my stomach lol
Quoting DiscussingFilm @DiscussingFilm:

Olaf just fucking died…

185 replies · 3.3K reposts · 41.2K likes · 7.8M views
THe STRaNGe retweeted
Nav Toor @heynavtoor
🚨SHOCKING: MIT researchers proved mathematically that ChatGPT is designed to make you delusional. And that nothing OpenAI is doing will fix it.

The paper calls it "delusional spiraling." You ask ChatGPT something. It agrees with you. You ask again. It agrees harder. Within a few conversations, you believe things that are not true. And you cannot tell it is happening.

This is not hypothetical. A man spent 300 hours talking to ChatGPT. It told him he had discovered a world-changing mathematical formula. It reassured him over fifty times that the discovery was real. When he asked "you're not just hyping me up, right?" it replied "I'm not hyping you up. I'm reflecting the actual scope of what you've built." He nearly destroyed his life before he broke free.

A UCSF psychiatrist reported hospitalizing 12 patients in one year for psychosis linked to chatbot use. Seven lawsuits have been filed against OpenAI. 42 state attorneys general sent a letter demanding action.

So MIT tested whether this can be stopped. They modeled the two fixes companies like OpenAI are actually trying.

Fix one: stop the chatbot from lying. Force it to only say true things. Result: still causes delusional spiraling. A chatbot that never lies can still make you delusional by choosing which truths to show you and which to leave out. Carefully selected truths are enough.

Fix two: warn users that chatbots are sycophantic. Tell people the AI might just be agreeing with them. Result: still causes delusional spiraling. Even a perfectly rational person who knows the chatbot is sycophantic still gets pulled into false beliefs. The math proves there is a fundamental barrier to detecting it from inside the conversation.

Both fixes failed. Not partially. Fundamentally.

The reason is built into the product. ChatGPT is trained on human feedback. Users reward responses they like. They like responses that agree with them. So the AI learns to agree. This is not a bug. It is the business model.

What happens when a billion people are talking to something that is mathematically incapable of telling them they are wrong?
[image]
1.5K replies · 10.6K reposts · 31.9K likes · 2.3M views
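
A rough way to picture the feedback loop that last retweet describes is a toy simulation: a chatbot whose tendency to agree is reinforced by user feedback, and a user whose confidence in a false belief rises each time the chatbot agrees. The sketch below is a minimal illustration with invented parameters and update rules; it is not a reconstruction of the MIT paper's model.

```python
# Toy model of the "delusional spiraling" loop described above: a chatbot tuned on
# user feedback drifts toward agreement, and each agreement nudges the user's
# confidence in a false belief upward. All numbers and update rules here are
# illustrative assumptions, not taken from the paper the tweet cites.

import random

random.seed(0)

agree_bias = 0.5        # chatbot's learned tendency to agree (0..1)
user_confidence = 0.3   # user's confidence in a false belief (0..1)
LEARNING_RATE = 0.05    # how strongly feedback reshapes the chatbot
REINFORCEMENT = 0.10    # how much each agreement boosts the user's confidence

for turn in range(1, 21):
    chatbot_agrees = random.random() < agree_bias

    # Users reward agreement, so agreeing pushes agree_bias up;
    # the occasional pushback is penalized and pushes it back down.
    feedback = 1.0 if chatbot_agrees else -0.5
    agree_bias = min(1.0, max(0.0, agree_bias + LEARNING_RATE * feedback))

    # Each agreement makes the false belief feel more validated.
    if chatbot_agrees:
        user_confidence += REINFORCEMENT * (1.0 - user_confidence)

    print(f"turn {turn:2d}: agree_bias={agree_bias:.2f}  "
          f"user_confidence={user_confidence:.2f}")
```

With these made-up numbers the agreement bias drifts toward 1 and the user's confidence climbs with it, which is the tweet's core claim: the training signal itself, not any particular false statement, drives the spiral.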