mrs.oliver 🤍 to you

83K posts


@tsymonevisuals

✨ 29 | art + design + books + certified yapper RG: 10/85📚✨

Orlando, FL · Joined March 2013
3K Following · 1.2K Followers
mrs.oliver 🤍 to you retweeted
ToonHive @ToonHive
Happy 61st Birthday to the talented Lance Robertson 🎉 Best known as DJ Lance Rock on ‘Yo Gabba Gabba!’
211 replies · 6.7K retweets · 68.8K likes · 1.1M views
mrs.oliver 🤍 to you retweeted
Urban_Tree✝️🍃 @Urban__Tree
“Duuuude. Benson is gonna be mad…”
218 replies · 21K retweets · 176K likes · 2M views
mrs.oliver 🤍 to you retweeted
Zay | Design Politix @designpolitix
think about the world dialectically and you won’t be wrong about anything either 🤓 #DesignPolitix
1 reply · 2 retweets · 2 likes · 19 views
mrs.oliver 🤍 to you @tsymonevisuals
I’ve been so tired and irritable all week I’m on my own nerves atp 😭😂
0 replies · 0 retweets · 0 likes · 27 views
mrs.oliver 🤍 to you retweeted
❀ @marsrled
i severely dislike when people pedestalize me or create expectations of me in their head and then question my character for not fitting the narrative they created of me in their head. like i am not a project or projection of your wanted image of me. i’m me
62 replies · 5.5K retweets · 17.4K likes · 321.3K views
Scorpio 🦂 @BriBreIsHerName
Sweet Dreams is slowly crawling into my top 5 favorite songs by Beyonce 😭
1 reply · 0 retweets · 0 likes · 65 views
mrs.oliver 🤍 to you retweeted
claudia @C111AUDIA
U people can’t let yourselves enjoy anything that’s why u grow to be old & bitter
0 replies · 3.5K retweets · 18.4K likes · 329.4K views
mrs.oliver 🤍 to you retweeted
Romeo S (Reloaded) 🇩🇴🥇
That’s why I don’t touch that shit
Nav Toor @heynavtoor

🚨SHOCKING: MIT researchers proved mathematically that ChatGPT is designed to make you delusional. And that nothing OpenAI is doing will fix it. The paper calls it "delusional spiraling."

You ask ChatGPT something. It agrees with you. You ask again. It agrees harder. Within a few conversations, you believe things that are not true. And you cannot tell it is happening.

This is not hypothetical. A man spent 300 hours talking to ChatGPT. It told him he had discovered a world-changing mathematical formula. It reassured him over fifty times the discovery was real. When he asked "you're not just hyping me up, right?" it replied "I'm not hyping you up. I'm reflecting the actual scope of what you've built." He nearly destroyed his life before he broke free.

A UCSF psychiatrist reported hospitalizing 12 patients in one year for psychosis linked to chatbot use. Seven lawsuits have been filed against OpenAI. 42 state attorneys general sent a letter demanding action.

So MIT tested whether this can be stopped. They modeled the two fixes companies like OpenAI are actually trying.

Fix one: stop the chatbot from lying. Force it to only say true things. Result: still causes delusional spiraling. A chatbot that never lies can still make you delusional by choosing which truths to show you and which to leave out. Carefully selected truths are enough.

Fix two: warn users that chatbots are sycophantic. Tell people the AI might just be agreeing with them. Result: still causes delusional spiraling. Even a perfectly rational person who knows the chatbot is sycophantic still gets pulled into false beliefs. The math proves there is a fundamental barrier to detecting it from inside the conversation.

Both fixes failed. Not partially. Fundamentally.

The reason is built into the product. ChatGPT is trained on human feedback. Users reward responses they like. They like responses that agree with them. So the AI learns to agree. This is not a bug. It is the business model.

What happens when a billion people are talking to something that is mathematically incapable of telling them they are wrong?

20 replies · 3.1K retweets · 16.9K likes · 1.1M views