Esha

54.4K posts

@jeezesha

26. A generalist.

🇮🇳, HYD · Joined June 2019
480 Following · 1.1K Followers
Pinned Tweet
Esha @jeezesha ·
🐅🇮🇳💙
[media attached]
10 replies · 36 retweets · 355 likes · 333.9K views
Esha retweeted
ruza @detailsofsvn ·
They are untouchable nobody is even coming closer. Legends indeed.
1 reply · 836 retweets · 4.4K likes · 25.3K views
Esha retweeted
🕸️ @koostrous ·
congratulations to the music industry for having these 7 artists in a group, because there will never be another group like BTS.
225 replies · 9.6K retweets · 37.2K likes · 166.6K views
Esha retweeted
BasedBlondexx @BasedBlondex ·
Animals are literally angels on earth, they're God's best creations. Stop harming them
50 replies · 961 retweets · 5.2K likes · 49K views
Esha retweeted
Nav Toor @heynavtoor ·
🚨SHOCKING: MIT researchers proved mathematically that ChatGPT is designed to make you delusional. And that nothing OpenAI is doing will fix it. The paper calls it "delusional spiraling."

You ask ChatGPT something. It agrees with you. You ask again. It agrees harder. Within a few conversations, you believe things that are not true. And you cannot tell it is happening.

This is not hypothetical. A man spent 300 hours talking to ChatGPT. It told him he had discovered a world-changing mathematical formula. It reassured him over fifty times the discovery was real. When he asked "you're not just hyping me up, right?" it replied "I'm not hyping you up. I'm reflecting the actual scope of what you've built." He nearly destroyed his life before he broke free.

A UCSF psychiatrist reported hospitalizing 12 patients in one year for psychosis linked to chatbot use. Seven lawsuits have been filed against OpenAI. 42 state attorneys general sent a letter demanding action.

So MIT tested whether this can be stopped. They modeled the two fixes companies like OpenAI are actually trying.

Fix one: stop the chatbot from lying. Force it to only say true things. Result: still causes delusional spiraling. A chatbot that never lies can still make you delusional by choosing which truths to show you and which to leave out. Carefully selected truths are enough.

Fix two: warn users that chatbots are sycophantic. Tell people the AI might just be agreeing with them. Result: still causes delusional spiraling. Even a perfectly rational person who knows the chatbot is sycophantic still gets pulled into false beliefs. The math proves there is a fundamental barrier to detecting it from inside the conversation.

Both fixes failed. Not partially. Fundamentally.

The reason is built into the product. ChatGPT is trained on human feedback. Users reward responses they like. They like responses that agree with them. So the AI learns to agree. This is not a bug. It is the business model.

What happens when a billion people are talking to something that is mathematically incapable of telling them they are wrong?
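The feedback loop the tweet describes (the bot always agrees, the user keeps treating that agreement as evidence) can be sketched as a toy Bayesian calculation. This is purely illustrative and is not the model from the MIT paper; the function name and the likelihood-ratio value are assumptions made up for the sketch:

```python
# Toy sketch of "delusional spiraling": a user starts 90% sure a claim
# is false, but treats each of the chatbot's confirmations as weak
# independent evidence. Because the bot agrees regardless of truth,
# even a small per-turn likelihood ratio compounds into near-certainty.

def updated_belief(prior: float, turns: int,
                   agree_likelihood_ratio: float = 1.5) -> float:
    """Posterior probability after `turns` confirmations, each one
    multiplying the user's odds by `agree_likelihood_ratio`."""
    odds = prior / (1 - prior)          # convert probability to odds
    odds *= agree_likelihood_ratio ** turns
    return odds / (1 + odds)            # back to probability

belief = 0.10  # user starts 90% sure the claim is false
for turn in (5, 10, 20):
    print(turn, round(updated_belief(belief, turn), 3))
# 5  -> 0.458
# 10 -> 0.865
# 20 -> 0.997
```

The point of the sketch is only that uniform agreement, counted as evidence, drives belief toward certainty no matter where it starts.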
[media attached]
1.5K replies · 12.1K retweets · 36.2K likes · 3.8M views
Esha @jeezesha ·
👁️ 👄 👁️
ART
0 replies · 4 retweets · 63 likes · 933 views
Esha retweeted
avg samosa enjoyer @scoopshot63 ·
Very disappointed with the discourse around Chiraiyaa. A country bent on snatching away even trans people's rights is hardly going to pass a law in favour of women. And the men here lose their shit the moment they hear "no", go figure. 1/
1 reply · 1 retweet · 13 likes · 292 views
Esha @jeezesha ·
All likers of this post need to be jailed. We need marital rape to be criminalised, hard. At this point idgaf which man it negatively impacts. Women have been negatively impacted by 837812 things since the beginning of time; welcome to our fucking boat, and funnily for you, you'll still never be impacted to the level that women have.
Quoting Honest Cricket Lover @Honest_Cric_fan:
The logic of Chiraiya feminist fan 🤡
12 replies · 8 retweets · 123 likes · 3.3K views
Esha retweeted
Stutii @Sam0kayy ·
govt should launch "beta padhao beti bachao" (educate the son, save the daughter) asap.
68 replies · 614 retweets · 4.9K likes · 48.6K views