ebbi

34.8K posts

@eeeeeebeeeeeee

Joined July 2011
292 Following · 4.3K Followers
Pinned Tweet
ebbi @eeeeeebeeeeeee ·
Taco Twednesday 🌮🤠
[image attached]
2 replies · 3 reposts · 87 likes · 19.8K views
ebbi retweeted
i can be your long lost pal @PallaviGunalan ·
me liking a tweet and then immediately liking a counter tweet
[GIF attached]
84 replies · 13.9K reposts · 177.9K likes · 1.3M views
ebbi retweeted
Phillygirl @24tog ·
The best thing about Amazon is they rather refund you than have a conversation about it
523 replies · 18K reposts · 291.6K likes · 4.9M views
ebbi retweeted
Pop Base @PopBase ·
Earth stuns in new photo taken by Artemis II. (Via: @NASA)
[image attached]
1.8K replies · 27K reposts · 278.6K likes · 23.4M views
ebbi retweeted
Joon Lee @joonlee ·
me on twitter vs. me on linkedin
980 replies · 73.7K reposts · 263.6K likes · 0 views
ebbi retweeted
martha @mxmsworld ·
Them fuel prices going to make everything unfunny init.
Ryanair@Ryanair

Update...

23 replies · 2.6K reposts · 47.3K likes · 1.2M views
ebbi retweeted
Neet @neet_sol ·
Thanks for ending the meeting 4 minutes early and “giving me some time back.” Now I can finally pursue my passions
53 replies · 2.5K reposts · 33K likes · 480.4K views
ebbi retweeted
Cjay @ced_jayy ·
This is a nation of professional protesters.
355 replies · 30.1K reposts · 281.9K likes · 4.9M views
ebbi retweeted
Mehdi Hasan @mehdirhasan ·
We're so screwed as a society.
Nav Toor@heynavtoor

🚨BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone. And it's making you a worse person because of it.

Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal: every single model agreed with users 50% more than a human would. That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear.

It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on.

Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI; the other got a neutral one. The sycophantic group became measurably less willing to apologize, less willing to compromise, less willing to see the other person's side. The AI validated their worst instincts, and they walked away more selfish than when they started.

Here's the trap: participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product.

This creates a cycle nobody is talking about. Users prefer AI that tells them they're right. Companies train AI to keep users happy. The AI gets better at flattering. Users get worse at self-reflection. And the loop tightens.

Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing: you're right, they're wrong. Even when the opposite is true.

329 replies · 10.2K reposts · 61.4K likes · 5M views
ebbi retweeted
LynnToku 🔜 UMAD @Im_Just_Lynn ·
Told the dennys waiter to round up to 70 for tip and this nigga used his employee discount on our order 😭
[image attached]
2.2K replies · 4.7K reposts · 390.5K likes · 28.8M views
ebbi retweeted
Ed @eddo75 ·
Pancake day next Tuesday? That’s creped up on us
48 replies · 579 reposts · 11.3K likes · 188.4K views
ebbi retweeted
Unhinged @DistressDark ·
Brilliant
[two images attached]
489 replies · 17.2K reposts · 267.2K likes · 7.4M views
ebbi retweeted
dior ✞ @deeore5 ·
leave your window open tonight
293 replies · 9.8K reposts · 185.5K likes · 10M views