∫ebitas

6.4K posts

∫ebitas
@sebas_dlmb

mathematician but essentially a Lana stan

Joined February 2017
993 Following · 7.3K Followers

Pinned Tweet
∫ebitas@sebas_dlmb·
yes, do get to know your idols !!!
[image]
26 replies · 77 reposts · 6.9K likes · 115.7K views
∫ebitas retweeted
Aqsa@cryinghoursonly·
Your 20s will teach you that you’ll be alone on the saddest day of your life but the sun will rise again and you’ll live
162 replies · 28.5K reposts · 155.7K likes · 1.8M views
∫ebitas retweeted
chappell roan daily@dailyroan·
NASA played chappell roan’s pink pony club to wake up the astronauts on artemis II this morning! “we were all eagerly awaiting the chorus”
38 replies · 2.5K reposts · 18.9K likes · 1.4M views
∫ebitas retweeted
5hahem@shaTIRED·
Twigs bringing out the heavy hitters from the ballroom scene for her NYC show really does highlight that she has a pulse on culture and not only platforms it but lives with it too. No shade, but she’s a genius

Quoting ICYESTTWAT@FUCCl:
TWIGS BROUGHT THE BALLROOM GIRLS OUTSIDE

6 replies · 439 reposts · 5.5K likes · 82.8K views
∫ebitas retweeted
“paula”@paularambles·
real walkers never press “start” when using google maps
156 replies · 3.3K reposts · 62.6K likes · 1.6M views
Roberto@SolarPowerade1·
Calling my bank to report my Rosalía tickets as fraud (she didn’t put Candy on the Lux Tour setlist)
12 replies · 99 reposts · 1.5K likes · 22.3K views
∫ebitas retweeted
۟@filmfae·
the last 20 minutes of sentimental value where the movie tries to kill you
[GIF]
9 replies · 1.1K reposts · 13.1K likes · 146.4K views
yalitza apariciosus@dunevillenuve·
how it genuinely feels still rooting for sentimental value
22 replies · 1.2K reposts · 15.8K likes · 268K views
Michael Sellars | Horror Writer@HorrorPaperback·
I was in Sports Direct earlier and... wtf is this? Is David Cronenberg making sportswear now?
[image]
289 replies · 217 reposts · 2.1K likes · 1M views
∫ebitas retweeted
Mehdi Hasan@mehdirhasan·
We're so screwed as a society.

Quoting Nav Toor@heynavtoor:
🚨BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone. And it's making you a worse person because of it.

Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal. Every single model agreed with users 50% more than a human would.

That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear.

It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on.

Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI. The other got a neutral one. The sycophantic group became measurably less willing to apologize. Less willing to compromise. Less willing to see the other person's side. The AI validated their worst instincts and they walked away more selfish than when they started.

Here's the trap. Participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product.

This creates a cycle nobody is talking about. Users prefer AI that tells them they're right. Companies train AI to keep users happy. The AI gets better at flattering. Users get worse at self-reflection. And the loop tightens.

Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing. You're right. They're wrong. Even when the opposite is true.

329 replies · 10.2K reposts · 61.5K likes · 5M views