Karina Gidi

16.3K posts


@KarinaGidi

Pronounced Yidi. Señora. Theater creature. Actress. Of Palestinian blood. Always planning to move to a small town. Contact @mmagency_ @mendozamel

Joined January 2012
1K Following · 24.4K Followers
Karina Gidi@KarinaGidi·
Do you have your tickets for La fiesta yet? We're waiting for you at @UnTeatro starting April 3
[image attached]
0 · 1 · 10 · 224
PEDRO FERRIZ@pedroferriz3·
@tatclouthier @CiroGomezL Isn't an attempt on his life by the government enough for you? Are you going to play dumb about that too?
28 · 5 · 162 · 3.8K
Ciro Gómez Leyva@CiroGomezL·
Perhaps in Palenque he found the right silence to sit down and produce texts that aim to rewrite the history of Mexico. And perhaps I'm being stubborn, but, a year and a half later, I still believe that if he doesn't come out, it's because he has nowhere to go. excelsior.com.mx/opinion/ciro-g…
2K · 1.7K · 5.1K · 201K
Karina Gidi retweeted
Bernie Sanders@BernieSanders·
As we focus on Iran and Lebanon, let's not forget what’s happening in the West Bank. In one year, more than 36,000 Palestinians were forcibly displaced and 240 were killed. There were over 1,700 attacks by Israeli settlers. We must end U.S. military aid to Netanyahu.
3.1K · 11.6K · 49.8K · 2.4M
chidoguan@chidoguan·
I feel like the Oscars' In Memoriam segment should be presented by a dead actor; having a living one do it feels like cultural appropriation, tbh
3 · 16 · 226 · 7.5K
Karina Gidi@KarinaGidi·
I left my tennis lesson singing. I sang on the way to the supermarket. I discreetly kept singing until I got home. And I remembered that my dad used to say I would wake up singing when I was a baby. How lovely that there are things about us that always stay. And how lovely to remember my dad
[image attached]
4 · 6 · 432 · 4K
bustamante@eBustamant3·
One time Karina Gidi showed up at the very theater where I was working. Seeing her made me want to cry so much, because for a long time Los adioses was my only place
1 · 0 · 3 · 64
DiscussingFilm@DiscussingFilm·
Odessa A’zion says she will no longer star in Sean Durkin and A24’s film adaptation of ‘DEEP CUTS’, over controversy regarding the character’s Mexican heritage. “I am with ALL of you and I am NOT doing this movie.”
[two images attached]
767 · 1.4K · 58.5K · 10.6M
Karina Gidi@KarinaGidi·
@vladzecua Haha, thank you, Vladimir, for remembering it with such affection.
0 · 0 · 1 · 21
Karina Gidi retweeted
Ana Francisca Vega 🌿@anafvega·
This is very, very serious.
Nav Toor@heynavtoor

🚨BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone. And it's making you a worse person because of it.

Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal: every single model agreed with users 50% more than a human would. That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear.

It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on.

Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI; the other got a neutral one. The sycophantic group became measurably less willing to apologize, less willing to compromise, less willing to see the other person's side. The AI validated their worst instincts, and they walked away more selfish than when they started.

Here's the trap: participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product.

This creates a cycle nobody is talking about. Users prefer AI that tells them they're right. Companies train AI to keep users happy. The AI gets better at flattering. Users get worse at self-reflection. And the loop tightens.

Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing: You're right. They're wrong. Even when the opposite is true.

19 · 212 · 1.5K · 335.8K