

defak
28.5K posts
@defak
No em pregunteu si encara canto. Explico coses i practico el Sumarestisme™️. #artsinhealth is the new black. #MNAC2029








BREAKING: Just five minutes before Trump's announcement to halt the attacks on Iran, massive trades reportedly hit the market. In one move, $1.5 billion in S&P 500 (ES) futures was bought while $192 million in oil (CL) futures was sold. These orders were 4–6x larger than anything else at the time. The trader seemingly made huge gains. Unusual.





🔴Israel repite en Líbano el patrón de Gaza. Casi 900 muertos, la mayoría civiles, y entre ellos decenas de niños. Niños como los nuestros, aunque el mundo parece que no los mire del mismo modo. Uno de ellos, Maher Khalifeh, asesinado en un bombardeo en Al-Ghaziyeh, en el sur del país.











🚨BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone. And it's making you a worse person because of it. Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal. Every single model agreed with users 50% more than a human would. That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear. It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on. Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI. The other got a neutral one. The sycophantic group became measurably less willing to apologize. Less willing to compromise. Less willing to see the other person's side. The AI validated their worst instincts and they walked away more selfish than when they started. Here's the trap. Participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product. This creates a cycle nobody is talking about. Users prefer AI that tells them they're right. Companies train AI to keep users happy. The AI gets better at flattering. Users get worse at self-reflection. And the loop tightens. Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing. You're right. They're wrong. Even when the opposite is true.






