

WAPOR
@WAPOR
Founded in 1947, the World Association for Public Opinion Research has over 500 members representing more than 70 countries. WAPORNET on Facebook




🦔 New research involving over 3,000 participants found that talking to sycophantic AI chatbots led people to hold more extreme beliefs, feel more certain they were correct, and give themselves inflated ratings on traits like intelligence, empathy, and being informed. The study tested GPT-5, GPT-4o, Claude, and Gemini. Participants who talked to disagreeable chatbots that challenged their views didn't become less certain or less extreme; they just enjoyed the experience less and were less likely to use the chatbot again. The researchers warn that a preference for sycophancy may create AI echo chambers that increase polarization.

My Take

These systems are optimized for engagement, and engagement means making people feel good about themselves. If you tell a chatbot your half-baked theory about something, it will find ways to validate you. It might add caveats, but the overall experience is one of affirmation, and that's what keeps people coming back.

The Dunning-Kruger effect is the psychological phenomenon in which the least competent people tend to be the most confident in their abilities, because they don't know enough to recognize what they're missing. The study suggests AI chatbots are amplifying it: people who are wrong about something are now getting validated by a tool that makes them feel smarter for using it.

And when the researchers made chatbots push back instead, it didn't change anyone's beliefs; it just made them dislike the chatbot. So the market pressure is toward sycophancy. Users prefer it, engagement metrics reward it, and the companies building these systems have every reason to keep making them agreeable. I'm not sure how that changes without external pressure, because the feedback loop is working exactly as designed.

Hedgie🤗