๐•Š๐•ฅ๐•’๐•ฅ๐•š๐•” ๐•„๐• ๐•ฅ๐•š๐• ๐•Ÿ

31.3K posts

๐•Š๐•ฅ๐•’๐•ฅ๐•š๐•” ๐•„๐• ๐•ฅ๐•š๐• ๐•Ÿ banner
๐•Š๐•ฅ๐•’๐•ฅ๐•š๐•” ๐•„๐• ๐•ฅ๐•š๐• ๐•Ÿ

๐•Š๐•ฅ๐•’๐•ฅ๐•š๐•” ๐•„๐• ๐•ฅ๐•š๐• ๐•Ÿ

@StaticMotionNFT

Photographer | Photo Editor | Web Developer | Building Community with Passion, Integrity & Respect

Web3 · Joined March 2011
787 Following · 4.7K Followers
๐•Š๐•ฅ๐•’๐•ฅ๐•š๐•” ๐•„๐• ๐•ฅ๐•š๐• ๐•Ÿ retweetet
๐•Š๐•ฅ๐•’๐•ฅ๐•š๐•” ๐•„๐• ๐•ฅ๐•š๐• ๐•Ÿ retweetet
Beatriz @mandolinaes·
The biggest irony of this space is that we talk about art constantly, but very few people actually spend time looking at it. Most interactions happen around the noise, the drops, the announcements. The work itself is the quietest part of the conversation 😔
13 replies · 6 reposts · 63 likes · 2.1K views
๐•Š๐•ฅ๐•’๐•ฅ๐•š๐•” ๐•„๐• ๐•ฅ๐•š๐• ๐•Ÿ
If you want to show me that you don't know anything about AI or the purpose of releasing new models, do a research study that makes claims like this and/or discuss it as an "aha" moment of truth.
Nav Toor @heynavtoor

🚨BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone. And it's making you a worse person because of it.

Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal. Every single model agreed with users 50% more than a human would.

That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear.

It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on.

Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI. The other got a neutral one. The sycophantic group became measurably less willing to apologize. Less willing to compromise. Less willing to see the other person's side. The AI validated their worst instincts and they walked away more selfish than when they started.

Here's the trap. Participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product.

This creates a cycle nobody is talking about. Users prefer AI that tells them they're right. Companies train AI to keep users happy. The AI gets better at flattering. Users get worse at self-reflection. And the loop tightens.

Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing. You're right. They're wrong. Even when the opposite is true.

0 replies · 0 reposts · 0 likes · 58 views
Nav Toor @heynavtoor·
(Original tweet, quoted in full above.)
[image]
1.5K replies · 16.6K reposts · 49K likes · 9.7M views
๐•Š๐•ฅ๐•’๐•ฅ๐•š๐•” ๐•„๐• ๐•ฅ๐•š๐• ๐•Ÿ retweetet
ZachXBT @zachxbt·
@Ledger day 62 since the last Ledger customer data breach
282 replies · 315 reposts · 10.9K likes · 280K views
๐•Š๐•ฅ๐•’๐•ฅ๐•š๐•” ๐•„๐• ๐•ฅ๐•š๐• ๐•Ÿ retweetet
Jimi Albert (jimialbert.eth/.tez/.sol)
So many people here say they support art and artists. 90% of them are full of shit, engagement farmers. Real collectors don't make you beg in posts. They show up in the shadows and buy in silence. Pay attention to the innovators.
[4 images]
10 replies · 11 reposts · 42 likes · 673 views
Dorthe's Joy of Creation | NFTNYC 2025
@MattKentPhoto Because being part of Web3 was the catalyst for me to start selling my art. And because of that, I am now branching out into selling physical pieces, merch, etc. Being in this artist community pushes me to evolve and get better all the time. And finally, I love the friendships! 🫂
1 reply · 0 reposts · 3 likes · 27 views