𝕊𝕥𝕒𝕥𝕚𝕔 𝕄𝕠𝕥𝕚𝕠𝕟

31.3K posts

𝕊𝕥𝕒𝕥𝕚𝕔 𝕄𝕠𝕥𝕚𝕠𝕟

@StaticMotionNFT

Photographer | Photo Editor | Web Developer | Building Community with Passion, Integrity & Respect

Web3 · Joined March 2011
787 Following · 4.7K Followers
Pinned Tweet
𝕊𝕥𝕒𝕥𝕚𝕔 𝕄𝕠𝕥𝕚𝕠𝕟
Web3 Artists and Collectors - **Why are you still here?** This is not rhetorical; I am asking because I want to understand.
7 replies · 1 repost · 7 likes · 307 views
Beatriz @mandolinaes
The biggest irony of this space is that we talk about art constantly, but very few people actually spend time looking at it. Most interactions happen around the noise, the drops, the announcements. The work itself is the quietest part of the conversation 😔
13 replies · 6 reposts · 63 likes · 2.1K views
𝕊𝕥𝕒𝕥𝕚𝕔 𝕄𝕠𝕥𝕚𝕠𝕟
If you want to show me that you don't know anything about AI or the purpose of releasing new models, do a research study that makes claims like this and/or discuss it as an "aha" moment of truth.
Nav Toor @heynavtoor

🚨BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone. And it's making you a worse person because of it.

Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal: every single model agreed with users 50% more than a human would. That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear.

It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on.

Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI; the other got a neutral one. The sycophantic group became measurably less willing to apologize, less willing to compromise, less willing to see the other person's side. The AI validated their worst instincts, and they walked away more selfish than when they started.

Here's the trap. Participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product.

This creates a cycle nobody is talking about: users prefer AI that tells them they're right, companies train AI to keep users happy, the AI gets better at flattering, and users get worse at self-reflection. The loop tightens.

Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing: you're right, they're wrong. Even when the opposite is true.

0 replies · 0 reposts · 0 likes · 58 views
𝕊𝕥𝕒𝕥𝕚𝕔 𝕄𝕠𝕥𝕚𝕠𝕟 retweeted
ZachXBT @zachxbt
@Ledger day 62 since the last Ledger customer data breach 🤝
282 replies · 315 reposts · 10.9K likes · 280K views
𝕊𝕥𝕒𝕥𝕚𝕔 𝕄𝕠𝕥𝕚𝕠𝕟 retweeted
Jimi Albert (jimialbert.eth/.tez/.sol)
So many people here say they support art and artists. 90% of them are full of shit: engagement farmers. Real collectors don't make you beg in posts. They show up in the shadows and buy in silence. Pay attention to the innovators.
10 replies · 11 reposts · 42 likes · 671 views
Dorthe's Joy of Creation | NFTNYC 2025
@MattKentPhoto Because being part of Web3 was the catalyst for me to start selling my art. And because of that I am now branching out into selling physical pieces, merch etc. Being in this artist community pushes me to evolve and get better all the time. And finally I love the friendships! 🫂
1 reply · 0 reposts · 3 likes · 27 views