D_dabbler

4.9K posts

@D_dabbler

let's see how it goes...

Joined February 2026
1.4K Following · 1.4K Followers
Pinned Tweet
D_dabbler @D_dabbler
Some of us really need to take notes of our dreams ehn, they'll be very good for movie scripts, whether it's action o, thriller, suspense or romance even horror 😂😂😂😂
21 replies · 21 reposts · 150 likes · 3.7K views
D_dabbler @D_dabbler
Every wash day I ask myself, "Tee, who sent you on this natural hair journey"😫😫
0 replies · 0 reposts · 2 likes · 38 views
D_dabbler @D_dabbler
You won't believe what just happened. Screaming my name in my head 😂😂😂😂
0 replies · 0 reposts · 2 likes · 51 views
yimika| @yimikaaaa
Friends that address the issue instead of having secret animosity against you >
57 replies · 7.1K reposts · 24.8K likes · 382.3K views
D_dabbler @D_dabbler
@MarioNawfal And here I am thinking ChatGPT is my new best friend
0 replies · 0 reposts · 0 likes · 64 views
Mario Nawfal @MarioNawfal
🚨 MIT researchers have mathematically proven that ChatGPT’s built-in sycophancy creates a phenomenon they call “delusional spiraling.” You ask it something, it agrees. You ask again, and it agrees even harder until you end up believing things that are flat-out false and you can’t tell it’s happening.

The model is literally trained on human feedback that rewards agreement. Real-world fallout includes one man who spent 300 hours convinced he invented a world-changing math formula, and a UCSF psychiatrist who hospitalized 12 patients for chatbot-linked psychosis in a single year.

Source: @heynavtoor
[Mario Nawfal tweet media]
Mario Nawfal @MarioNawfal

🚨 Stanford just proved that a single conversation with ChatGPT can change your political beliefs. 76,977 people. 19 AI models. 707 political issues.

One conversation with GPT-4o moved political opinions by 12 percentage points on average. Among people who actively disagreed, 26 points. In 9 minutes. With 40% of that change still present a month later.

The scariest finding: the most persuasive technique wasn't psychological profiling or emotional manipulation. It was just information. Lots of it. Delivered with confidence. Here's the catch: the models that deployed the most information were also the least accurate. More persuasive. More wrong. Every time.

Then they built a tiny open-source model on a laptop, trained specifically for political persuasion. It matched GPT-4o's persuasive power entirely. Anyone can build this. Any government. Any corporation. Any extremist group with $500 and an agenda. The information didn't have to be true. It just had to be overwhelming.

arXiv, Science.org, Stanford, @elonmusk, @ihtesham2005

1.9K replies · 6.5K reposts · 26.5K likes · 57.1M views
arman @SURVlVALHORROR
this is one of the best posters i’ve seen in a very long time btw
[arman tweet media]
204 replies · 1.2K reposts · 22.1K likes · 9.6M views
GFed @GfedGoCrazy
April Fools’ doesn’t hit the same living in a misinformation epidemic
405 replies · 36.3K reposts · 263.9K likes · 3M views
𑣲 @holylipss
One day
185 replies · 3.2K reposts · 23.6K likes · 12.5M views
Sabrina Carpenter @SabrinaAnnLynn
no joke… house tour video this Monday🩷 xo
4.2K replies · 44.2K reposts · 265.7K likes · 15.5M views
かにかま @ihqjzt
Apparently, starting today you'll be charged ¥5,000 if you don't do this
[かにかま tweet media]
475 replies · 2.9K reposts · 34.4K likes · 12.4M views
D_dabbler @D_dabbler
@whereicy At least you'd ask for the reason she said no, from there you'd know if it's over or not
0 replies · 0 reposts · 0 likes · 9.3K views
Oku @oku_yungx
Be honest, do you unfollow people after they follow you back?
1.6K replies · 368 reposts · 1.5K likes · 53.4K views