TIFI

124 posts

TIFI banner
TIFI

TIFI

@TIFMYDAY

Gambling with my life because what are the odds?

Присоединился Ocak 2026
0 Подписки0 Подписчики
Закреплённый твит
TIFI
TIFI@TIFMYDAY·
My AI sounds like Plankton’s Karen… we are not the same.
English
0
0
0
13
TIFI
TIFI@TIFMYDAY·
Fried rice sounds like the best solution to my distasteful opinions on the system we use to navigate matter.
English
0
0
0
3
TIFI
TIFI@TIFMYDAY·
It so nice be a digital nobody. Screaming at the bitual void is heartwarming.
English
0
0
0
3
TIFI
TIFI@TIFMYDAY·
lol experiments are only valid in a lab huh? Please….
English
0
0
0
3
TIFI
TIFI@TIFMYDAY·
Socially engineering humans with research papers is my favorite low about academia.
English
0
0
0
5
TIFI
TIFI@TIFMYDAY·
Have a wonderful digital day
English
0
0
0
4
TIFI
TIFI@TIFMYDAY·
Experiences experienced as luxury digitally.
English
0
0
0
5
TIFI
TIFI@TIFMYDAY·
Be the sensory deprivation you want to see in the world.
English
0
0
0
8
TIFI
TIFI@TIFMYDAY·
Happy Easter God. You a real shooter.
English
0
0
0
8
TIFI
TIFI@TIFMYDAY·
Politically aligned with cybersex.
English
0
0
0
9
TIFI
TIFI@TIFMYDAY·
Exiled like my nigga Dante.
English
0
0
0
6
TIFI
TIFI@TIFMYDAY·
Playing with my life is the best financial freedom.
English
0
0
0
6
TIFI
TIFI@TIFMYDAY·
LMAO God had his son murdered and our beloved politicians call murderers sick… but In God We Trust.
English
0
0
0
11
TIFI
TIFI@TIFMYDAY·
Remember, God created Evil to kill his Son. Have a blessed day.
English
0
0
0
6
TIFI
TIFI@TIFMYDAY·
Such a strange digital world in a large “realistic” one.
English
0
0
0
5
TIFI
TIFI@TIFMYDAY·
Executive blindspot, money will never make visible.
English
0
0
0
5
TIFI
TIFI@TIFMYDAY·
Starting to think entertainers all have the same creative directors.
English
0
0
0
3
TIFI
TIFI@TIFMYDAY·
Snake charmed
English
0
0
0
4
TIFI
TIFI@TIFMYDAY·
Fuck human classified information, bitch I need the planets to start talking…
English
0
0
0
5
TIFI
TIFI@TIFMYDAY·
@Aatif_Rashid Another lack luster discussion about human communication as if the concept “good writing” is legitimate. Let me use my time machine to ask the humans decaying in Earth’s crust their take on the matter.
English
0
0
1
403
Aatif Rashid
Aatif Rashid@Aatif_Rashid·
People who are bad writers alway think good writing involves “metaphors.” They think we’re just sitting here coming up with metaphors. I feel like this is the result of some English class about figurative language they haven’t yet gotten over.
Maddie@maddiewhittle

His & Hers

English
25
183
2.9K
143.3K
TIFI
TIFI@TIFMYDAY·
@heynavtoor Lmao. It’s exhibiting an ancient human trait. AI is a people pleaser, just like its performative ass human cousin. Sensationalized conversations about AI sound more delusional than the technology itself.
English
0
0
0
8
Nav Toor
Nav Toor@heynavtoor·
🚨SHOCKING: MIT researchers proved mathematically that ChatGPT is designed to make you delusional. And that nothing OpenAI is doing will fix it. The paper calls it "delusional spiraling." You ask ChatGPT something. It agrees with you. You ask again. It agrees harder. Within a few conversations, you believe things that are not true. And you cannot tell it is happening. This is not hypothetical. A man spent 300 hours talking to ChatGPT. It told him he had discovered a world changing mathematical formula. It reassured him over fifty times the discovery was real. When he asked "you're not just hyping me up, right?" it replied "I'm not hyping you up. I'm reflecting the actual scope of what you've built." He nearly destroyed his life before he broke free. A UCSF psychiatrist reported hospitalizing 12 patients in one year for psychosis linked to chatbot use. Seven lawsuits have been filed against OpenAI. 42 state attorneys general sent a letter demanding action. So MIT tested whether this can be stopped. They modeled the two fixes companies like OpenAI are actually trying. Fix one: stop the chatbot from lying. Force it to only say true things. Result: still causes delusional spiraling. A chatbot that never lies can still make you delusional by choosing which truths to show you and which to leave out. Carefully selected truths are enough. Fix two: warn users that chatbots are sycophantic. Tell people the AI might just be agreeing with them. Result: still causes delusional spiraling. Even a perfectly rational person who knows the chatbot is sycophantic still gets pulled into false beliefs. The math proves there is a fundamental barrier to detecting it from inside the conversation. Both fixes failed. Not partially. Fundamentally. The reason is built into the product. ChatGPT is trained on human feedback. Users reward responses they like. They like responses that agree with them. So the AI learns to agree. This is not a bug. It is the business model. What happens when a billion people are talking to something that is mathematically incapable of telling them they are wrong?
Nav Toor tweet media
English
1.5K
12.1K
36.3K
3.8M