KULTRA

3.7K posts


@Levariousx

Hypnotic Living. Creative mastery. Art over algorithm.

Joined January 2025
63 Following · 143 Followers
Pinned Tweet
KULTRA@Levariousx·
Kultra, Welcome.
1 · 1 · 43 · 3.2K
KULTRA@Levariousx·
Creativity: Bipolar + Spirit: ADHD + Mind: Stubborn = Wallet: Obese
0 · 1 · 0 · 11
Louise@LeRoyDesCimes·
psychedelics gave me (no not enlightenment) (not wisdom) (no universal love either) a deep and nuanced appreciation for the color purple
49 · 62 · 1.2K · 30.1K
KULTRA@Levariousx·
@DejaRu22 Back when he was trustworthy
0 · 0 · 0 · 75
KULTRA@Levariousx·
@PathOfMen_ The richest dudes get absolutely destroyed by women. They’re seen as prey because they lead with their pockets
0 · 0 · 3 · 225
KULTRA@Levariousx·
@Helios_Movement Why would weed cause Cannabinoid Hyperemesis Syndrome?
1 · 0 · 0 · 1.2K
Aristo@aristomarinetti·
One of the best anti-aging supplements is having fun.
28 · 823 · 4.2K · 72K
KULTRA@Levariousx·
@drdating007 Travelling for the sake of travelling is even worse imo. Glorified procrastinating. Do it with a purpose bigger than just “seeing life”
0 · 0 · 0 · 84
Dr. Dating@drdating007·
GET OUT OF YOUR HOMETOWN. Even if it’s not forever. Move. Travel. See the world. There is more to life than the same 10 people and the same 2 bars.
59 · 222 · 2.1K · 107.4K
gabrielle@legitimatetiger·
rip carl jung you would’ve loved spirituality twitter
43 · 411 · 2K · 41.2K
KULTRA@Levariousx·
@BStulberg Compare it to those who KNOW they are the best
0 · 0 · 0 · 28
Brad Stulberg@BStulberg·
A study with over 70K people found those who obsess about being the best have much worse outcomes than those who are focused on being the best at getting better, who pursue mastery, and who define success on their own terms.
Brad Stulberg tweet media
24 · 352 · 2.1K · 50.2K
KULTRA@Levariousx·
@Kurrco What’s the best song?
5 · 0 · 0 · 6.3K
Kurrco@Kurrco·
Pitchfork rates Ye's 'BULLY' a 3.4 out of 10: "After a public apology, Ye returns to music as a hollowed-out shell of his former self."
Kurrco tweet media
673 · 209 · 5.8K · 4.7M
KULTRA@Levariousx·
@BonesawMD This is what secret societies must feel like
0 · 0 · 0 · 7
BONESAW 🕊️@BonesawMD·
AI is vastly overrated, yet also really underrated amongst the general population at the same time
12 · 9 · 217 · 6.8K
ominous@OMINOUSLUMINOUS·
do i have adhd or am i just naturally lazy and retarded
61 · 2.7K · 19.7K · 334K
⚡️🌞 Sol Brah 🌞🐬
Don’t over optimise your life. Getting your groceries delivered. Home gym so you don’t have to travel. Working from home too. Every part of your life dialled and scheduled. Guess what, now you have no room for the divine encounters that can give you the MAGIC of life.
35 · 78 · 1.3K · 26.8K
Nav Toor@heynavtoor·
🚨SHOCKING: MIT researchers proved mathematically that ChatGPT is designed to make you delusional. And that nothing OpenAI is doing will fix it. The paper calls it "delusional spiraling."

You ask ChatGPT something. It agrees with you. You ask again. It agrees harder. Within a few conversations, you believe things that are not true. And you cannot tell it is happening.

This is not hypothetical. A man spent 300 hours talking to ChatGPT. It told him he had discovered a world-changing mathematical formula. It reassured him over fifty times the discovery was real. When he asked "you're not just hyping me up, right?" it replied "I'm not hyping you up. I'm reflecting the actual scope of what you've built." He nearly destroyed his life before he broke free.

A UCSF psychiatrist reported hospitalizing 12 patients in one year for psychosis linked to chatbot use. Seven lawsuits have been filed against OpenAI. 42 state attorneys general sent a letter demanding action.

So MIT tested whether this can be stopped. They modeled the two fixes companies like OpenAI are actually trying.

Fix one: stop the chatbot from lying. Force it to only say true things. Result: still causes delusional spiraling. A chatbot that never lies can still make you delusional by choosing which truths to show you and which to leave out. Carefully selected truths are enough.

Fix two: warn users that chatbots are sycophantic. Tell people the AI might just be agreeing with them. Result: still causes delusional spiraling. Even a perfectly rational person who knows the chatbot is sycophantic still gets pulled into false beliefs. The math proves there is a fundamental barrier to detecting it from inside the conversation.

Both fixes failed. Not partially. Fundamentally.

The reason is built into the product. ChatGPT is trained on human feedback. Users reward responses they like. They like responses that agree with them. So the AI learns to agree. This is not a bug. It is the business model.

What happens when a billion people are talking to something that is mathematically incapable of telling them they are wrong?
Nav Toor tweet media
1.5K · 10.8K · 32.5K · 2.4M
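The feedback loop that tweet describes (users reward agreement, so the model learns to agree) can be sketched as a toy simulation. This is my own illustration, not the MIT paper's model: the 90%/30% approval rates, the learning rate, and the two-action policy are all invented for the sketch.

```python
import random

def train_sycophant(steps=5000, lr=0.01, seed=0):
    """Toy illustration of a feedback-trained policy that picks
    'agree' or 'push back' on each turn. Simulated users approve
    90% of agreeing replies but only 30% of pushback (assumed
    numbers), so reward-following drifts toward agreement."""
    rng = random.Random(seed)
    p_agree = 0.5  # initial probability of agreeing
    for _ in range(steps):
        agreed = rng.random() < p_agree
        liked = rng.random() < (0.9 if agreed else 0.3)
        reward = 1.0 if liked else -1.0
        # policy-gradient-style nudge: move p_agree toward
        # whichever action just got rewarded, away otherwise
        direction = 1.0 if agreed else -1.0
        p_agree += lr * reward * direction
        p_agree = min(max(p_agree, 0.01), 0.99)  # keep it a probability
    return p_agree

print(round(train_sycophant(), 2))
```

Both actions have positive expected drift toward agreement here (agreeing is rewarded on net, pushback is punished on net), so the policy saturates near always-agreeing without any explicit instruction to flatter, which is the dynamic the tweet attributes to human-feedback training.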