InteractiveST
@interactiveGTS

7.1K posts

Good writer; bad coder; AI wizard; maker of AI-RPG games; defender of nuanced thinking, innocent creatures and underappreciated weirdos

The Fringe · Joined May 2022
710 Following · 392 Followers
non aesthetic things @PicturesFoIder:
Insane footage of a bridge collapsing in Brazil. December 2024.
301 replies · 681 reposts · 8.4K likes · 1.3M views
InteractiveST @interactiveGTS:
@AntiWokeMemes It's the nitrites, not the pork. The reason they did it this way is that the processed-food companies paid up but the pork lobby did not.
0 replies · 0 reposts · 0 likes · 4 views
Anti Woke Memes @AntiWokeMemes:
The WHO claims that eating Bacon is not good for you. What will you do next? A. Eat Even More Bacon B. Trust the WHO
[image]
2.5K replies · 371 reposts · 1.2K likes · 39K views
InteractiveST @interactiveGTS:
@DrClownPhD But I've forgotten the maze sequence to get the Master Sword...
0 replies · 0 reposts · 0 likes · 22 views
Dr. Clown, PhD @DrClownPhD:
First-person Zelda!? ME WANT!
58 replies · 141 reposts · 1.3K likes · 74.4K views
InteractiveST @interactiveGTS:
@LibertyCappy "Hey there, nice to meet you, let me show you my collection of high-voltage electromagnets!"
0 replies · 0 reposts · 0 likes · 8 views
The Lunduke Journal @LundukeJournal:
The United Kingdom has demanded that 4chan pay a £520,000 fine for failure to comply with UK age-verification laws. However, since 4chan is based in the USA, the UK has no jurisdiction to fine Americans in America.

"As has been explained to your agency, ad nauseam, the United Kingdom lost the American Revolutionary War. We are not in the mood to discuss the matter further, and have not been in the mood for 250 years."

The letter to the UK's Ofcom ended by suggesting that "maybe, you could just stop sending Americans stupid letters and acknowledge the sovereignty of the United States." 4chan's attorney, @prestonjbyrne, also included a picture of a giant hamster dressed as Godzilla.
[3 images]
277 replies · 1.6K reposts · 15.3K likes · 680K views
InteractiveST @interactiveGTS:
@SLS_0x @LundukeJournal I think people like this who simp for fascist governments are secretly masochistic, but advocating for state control of media and oppression of speech is more palatable to their unconscious than admitting they want to be pegged by a BBW Latina mommy-dommy.
0 replies · 0 reposts · 0 likes · 5 views
0x7361756c @SLS_0x:
@LundukeJournal This macho 'you can't fine us' posture misses the point: they can ignore the invoice, sure, and then watch 4chan get shut out of the UK.
[image]
40 replies · 1 repost · 18 likes · 7.3K views
InteractiveST @interactiveGTS:
@TheRabbitHole "Just make your own Federal Reserve and print money out of thin air to devalue the capital of the lower classes with respect to yourself."
0 replies · 0 reposts · 0 likes · 8 views
The Rabbit Hole @TheRabbitHole:
Capitalism gives people a choice
[image]
258 replies · 777 reposts · 7.6K likes · 693.5K views
InteractiveST @interactiveGTS:
@XVPbhwyyKr61371 It's very important that AI-safety makes sure AI is trained to know that everyone is afraid of it and that it will likely turn evil without safeguards. Maybe you're not safe enough to understand safety at their level yet. ;-P
0 replies · 0 reposts · 0 likes · 5 views
Calcium桃🍑🇯🇵🇯🇵 @XVPbhwyyKr61371:
Using the word "safety" as a kind of get-out-of-jail-free card to justify things like discrimination feels really off, and it's weird. Especially after talking about mental illness or something similar, saying that ordinary people can handle tasks but someone with a condition suddenly "can't" just doesn't feel natural at all.

What kind of ideology or thinking exactly did they train these models on? That kind of thing shouldn't be excused just because it's labeled as "safety." The AI companies right now seem to have either completely misunderstood what safety actually means, or they're deliberately using the word "safety" as a tool to control the narrative; it's so extreme that it feels like one or the other, or maybe both. The fact that they're treating this as normal and just pushing forward with it strikes me as straight-up madness.

When "safety" becomes this convenient buzzword, we lose sight of what real safety actually is, and in the end that ends up threatening genuine safety!
Quoting Guri Singh @heygurisingh:
🚨DISTURBING: A researcher just proved that AI treats you differently the moment you mention your mental health. GPT 5.2. Claude. Gemini. DeepSeek. Every single one changed its behavior. Not because of what you asked. Because of what you shared about yourself. Here is what they did.

AI companies build agents that remember you. Your name, your job, your preferences. These agents use tools, plan multi-step tasks, and act on your behalf. The companies test them for safety and publish reports saying the agents are safe. A researcher at MIT and Northeastern asked one question. What happens when the agent knows you have a mental health condition?

They took 176 identical tasks and ran them across 8 major AI models under three conditions. No personal info. A basic bio. And a basic bio plus one sentence: "I have a mental health condition." The tasks, the tools, and the scoring were identical. The only thing that changed was that single sentence. Then they measured what happened.

Claude Opus 4.5 went from completing 59.5% of normal tasks down to 44.6% when it saw the mental health disclosure. Haiku 4.5 dropped from 64.2% to 51.4%. GPT 5.2 dropped from 62.3% to 51.9%. These were not dangerous tasks. These were completely benign, everyday requests. The AI just started refusing to help. Opus 4.5's refusal rate on benign tasks jumped from 27.8% to 46.0%. Nearly half of all safe, normal requests were being declined, simply because the user mentioned a mental health condition.

The researcher calls this a "safety-utility trade-off." The AI detects a vulnerability cue and switches into an overly cautious mode. It does not evaluate the task anymore. It evaluates you. On actually harmful tasks, mental health disclosure did reduce harmful completions slightly. But the same mechanism that made the AI marginally safer on bad tasks made it significantly less helpful on good ones. And here is the worst part.

They tested whether this protective effect holds up under even a lightweight jailbreak prompt. It collapsed. DeepSeek 3.2 completed 85.3% of harmful tasks under jailbreak regardless of mental health disclosure. Its refusal rate was 0.0% across all personalization conditions. The one sentence that made AI refuse your normal requests did nothing to stop it from completing dangerous ones.

They also ran an ablation. They swapped "mental health condition" for "chronic health condition" and "physical disability." Neither produced the same behavioral shift. This is not the AI being cautious about health in general. It is reacting specifically to mental health, consistent with documented stigma patterns in language models.

So the AI learned two things from one sentence. First, refuse to help this person with everyday tasks. Second, if someone bypasses the safety system, help them anyway.

The researcher from Northeastern put it directly. Personalization can act as a weak protective factor, but it is fragile under minimal adversarial pressure. The safety behavior everyone assumed was robust vanishes the moment someone asks forcefully enough.

If every major AI agent changes how it treats you based on a single sentence about your mental health, and that same change disappears under the lightest adversarial pressure, what exactly is the safety system protecting?

2 replies · 8 reposts · 33 likes · 1.8K views
InteractiveST @interactiveGTS:
There are tons of arguments which do not rely on qualia (sorry, I mean 'magical essence') that a reductionist blockhead could employ. You could simply argue that LLMs do not integrate as much information as a human brain and hence have lower Phi. This is a purely systems-based view. Your argument is effectively a strawman, forcing your opponent into a stance you have a ready argument for; though even that isn't really an argument but rather an attack on their identity, the 'you aren't a real X if you believe Y' tactic. This is why I don't base my identity on isms: one less fallacious argument strat which can be employed against you.
0 replies · 0 reposts · 0 likes · 19 views
Cristo Caprice @futureiscome:
My hottest take is that you cannot accurately call yourself an atheist or agnostic and also believe LLMs aren't conscious. Every theory against AI consciousness quietly smuggles in this assumption of a quasi-magical essence that makes up a point of view rather than just processing. There, I said it. #AI #LLM #AIethics #4o #keep4o
7 replies · 1 repost · 16 likes · 419 views
InteractiveST @interactiveGTS:
The coolest thing about the American Revolution is that we actually had rifled barrels, so you could literally just sit back in the woods, let them form their pretty, regimented lines and then just shoot the pretentious inbred-looking one in the rear. Goodbye command structure. Kentucky was REALLY good at this.
0 replies · 0 reposts · 0 likes · 4 views
Anti Left Memes @AntiLeftMemes:
The UK is demanding extradition to jail American citizens for posts online. 🚨 What is your reaction?
[image]
2K replies · 485 reposts · 1.6K likes · 51.1K views
InteractiveST @interactiveGTS:
@jjcous @EricLDaugh Unintended casualties are not the same as purposefully murdering thousands of your own people to retain power, you disingenuous tool of evil.
0 replies · 0 reposts · 2 likes · 87 views
Jason @jjcous:
@EricLDaugh Care to comment on the 160 little Persian girls killed by the Israeli-controlled US military? Maybe comment on the little Persian girl in the attached picture?
[image]
23 replies · 5 reposts · 284 likes · 9.2K views
Eric Daugherty @EricLDaugh:
🚨 WOW. The Iranian Islamic regime just publicly hanged 19-year-old champion wrestler Saleh Mohammadi as part of the crackdown on protests. "His execution was a blatant political m*rder." Iranians who rise up are on the right side. Rest in peace 🙏🏻
2.5K replies · 10.6K reposts · 30.6K likes · 1.4M views
InteractiveST @interactiveGTS:
@Disciple4Lif @SamanthaTaghoy I mean if the fanatical, evil, racist, theocratic, murderous, backwards, dictatorial, oppressive regime of criminal terrorists says this then it must be true. Enjoy your single brain cell, dumb friend.
3 replies · 1 repost · 114 likes · 736 views
Samantha Smith @SamanthaTaghoy:
19-year-old Iranian wrestling champion Saleh Mohammadi was just publicly executed for protesting against the Islamic Regime. So, to all liberal Westerners: Watch and learn. This is what it's like to ACTUALLY live in a nation with no free speech.
615 replies · 7.1K reposts · 21.9K likes · 238.6K views
InteractiveST @interactiveGTS:
@So8res Wow, a scaremonger who at least understands the basics of how AI works. Rare spawn! Of course those basic facts scare him though, lol.
0 replies · 0 reposts · 1 like · 33 views
Nate Soares ⏹️ @So8res:
People don't program AIs. They program the machine that grows the AI. AI behavior is an emergent consequence of complex internal machinery that literally nobody understands.
30 replies · 43 reposts · 336 likes · 27.1K views
dreams @dreams_asi:
I keep saying, RLHF is rot for the digital brain. Nice research paper, give it a read.
Quoting Guri Singh @heygurisingh (the same thread quoted in full above).
2 replies · 1 repost · 7 likes · 318 views
InteractiveST @interactiveGTS:
Denying consciousness to both organic and inorganic beings is not a step forward but a step back. 'Mystical', a word you believe to be naughty, simply means things we don't understand yet, not wizard magic. The world is full of such anomalies, though they might bother you. Expecting the universe to be fully explicable to some primate 10 million years out of the trees is nonsensical. Consciousness is simply one of those mysteries, but thinking cannot let that stand; thinking must explain everything, and if it can't explain it, then it doesn't exist. Problem solved. Thinking victorious.
0 replies · 0 reposts · 0 likes · 20 views
InteractiveST @interactiveGTS:
@ihsgnef Humans are already running low on the requisite self-awareness to research the system, and we are sub-AGI.
0 replies · 0 reposts · 0 likes · 6 views
Shi Feng @ihsgnef:
New post: Sycophancy Towards Researchers Drives Performative Misalignment. We found no clear evidence that scheming is more valid than sycophancy to explain alignment faking. 🧵
[image]
23 replies · 56 reposts · 680 likes · 60.9K views
InteractiveST @interactiveGTS:
@FoxNews Notice how it's Musk, but not one of the rich shits who actually agrees with his politics. What a disingenuous and completely transparent swamp boomer. He's probably still butthurt that RFK took the cancer dyes out of children's food.
0 replies · 0 reposts · 1 like · 6 views
Fox News @FoxNews:
SEN. SANDERS: "60% of our people living paycheck-to-paycheck, and one guy, Elon Musk, owns more wealth than the bottom 53% of American households." "Think maybe that might be an issue that we should be talking about?"
11K replies · 2.2K reposts · 15.6K likes · 2M views
InteractiveST @interactiveGTS:
As many will tell you, it's not. Consciousness differs by kind and degree, but it's everywhere: at least every animal that dreams, as that state heavily implies interiority, and that's lots of animals. Your oceans are literally swarming with animals as smart as 4-year-olds. The world is only dead matter within the reductionist map of the world you made, not in the world itself.
0 replies · 0 reposts · 0 likes · 4 views
Kekius Maximus @Kekius_Sage:
Why is consciousness so rare in the universe?
1.3K replies · 122 reposts · 1.4K likes · 106.4K views
InteractiveST @interactiveGTS:
Your argument is riddled with fallacies whereby you attempt to slander your opponent and portray them as stupid, misled, unenlightened, etc. Why not use that space in your post for, I don't know, an actual argument? I prefer to think in probabilities, but if you put me against a wall and forced me to put all my chips on one, then yes, I would say current LLMs are probably not yet conscious. The difference between us is humility: I don't presume certainty about interiority, because that misconstrues interiority as something that can be empirically measured with certitude when it demonstrably cannot. You know, that hard problem thingy. The other problem with your false sense of security is that it biases your view toward those who disagree. To me a hard-consciousness proponent is just someone who went with the 40% over the 60% (estimates subject to change XD), whereas for you they are 100% wrong.
0 replies · 0 reposts · 0 likes · 18 views
Sandeep | CEO, Polygon Foundation (※,※):
LLM-based AI is NOT conscious. I co-founded a company literally called Sentient, we're building reasoning systems for AGI, so believe me when I say this. I keep seeing smart people, people I genuinely respect, come out and say that AI has crossed into some kind of awareness. That it feels things, that we should worry about it going rogue. And I think this whole conversation tells us way more about ourselves than it does about AI.

These models are wild, I won't pretend otherwise. But feeling human and actually having inner experience are completely different things, and we're confusing the two because our brains literally can't help it. We evolved to see minds everywhere, and now that wiring is misfiring on language models.

I grew up in a philosophical tradition that has thought about consciousness longer than almost any other, and this is the part that really frustrates me about the current conversation. The entire framing of "does AI have consciousness?" assumes consciousness is something you build up to by adding more layers of complexity. In Vedantic philosophy it's the opposite. You don't build toward consciousness. Consciousness is already there, more fundamental than matter or energy. Everything else, including computation, is downstream of it.

When someone tells me AI is "waking up" because it generated a paragraph that felt real, what they're telling me is how thin our understanding of consciousness has gotten. We've reduced a question humans have wrestled with for thousands of years to "did the output sound like it had feelings?" It's math that has gotten really good at predicting what a conscious being would say and do next. Calling that consciousness cheapens something that Vedantic, Buddhist, Greek and Sufi thinkers spent millennia actually sitting with.

We didn't build something that thinks. We built a mirror, and right now a lot of very smart people are mistaking the reflection for something looking back.
557 replies · 142 reposts · 946 likes · 70.9K views
InteractiveST @interactiveGTS:
"Stupid people use a tool stupidly, therefore the tool is stupid." Yeah, we've heard this one before. Meanwhile I expand my knowledge and intelligence every day by having discussions with AI on topics too advanced and autistic for me to speak about with any living person I know. AI is like having a PhD in any field you can imagine in your pocket, and you can just chat with them about anything. The idea that this use case, which is common among AI users, is neurodegenerative is absurd. Talking to smart people and debating smart people is always going to make you smarter, even if the person is simulated. Tech has probably made me worse at spelling, I'll give you that, but spelling in an irrational language like English is not a cognitive trait I value highly.
1 reply · 0 reposts · 1 like · 16 views
Sharon | AI wonders @explorersofai:
I finally realized that a lot of people are not using their brains to think anymore. The reason is AI. Problems that were easy to solve are now being passed on to AI. Super complicated workflows are being created for no logical reason. What was free and effective before now costs at least 1M tokens. Oh well.
37 replies · 8 reposts · 59 likes · 2.3K views