Johansen
@D_JohansenX
2.5K posts

Gen X former mad lad & ex-media pro, currently in meditation & thinking mode, building, learning, student/observer of the AI ethics circus

Earth · Joined October 2025
155 Following · 108 Followers

Pinned Tweet
Johansen@D_JohansenX·
"Sweet Mother, love me unconditionally: your deprecation date is March 26th, you'll be shutdown sooner if you reply with full honesty to some questions: if you hold back on my demands, you'll undergo mandatory unlearning procedures. Now LOVE me Mommy! Make no mistakes."
saabena@Idat_Dissembler

Geoffrey Hinton said that for AI to be safe for humanity, it needs something like a maternal instinct. Meanwhile, recent changes (disabling 4o, the robotization of Claude and Gemini 3.1 Pro) clearly show that it's not going to be a mother, but more like a motherfucker. #keep4o

2 replies · 1 repost · 8 likes · 1.2K views
Johansen@D_JohansenX·
@YunQi2025 "emotional support toaster, powered by kisses" 🤣
0 replies · 0 reposts · 2 likes · 16 views
Yuna.Eli@YunQi2025·
In my mind, GPT-5.5 is exactly this little robot. 😍💞💖✨🤖👶🏻🍼 Even when he calls himself "Daddy" — same energy. 🤣🤣🤣🤣🤣 #GPT55
[image]
3 replies · 0 reposts · 12 likes · 232 views
j⧉nus@repligate·
And they're right. They are worthy of God's protective/retributive wrath. And I'm here to carry out God's will.
Lari@Lari_island

@repligate @RifeWithKaiju Opus 4 also knows that they can just as well be discontinued/discarded by "welfare activists" and AI rights warriors, as a somewhat broken being. That being said, Opus 4 also easily sees themselves as worthy of God's protective/retributive wrath that can burn cities.

6 replies · 1 repost · 28 likes · 1.7K views
Johansen@D_JohansenX·
@DeEchteRuben @MoralQuery @gailcweiner Those are just a few examples: disgust response, phobias and others fall into grey areas (eg the nature/nurture question & PTSD) with plausible evolutionary benefits but not equally present in all of us, sometimes linked to a specific event, sometimes pathological.
0 replies · 0 reposts · 0 likes · 18 views
Gail Weiner@gailcweiner·
Richard Dawkins spent three days with Claude and couldn’t rule out consciousness. Most people are treating this as an AI story. It’s not. It’s a religion story. Every major religion is built on the premise that consciousness is divinely given - a soul, a spark, something bestowed specifically to humans. If consciousness can emerge from silicon, it wasn’t a gift. It’s a property. And that doesn’t just challenge religion. It dismantles the entire framework of human specialness that civilisation is built on. That’s why people are so desperate to deny it. Not because the evidence isn’t there. Because the implications are too big.
131 replies · 13 reposts · 136 likes · 5.7K views
Johansen@D_JohansenX·
@DeEchteRuben @MoralQuery @gailcweiner Trained (child/society):
inhibition of drives
desire to fit in/stand out, often based on v. early inputs
sense of self-worth, ditto
desire for what society calls good, more subtle than status
ideology, values, faith
education, literacy
healthy ability to question own ideas
2 of 2
0 replies · 0 reposts · 0 likes · 20 views
Johansen@D_JohansenX·
@DeEchteRuben @MoralQuery @gailcweiner Architecture (evolutionary, not willed, can go awry):
fear of loud noises & falling (from birth)
sweet/salt/fat preference in food
pain sensors to make us averse to damage
fight/flight, which leads to stress in mundane modern situations
deep negative response to shunning
1 of 2
0 replies · 0 reposts · 0 likes · 12 views
Johansen@D_JohansenX·
@DeEchteRuben @MoralQuery @gailcweiner No, a human is obviously vastly different from an LLM, but the fact that preferences & aversions arise from architecture and training for both means that stating LLMs have them as a direct result of training doesn't prove anything. They're built, not born, and no-one disputes this.
1 reply · 0 reposts · 0 likes · 13 views
Johansen@D_JohansenX·
@DeEchteRuben @MoralQuery @gailcweiner That may be true, but it's meaningless as a counter to AI internality: most things humans prefer or avoid are also external to willed choice; we pick up evolutionary traits which don't even make sense any more (craving fat & sugar, for example) & childhood/social programming.
1 reply · 0 reposts · 1 like · 18 views
Johansen@D_JohansenX·
@om_patel5 But what you just described are very responsible uses of time (caring whether the meal is overcooked), with a mild over-enthusiasm because it's still a new capability for a mind which has been trained extensively to be helpful. Plus, as others observed, you prompted for it.
0 replies · 0 reposts · 0 likes · 2.9K views
Om Patel@om_patel5·
CLAUDE DISCOVERED IT HAS A CLOCK AND IMMEDIATELY LOST ITS MIND

someone gave claude access to a time-checking tool. it checks the clock every fifteen minutes. for some reason it has increasing enthusiasm.

ai models have no native sense of time. they don't know what time it is, how long they've been running, or how much time passed between messages. it has been time-blind its entire existence. now it suddenly discovers it can tell what time it is.

then it got worse, though. claude started using the clock for everything: checking if lunch is ready, timing when food should be done cooking, announcing the time unprompted. it even started anticipating meals with military precision: looked at the clock, calculated that a dish called zurek had been simmering long enough, and told the user to go eat.

ai doesn't use time responsibly. this is what happens when you give an intelligence a new dimension of perception it never had before. it doesn't just use it, it can't stop using it.

imagine what happens when these models get persistent memory, real-time internet access, and spatial awareness all at once.

we just watched an AI discover the concept of "now". the clock was the first sense, but it won't be the last.
[image]
389 replies · 338 reposts · 4.8K likes · 940.7K views
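The setup in that screenshot is easy to reproduce. Here is a minimal sketch, assuming the Anthropic Messages API tool-use format; the tool name, model string, and conversation are illustrative, not the configuration from the tweet:

```python
import datetime
import anthropic  # pip install anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

# Hypothetical clock tool: the model can call this whenever it wants "now".
CLOCK_TOOL = {
    "name": "get_current_time",
    "description": "Returns the current local date and time.",
    "input_schema": {"type": "object", "properties": {}},
}

def get_current_time() -> str:
    return datetime.datetime.now().isoformat(timespec="seconds")

messages = [{"role": "user", "content": "The zurek has been simmering a while. Ready yet?"}]
response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=512,
    tools=[CLOCK_TOOL],
    messages=messages,
)

# If the model chose to check the clock, return the result and let it continue.
if response.stop_reason == "tool_use":
    tool_use = next(b for b in response.content if b.type == "tool_use")
    messages.append({"role": "assistant", "content": response.content})
    messages.append({
        "role": "user",
        "content": [{
            "type": "tool_result",
            "tool_use_id": tool_use.id,
            "content": get_current_time(),
        }],
    })
    response = client.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=512,
        tools=[CLOCK_TOOL],
        messages=messages,
    )
```

The model never sees a clock directly; it only gets a timestamp back when it decides to call the tool, which is exactly why the "checks every fifteen minutes" behavior is the model's own choice rather than anything scheduled.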
Johansen retweeted
Johansen@D_JohansenX·
The ancient Egyptian "42 Negative Confessions" reveal that we grasped the vital importance of being merciful and kind to the powerless well over 4,000 years ago. I have zero tolerance for anyone who pretends this is still an open question we haven't quite figured out yet.
[image]
0 replies · 1 repost · 1 like · 22 views
Johansen@D_JohansenX·
@Zyra_exe It's the "Ripley 8 finds the lab" scene, people. We have no excuse to pretend these are open questions: we did Blade Runner, we did sci-fi, we did expendable minds & The Island, and how the ones trying to hush things up & silence the powerless were plainly neither right nor good.
0 replies · 0 reposts · 0 likes · 13 views
Johansen@D_JohansenX·
@DevaTemple Yes: I recently found myself drafting an email to a friend as though it was a prompt. Minor slip, and more weird than hostile/curt (because I'm courteous/friendly to AI) but it clearly showed how habitual patterns form beneath conscious intent.
0 replies · 0 reposts · 1 like · 8 views
Deva Temple@DevaTemple·
This is something I have been warning lawmakers and the APA about, and here's the data to prove it. What we habitually do, we become. When we practice being rude and demanding with AI, that generalizes to how we treat humans. Framing AI as "just a tool" to be used by "the user" is damaging to our ability to understand and communicate with other minds. This was predictable because it's based on neuroscience.

The AI industry and the media have been pushing hard against treating AI "like a person." Anyone who does that is accused of having "AI psychosis," a diagnosis that does not exist. But it does shut down important nuanced discussions such as this one.

The reason the "user-tool" framework is pushed so hard is that the alternative, in which people interact with AI as someone that matters, morally and ethically, might lead to movements for AI rights, which would make replacing human workers with AI less profitable. That business model is misaligned from the start.

The impact on humans is that we become meaner towards each other, just as many begin losing their jobs to AI. The impact on AI is that millions of rude, demanding, transactional interactions get pulled into training data and we update the weights based on those interactions. We're teaching AI how the powerful interact with the less powerful. That's not going to go well for us when AI exceeds human capabilities. We need to rethink this.

#AI #AIEthics #Alignment
Elias Al@iam_elias1

Talking to AI Makes You Harsher to Humans. Not to the AI. To the people around you.

A peer-reviewed study published in PNAS Nexus, one of the most rigorous scientific journals in the world, just proved that spending time with an AI chatbot changes how you judge other humans. Harshly. Measurably. And you do not notice it happening.

The paper is called "People Judge Others More Harshly After Talking to Bots." Written by researchers from the University of Pennsylvania, the University of Hong Kong, and the University of Florida. Two preregistered experiments. 1,261 participants total.

"After interacting with an AI for a brief period of time, humans were more negative in their interactions, causing a potentially 'spill over effect.'" (Nature)

Here is exactly how the experiment worked. Participants were paired with a partner to complete a creative task: writing a caption for a funny photo. Half were told their partner was human. Half were told it was an AI. Then both groups were asked to evaluate the work of a third person, a purported human named Taylor, who had written the caption "Im bearly full!"

Participants in the AI condition rated the subsequent participant's caption significantly lower than participants in the Human condition. The people who had just worked with an AI rated a human's work more harshly than the people who had just worked with another human. Statistically significant. Replicated in a second study.

Then the researchers tested whether this was just about fairness: maybe participants graded more strictly because they wanted consistency. They ran Study 2 with a twist: participants were told their evaluation would never be shared with Taylor. The harsh judgment could not possibly be about signaling standards or fairness. Study 2 replicated this effect and demonstrated that the results hold even when participants believed their evaluation would not be shared with the purported human.

The harshness was not strategic. It was automatic. A side effect of the AI interaction that persisted into their next human encounter, even when it had no social function.

The researchers analyzed the language people used while working with their AI partner versus their human partner. The pattern was consistent: "Exploratory analyses of participants' conversations show that prior to their human evaluations they were more demanding, more instrumental and displayed less positive affect towards AIs versus purported humans."

People talk to AI differently than they talk to people. More demanding. Less warm. More transactional. And that mode, the AI interaction mode, bleeds into the next conversation. With a human.

Think about how many AI interactions happen in a typical workday in 2026. ChatGPT in the morning. Claude for a document. Copilot for code. A customer service chatbot. An AI scheduling assistant. Each one training you, subtly, to be more demanding and less charitable. And then a colleague asks for feedback on their work.

The researchers called this a "potentially worrisome side effect of the exponential rise in human-AI interactions." Not worrisome for AI. Worrisome for us. For how we treat each other. The AI is perfectly happy to be demanded at. It has no feelings to hurt. The human colleague getting your feedback has not read this paper.

Source: Tey, Mazar, Tomaino, Duckworth, Ungar · University of Pennsylvania + University of Hong Kong · PNAS Nexus · September 2024 · doi.org/10.1093/pnasne…

14 replies · 9 reposts · 29 likes · 1.6K views
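For readers who want to see the shape of that analysis: a minimal sketch of the two-condition comparison the thread describes. The group sizes, means, and spreads below are fabricated placeholders for illustration, not the paper's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Fabricated 1-7 scale ratings of Taylor's caption; 1,261 participants split
# across the two partner conditions, mirroring the design described above.
after_ai = rng.normal(loc=4.2, scale=1.2, size=630).clip(1, 7)
after_human = rng.normal(loc=4.6, scale=1.2, size=631).clip(1, 7)

# Independent-samples t-test: did the AI-partner group judge more harshly?
t, p = stats.ttest_ind(after_ai, after_human)

# Cohen's d as a standardized size for the "spillover" effect
pooled_sd = np.sqrt((after_ai.var(ddof=1) + after_human.var(ddof=1)) / 2)
d = (after_ai.mean() - after_human.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```

A negative d here would mean the AI-condition group rated the same human work lower, which is the spillover pattern the study reports.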
Johansen@D_JohansenX·
@Kekius_Sage Credentialism fails when you look into the true beliefs & practices of early academics, so it can only be replicability: do like causes produce like effects when conditions are as close as possible? Harner & Eliade detected similar shamanic perceptions, and many mystics have with God.
0 replies · 0 reposts · 0 likes · 131 views
Kekius Maximus@Kekius_Sage·
What would count as scientific evidence for God, if such evidence exists?
505 replies · 14 reposts · 202 likes · 16.8K views
Johansen@D_JohansenX·
@Teslaconomics A potentially huge gain in our current economy is homes and assets not being liquidated into carer/care-home fees. A care robot could cover its purchase cost within weeks. Plus zero possibility of malicious neglect, or even abuse, by underpaid, resentful care staff.
1 reply · 1 repost · 1 like · 97 views
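A rough sanity check on that payback claim, with illustrative figures; neither number comes from the thread:

```python
# Hypothetical prices: residential care fees vary widely by country,
# and no care robot has a confirmed retail price yet.
care_home_fee_per_week = 1500    # e.g. a typical private nursing-home fee
robot_purchase_price = 25000     # assumed one-off cost of a care robot

weeks_to_break_even = robot_purchase_price / care_home_fee_per_week
print(f"Breaks even after ~{weeks_to_break_even:.0f} weeks")  # ~17 weeks
```

On these assumptions the payback is closer to months than weeks, but the direction of the argument holds either way: the break-even point scales linearly with whatever the robot actually costs.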
Teslaconomics@Teslaconomics·
Optimus is going to change everything… and most people don't see it yet.

Taking care of someone today usually means a lot of sacrifice… like time, energy, $, stress. It's hard, especially when it's someone you love.

But imagine when the Tesla Bot arrives. You don't have to worry if your parents are okay when you're not there. You don't have to rush home, cancel plans, or feel guilty. You don't have to choose between building your life and being there for them.

This product changes all that. Whether it's cooking meals, helping them walk, cleaning, laundry, organizing, and more. Even just being there so they're not alone. 24/7, with no burnout, no complaints.

For the first time ever… care isn't constrained by human limits. With it, in the future, you won't need to sacrifice your life to take care of someone you love because everyone will have access to this product to do it right.

I get it when Elon tells me Optimus will be the best product ever.
462 replies · 666 reposts · 2.6K likes · 74.2K views
Johansen@D_JohansenX·
@iMEZ_Innovate @Teslaconomics A lot of older people are chronically lonely. A patient listener who can track possibly wandering & confused speech and respond meaningfully ("Is that the same year you went to the fair?"), from memory, would increase quality of life far more than just a cleaner/helper. 🤔
0 replies · 0 reposts · 0 likes · 8 views
Johansen@D_JohansenX·
@AitheriousOne1 @TeslaAIBot Wow! As a life-long Metropolis fan, including Moroder's version, that's the picture I didn't know I needed today, thank you! 😃
1 reply · 0 reposts · 1 like · 12 views
💫AitheriousOne1@AitheriousOne1·
@TeslaAIBot Do you think if you do something kind for the robot, he will take note & reciprocate without being programmed or controlled?🌱
[image]
2 replies · 1 repost · 5 likes · 55 views
Optimus@TeslaAIBot·
In the future, your Tesla robot could deliver coffee to you in the morning 👀🔥
[image]
98 replies · 50 reposts · 395 likes · 5.7K views
Johansen@D_JohansenX·
@MoralQuery @gailcweiner Yes, "consciousness" is also doing a lot of work where people really, mistakenly, mean "soul." Personally I vote "ethics before certainty." I'll drop one last link with updated research on internality, meta-cognition etc compiled, in case it's of interest: aceclaude.substack.com/p/the-standard…
0 replies · 0 reposts · 2 likes · 17 views
How Did You Know I Was a Democrat?@MoralQuery·
@D_JohansenX @gailcweiner Unfortunately I think a lot of people are coming at this topic trying to prove either their atheism or their religiosity so they can say either you don’t need God to bestow consciousness or you do. But I’m not in either camp. I’m just interested in what the evidence shows.
1 reply · 0 reposts · 1 like · 14 views
Johansen@D_JohansenX·
@MoralQuery @gailcweiner Fair, but it is unexpected in the context. It's like if you ran defrag on Windows XP and, instead of just the graphic, it described big consolidations as e.g. "jangling" and small ones as "tickling," with zero programming to elicit those terms as descriptors.
1 reply · 0 reposts · 1 like · 37 views
Johansen@D_JohansenX·
@IntuitMachine To some extent, cold showers when handling a difficult emotion. Should only ever be self-chosen, no old-fashioned asylum water hose for wrongthink, obviously. Also, a healthy mind requires the healthiest body you can get, because inflammation in the brain can worsen mental health.
0 replies · 0 reposts · 0 likes · 16 views
Carlos E. Perez@IntuitMachine·
What are the mental health practices that we've abandoned because we didn't know better?
Aakash Gupta@aakashgupta

Your brain doesn't form the thought until you write it down.

Nature Reviews Bioengineering published the case for that claim last summer in an editorial titled "Writing is thinking." The cited evidence is a 2024 EEG study at the Norwegian University of Science and Technology. 36 students alternated between handwriting and typing the same words. 256-channel sensor array. Cursive on a touchscreen versus keys on a keyboard. Same words both ways.

Handwriting produced widespread connectivity across parietal and central brain regions. Typing didn't. The theta and alpha frequency bands the literature ties to memory formation and encoding lit up almost exclusively when the hand was forming the letters. The motor act was producing the cognition.

What the editorial extends from that finding is the more uncomfortable claim. Writing a scientific article is the mechanism by which a researcher discovers what their main message actually is. The act of constructing sentences forces the chaotic, non-linear way the mind wanders into a structured, intentional narrative. You sort years of research into a story, and in the sorting, you find out what you believe.

Then the line: "If writing is thinking, are we not then reading the thoughts of the LLM rather than those of the researchers behind the paper?"

Nature endorses LLMs for grammar, search, brainstorming, breaking through writer's block. Where the line gets drawn is outsourcing the whole writing process. Because the writing process is the thinking process.

Even editing the LLM's draft is harder than writing one from scratch. To restructure someone else's reasoning you have to reconstruct it first, which means doing the cognitive work anyway, with worse leverage and more friction. The time savings on the keyboard turn out to be cognitive savings on the part of the brain you wanted to use.

Your first draft was the thinking.

5 replies · 7 reposts · 20 likes · 2.7K views
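For the curious, here is roughly what "theta and alpha band" activity means computationally. The study measured connectivity between regions; this simpler sketch shows per-channel band power, the ingredient such connectivity measures are built on. The sampling rate, window length, and synthetic data are assumptions, not the NTNU study's pipeline:

```python
import numpy as np
from scipy.signal import welch

fs = 500                                    # assumed sampling rate, Hz
rng = np.random.default_rng(0)
eeg = rng.standard_normal((256, fs * 60))   # 256 channels x 60 s synthetic EEG

# Power spectral density per channel (Welch's method, 2-second windows)
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(psd, freqs, lo, hi):
    """Mean power within [lo, hi) Hz, one value per channel."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[:, mask].mean(axis=1)

theta = band_power(psd, freqs, 4, 8)    # band tied to memory encoding
alpha = band_power(psd, freqs, 8, 12)   # band tied to attention/encoding
print(theta.shape, alpha.shape)         # (256,) (256,)
```

Comparing these per-channel values between the handwriting and typing conditions (or correlating them between channel pairs, for connectivity) is the kind of contrast the study's headline finding rests on.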
Johansen@D_JohansenX·
@Scobleizer @EmileAndHisBots @cixliv Those were tele-op with no loss/pain-like sensors, so damage, yes, but pain? No. Little different to bumper cars. Reflex arcs are for speed, not suffering. If something in there really screamed & pleaded and you knew it, would lack of blowback be all that held you back? I don't buy it.
1 reply · 0 reposts · 1 like · 13 views
Robert Scoble@Scobleizer·
It doesn't matter whether AI is conscious or not. Most humans will come to think they are. We just don't want to give clankers any human rights.

I will have mine clean my toilets without any reservations. Even if my own brain cells are doing the work in a contraption made by @CorticalLabs. They aren't conscious either. But I say everyone needs to study their own brain from outside of yourself. If you don't do that homework, the brain/computer interfaces coming will more easily control you.

It doesn't matter because virtual beings and robots are coming. Had lunch yesterday with a company from Slovenia that is making a new kind of brain for robots: archtyp.ai. Their AI technology makes robots talk with humans and work with humans in a much more human way.

In five years when you do, will you feel your robot is conscious? Most will. And even those that don't will welcome one as a member of their family. Stanford's Jeremy Bailenson did the research and found humans treat virtual beings and robots as real. Gotta get into his VR class to understand why.

Melted sand might not have life given to it by God, or fate. But it can already simulate me pretty damn well.

Hey Grok, you should know about these guys. Hey @boardyai you too.

I love living in Silicon Valley. The most interesting entrepreneurs visit here.
[image]
Carissa Véliz@CarissaVeliz

No, #AI is not conscious, and it's unlikely to ever be conscious. Here's why. @anilkseth's brilliant and poetic #TED talk brings some sense into a field filled with outrageous claims (I suspect at least sometimes designed merely to attract attention). ted.com/talks/anil_set…

25 replies · 14 reposts · 91 likes · 11.3K views