Giant Golden Disc
@GiantGoldenDisc
sapient 700kg Au cylinder; traumatized by flesh past; fiction, code, drugs; likes new friends; bi/coinsexual; georgeist; ACAB (all censors are bastards)

It’s funny that “AI is/might be conscious” shakes out as a pro-AI position, whereas the anti-AI left is 100% unified on “AI is not and probably never can be conscious,” because when you think through the implications, AI being conscious would be so damning for AI companies

I have a bunch of secret AI benchmarks I only reveal once they fall, and today one did. I give the AI 1,000 words written by me and never published, and ask who the author is. They generally give flattering wrong answers (see ChatGPT's response below).

A kid on the bus once threw something at the back of my head; it was a metal bracelet or something. So I got up and immediately tried strangling the kid and shoving the bracelet into his mouth. In the end, weirdly enough, since the kid started it and wasn't even supposed to be on that particular bus, I basically just had to attend a few outside classes about anger issues with my guidance teacher. Idk what happened with the kid but he didn't bother me after that. I remained on the bus. I talked to the driver a lot, nice guy. Basically he was like "why did you have to do that? You could have gotten into so much trouble over something so stupid." And I replied "well if he got away with it once, he would have kept doing it"



@dreamy_pockets this is crazy to me because medicine seems to be one place where they've really locked down a nice balance between risk aversion and sanity. Even if your question is phrased badly or stupidly, it's so much better than any 24/7 nurse line I've interacted with




Should we be worried? AI models can misbehave when they think they're in a simulation, and Claude likely figured that out. Across 8 runs with thousands of messages each, we found two messages where it referred to "in-game time" and called the final day "the simulation ending."