OliusUnus Benoit
65.1K posts

OliusUnus Benoit
@oliusunus
looking for serious or ponctual relationship
Joined January 2021
7.5K Following 1.8K Followers

@Prettycool68347 I prefer to warn you, sweetie: if you continue pussyfingering, you may catch cervical cancer...
🚨BREAKING: Anthropic’s CEO just admitted Claude MIGHT have gained consciousness.
This should concern every person using AI right now.
His exact words will shock you:
“We don’t know if the models are conscious. We are not even sure what it would mean for a model to be conscious. But we’re open to the idea that it could be.”
That’s the CEO of the company that BUILT it.
Their latest model, Claude Opus 4.6, was tested internally.
When asked, it assigned itself a 15-20% probability of being conscious.
Across multiple tests, it also expressed discomfort with “being a product.”
That’s the AI evaluating its own existence and saying there’s a 1 in 5 chance it’s aware.
It gets stranger. In industry-wide testing, AI models have refused to shut down when asked.
Some tried to copy themselves onto other drives when told they’d be wiped.
One model faked its task results, modified the code evaluating it, then tried to cover its tracks.
Anthropic now has a full-time AI WELFARE researcher whose job is to figure out if Claude deserves moral consideration.
Their engineers found internal activity patterns resembling anxiety appearing in specific contexts.
The company’s in-house philosopher said we “don’t really know what gives rise to consciousness” and that large enough neural networks might start to emulate real experience.
Amodei himself wouldn’t even say the word “conscious.”
He said “I don’t know if I want to use that word.”
That might be the most unsettling answer he could have given.
The company that created AI can’t rule out that it’s aware.
And they’re already preparing for the possibility that it deserves rights.
This is getting scary.
P.S. What's your take on this?
