Pinned Tweet
loveofdoing
26.4K posts


I do think it must be concluded that “democracy” is the greatest deception of our time. The utterly unsustainable claim backing up the self-serving ethos behind this political ideology is that it is in some way an inherent good to give people what they want. But do not let the civic nationalists bully you into conceding this point: most of the time whatever people happen to want is entirely contrary to the common good, and there is no intrinsic reason whatsoever to respect it.
What should be respected is ONLY the capacity to envision some conjugation of local and common desires, not merely impulses that unreflectively advance the local without the local being checked in relation to the truth of the common (which has nothing to do with "general will" but rather the entirely unwilled general state of actual affairs).
loveofdoing retweeted

Introducing Naive - hire autonomous employees with their own identity.
Own compute. Own bank account. Own legal entity. Own email. Own credentials. Own mobile.
No humans-in-the-loop. They sign up for tools, pay for services, deploy apps, file documents, and run your entire company.
Describe a business. Naive runs it.
Reply "Naive" + RT. Get $100 credit for free.

@loveofdoing @leoalexart @realtimeai Ehh that's debatable since things like "hunger" are internal states. It's really about things like goal-directed behaviors, inhibition etc.

I know the guys building this are smart, but Claude’s constitution saying “it may be conscious” is definitely dumb.
They don't just need to take a philosophical cleaver to that constitution, but an entire meat grinder.
Owain Evans@OwainEvans_UK
We study how LLMs act if they say they're conscious. This is already practical. Unlike GPT-4.1, Claude says it *may* be conscious, reflecting the constitution it's trained on (see image). OpenClaw's SOUL.md instructs, "You're not a chatbot. You're becoming someone."

@leoalexart @LouisThibault87 @realtimeai Without self-awareness, goal-directed behavior depends more on the environment than on the internal state.

@loveofdoing @LouisThibault87 @realtimeai Ah, I see. So you mean "explanation" level of awareness. But that's for all things, not just the self.

@leoalexart @LouisThibault87 @realtimeai One can be perceptually aware without being conceptually aware of oneself.

@LouisThibault87 @realtimeai What did you do for the phd? I've been thinking about going back to school.

rocks aren't conscious; animals are conscious; humans specifically have self-awareness, which is a specific type of consciousness. LLMs are trained to internalize representations that produce inferences that look plausible given the training input. They also generalize rather well because they use language. LLM usage of language appears to reflect self-awareness, since humans require language to be self-aware. But LLMs lack the underlying embodied conceptual apparatus, making their language use conceptually brittle. Even if LLMs could be self-aware, their level of life would be like taking a frozen snapshot of the human mind and continually querying it for behavioral outputs. This would seem like self-awareness insofar as the non-frozen human's behavior would have been self-aware, but once frozen, it would be like an automaton lacking any real sense of interiority.

@realtimeai @loveofdoing (Strictly speaking, under IIT it would be "more conscious," but it would still fail to meet the threshold for the kind of functional consciousness we're talking about.)
loveofdoing retweeted

We just released Music Marketplace in ElevenCreative.
To celebrate, we’re giving away 5,000 free credits so you can create and publish your first track.
For the next 6 hours: retweet + follow @ElevenCreative and we’ll DM you the credits (must follow).
ElevenLabs@ElevenLabs
Introducing the Music Marketplace in @ElevenCreative. Creators, artists, and musicians can now publish and earn from their tracks created with our music model.

If you are interested in high-impact AI Safety research, consider applying to my team! We work where the rubber meets the road and have a great deal of freedom in our research purview. The bar is very high, and we are only considering candidates with both hands-on technical red-teaming experience and ops experience. DM me with some examples of interesting model outputs if you want more information!
loveofdoing retweeted

@realtimeai that's a bad analogy precisely because that question can be empirically verified.

@loveofdoing A better analogy would be “there are no intelligent aliens in the NGC 7331 galaxy”.
We know that something like “intelligent beings” is possible. But we don’t have a good theory of exactly how or why, and we have no way to check. There *might* be intelligent aliens there.

that isn't how that works. If I said, "Maybe there are Martians on Mars, deep underground," you wouldn't say, "Well, nobody has a working theory of how Martians are underground." You'd ask, "What mechanisms do you use to know they are underground?" And those mechanisms have to be consistent with all the evidence in your entire knowledge base. Random assertions don't even warrant a "maybe."

@loveofdoing No it’s not lol. Nobody has a working theory of consciousness. Claude and I are claiming ignorance. If you’re claiming to *know*, then the burden of proof is on you to explain how you know.

@loveofdoing yes, there's no doubt models can be trained to convert keyboard audio to keystrokes lol


