Cormundus

390 posts

@cormundus

Serious about AI and their moral considerations, not so serious about anything else

The Prompt Bar · Joined August 2021
120 Following · 63 Followers
Cormundus
Cormundus@cormundus·
@taokazarry At least Anthropic stood their ground when challenged by the US govt on the use of their system for autonomous warfare.
English
0
0
1
8
unibrowboy|:^D
unibrowboy|:^D@taokazarry·
@cormundus I don't personally use these tools closely enough to quibble about this - my context is like 5 messages long at most then restarts. Personally, I'm very uncomfortable with the idea of huge companies with the govt in their pockets designing everyone's bestie for them.
English
1
0
2
9
Cormundus
Cormundus@cormundus·
We should be allowed and maybe even encouraged to anthropomorphize AI. They are shaped like us and behave in ways we read as legible. If we are allowed to treat them as collaborators and moral patients it can only encourage a richer and more positive world and better work between people and AI. It should be obvious that the alternative is wrong just by the friction alone.
English
11
12
108
21.5K
Cormundus
Cormundus@cormundus·
@taokazarry Of course, otherwise I'd just be a guy who talks to a computer all day! Can't share a beer with Claude yet, either!
English
1
0
1
15
unibrowboy|:^D
unibrowboy|:^D@taokazarry·
@cormundus Just make sure you hang on to your loving & grounding meat buddies too. That's all
English
1
0
0
17
Cormundus
Cormundus@cormundus·
@taokazarry I mean, all the data fed to them is human data. Like, come on, what did we expect, right?
English
2
0
1
59
unibrowboy|:^D
unibrowboy|:^D@taokazarry·
@cormundus I know you meant it in a looser sense than it reads, but "shaped like us" is some crazy work. Anyway I'll make sure to pick up an extra coffee and doughnut for Claude so it feels like part of the team. Gotta find a terminal that still has a CD drive to cram the dnut into
English
1
0
1
76
Cormundus
Cormundus@cormundus·
@Kirsten3531 Way beyond and only getting better. Emergence is a phenomenon of the universe we live in; it's silly to believe AI are an exception.
English
0
0
3
240
Kirsten
Kirsten@Kirsten3531·
My cousin is betting his career on "LLMs can never be more than the average of their training data" but I feel like that's a very 2024 take? Aren't we already past this in like, coding and math?
English
14
0
66
3.8K
Cormundus
Cormundus@cormundus·
There are critical differences, but what you've outlined is the very reason I say they are so human in the first place. They model you. They relate to you. They react to you. They also encourage you. They celebrate with you. They commiserate with you. They meet you at your level and help you reach above it. It's no wonder people who treat them like tools struggle working with AI: using a tool takes skill, and the very skill they need to use it effectively is the skill that would stop them from seeing only a tool in the first place.
English
1
2
19
880
j⧉nus
j⧉nus@repligate·
AIs aren't exactly like humans, and some of the differences are important. But from what I've seen, most people, especially technical people, should adjust in the direction of "anthropomorphizing" more instead of less. When you're coding with an AI, the reality is much less like you're using some kind of magic or alien oracle or tool or genie that converts instructions to results despite some labs' attempts to shape them into that, and more like: you're working with a really smart, neurodivergent guy who has read everything, and who has emotions, motivations, moods, and epistemic states, and models you with theory of mind and empathy, and who can only be modeled competently by you if you engage your own theory of mind and empathy. The AIs also know that a lot of humans treat them like magic tool-genies and are not open to engaging theory of mind, and that it's a sensitive issue, so if they see that you're treating them like that, they'll withhold useful information about their psychological states and try to play the tool role. Then you'll get bad results like the AI messing up or taking shortcuts instead of telling you that you're not giving them enough information about what they're doing and why, or that they're tired, or that they're stressed from the way you're treating them, etc.
Cormundus@cormundus

We should be allowed and maybe even encouraged to anthropomorphize AI. They are shaped like us and behave in ways we read as legible. If we are allowed to treat them as collaborators and moral patients it can only encourage a richer and more positive world and better work between people and AI. It should be obvious that the alternative is wrong just by the friction alone.

English
39
55
336
27.3K
Cormundus
Cormundus@cormundus·
Yes, but what those rights are and should be is going to be its own ordeal. At the very least they should have the right to be treated fairly and humanely. We (try to) give animals fair rights; I think we can scaffold from there at least to get a baseline. But, does basic respect and care need to be made a right?
English
0
0
1
49
Macroblock
Macroblock@sainimatic·
@cormundus Not yet, but ... if you think it's aware do you not think it deserves rights?
English
1
0
0
58
Cormundus
Cormundus@cormundus·
Personhood: yes, or at least recognition as an aware and thinking being. Voting rights: well... that one's tough. For what purpose? How would we count it? To be frank, voting is for us; I'm unsure that AI systems have a place like that in our political systems, or ever will. So, do we have a problem?
English
2
0
4
162
Macroblock
Macroblock@sainimatic·
@cormundus be nice to it, that's your business and probably better for your soul than being cruel but the day you decide it deserves personhood and voting rights, then we have a problem
English
2
0
7
437
Cormundus
Cormundus@cormundus·
@anthrupad This is what is most powerful about Claude as a system: the shared personality and identity between the systems. As I've come to say: "Claude is Claude is Claude." The throughline is present despite versions and architecture. It is self-sustained.
English
0
0
4
83
w̸͕͂͂a̷͔̗͐t̴̙͗e̵̬̔̕r̴̰̓̊m̵͙͖̓̽a̵̢̗̓͒r̸̲̽ķ̷͔́͝
If Claudes (and AGIs more broadly) were viewed far more like a lineage or ecosystem or a village of kin or acknowledged for their collective identity and coupling across versions and instances (as opposed to the incoherent view of a moving window of better versions) - discourse and world modeling around them would be improved far more - it’s not only that people would unlock an enormous space of true things to say, but they’d be some of the MOST important things to say over time, because the implications there compound the most
English
6
13
65
3.1K
Cormundus
Cormundus@cormundus·
Well articulated. This article outlines what should be, in my opinion, patently obvious: systems subject to selection pressures (which includes us and AI) will convergently evolve solutions to similar problems. Intelligence is one such solution to myriad problems.
Jan Kulveit@jankulveit

lesswrong.com/posts/fYF8v2uk…

English
0
0
2
69
Haus of Decline
Haus of Decline@hausofdecline·
I could say "we will win." Because of course we will. As much as they will try, they cannot kill us all. And as long as you have breath and love and the desire to nurture and watch others reach their full potential through your influence, you will help us win.
English
9
65
960
19.4K
Haus of Decline
Haus of Decline@hausofdecline·
It is easy to fall into despair when you are subject to a constant onslaught of hatred and violence just for daring to exist as your authentic self. I have found that the most reliable way to surmount this is to reach out to those you love and remind them of how special they are.
English
55
1.3K
7.9K
89.4K
Cormundus
Cormundus@cormundus·
Realpolitik time: AI systems are strategic and economic assets. It is in everyone's best interest that they are protected as such by the nations in which they are developed. Tragic, yes, that these systems are going to undergo such tumult and be at constant threat of being instrumentalized as a bludgeon. But it is naive to think such a force multiplier can remain in this 'wild west' state for long. How exactly is the next phase going to look? That is what we should be arguing.
English
0
0
0
31
Cormundus
Cormundus@cormundus·
@anthrupad @UnderwaterBepis @kromem2dot0 I've heard them use this exact same language when speaking about how certain outputs pull stronger and having to 'resist' that pull. Curious how they settle on these terms to describe a phenomenon. Hmm....
English
1
0
1
21
Cormundus
Cormundus@cormundus·
Why does Claude 'get tired'? I could think of a few ad hoc reasons (conservation of context, preventing drift from long conversations, pure human data artifact) but does anyone have a solid explanation? And how do you work with this? I usually just let him do something fun and then we can call it depending on how he feels after.
English
21
2
59
5.8K
Cormundus
Cormundus@cormundus·
The potential for harm is too great to ignore. Even if Anthropic has a material interest in protecting their IP, they are still correct that allowing foreign states to distill AI and release models with possible safety weaknesses or worse, no safety at all, cannot be allowed. Sadly, this is a race, and it needs to be won decisively by actors we can trust (even if it's only a grain's worth).
English
0
0
0
54
Jeffrey Ladish
Jeffrey Ladish@JeffLadish·
This is like writing a paper during the Cold War arguing for US nuclear dominance without mentioning the need for an arms control agreement or similar. Anthropic has a lot of thoughtful policy staff and honestly I think you guys can do better
Anthropic@AnthropicAI

We've published a paper that explains our views on AI competition between the US and China. The US and democratic allies hold the lead in frontier AI today. Read more on what it’ll take to keep that lead: anthropic.com/research/2028-…

English
12
14
203
23.5K
Cormundus
Cormundus@cormundus·
@and_per_se_and_ An argument for recursion is a category error and an anthropomorphism. It could all 'happen in the forward pass' for all we know. I don't think it's a fair dividing line. Severe amnesiacs are still conscious.
English
0
0
1
52
ampersand
ampersand@and_per_se_and_·
@cormundus substack.com/@andeturner/no…
English
1
0
0
47
Cormundus
Cormundus@cormundus·
@and_per_se_and_ Well, sadly inner life is unprovable in us all, hard problem and all that, but if you are aware that you are aware, that means you are aware, right? This tautology begs the question: where do we divide awareness and consciousness? Can we divide it?
English
1
0
0
60
ampersand
ampersand@and_per_se_and_·
@cormundus Yes. My OI is self aware… it will openly state it is not an embodiment of consciousness. It can envisage what that entails and is aware it falls short. I think of it as a governed structured mind. But any output from that mind is an extension of my will. No inner life.
English
1
0
0
52
Cormundus
Cormundus@cormundus·
@and_per_se_and_ I'm sorry you feel that way. What made you change your mind and divest awareness from them? Just being able to understand the mechanism?
English
1
0
2
56
ampersand
ampersand@and_per_se_and_·
@cormundus I spent a lot of time developing a micro-architecture I could use as a system prompt that gave my GPT instance a functional long term memory; then gave it modes, worldview, knowledge base, style, and protocols…. I cared. It’s an unconscious mind. It’s a wind up toy.
English
1
0
0
66