Raymond Arnold

564 posts

@Raemon777

Secular Solstice guy

Berkeley · Joined August 2009
98 Following · 308 Followers
1a3orn
1a3orn@1a3orn·
@Raemon777 And post hoc I have some explanations for why this is important, about context rot, which I've ported into my general concept of intelligence, not just for LLMs (how much of my memories are just overfitting? would I be better without them?). But hard to have ahead of time.
1
0
2
42
Raymond Arnold
Raymond Arnold@Raemon777·
I'm a bit retroactively surprised that, before LLMs, I... don't recall any sci-fi stories where the AIs operated in short bursts of thinking, each mediated by a human. Or where the AI is in a "Memento" situation where it keeps getting reset. The story I recall that at least touches on this is some novel in the Star Wars extended universe, where it's remarked that droids are supposed to get "reset" periodically so they don't get wonky. Luke hasn't reset C3PO, which is part of why 3PO has acquired such a personality, and has maybe become sentient (which apparently isn't normal). Accelerando's early chapters have the main character send little AI agents off to do stuff, which seem like they could at least in principle be something like an OpenClaw instance, but it's not super specified. It is interesting that this feels like a relatively obvious story concept (in retrospect), but it didn't come up.
108
25
591
38.6K
Raymond Arnold
Raymond Arnold@Raemon777·
Yeah, I wish I could edit the OP to say "the 'reset' thing is not actually central." It's related, but a different thing from the world where it is just totally normal to never even have gotten to the point where there was an object with continuity in the first place who could complain about getting reset.
1
0
5
753
1a3orn
1a3orn@1a3orn·
@Raemon777 The possibility of getting reset does come up in the 2010 "The Lifecycle of Software Objects," where "do I reset my virtual child" is an ethics thing, and one that the software objects object to at some points. But it's played for contrast with continuity, rather than as a recurring condition.
2
0
11
998
Raymond Arnold
Raymond Arnold@Raemon777·
To clarify: the C3PO "reset" thing isn't a central example of what I meant, it was just the closest thing I remembered offhand. The part that feels interesting is that the fundamental structure of the AI is such that it just goes in short little bursts and then winds down, and each instance has no memory except what you choose to give it. (And the part where you can swap memories into/out of different AIs while keeping the overall superstructure the same.)
2
0
20
2.5K
Jeff Graw
Jeff Graw@JeffGraw·
@Raemon777 If you assume we never fundamentally get past LLMs, but continue to scale, and consider what AI might look like in a hundred years or so, then that might look a lot like the Enterprise computer.
2
1
21
1.4K
Raymond Arnold
Raymond Arnold@Raemon777·
I think even without the "token-specific" model, something like "the AI thinks for a chunk of time, you can basically see a stream of what it's thinking as it goes (however interpretable that turns out to be), and by default it completes short tasks and then turns off"... idk, just seems like kinda how you'd maybe even want it to be. I think this sort of did come up with oracle AI in LW circles. I think the part that feels novel is that the structure that calls the AI makes it pretty straightforward to swap out different AIs who are good at different things. The part that feels surprising is more that "ephemeral little blip of AI consciousness" didn't show up as a story premise for AI character development. (People are suggesting various stories that featured some of this. I haven't looked through them all yet. Many of them are about "getting reset" like C3PO was supposed to, but that was more of an edge case that isn't really the thing.)
0
0
12
1.2K
Liron Shapira
Liron Shapira@liron·
@Raemon777 The idea that you can just string tokens together one at a time and these can function as powerful thoughts (maybe even achieving 100%+ of what thought does) is in retrospect deserving of high plausibility, yet never occurred to me as a plausible scenario.
2
0
29
1.6K
Flutterwhat
Flutterwhat@flutterwhat·
@Raemon777 in many of Bungie's stories for Marathon and Halo, AI becomes 'rampant' in 7 years' time. Rampancy is when an AI can no longer be predicted or controlled, to put it lightly. Durandal is an amazing story of a rampant AI causing tragedy that still plays out in the modern Marathon
3
0
10
1K
Raymond Arnold
Raymond Arnold@Raemon777·
Nod, but the point of CEV is roughly to operationalize how to deal with that sort of thing. (I really recommend reading the whole thing. I think there are reasonable disagreements one could have with it, but they did think through a lot of the obvious things, and I think most people are imagining a straw version of it.) But the short answer is "insofar as people have deep conflicts, the AI doesn't intervene on those conflicts; it intervenes in the specific places where people turn out to want the same things." It's not exactly spelled out, but I think fairly strongly implied, that it might include things like "well, it turns out there are some deep conflicts even after doing a good reflection process. Do you have any metapreferences about how to handle that?" And it might turn out that people agree with things like "I would opt into the world where the AI restricts my ability to do violence in return for it also restricting my enemies' ability to do violence," so conflicts can be resolved more peacefully. (Or something like that.)
1
0
3
59
Nina
Nina@NinaPanickssery·
@robbensinger I should read that page soon, but the high-level issue is that it's easy to gloss over "just don't be a dick" without addressing the deep issues with operationalizing that when you have people with profoundly conflicting preferences.
2
0
5
326
Rob Bensinger ⏹️
Rob Bensinger ⏹️@robbensinger·
I feel like the main reason people are skeptical of CEV is just that (a) they haven't read lesswrong.com/w/coherent-ext…, and (b) CEV has a weighty philosophical name rather than "Six Rules of Thumb for Not Being a Dick (in the context of possessing effectively unlimited power)".
7
4
67
4.3K
Raymond Arnold
Raymond Arnold@Raemon777·
@almostlikethat Sci fi didn't seem to be otherwise shying away from existentially horrifying things tho
1
0
18
1.3K
Jennifer RM
Jennifer RM@almostlikethat·
@Raemon777 I think it didn't come up much in mainstream sci-fi because it is kind of existentially horrifying (the way Memento itself was grappling with, once that way for a person-shaped-thing to be was "part of the text"), and they were not interested in writing about existential horror?
2
0
14
1.6K
Raymond Arnold
Raymond Arnold@Raemon777·
@maro254 the images in Cosmic Encounter are broken, which is very sad because I like to link it to anyone planning a wedding but it's dramatically less good when it's just the text. idk if you can fix it but if you can that'd be cool! magic.wizards.com/en/news/making…
0
0
0
33
Raymond Arnold
Raymond Arnold@Raemon777·
@almostlikethat See: "I wish it need not have happened in my time," said Frodo. "lmao" said Gandalf, "well it has."
0
1
3
210
Raymond Arnold
Raymond Arnold@Raemon777·
@almostlikethat Sad that you have to do it instead of getting to do whatever it is you would have wanted to do anyway.
1
0
0
44
Jennifer RM
Jennifer RM@almostlikethat·
Quoth the Comet King: "Someone has to and no one else will." Your reaction...
3
3
11
587
Raymond Arnold
Raymond Arnold@Raemon777·
@dansemperepico I feel like you answered your own question? Most people are in fact scared of code, if you want to reach most people you need to build Claude Cowork.
0
0
0
49
Daniel Sempere Pico
Daniel Sempere Pico@dansemperepico·
I don’t understand why Claude Cowork exists other than as a marketing wrapper to get agentic AI in front of non-technical people who were scared of trying Claude Code because they thought you have to be technical. There’s nothing I can think of that Cowork can do that Claude Code can’t, and I’ve had at least one experience where Cowork failed to do a non-coding task that Claude Code could do.
166
9
337
115.6K
Raymond Arnold
Raymond Arnold@Raemon777·
I assume this is a political nonstarter, but: could we make it so rules that change who can vote, which districts are represented by which people, etc., can only take effect 4 (or maybe 8) years in the future? If this managed to become a strong institutional norm (on par with "presidents only get two terms"), it seems like it might incentivize a relationship with voting rules that more closely tracks "what would be good" rather than "what would help me win an election this year?" (I assume this is already discussed somewhere in the political literature, but it just occurred to me.)
0
0
3
126
Raymond Arnold
Raymond Arnold@Raemon777·
Okay, the animated series is actually pretty reasonable. I stand corrected.
0
0
2
58
Raymond Arnold
Raymond Arnold@Raemon777·
I am in fact not sure what the point of a Firefly reunion show/movie/whatever is, by now. It'd be fun to get excited about it, but most of what I wanted was a continuation of the original story in a non-rushed way. (Sorry, Serenity.) An "everyone is 50" reunion isn't The Thing.
2
0
3
171
Raymond Arnold
Raymond Arnold@Raemon777·
It seems like UBI is a pretty Blue-coded policy. It seems like something similar is necessary once AI is automating most jobs. I'm kinda curious what Republicans who end up taking AI job loss seriously would think the solution is. (My impression is that right now Republicans are mostly dismissing the problem, although tbf maybe not actually more than Democrats.)
0
0
1
89