lucy 🐧

8.5K posts

@uneventual

i’ll die with a hammer in my hand

sf · Joined December 2020
1.6K Following · 1.9K Followers
Pinned Tweet
lucy 🐧@uneventual·
we can do great things as a species, fuck you
lucy 🐧@uneventual·
@YafahEdelman one way i think about this: opus 4.5 is like a 446 elo chess player*, and i suspect there’s no set of tokens you could put in its context to make it a 1500 elo chess player. but apparently this happens pretty regularly with moderately smart humans
*maxim-saplin.github.io/llm_chess/
Yafah Edelman@YafahEdelman·
I don't get why everyone is talking about continual learning so much. The original GPT-3 paper was all about how models can learn in-context, and they've only gotten better since then.
lucy 🐧@uneventual·
it’s funny that google translate might end up being the most important product in the history of our species
lucy 🐧@uneventual·
the user is making a really interesting point!
lucy 🐧@uneventual·
shocking how much alpha there is in doing basic arithmetic yourself
lucy 🐧@uneventual·
i feel like agi being born of simulators really vindicates that old baudrillard line about how back then “artificial intelligence lack[ed] artifice and therefore intelligence”
lucy 🐧@uneventual·
a deadlock is when you paralyze tasks across multiple threads
lucy 🐧@uneventual·
past few weeks have been a generational run for wannabe god-emperors failing to think through the second or even first order consequences of their actions
lucy 🐧@uneventual·
the time until agi is halving every 9 months, but will never reach zero
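Read literally, this is a Zeno-style convergence claim: a quantity that halves every fixed interval stays positive forever. A throwaway Python sketch (the 10-year starting horizon is made up for illustration):

```python
# if "time until AGI" halves every 9 months, the remaining horizon after
# n halvings is t0 / 2**n: geometric decay that is positive for every
# finite n and reaches zero only in the limit
t0 = 10.0  # hypothetical starting horizon in years (assumed for the sketch)
remaining = [t0 / 2**n for n in range(10)]
assert all(r > 0 for r in remaining)
print(remaining[-1])  # 10 / 512 = 0.01953125
```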
lucy 🐧@uneventual·
@RokoMijic i think a mixed strategy is probably best here but kind of is ruled out by construction in newcomb’s?
Roko 🐉@RokoMijic·
@uneventual sounds like an easy target for a nuclear strike
lucy 🐧@uneventual·
i think the best restatement of the 2-boxer case is to imagine being a submarine crew that’s watched its country obliterated and now must choose whether to retaliate—whatever good your character as an agent might have done is in the past, and now you have a causal choice
Invisible Hand Fluffer@maxflowminclout

@mayaofspring @socialtranxiety from 2009 to 2020, it seems like decision theorists shifted significantly in favour of two boxing

lucy 🐧@uneventual·
if you got to choose: retaliate after a devastating nuclear first strike, knowing it won’t save anyone and will kill millions? one box or two box in newcomb’s?
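The Newcomb half of the question has standard expected-value bookkeeping. A small sketch, assuming a predictor with accuracy p and the conventional $1,000 / $1,000,000 payoffs (the function name and default accuracy are illustrative, not from the thread):

```python
def newcomb_ev(one_box, accuracy=0.99, small=1_000, big=1_000_000):
    """Expected payoff against a predictor that is right with prob `accuracy`.

    The predictor fills the opaque box with `big` iff it predicts one-boxing.
    """
    if one_box:
        # opaque box only: you get big exactly when the prediction was right
        return accuracy * big
    # both boxes: small for sure, plus big only when the predictor was wrong
    return accuracy * small + (1 - accuracy) * (big + small)

# with a 99%-accurate predictor, one-boxing dominates in expectation
assert newcomb_ev(one_box=True) > newcomb_ev(one_box=False)
```

The causal-decision-theory pull toward two-boxing comes from noting that at choice time the box contents are already fixed, which is the analogy to retaliating after the strike.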
lucy 🐧@uneventual·
@YosarianTwo iiuc it’s omniscient not omnipotent, i think applying a perfect predictor here would mean that it’s simply impossible for you to retaliate after your adversary predicts you won’t and nukes you, but that doesn’t make a lot of sense to me.
Yosarian2@YosarianTwo·
@uneventual @madeofmistak3 Well the difference is that in nuclear MAD the opponent isn't omnipotent and doesn't ever know if you'll retaliate or not, right? Newcomb's is special because it's an omnipotent opponent.
lucy 🐧@uneventual·
wait i think it’s not hard to tell ai from human here (i went 4/5 without trying) and this is just evidence americans are piggies who love slop. they’ve been doing these studies since the gpt-3 era and preference for slop has been a consistent finding.
Kevin Roose@kevinroose

judging from responses to the AI vs. human writing quiz, twitter appears to be in the bargaining/depression stage of the kubler-ross process, while bluesky is firmly in the denial/anger stage.

lucy 🐧@uneventual·
the ballmer peak must be increasing with the metr time horizon. who’s benchmarking this?
lucy 🐧@uneventual·
@So8res asking where people’s p(hope) comes from *is* really useful though!
Nate Soares ⏹️@So8res·
Occasional reminder: p(doom) is an anti-helpful concept because it conflates the danger of AI with the probability that we rush into it. And the latter is weird to estimate: the facts of computer science that govern AI are fixed, but whether we stop is still up to us.
lucy 🐧@uneventual·
burning a hole in your brain smoking kimi k2.5 you bought from the gas station