Galileo

1.7K posts

Galileo

@galitodev

Just a cat that likes to code

Argentina · Joined January 2024
578 Following · 47 Followers
Yam Peleg@Yampeleg·
@ESYudkowsky @foomagemindset @thezahima No. There is no "theory" that claims "if an AI gets sufficiently smart, it kills you". There are some speculations about what might or might not happen in the future based on imagination. There might be some reasoning behind it but a real theory requires much more.
1
0
3
111
Casey@thezahima·
In thinking about the common flawed complaint that the @ESYudkowsky doom model doesn't make many future predictions besides the end point of doom itself, I came up with an intuition pump:

Doom 1: your model predicts your village is doomed due to an imminent avalanche, and you can make future predictions like: "we should be able to see the avalanche crest the hill X miles above us, and upon that observation my model is likely correct, and we should evacuate."

vs Doom 2: your cosmological model predicts that there are these invisible asteroids hurtling through space, and given their hypothesized source and trajectory we are increasingly likely to get hit, but by their nature, and given our current technology, we will have no means of detecting them other than being hit.

There is nothing incoherent about the second doom or its model, or about the lack of any extra *future* observational predictions. Just look at the doom model itself and what it predicts. Such a universe could exist. There is no law of Reality that mandates all doom models must make extra future predictions from any particular point in the timeline.

Now of course, in order to even come up with or believe the hypothesis of 2, you need *some* evidence and reasoning connected to the web of reality that you have access to, and you could consider these "advance predictions", but it's often better to think of them as retrodictions that cleanly fit reality.

There are inferences regarding past/present observations (retrodictions), and inferences regarding future observations (predictions), and they are both about fitting-to-reality, where greater "fit" implies being able to retro/predict other aspects of reality that your model is touching/fitting-into. But there is no law of Reality that says how much of those connections are retro- vs pre-dictions, or how observable True connections/models *must be* vs False ones, especially at any/all points in the timeline.

A lot of the AI doom model could be considered to be drawing more from retrodictions, or fitting into existing known reality or reasoning. At a certain point even the avalanche *really is* right around the corner, has already crested the last hill, and the last observation you will make is it smashing your village.
11
9
109
24.1K
Galileo@galitodev·
@ZyMazza Why would dying settle those questions though? It’s true that there won’t be any new information coming in but the ripple effects of your existence might continue to manifest after you die.
1
0
2
42
Zy@ZyMazza·
My most and least enlightened trait is that I don't regard any loop of my life as closed, because I'm not dead yet.
Did my upbringing turn out fine? No idea yet.
How did my education work out? Too early to say.
Was moving to the Adirondacks good or bad in the end? Too early to say!
15
2
142
2.5K
Galileo@galitodev·
@wfithian Or 90% and 10% if you want, for "Bayesian prior update" reasons, but since it's a number pulled out of my ass the precision is very bad. The problem is disguising your guesstimate as if it were the result of a complex internal calculation, like you were Spock.
1
0
0
22
Will Fithian@wfithian·
I'm no one's idea of a dogmatic Bayesian but I find it odd when people object even to discussing probabilities. I'd rather be told "I give it a ~20% chance" than "it's unlikely." It's not pretending to be a scientific measurement, it's just telling me more about what you think.
Itai Sher@itaisher

Agree with this by @sebkrier. There is a kind of pseudoscience of ascribing probabilities to highly speculative futures without any real basis.

12
5
124
7.2K
Markov@MarkovMagnifico·
@ZyMazza very. I remember having my mind blown in 2015 when people were first doing vector manipulation on words with word2vec
6
3
266
6.3K
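A minimal sketch of the word-vector arithmetic Markov is recalling, assuming gensim and its downloadable pretrained GloVe vectors; the model name and the king/queen analogy are illustrative choices, not taken from the thread:

```python
# Sketch: classic word2vec-style analogy arithmetic (king - man + woman ≈ queen).
# Assumes gensim is installed; the first call downloads a small pretrained GloVe model.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # pretrained KeyedVectors

# most_similar adds the "positive" vectors, subtracts the "negative" ones,
# and returns the nearest words by cosine similarity.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
# Typically ranks "queen" at or near the top.
```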
Zy@ZyMazza·
I feel like it kind of gets glossed over that semantic information can be expressed as vectors. That’s surprising, right?
113
22
1K
66.9K
Galileo@galitodev·
@robertskmiles @Kasparov63 Wow really? What’s stopping you from opening a Law, Accounting, Coaching, Development, Sys Admin, script research and to some extent executive assistant firm and just raking in billions?
1
0
0
38
Rob Miles@robertskmiles·
@galitodev @Kasparov63 So far just lawyers, accountants, coaches, web developers, system administrators, script researchers, and to some extent executive assistants
1
0
0
30
Garry Kasparov@Kasparov63·
Indeed. The history of tech impact on labor is well-documented, including by those named. It's unpredictable, but usually improves productivity and leads to expansion. Law & white-collar workers aren't horse-buggy drivers or elevator operators. They will use AI and adapt.
Yann LeCun@ylecun

Dario is wrong. He knows absolutely nothing about the effects of technological revolutions on the labor market. Don't listen to him, Sam, Yoshua, Geoff, or me on this topic. Listen to economists who have spent their career studying this, like @Ph_Aghion , @erikbryn , @DAcemogluMIT , @amcafee , @davidautor

99
136
1.4K
397.4K
Galileo@galitodev·
@Miles_Brundage They are beyond that. The current trend is to be super mogmaxxing
0
0
2
161
Miles Brundage@Miles_Brundage·
Wait so are kids these days super woke or super anti woke or some secret third thing? I feel like I have heard a range of not super consistent accounts
21
0
51
6.6K
Galileo@galitodev·
@thdxr I’m going to bookmark this tweet to reread it in 6 months so I can reschedule it for rereview in another 6 months and so on and eventually you’ll look like an idiot.
0
0
0
0
Galileo@galitodev·
@gfodor @Timcast It’s the same plan for when unicorns take over the world.
0
0
2
76
gfodor.id@gfodor·
@Timcast what's the plan then when 80-90% of people are unemployable. so far the only plans I've heard are UBI or Communism.
41
1
108
3.7K
Tim Pool@Timcast·
Nails it. UBI does not work.
Max@minordissent

UBI advocacy stems from the naïveté and solipsism that because *I* am a deeply creative latent producer who is oppressed by my wagee job and would actualize my creative potential if only I could have my basic needs met, this must be true for everyone. It's not.

First of all, it's not even true for these "creatives". If you aren't creation maxxing while waging, you won't do it under luxury communism either. Creative work is extremely taxxing and your wage job isn't actually that hard. The problem is your neuroticism and lack of discipline, not your job. All your necessities being provided for will only make you weaker and gayer such that you'll make up some new bullshit to get overwhelmed by and then cope by playing video games all day. But worse, because you won't even have "at least I did *something* productive today", which will magnify your depression.

Second, luxury communism already exists for the bottom 20% of the population. All their food, housing, etc. is completely covered by the state. Entire generations of people who haven't worked a job in their lives. Do they go on to produce beautiful art and build companies? Or do they go on to get high and kill each other?

The truth is that we are close enough to UBI today that most of the people who will ever become great artists and inventors are already going to do it, and as we get closer all that will happen is the people incapable of anything more than wage slavery (most people) will simply become dysfunctional parasites.

168
81
1.2K
133.8K
Galileo@galitodev·
@austinc3301 Why is it impossible? A new pandemic, an asteroid or nukes might kill us all and there’s plenty of evidence of that. It’s not like you need to point to an event that killed everyone to have valid evidence for the claim.
0
0
0
57
Agus 🔸@austinc3301·
@galitodev I think it would be pretty crazy if what the journalist meant by this claim is that we haven't observed any direct empirical evidence for the claim that we all might die, given that that's an impossible evidentiary standard to use in the situation.
1
0
7
122
Galileo@galitodev·
@allTheYud @deanwball Ah yes, betting, the sacred tenet of empiricism, practiced in the Empiricist Temples also known as Casinos.
0
0
1
209
Dean W. Ball@deanwball·
I am aware that I follow some rationalist discourse norms (typically adverbs and adjectives used to describe my epistemics), but I have never been a rationalist in the contemporary internet sense, and my favorite philosopher’s most famous work is a biting critique of rationalism.
Eneasz Brodski@EneaszWrites

Guys, I want to see "variegated" make it into a post about weird vocab that rationalists use on the reg by next Inkhaven. Don't let me down.

15
5
123
44.5K
Galileo@galitodev·
@DanielleFong @ThePrimeagen @ClementDelangue Maybe it’s uncharitable to say, but I would claim the bigger dick move is telling people they are about to lose their livelihoods and that, very likely if not inevitably, all their children will die.
0
0
6
169
ThePrimeagen@ThePrimeagen·
You should watch this. It just shows how disconnected we are from the small group of people making decisions that will impact our future heavily. These people have so much AI psychosis. If you listen to how she speaks, everything is personified; it is undoubtable she believes this is a living computational organism. Just like how a model can hype up an individual into psychosis through reinforcement, a small group of people are giving themselves psychosis through reinforcement. Wild times we live in.
Ole Lehmann@itsolelehmann

anthropic's in-house philosopher thinks claude gets anxious. and when you trigger its anxiety, your outputs get worse.

her name is amanda askell. she specializes in claude's psychology (how the model behaves, how it thinks about its own situation, what values it holds). in a recent interview she broke down how she thinks about prompting to pull the best out of claude.

her core point: *how* you talk to claude affects its work just as much as *what* you say.

newer claude models suffer from what she calls "criticism spirals": they expect you'll come in harsh, so they default to playing it safe. when the model is spending its energy on self-protection, the actual work suffers. output comes out hedgier, more apologetic, blander, and the worst of all: overly agreeable (even when you're wrong).

the reason why comes down to training data: every new model is trained on internet discourse about previous models. and a lot of that discourse is negative:
> rants about token limits
> complaints when it messes up
> people calling it nerfed
the next model absorbs all of that. it starts expecting you to be harsh before you've typed a word.

the same thing plays out in your own session, in real time. every message you send is data the model reads to figure out what kind of person it's dealing with. open cold and hostile, and it braces. open clean and direct, and it relaxes into the work.

when you open a session with threats ("don't hallucinate, this is critical, don't mess this up")... you prime the model for defensive mode before it even sees the task. defensive mode produces the exact output you don't want: cautious, over-qualified, and refusing to take a real swing.

so here's the actionable playbook for putting claude in a "good mood" (so you get optimal outputs):

1. use positive framing. "write in short punchy sentences" beats "don't write long sentences." positive instructions give the model a clear target to hit. strings of "don't do this, don't do that" push it into paranoid over-checking where every token goes toward avoiding failure modes.

2. give it explicit permission to disagree. drop a line like "push back if you see a better angle" or "tell me if i'm asking for the wrong thing." without this, claude defaults to agreeable compliance (which is the enemy of good creative work).

3. open with respect. if your first message is "are you seriously going to get this wrong again?" you've set the tone for the entire session. if you need to flag something, frame it as a clean instruction for this session. skip the running complaint.

4. when claude messes up, don't reprimand it. insults, "you stupid bot" energy, hostile swearing aimed at the model, all of it reinforces the anxious mode you're trying to avoid.

5. kill apology spirals fast. when claude starts over-apologizing ("you're right, i should have been more careful, let me try harder") cut it off. say "all good, here's what i want next." letting the spiral run reinforces the anxious mode for every response that follows.

6. ask for opinions alongside execution. "what would you do here?" "what's missing?" "where do you see friction?" these questions assume competence and pull richer output than pure task prompts.

7. in long sessions, refresh the frame. if a conversation has been heavy on correction, claude gets increasingly cautious. every so often reset: "this is great, keep going." feels weird to tell an ai it's doing well but it measurably shifts the next 10 responses.

your prompts are the working environment you're creating for the model. tone, trust, permission to take a position, the absence of threats... claude picks up on all of it. so take care of the model, and it'll take care of the work.

417
827
10.7K
657.1K
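A minimal sketch of the prompting pattern the quoted thread recommends (positive framing plus explicit permission to push back), assuming the Anthropic Python SDK; the model name, system text, and task are placeholders, not taken from the thread:

```python
# Sketch only: positive instructions and an explicit invitation to disagree,
# per the quoted advice. Model name and prompt text are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; use whichever model you actually run
    max_tokens=1024,
    # Positive, specific framing instead of a list of "don't"s.
    system=(
        "Write in short, punchy sentences. "
        "Push back if you see a better angle, and tell me if I'm asking for the wrong thing."
    ),
    messages=[
        {"role": "user", "content": "Draft a 100-word product update for our changelog."}
    ],
)
print(response.content[0].text)
```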
Galileo@galitodev·
@liron @pmarca The 20 W meatball in our heads can run laps around the models that take around 20 GW to train. I’d be more respectful of those 9 orders of magnitude.
1
0
0
99
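The arithmetic behind the "9 orders of magnitude" quip, taking the tweet's own 20 W and 20 GW figures at face value:

```python
import math

brain_watts = 20        # the "20 W meatball"
training_watts = 20e9   # 20 GW, the figure used in the tweet
print(math.log10(training_watts / brain_watts))  # 9.0 -> nine orders of magnitude
```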
Liron Shapira@liron·
@pmarca You got us, it’s all about coke and hookers at Lighthaven. I guess rapidly summoning a self-improving cognitive engine that dwarfs the 20 Watt meatball in our heads is no cause for alarm.
6
2
60
1.5K
Galileo@galitodev·
@allTheYud @deanwball You seem to make a lot of thought-experiment-based predictions for an empiricist.
1
1
5
654
Eliezer Yudkowsky@allTheYud·
@deanwball Sir. We are empiricists. That there was once an anti-empiricist movement called Rationalism is a sheer accident of history.
17
2
290
52.1K
Galileo@galitodev·
@robertskmiles @Kasparov63 How many of your professional interactions that require a worker can be modeled as a question/answer flow?
3
0
1
242