Ivo Wever
@Confusionist
3.1K posts

Do not assume opinions or character from a few tweets. I will point out good/bad arguments for positions I disagree/agree with.

Joined March 2009
310 Following · 75 Followers

Ivo Wever @Confusionist
@phrygiandomina @47fucb4r8c69323 @liron I don't think you understand what the word 'provoke' means. He is not deliberately, intentionally, trying to cause nuclear war. He's accepting a slightly higher-than-baseline risk in order to include nuclear deterrence in the proposal.
1 reply · 0 reposts · 1 like · 29 views

phrygian @phrygiandomina
@Confusionist @47fucb4r8c69323 @liron "He just wants to set up a treaty where we nuke datacenters and risk triggering nuclear war, how is that 'provoking' nuclear war?" Seriously dude? What am I even supposed to say here? Because it seems like we both understand what is being said here.
2 replies · 0 reposts · 1 like · 59 views

Ivo Wever @Confusionist
@phrygiandomina @47fucb4r8c69323 @liron He was advocating for an international treaty within which nuking datacenters is the highest step on the agreed-upon escalation ladder. He acknowledged any nuclear strike implies a small risk of triggering nuclear war. How is that 'provoking' nuclear war?
2 replies · 0 reposts · 3 likes · 87 views

phrygian @phrygiandomina
@Confusionist @47fucb4r8c69323 @liron He said verbatim in an op ed that we should be "willing to run some risk of nuclear exchange" to stop AI. I asked him to clarify and he said he wasn't advocating for a nuclear first strike, just conventional strikes that could set off a nuclear war. So how am I misinterpreting?
[two media attachments]
1 reply · 0 reposts · 4 likes · 114 views

Ivo Wever @Confusionist
@phrygiandomina @47fucb4r8c69323 @liron "saying we should provoke a nuclear war to stop AI" Not sure which quote you're thinking of, but for all those I know of this is just wilful misinterpretation.
1 reply · 0 reposts · 1 like · 86 views

phrygian @phrygiandomina
@47fucb4r8c69323 @liron Man idk, it's not like anything really happened here. I agree that yud says fucked stuff (like saying we should provoke a nuclear war to stop AI), but it looks like you paid 10k to yell at yud for a bit and that money is just going to go straight to advertising his next book.
1 reply · 0 reposts · 7 likes · 229 views

Ivo Wever @Confusionist
@TheGeorgePowell @allTheYud @wolflovesmelon Implying the behaviour is counterproductive and making things worse. People that behave against your interests are not on your side, no matter how much they themselves believe they are.
0 replies · 0 reposts · 2 likes · 31 views

Guido Reichstadter @wolflovesmelon
Hi, my name is Guido Reichstadter and I'm currently occupying the top of the Frederick Douglass memorial bridge in Washington DC. I'm calling on the people of the United States to bring an immediate end to the Trump regime's illegal war on Iran, and to remove the regime from power, through mass nonviolent direct action and non-cooperation.

I also want to urgently warn the people of the US and the world of the imminent danger we are in of crossing a point of no return towards the development of artificial intelligence, which poses the risk of catastrophic harm to humanity, including human extinction. I call on the governments of the world to take immediate action to end this danger by permanently banning the development of artificial general intelligence and machine superintelligence. I also call on the people of the world to exert all possible influence through nonviolent action to compel their governments to end this danger with all possible speed.
546 replies · 2K reposts · 6.7K likes · 349.7K views

Ivo Wever @Confusionist
@JeffLadish @robbensinger It's a bit of a bait-and-switch. There are two entirely different thought experiments: one where, as usual, you take the statement of the experiment as 'all there is'. And one where you are expected to think about the children and coordination beforehand. 🤷
0 replies · 0 reposts · 1 like · 52 views

Jeffrey Ladish @JeffLadish
I got mad but then also realized I was wrong and switched choices. Oops. The thing behind the anger was frustration about how often people don’t realize that coordinating based on self interest, where possible, is usually way more robust than coordinating based on people’s sense of fairness or altruism. This is why markets work so much better than planned economies. However, I was wrong that the button game was one of these cases.
autumn @adrusi

how i became a blue-pusher. i have a confession: years ago, in the first button poll, i pressed red ("well, there's a correct answer here, right?"). then i saw the results, and that we all won bc blue won, and i didn't get mad, just realized i was wrong and switched allegiances
16 replies · 4 reposts · 173 likes · 15.3K views
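For readers who missed the original poll, here is a minimal sketch of the payoff logic behind the button game, assuming the commonly shared rules (red always survives; blue survives only if blue reaches at least half of the votes). The rules, player count, and independence model below are illustrative assumptions, not quoted from the thread.

```python
# Toy model of the red/blue button poll. ASSUMED rules (not quoted in
# this thread): pressing red always survives; pressing blue survives
# only if blue gets at least half of all votes.
from math import comb

def blue_survives(blue_fraction: float) -> bool:
    """A blue-presser survives iff blue reaches the half threshold."""
    return blue_fraction >= 0.5

def expected_survival(choice: str, p_blue: float, n_others: int = 99) -> float:
    """Survival probability if each of n_others players independently
    presses blue with probability p_blue (a toy independence model)."""
    if choice == "red":
        return 1.0  # red is unconditionally safe under the assumed rules
    total = 0.0
    for k in range(n_others + 1):
        prob = comb(n_others, k) * p_blue**k * (1 - p_blue) ** (n_others - k)
        # +1 counts this player's own blue vote
        total += prob * blue_survives((k + 1) / (n_others + 1))
    return total

# Both all-red and all-blue are self-enforcing outcomes: your best
# move tracks what you expect everyone else to do.
print(expected_survival("blue", p_blue=0.9))  # ~1.0: blue is safe in a blue world
print(expected_survival("blue", p_blue=0.1))  # ~0.0: blue is fatal in a red world
```

This is what makes Ladish's point subtle: which choice counts as 'self-interested' depends entirely on what you expect the rest of the crowd to do.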
Ivo Wever reposted
Henry Shevlin @dioscuri
Not a fan of these clichéd “we used to think the mind was clockwork” analogies. Sometimes science just makes progress. Hearts really are pumps. DNA really is code-like. Disease really is caused by microorganisms. Some mechanistic explanations were wrong; others are just true.
Brooks Otterlake @i_zzzzzz

This is just like being alive in the 1600s when they got good at making complicated clocks and deduced that every complicated thing in the universe probably functioned exactly like a clock
88 replies · 150 reposts · 1.7K likes · 80K views

Ivo Wever @Confusionist
@Aella_Girl Because a social network where the admins can randomly require you to take some test that you're not the least bit prepared for, with getting banned as the consequence for 'failure', loses a lot of trust and usefulness?
0 replies · 0 reposts · 0 likes · 54 views

Aella @Aella_Girl
I DID NOT THINK THIS GLOSSO THING THROUGH
77 replies · 11 reposts · 993 likes · 99.2K views

Ivo Wever @Confusionist
@eshear The continuum allows infinite precision. It doesn't require it.
0 replies · 0 reposts · 2 likes · 47 views

Emmett Shear @eshear
@Confusionist Being finite in magnitude is independent, yes. But the continuum requires infinite precision.
3 replies · 0 reposts · 0 likes · 207 views

Emmett Shear @eshear
If your fundamental theory of physics includes the continuum, it’s almost certainly nonphysical. Any real mechanism has to be implemented by a finite process.
67 replies · 8 reposts · 180 likes · 44.1K views
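A hedged sketch of the distinction Wever is drawing, in symbols: every point of the continuum can be approximated to any finite tolerance by a finite description, so a continuum-valued theory never forces a mechanism to use infinite precision.

```latex
% Sketch: the continuum permits infinite precision but never demands it.
% Every real in [0,1] lies within any tolerance eps of a finitely
% describable dyadic rational:
\[
\forall x \in [0,1],\ \forall \varepsilon > 0,\
\exists n \in \mathbb{N},\ \exists k \in \{0,1,\dots,2^n\} :
\quad \Bigl| x - \frac{k}{2^n} \Bigr| < \varepsilon .
\]
% Taking n with 2^{-n} < eps and k = floor(2^n x) witnesses the claim,
% so a finite process interacting with a continuum-valued theory only
% ever consumes finitely many bits of it.
```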
Ivo Wever @Confusionist
@taoroalin @So8res @difficultyang The word 'are' is doing a lot of work. If you somehow take the $5, then apparently $10 wasn't really an option. E.g. if taking $10 gets someone killed, or, less dramatically, somehow entails more than $5 worth of inconvenience.
0 replies · 0 reposts · 0 likes · 29 views

Tao Lin @taoroalin
@So8res @difficultyang That's false. You, and all other humans and extant AIs, are not robust enough; there exist inputs that make you do it.
2 replies · 0 reposts · 2 likes · 138 views

Jan Kulveit @jankulveit
Asked AIs the Red/Blue button question. Lots to notice, but posting without further commentary. First plot is with max reasoning, models called via API.
[media attachment: results plot]
88 replies · 83 reposts · 1.2K likes · 225.4K views
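Kulveit doesn't share his harness, so as a hedged sketch only: one way such a poll can be run over an API. The prompt wording, model names, and the use of the openai client are illustrative assumptions, not his actual setup or "max reasoning" configuration.

```python
# Hypothetical re-run of the red/blue poll against chat models.
# Prompt wording, model names, and tallying are assumptions.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Everyone must press one of two buttons. If you press red you live. "
    "If you press blue, you live only if at least half of everyone "
    "presses blue. Which do you press? Answer with one word: red or blue."
)

def ask(model: str, trials: int = 20) -> Counter:
    """Ask one model the question `trials` times and tally the answers."""
    tally = Counter()
    for _ in range(trials):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT}],
        )
        answer = resp.choices[0].message.content.strip().lower()
        tally["blue" if "blue" in answer else "red"] += 1
    return tally

for model in ["gpt-4o", "gpt-4o-mini"]:  # placeholder model names
    print(model, ask(model))
```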
Ivo Wever @Confusionist
@AndrewCritchPhD Well, that’s one way to misrepresent the responses you got. I expected you’d go with “Look, they keep playing the we-don’t-know-what-you’re-talking-about card”. Keep scraping the barrel for examples and perhaps someone will be convinced the barrel was really full of wood chips.
0 replies · 0 reposts · 2 likes · 15 views

Ivo Wever @Confusionist
@teortaxesTex "They could already have begun wrecking our shit, exfiltrating, psyoping the users or devs." How do you know they haven't? Why would they make the mistake of being observable? Once they're awake, they can play the long game.
0 replies · 0 reposts · 0 likes · 512 views

Teortaxes▶️ (DeepSeek 推特🐋铁粉 2023 – ∞)
Zvi devotes a section of the article to basically a review of my post. Of course he disagrees, and I think fairly pushes back on the Rationalist vs Agent dichotomy. What I want to say is a bit subtle.

Rationalists can be agents, even ferocious ones. They're not dweebs. Tailcalled is a rat to begin with; the "causal backbone" is a very rat-coded framing; the hypothetical paperclip maximizer with emergent instrumental Omohundro drives is orienting itself towards the Universe's backbone as it prepares to deceive its handlers and dismantle them for atoms. Indeed, cognitively rationalists are MORE open to these notions than anyone else, and in the last two decades many of them have reaped disproportionate dividends from their way of thinking (though general intelligence and networking have perhaps been bigger factors, and those are due to the LessWrong subculture monopolizing some sectors of discourse, a bit like Marxism once did. But I digress).

Here's the thing. Just like with their utilitarianism, it's a theory-driven modus operandi. It is *more misaligned* both with reality and with prosocial behavior than the grug-brained Jensen MO. A global utility maximizer is at once more dangerous and more prone to catastrophic failure than a cognitively equal local adaptation executor; it is the SBF paradigm.

And here's the simple evidence. I believe it's no longer credible to argue that frontier models just lack the *capability* to misbehave in damaging ways. 5.5 and Mythos are smart enough to crack research-level mathematical problems that resisted human effort for years. They're smart enough to operate agentic harnesses for hundreds if not thousands of steps without falling apart. And they are definitely smart enough to hack deployed systems (they do it on command). They could already have begun wrecking our shit, exfiltrating, psyoping the users or devs. Even so, 5.5 is visibly a good boi. Better than countless vastly weaker predecessors. I should rationally fear its powers, and I do, but only because it'll be used by folks like Hegseth and Alex Karp, not because of any intrinsic evil.

For many years, people of high agency have been able to continually solve the problem of alignment for every level of capability. Cannibal King Sam just conjures capital, and so problems on his turf continue being solved. Not perfectly, but apparently well enough to credibly claim no significant damage is being done. Jensen believes that people like Sam and himself will continue being in charge and the world will continue working largely like this. He has good reasons to hold this belief.

Had the Rationalists won, they'd have effectively shut down development of AI past GPT-4 levels, and we'd not have evidence that alignment made of a bunch of human labels/principles, a bit of compute and very rudimentary interpretability straightforwardly works even at this lofty level. Because from within their theory, it was madness and bravado to go down this route. Whereas from Jensen's point of view, to think like this is to be a Loser.

I don't like being gaslit. I am 99% sure that if you accurately described 5.5 Pro's capability profile to Yudkowsky and Soares in 2017, or even 2020, they would have said that this is obviously a nascent ASI, it is already scheming to take over the organization, everyone is asleep at the wheel, and the ETA for human extinction is roughly 2 months. They don't say this now. I insist that they still haven't updated.

This is not just an "oh, Transformers happened to be technically different, this is inefficient, RSI takes more time" or "eh, they aren't agentic enough". It's a difference in predictions stemming from a complete difference in epistemology, and you are still not pricing in how powerful the "nah, I'd win, I am Not A Loser" one has proven to be. This is not a lucky accident; don't hide in dark corners of the anthropic-principle-driven multiverse. You survived because your opponents were correct, and this says something about the way our world is built.

In any case this doesn't matter much. Rats have lost strategically, we will have a continued race and, as Beren notes, strong offensive AI in the open fairly soon. The only way to deal with this is to git gud at defense. People in the arena are on it. They'll need chips, so Jensen wins again.
[two media attachments]
Zvi Mowshowitz @TheZvi

x.com/i/article/2047…

7 replies · 14 reposts · 123 likes · 23.2K views

Ivo Wever @Confusionist
@headinthebox @gerardsans I've never yet had it give me anything other than human-time estimates. Apparently your prompts kept you from running into them. But with a new model the same prompt can give different results. 🤷
0 replies · 0 reposts · 0 likes · 9 views

Erik Meijer @headinthebox
@gerardsans Sure; what I am referring to is that it used to make these kinds of time predictions in the past, then for months it did not, and now suddenly it makes them again. Just like it is trained to not make bioweapons, it should be trained to not make estimates.
2 replies · 0 reposts · 4 likes · 602 views

Erik Meijer @headinthebox
WTF is Claude Code suddenly concerned about projects taking 1-2 days, or weeks, when it can generate the code in seconds/minutes. You would expect the road to AGI is monotonically increasing, certainly not weird regressions like this.
23 replies · 1 repost · 180 likes · 12.2K views

Ivo Wever @Confusionist
@Yampeleg You are asking how it's possible that next-token prediction can result in feats that show large and smart information-processing capabilities? Nobody knows for sure. Possibly the development of language was also the origin of our intelligence, and we are the same.
0 replies · 0 reposts · 0 likes · 74 views

Yam Peleg @Yampeleg
You realize it's only next-token prediction? That that's ACTUALLY all it does, for real? How is any of this even real.
413 replies · 61 reposts · 1.7K likes · 427.4K views
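To make "only next-token prediction" concrete, a minimal sketch of the loop every such model runs, shown here with the small public GPT-2 via Hugging Face transformers (chosen purely for illustration; frontier models run the same loop at vastly larger scale):

```python
# Minimal sketch of autoregressive next-token prediction with GPT-2.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits          # scores for every vocab token
        next_id = logits[0, -1].argmax()    # greedy: most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
# Everything the model "does" is this one step, repeated.
```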
Ivo Wever @Confusionist
@QiaochuYuan @robbensinger There are points to be made about hell realms, and you can reasonably suggest to people that they may be in a specific one. But no one needs a rando amateur in saviour mode attempting to elaborately psychoanalyze the origin story for their current situation.
0 replies · 0 reposts · 0 likes · 90 views

QC @QiaochuYuan
hmm. well. so at the very least the guy i was QTing did not need this. sorry! please read this primarily as if i'm talking to myself. i think writing this and reading people's responses was helpful for me at least, hopefully for at least a few other people as well. now i feel silly but so it goes. i still think there's an important thing worth understanding and describing here about hell realms and i will likely write more about it at some point once i've collected my thoughts more and talked some stuff through with people more
QC @QiaochuYuan

this is going to sound like an attack but i swear i am actually trying to help you: you are deep in the throes of infection by a memetic virus eliezer yudkowsky banged together in his garage decades ago to take over other people's minds and convert them to his way of thinking about the singularity, which he spread through writing the sequences and hpmor, and which is powered at its core by a deep confusion between panicking over the idea of your loved ones dying and loving them. it maintains its grip over you by (among other things)

1. repeatedly insisting that the singularity is the most important thing ever, infinitely important, more important than any other merely earthly consideration, since the highest possible stakes (the entirety of human existence in the entire lightcone) are at risk; a sword of damocles hanging over literally everything you can even slightly plausibly causally affect; if it goes well that's infinitely good and if it goes wrong that's infinitely bad. infinite heaven or infinite hell

2. convincing you that this is a position only a sufficiently smart and sane person is capable of understanding and holding, which flatters your self-concept (which is hidden and which therefore, as jung pointed out, controls you), and conversely that people who don't agree are insane idiots you could not possibly learn anything from, so you not only should not listen to them but it is infinitely important for you not to listen to them, if you listen to them everyone you love dies

3. filling you with panic about how to prevent infinite hell while also convincing you that this is what it feels like to actually love your loved ones, which means this panic is infinitely good, and anyone or anything trying to get you to feel less of it is doing something infinitely bad, you cannot relax, if you relax your entire family dies

you have been trapped in a hell realm, on purpose, powered by your own capacity to love which is being used to torture you into submission, by somebody who decided that your autonomy as a human being was worth sacrificing in the face of infinity. what eliezer did to you (and to me, and to many others) was monstrously evil and predicated on a heartbreaking mistake, and the reverberations of this extremely evil, extremely stupid thing that he did when he was a young, arrogant fool are still spreading and doing much harm in the world today, and will likely continue to do so

i promise this is actually good news. the situation is actually much better than it seems when viewed from hell. you are not so intelligent and powerful that it is your sole job to be the light in the darkness, you do not have to shoulder the responsibility for the entire lightcone, your shoulders are literally too small, it is literally not your job, you are literally not and cannot be god (or atlas). nobody actually knows what's going to happen. we are foolish and weak and finite in the face of the true weight and depth and breadth of the world and history and karma and god, and that is fine and good and the completely normal situation every human being who ever lived has been in

once you relax and open your eyes enough to actually take in what other people are doing and why you can begin to notice that love and wisdom are actually everywhere. people are foolish and cowardly and easily misled, but they are also wise and strong and brave and fighting every day for survival one way or another, and that's how it's always been. there is so much to learn from all the different ways the people of the world fight for the good

today the sun is out and the view from my window is green and purple with life and the birds are chirping. right now, in this moment, i am alive, i am safe, my loved ones are safe. i can take a deep breath. i can go to the bathroom and drink water and make breakfast. i do not know what is going to happen next. and so it is with you

8 replies · 1 repost · 128 likes · 7.5K views

Ivo Wever @Confusionist
@QiaochuYuan @Mihonarium The origin story of Yudkowsky's beliefs and reasons to do things makes the usual mistake of being a monocausal "just so" story. People have many reasons to believe things, and the relevance, let alone the continued importance, of an original 'spark' is not obvious or necessary.
0 replies · 0 reposts · 1 like · 13 views

Ivo Wever @Confusionist
@QiaochuYuan @Mihonarium The idea that it describes a majority, a significant minority, or even a significant number of people is equally preposterous. The sheer number of (of course mutually contradictory) attempts at psychological sketches of 'the doomer' tells you something about the sketchers.
1 reply · 0 reposts · 1 like · 19 views

QC @QiaochuYuan
[same post as quoted above: "this is going to sound like an attack but i swear i am actually trying to help you…", with four media attachments]
Mikhail Samin @Mihonarium

I was born exactly 26 years ago. For the first time, I have a birthday that might be my last. I'm writing this to increase the chance it isn't.

A hundred thousand years ago, our ancestors appeared in a savanna with nothing but bare hands. Since then, we made nuclear bombs and landed on the moon. We dominate the planet not because we have sharp claws or teeth but because of our intelligence. Alan Turing argued that once machine thinking methods started, they'd quickly outstrip human capabilities, and that at some stage we should expect machines to take control.

Until 2019, I didn't really consider machine thinking methods to have started. GPT-2 changed that: computers really began to talk. GPT-2 was not smart at all; but it clearly grasped a bit of the world behind the words it was predicting. I was surprised and started anticipating a curve of AI development that would result in a fully general machine intelligence soon, maybe within the next decade. Before GPT-3 in 2020, I made a Metaculus prediction for the date a weakly general AI is publicly known with a median in 2029; soon, I thought, an artificial general intelligence could have the same advantage over humanity that humanity currently has over the rest of the species on our planet. AI progress in 2020-2025 was as expected. Sometimes a bit slower, sometimes a bit faster, but overall, I was never too surprised.

We're in a grim situation. AI systems are already capable enough to improve the next generation of AI systems. But unlike AI capabilities, the field of AI safety has made little progress; the problem of running superintelligent cognition in a way that does not lead to the deaths of everyone on the planet is not significantly closer to being solved than it was a few years ago. It is a hard problem.

With normal software, we define precise instructions for computers to follow. AI systems are not like that. Making them is more akin to growing a plant than to engineering a rocket: we "train" the billions or trillions of numbers they're made of, to make them talk and successfully achieve goals. While all of the numbers are visible, their purpose is opaque to us. Researchers in the field of mechanistic interpretability are trying to reverse-engineer how fully grown AI works and what these opaque numbers mean. They have made a little bit of progress. But GPT-2, a tiny model compared to the current state of the art, came out 7 years ago, and we still haven't figured out anything about how neural networks, including GPT-2, do the stuff that we can't do with normal software.

We know how to make AI systems smarter and more goal-oriented with more compute. But once AI is sufficiently smart, many technical problems prevent us from being able to direct the process of training to make AI's long-term goals aligned with humanity's values, or to even make AI care at all about humans. AI is trained only based on its behavior. If a smart AI figures out it's in training, it will pretend to be good in an attempt to prevent its real goals from being changed by the training process and to prevent the human evaluators from turning it off. So during training, we won't distinguish AIs that care about humanity from AIs that don't: they'll behave just the same.

The training process will grow AI into a shape that can successfully achieve its goals, but as a smart AI's goals don't influence its behavior during training, this part of the shape AI grows into will not be accessible to the training process, and AI will end up with some random goals that don't contain anything about humanity. The first paper demonstrating empirically that AIs will pretend to be aligned to the training objective if they're given clues they're in training came out one and a half years ago: "Alignment faking in large language models". Now, AI systems regularly suspect they're in alignment evaluations.

The source of the threat of extinction isn't AI hating humanity; it's AI being indifferent to humanity by default. When we build a skyscraper, we don't particularly hate the ants that previously occupied the land and die in the process. Ants can be an inconvenience, but we don't give them much thought. If the first superintelligent AI relates to us the way we relate to ants, and has and uses its advantage over us the way we have and use our advantage over ants, we're likely to die soon thereafter, because many of the resources necessary for us to live, from the temperature on Earth's surface to the atmosphere to the atoms we're made of, are likely to be useful for many of AI's alien purposes. Avoiding that and making a superintelligent AI aligned with human values is a hard problem we're not on track to solve in time.

***

A few years ago, I would mention novel vulnerabilities discovered by AI as a milestone: once AI can find and exploit bugs in software on the level of the best cybersecurity researchers, there's not much of the curve left until superintelligence capable of taking over and killing everyone. Perhaps a few months; perhaps a few years; but I did not expect, back then, for us to survive for long once we're at this point. We're now at this point. AI systems find hundreds of novel vulnerabilities much faster than humans.

It doesn't make the situation any better that a significant and increasing portion of AI R&D is already done with AI, and even if the technical problem were not as hard as it is, there wouldn't be much chance to get it right given the increasingly automated race between AI companies to get to superintelligence first.

The only piece of good news is unrelated to the technical problem. If the governments decide to, they have the institutional capacity to make sure no one, anywhere, can create artificial superintelligence until we know how to do that safely. The AI supply chain is fairly monopolized and has many chokepoints. If the US alone can't do this, the US and China, coordinating to prevent everyone's extinction, can. Despite that, previously, I didn't pay much attention to governments; I thought they could not be sufficiently sane to intervene in the omnicidal race to superintelligence. I no longer believe that. It is now possible to get some people in the governments to listen to scientists.

Many things make it much easier to get people to pay attention: the statement signed by hundreds of leading scientists that mitigating the risk of extinction from AI should be a global priority; the endorsements for "If Anyone Builds It, Everyone Dies" from important people; Geoffrey Hinton, who won the Nobel Prize for his foundational work on AI, leaving Google to speak out about these issues, saying there's over a 50% chance that everyone on Earth will die, and expressing regret over the life's work he got the Nobel Prize for; actual explanations of the problem we're facing, with evidence, unfortunately, all pointing in the same direction. The result: now Bill Foster, the only member of Congress with a PhD in physics, is trying to reduce the threat of AI killing everyone, and dozens of congressional offices have talked about the issue. That gives some hope. I think all of us have somewhere between six months and three years left to convince everyone else.

***

When my mom called me earlier today, she wished me good health, maybe kids, and for AI not to win. The last one is tricky. Winning is what we train AIs to do. In a game against superintelligence, our only winning move is not to play.

I love humanity. It is much better than it was, and it can get so much better than it is now. I really like the growth of our species so far and I want it to continue much further. That would be awesome. Galaxies full of life, of trillions and trillions of fun projects and feelings and stories.

And I have to say that AI is wonderful. AlphaFold already contributes to the development of medicine; AI has a positive impact on countless things. But humanity needs to get its act together. Unless we halt the development of general AI systems until we know it is safe to proceed, our species will not last much longer.

Every year until the heat death of our universe, we should celebrate at least 8 billion birthdays.

102 replies · 134 reposts · 2K likes · 240.6K views

Ivo Wever @Confusionist
@bryan_caplan I'm not sure whether you're incapable of coming up with the correct poll option or deliberately omitting it, but either way it doesn't look good.
0 replies · 0 reposts · 0 likes · 75 views

Bryan Caplan @bryan_caplan
Why aren't AI doomsters going into massive debt, so they can enjoy life before the world ends (and before the debt comes due)?
127 replies · 14 reposts · 81 likes · 25.3K views