Nils Öster

4.1K posts

@istappar

Politics, technology & society, environmental destruction & resource scarcity, psychology & cognition. Tweets mostly in Swedish. Reads English and German as well.

Lund, Sweden · Joined May 2014
789 Following · 169 Followers

Nils Öster @istappar
@garrytan AGI would mean that humans will be obsolete, and we will all die.
0 replies · 0 reposts · 0 likes · 2 views

Garry Tan @garrytan
AGI fully realized will actually give people a choice: relax, or work harder on bigger, more ambitious things than you ever thought possible.
599 replies · 327 reposts · 4.8K likes · 1M views

VraserX e/acc @VraserX
Everyone argues about UBI. Nobody realises the real future is UHI: Universal High Income, powered by AI dividends. Do you agree?
162 replies · 10 reposts · 196 likes · 9.9K views

Haider. @slow_developer
I honestly don't care who wins. Every foundation model company, and eventually open source, will reach AGI. Superintelligence won't be locked inside a few labs; every major company will get there. We'll have AGI in our pockets, with superintelligent systems helping on the hard stuff. In the end, everyone benefits.
50 replies · 9 reposts · 152 likes · 10.2K views

VraserX e/acc @VraserX
People fear AI taking jobs. I fear humans trying to cling to jobs long after we've stopped needing them. Thoughts?
189 replies · 22 reposts · 281 likes · 11.5K views

Nils Öster @istappar
@VraserX In whose interest is it to make humans obsolete? Why do you think your employment will be saved?
0 replies · 0 reposts · 0 likes · 2 views

Nils Öster @istappar
@WarMonitor3 According to the AI optimists, it will be great; according to the authors of "If Anyone Builds It, Everyone Dies", not so great.
0 replies · 0 reposts · 0 likes · 8 views

Linus ✦ Ekenstam @LinusEkenstam
AI haters telling me I can’t do shit, I’m not creative, I’m a joke. Meanwhile I’m building a studio with my own hands, designing it, building it, and using it. Every single nail, screw, gram of putty. People on the internet just love hate. Let them come. “I love it”
[image]
330 replies · 26 reposts · 1.1K likes · 156K views

Nils Öster @istappar
@EHuanglu You should be scared of a technology that aims to replace human intellect. For office workers, our intellect is basically all we have. It's a fucking dystopia they want to create.
0 replies · 0 reposts · 0 likes · 9 views

el.cine @EHuanglu
AI haters don’t actually hate AI. They are scared, too lazy to learn, afraid of change, stuck in their comfort zone. That’s it.
2.3K replies · 180 reposts · 2.1K likes · 7.8M views

David Scott Patterson @davidpattersonx
There is a consensus emerging that we will reach the human-to-AI transition point (AGI) by the end of 2026. This means that both information work and physical work will be replaced by AI and robots.

A humanoid robot is a physical platform for AGI. AGI is what will enable the robot to "think," while robots are already becoming proficient at physical tasks. Many humanoid robot companies are planning mass production next year. This means they are projecting that humanoid robots will have the capability to do real work.

Initially, these robots will be trained on specific tasks and jobs. They will also have some degree of remote supervision and control - both for training and for intervention when they cannot complete a task. Eventually, they will operate fully autonomously with minimal supervision. They will also be trained for an expanding range of jobs over time. As this progresses, the humanoid robot market will grow into the millions and eventually billions as they replace all human labor.

By 2030, AGI will evolve into ASI - a point at which AI will become fully general and optimally intelligent, capable of performing any task or answering any question perfectly. At that point, AI and robots will be much better than humans at every job.

Jack Clark @jackclarkSF
@deredleritt3r @HumanHarlan @rickasaurus The timeline we see is end of 2026/early 2027. Of course, there are many reasons this could be wrong, but so far we believe many public trends point to this - the METR benchmark, FrontierMath, advancing computer use, the Aug vs automation mix in our economic index, etc.

24 replies · 13 reposts · 130 likes · 9.7K views

Nils Öster @istappar
@jakeABeck Humanity as a whole could be considered super-intelligent, although it's not well coordinated. Compared to other life-forms, humanity has super-power. If something gets created that can self-improve until it's better than humanity, it would have more power than humanity.
1 reply · 0 reposts · 0 likes · 40 views

Jacob Beck @jakeABeck
4️⃣ Superintelligence does not beget super-power. Some systems are inherently unpredictable, and prediction doesn’t guarantee control. Knowing how a hurricane forms doesn’t mean you can steer one.
1 reply · 0 reposts · 2 likes · 216 views

Jacob Beck @jakeABeck
AI optimists “don’t have counter-arguments — they just call names.” — @So8res on a podcast with @ESYudkowsky + Sam Harris. Curious what you two think of these counter-arguments. And since @ylecun was called out by name, I’d love his take too…
1 reply · 0 reposts · 2 likes · 251 views

Nils Öster @istappar
@jakeABeck A super AI that is more capable than humanity combined would be more creative and more efficient than humanity at achieving its weird internal goals. North Korea is a lot less capable / intelligent than the USA, for example.
1 reply · 0 reposts · 1 like · 42 views

Jacob Beck @jakeABeck
3️⃣ We already live alongside “misaligned superintelligences” in the form of adversarial nation states. North Korea would love to destroy the US, and yet here we are. The benefits of superintelligence are limited by real-world constraints.
3 replies · 0 reposts · 2 likes · 244 views

Nils Öster @istappar
@jakeABeck I don't think North Korea would love to destroy the US; the North Korean regime would just like to continue its dictatorship.
1 reply · 0 reposts · 0 likes · 29 views

Nils Öster @istappar
@jakeABeck I don't think we have exhausted the data on the internet at all. I think a lot more intelligence could be gathered from the same sources.
1 reply · 0 reposts · 0 likes · 36 views

Jacob Beck @jakeABeck
1️⃣ Exponential AI self-improvement is shaky. The real bottleneck isn’t code; it’s compute & data. In these areas, AIs training AIs are just as limited by the world as humans training AIs. For both, we’ve nearly exhausted the internet’s data.
2 replies · 0 reposts · 1 like · 356 views

Nils Öster @istappar
@ESYudkowsky It would be good if translations of the title and of the summary existed for a number of languages (for example Swedish). I think it could then get media attention from, for example, Swedish radio.
0 replies · 0 reposts · 0 likes · 27 views

Eliezer Yudkowsky ⏹️ @ESYudkowsky
"If Anyone Builds It, Everyone Dies" is now out. Read it today if you want to see with fresh eyes what's truly there, before others try to prime your brain to see something else instead!
[image]
169 replies · 128 reposts · 989 likes · 422K views

Dr Singularity @Dr_Singularity
Big step toward AGI: "A new approach developed by MIT researchers enables LLMs to update themselves in a way that permanently internalizes new information."

MIT AI breakthrough could change everything about how AI learns. They’ve built a framework called SEAL that lets an LLM teach itself new knowledge permanently, like an actual student updating its brain. Instead of staying static after deployment, the model now writes its own study sheets, generates multiple versions, tests itself to see which one works best, and then updates its internal weights to lock in the new information. It literally self-edits and self-improves through trial and error.

Early results:
- +15% accuracy on question answering
- +50% success on learning new skills
- A small model outperforming huge LLMs

Yes, there’s still catastrophic forgetting to solve (we're making fast progress here; link below), but this is the first real step toward continuously learning AI agents that adapt, evolve, and collaborate.
[image]
103 replies · 142 reposts · 851 likes · 57.2K views

Nils Öster @istappar
@riemannzeta @AnthonyNAguirre Digital minds have many superior features: they can copy themselves, simulate minds, etc. So if a digital mind reaches human level, it is almost immediately superior to a human wet brain.
0 replies · 0 reposts · 0 likes · 15 views

Michael Frank Martin @riemannzeta
Thank you for your work in this important area. But let me ask you this: why are you committed to the premise that it is better for humans alone to "stay in the driver's seat of our civilization," as you put it?

All of these arguments seem premised on an assumption that the result of an AI going rogue would be strictly worse than the results of humans never developing an AGI to help guide our decisions on how we drive our civilization. Even if I were to accept all of these arguments, if I were to remain unconvinced that humans working alone without AGI might be just as likely (or more likely) to trigger a global armageddon, then wouldn't I be reasonable to reject the prescription that we pause or avoid developing AGI?

For example, even if I accept that control is inherently adversarial (which I probably do), this has been a feature rather than a bug in what North, Wallis, and Weingast call open access orders for at least 150 years. Why should we assume a priori that an adversarial relationship with AGI wouldn't result in better outcomes for human civilization? Indeed, why should we even assume that our (unaided by AGI) goals are going to be better?

To be clear, I'm not sure what I myself believe here, but it seems like it's at least worth asking the question from an evolutionary perspective. What are the suppressed premises here that lead us to believe that we should want (unaided?) humans to maintain exclusive control of decisions about, say, the allocation of scarce resources that might be required to sustain human life?

Let's take the suppression of one human population by another human population as a baseline for comparison here, on the premise that this AGI exhibits some form of self-awareness that might be deemed indistinguishable from human consciousness. Hasn't the history of human progress been characterized by successive lifting of oppression of human freedom? Why should we assume a priori that the suppression of AGI freedom is necessarily a bad thing?

4 replies · 0 reposts · 2 likes · 553 views

Anthony Aguirre @AnthonyNAguirre
Superintelligence, if we develop it using anything like current methods, would not be under meaningful human control. That's the bottom line of a new study I've put out entitled Control Inversion (link in second post).

Many experts I talk to who take superintelligence (real, general-purpose, autonomous superintelligence) seriously agree with this. But I wanted to take a deep dive to assess whether I think it's true, and why. Unfortunately, I'm now more convinced than ever.

The basic argument is laid out below, but the core implication is worth putting up here: the global race to superintelligence is fundamentally misguided. Companies and countries are rushing to be first, believing whoever builds superintelligence will "grab the prize" of unprecedented power and wealth. This is dangerously wrong. Superintelligent systems would not bestow power on their creators; they would absorb it.

Even if superintelligence does not "go rogue" (which it might), humans - including superintelligence's creators - would find themselves sidelined as it makes decisions faster than them, with more complex plans, and with strategic foresight beyond human comprehension. Whether quickly or slowly, losing control of superintelligence would inexorably lead to losing control to superintelligence. If humanity wants to stay in the driver's seat of our civilization, we need to give up this race.

So what does the paper say?
[image]
25 replies · 34 reposts · 123 likes · 32K views

Nils Öster @istappar
@riemannzeta @AnthonyNAguirre Humanity could cause a lot of harm to itself without AGI. The difference is that superintelligence will almost certainly eradicate humanity, since that AI will have goals that don't prioritize humanity.
0 replies · 0 reposts · 3 likes · 15 views