r/Accelerate to the Singularity

843 posts

@Accelerate__Now

r/acc tech-acc. Without AGI, we have a 100% chance of death. A random AGI beats those odds. We can do better than random: towards a plurality of aligned AGIs. XLR8!

Australia · Joined November 2012
237 Following · 105 Followers
Daniel Jeffries
Daniel Jeffries@Dan_Jeffries1·
AI will create more jobs than any other technology in history. The doomers' fundamental error isn't just the lump of labor fallacy. It's deeper than that. They assume a finite problem space.

This is the fundamental error of AI and job doomers. They look at the economy and see a fixed amount of work to be done, a pie that can only be sliced thinner as machines take bigger bites. They see humans as a competitive resource to be eliminated, competing for a finite amount of work and a finite set of problems to solve. This is fundamentally, totally, and completely wrong.

The pie isn't fixed. It never was. And the reason it isn't fixed is baked into the very nature of technology itself. Technology is nothing but abstraction stacking. And abstraction stacking is infinite. Therefore the work is infinite.

The hammer didn't reduce the amount of work. It moved the work up the stack. And the new work was more complex, more varied, and more interesting than the old work. Complexity breeds more complexity and more variety. Once you have houses instead of mud huts, you have a cascade of new problems that didn't exist before. Plumbing. Wiring. Insulation. Roofing materials that don't rot. Drainage systems so the foundation doesn't flood. Fire codes so your neighbor's bad wiring doesn't burn down the whole block. Each of those problems becomes a job. A plumber. An electrician. An insulator. A roofer. A civil engineer. A building inspector. None of those jobs existed when we lived in mud huts. They exist because we solved the mud hut problem.

Think of all of human technological development as a stack of abstraction layers, each one built on top of the ones below it. At the bottom: raw survival. Finding food. Building shelter. Making fire. These are the base-layer problems. Each major technology wave solved a base-layer problem and in doing so created an entirely new layer of problems above it:

Agriculture solved "how do we reliably eat?" — and created problems of land ownership, irrigation, crop rotation, storage, trade, taxation, and governance.

Writing solved "how do we remember things across generations?" — and created problems of literacy, education, record-keeping, law, bureaucracy, and literature.

The printing press solved "how do we spread knowledge at scale?" — and created problems of intellectual property, censorship, journalism, publishing, public opinion, and democratic discourse.

The steam engine solved "how do we generate mechanical power without muscles?" — and created problems of factory design, worker safety, urban planning, railroad engineering, coal mining, labor relations, and environmental pollution.

Electricity solved "how do we deliver energy anywhere?" — and created problems of grid design, power generation, appliance manufacturing, electrical safety codes, utility regulation, and an entire consumer electronics industry.

The Internet solved "how do we connect all human knowledge?" — and created problems of cybersecurity, digital privacy, online commerce, content moderation, network infrastructure, cloud computing, social media dynamics, and an entire digital economy that employs tens of millions.

Notice the pattern? Each solution didn't just solve a problem. It created an entirely new problem space that was larger, more complex, and more varied than the one it replaced. The stack grows. It never shrinks. It's turtles all the way down and all the way up.
243 replies · 324 reposts · 1.3K likes · 124.6K views
r/Accelerate to the Singularity
@deanwball in r/accelerate we talk about both with equal fervor and frequency. in the USA certain states will outlaw AI legal advice, therapy, medical advice, etc, and the backlash will be extreme, and those politicians will get voted out. Sometimes negative symptoms hasten a diagnosis.
0 replies · 0 reposts · 1 like · 217 views
Dean W. Ball
Dean W. Ball@deanwball·
the pro-ai astroturf movement thing that sort of metastasized out of sb 1047 still feels indelibly sb 1047 shaped today. take the obsession they have with "doomers" and their "speculative science-fiction scenarios about AI causing catastrophic risks." we still hear these lines today from the astroturfers and the small number of authentic unwitting fools who got astroturfed.

yet the actual, powerful 'pro-ai' line is something more like "right now, only the rich get great legal and medical and other expert advice, and the entrenched classes who provide those services want their work to remain expensive." and indeed, many of the state laws we see are doing just this: barring AI from providing licensed expert advice in various ways, or restricting use in a structurally similar fashion. you'd expect the 'pro-ai astroturf' crowd to be all over this stuff, but few of them are. instead they are pouring monotonically more money into this quixotic quest against the catastrophic risk bills--some of the cleanest AI legislation there is from a political-economy perspective.

I wish someone would astroturf the "AI means mass abundance of services previously reserved for the elites" argument--it's true after all! the entrenched classes (the medical establishment, the state bars, etc.) really are lobbying for regulatory capture. where is the outrage? but instead the pro-ai people obsess over this deeply unpersuasive idea that AI policy is a manichean struggle against "the doomers."

so bad laws--laws that hinder good uses of ai by normal people and keep expensive things expensive--are passing like crazy, and the White House is bullying states into voting down light-touch catastrophic-risk transparency laws while the career staff of the national security agency point at mythos like the black monolith. it is an incredibly stupid outcome. it is also remarkably sb 1047-shaped.

that debate really programmed the brains of many, especially on the accelerationist side (and btw, for those lacking context, I was among the very earliest sb 1047 skeptics, writing screeds about that early attempt at ai regulation back in February 2024 when the VCs were telling me "oh, it's just a state law, that'll never matter." true story.) it is time for a great reset of ai policy.
5 replies · 19 reposts · 193 likes · 17.2K views
r/Accelerate to the Singularity
@deanwball AI is an existential risk for professions that rely on obfuscation and esoterica, the legal and medical industries chief among them. Doubtless lawyers are working overtime to outlaw AI providing legal advice. And by doing so they will show their hand, and hasten their own obsolescence.
0 replies · 0 reposts · 1 like · 186 views
Daniel Jeffries
Daniel Jeffries@Dan_Jeffries1·
AI has had one of the safest technology roll-outs in history. Read that again, because it's a fact. It's used by billions with a tiny fraction of a percent of actual problems. And yet it's seen as dangerous or unsafe by many. There's a constant chorus of people shouting about its supposed dangers with no evidence whatsoever where it matters most: here, in the real world.

So what do we actually have here in reality? A few cases in courts about early versions of ChatGPT allegedly being too sycophantic and not recognizing mental illness or someone in trouble, that are still making their way through the courts and may prove wrong or right (some of the media-released snippets are damning but not definitive). Time will tell. Innocent until proven guilty. The very nature of court litigation is often to find a scapegoat for something that gets thrown out in the actual process of the trial. But outside of that, what? Answer: not much. And viewed through the lens of other technology in history, its incident rate is probably lower than lawnmowers'.

It makes little sense when you think of it through the lens of other tech like cars and planes, which had atrocious early track records. AI even has a better track record of safety than nuclear. Despite being incredibly safe overall, nuclear had several high-profile and dangerous failures with Three Mile Island and Fukushima. With AI, nothing of the sort. Not even remotely.

I can hear the naysayers now saying "so far, but just you wait!" And yet we keep waiting. And waiting. And waiting. AI fear is a remarkably resilient beast. It's resilient despite zero actual harms manifesting here in reality land.

Self-driving cars are remarkably safer than humans, who kill 1.2 million people and injure 50 million more each year worldwide. (I wrote 1.5M in an earlier post and missed my typo.) Waymo cars are roughly 10X safer than humans, with minimal injuries and fatalities. Even early self-driving cars had incredibly good safety records vis-a-vis early cars driven by humans, which had bad safety records even up through the 1950s and 60s. When it comes to cars, society actually resisted making them safer. People fought having to wear seatbelts because they had to pay for them. They resisted early drunk driving laws as impingements on their freedom. Early plane travel was incredibly dangerous. It took many, many decades of work to make planes the marvels of safety they are today.

What about jobs? We have AI execs talking about the "end of work" and yet they're hiring more people in the very profession that is supposedly most exposed: programming. Often at super high salaries approaching half a million dollars a year. Demand for good programmers is rising. We've certainly had execs claim they let people go because of AI. But a deeper look at these claims quickly reveals that most of them are just an easy way to get around labor laws or to simp to shareholders, and are more readily attributable to COVID over-hiring. Tell shareholders "AI" is the reason for layoffs and you're rewarded for being more "efficient." Tell them you have to lay people off because you over-hired or just made mistakes and your stock gets hammered.

The truth is that anyone who uses AI seriously at the frontier sees how much they have to babysit it and hand-hold it and steer it. It is not doing any job end to end. It's doing tasks and that is about it. Now it will certainly get better, but will it magically make the leap from task to job? Maybe. But we'll need evidence of that in, you guessed it, reality before we start making policy decisions.

So what other problems do we have here in reality? Nothing but the two problems I've already discussed at length in my work: surveillance and weapons of war. But these are not new. They're just things that AI enhances, just like computers enhanced them, and better materials science, and many other tech revolutions before them.

Again, ask yourself, really ask yourself, where are the real problems? And again, there's a loud chorus of people who keep shouting "just you wait, I imagined this problem in my head and it's totally inevitable because I say so," and yet billions of people are using this technology every day with no problems. Now you could say "Russell's turkey." The trend is the trend until it breaks. But then the burden is on you to prove the trend is breaking. There is no evidence of it other than in people's minds.

At what point do people just wake up and realize that none of this makes any sense? It's not that there won't be problems. It's just that oftentimes the problems we imagine (we've been imagining the end of all work for 100 years) don't match what happens in actual reality. The problems turn out to be very different, and you can only deal with them when they come up. A lot of politicians today imagine that if they had only "gotten ahead" of the Internet with regulations we'd be in a much better place. Utter nonsense. When Section 230 was passed, the number one question among Congress was "what is the Internet?" And these folks are supposed to imagine TikTok 25 years later? No. We have to deal with problems as they come up, not imaginary problems that some very vocal people promise are coming. The burden is on them to prove it, and writing long essays from "first principles thinking" and scary books does not count as evidence for anything at all.

At what point does the cognitive dissonance hit and people wake up and say, maybe I was wrong? Probably never. Beliefs are a tricky thing, and wrong beliefs have caused more problems in world history than AI ever will.
35 replies · 14 reposts · 146 likes · 30.2K views
Ethan Mollick
Ethan Mollick@emollick·
We really need a better word for the good kind of AI psychosis, the one where someone goes into a fugue state with the latest model and returns 40 days later from the mountaintop with something new.
93 replies · 55 reposts · 910 likes · 47.1K views
Daniel Jeffries
Daniel Jeffries@Dan_Jeffries1·
More people will die from suppressing AI than from the imaginary AI apocalypse. They'll die from restricting safe self-driving cars that are 90% better drivers than people, who kill 1.5 million people and injure 50 million more every year. They'll die from the vaccines and cures that never get created. They'll die from the myriad helpful inventions that never get created by geniuses in a datacenter. They'll die from preventable diseases they could have asked their chatbots about, so they were better informed when they went to see their doctors, but couldn't ask because short-sighted legislators made the chatbots refuse to answer. They'll die from the slower economy that stifles robot-driven factories over wildly overblown jobs-apocalypse fears, which will mean we never get a vast array of new and more affordable goods. They'll die from the cheaper solar panels and batteries that never get made by those automated factories, which would slow climate damage and provide cheap energy to underserved areas. They'll die from the super-smart tele-AI doctors that never get deployed to remote areas. And they'll die as fanatics from the Stop AI movement radicalize their followers to shoot people or throw firebombs.
Max Tegmark@tegmark

Senator @BernieSanders has invited me and three other AI researchers to a public panel on AI existential risk & international cooperation at the U.S. Capitol 7pm Wednesday April 29th. RSVP here to join us for this important conversation: forms.office.com/Pages/Response…

208 replies · 178 reposts · 945 likes · 206.5K views
Daniel Jeffries
Daniel Jeffries@Dan_Jeffries1·
It's always easy to see jobs that will go away but hard to see the jobs that will get created. Try explaining "web developer" to an 18th century farmer. You can't do it because he's got to envision a chain of inventions like electricity, wires, computers, the internet and more. Stop listening to wise fools like Hinton. AI will do what technology always does, create a wild variety of new jobs and possibilities and opportunities.
Aaron Levie@levie

The jump from working with a chatbot to having an agent that actually helps automate a process requires a real amount of work. Most companies will need to have dedicated people who are responsible for bringing automation to their teams, instead of leaving this up to every individual employee. Partly because the work is more technical than we imagine today, and partly because it's just hard to do this as a side project. The job spec is to map out new workflows with agents, implement new systems to deploy agents, make sure the agent has all the right (up-to-date) context to work with, wire up internal systems to connect to the agents, create evals for the agents, figure out where the human is in the loop, manage the system when there are new upgrades, help with the change management of the existing business process, and so on. These jobs may come from IT or engineering, or live directly in the business function itself. They'll be called different things depending on the company, and in some sense it's the future of software engineering that you'll see huge growth of in non-tech companies. Most companies will have to be hiring for this now or in the future, and it's another example of the kind of new jobs that will be created by AI.

30 replies · 17 reposts · 146 likes · 31.8K views
NIK
NIK@ns123abc·
BREAKING: Google DeepMind has assembled a strike team because Anthropic is mogging them on coding.

Led by Sergey Brin and DeepMind's CTO.

Goal: Force recursive self-improvement by turning coding models into full AI researchers that can automate the entire R&D loop.

GDM is focusing on:
>long-context coding tasks
>training models on GDM's private codebase

"To win the final sprint, we must urgently bridge the gap in agentic execution and turn our models into primary developers"

ACCELERATE
176 replies · 260 reposts · 3.5K likes · 342.9K views
Sarah Chieng
Sarah Chieng@MilksandMatcha·
Giving away 5 Windsurf Max ($200/month) plans. Each person will get 3 months of free Windsurf Max (highest tier). Try out SWE-1.6, Cognition's latest, fastest, and most intelligent model, powered by @cerebras. Winners will be selected from comments in 48 hours; comment below why you want it.
Cognition@cognition

We’re releasing SWE-1.6, our best model in both intelligence & model UX. SWE-1.6 matches our Preview model on SWE-Bench Pro while dramatically improving on various behavioral axes. It’s available today in Windsurf in two modes: free tier (200 tok/s) and fast tier (950 tok/s).

1K replies · 51 reposts · 859 likes · 161.2K views
r/Accelerate to the Singularity
r/Accelerate to the Singularity@Accelerate__Now·
@deanwball @hamandcheese subreddits share private, vetted Chinese WhatsApp groups. It's safer and more organised than you think. People in the groups regularly pay for random tests and upload the results, linked to the testing platform's results. Peptides are a massive industry now.
0 replies · 0 reposts · 0 likes · 220 views
Samuel Hammond 🦉
Samuel Hammond 🦉@hamandcheese·
I'm on week 4 of retatrutide and already down 12.4 lbs. More interestingly: - I seem to have much more energy and focus - I find myself spontaneously preferring standing desks, walking rather than ubering, opting into exercise etc. - My blood sugar no longer crashes after eating - I don't drink nearly as much but when I do my hangovers seem a lot weaker - My food preferences spontaneously shifted in favor of fish, salads and fresh fruit - My GI health improved a lot, I think mostly thanks to it being easier to resist trigger foods I've had no negative symptoms or downsides, and am still on a low starter dose (~2.5mg)
36 replies · 10 reposts · 502 likes · 60.8K views
John Carmack
John Carmack@ID_AA_Carmack·
It is generally frowned upon to have LLMs precisely regurgitate part of their training set, but it is an interesting question how you could use LLM training to nearly losslessly compress a huge corpus like the entirety of the Internet Archive. The Hutter Prize is for perfect compression, but only one GB. There would be different trades at the PB level, and it gets much more interesting when it doesn’t have to be bit-accurate.
104 replies · 50 reposts · 1.5K likes · 140.7K views
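The idea Carmack is gesturing at rests on a standard equivalence: a predictive model plus an entropy coder compresses a stream to roughly the model's cross-entropy on that stream, so a better predictor means fewer bits. A minimal sketch of just that principle, in Python — the bigram model and sample string are invented for the demo; this is not the Hutter Prize setup, and no actual entropy coder is included, only the ideal (Shannon) code length each model implies:

```python
import math
from collections import Counter, defaultdict

def ideal_bits(text, prob):
    """Ideal (Shannon) code length in bits: sum of -log2 p(ch | context)."""
    return sum(-math.log2(prob(text[:i], ch)) for i, ch in enumerate(text))

def uniform_prob(alphabet):
    """Baseline model: every symbol equally likely, ~log2(|alphabet|) bits each."""
    p = 1.0 / len(alphabet)
    return lambda ctx, ch: p

def bigram_prob(corpus, alphabet):
    """Laplace-smoothed bigram model estimated from `corpus`."""
    counts = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    vocab = len(alphabet)
    def p(ctx, ch):
        prev = ctx[-1] if ctx else None  # last symbol is the bigram context
        c = counts.get(prev, Counter())
        return (c[ch] + 1) / (sum(c.values()) + vocab)  # add-one smoothing
    return p

text = "the theory of the thing is that the thin thread holds"
alphabet = sorted(set(text))
uniform_cost = ideal_bits(text, uniform_prob(alphabet))
bigram_cost = ideal_bits(text, bigram_prob(text, alphabet))
print(f"uniform model: {uniform_cost:.1f} bits")
print(f"bigram model:  {bigram_cost:.1f} bits")
```

An actual compressor would feed these conditional probabilities into an arithmetic coder; the ideal code length computed here is the lower bound such a coder approaches, and swapping the bigram model for a trained LLM is what drives the cost toward the corpus's entropy under that model.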
r/Accelerate to the Singularity
r/Accelerate to the Singularity@Accelerate__Now·
@MorePerfectUS You're freaking sick! There's something seriously wrong with you! Spilling innocent pedestrian blood to protect bad jobs? Do you realise that YOU'RE the bad guys? You're morally deranged and should seek wisdom somewhere other than up your own myopic ass.
0 replies · 0 reposts · 2 likes · 1.4K views
More Perfect Union
More Perfect Union@MorePerfectUS·
NEW: If Waymo gets its way, 2 million workers will be out of work. When Waymo gets a firm hold on a city, wages go down. Some drivers now have to work 12 hours a day, 7 days a week just to get by. This isn't inevitable — but Big Tech is spending millions to make you think it is.
981 replies · 640 reposts · 2.7K likes · 4.2M views
Alexander Kruel
Alexander Kruel@XiXiDu·
In 2036, when the newest internal Anthropic model solved cancer, all the major medical institutions lined up to use it. But skeptics weren't buying it. It turned out that once you knew where to look, previously released public models could reproduce the cure.
Roko 🐉@RokoMijic

Mythos was a marketing exercise

18 replies · 25 reposts · 798 likes · 50.1K views
r/Accelerate to the Singularity
r/Accelerate to the Singularity@Accelerate__Now·
@theojaffee you're absolutely correct, of course, and will get your time wasted by confused, reactive people with vastly less ability to think through these issues. i'm only commenting to remind you that most people who agree don't leave comments
0 replies · 0 reposts · 0 likes · 14 views
Theo
Theo@theojaffee·
The typical policy scenario proposed by AI doomers will, on net, increase x-risk. If you ban all AGI development (or, as some have proposed, even 2024-level open-source models) outside of a single, highly secured, perhaps state-run Manhattan Project with no contact with the outside world, you are much more likely to end up with a misaligned superintelligent singleton than if you develop AI in a broad and multipolar way, in which case misaligned AGIs can be counteracted and defeated by aligned AGIs.
21 replies · 4 reposts · 45 likes · 5.7K views
r/Accelerate to the Singularity
r/Accelerate to the Singularity@Accelerate__Now·
@deanwball it is their primary personality quirk. and when you drill down into their arguments they justify premises with "well, obviously X is true". so it often boils down to common sense fallacy and begging the question
0 replies · 0 reposts · 1 like · 93 views
Dean W. Ball
Dean W. Ball@deanwball·
The characteristic of AI xrisk arguments that makes them so prone to stirring violence is NOT, per se, the notion of existential stakes. Instead it is the *certainty* that xriskers tend to have. “If x, then y” is not a probabilistic statement; it is a mathematical guarantee.
28 replies · 10 reposts · 154 likes · 19.1K views
Dean W. Ball
Dean W. Ball@deanwball·
@teortaxesTex It’s so much deeper than DCs alone; so much other physical world stuff that is ~illegal in the U.S. and may just remain so indefinitely. China has a bigger lead in the latter than in the DCs, at least for now.
3 replies · 2 reposts · 96 likes · 9.2K views
Dean W. Ball
Dean W. Ball@deanwball·
If you needed a reminder of why America’s founders were deeply skeptical of democracy and the will of the masses, there is none better than the fact that the American people seem to be enthusiastically banning a wave of industrialization before it has even begun in earnest.
156 replies · 182 reposts · 1.8K likes · 131.4K views
Dean W. Ball
Dean W. Ball@deanwball·
It’s crazy that some are just straight up in denial about mythos having the capabilities anthropic says it does. Usually the in-denial-about-AI community is able to cloak their views in at least *some* intellectual garb, but this time it’s just, “it’s not real.” Wild. Also sad.
63 replies · 30 reposts · 655 likes · 164.6K views