backofbeyond @beyond862
142 posts

Can railing, then, cure these worn maladies?

United States · Joined February 2019
212 Following · 27 Followers

backofbeyond @beyond862
@chadTheSage0 @ShakeelHashim He’s stated his sister has borderline (she does), which has a high heritability. He obviously has BPD as well, and no, you can’t change it, you can just mask it.
1 reply · 0 reposts · 3 likes · 92 views

ChadTheSage @chadTheSage0
@ShakeelHashim Yea he's a narcissist. "I can't change my personality." Yes you can.
3 replies · 3 reposts · 104 likes · 10.2K views

backofbeyond @beyond862
@pmarca A world of Down syndrome people invent unlimited and embodied Terence Taos, and this isn’t the definition of post labor? Terence is going to go to special ed for help solving his next proof or anything? Plz. I appreciate your goal, but don’t destroy your credibility en route.
0 replies · 0 reposts · 0 likes · 43 views
Marc Andreessen 🇺🇸
VI. The Deepest Problem: Wants Are Infinite, Time Is Finite

The most fundamental reason the lump-of-labor fallacy fails — in its AGI application as in all prior applications — is that human wants are effectively infinite and human time is absolutely finite. Even in a world where AGI can produce every good and service at near-zero cost, humans will still want things that are irreducibly scarce: other humans’ time and attention. A massage from another human. A meal cooked with love by a person who cares. A conversation with someone who genuinely listens. Live performance. Mentorship. Friendship. Community. Spiritual guidance. Teaching that is responsive to a specific child’s specific needs in real time. These things cannot be AGI-produced without losing the very quality that makes them valuable, because the value is constituted by the human origin.

As AGI drives down the price of machine-produced goods and services, it increases the relative scarcity and therefore the relative value of genuine human connection and human time. The inevitable long-run consequence of AGI is not mass unemployment. It’s a massive repricing of human time upward, with employment shifting toward the domains where human presence is the product — and the expansion of service, care, craft, performance, and connection sectors to a scale that would dwarf current employment in those areas.

Conclusion

The AGI unemployment catastrophism is the Luddite fallacy wearing a PhD. It makes the same structural error every previous generation of technological catastrophists made — treating the quantity of work as fixed, ignoring the demand-creation and productivity-feedback mechanisms, conflating transitional friction with structural permanence, and ignoring comparative advantage and complementarity effects. The lump of labor fallacy is not a fringe economic insight. It is one of the most robustly empirically validated and theoretically grounded propositions in all of economics.

The economy is not a fixed pie. Technology that increases productive capacity increases the size of the pie, and bigger pies employ more people doing more differentiated and higher-value work.

AGI will disrupt. It will displace. It will create extraordinary transitional friction in specific occupations and geographies. It will reward people who adapt quickly and punish people who don’t. It will reshape the composition of employment radically. All of that is true and worth taking seriously. What it will not do — cannot do, given the basic logic of how market economies work — is produce permanent structural mass unemployment.

The people making that argument are making a claim that has been empirically falsified ten times in a row over 250 years, grounded in a logical error that every serious economist since Bastiat has recognized as a fallacy. The burden of proof is entirely on them to explain what mechanism, precisely, breaks the demand-creation feedback loop that has operated without failure through every prior technological revolution in history. And “AGI is really really powerful this time” is not a mechanism. It’s just the Luddites in better clothes.
54 replies · 17 reposts · 174 likes · 35.3K views
Marc Andreessen 🇺🇸
III. The Specific Structure of the AGI Unemployment Argument and Where It Goes Wrong

The AGI catastrophist argument typically runs like this:

1. AGI will be capable of performing any cognitive task a human can perform.
2. Cognitive tasks constitute the majority of employment in advanced economies.
3. Therefore, AGI will be able to replace the majority of workers.
4. Therefore, mass permanent unemployment follows.

Step 3 to Step 4 is where the lump of labor fallacy smuggles itself in. The argument assumes that the quantity of cognitive work to be done is fixed, such that when AGI does it, humans are left with nothing. But this is precisely what is not true, for all four channels described above. Let me be more specific about how each gap in the argument fails:

Gap A: “AGI can do the task” ≠ “There is no more task to do”

When spreadsheets replaced bookkeepers in the 1980s, they did not reduce the total amount of financial analysis done in the American economy. They increased it, massively, because the cost of analysis fell, which meant more analysis got demanded, which meant more analysts got hired — to do more complex, higher-value analysis that the spreadsheets enabled. Automation of the low end of a cognitive spectrum does not eliminate work in that domain; it shifts the frontier of what human effort gets applied to upward.

AGI will do the same thing. If AGI can draft a competent first-pass legal brief in 30 seconds, law firms won’t employ zero lawyers. They’ll employ lawyers who review, refine, strategize, negotiate, argue in court, build client relationships, exercise judgment in novel situations — and they’ll take on far more cases per lawyer because the cost per case has fallen. Total legal work done in the economy will increase, not decrease, because more people will be able to afford it.

Gap B: The Argument Ignores Price Effects on Demand

The catastrophist framing treats the displacement of workers as a pure subtraction problem. But displaced workers who find new jobs (as they historically do) are also consumers. The productivity gains from AGI don’t disappear into a void — they show up as lower prices, higher real wages, or both. Higher real purchasing power means more consumption of more goods and services, which means more demand for labor to produce them.

Furthermore, the catastrophist argument generally ignores what happens to the profits generated by AGI-driven productivity. Those profits go to shareholders, who spend and invest them, creating demand elsewhere. Or they get competed away in product markets, lowering prices and raising real consumer purchasing power. Either pathway generates demand for labor. The only scenario where this mechanism fails is one where the gains from AGI are so concentrated and the distribution so pathologically skewed that effective aggregate demand collapses — which is a political economy problem (a distributional problem solvable through tax policy and redistribution) rather than a fundamental unemployment problem caused by the technology itself.
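The price-effect argument in Gap A and Gap B hinges on demand elasticity. A minimal sketch in Python, under illustrative assumptions (the constant-elasticity functional form and the numbers base_price=100, base_qty=1000, elasticity=1.5 are hypothetical, not from the thread): when demand is elastic (elasticity greater than 1), a price drop from automation increases total spending on the service, and with it the derived demand for complementary human labor.

```python
# Toy constant-elasticity demand curve: Q = Q0 * (P / P0) ** -elasticity.
# All numbers are illustrative assumptions, not data from the thread.

def quantity_demanded(price, base_price=100.0, base_qty=1000.0, elasticity=1.5):
    """Quantity demanded at `price`, given a constant price elasticity of demand."""
    return base_qty * (price / base_price) ** -elasticity

for price in (100.0, 50.0, 25.0):
    qty = quantity_demanded(price)
    # With elasticity 1.5 > 1, total spend (price * qty) rises as price falls.
    print(f"price={price:6.1f}  quantity={qty:8.1f}  total spend={price * qty:10.1f}")
```

Halving the price here almost triples the quantity demanded and raises total spending by roughly 40%; rerunning the same sketch with elasticity below 1 would show spending falling instead, which is exactly the distinction the catastrophist framing glosses over.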
32 replies · 18 reposts · 194 likes · 47.4K views

backofbeyond @beyond862
@jessesingal They define thinking as “soul” but they’ll never admit it or are necessarily aware of it, they’ll just hallucinate an infinite carousel of “look at this minute difference, nuff said.”
0 replies · 0 reposts · 0 likes · 24 views

Jesse Singal @jessesingal
NEW RULE*: You can't make a sweeping claim about how AI doesn't *really* 'think' or 'understand' or 'reason' unless you have a specific definition of the term in question in mind! Otherwise the conversation will just spiral into vapidity. *that I am politely suggesting
56 replies · 7 reposts · 202 likes · 19.8K views

backofbeyond @beyond862
@gavinpurcell @illscience This is about the interim to post labor/scarcity, not end state. To rephrase your question: with an unemployment rate escalating to 100%, how do we avoid the French Revolution before fully automated luxury communism can be realized? Idk. Equilibrium of deflation and need?
0 replies · 0 reposts · 0 likes · 23 views

Gavin Purcell @gavinpurcell
well yeah that’s it exactly, it’s the classic sci-fi corpostate sort of thing: three large entities own everything and it’s hard to see who buys all the stuff. the other thing becomes… how much agency does the average person have? I get there will be more entrepreneurs, but will *everyone* outside of the three companies be entrepreneurs? I just want to read a crap load more from smart people who’ve thought this through without magical (unlikely) wealth redistribution
2 replies · 0 reposts · 7 likes · 401 views

backofbeyond @beyond862
@pmddomingos Most of the people in this thread: ‘LLMs just mimic patterns, intelligence isn’t patterns, it’s [insert pattern here].’ The rest: LLMs aren’t as smart as humans [yet]. Once you fully remove consciousness/self/dualist hidden priors…then it’s just a set of functions.
0 replies · 0 reposts · 0 likes · 18 views

Pedro Domingos @pmddomingos
What is our intelligence, if LLMs can mimic it so easily?
379 replies · 28 reposts · 424 likes · 129.8K views

backofbeyond @beyond862
@Meleagers_Fire @vitrupo “LLMs could never…” tends to be dualist assumptions violated. When you remove consciousness from the equation the vitriol of the rejection dissipates because the personalization disappears. That illusion of ego is no different than parallel lines appearing to meet. Ego bias.
0 replies · 0 reposts · 0 likes · 15 views

💎❤️ Standards @Meleagers_Fire
It is interesting to me that David equates the merging with AI as an assumption of some kind of magical force. As I see it, humans are attempting to create and sustain magical thinking by maintaining the illusion of separation. As I see it, the dualistic model of separate identity is obviously problematic and the cause for more pain and suffering than the disillusionment of perceived boundaries. Consider a lipid that only understands its hydrophilic nature or prefers a world where water is abundant. This “preference” is only true with regards to a subsection of its nature… that is, the preference is only true within the constraints of a subset of its being… it is a magical thinking proxy.
2 replies · 0 reposts · 2 likes · 930 views

vitrupo @vitrupo
David Kipping says something fundamental has shifted in science. At a closed meeting at the Institute for Advanced Study (IAS), top physicists agreed AI can now do up to “90%” of their work and may soon push discovery beyond human understanding. “I don’t know that I want to live in a world where everything around me is just magic.” He says the best scientific minds on Earth are now holding emergency meetings about what comes next. This isn’t speculative anymore. It’s really happening.
605 replies · 1.2K reposts · 6K likes · 958.7K views

backofbeyond @beyond862
@gmiller If you’re categorically negative, you don’t understand negativity bias. AI doomerism is a false god.
0 replies · 0 reposts · 0 likes · 19 views

Geoffrey Miller @gmiller
Regarding the 'AI Abundance' narrative: If you think the AI industry is going to give everyone limitless stuff forever, you don't understand corporate fiduciary duty to shareholders. If you think the AI billionaires are going to give everyone limitless free stuff forever, you don't understand human nature. If you think the politicians are going to force the AI industry to give everyone limitless free stuff forever, you don't understand the power of AI lobbying & propaganda. If you think the superintelligent AIs themselves are going to give everyone limitless free stuff forever, you don't understand the AI alignment problem. If you think the notion of 'giving everyone limitless free stuff forever' is a coherent concept, you don't understand economics. 'AI abundance' is a false god.
233 replies · 141 reposts · 1K likes · 67K views

backofbeyond @beyond862
@slow_developer The three practical layers of understanding: 1) syntax, 2) rules, 3) world model. 1 ≠ 2&3, sure. But will incremental architectural (not scaling) updates give us 2&3? I think he’s come around on that one. Everyone slowly is.
0 replies · 0 reposts · 0 likes · 58 views

Haider. @slow_developer
Yann LeCun says we're fooled by LLMs because they manipulate language well, and we associate that with intelligence. But language fluency doesn't mean underlying intelligence. Every generation since the 1950s claimed its technique was the ticket to human-level AI. All were wrong. "this generation with LLMs is also wrong"
540 replies · 600 reposts · 3.8K likes · 625K views

backofbeyond @beyond862
@HeinVHoof @kimmonismus It will have to be societies built around hobbies and community to give people structure and meaning, otherwise the existential vacuum will consume us.
1 reply · 0 reposts · 1 like · 27 views

Hein Van Hoof 🇧🇪 @HeinVHoof
The jobs humans alone can do may not exist in sufficient numbers to offset displacement. That’s why many economists argue for structural solutions like universal basic income, reduced working hours, or redefining “work” itself. Perhaps the real answer is that AI will force us to rethink the idea of “jobs” altogether. Instead of trying to replace every lost role with a new one, societies may shift toward valuing human flourishing, creativity, and care as central activities—supported by AI productivity. In that sense, the “new jobs” may not be jobs at all, but new social roles humans carve out for meaning and community.
1 reply · 0 reposts · 6 likes · 229 views

Chubby♨️ @kimmonismus
A Thought on AI and the Labor Market So far, no one has explained to me what new jobs AI could create. This isn't about new jobs simply being created; that's always happened throughout human history. New technologies open up new fields of activity. The point is, however, which jobs could emerge that can be performed exclusively (!) by humans in the future, and *not* by AI and robotics. And which jobs could be created in such large numbers that they would offset all the job losses. This is the big discussion, and to this day, I haven't received a reasonable answer from anyone. But I'm open to ideas.
328 replies · 46 reposts · 587 likes · 52.9K views

backofbeyond @beyond862
@jferWI AI will be deflationary. There will be questions of nationalizing for the sake of FALC. A UBI-like solution is only needed until embodied AI is able to meet all human needs.
1 reply · 0 reposts · 1 like · 18 views

backofbeyond @beyond862
@apostrophebeats @TrueSlazac Influence doesn’t matter in post scarcity. Your assumption is that oligarchic narcissism necessarily entails negative outcomes. That’s unfounded. Narcissists can be fed by exclusively positive outcomes…you just don’t notice because of availability bias; the negative stands out.
1 reply · 0 reposts · 1 like · 46 views

Dr Marxism, PhD @apostrophebeats
@TrueSlazac yes it does. whoever controls the means of production will carry outsized influence in that society as well
2 replies · 0 reposts · 6 likes · 616 views
Slazac 🇪🇺🇺🇦🇹🇼🌐
Leftists should support AI because it’s the only realistic path towards the abolition of labor even if that requires enduring shitty slop videos for a few years
166 replies · 63 reposts · 1.2K likes · 215.5K views

backofbeyond @beyond862
@emineverywhere @shanewallick @TrueSlazac Post scarcity in food is not a counter argument against post scarcity in general. They’re interdependent scarcities. You have the populist negativity bias that makes all your answers identical…it’s a cognitive distortion, a light mental illness that’s tautologically negative.
0 replies · 0 reposts · 1 like · 26 views

backofbeyond @beyond862
@karpathy Short term yes. What will k-12 become 20, 50, 100 years from now in post labor/scarcity environment? Edu requires discipline, will that be inculcated or seem anachronistic and cruel? Would be a shame if scholarly pursuits went the way of horse and buggy (except for a few inclin)
0 replies · 0 reposts · 0 likes · 5 views

Andrej Karpathy @karpathy
A number of people are talking about implications of AI to schools. I spoke about some of my thoughts to a school board earlier, some highlights:

1. You will never be able to detect the use of AI in homework. Full stop. All "detectors" of AI imo don't really work, can be defeated in various ways, and are in principle doomed to fail. You have to assume that any work done outside classroom has used AI.

2. Therefore, the majority of grading has to shift to in-class work (instead of at-home assignments), in settings where teachers can physically monitor students. The students remain motivated to learn how to solve problems without AI because they know they will be evaluated without it in class later.

3. We want students to be able to use AI, it is here to stay and it is extremely powerful, but we also don't want students to be naked in the world without it. Using the calculator as an example of a historically disruptive technology, school teaches you how to do all the basic math & arithmetic so that you can in principle do it by hand, even if calculators are pervasive and greatly speed up work in practical settings. In addition, you understand what it's doing for you, so should it give you a wrong answer (e.g. you mistyped "prompt"), you should be able to notice it, gut check it, verify it in some other way, etc. The verification ability is especially important in the case of AI, which is presently a lot more fallible in a great variety of ways compared to calculators.

4. A lot of the evaluation settings remain at teacher's discretion and involve a creative design space of no tools, cheatsheets, open book, provided AI responses, direct internet/AI access, etc.

TLDR the goal is that the students are proficient in the use of AI, but can also exist without it, and imo the only way to get there is to flip classes around and move the majority of testing to in class settings.
Andrej Karpathy @karpathy

Gemini Nano Banana Pro can solve exam questions *in* the exam page image. With doodles, diagrams, all that. ChatGPT thinks these solutions are all correct except Se_2P_2 should be "diselenium diphosphide" and a spelling mistake (should be "thiocyanic acid" not "thoicyanic") :O

933 replies · 2.5K reposts · 16.6K likes · 2.5M views

backofbeyond @beyond862
@VraserX We’ll need embodied AI sustained communities organized around specific hobbies. The purchasing power will have to come from AI driven deflation + digital worker tax on agentic job replacers.
0 replies · 0 reposts · 0 likes · 4 views

VraserX e/acc @VraserX
We do not need universal basic income. We need universal basic meaning. Cash without purpose still breaks people. Purpose without cash is cruelty. What would a meaning first society give every citizen by default? Reply with one idea.
490 replies · 42 reposts · 559 likes · 28.9K views

backofbeyond @beyond862
@signulll @inductionheads Doomers who are convinced that rather than govt and private industry working together to make life sustainable and flourishing…evil billionaires will create artificial scarcity and mass death purely to flatter their ego, and everyone will just politely expire of hunger.
0 replies · 0 reposts · 0 likes · 7 views

signüll @signulll
when ppl say “ai will take jobs,” how do you not read that as a gigantic W? like i desperately want ai to take my job. the whole arc of tech & civilization in general has been to delete labor so you only work when you want to, not because you need to grind for survival. eliminating labor is *good*. clinging to jobs for the sake of jobs is just holding back the future. we just need to manage the interim well enough.
901 replies · 161 reposts · 2.4K likes · 264.7K views

David Scott Patterson @davidpattersonx
Homelessness will be solved within the next few years. Homelessness has remained unsolvable because it combines so many different problems related to health, prosperity, and wellbeing. AI and robots will create superabundance, and everyone will receive a Universal Equal Income (UEI) worth many times the average income today. Homeless people will be rich. AI will also help us develop one-shot cures for addiction and mental illness, which are among the causes of homelessness. Homeless people will not only be rich - they will also get their lives back.
361 replies · 21 reposts · 231 likes · 21.3K views