๐”พ๐•–๐•’๐•ฃ ๐•„๐•–๐•Ÿ๐•ฅ๐•’๐•ฅ๐•š๐• ๐•Ÿ

40.3K posts

๐”พ๐•–๐•’๐•ฃ ๐•„๐•–๐•Ÿ๐•ฅ๐•’๐•ฅ๐•š๐• ๐•Ÿ banner
๐”พ๐•–๐•’๐•ฃ ๐•„๐•–๐•Ÿ๐•ฅ๐•’๐•ฅ๐•š๐• ๐•Ÿ

๐”พ๐•–๐•’๐•ฃ ๐•„๐•–๐•Ÿ๐•ฅ๐•’๐•ฅ๐•š๐• ๐•Ÿ

@GearMentation

Planets are wombs. You'd never go back in, except for recreation. Transhumanism with continuity. Singularity is also psychological, archetypes are breaking.

Joined October 2011
1.7K Following · 282 Followers
Dr Alexander D. Kalian
Dr Alexander D. Kalian@AlexanderKalian
We always hear about "the singularity" of AI. But does anyone actually know of a single credible case of AI meaningfully improving itself outside of narrow, heavily engineered tasks and studies? It seems more likely that current LLMs just sit there until prompted. Prompting then presumably produces a bunch of not very useful, time-wasting attempts at gathering data or fine-tuning. Genuine "self-improving" loops still appear to require constant human intervention - e.g. scaffolding agents, designing recursive prompt structures, setting up evaluation environments etc. This kinda defeats the whole point of an autonomous intelligence explosion.
14 replies · 0 reposts · 13 likes · 855 views
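Kalian's scaffolding point can be made concrete with a toy loop: in any current "self-improvement" setup, the mutation operator, the scoring function, and the acceptance rule are all human-authored. A minimal sketch, where `mutate`, `score`, and the selection rule are hypothetical illustrations rather than any real system:

```python
# Toy "self-improvement" loop. The model stand-in just mutates its own
# prompt; every part that gives the loop direction (the mutation, the
# score, the acceptance rule) is human-written scaffolding.

def mutate(prompt: str, step: int) -> str:
    # stand-in for an LLM proposing a revision of its own instructions
    return f"{prompt} [rev {step}]"

def score(prompt: str) -> float:
    # human-designed evaluation; here it simply rewards longer prompts
    return float(len(prompt))

def improvement_loop(seed: str, generations: int = 3) -> str:
    best = seed
    for step in range(generations):
        candidate = mutate(best, step)
        if score(candidate) > score(best):  # human-chosen acceptance rule
            best = candidate
    return best

# each "improvement" exists only relative to the hand-written score()
print(improvement_loop("summarize the input"))
```

Swap a real model call in for `mutate` and the structure is unchanged: the loop cannot tell better from worse without the human-built evaluator, which is the tweet's point about scaffolding.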
Penny2x
Penny2x@imPenny2x
You read this, and you just move on. Who can blame you? It takes a lot of effort to process. 200 times the US economy EVERY YEAR? The concept of money made effectively obsolete through hyperabundance? I promise you even your most difficult objections become child's play when we are operating at that scale. You can make enough "cool" places, so that everyone can have their dream home. Some will be tucked away secret spots in the mountains. Some will be underwater. Some will be in unimaginably high skyscrapers. Or even on the moon. We will build theme parks and sky islands and other structures so grand that they are currently outside the scope of our imagination. What will you build?
Elon Musk@elonmusk

@aaronburnett 100TW of compute per year would be 200 times the US economy EVERY YEAR. Dollars won't even be used if we get to that point.

25 replies · 5 reposts · 72 likes · 3.8K views
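Musk's quoted figure can be sanity-checked with back-of-envelope arithmetic. Assuming US GDP of roughly $27 trillion per year (an assumption; the tweet states no baseline), "200 times the US economy" from 100 TW of compute implies an economic value of about $54 per watt per year:

```python
# Back-of-envelope check of the quoted claim. US_GDP is an assumed
# baseline (~$27T/yr); the tweet itself gives no figure.
US_GDP = 27e12                 # dollars per year (assumption)
compute = 100e12               # 100 TW, from the tweet
claimed_output = 200 * US_GDP  # "200 times the US economy EVERY YEAR"

value_per_watt = claimed_output / compute
print(value_per_watt)  # 54.0 dollars per watt per year
```

Whether a watt of compute can plausibly generate $54 of value every year is exactly the assumption the claim stands or falls on.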
James Melville 🚜
James Melville 🚜@JamesMelville
I absolutely hate AI. And I'm not saying this as a technological Luddite. AI is going to wipe out millions of jobs and is already diminishing authentic creativity. A technological step forward that is actually a giant leap backwards for original human endeavour.
539 replies · 614 reposts · 4K likes · 86.7K views
๐”พ๐•–๐•’๐•ฃ ๐•„๐•–๐•Ÿ๐•ฅ๐•’๐•ฅ๐•š๐• ๐•Ÿ retweetledi
David Shapiro (L/0)
David Shapiro (L/0)@DaveShapi
It's interesting watching the Tech Right deal with the cognitive dissonance of "maximum acceleration" colliding with Protestant work ethic. It basically boils down to "labor and financial precarity are good... because reasons."

When you unpack it, there is some substance. Humans without striving tend to decay. But in no way does that mean you need enforced wage slavery for striving. Some, like Marc Andreessen, simply seem to have bought the "labor is virtuous" doxa hook, line, and sinker. And since he has "zero introspection," he has no clue where that value comes from. And they are all mistaking a personal aesthetic preference for a universal human truth.

Then, on the functional/utilitarian side, the Tech Right seems to think that human involvement is necessary for things like "entropy generation," without realizing that humans produce less entropy when scrambling in survival mode.

So, to simplify: there's the individualistic view, i.e. the "labor is good for the human animal" argument, which is defensible. But modern wage slavery is "good for the human animal" in the same way that prison meets your social needs. Then there's the macro view, i.e. "enforced precarity causes more prosperity and progress because creativity" (or something along those lines). But that second opinion utterly fails to recognize that most of the Great Men of the past had zero precarity.

Take Charles Darwin. He never needed to work a day in his life because he was gentry. In fact, he was chronically ill, a condition that could easily have ended his work. But ample financial security meant that he could spend weeks resting when needed, and decades working on his theory of evolution.

There is zero evidence that precarity boosts creativity or "entropy generation." Eccentricity (high-entropy signals) requires financial security.
33 replies · 22 reposts · 135 likes · 5.1K views
๐”พ๐•–๐•’๐•ฃ ๐•„๐•–๐•Ÿ๐•ฅ๐•’๐•ฅ๐•š๐• ๐•Ÿ retweetledi
Jesús Enrique Rosas
Trump just blew up every amateur take on Iran with one sentence on Truth Social, and most people still pretend they don't understand what happened.

The mainstream media went blue in the face yelling that the Iran deal had collapsed. Then 30 minutes ago, Trump posted: "The Government of Iran is seriously fractured, not unexpectedly so." That is the president of the United States telling you, in writing, that the regime in Tehran is split. He did not post it as a guess. He posted it as an explanation for what he is doing next.

And what he is doing next is the part that rearranges the whole board. Trump is not bombing. Trump is not walking away. Trump just extended the ceasefire, kept the naval blockade running, and announced the American military will hold off on strikes until the Iranian government, in his words, comes up with a unified proposal.

Translation: the Iranian hardliners inside the Revolutionary Guard just pushed the Iranian negotiators out of the room. Trump is not rewarding the hardliners with a strike that lets them rally the country. Trump is not rewarding the hardliners with a walkaway that lets them claim victory. Trump is handing the whole Iranian regime a rope. Either the economic faction inside the regime comes back to the table and pushes the hardliners aside (and they have until the blockade finishes starving the country to do it!) or the hardliners 'win' the internal fight with a Pyrrhic victory, just to say goodbye to their bridges and power plants.

The clock is running against Tehran. Not Washington.
110 replies · 552 reposts · 2.1K likes · 109.6K views
๐”พ๐•–๐•’๐•ฃ ๐•„๐•–๐•Ÿ๐•ฅ๐•’๐•ฅ๐•š๐• ๐•Ÿ retweetledi
Ihtesham Ali
Ihtesham Ali@ihtesham2005
The CEO of Anthropic believes the defeat of cancer, the cure for Alzheimer's, and the doubling of the human lifespan could all happen before 2035. He wrote 15,000 words explaining exactly how. Almost nobody read it. His name is Dario Amodei. Before he founded the company that built Claude, he was vice president of research at OpenAI. He has a PhD in biophysics. He spent years watching these systems scale up from the inside, and he is one of the very small number of people alive who has full visibility into what the next generation of models can actually do. He called the essay Machines of Loving Grace, and the reason most people did not get past the first few pages is that it is structured like a scientific paper, not a manifesto. The argument is careful. The predictions are specific. And the thing buried inside it is the clearest vision of the near future that anyone running a frontier AI lab has ever put on paper. Here is what he actually said, and why it matters that he said it. He started with a definition most people skip past, and the whole essay depends on it. He said powerful AI, the thing he is describing, is a system smarter than a Nobel Prize winner across almost every relevant field. Biology. Programming. Math. Engineering. Writing. Not a chatbot. Not a copilot. Something that, given the resources used to train it, could be repurposed at inference time to run millions of instances of itself, each operating at ten to a hundred times human speed. He gave this entity a name. A country of geniuses in a datacenter. That phrase is doing a lot of work. He was careful about the word country. He meant that these millions of instances could be fine-tuned into different specializations and could collaborate the way a real civilization of scientists would. Some of them focused on cancer. Some on materials science. Some on statecraft. Some on building the next generation of themselves. 
All running in parallel, all communicating instantly, all operating at speeds the physical world cannot match. Then he made the prediction that almost nobody has actually sat with. He said the arrival of this system, which he thinks could happen as early as 2026, would compress the progress that human biologists would have made across the entire rest of the twenty-first century into a window of five to ten years. Fifty to a hundred years of biomedical research, done in under a decade. He called this the compressed twenty-first century, and he said it with the same tone a physicist uses to describe a verified result. The specific outcomes he predicted in that window are the ones most people cannot emotionally absorb on the first read. Reliable prevention and treatment of nearly all natural infectious disease. Elimination of most cancer. Effective cures for genetic disease. Prevention of Alzheimer's. The doubling of the human lifespan to around one hundred and fifty years. He walked through the mechanism for each of these one at a time. He did not present them as miracles. He presented them as what happens when the rate limit on biology shifts from the availability of brilliant scientists to the speed of physical experiments, because the scientists become effectively infinite. The part that separates his vision from every science fiction version of this story is the part he spent the most words on. He said he does not believe AI will instantly transform the world the day it arrives. He rejected that framing explicitly. He said intelligence has hard limits imposed by the physical world. Clinical trials take time. Cells take time to grow. Human institutions are slow. Regulatory frameworks exist. The speed of light applies. 
His actual argument is that progress will be bounded by these physical and social frictions, not by the intelligence itself, and that the work of the next decade is figuring out which of those frictions are real and which are just habits we have not yet realized we can dissolve. He wrote that a significant fraction of progress in biology has historically come from a tiny number of broad measurement tools and techniques. CRISPR. Cryo-electron microscopy. AlphaFold. Maybe one major tool per year, collectively responsible for more than half of all progress in the field. The country of geniuses in the datacenter is not faster because it has better ideas. It is faster because it can produce hundreds of these foundational tools in the time human civilization currently produces one. He did not stop at biology. He laid out five categories where he expected the same compression to happen. Biology and physical health. Neuroscience and mental health. Economic development and poverty. Peace and governance. Work and meaning. Each one gets the same treatment. The same specificity. The same refusal to hedge. And he ended the essay with a line that has stayed with me since the first time I read it. He wrote that if all of this happens in five to ten years, the defeat of most diseases, the lifting of billions of people out of poverty, a renaissance of the things that make human life worth living, everyone watching it will be surprised by the effect it has on them. The reason this essay matters is not that the predictions are correct. They might not be. He said that himself, many times, in careful language. The reason it matters is that it is the most specific, most grounded, most empirically careful vision of what the next decade could actually look like coming from someone who has the resources and the team to make it happen. Most people are preparing for a future that looks like a faster version of the present. 
Dario Amodei spent fifteen thousand words trying to tell anyone who would listen that the future is not going to rhyme with the past, and that almost nobody is actually thinking at the scale of what is about to be handed to them. Whether or not he turns out to be right, he is the one person in the room who has drawn the map. And when a map like that shows up from someone with that kind of vantage point, the right move is not to dismiss it as optimism. It is to read it all the way through and then ask yourself what you would build if you actually believed him. darioamodei.com/essay/machines…
38 replies · 183 reposts · 640 likes · 43.5K views
๐”พ๐•–๐•’๐•ฃ ๐•„๐•–๐•Ÿ๐•ฅ๐•’๐•ฅ๐•š๐• ๐•Ÿ
I guess this is controversial to some...
Ihtesham Ali@ihtesham2005

An MIT professor who built the world's first neural network machine said something about intelligence that nobody in Silicon Valley wants to admit.

His name was Marvin Minsky. He co-founded MIT's artificial intelligence lab with John McCarthy in 1959. He built SNARC, the first randomly wired neural network learning machine, in 1951 as a graduate student at Princeton. He won the Turing Award. He advised Stanley Kubrick on 2001: A Space Odyssey. Isaac Asimov, who was not a modest man, said Minsky was one of only two people he would admit were more intelligent than him.

In 1986, after decades of building machines that could think, Minsky published a book about something far more unsettling: how humans think, and why we are wrong about almost everything we believe about it. The book is called The Society of Mind. It has 270 essays. Each one is a page long. Together they build a single argument that most people, when they first encounter it, reject immediately because it is too uncomfortable to accept.

The argument is this: you do not have a mind. You have thousands of them. What you experience as a single, unified self making clear-headed decisions is not a thinker. It is an outcome. The result of hundreds of tiny, specialized, mostly mindless agents competing, negotiating, overriding, and occasionally cooperating with each other beneath the surface of your awareness. You do not decide things. You are what is left over after the arguing stops.

Minsky was precise about this. He wrote that the power of intelligence stems from our vast diversity, not from any single perfect principle. He called this the trick that makes us intelligent, and then immediately added: the trick is that there is no trick. There is no central processor. No ghost in the machine. No unified self sitting behind your eyes, calmly evaluating options and choosing rationally. There is only the parliament. And the parliament is always in session.
This reframing destroys the standard explanation for every failure of self-control. The reason you procrastinate is not laziness. It is that the agent in you that understands long-term consequences is losing an argument to the agent that wants comfort right now, and neither of those agents has a decisive vote. The reason you change your mind the moment someone pushes back is not weakness. It is that the social agent, the one that monitors status and belonging, just outweighed the analytical one. The reason willpower fails is not a character flaw. It is that you sent one small agent into a fight against dozens, and you called that discipline. Minsky had a specific line that breaks this open completely. He said: in general, we are least aware of what our minds do best. The things you do with the most apparent ease, reading a face, walking through a crowded room, understanding a sentence, catching a ball, are not simple at all. They are the products of staggeringly complex agent networks that run so smoothly, so far below conscious access, that you experience them as effortless. The things that feel like work, the logical arguments, the deliberate choices, the careful plans, are actually the clumsy surface layer, the small fraction of mental activity you can observe at all. You have been taking credit for the wrong parts of your own intelligence. The practical implication is the one that most productivity advice misses entirely. If your decisions are not made by a single rational self but by whichever coalition of agents happens to win the moment, then the game is not about training yourself to be more disciplined. The game is about designing the environment so that the right agents win without needing a fight. This is why removing your phone from the room works better than deciding not to check it. This is why writing one task on an index card works better than building a sophisticated system. This is why commitment devices beat motivation every time. 
You are not strengthening your will. You are changing the conditions of the argument so that the outcome you want becomes the path of least resistance. Minsky spent his entire career building machines that could imitate intelligence. What he discovered in the process was that natural intelligence, the kind running inside every human brain on earth, is nothing like what we think it is. It is not a single flame burning in a single chamber. It is a city. Loud, chaotic, full of competing interests, with no mayor. The people who understand this stop trying to win the argument through force of will. They learn to build a better city instead.

0 replies · 0 reposts · 0 likes · 2 views
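The society-of-mind framing in the quoted thread, agents casting competing votes with no central chooser, and "environment design" as removing an agent's trigger rather than outvoting it, can be sketched in a few lines. Agent names and weights below are illustrative inventions, not examples from Minsky's book:

```python
# Toy society-of-mind sketch: a "decision" is whichever action the
# heaviest coalition of agents backs; no central chooser exists.
from collections import defaultdict

def decide(agent_votes: dict) -> str:
    """Each agent casts a weighted vote for an action; the action with
    the largest total coalition weight wins."""
    totals = defaultdict(float)
    for action, weight in agent_votes.values():
        totals[action] += weight
    return max(totals, key=totals.get)

votes = {
    "long_term_planner": ("work", 0.6),
    "comfort_seeker":    ("scroll_phone", 0.5),
    "social_monitor":    ("scroll_phone", 0.3),
}
print(decide(votes))  # scroll_phone: two weak agents outvote one strong one

# "environment design" = removing an agent's trigger instead of fighting it
votes.pop("social_monitor")   # e.g. the phone leaves the room
print(decide(votes))          # work
```

The second call illustrates the thread's practical point: the winning action changes not because any agent got "stronger," but because the conditions of the argument changed.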
๐”พ๐•–๐•’๐•ฃ ๐•„๐•–๐•Ÿ๐•ฅ๐•’๐•ฅ๐•š๐• ๐•Ÿ retweetledi
Dustin
Dustin@r0ck3t23
Elon Musk just compared artificial intelligence to a magic genie. The audience heard a fairy tale. He was describing a psychological collapse.

Rishi Sunak asked him what happens to the labor market. Musk bypassed the economy entirely. Elon Musk: "There will come a point where no job is needed. You can have a job if you want to have a job for sort of personal satisfaction."

Everyone assumes losing your labor is the worst case. Musk just told you it is the best case. Lose your labor and you lose a paycheck. Lose your usefulness and you lose the reason you get out of bed. Musk: "One of the challenges in the future will be: how do we find meaning in life?"

Look at the genie myth. Every version gives you exactly three wishes. The limit is the entire point. Scarcity forces you to choose. Choice is where meaning comes from. Musk: "You just have as many wishes as you want." The limit is gone. Unlimited wishes means unlimited abundance. Unlimited abundance means zero friction. Human meaning has always been built entirely out of friction.

We spent all of civilization building something that could grant our every request. We never stopped to ask what happens to the mind the morning after it gets exactly what it wanted. We thought the worst fate was a world that demanded everything from us. Maybe it is. But the generation that figures out how to build meaning without scarcity will be the first in history that chose purpose instead of having it forced on them. That is not a crisis. That is the hardest graduation ceremony the species has ever faced.
59 replies · 61 reposts · 238 likes · 30.8K views
๐”พ๐•–๐•’๐•ฃ ๐•„๐•–๐•Ÿ๐•ฅ๐•’๐•ฅ๐•š๐• ๐•Ÿ retweetledi
Penny2x
Penny2x@imPenny2x
People have got a lot of nerve complaining about Elon's timelines. Do they forget he's basically the only person on Earth with the guts to even try half this shit? Blue Origin is finally landing rockets like 10 years later. Meanwhile SpaceX is working on Starship with 6x payload capacity. Tesla has CUSTOMERS driving PRODUCTION vehicles across the country without touching the controls. AFAIK Neuralink is in a league of its own. Several patients experiencing incredible quality of life improvements. It's not like this is new. He has acknowledged himself that his timelines are ALWAYS hyper optimistic. He does it that way on purpose to cultivate next level hard work. Any reasonably informed Tesla or Elon investor should approach every project and timeline with that in mind. Be happy that he pushes the limits. Be understanding when best case scenarios don't work perfectly. Trust the process.
49 replies · 66 reposts · 829 likes · 13.4K views
๐”พ๐•–๐•’๐•ฃ ๐•„๐•–๐•Ÿ๐•ฅ๐•’๐•ฅ๐•š๐• ๐•Ÿ retweetledi
Bunagaya
Bunagaya@Bunagayafrost
AGI. Mass unemployment. RSI. ASI. Abundance. Aging solved. "The powerful will genocide the plebs rather than pay UBI." People laugh and tell me it's naive to think otherwise. But ask yourself: would you want to be that guy? 1000 years from now, fancy dinner overlooking the planet. Someone turns to you: "so you're the one who killed 7.5 billion people because you thought the Georgia Guidestones were clever?" Awkward. And that's your reputation for the rest of time. They'll just pay the UBI. These people read history books too. Nobody wants to one-up Hitler, Stalin, and Mao to be remembered as the ultimate genocider when there was enough for everyone and so much more.
Bunagaya@Bunagayafrost

Problem: mass AI unemployment. Solution: UBI. No brainer, right?

Anti-UBI arguments I've encountered:

1) Look at history, there's always new jobs - Imagine if a superior intelligent alien species that can make infinite copies of itself had arrived in 1850 to compete against you in the labor market, working for energy and undercutting wages. How would that have played out?
2) AI sucks now so it'll never happen - Current capabilities are not future capabilities.
3) AI will never exceed human capabilities, so no mass unemployment - Current models already exceed humans at many tasks. The human brain proves that human-level general intelligence runs on 20 watts. We're not waiting for a physics breakthrough; we're engineering toward a known target. Extrapolate the trajectory.
4) There will always be human jobs, because AI doesn't have agency - You don't need agency to do tasks.
5) The cost of compute gives the 20-watt brain a competitive advantage - For a while.
6) Why not expand existing welfare, or create a new narrow welfare program, rather than UBI? - Unemployment never recovers and only expands. It's just easier to hand everyone the same amount from the get-go.
7) We can retrain people to do new jobs! - If AI can replace almost all human jobs, it can do almost any new jobs that are created.
8) UBI kills motivation. People need meaning in life. - Retirees and kids seem alright. There's more to life than trading labor for resources to survive.
9) The government will just let us starve. - People will revolt.
10) The powerful will kill all the useless people - ...like why? There's enough for everyone.
11) But what if mass AI unemployment never materializes? - No UBI then.
12) But how do you pay for it? - Printer go brrr. Print money, don't worry about the debt. If the AI is smart enough to replace human workers, money will essentially become worthless soon, because the AI will be capable of making ever better AIs. So you just need UBI to get over the transitional period until everything is essentially free.
13) But this is communism! - Not really, but I do see the similarities. It's just a new paradigm. Communism fails because people are imperfect. AI will essentially be as perfect as any intelligence can ever get.
14) But inflation will be insane! - Automation will be deflationary. You need something to offset it, so printing money and distributing it via UBI is a balancing act.
15) But there will always be jobs. It just won't be necessary jobs like maintaining infrastructure or producing energy. - Sure. People can do whatever they want, unshackled from labor-extracting economic incentives.
16) But what if bad people control AI and do bad things to us? - How does a lesser intelligence control a vastly superior intelligence?

98 replies · 15 reposts · 179 likes · 15.3K views
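The inflation argument in the quoted thread (automation is deflationary; money printing offsets it) is a quantity-theory claim, and it can be sketched in a few lines. Under MV = PQ with velocity held fixed, a simplifying assumption not stated in the tweet, doubling real output while doubling the money supply leaves the price level unchanged:

```python
# Quantity-theory sketch of "print money to offset automation deflation".
# MV = PQ; velocity V is held constant as a simplifying assumption.
M, V, Q = 1.0, 1.0, 1.0          # money supply, velocity, real output (normalized)
P = M * V / Q                    # initial price level: 1.0

Q_auto = 2 * Q                   # automation doubles real output
P_deflation = M * V / Q_auto     # 0.5 -> deflation if M is unchanged

M_printed = 2 * M                # offsetting issuance (the UBI channel)
P_offset = M_printed * V / Q_auto
print(P, P_deflation, P_offset)  # 1.0 0.5 1.0
```

The sketch shows only the accounting identity behind the claim; whether velocity really stays fixed, and whether issuance can be calibrated to output growth in practice, is exactly what the claim leaves open.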
Tim Pool
Tim Pool@Timcast
You can't fund UBI through taxing AI. The math does not add up.

You need greater input than output for the system to function, and taxing AI to give UBI generates massive output with insufficient input. The cost of the system will be greater than the revenue dispersed, creating a negative feedback loop and economic regression.

The argument then becomes that UBI will only be supplemental, but you still run into the inflation problem again. When given a choice between working 40 hours a week for 15K in wages on top of 10K in UBI, or taking the 10K alone, many people would choose unlimited free time and the 10K. In order to then hire someone for jobs we cannot automate, you have to increase pay for those jobs; thus the cost of the good increases, and the 10K supplement is now largely useless for most goods and services.

I break the question down like this. How many people do you know who play guitar? Most say quite a few. How many would choose to be a musician professionally if they had the choice? Most say, in fact, quite a few. How many have the talent to actually make it? None, if any. And that still assumes most people would choose to pursue a passion with some economic benefit. The reality is that many people would choose UBI and 40 hours of some creative work that ultimately provides no functional value to society and generates no revenue. We would effectively use tech to subsidize net-negative output from people. That system is bound to implode.

The most important thing to understand is that there are core necessities, such as housing and healthcare, that cannot be meaningfully automated. UBI is a pipe dream that can't even exist in a society with replicators from Star Trek. Land is still owned or leased, and money is the means of distribution. If you want to argue for government control of property, then you're just arguing for techno-communism.
Andrew Yang🧢⬆️🇺🇸@AndrewYang

We should tax the bots. blog.andrewyang.com/p/tax-the-bots

344 replies · 62 reposts · 699 likes · 62.2K views
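Pool's input/output claim can be put into rough numbers. A minimal sketch with assumed figures, the adult population, the UBI level, and the taxable AI-sector profit are all hypothetical scale-setting inputs, none of them from the tweet:

```python
# Illustrative arithmetic for the funding-gap argument. Every figure
# here is an assumption chosen for scale, not a number from the tweet.
adults = 258e6                   # rough US adult population (assumption)
ubi = 12_000                     # $1,000/month per adult (assumption)
cost = adults * ubi              # ~ $3.1 trillion per year

ai_profit = 0.5e12               # hypothetical taxable AI-sector profit
tax_rate = 0.30                  # hypothetical tax rate on that profit
revenue = ai_profit * tax_rate   # $150 billion per year

print(cost / 1e12, revenue / 1e12)  # trillions: cost vs. revenue
```

With these inputs the program costs roughly twenty times what the tax raises, which is the shape of Pool's objection; UBI proponents would dispute the revenue-side assumptions (profit pool, tax base, offsetting welfare savings) rather than the arithmetic.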
Dr Alexander D. Kalian
Dr Alexander D. Kalian@AlexanderKalian
Thinking that LLMs are "AGI" is a similar psychological mechanism to 2000s kids thinking their "20 Questions" gadget game was reading their minds. Statistical algorithms are impressive, but they aren't magic or sentient.
78 replies · 94 reposts · 1.1K likes · 18.8K views
๐”พ๐•–๐•’๐•ฃ ๐•„๐•–๐•Ÿ๐•ฅ๐•’๐•ฅ๐•š๐• ๐•Ÿ retweetledi
Eric S. Raymond
Eric S. Raymond@esrtweet
This is a complicated moral question, it's been troubling me for years, and I still don't have a stable answer. Most of the arguments for leaving the Sentinelese alone are romanticizing noble-savage bullshit. We know what life in societies like this is like - nasty, brutish, and short to a degree modern Westerners not only fail to understand but have difficulty even imagining. Technology and wealth can fix that. On the other hand, it is undeniable that contact with modern civilization would in a very meaningful sense destroy the Sentinelese; this experiment has already been run on their kin in the rest of the Andaman Islands and the results were dolorous. Many would be killed by disease. Many others would become addicted to alcohol. Still others would evade both those fates only to find that they're still incapable of having any role in modern society other than as dependents and impoverished day laborers. Only a lucky few at the right-hand end of their IQ bell curve would integrate successfully. The utilitarian question is whether improving the quality of life for their talented tenth outweighs what modernity would do to the rest of them. And there's another point: the Sentinelese have made it very clear that most of them want to be left alone. It's easy to argue that we're obligated to respect their autonomy, but - what of the children who gaze at passing ships and dream of escaping the primitive grind of their lives? This is a more significant question when you reflect that in societies like these a significant fraction of those children are quite likely to be raped by the men in their tribe before reaching adulthood. Such behavior is sometimes ritualized as tribal custom; field anthropologists know this, but generations of them have been politically conditioned never to reveal facts about the societies they study that could be construed as arguments for Western superiority. There are no facile answers. 
It's a variant of the trolley problem - we know that leaving the Sentinelese alone conduces to many kinds of harm, but busting their isolation would open them up to many different kinds of harm. Hands-off is an easy policy. At least we can tell ourselves we're not actively doing damage, and that what they do to each other is on them. I do wonder how often the children are being raped, though.
Marcus Pittman@ImKingGinger

Raise your hand if you think it's immoral to keep tribal groups like the people of North Sentinel Island from the blessings of modern technology, comforts, health and medicine, and electricity just so we can "preserve them" as a zoo and museum exhibit for people to observe.

544 replies · 73 reposts · 1.3K likes · 289.2K views
Dr Alexander D. Kalian
Dr Alexander D. Kalian@AlexanderKalian
Well, large teams of humans have succeeded in making AI that can perform at near human-expert level for a bunch of tasks (e.g. writing, coding, legal advice, medical advice, etc.). This was possible because we had mountains of data (e.g. documents, scientific papers, transcripts of debates, etc.) for training these LLMs to emulate experts. This is a special, asymmetric situation, in which AI could catch up fast on the back of good records of human expertise. I argue that this situation vanishes for anything beyond tail-end human expertise - we don't have any inhumanly superintelligent data to optimise against - and humans cannot reasonably create it at any meaningful or complete scale.
1 reply · 0 reposts · 1 like · 56 views
Dr Alexander D. Kalian
Dr Alexander D. Kalian@AlexanderKalian
The main reason frontier AI emulates human intelligence so convincingly is simple: we trained it on an entire internet of human-written text and code. But the same cannot be said for "superintelligence". We have exactly zero training data on superintelligent language, philosophy, stories, or code. Superintelligence, as far as we know, has never existed on Earth. The transformer architecture is a genuine miracle - when scaled into LLMs, it learns to emulate human intelligence extremely well from large pools of linguistic data. Current AI paradigms might one day let us emulate every human genius we have records of: thinking physics like Newton or Einstein, composing like Mozart, writing like Hemingway etc. But leaping far beyond human genius level into truly alien "superintelligence"? That remains very far-fetched. The idea that a human-level AI can simply be instructed to self-improve into superintelligence is absurd. Try asking GPT-2 to improve itself into GPT-5. Try asking a 100 IQ human to bootstrap themselves into 180 IQ. Try asking Claude Opus 4.7 right now to turn a large open-source LLaMA into a frontier model, then into superintelligence. You can't. I suspect there is no clean "intelligence escape velocity". Intelligence can improve in some dimensions, but its own blind spots and limitations at any given level tend to compound as well. There is no guaranteed long-term runaway effect.
45 replies · 10 reposts · 72 likes · 6K views
Kenrik March
Kenrik March@KenrikMarch
@gfodor The main issue I see is we will still end up with WILD inflation on things that AI and robotics can't solve for. What value do we place on the Rocky Mountains? Are we fine just leveling them for raw materials so humans can have more stuff?
4 replies · 0 reposts · 21 likes · 5.1K views
gfodor.id
gfodor.id@gfodor
Almost nobody today would object to a UBI system dispensing an inflation-adjusted $0.01 a month to every human US citizen, and it turns out that is basically all we need to do to avert catastrophe. The fact we won't is because people are too dumb to understand the problem.
54 replies · 7 reposts · 492 likes · 51.4K views
๐”พ๐•–๐•’๐•ฃ ๐•„๐•–๐•Ÿ๐•ฅ๐•’๐•ฅ๐•š๐• ๐•Ÿ retweetledi
Dr. Mike Israetel
Dr. Mike Israetel@misraetel
AI Doomers, please let me ask you an open question: Why would superintelligent AI have any more reason to kill all humans than any superintelligent human (Elon, Einstein, Von Neumann) would? (My view is that intelligent systems concerned with their own survival almost certainly become better at cooperating with humans for net benefit than any humans ever were, and that systems not concerned with their own survival are simply ultra-powerful enhancing tools for human flourishing, thus there's no need to fear AI on this basis. And as for bad actors using AI that's not in charge of itself and cannot stop the harm: the solution to that is good actors using smarter AI to predict and counter them... just like we have done for decades in intelligence and cybersecurity).
75 replies · 6 reposts · 93 likes · 9.4K views
The Pholosopher 🌎🕊
The Pholosopher 🌎🕊@ThePholosopherX
You cannot sustain prosperity by just giving everyone checks. Prosperity comes by way of increasing efficient productivity, where people can use less effort to provide more value to others. Individuals have to be productive to increase wealth; otherwise, it ends up as net consumption followed by a rise in prices to offset the generalized handouts.
164 replies · 58 reposts · 256 likes · 7.7K views