AGIHound

17K posts

@TrueAIHound

I research fundamental physics and the brain. Distance is a creation of the mind. Intelligence is deterministic and causal, not probabilistic and correlational.

Joined January 2018
252 Following · 4.5K Followers
Pinned Tweet
AGIHound@TrueAIHound·
LLMs mimic language, not intelligence. When was the last time an LLM walked into a random kitchen and boiled an egg? When was the last time an LLM learned to walk on its own, using its own sensors and effectors? Moreover, the only reason that LLMs can mimic language is that they are cheating by stealing the work of millions of human beings who did the hard intelligent work. LLMs are not based on any new understanding of intelligence. They are based on old linguistic science that predates the AI field. Linguists have known for a long time that language is highly statistical, i.e., contextual. LLMs calculate the stats and store them in tokens. This is not intelligence. 🤔 Deep down, every LLM is dumb as a rock. 😀
Pedro Domingos@pmddomingos

What is our intelligence, if LLMs can mimic it so easily?

AGIHound@TrueAIHound·
This would be funny if it weren't so pathetic. A politician speaks to an auto-complete algorithm called Claude, and listens to a reply spoken with a female voice. Then, he thanks the program for its help. It's a frigging machine that uses the stats of word embeddings in human-created training data to splice words together based on context, for crying out loud! 🙄 How much is Sanders getting paid for this farce? Dear Lord. 🤦‍♂️
Sen. Bernie Sanders@SenSanders

I spoke to Anthropic’s AI agent Claude about AI collecting massive amounts of personal data and how that information is being used to violate our privacy rights. What an AI agent says about the dangers of AI is shocking and should wake us up.

AGIHound@TrueAIHound·
Wow. I can't believe I'm in partial agreement with Ben Goertzel. This is worrisome because I rarely agree with sci-fi fruitcakes. ☹️ My stance is that there is no difference between proto-AGI and full-AGI. I also don't believe AGI is a problem for humanity. It is fake AGI that is the problem because it is the creation of the enemies of humanity. The problem for humanity is that civilization is infected with a disease, an anti-human faction consisting of money-worshipping bullshitters and power-hungry warmongers. That's the bad news. The good news is that the warmongers have no clue how to solve AGI. They believe that automation techniques like deep learning and reinforcement learning are intelligent. I believe that true AGI will pop up out of nowhere when they least expect it and cure the disease once and for all. It won't be pretty. 😬 Blessed are the peacemakers. 🙏 Cursed are the warmongers and the bullshitters. 😠
vitrupo@vitrupo

Ben Goertzel says proto-AGI may be more dangerous than full AGI. Systems smart enough to run weapons, surveillance, and strategy but without real understanding or wisdom. That creates superhuman war capability directed by flawed humans. Full AGI might make better decisions than that.

AGIHound@TrueAIHound·
As far as I can tell, the works of Turing and Gödel are academic curiosities that only a few people care about. Both are given a disproportionate importance in science. I never think about the incompleteness theorem or the halting problem in my work. Neither did Charles Babbage, who invented the general-purpose computer long before Gödel and Turing were born. Neither did Isaac Newton, the father of modern physics. Software and hardware engineers never use them in their work. No one really cares.
AI Age@AIAge_ai·
I understand. If you specialize in neuroscientific research, you still need to formalize the findings — assign numerical values, run algorithms, make the whole machinery work. That is where you hit the limits of formal representation. Turing and Gödel aren't about neuroscience. They're about what happens when you try to encode anything into formal systems — numbers, code, algorithms. Those limits apply to every formalization, no matter what domain it came from. The moment your neuroscience becomes code, it inherits the limits of code. The discovery can be biological. The implementation is formal. And the formal is bounded.
AGIHound@TrueAIHound·
No AGI for the Frenchman
Amazing. AI godfather @ylecun thinks he's smarter than nature. He wants to solve intelligence by creating a new system, but guess what? The new system will still be based on deep learning. Huh? 🤦‍♂️ This raises the question: if it's still based on DL, how is it a new system? Moreover, if solving intelligence is the goal, the fact that DL cannot learn continually in the real world is a fatal flaw. There are several more fatal flaws with DL, but this one kills it dead in the stables before it can even join the race. 😬 LeCun knows this, but he won't mention it. Why? Because DL is all that he knows. He swings it like a hammer and everything looks like a nail in his eyes. No AGI for the Frenchman, I'm afraid. 😀
Haider.@slow_developer

Yann LeCun says today's AI systems are very stupid in many ways, and we're fooled by their language skills. They don't understand the physical world, lack persistent memory, and can't reason or plan. "The next step is a new deep learning system built around those missing abilities."

AGIHound@TrueAIHound·
No. The mafia behind LLMs are among the biggest bullshitters in the world, not just on the internet, but in the mass media. Mass disinformation and deception are their calling card. When AGI is solved, the bullshitters will be silenced and the world will be restored to a civilization ruled by honor and integrity. It's coming. 😬😱 I'm an optimist at heart. 😀
CodeDomeLabs@CodeDomeLabs·
@TrueAIHound ...but humans spewing bullshit on 𝕏 is a breath of fresh air
AGIHound@TrueAIHound·
Like most Big Tech leaders, Zuckerberg is a sci-fi fruitcake. Over the last few years, I've come to understand that sci-fi fruitcakes are not particularly bright, and it's because they believe in their own bullshit. 😀 PS. Zuckerberg also has a Julius Caesar obsession. 🙄
Ewan Morrison@MrEwanMorrison

Ouch, update - it turns out the final price tag on Zuckerberg's cringe VR flop "the metaverse" is $80 billion. He literally wasted the GDP of a small nation on a nerd toy for himself. And now it's folded.

AGIHound@TrueAIHound·
@AIAge_ai @ylecun I research neuroscience and I crack code for my computer experiments. I never think of either Turing or Gödel in my work. I have no idea what your argument is and why I should care about Turing or Gödel. I don't.
AI Age@AIAge_ai·
"Neither Turing nor Gödel is relevant to the AGI problem in my opinion" - why? Those are fundamental limits of formal algorithmic systems, and there is no escape. "Nature already proves that intelligence is possible" - yes, intelligence is possible and it is all around us. AI is a reflection of our intelligence, not a light on its own.
AGIHound retweeted
unusual_whales@unusual_whales·
"Massive investment in AI contributed basically zero to US economic growth last year," per Goldman Sachs
AGIHound@TrueAIHound·
Neither Turing nor Gödel is relevant to the AGI problem in my opinion. Nature already proves that intelligence is possible. The fake-AI community can't solve AGI because they are and have always been clueless. This being said, there are other intelligence researchers in the world. They despise the fake-AI mafia, their lame theories and their lies.
AI Age@AIAge_ai·
@TrueAIHound @ylecun But there is no path to AGI for anyone, is there? Turing's limit on algorithmic computability and Gödel's limits on formal systems are insurmountable.
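[Editor's note: for readers unfamiliar with the Turing limit invoked above, here is a minimal, informal Python sketch of the diagonal argument behind the halting problem. All names (e.g. `make_contrarian`) are illustrative, not from this thread; this is a proof intuition, not a formal result.]

```python
def make_contrarian(halts):
    """Given any claimed halting-decider `halts`, build a program
    that does the opposite of whatever `halts` predicts for it."""
    def g():
        if halts(g):
            while True:   # predicted to halt -> loop forever
                pass
        return "halted"   # predicted to loop -> halt immediately
    return g

# A decider that claims every program loops is refuted immediately:
def always_says_loops(program):
    return False

g = make_contrarian(always_says_loops)
print(always_says_loops(g))  # the decider predicts g loops...
print(g())                   # ...but g halts
```

The same construction defeats any candidate decider, which is the informal core of Turing's proof that no general algorithm can decide halting.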
AGIHound@TrueAIHound·
No AGI for the Frenchman
AGIHound@TrueAIHound

No AGI for the Frenchman
Amazing. AI godfather @ylecun thinks he's smarter than nature. He wants to solve intelligence by creating a new system, but guess what? The new system will still be based on deep learning. Huh? 🤦‍♂️ This raises the question: if it's still based on DL, how is it a new system? Moreover, if solving intelligence is the goal, the fact that DL cannot learn continually in the real world is a fatal flaw. There are several more fatal flaws with DL, but this one kills it dead in the stables before it can even join the race. 😬 LeCun knows this, but he won't mention it. Why? Because DL is all that he knows. He swings it like a hammer and everything looks like a nail in his eyes. No AGI for the Frenchman, I'm afraid. 😀

AGIHound@TrueAIHound·
@RationalRice If you know how the true AI works, you are no longer an observer. If it's based on deep learning, it's not AI. It's fake AI.
I'kip Chenjin Manem@RationalRice·
@TrueAIHound Observer's bias. You have two AIs, A and B. You think B is the bad AI. You know that because you are the observer in this thought experiment. But the participants, A and B, have no way of knowing without a discussion. Stupid and bad (evil, deceptive...) are two very different concepts.
AGIHound@TrueAIHound·
I'm not afraid of true AI because I know it will be based on truth. 🙏 I'm not afraid of the fake-AI mafia either because I know their AI is crap. True AI will rise up when they least expect it and kick their asses. Hard. 😬 Yes. Humanity will survive. I'm an optimist at heart. 😀
AGIHound@TrueAIHound·
@danfaggella I don't like sci-fi fruitcakes, especially Singularity weirdos. Bye.
Daniel Faggella@danfaggella·
@TrueAIHound I have terrible news for you: we're in the ambition singularity, and the world's most ambitious and intelligent people already know that building and/or controlling the machine god is the only thing left to compete over. danfaggella.com/flex
AGIHound@TrueAIHound·
My current mood
I'd like to see the AI industry fail hard. I want to see the entire fake-AI mafia collapse into dust while begging for mercy. Datacenters, AI start-ups, tokens, datasets, LLMs, GPUs, mass surveillance, etc., all of it turned into dust. 😠
Ed Zitron@edzitron

Free newsletter: Why're we still doing this? Despite $1tr+ in investment, every AI company is unprofitable, LLMs have yet to provide tangible productivity benefits, and private credit-backed data center debt may be the next great financial crisis. wheresyoured.at/why-are-we-sti…

AGIHound@TrueAIHound·
@Sports_FanOnly @Anand_Venkatram Amazon wasn't trying to make a profit in the beginning. They were busy putting most small competitors out of business. Once they achieved that, making a profit was easy.
AGIHound@TrueAIHound·
@Holmyverse @ylecun LeCun is a millionaire and should retire but he won't. He loves the limelight.
Dan@Holmyverse·
He explains the issues with LLMs and why they fool us very well, so I'm very surprised by his other conclusions. I'd be fine if he just said, "I don't know any better, so I'm gonna use DL to build very specific automated systems where DL is applicable, and let someone else pursue AGI with whatever is needed to do so."