AGI is *so* 2025. Memory is 2026.

40K posts


@andygrossberg

https://t.co/uF7Q4whYEO ONCE: AI SEO BEFORE: https://t.co/PQQDvRVN6o, @ColonyNFT, @TribesNFT, Image, Jon Peddie, MAXIS, Studio Cutie

Portland, OR · Joined June 2009
5.2K Following · 3.6K Followers
am.will
am.will@LLMJunky·
The human brain is truly a marvel of nature. If you were horribly reductive and boiled it down to a language model, you'd be looking at roughly 100 trillion parameters running as a sparse MoE architecture. Only about 1-5% of neurons fire at any given moment, meaning the brain "activates" maybe 1-5 trillion parameters per inference step.

For context, the largest AI models we've built probably top out around 5 trillion parameters. The brain is roughly 100x larger. Even its active params at any given moment are larger than almost every model in existence today.

Here's what melts my brain (pun intended), though: your brain does all of this on about 20 watts of power, less than a dim light bulb. Training a frontier AI model consumes enough electricity to power small cities for months. Running inference across data centers pulls megawatts. Your brain runs 24/7 for 80+ years on the equivalent of a phone charger.

We haven't come close to matching the brain's scale. And we're not even in the same universe when it comes to efficiency. Evolution spent 500 million years optimizing the most energy-efficient intelligence architecture ever known. We're trying to brute-force our way there with compute and electricity. Nature is still the best engineer in the room.
315 replies · 289 reposts · 1.8K likes · 122.5K views
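The back-of-envelope arithmetic in the tweet above can be checked with a quick sketch. All inputs are the tweet's own ballpark figures (parameter counts, active fraction, wattage), not measured values, and the 5 MW inference-fleet number is an invented placeholder:

```python
# Sanity check of the tweet's brain-as-sparse-MoE estimates.
# Every number here is a rough figure from the tweet, not a measurement.

total_params = 100e12           # ~100 trillion "parameters" (synapses)
active_fraction = (0.01, 0.05)  # 1-5% of neurons firing at any moment
largest_model = 5e12            # ~5 trillion params, largest AI models

# Active parameters per "inference step": 1-5 trillion.
active_low = total_params * active_fraction[0]
active_high = total_params * active_fraction[1]
print(f"active params per step: {active_low:.0e} to {active_high:.0e}")

# Scale gap: brain is ~100x the largest model.
print(f"brain vs largest model: {total_params / largest_model:.0f}x")

# Efficiency gap: ~20 W brain vs a hypothetical 5 MW inference fleet.
brain_watts = 20
datacenter_watts = 5e6  # placeholder, for illustration only
print(f"power ratio: {datacenter_watts / brain_watts:.0f}x")
```

The point of the sketch is that even under these generous assumptions, the gap is two orders of magnitude in scale and five in power.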
Howard ✡. 🟦🇮🇱🎗🧡
It's depressing AF that Hamas-supporting kapo and useless gasbag who tanked NYC's economy as comptroller Brad Lander is leading incumbent and lead impeachment prosecutor Dan Goldman by 20 points, and Polymarket has Lander 86% to unseat Goldman. WTH is happening to Democrats?
18 replies · 23 reposts · 124 likes · 4.2K views
Heidi Bachram
Heidi Bachram@HeidiBachram·
This is Nikki Brooker who calls herself the “antiracist mum” but was supporting Palestine Action fans today because of the “Zionist parasites” that “infiltrated” government and control politicians like puppets. Three tropes for the price of one. She’s a youth worker 😬
284 replies · 660 reposts · 2.7K likes · 272.9K views
AGI is *so* 2025. Memory is 2026.
@beffjezos It was bad enough here in Portland when the "Butlerian Jihad" children vandalized cars in the parking lot of an AI event a couple months ago. Escalating to molotovs is not going to end well. For the protesters.
0 replies · 0 reposts · 1 like · 104 views
AGI is *so* 2025. Memory is 2026.
@RachelBitecofer Problem is, Netanyahu is beholden to the Israeli Right, and part of their deal is they don't care if things go badly for the diaspora. The Right there figures all Jews will then be forced to come live in Israel. They want Israel to be a fortress nation.
0 replies · 0 reposts · 0 likes · 23 views
Sean
Sean@sean_from_earth·
AI safety and x-risk is an irrelevant distraction by people who are delusional or seeking power. The actual risk is that we built a massively complex, intertwined civilization that is fundamentally insecure. The AI risks are just downstream of that and will never be solved by banning LLMs or data centers.

It doesn't matter if a killer virus is natural, government-created, or made by a terrorist cell with the help of AI, because our systems are totally vulnerable. Our information systems are vulnerable to demagogues, regardless of whether they use AI in their propaganda. Our supply chains are vulnerable to disruptions, whether by simple force or autonomous AI drones. Our digital networks are vulnerable and have been constantly compromised since long before hackers used AI.

So, AI safety is bullshit. What we really need to do is finally accept that we built an inherently insecure system that runs the world and decide what, if anything, we're willing to do to solve the root issues. I'll save you the trouble, though: we're not willing to do much, because we massively benefit from our interconnectedness.

The best solutions are greater decentralization, massively accelerated biosecurity and biological sciences (with AI), and rapid expansion off planet. In short, the only rational solutions are decoupling and a return to a mostly pre-industrial society, or leveraging AI and everything else at hand to out-accelerate the true threats that we started creating thousands of years ago.
1 reply · 2 reposts · 8 likes · 397 views
Beff (e/acc)
Beff (e/acc)@beffjezos·
Anti-AI extremists: throw molotov cocktails at people's homes, threaten to commit shootings at AI co HQs

e/acc extremists: start awesome AI or energy companies that help civilization expand in scope and scale
23 replies · 21 reposts · 213 likes · 9.1K views
Pod Save America
Pod Save America@PodSaveAmerica·
"I want the Democratic party to treat me like a Never-Trumper" –Hasan Piker
703 replies · 397 reposts · 7.8K likes · 1.7M views
Marc Andreessen 🇺🇸
The most accurate futurist in world history is Ray Kurzweil. Amazing prescience.
366 replies · 464 reposts · 7.3K likes · 644.1K views
BLACK VOTERS ARENT SUPERMAN… WE NEED HELP
Yep. I can't argue with this. I'm more concerned about if dude was a rapist or not. Fuck politics. Dems will win the CA governor’s race. He can be replaced in the house. This is a huge accusation with life changing consequences.
3 replies · 5 reposts · 14 likes · 728 views
R
R@shibadad8·
Centrist libs are diet republicans and we need to start treating them like the republicans they want to be.
146 replies · 502 reposts · 3.2K likes · 149.4K views
Gabriele Berton
Gabriele Berton@gabriberton·
Super interesting take from one of the greatest hackers. He says Mythos is not as good as they claim, because zero-day vulnerabilities are not that hard to find for skilled hackers. I'm far from the hacking world, but it sounds reasonable. Any thoughts?
444 replies · 245 reposts · 4.2K likes · 509.7K views
AGI is *so* 2025. Memory is 2026.
@GaryMarcus An LLM alone will not become AGI. Just as language alone isn't the root of human intelligence. AGI will be a group of components: neurosymbolic, world model, reasoning, I/O sensorium, etc. But an LLM will be the narrator describing the state of the system and its experience.
0 replies · 0 reposts · 0 likes · 185 views
Gary Marcus
Gary Marcus@GaryMarcus·
Claude Code is not AGI, but it is the single biggest advance in AI since the LLM. But the thing is, Claude Code is NOT a pure LLM. And it's not pure deep learning. Not even close. And that changes everything.

The source code leak proves it. Tucked away at its center is a 3,167-line kernel called print.ts. print.ts is a pattern matcher. And pattern matching is supposed to be the *strength* of LLMs. But Anthropic figured out that if you really need to get your patterns right, you can't trust a pure LLM. They are too probabilistic. And too erratic.

Instead, the way Anthropic built that kernel is straight out of classical symbolic AI. For example, it is in large part a big IF-THEN conditional, with 486 branch points and 12 levels of nesting, all inside a deterministic, symbolic loop that the real godfathers of AI, people like John McCarthy and Marvin Minsky and Herb Simon, would have instantly recognized.*

Putting it differently: Anthropic, when push came to shove, went exactly where I long said the field needed to go (and where @geoffreyhinton said we didn't need to go): to neurosymbolic AI. That's right, the biggest advance since the LLM was neurosymbolic. AlphaFold, AlphaEvolve, AlphaProof, and AlphaGeometry are all neurosymbolic, too; so is Code Interpreter; when you are calling code, you are asking symbolic AI to do an important part of the work.

Claude Code isn't better because of scaling. It's better because Anthropic accepted the importance of using classical AI techniques alongside neural networks, precisely the marriage I have long advocated. It's *massive* vindication for me (go see my 2019 debate with Bengio for context, or my 2001 book, The Algebraic Mind), but it still ain't perfect, or even close.

What we really need to do to get trustworthy AI, rather than the current unpredictable "jagged" mess, is to go in the knowledge-, reasoning-, and world-model-driven direction I laid out in 2020, in an article called The Next Decade in AI, in which neurosymbolic AI is just the *starting point* in a longer journey.* Read that article if you want to know what else we need to do next. The first part has already come to pass. In time, the other three will, too.

Meanwhile, the implications for the allocation of capital are pretty massive: smartly adding in bits of symbolic AI can do a lot more than scaling alone, and as even Anthropic has now discovered (though they won't say so), scaling is no longer the essence of innovation. The paradigm has changed.

*Claude Code is plainly neurosymbolic, but the code part is a mess; as Ernie Davis and I argued in Rebooting AI in 2019, we also need major advances in software engineering. But that's a story for another day.
179 replies · 522 reposts · 2.9K likes · 557.3K views
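The architecture Marcus describes, a deterministic IF-THEN kernel controlling the flow and a probabilistic model consulted only inside certain branches, can be sketched as a toy. This is not Anthropic's actual code: the function names, routing rules, and the stubbed model call are all invented for illustration.

```python
# Toy neurosymbolic dispatch: a deterministic if/then kernel routes each
# request, and a (stubbed) neural model handles only the fall-through
# branch. Hypothetical illustration only; not Anthropic's print.ts.

def neural_model(prompt: str) -> str:
    # Stand-in for a probabilistic LLM call.
    return f"<llm answer to: {prompt}>"

def symbolic_kernel(request: str) -> str:
    # Deterministic, rule-based routing: same input, same branch, always.
    text = request.strip().lower()
    if text.startswith("calc:"):
        # Symbolic branch: exact arithmetic, no model involved.
        expr = text.removeprefix("calc:")
        return str(eval(expr, {"__builtins__": {}}))  # toy evaluator
    elif text.startswith("format:"):
        # Symbolic branch: deterministic string transformation.
        return text.removeprefix("format:").strip().title()
    else:
        # Neural branch: defer to the probabilistic component.
        return neural_model(request)

print(symbolic_kernel("calc: 2 + 3 * 4"))   # deterministic: 14
print(symbolic_kernel("format: hello ai"))  # deterministic: Hello Ai
print(symbolic_kernel("Why is the sky blue?"))
```

The design point matches the tweet's claim: where correctness matters (arithmetic, formatting), the symbolic branches give the same output every time, and the erratic neural component is confined to the branch where exactness is not required.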