Gary Marcus

55.9K posts


@GaryMarcus

“In the aftermath of GPT-5’s launch … the views of critics like Marcus seem increasingly moderate.” —@newyorker

Joined December 2010
6.9K Following · 214.1K Followers
Pinned Tweet
Gary Marcus@GaryMarcus·
Three thoughts on what really matters: 1. Fuck cancer. 2. Friends are irreplaceable. 3. The new "Marcus test" for AI is when AI makes a significant dent on cancer. May that happen sooner, much sooner, rather than later. In memory of my childhood friend Paul.
149 replies · 135 reposts · 2.3K likes · 387.2K views
Geoffrey Miller@gmiller·
It's almost hilarious how bad @sama and @OpenAI are at public relations. They've got a product (ChatGPT) with 900 million weekly active users. Yet most of the American public has negative views of the company, ranging from mild distrust to moral disgust. And splurging on a random podcast ain't gonna fix that.
1 reply · 0 reposts · 15 likes · 913 views
Gary Marcus@GaryMarcus·
@theworststink First part: integrity, to some degree. Second: yep, as usual, I was the first to predict that, over two years ago:
[image attached]
0 replies · 0 reposts · 1 like · 37 views
thebigstinker@theworststink·
@GaryMarcus Why you'd put Google and Anthropic on a pedestal is beyond me. Their AI models are fungible, and the Claude CLI, Anthropic's jewel, is done. All three AIs lose money and have no hope of being profitable. All AIs are indistinguishable now.
0 replies · 0 reposts · 0 likes · 128 views
Gary Marcus@GaryMarcus·
When spin is all you have left: OpenAI — flailing wildly, rapidly losing ground to Google and Anthropic, and burning truly massive amounts of money every day — burns a reported $250M on an 18-month-old tech podcast, presumably in order to control the narrative.
Paul Nary@ProfPaulNary

OpenAI acquiring @tbpn makes zero sense to me (an M&A professor).

12 replies · 15 reposts · 112 likes · 9.1K views
Gary Marcus@GaryMarcus·
@joserivera234 @DeryaTR_ an independent study just reported my technical predictions were over 90% correct, but I welcome your concrete examples of what you claim I said was not possible that has actually been achieved.
0 replies · 0 reposts · 0 likes · 22 views
Jose Angel Rivera@joserivera234·
@DeryaTR_ Not a surprise. Models keep improving vastly and doing whatever @GaryMarcus said won't be possible. Yet, he will keep pushing harder to tell you he was right.
1 reply · 0 reposts · 0 likes · 31 views
Derya Unutmaz, MD@DeryaTR_·
Late 2024, I had a bet with Gary Marcus: if OpenAI's valuation stayed under $87 billion at the end of 2025, he would win; if it was over that, I would win. Obviously I won, except that the valuation ended up being about 10x more! 😅 Big congratulations to OpenAI! Much more to come!
OpenAI@OpenAI

Today, we closed our latest funding round with $122 billion in committed capital at an $852B post-money valuation. The fastest way to expand AI’s benefits is to put useful intelligence in people’s hands early and let access compound globally. This funding gives us resources to lead at scale. openai.com/index/accelera…

16 replies · 11 reposts · 230 likes · 17.5K views
Gary Marcus@GaryMarcus·
@iamJonasB @DeryaTR_ yep. a recent study said my tech predictions were 90% correct, but the idea of valuing OpenAI at $800B certainly defies me…
0 replies · 0 reposts · 0 likes · 30 views
Mandy Lu@mandylu·
i think AI passes the Turing test now, depending on the human. we've come far.
13 replies · 2 reposts · 19 likes · 2.5K views
Gary Marcus retweeted
Judea Pearl@yudapearl·
Good point. AI has lost precious time by over-indulgence in DL and LLMs, at the expense of research towards AGI. Focusing our priorities on the social effects of these untrustful architectures, instead of an understanding of what it takes to make them trustworthy, may cost us another precious decade. @erichorvitz @GaryMarcus
Elias Bareinboim@eliasbareinboim

Thanks for writing the piece, @erichorvitz! I think "causal inference" should be put front and center since, as part of the AI community, we could help provide sound foundations for many challenges (including safety, equity, robustness, transparency, and understanding). Happy to help! (Just an example from ACM's book in honor of @yudapearl's Turing, perhaps timely in the context of Avi's announcement today: causalai.net/r60.pdf Here is another one in terms of fairness & equity: causalai.net/r90.pdf)

8 replies · 34 reposts · 154 likes · 54.4K views
Gary Marcus@GaryMarcus·
Either we accept this as a society, and set a precedent for allowing virtually all jobs to be replaced with almost no compensation. Or we speak up now. For artists. For writers. For musicians. For everybody.
Simplifying AI@simplifyinAI

🚨 BREAKING: OpenAI and Google are about to have a massive legal problem.

OpenAI, Google, and Anthropic have repeatedly sworn to courts that their models do not store exact copies of copyrighted books. They claim their "safety training" prevents regurgitation. Researchers just dropped a paper called "Alignment Whack-a-Mole" that proves otherwise.

They didn't use complex jailbreaks or malicious prompts. They just took GPT-4o, Gemini, and DeepSeek, and fine-tuned them on a normal, benign task: expanding plot summaries into full text. The safety guardrails instantly collapsed. Without ever seeing the actual book text in the prompt, the models started spitting out exact, verbatim copies of copyrighted books. Up to 90% of entire novels, word-for-word. Continuous passages exceeding 460 words at a time.

But here is the part that changes everything. They fine-tuned a model exclusively on Haruki Murakami novels. It didn't just learn Murakami. It unlocked the verbatim text of over 30 completely unrelated authors across different genres. The AI wasn't learning the text during fine-tuning. The text was already permanently trapped inside its weights from pre-training. The fine-tuning just turned off the filter.

It gets worse. They tested models from three completely different tech giants. All three had memorized the exact same books, in the exact same spots. A 90% overlap. It's a fundamental, industry-wide vulnerability.

For years, AI companies have argued in court that their models are just "learning patterns," not storing raw data. This paper provides the smoking gun.

62 replies · 1.4K reposts · 5.1K likes · 123.6K views
Gary Marcus@GaryMarcus·
@PressFreedoms AK-47s should be available without background checks or waits, at 7-11! Not.
1 reply · 0 reposts · 5 likes · 132 views
Gary Marcus@GaryMarcus·
When you have one job, and blow it.
[image attached]
5 replies · 5 reposts · 40 likes · 4K views
Gary Marcus@GaryMarcus·
@RandolphCarterZ my views have been laid out clearly, and you obviously haven’t read them. bye.
1 reply · 0 reposts · 8 likes · 444 views
Randolph Carter@RandolphCarterZ·
@GaryMarcus So is it a stochastic parrot, or is it capable of replacing humanity? Was humanity just stochastic parrots? You're just anti-AI and trolling for attention.
1 reply · 0 reposts · 3 likes · 458 views
Gary Marcus@GaryMarcus·
Well, I for one was very reassured by this speech. It was modest, thoughtful and accurate, and contained an unexpectedly clear and detailed plan for how he will end the conflict and repair the economic damage. Oops, April Fool’s. None of that actually happened.😢
5 replies · 9 reposts · 114 likes · 5.4K views
Gary Marcus@GaryMarcus·
@JeffLadish control ≠ intelligence. I am not sure your logic here makes sense unless you think they are interchangeable.
0 replies · 0 reposts · 1 like · 760 views
Jeffrey Ladish@JeffLadish·
I hate to say it but an international agreement between the US and China to ban superintelligence is inevitable. Leaders in these countries are just going to follow their incentives, and none of them are willing to give up control to an artificial superintelligence.
56 replies · 29 reposts · 307 likes · 22.4K views