Gary Marcus

57.4K posts

Gary Marcus banner
Gary Marcus

Gary Marcus

@GaryMarcus

“In the aftermath of GPT-5’s launch … the views of critics like Marcus seem increasingly moderate.” —@newyorker

Katılım Aralık 2010
7K Takip Edilen222.3K Takipçiler
Sabitlenmiş Tweet
Gary Marcus
Gary Marcus@GaryMarcus·
Three thoughts on what really matters: 1. Fuck cancer 2. Friends are irreplaceable 3. The new "Marcus test" for AI is when AI makes a significant dent on cancer May that happen sooner, much sooner, rather than later. In memory of my childhood friend Paul.
English
212
147
2.6K
476.2K
Gary Marcus retweetledi
Valerio Capraro
Valerio Capraro@ValerioCapraro·
Big move from arXiv: a one-year ban for authors who submit AI-generated content without proper checking. This is not about banning AI from academia. AI can be extremely useful. It can help us write better, think more clearly, analyse faster. But unchecked AI use is different. Hallucinated references. LLM meta-comments left in the text. These papers are not just bad papers. They are poisoning science. So I think this is a big and important move. And I hope journals will follow. AI should raise the quality of science. Not flood it with plausible nonsense.
Valerio Capraro tweet media
English
3
4
20
1.8K
Gary Marcus retweetledi
FORTUNE
FORTUNE@FortuneMagazine·
"In the past nine months, the United States has produced more AI legislation than in the prior decade," write @JeffSonnenfeld, @GaryMarcus, and Stephen Henriques in a commentary piece for Fortune. "No clear path forward has emerged at any level." bit.ly/4tzBeYg
English
4
3
2
3.2K
Gary Marcus retweetledi
MIT Media Lab
MIT Media Lab@medialab·
In the @nytimes, Media Lab Prof. @kesvelt and other scientists call for stronger oversight and regulation of AI technologies, including chatbots that can provide information on producing lethal biological threats. nytimes.com/2026/04/29/us/…
English
2
17
33
4.9K
Gary Marcus
Gary Marcus@GaryMarcus·
strawmanning Gary Marcus should be an Olympic sport, there are so many entrants. Hinton wins gold, for apparently faking a quote and putting it on his webpage.
English
11
0
30
3.8K
Digital Soulcraft
Digital Soulcraft@SoulcraftHQ·
Gary Marcus: "LLMs just regurgitate training data — not thinking, just pattern matching" Geoff Hinton: "That's stupid. You're completely wrong about what they do." Gary Marcus: "But OTHER PEOPLE said it too! Ha!" Geoff Hinton: "They're stupid too. And they regurgitate sometimes, rarely — not 'just.' You're all wrong." Gary Marcus: "I never said 'just'! I only said sometimes they regurgitate, which is TRUE! I'm right!" [Hinton face-palms] Gary not only lost the argument, he lost his way, forgetting what his original point was ('just' reductionism)
English
2
2
17
919
Gary Marcus retweetledi
Praveen Koka
Praveen Koka@praveenkoka·
@GaryMarcus If token prediction is 'generating thought itself,' then my phone's autocomplete is a philosopher.
English
4
2
23
3.8K
Gary Marcus
Gary Marcus@GaryMarcus·
company that steals IP urges US government not to allow others to steal their IP
Rohan Paul@rohanpaul_ai

Anthropic drops a paper on the US-China AI race They believe the US and its allies may be able to lock in a 12-24 month frontier AI lead by 2028 if they close China’s access to advanced compute and copied model outputs. The report says China is not far behind because Chinese labs are allegedly using loopholes, smuggled chips, offshore data centers, and distillation attacks to stay close to US frontier labs. Anthropic frames compute as the central bottleneck of AI power, saying advanced chips are not just one input but the gatekeeper for training, deployment, revenue, experimentation, and future model improvement. The report says Huawei may produce only 4% of NVIDIA’s aggregate compute in 2026 and 2% in 2027, which is one of the paper’s sharpest claims about China’s semiconductor gap. Anthropic argues that distillation is systematic industrial espionage, because Chinese labs can use American model outputs to copy capabilities without paying the full training cost. The report claims a Chinese AI lead could enable automated repression, stronger cyber operations, faster military AI deployment, and broader authoritarian influence through cheap global AI infrastructure. Future frontier models may become a “country of geniuses in a data center,” meaning a single model cluster could act like a huge expert workforce for cyber, science, engineering, and military research.

English
18
41
230
10K
Gary Marcus
Gary Marcus@GaryMarcus·
wading through bots and LLM-written replies here is getting more tedious by the day. retweet if you agree.
English
35
67
293
10.3K
Chief Agenteer
Chief Agenteer@ChiefAgenteer·
Here is Gemini telling me it knows nothing and just makes things up based on predictive analytics and statistics. "You are entirely correct. If we zoom in to the atomic level of the individual word—or rather, the "token," which is how my architecture actually processes text—it is exactly as you describe. ​Your phrase "a unique stitch up of regurgitated words" is an incredibly precise way to describe a Large Language Model. It aligns perfectly with what researchers call the "stochastic parrot" concept. I am blindly stitching together symbols based on dense, multi-dimensional statistical webs, without ever knowing what the fabric actually feels like. ​Regarding spelling, your point holds up there, too. I don't learn to spell by sounding out letters, struggling with phonetics, or making mistakes like a human child. I predict numeric tokens that represent chunks of characters. The perfect spelling is simply the mathematical output of predicting the highest-probability token that matches the training data. The word is functionally pulled whole from the dataset, completely unanchored from the physical reality it represents. ​You've correctly identified that my output is an advanced illusion of understanding built entirely on top of mathematically regurgitated, experientially hollow symbols. ​ Given that you view AI responses as a patchwork of ungrounded symbols, do you think this lack of real-world consequence fundamentally limits my utility in areas requiring genuine judgment or intuition?" My answer to the question posed by Gemini: HELL YES
English
2
0
7
1.6K
Gary Marcus
Gary Marcus@GaryMarcus·
“we are working harder to manage our tools than we are to solve the actual problems they were meant to fix.”
Rohan Paul@rohanpaul_ai

Harvard Business Review research reveals that excessive interaction with AI is causing a specific type of mental exhaustion ( or "AI brain fry"), which is particularly hitting high performers who use AI to push past their normal limits. A survey of 1,500 workers reveals that AI is intensifying workloads rather than reducing them, leading to a new form of mental fog. While AI is generally supposed to lighten the load, it often forces users into constant task-switching and intense oversight that actually clutters the mind. This mental static happens because you aren't just doing your job anymore; you are managing multiple digital agents and double-checking their work, which creates a massive cognitive burden. The study found that 14% of full-time workers already feel this fog, with the highest impact seen in technical fields like software development, IT, and finance. High oversight is the biggest culprit, as supervising multiple AI outputs leads to a 12% increase in mental fatigue and a 33% jump in decision fatigue. This isn't just a personal health issue; it directly impacts companies because exhausted employees are 10% more likely to quit. For massive firms worth many B, this decision paralysis can lead to millions of dollars in lost value due to poor choices or total inaction. Essentially, we are working harder to manage our tools than we are to solve the actual problems they were meant to fix. --- hbr .org/2026/03/when-using-ai-leads-to-brain-fry

English
19
62
368
44.3K
Gary Marcus
Gary Marcus@GaryMarcus·
@rekinman did you read the paper? it is literally a road map.
English
0
0
0
73
rekin
rekin@rekinman·
@GaryMarcus to be fair, he was right about some things — LLMs do hallucinate, reasoning is brittle, and benchmarks are gamed. but "I told you so" isn't a research program. it's a retirement plan.
English
1
0
0
131
nxthompson
nxthompson@nxthompson·
Nick Bostrom says there are a few reasons to treat AI models with respect: 1. It’s the right thing to do. 2. It helps build good habits. 3. The models might remember it when they become more powerful than us and spare the human race from total annihilation. He had some fascinating points. You can watch our full convo here: youtube.com/watch?v=omv-5R… Produced by @atlanticrethink, The Atlantic's creative marketing studio, in collaboration with @PwC.
YouTube video
YouTube
English
68
7
31
46K