Yann LeCun

26.1K posts


@ylecun

Professor at NYU & Executive Chairman at AMI Labs. Ex-Chief AI Scientist at Meta. Researcher in AI, Machine Learning, Robotics, etc. ACM Turing Award Laureate.

New York · Joined June 2009
784 Following · 1.2M Followers
Pinned Tweet
Yann LeCun @ylecun
I do not write posts on X. I tweet links to posts on other platforms. I like and retweet (occasionally). I comment on friends' tweets (rarely). Follow me on... ⬇️⬇️⬇️
889 · 111 · 1.8K · 1.6M
Yann LeCun @ylecun
@sedielem @koraykv @MarcRanzato @rob_fergus It does work. I mean, with stacks of pre-trained sparse convolutional auto-encoders, we could get to a good starting point from which to fine-tune supervised on tiny labelled datasets (like Caltech 101, that had 30 training samples per category) and get near-SOTA performance.
3 · 0 · 1 · 263
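The recipe LeCun describes (unsupervised pretraining with sparse auto-encoders to get a good starting point, then supervised fine-tuning on a tiny labelled set) can be sketched in miniature. This is an illustrative toy only, not the original stacked convolutional pipeline: a single tied-weight dense auto-encoder with an L1 sparsity penalty on synthetic blob data, with all hyperparameters and dataset sizes invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrain_sparse_autoencoder(X, n_hidden=16, lr=0.005, l1=1e-3, epochs=600):
    """Unsupervised stage: tied-weight ReLU auto-encoder whose code is
    pushed toward sparsity by an L1 penalty on the activations."""
    n = len(X)
    W = rng.normal(0.0, 0.1, (X.shape[1], n_hidden))
    for _ in range(epochs):
        H = np.maximum(X @ W, 0.0)             # sparse code (ReLU)
        err = H @ W.T - X                      # reconstruction error
        dH = (err @ W + l1 * np.sign(H)) * (H > 0)
        W -= lr * (err.T @ H + X.T @ dH) / n   # tied-weight gradient
    return W

def finetune_head(H, y, lr=0.5, epochs=500):
    """Supervised stage: logistic-regression head trained on the
    pretrained features, using only a handful of labelled examples."""
    w, b = np.zeros(H.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(H @ w + b)))
        g = p - y
        w -= lr * H.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Synthetic stand-in for the unlabelled corpus: two 8-D Gaussian blobs.
X_unlab = np.vstack([rng.normal(+2, 1, (100, 8)), rng.normal(-2, 1, (100, 8))])
W = pretrain_sparse_autoencoder(X_unlab)

# Tiny labelled set (Caltech-101 style: a few examples per category).
X_lab = np.vstack([rng.normal(+2, 1, (5, 8)), rng.normal(-2, 1, (5, 8))])
y_lab = np.array([1] * 5 + [0] * 5)
w, b = finetune_head(np.maximum(X_lab @ W, 0.0), y_lab)

# Evaluate on held-out samples.
X_test = np.vstack([rng.normal(+2, 1, (50, 8)), rng.normal(-2, 1, (50, 8))])
y_test = np.array([1] * 50 + [0] * 50)
pred = (np.maximum(X_test @ W, 0.0) @ w + b) > 0
accuracy = (pred == y_test).mean()
```

The point the tweet makes is the split of labour: the expensive representation learning uses no labels at all, so the supervised stage needs only a few examples per class.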
Yann LeCun retweeted
Andrea Montanari @Andrea__M
There is a case to be made that the future of Mathematics is very bright. In my mind, proofs have always been a tool to achieve a goal. The goal was and still is to understand, and reading/writing proofs (or just knowing that they exist) will remain part of it.
19 · 28 · 292 · 26.3K
Yann LeCun retweeted
Kate from Kharkiv @BohuslavskaKate
APPLEBAUM: Russia's war in Ukraine is sometimes described, including recently by the American Vice President, as if it were nothing more than a territorial dispute, a kind of scuffle over lines on a map. But when Russia denies that Ukraine is a real nation, builds concentration camps on Ukrainian territory, bans the Ukrainian language, and systematically arrests mayors, teachers, journalists, and priests, then Russia is also attacking the Europe that was built after 1945, the Europe whose borders are not supposed to be changed by force. Russia invaded Ukraine not only to destroy Ukraine but also to prove that treaties are meaningless, alliances are weak, and brute force still decides the fate of nations. By waging an imperial war of conquest, Russia seeks to undermine Europe's post-imperial order.
170 · 4K · 11.5K · 193.3K
Yann LeCun retweeted
Rodney Brooks @rodneyabrooks
On the current scale of things the Trump phone is a minor corruption, and only goes to show how incompetent everyone in his family is. If you think that any other president would have done things like this (relatively minor for this president) then you are a member of a cult.
2 · 3 · 50 · 9.8K
Yann LeCun retweeted
Daniel Jeffries @Dan_Jeffries1
The most revealing thing about this AI leadership paper is that it reads less like a vision for innovation and more like a glossy whitepaper for a 21st-century East India Company.

Every generation of incumbents discovers a new moral vocabulary for why they alone should control transformative technology. In the 90s it was cryptography. We were told strong encryption was too dangerous to spread because terrorists, rogue states, chaos, dual-use, etc. So the US crippled exports, weakened products, slowed adoption, and kneecapped parts of its own software industry. Right up until reality steamrolled the policy and we woke up to its stupidity, and then eCommerce, secure communications, software signing, and the modern internet exploded and gave us tremendous benefits.

Now the exact same priesthood has returned with AI.
- "Dual-use."
- "Strategic advantage."
- "Model distillation."
- "National security."
- "Responsible access."
A few different nouns, but mostly the same ones. Same instinct: centralize control, gatekeep compute, fuse state and corporate power, and call it safety.

The funniest part is that this strategy is almost perfectly designed to accelerate the thing they claim to fear. You do not stop a rival superpower (who happens to be the absolute best at scaling energy and manufacturing, and who has a choke-hold on rare-earths refinement) from building domestic capability by permanently attempting to strangle them. You create the economic and political incentive for total self-sufficiency. We have already done that, as Jensen warned. We went from a 100% market to nearly 0%. Huawei is now manufacturing millions of chips. DeepSeek v4 trained on them. They have more energy than the rest of the world combined.

Meanwhile, we have activists and anti-economic fools like AOC and Bernie pushing for data center moratoriums, we can't build a single bullet train in 20 years, folks are fighting to not expand the energy grid here, and new nuclear plants get tied up in environmental regulation for a decade.

The sanctions did the exact opposite of what the hawks wanted. They jumpstarted a moribund dinosaur of a Chinese chips industry. We basically said to the people who happen to control the most powerful manufacturing engine on the planet, "we intend to squeeze you." They rightly saw it as an existential threat. The sanctions became the industrial policy. Huawei. SMIC. Domestic lithography. Packaging. Memory. Entire Chinese supply chains that did not exist at serious scale a decade ago now exist precisely because Washington convinced Beijing they had no choice. Brilliant work.

So the endgame here is what, exactly?
1) Push China into a Manhattan Project for chips and AI.
2) Increase the strategic value of Taiwan even further.
3) Once China reaches self-sufficiency, it can invade Taiwan and choke off our own super-advanced chips, which are made there exclusively (and no, we don't have even close to enough TSMC factories in Arizona or anywhere else in the world). That's every NVIDIA chip. Every Google tensor chip. Every Apple chip. Every chip in your iPhone and Android phone. Every Amazon chip. The chips in your car and truck and hair dryer and washing machine.
4) Escalate a cold tech war into a permanent civilizational bloc conflict that is likely to turn into a shooting war at some point.
5) Fragment the global software ecosystem.
6) Create American AI aristocracies protected by regulation and compute licensing.
And somehow call this "open innovation."

Meanwhile, the actual history of software keeps screaming the opposite lesson: knowledge diffuses, open ecosystems win, developers route around gatekeepers, and attempts to permanently contain computation usually fail.

What really jumps off the page is the assumption that a tiny cluster of frontier labs should become quasi-sovereign actors, deciding who gets intelligence, who gets compute, who gets models, and which countries are permitted to participate in the future. Not elected governments. Not open markets. Not open-source communities. A handful of corporations sitting beside the national security state, insisting that concentration of power is necessary to protect democracy. You almost have to admire the audacity.
Anthropic@AnthropicAI

We've published a paper that explains our views on AI competition between the US and China. The US and democratic allies hold the lead in frontier AI today. Read more on what it’ll take to keep that lead: anthropic.com/research/2028-…

20 · 60 · 303 · 46.5K
Yann LeCun retweeted
Daniel Jeffries @Dan_Jeffries1
Any arguments against open source and open weights are mendacious and malicious by their very nature. Open source is the foundation of modern society, worth 8.8 trillion to the economy, and the foundation of every major cloud, your home router, your phone, your operating system, and more.

These anti-open-source, anti-freedom arguments are especially nasty when they use weaselly, hawk-coded words like "dual use." Linux is dual use. So is your operating system. So is your phone. So is your kitchen knife. Dual use was used against encryption. Once this stupid and spurious restriction was lifted, eCommerce took off like a rocket and was worth trillions to society.

Choke points, gates, and centralized controls are inherently limiting and benefit the few at the expense of the many. They choke out growth and development in society. We don't need monks in a cave deciding what books to copy. We need the printing press.

Anti-open-source arguments have no moral ground to stand on. They are inherently self-serving and have no other purpose than to create centrally dominated monopolies and regulatory capture in an underhanded, unscrupulous way.
15 · 17 · 104 · 16.5K
Yann LeCun retweeted
Logical Intelligence @logic_int
Aleph, our fully autonomous AI agent system for formal verification, aced all major theorem proving benchmarks including PutnamBench, VeriSoftBench, and Verina
12 · 30 · 104 · 18.8K
Yann LeCun retweeted
Brian Allen @allenanalysis
Obama on the Iran nuclear deal today: “We pulled it off without firing a missile. We got 97% of their enriched uranium out. There’s no dispute that it worked. We didn’t have to kill a whole bunch of people or shut down the Strait of Hormuz.” $25 billion spent. 14 Americans dead. Oil at $119 a barrel. The world has one month of strategic reserves left. A UAE oil port on fire. Ships turning back at gunpoint. Trump called Obama’s deal the worst deal ever made. Then tore it up. Then started a war to get back to the same place.
140 · 1.6K · 4.8K · 251.4K
Yann LeCun @ylecun
@cfryant 😂😂😂 LLMs can actually pulverise asphalt.
1 · 0 · 4 · 192
Christopher Fryant @cfryant
"Mythos Unleashed" A parody early 2000s style disaster movie trailer for when Anthropic pushes the button and sets Mythos AI free. Released in honor of the new SpaceXAI + Anthropic partnership. Created with Seedance 2.0 via Runway
Claude@claudeai

We’ve agreed to a partnership with @SpaceX that will substantially increase our compute capacity. This, along with our other recent compute deals, means that we’ve been able to increase our usage limits for Claude Code and the Claude API.

12 · 3 · 35 · 6K
Yann LeCun retweeted
MTS @MTSlive
We asked the CEO of HuggingFace @ClementDelangue what the risks of releasing powerful open source models are. He says restricting AI creates more risk than openness. "Six, seven years ago, at the time it was GPT-2, and there was already a lot of people saying that it was too dangerous to release in open source." "Mythos, when it was announced was crazy dangerous... In a few weeks or a few months, everyone is gonna be using Mythos, and not destroy the world as a result." "For cybersecurity, the biggest risk is that a few players have capabilities that other people don't have... If you make it more open, it's usually easier for defenders to react and make the whole system safer." "The idea of restricting a technology like AI based on risks is like saying, 'Some people can punch other people, so let's tie down everybody's hands.'" "Otherwise you slow down progress, you create massive gaps in terms of controls, in terms of capabilities, and you create actually additional risks."
clem 🤗@ClementDelangue

Weird how some people always target open-source in AI! First it was: “Open-source AI will destroy the world” (spoiler: it didn't and it won't) Now: “Open-source is a cybersecurity threat because of AI” Both narratives are far too simplistic. The truth is that the exact same risks exist in closed-source systems, often even more so. For example, in practice, APIs can create much bigger data and security vulnerabilities than open systems you can inspect, self-host, and secure yourself. And as with software more broadly, open-source often ends up more secure because it benefits from far more scrutiny than private internal systems. The reality is not “open vs closed.” The reality is that AI is raising cybersecurity stakes across the board, and we need to tackle that seriously together.

43 · 106 · 530 · 298.9K
Yann LeCun retweeted
Andrew Ng @AndrewYNg
There will be no AI jobpocalypse. The story that AI will lead to massive unemployment is stoking unnecessary fear. AI — like any other technology — does affect jobs, but telling overblown stories of large-scale unemployment is irresponsible and damaging. Let's put a stop to it.

I've expressed skepticism about the jobpocalypse in previous posts. I'm glad to see that the popular press is now pushing back on this narrative. The image below features some recent headlines.

Software engineering is the sector most affected by AI tools, as coding agents race ahead. Yet hiring of software engineers remains strong! So while there are examples of AI taking away jobs, the trends strongly suggest the net job creation is vastly greater than the job destruction — just like earlier waves of technology. Further, despite all the exciting progress in AI, the U.S. unemployment rate remains a healthy 4.3%.

Why is the AI jobpocalypse narrative so popular? For one thing, frontier AI labs have a strong incentive to tell stories that make AI technology sound more powerful. At their most extreme, they promote science-fiction scenarios of AI "taking over" and causing human extinction. If a technology can replace many employees, surely that technology must be very valuable!

Also, a lot of SaaS software companies charge around $100-$1000 per user/year. But if an AI company can replace an employee who makes $100,000 — or make them 50% more productive — then charging even $10,000 starts to look reasonable. By anchoring not to typical SaaS prices but to salaries of employees, AI companies can charge a lot more.

Additionally, businesses have a strong incentive to talk about layoffs as if they were caused by AI. After all, talking about how they're using AI to be far more productive with fewer staff makes them look smart. This is a better message than admitting they overhired during the pandemic when capital was abundant due to low interest rates and a massive government financial stimulus.

To be clear, I recognize that AI is causing a lot of people's work to change. This is hard. This is stressful. (And to some, it can be fun.) I empathize with everyone affected. At the same time, this is very different from predicting a collapse of the job market.

Societies are capable of telling themselves stories for years that have little basis in reality and lead to poor society-wide decision making. For example, fears over nuclear plant safety led to under-investment in nuclear power. Fears of the "population bomb" in the 1960s led countries to implement harsh policies to reduce their populations. And worries about dietary fat led governments to promote unhealthy high-sugar diets for decades.

Now that mainstream media is openly skeptical about the jobpocalypse, I hope these stories will start to lose their teeth (much like fears of AI-driven human extinction have).

Contrary to the predictions of an AI jobpocalypse, I predict the opposite: There will be an AI jobapalooza! AI will lead to a lot more good AI engineering jobs, and I'm also optimistic about the future of the overall job market. What AI engineers do will be different from traditional software engineering, and many of these jobs will be in businesses other than traditional large employers of developers. In non-AI roles, too, the skills needed will change because of AI. That makes this a good time to encourage more people to become proficient in AI, and make sure they're ready for the different but plentiful jobs of the future!

[Original text in The Batch newsletter.]
548 · 1.2K · 5.2K · 753.7K
Yann LeCun retweeted
Haider. @haider1
Yann LeCun says you cannot build a reliable agentic system without a world model. LLMs don't have world models. They can't predict the consequences of their actions before taking them: "they just act, and whatever happens next is someone else's problem." Without that, it's not intelligence.
275 · 357 · 2.7K · 314.5K