Gregory Gromov

42.1K posts
@ntvll

MS Avionics, Autoformalization, Autonomous Vehicles, Road Safety Metrics

USA · Joined December 2013
228 Following · 787 Followers

Pinned Tweet
Gregory Gromov @ntvll ·
The idea of the intellectual immortality of generations is knocking at the door of the next stage of the AI revolution. To give a first impression of what we are talking about, let me recall a counter-intuitive statistic on road safety by age group: the safest drivers, by far, are seventy-year-olds. In other words, it takes people roughly half a century to truly learn how to drive safely. And then, all too soon, they pass away — with no opportunity to share that hard-won skill … not even with their own grandchildren, who must begin learning from scratch and often pay the ultimate price for their mistakes.

This is not a unique situation. In fact, it is emblematic of a deeper human condition. All verbalized knowledge amounts to only a thin, transparent surface layer floating on the vast ocean of human knowledge & skills. That ocean is formed by a lifetime of accumulated experience, most of which remains inaccessible to others … felt only by the individual, stored in tactile, muscular, motor, and emotional memory rather than in any kind of sharable words. As for knowledge formalized in computers and other automated systems — that is merely a thin film on the surface of this immense ocean.

Keeping all of the above in mind, I coined the term AUTOFORMALIZATION and described its meaning in an article: “Autoformalisation - Knowledge & skill acquisition” netvalley.com/cgi-bin/librar…

From my more recent experience, I see that today’s most popular AI products — LLM services such as GPT — hold the potential to become effective tools for helping people extract their intuitive knowledge: first by verbalizing it, and then by autoformalizing it.

Some people have asked me why I coined the term AUTOFORMALIZATION at a time when there were no tools capable of scaling the process it describes. My usual answer is a quotation from Heinrich Neuhaus’s book “The Art of Piano Playing”: ‘To give a thing a name is the beginning of understanding it.’
3 replies · 1 repost · 24 likes · 4.7K views

Gregory Gromov @ntvll ·
“A four-year-old child absorbs more sensory data in a single year than every LLM ever trained on text.” Surprise, surprise … x.com/ntvll/status/1…
Dustin@r0ck3t23

Yann LeCun left Meta after twelve years to build a company from scratch. Not because Zuckerberg turned on him. Because the machine did.

LeCun: “It’s a project Mark Zuckerberg really likes. But over the last several months he and I both realized the potential spectrum of applications was beyond what Meta was interested in.”

He had the CEO. He had the most influential AI lab on the planet. He had the resources. He did not have the one thing that mattered. A company that could afford to be wrong about LLMs.

Meta had placed hundreds of billions behind a single bet. Every hire. Every roadmap. Every quarterly target. All locked into the same trajectory. Dissent was not forbidden. It was just structurally impossible.

LeCun: “Right now, they are sucking the air out of the room anywhere they go, and so there’s basically no resources left for anything else.”

So he walked in and told Zuckerberg the truth. LeCun: “I can do this faster, cheaper, and better outside of Meta.” Zuckerberg’s response: “OK, we can work together.”

That is not a story about a bad boss. That is what happens when a company commits hundreds of billions to one direction. The institution becomes the ceiling. Even the man who built the lab could not think above it.

LeCun has spent a decade as the loudest dissenter in the room. Not because LLMs are not useful. Because predicting the next word is not the same as knowing what happens when you drop a glass. A four-year-old child absorbs more sensory data in a single year than every LLM ever trained on text. Everyone inside Meta understood this. Nobody inside Meta could say it.

He walked out. Raised $1.03 billion in four months. Jeff Bezos led the round. Nvidia backed it. The man who co-invented deep learning is not building a better chatbot. He is not racing to win the current era. He is building what ends it.

0 replies · 0 reposts · 1 like · 16 views

Gregory Gromov @ntvll ·
@aakashgupta “Anthropic, which didn't exist in 2019” — not as an independent company, that is. Its founders were part of the OpenAI team.
0 replies · 0 reposts · 0 likes · 119 views

Aakash Gupta @aakashgupta ·
In 2019, OpenAI was a nonprofit valued at $1 billion with fewer than 100 employees. GPT-2 existed and they were afraid to release it because it could write a convincing paragraph.

Seven years later, OpenAI is valued at $730 billion. NVIDIA went from a gaming GPU company to the most valuable company on Earth at $4.37 trillion. The total market cap of AI companies crossed $41 trillion. Anthropic, which didn't exist in 2019, just raised at $170 billion.

In 2019, "AI" was a computer science elective. In 2026, it's a $300 billion annual infrastructure buildout. Remote work was a negotiation line item in your offer letter. TikTok had just launched in the US. The entire internet ran on a set of assumptions that have been systematically dismantled in 84 months.

2019 you would open this app and think it was science fiction.
Kristin Raworth 🇨🇦@KristinRaworth

I've never seen anything more accurate

17 replies · 6 reposts · 120 likes · 24.6K views

Gregory Gromov @ntvll ·
@deredleritt3r As long as hallucinations are considered inherent components of LLMs, what kind of self-improvement can we really talk about?
0 replies · 0 reposts · 2 likes · 183 views

prinz @deredleritt3r ·
You don't truly understand the magnitude of the potential impact of powerful AI on the world unless you are aware, and have fully internalized, that senior leadership and most researchers at the frontier labs *actually believe* the following:

1. Existing AI is already significantly speeding up AI research. Very soon (this year), AI will very likely take over *ALL* aspects of AI research other than generation of novel research ideas. Soon (within the next 2 years), AI will very likely take over *ALL* aspects of AI research, period. This means hundreds of thousands of GPUs working 24/7 to discover novel ideas at the level of, or better than, the likes of Alec Radford, Ilya Sutskever, etc. The thread below presents a conservative timeline: AI researchers will "meaningfully contribute" to AI development in 1-3 years.

2. Many (but, as far as I can tell, not all) executives and researchers at the frontier labs believe that fully automated AI research will kick off recursive self-improvement (RSI), wherein the AI models will autonomously build better and better AI models, with human oversight (for safety reasons), but increasingly with no human input into the research or implementation of that research. From the thread below: "'[h]umans vs AI on intellectual work is likely to be like human runner vs a Porsche in a race', likely very soon" - but replace "intellectual work" generally with "AI research" specifically. RSI is a complicated and messy thing to consider, both because there will be compute and energy constraints and because there are unknowns (will there be diminishing returns from greater intelligence of the models? if so, when will these diminishing returns become meaningful? is there a ceiling to intelligence that we don't know about?). But suffice to say that, if RSI *is* achieved in a way that many leaders/researchers at the frontier labs believe is possible, *THE WORLD MAY BECOME COMPLETELY UNRECOGNIZABLE WITHIN JUST A FEW YEARS*. This is subject to various bottlenecks; as the thread below correctly notes, "[i]nstitutional, personal & regulatory bottlenecks will bind very hard", and much also depends on continuing progress in areas like robotics.

3. On ~the same timeline as full, end-to-end automation of *ALL* aspects of AI research (within the next 2 years), AI will also become capable of making significant novel scientific discoveries *IN OTHER FIELDS*. This is why Dario Amodei, Demis Hassabis et al. believe that it is possible that all diseases will be curable within 10 years. (One account of how this might be possible is set forth in "Machines of Loving Grace".) The point is that an LLM that is capable of significant novel insights in the field of AI research should likewise be capable of significant novel insights in at least some (and perhaps all) other fields. The thread below notes: "AI for automating science [is] very early" - obviously true, but I think some changes may be right on the horizon. Overall, and again from the thread below: "'a million scientists in a data center' will think much more quickly than humans, on almost any intellectual task; this will happen in the next 2-10 years." This is ~the same timeline as that presented in "Machines of Loving Grace".

Many will be tempted to dismiss all this as "just hype", "they are just trying to raise money again", etc. But no! - the above, in fact, presents the *actual beliefs* of senior leadership and many researchers at the frontier labs. Again, they genuinely think that AI research will be automated soon. Many of them genuinely believe that RSI is achievable in the not-too-distant future. And they genuinely see a real path towards AI significantly accelerating science, curing diseases, inventing new materials, helping to solve key global issues from poverty to climate change, etc., etc. Whether the frontier labs' beliefs are correct is, of course, a separate question.

I personally have historically tended to take public statements by OpenAI, Anthropic and Google at face value and quite seriously. As a result, I was not surprised when LLMs won gold in the IMO, IOI and the ICPC competitions last year, or when Claude Code/Codex started taking off, or when Anthropic and OpenAI started releasing significantly better models every 1-2 months, or when some of the best coders became reliant on Claude Code/Codex in their daily work, or when LLMs became significantly helpful to scientists in fields like math and physics in the last few months. The trajectory has been ~the same as that publicly predicted by the frontier labs. We have been accelerating. And, as of right now, all signs are indicating that the acceleration shall continue and that full automation of AI research and, potentially, RSI are firmly on the horizon.
Kevin A. Bryan@Afinetheorem

My read on "normal policymaker & corp. leader on AI": mostly now they don't need to be convinced it is very important (unlike a year ago). But they still see its capabilities as today + epsilon. So just briefly, here is what even "AI is normal tech" folks in the labs believe: 1/8

64 replies · 121 reposts · 1.1K likes · 145.4K views

Gregory Gromov @ntvll ·
@elonmusk Nobody is further from understanding road traffic, and the decades-long attempts to “automate driving,” than people who compare road traffic with elevators and car drivers with elevator operators.
0 replies · 0 reposts · 0 likes · 11 views

Sawyer Merritt @SawyerMerritt ·
This 93-year-old has found new freedom after she bought a new @Tesla Model Y with FSD. She also uses Grok navigation. "Although she has always been a good driver, my mom can now drive without the fear or fatigue that can naturally come with age. No more relying on others for every trip. No more feeling stuck. This is true mobility that can spark new adventures in a still adventurous woman!" (via Dan Doyle's Family Channel. Full video below)
890 replies · 2.6K reposts · 22.5K likes · 1.9M views

Gregory Gromov @ntvll ·
As one of Tesla’s long-time constructive critics, I need to additionally acknowledge the company’s products that remain outside of any critique: reusable heavy-rocket boosters, Starlink, Grok, and FSD v14. What has remained a legitimate topic of Tesla critique for a decade is Tesla’s road safety metrics. Now, all of a sudden, it turns out that SpaceX has begun to provide a huge number of very vulnerable points for critique related to Musk’s claims about LEO data centers.
0 replies · 0 reposts · 0 likes · 26 views

Gregory Gromov @ntvll ·
Unbanned the next day — appreciated.
0 replies · 0 reposts · 0 likes · 13 views

Gregory Gromov @ntvll ·
Accepting congratulations. Finally, Musk banned me. I have been constructively criticizing him for more than a decade. Mostly regarding his inability to understand AP/FSD safety features and his naive hope that AI, as a modern version of Deus ex Machina, will resolve FSD safety issues that he refuses to even think about. I admire that he patiently endured all this criticism of mine for over 10 years. For comparison, his first AI director banned me immediately — from the very first phrase addressed to him about 10 years ago — and remains so to this day, even long after he stopped working there. So, I’m taking this opportunity to thank Musk for listening to my critical remarks directed at him for 10 years. I really appreciated his attention.
1 reply · 0 reposts · 0 likes · 34 views

Gregory Gromov @ntvll ·
Supervised Tesla Full Self-Driving (FSD) v14 now delivers over 10X more comfort than any other vehicle on the road. That said, as long as Elon Musk has not yet released comprehensive safety data in the form of casualties (or major incidents) per vehicle mile traveled (VMT), legitimate questions about its true safety profile remain open for discussion. Having said that, life is full of risks. Anyone who has been “teleported” door-to-door — even just a couple of times — through complex downtown traffic and interstate highways simply by tapping addresses on the Tesla touchscreen tends to become a Tesla customer for life. Tesla’s current demand challenges stem from one primary reason: most people still have no real idea just how miraculous FSD v14 actually is.
0 replies · 0 reposts · 0 likes · 30 views

Gregory Gromov @ntvll ·
Tesla's "10+ billion miles of vehicle data" is a real treasure. Though a dozen unsupervised FSD cars in Austin, after a year of effort to scale a Level 4 robotaxi, is just another reminder that the treasure itself does not guarantee anything... and FSD v14 is just another impressive step further in a decade-long effort to trade road safety for driving comfort. x.com/ntvll/status/1…
Elon Musk@elonmusk

@wholemars They have no idea how hard FSD is. Only path to success imo is hardcore real-world AI software with dedicated NN inference acceleration ASICs in car, multibillion dollar NN training supercluster and 10+ billion miles of vehicle data. Good luck.

0 replies · 0 reposts · 0 likes · 35 views

Gregory Gromov @ntvll ·
Musk daily claims whatever nonsense comes to his mind, and nobody cares. But since Elon declared a feud with Sam, the OpenAI CEO has become the favorite target of Musk's 300M followers. For instance:
Ewan Morrison@MrEwanMorrison

Always good to keep screenshots of the claims of Altman so you can see the pattern of his deliberately misleading predictions over time. This is from January 2025. Given failed outcomes this could potentially go to court as "misleading investors".

3 replies · 0 reposts · 0 likes · 219 views

Gregory Gromov @ntvll ·
The task of rationally distributing resources in a multifunctional company has always been one of the most important, especially for an industry leader. In this sense, OpenAI’s decision to discontinue its Sora video service in order to focus on more important tasks looks encouraging. Meanwhile, Musk’s consistent promotion of videos of beautiful women, while Grok is still unable to create science & tech graphics with the quality required by the editorial teams of scientific journals, seems a bit naive for an entrepreneur of his scale.
Haider.@slow_developer

OpenAI discontinuing Sora to free up compute for its next big model says a lot. I'm glad they're focusing more on reasoning, coding, and research -- and that Sora was not profitable enough to keep those resources. Still, I think they're building an omnimodal model, maybe GPT-5o, with video.

0 replies · 0 reposts · 0 likes · 449 views