Ivan Nekrasov

9.1K posts

@IvanAtDell

NVIDIA Evangelist and AI Field Product Manager at Dell Technologies

Nashville, TN · Joined January 2012
4.8K Following · 1.9K Followers
Ivan Nekrasov retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
Fireside chat at Sequoia Ascent 2026 from a ~week ago. Some highlights:

The first theme I tried to push on is that LLMs are about a lot more than just speeding up what existed before (e.g. coding). Three examples of new horizons:

1. menugen: an app that can be fully engulfed by LLMs, with no classical code needed: input an image, output an image and an LLM can natively do the thing.
2. install .md skills instead of install .sh scripts. Why create a complex Software 1.0 bash script for e.g. installing a piece of software if you can write the installation out in words and say "just show this to your LLM". The LLM is an advanced interpreter of English and can intelligently target installation to your setup, debug everything inline, etc.
3. LLM knowledge bases as an example of something that was *impossible* with classical code because it's computation over unstructured data (knowledge) from arbitrary sources and in arbitrary formats, including simply text articles etc.

I pushed on these because in every new paradigm change, the obvious things are always in the realm of speeding up or somehow improving what existed, but here we have examples of functionality that either suddenly perhaps shouldn't even exist (1, 2), or was fundamentally not possible before (3).

The second (ongoing) theme is trying to explain the pattern of jaggedness in LLMs. How it can be true that a single artifact will simultaneously 1) coherently refactor a 100,000-line code base *and* 2) tell you to walk to the car wash to wash your car. I previously wrote about the source of this as having to do with verifiability of a domain; here I expand on this as having to also do with economics, because revenue/TAM dictates what the frontier labs choose to package into training data distributions during RL. You're either in the data distribution (on the rails of the RL circuits) and flying, or you're off-roading in the jungle with a machete, in relative terms.

Still not 100% satisfied with this, but it's an ongoing struggle to build an accurate model of LLM capabilities if you wish to practically take advantage of their power while avoiding their pitfalls, which brings me to...

Last theme is the agent-native economy. The decomposition of products and services into sensors, actuators and logic (split up across all of 1.0/2.0/3.0 computing paradigms), how we can make information maximally legible to LLMs, some words on the quickly emerging agentic engineering and its skill set, related hiring practices, etc., possibly even hints/dreams of fully neural computing handling the vast majority of computation with some help from (classical) CPU coprocessors.
Stephanie Zhan@stephzhan

@karpathy and I are back! At @sequoia AI Ascent 2026. And a lot has changed. Last year, he coined “vibe coding”. This year, he’s never felt more behind as a programmer. The big shift: vibe coding raised the floor. Agentic engineering raises the ceiling. We talk about what it means to build seriously in the agent era. Not just moving faster. Building new things, with new tools, while preserving the parts that still require human taste, judgment, and understanding.

267 replies · 726 reposts · 5.5K likes · 771.5K views
Ivan Nekrasov
Ivan Nekrasov@IvanAtDell·
Finally unboxed 🎁📦 my latest system! The Dell Pro 14 Premium is a great, reliable, very light #AIPC for business travel and office work on the go! 🤖 Meet me at Dell Technologies World! Visit this link to learn more: dell.com/en-us/shop/del… #iwork4Dell
0 replies · 0 reposts · 0 likes · 57 views
Ivan Nekrasov retweeted
NVIDIA AI Developer
NVIDIA AI Developer@NVIDIAAIDev·
Energy is high at #TheAISummit in NYC. With @DellTech, we are showcasing cutting-edge AI demos and powering this year’s Hackathon — a three-day challenge where students are building next-gen applications on the Dell Pro Max with GB10 technology. 👩‍💻 ✨ We are excited to see what they will build!
3 replies · 7 reposts · 59 likes · 3.4K views
Ivan Nekrasov
Ivan Nekrasov@IvanAtDell·
New “on the go” gear to travel to #DellTechWorld with! #Iwork4dell
Dell Precision 5490
Dell Pro 14 Plus
Portable Monitor
Dell WL5024 Headphones
1 reply · 0 reposts · 1 like · 225 views
Ivan Nekrasov retweeted
Ethan Mollick
Ethan Mollick@emollick·
A few implications of tricks like this: 1) We are still VERY early in the development of Reasoners 2) There is high value in understanding how humans solve problems & applying that to AI 3) Higher possibility of further exponential growth in AI capabilities as techniques compound
Ethan Mollick@emollick

This paper is wild - a Stanford team shows the simplest way to make an open LLM into a reasoning model. They used just 1,000 carefully curated reasoning examples & a trick where if the model tries to stop thinking, they append "Wait" to force it to continue. Near o1 at math.
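The trick the quoted tweet describes (from the s1 paper) can be sketched in a few lines. This is a hypothetical toy, not the paper's actual code: `fake_generate` stands in for a real LLM call, and `</think>` stands in for whatever end-of-thinking marker the model emits; the idea is simply to intercept that marker and append "Wait" so the model keeps reasoning.

```python
# Toy sketch of "budget forcing": when the model tries to stop thinking,
# replace its end-of-thinking marker with "Wait," and let it continue.
# `fake_generate` is a stand-in for a real LLM completion call.

END_OF_THINKING = "</think>"

def fake_generate(prompt: str) -> str:
    """Toy stand-in for an LLM: produces one reasoning step, then tries to stop."""
    steps_so_far = prompt.count("Wait")
    return f"step {steps_so_far + 1}... {END_OF_THINKING}"

def generate_with_budget(prompt: str, min_continuations: int = 2) -> str:
    """Force at least `min_continuations` extra rounds of thinking."""
    text = prompt
    forced = 0
    while True:
        chunk = fake_generate(text)
        if END_OF_THINKING in chunk and forced < min_continuations:
            # Intercept the stop marker and nudge the model to keep going.
            chunk = chunk.replace(END_OF_THINKING, "Wait,")
            forced += 1
            text += chunk
        else:
            text += chunk
            return text

trace = generate_with_budget("Q: 2+2? <think>")
print(trace)  # contains "step 1... Wait, step 2... Wait, step 3..."
```

With a real model, the same loop would wrap an API or `generate` call and check the sampled stop token instead of a string marker.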

11 replies · 42 reposts · 420 likes · 40.3K views
Ivan Nekrasov retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
We have to take the LLMs to school. When you open any textbook, you'll see three major types of information:

1. Background information / exposition. The meat of the textbook that explains concepts. As you attend over it, your brain is training on that data. This is equivalent to pretraining, where the model is reading the internet and accumulating background knowledge.

2. Worked problems with solutions. These are concrete examples of how an expert solves problems. They are demonstrations to be imitated. This is equivalent to supervised finetuning, where the model is finetuning on "ideal responses" for an Assistant, written by humans.

3. Practice problems. These are prompts to the student, usually without the solution, but always with the final answer. There are usually many, many of these at the end of each chapter. They are prompting the student to learn by trial & error - they have to try a bunch of stuff to get to the right answer. This is equivalent to reinforcement learning.

We've subjected LLMs to a ton of 1 and 2, but 3 is a nascent, emerging frontier. When we're creating datasets for LLMs, it's no different from writing textbooks for them, with these 3 types of data. They have to read, and they have to practice.
384 replies · 1.8K reposts · 11.8K likes · 695.7K views
Ivan Nekrasov
Ivan Nekrasov@IvanAtDell·
AI = really good math running on really good chips #AI
0 replies · 0 reposts · 1 like · 37 views
Ivan Nekrasov retweeted
Jeff Clarke
Jeff Clarke@JClarkeatDell·
Another grand slam year for tech is coming. From AI PCs to AI agents to data center transformations, check out the top tech that I expect to hit it out of the park in 2025.⚾dell.com/en-us/blog/my-…
3 replies · 14 reposts · 77 likes · 29.1K views
Ivan Nekrasov retweeted
Jon Krohn
Jon Krohn@JonKrohnLearns·
Today's episode features heavy hitters from Dell (Chris Bennett) and Iternal (Joseph Balsamo) detailing why we must have flexibility in our A.I. model deployment (and why generative A.I. is overhyped)! Watch here: superdatascience.com/842 In a bit more detail, today's guests are: • Chris Bennett: Global CTO for Data & A.I. Solutions at Dell Technologies (@Dell) • Joseph Balsamo: Sr VP of Product Development at Iternal Technologies This episode was filmed live at Insight Partners' ScaleUp:AI conference in New York a few weeks ago. Thanks to George Mathew, Jennifer Jordan, Kristen Zeck and Deanna Uzarski for inviting me and making the magic of this session happen. The "Super Data Science Podcast with Jon Krohn" is available on all major podcasting platforms and a video version is on YouTube. This is Episode #842! #superdatascience #ai #aideployment #cloud #generativeai #genai
0 replies · 1 repost · 4 likes · 291 views
Ivan Nekrasov
Ivan Nekrasov@IvanAtDell·
Amazing #sunrise this morning. Was so cold, though, that my iPhone 16 Pro Max camera froze 🤣
0 replies · 0 reposts · 0 likes · 76 views
Ivan Nekrasov
Ivan Nekrasov@IvanAtDell·
I joined the Dell team on November 29th, 2004! Very thankful to be part of the journey for 20 of its 40 years :)
Michael Dell 🇺🇸@MichaelDell

I started @DellTech in my @UTAustin dorm room in 1984, and since then, we’ve brought in over $1.7 trillion in revenue. I’m so grateful to every customer, partner, supplier, and team member who’s helped us get here. It’s been a fantastic journey, and it feels like we’re only getting started. 🙏🚀

0 replies · 0 reposts · 0 likes · 37 views