Mike Knapp

347 posts

Mike Knapp

@mikeee

Product Manager @ Goog. My views are entirely my own.

Sydney, Australia · Joined February 2007
417 Following · 2.5K Followers
Mike Knapp reposted
Aaron Levie@levie·
Jevons paradox is happening in real time. Companies, especially outside of tech, are realizing that they can now afford to take on software projects that they wouldn’t have been able to tackle before, because AI now lets them do so. We’re going to start to use software for all new things in the economy because it’s incrementally cheaper to produce. Marketing teams at big companies will have engineers helping to automate workflows. Engineers in life sciences and healthcare will automate research. Small businesses will hire engineers for the first time to build better digital experiences. And as long as AI agents still require a human who understands what to prompt, how to review when an agent goes off the rails, how to guide it back, how to maintain the system that was built, how to fix the ongoing bugs, and more, we will still have humans managing these agents. This is why all the advice you get about not going into engineering is wrong. The world is going to increasingly be made up of software, and the people who understand it best will be in a strong economic position. This will happen in other roles as well, where output goes up and demand increases.
Lenny Rachitsky@lennysan

Engineering job openings are at the highest levels we’ve seen in over 3 years. There are over 67,000 (!!!) eng openings at tech companies globally right now, with 26,000 just in the U.S. We don’t know if there would have been more open roles if not for AI, or if AI is actually leading to more open roles, but since the start of this year, the increase in open eng roles has been accelerating even more.

225 replies · 655 reposts · 4.7K likes · 1M views
Mike Knapp@mikeee·
@caviterginsoy @phyrooo @ChristosTzamos Isn't the main benefit here that you can backprop through the ENTIRE execution sequence and build better intuition on how to write the initial code? You're literally teaching it how to be a classical computer. Even more powerful when combined with something like GRPO.
0 replies · 0 reposts · 2 likes · 83 views
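The "backprop through the ENTIRE execution sequence" idea in the reply above can be illustrated with a toy unrolled program. This is a minimal, hypothetical sketch (the update rule and function names are illustrative, not from the thread): reverse-mode differentiation through every recorded step of an execution trace, here the iteration x_{t+1} = a·x_t + b.

```python
# Hypothetical sketch: reverse-mode gradient through an unrolled execution.
# Forward pass records every intermediate state; backward pass walks the
# trace in reverse to accumulate dL/da, where the loss L is the final state.
def run_and_grad(a, b, x0, steps):
    xs = [x0]
    for _ in range(steps):            # forward: execute and record the trace
        xs.append(a * xs[-1] + b)
    gx = 1.0                          # d x_T / d x_T
    ga = 0.0
    for t in range(steps - 1, -1, -1):
        ga += gx * xs[t]              # d x_{t+1} / d a = x_t
        gx *= a                       # d x_{t+1} / d x_t = a
    return xs[-1], ga                 # final state, gradient w.r.t. a
```

With a = 2, b = 0, x0 = 1 and 3 steps, the final state is 8 and the gradient d x_3/da is 12, matching the analytic derivative of a³·x0. The same pattern is what frameworks automate when a whole execution sequence stays inside the computation graph.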
Cavit Erginsoy@caviterginsoy·
@phyrooo @ChristosTzamos It’s all about when to use a tool call. You, as an intelligent system, constantly offload computation externally, don’t you?
5 replies · 0 reposts · 19 likes · 4K views
Christos Tzamos@ChristosTzamos·
1/4 LLMs solve research-grade math problems but struggle with basic calculations. We bridge this gap by turning them into computers. We built a computer INSIDE a transformer that can run programs for millions of steps in seconds, solving even the hardest Sudokus with 100% accuracy.
251 replies · 809 reposts · 6.1K likes · 1.8M views
Mike Knapp reposted
毛丹青@maodanqing·
The AI-restored "Along the River During the Qingming Festival" (清明上河図) leaves me speechless. True, there are moments when the overly smooth texture characteristic of AI feels off. But it is also a fact that the bustle of a thousand years ago that Zhang Zeduan tried to capture comes through at overwhelming resolution, so this is no mere copy. Only by passing it through the alien filter of AI do we touch, for the first time, the heat of the Great Song. Opinions will be divided, but this sense of immersion alone may be undeniable.
144 replies · 2.8K reposts · 11.9K likes · 1.3M views
Mike Knapp@mikeee·
@karpathy @archiexzzz The problem I’ve had with RL for language tasks is that the model quickly learns to game the system. It’s super hard to write good scoring systems that allow the model to improve and also produce novel responses. The avg. score goes up, while quality goes down just as fast.
0 replies · 0 reposts · 1 like · 365 views
Andrej Karpathy@karpathy·
@archiexzzz I just mean long term, imo RL finetuning paradigm is a big upgrade over just SFT (expert imitation) for LLMs at the current stage of development and will continue to grow substantially.
3 replies · 5 reposts · 341 likes · 21K views
Andrej Karpathy@karpathy·
In the era of pretraining, what mattered was internet text. You'd primarily want a large, diverse, high-quality collection of internet documents to learn from.

In the era of supervised finetuning, it was conversations. Contract workers are hired to create answers for questions, a bit like what you'd see on Stack Overflow / Quora etc., but geared towards LLM use cases.

Neither of the two above is going away (imo), but in this era of reinforcement learning, it is now environments. Unlike the above, they give the LLM an opportunity to actually interact - take actions, see outcomes, etc. This means you can hope to do a lot better than statistical expert imitation. And they can be used both for model training and evaluation. But just like before, the core problem now is needing a large, diverse, high-quality set of environments, as exercises for the LLM to practice against.

In some ways, I'm reminded of OpenAI's very first project (gym), which was exactly a framework hoping to build a large collection of environments in the same schema, but this was way before LLMs, so the environments were simple academic control tasks of the time, like cartpole, ATARI, etc. The @PrimeIntellect environments hub (and the `verifiers` repo on GitHub) builds the modernized version specifically targeting LLMs, and it's a great effort/idea. I pitched that someone build something like it earlier this year: x.com/karpathy/statu…

Environments have the property that once the skeleton of the framework is in place, in principle the community / industry can parallelize across many different domains, which is exciting.

Final thought - personally and long-term, I am bullish on environments and agentic interactions, but I am bearish on reinforcement learning specifically. I think that reward functions are super sus, and I think humans don't use RL to learn (maybe they do for some motor tasks etc., but not for intellectual problem-solving tasks). Humans use different learning paradigms that are significantly more powerful and sample-efficient and that haven't been properly invented and scaled yet, though early sketches and ideas exist (as just one example, the idea of "system prompt learning", moving the update to tokens/contexts rather than weights, and optionally distilling to weights as a separate process, a bit like sleep does).
Prime Intellect@PrimeIntellect

Introducing the Environments Hub. RL environments are the key bottleneck to the next wave of AI progress, but big labs are locking them down. We built a community platform for crowdsourcing open environments, so anyone can contribute to open-source AGI.

258 replies · 858 reposts · 7.3K likes · 946.5K views
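The gym-style schema Karpathy refers to is a shared interface (reset an episode, step with an action, receive observation/reward/done) that many environments can implement in parallel. A minimal, entirely hypothetical sketch follows; the class name and toy task are illustrative and assume nothing about the actual `verifiers` or Environments Hub APIs.

```python
# Hypothetical toy environment in the gym-style schema:
# reset() -> observation, step(action) -> (observation, reward, done).
class GuessNumberEnv:
    """Text environment: the agent must output the target digit."""

    def __init__(self, target=7, max_turns=3):
        self.target, self.max_turns = target, max_turns

    def reset(self):
        # Start a new episode and return the initial observation (a prompt).
        self.turns = 0
        return "Guess the number between 0 and 9."

    def step(self, action):
        # Consume one agent action (a string), return feedback and reward.
        self.turns += 1
        correct = action.strip() == str(self.target)
        done = correct or self.turns >= self.max_turns
        reward = 1.0 if correct else 0.0
        obs = "correct" if correct else (
            "higher" if int(action) < self.target else "lower")
        return obs, reward, done
```

Because every environment exposes the same reset/step contract, the same training or evaluation loop can drive all of them, which is what lets a community parallelize environment-building across domains.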
Mike Knapp reposted
Talor@Talor_A·
this is one of the most remarkable technical blog posts I’ve ever read
Talor tweet media
73 replies · 733 reposts · 11.7K likes · 1.3M views
Mike Knapp reposted
ア@yuruyurau·
a=(x,y,d=mag(k=(4+sin(y*2-t)*3)*cos(x/29),e=y/8-13))=>point((q=3*sin(k*2)+.3/k+sin(y/25)*k*(9+4*sin(e*9-d*3+t*2)))+30*cos(c=d-t)+200,q*sin(c)+d*39-220) t=0,draw=$=>{t||createCanvas(w=400,w);background(9).stroke(w,96);for(t+=PI/240,i=1e4;i--;)a(i,i/235)}//#つぶやきProcessing
220 replies · 1.5K reposts · 12.6K likes · 715.6K views
Mike Knapp reposted
GREG ISENBERG@gregisenberg·
Email from Fiverr CEO to his team about AI ($1b company):
GREG ISENBERG tweet media
266 replies · 781 reposts · 6K likes · 1.2M views
Mike Knapp reposted
𒐪@SHL0MS·
my best one yet. load in 4k then zoom out
𒐪 tweet media
260 replies · 808 reposts · 13.7K likes · 4.1M views
Mike Knapp reposted
Andrej Karpathy@karpathy·
Noticing myself adopting a certain rhythm in AI-assisted coding (i.e. code I actually and professionally care about, in contrast to vibe code).

1. Stuff everything relevant into context (this can take a while in big projects; if the project is small enough, just stuff everything, e.g. `files-to-prompt . -e ts -e tsx -e css -e md --cxml --ignore node_modules -o prompt.xml`).
2. Describe the next single, concrete incremental change we're trying to implement. Don't ask for code; ask for a few high-level approaches, pros/cons. There are almost always a few ways to do things, and the LLM's judgement is not always great. Optionally make concrete.
3. Pick one approach, ask for first draft code.
4. Review / learning phase: (manually...) pull up all the API docs in a side browser for functions I haven't called before or am less familiar with, ask for explanations, clarifications, changes; wind back and try a different approach.
5. Test.
6. Git commit. Ask for suggestions on what we could implement next. Repeat.

Something like this feels more along the lines of the inner loop of AI-assisted development. The emphasis is on keeping a very tight leash on this new over-eager junior intern savant with encyclopedic knowledge of software, but who also bullshits you all the time, has an over-abundance of courage and shows little to no taste for good code. And emphasis on being slow, defensive, careful, paranoid, and on always taking the inline learning opportunity, not delegating. Many of these stages are clunky and manual and aren't made explicit or super well supported yet in existing tools. We're still very early, and so much can still be done on the UI/UX of AI-assisted coding.
457 replies · 1K reposts · 12.3K likes · 1.2M views
Mike Knapp reposted
NEOMECHANICA@neomechanica·
NEOMECHANICA tweet media
34 replies · 725 reposts · 9.4K likes · 185.6K views
@levelsio@levelsio·
Europe is cooked
@levelsio tweet media
601 replies · 311 reposts · 10.3K likes · 1.2M views
Ian Nuttall@iannuttall·
graphic designers reacting to chatgpt images right now
Ian Nuttall tweet media
1K replies · 3.9K reposts · 65.6K likes · 23.2M views
Andrew Price@andrewpprice·
@mikeee @bilawalsidhu Funnily enough I just made a short video testing this out! youtu.be/MG8mkeCRhXo In short, it's not as easy as examples here make it seem. But yes, "assistants" will definitely become part of every app we use. I'm more surprised/scared by model generators like @BackflipAI
1 reply · 0 reposts · 3 likes · 182 views
Bilawal Sidhu@bilawalsidhu·
Blender MCP is here, allowing Claude to talk directly to Blender. For example, provide a 2D reference image and ask Claude to create it in 3D for you.
183 replies · 743 reposts · 7.2K likes · 758.7K views
Mike Knapp@mikeee·
How fast are we sliding into a world where all “jobs” are just reformatting ChatGPT output?
3 replies · 0 reposts · 2 likes · 288 views
Mike Knapp reposted
Ed Conway@EdConwaySky·
If you're even half interested in energy, I bet you've seen this chart. I call it The Most Hopeful Chart in the World. The point? We're embracing renewable power MUCH faster than expected. Hurrah! Only problem is, this chart has an evil twin. A chart we really need to discuss 🧵
Ed Conway tweet media
122 replies · 490 reposts · 2.1K likes · 454.9K views
Mike Knapp reposted
Andrej Karpathy@karpathy·
AI video generation today. When I was back in school, the story of the field of computer graphics (and physically based rendering etc.) was that we would carefully study and model all the object/scene geometry, physics, rendering etc., and after 1000 PhDs and 50 SIGGRAPHs get results like this. That a Transformer can shortcut all of that at this level of fidelity just by training on a dataset of videos...
Agrim Gupta@agrimgupta92

"A pair of hands skillfully slicing a ripe tomato on a wooden cutting board" #veo

234 replies · 567 reposts · 8.4K likes · 1.5M views