Daniel Lemire

34.4K posts


@lemire

Software performance expert. Ranked in the top 2% of scientists globally (Stanford/Elsevier 2025) and among GitHub's top 1000 developers. Father, husband.

Montreal, Quebec · Joined November 2007
2.1K Following · 33.1K Followers
phil bohun@philthistweet·
@lemire This is very surprising to me. I would've thought it'd be a dozen or so. Why did they design the CPU to predict so many branches? Are there really that many in flight in a program? You would think hot loops would only have a few.
Maynard Handley 🟦@handleym99·
No, my point is consider two designs. Both achieve “99% branch accuracy” But the first predicts every branch correctly in the Fetch predictor. And the other predicts half of them correctly in Fetch, and corrects the other half in TAGE, with a five cycle cost. The accuracy is identical. But the first design is far superior to the second. None of the standard metrics capture this. I suspect that in real designs this is a significant effect, not just a minor third order quibble.
Daniel Lemire@lemire·
@handleym99 @andrew_rogers I think that my metric here (branch misses) implies a flush. But yeah, there are many nuances and I am all for investigating them. My key message here is that you need to be very careful when benchmarking.
Maynard Handley 🟦@handleym99·
Not exactly! You have Fetch prediction (at 0 and 1 cycle latency), then at least one fast predictor and one slower predictor. A later predictor can override an earlier incorrect predictor, thus invalidating those cycles of Fetch (but not requiring a Flush). Point is, the cost of the prediction (even when correct) can be from 0 to 5 or so cycles. This is one of the elements that makes simple counts of prediction accuracy, or numbers of branches remembered or whatever less than ideal. We still haven’t solved the problem of the best way to compare modern branch prediction :-( (Then there’s a whole orthogonal issue of how well the processor can handle frequent *taken* branches, where the relevant issue is not prediction accuracy, let’s assume that’s 100%, but taken vs non-taken. Don’t get me started on that!)
hersh@harsh4786·
“However, if we repeat multiple times the benchmark, always using the same random values, the processor learns the branches.” The problem of overfitting to a small, fixed benchmark data set is real; just test in prod.
Daniel Lemire@lemire

x.com/i/article/2034…

Eric Schöneberg@Escoeerg·
@lemire Maybe finally - where abundant Internet information failed - AI will force professors to teach thinking instead of remembering.
Daniel Lemire@lemire·
Many of you have been thinking about AI. Some are using agents to seemingly multiply their productivity. Some are skeptical that it can improve productivity much. Some think that it will wipe out jobs. Some even think that ChatGPT will lead to the Terminator.

But what is happening on campus? How much attention do university professors pay to the application of AI in their courses? Short of regulating it and lamenting how it impacts homework: they either do not think about it, or denounce it.

Why? Part of it is just the general laziness and careerism typical of modern universities. You only bother if you can virtue-signal or get ahead. But there is something deeper at play. Academics avoid the AI topic the same way that spiritually weak people avoid thinking about their own death: they are fearful.

The printing press, after many years, became a challenge to the Church by enabling more people to read and study the Bible. It led to some of the most brutal wars in our history. I don’t expect blood to flow on campus, but what is going to happen when the students show up to class, and the mediocre/activist scholar who is their professor can’t answer questions very well? What happens when everyone in the class can see that ChatGPT is superior to the professor?

Notice how an entire generation of scholars has mostly stayed out of the public square: they are not on X, not on YouTube, not on Threads… At best, they show up on LinkedIn to promote a talk that they gave (never showing the content, just signalling the prestige). Contemporary academics do not like to be challenged. And they fear AI, they fear any tool that can break the bubble. Any tool that can show that the Emperor is naked.
Jason V. Miller@jasonvmiller·
@lemire When I first learned about branch prediction, I swear the CPU manual just said it always predicts that the branch will be taken. But that was like 15 or 20 years ago.
Daniel Lemire@lemire·
@andrew_rogers Well, it is 0/1. You either get it right or you don't. There are certainly other factors, such as how quickly the processor can learn and so forth, but they get harder to measure.
Andrew Rogers@andrew_rogers·
@lemire Given the large differences in the number of branches that can be predicted across those architectures, it would be interesting to assess relative prediction quality. Is it a quantity versus quality tradeoff we are seeing here? If so it would suggest optimization possibilities.
Daniel Lemire@lemire·
@JCJasmin3 We could describe Jesus as someone who suffered “cancellation” because of his ideas.
Pablo A. Quintanilla@PabloAQuin4054·
@lemire I am more of a "techno-pessimist", but I actually hope your prediction stays true well past 2024. I used to edit/translate, so in my experience AI has already destroyed those jobs. Teaching and researching are on the line; let's hope they hold up for a bit more. Regards.
Pablo A. Quintanilla@PabloAQuin4054·
@lemire But this is deeper than bad professors being exposed (the good ones are another story). AI will take over lots of jobs, so broadly speaking neither professors nor students will be necessary. The new generation will enter "society" but not "the workforce".
Daniel Lemire@lemire·
@ecoezen I think that many academics have an incentive to say that AI makes students stupid because AI, as a tutor, competes against them in some respect. Whether AI makes people stupid is unclear at this point.
Can@ecoezen·
@lemire Do you have any alternative idea? I mean, don't only consider the clever students around you; most students are not that 'clever'. How will we teach them to learn, to think, to do, if current AI trends force them to delegate effort in exchange for the result?
Can@ecoezen·
@lemire I mean, we see some people saying AI is an amplifier, but the reality is that AI is an amplifier for only a very small percentage of the population. The vast majority of the population relies on the 'artificial friction' the education system provides. Let's be honest and accept this first!
Alexandre McKinnon@AlexLMcK·
@lemire I like the post, but that 6-panel comic was painful. I appreciate trying new stuff, but I don't feel the blog post renders well in comic form. If I hadn't read the post first, the comic would have read like a weaker rendition of the same content. Infographic next?
Daniel Lemire@lemire·
AI as an ‘IDE’ and what to watch for

Since the early days of programming, there has been a feeling that programming needed to be ‘supported’. It could not just be typing text in an editor. And so we got integrated development environments like Turbo Pascal, Visual Studio, and so forth. I estimate that most software programmers work within an IDE. I certainly got serious with programming by using an IDE.

With a system like Visual Basic, you could, in a few hours, build an entire desktop application even if you did not know programming very well. We got autocompletion so that you did not need to remember function names or read the documentation.

It is obvious to me that AI agents like GitHub Copilot or Claude CLI are just a form of IDE. And I think that they need to be used accordingly. One of my favourite games lately has been to ask an AI to give me the assembly code of a function and to document it.

If you accept my premise that AI is like an IDE, then many of my observations regarding IDEs apply. Many years ago, a company tried to recruit me to run their software division. When I learned (accidentally) that it was built on Visual Basic, I immediately got cold feet. There is nothing wrong per se with Visual Basic. You can write solid software with it. But the psychological effect of a tool like Visual Basic is that you tend to spend little time on actual engineering and more time adding features. Thus, it scales poorly. “Being easy” is not always the feature you think it is.

And I would react similarly today if I were asked to lead a software team… and I learned that their entire stack was vibe coded (coded entirely by AI). You might say that it is unfair. You stopped coding by hand months ago and your code has never been better. Maybe it is. And, to be fair, remarkable pieces of software were written using powerful tools that made programming easier.

« The medium is the message » said McLuhan. And it is true. I look at Java, a great programming language. The community quickly became very IDE-centric (as opposed to the Unix tradition of command-line tools). Some of the most important systems today are built with Java. But you can feel the struggle to remain relevant, because the pain points are hidden behind a graphical interface; people just get it. « Do not look behind the curtain! »

Here is what you want, ideally: Bad engineering should be physically painful. When you are doing things wrong, you should get headaches. Tooling powerful enough to make bad engineering ‘work’ might cause more harm than good.