Nick @nic_s182
316 posts
Joined March 2017
33 Following · 8 Followers

Uncle Bob Martin @unclebobmartin
I'm struck by the number of complaints about my assertion that C# is derivative of Java. I guess people just don't know the history of the language.

Nick @nic_s182
@cmuratori It's really shocking how people in tech just don't know how fast a computer can be. I build crappy CRUD web apps for a living, I'm a meh dev at best, but I know that 100ms for something this simple is pathetic even if it is built using web tech...

Casey Muratori @cmuratori
I wanted to test this now-promulgated claim that "94ms time-to-show" - assuming that's accurate - would constitute Microsoft having "managed to improve the run prompt latency" that "nobody has had an issue with" in previous versions.

On a 60fps* HDMI frame capture from my Windows 10 machine, which is around a decade old (Intel i7-7700K @ 4.2GHz), there is one visible frame between completing the click action and seeing a completely-drawn "run" dialog. The run dialog appears hollow for a single frame, then fills in (as shown). With just a capture, I can't tell if this is benefiting from a "hidden" extra frame of latency, because the Start Menu appears to take one extra frame to "disappear". The frame prior to the first one in the screenshot appears to have actually finished the click (the mouse cursor changed), but the Start Menu does an extra frame of color change on the text after that. Without looking at the code, I'm not sure whether to count that against "run" or against the Start Menu if we're being meticulous - and of course I don't know whether the "94ms" (apparently median) time would have been counting that time or not.

Either way, "94ms time-to-show" would clearly be a significant regression unless that number is measuring something very different from "response after completing the click on Run". The Windows 10 version on 9-year-old hardware appears to be responding within either ~33ms or 50ms, depending on how you count the frames. Normally, I would now say "this means, best case, it will feel like a 30fps experience," but we all know how that would go. Apparently it is just horribly nefarious and misleading to tell people an equivalent FPS number to help them gauge the responsiveness of an interactive program in a casual tweet you make on social media.

Of course, please note that I am not measuring total physical latency here (like mouse-to-event or submission-to-display latency), as I assume the originally quoted "94ms" was not measuring those things either. Either way, once there is a wide-release version of the new Run dialog, it will be easy enough to test whether it has improved by running like-for-like captures, which I cannot do here because I don't have the new purportedly-faster version.

* I apologize profusely for using the term "60fps" to specify the rate at which these frames were captured so you could have a reference for their temporal spacing. I realize that it is highly misleading to use FPS, especially when looking at only 2 frames. I should clearly have said "on a 16.66666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666 millisecond per frame capture", so that everyone would have a much better intuitive understanding of what was going on.
[attached image]
Quoting Tom Warren @tomwarren:

everyone is taking issue with the conflation of two perf metrics. I think the problem is people read optimizing for perf as making it much faster, but I think Microsoft’s point is that they’ve managed to improve the Run prompt latency (that nobody has had an issue with) despite adding more functionality (Command Palette) and redesigning it. So they optimized for the perf of the added feature set

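The frame-counting arithmetic in the post above can be sketched as follows. This is a minimal illustration; the function name `frames_to_ms` and the 60fps default are invented for this example, not taken from any code discussed in the thread.

```python
# At a 60fps capture, each frame spans 1000/60 ≈ 16.67ms, so a latency
# observed as n capture frames corresponds to roughly n * 16.67ms.
def frames_to_ms(n_frames, capture_fps=60):
    """Convert a count of capture frames into elapsed milliseconds."""
    return n_frames * 1000.0 / capture_fps

# Counting 2 frames between click and fully-drawn dialog gives ~33.3ms;
# counting 3 frames gives 50.0ms - the "~33ms or 50ms" figures above.
```

A capture can only bound latency to within one frame interval, which is why the count of frames, not a precise millisecond figure, is what the capture actually tells you.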

Nick @nic_s182
@nicbarkeragain To be clear, viewport culling is a good solution when you need to deal with 10k+ lines of text...
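As a rough illustration of the viewport-culling idea (a hypothetical sketch; `visible_range` and its parameters are invented for this example, not taken from any app in the thread), only the rows intersecting the viewport, plus a small overscan margin, are rendered:

```python
def visible_range(scroll_top, viewport_height, row_height, total_rows, overscan=2):
    """Return [first, last) row indices worth rendering for the current scroll."""
    first = max(0, scroll_top // row_height - overscan)
    last = min(total_rows, (scroll_top + viewport_height) // row_height + 1 + overscan)
    return first, last

# A 600px viewport of 20px rows scrolled to 4000px in a 100,000-row list:
first, last = visible_range(4000, 600, 20, 100_000)
# Only last - first = 35 rows need real UI nodes, regardless of list size.
```

The cost of a render then depends on viewport height, not on total row count, which is what makes 10k+ rows tractable.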

Nick @nic_s182
@nicbarkeragain I've done this on native apps, but never needed to in the browser. I just stick to paging. I want end users to use filters instead of scrolling until they find what they want. Also, I want the app to remain quick and light on 10-year-old handheld scanners over corporate wifi...

Nick @nic_s182
@efortis @nicbarkeragain Don't keep track of what is selected in the UI element; track it in state in your backend.
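One way to read that advice (a hedged sketch; `SelectionState` is a made-up name, and a real app might keep this server-side or in a client-side store): selection lives in application state keyed by stable row IDs, so it survives paging, sorting, and re-rendering of the UI.

```python
class SelectionState:
    """Track selected row IDs independently of which page the UI is showing."""

    def __init__(self):
        self._selected = set()

    def toggle(self, row_id):
        """Flip the selection status of one row."""
        if row_id in self._selected:
            self._selected.discard(row_id)
        else:
            self._selected.add(row_id)

    def is_selected(self, row_id):
        return row_id in self._selected

    def selected_on_page(self, page_row_ids):
        """When rendering a page, ask state which visible rows are selected."""
        return [rid for rid in page_row_ids if rid in self._selected]

sel = SelectionState()
sel.toggle(7)   # selected while viewing page 1
sel.toggle(42)  # user pages away and back; both selections survive
```

Because the UI widgets are rebuilt on every page change, any selection stored in them is lost; state keyed by ID is not.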

Eric Fortis @efortis
@nicbarkeragain I want to do this on my project, any ideas for handling selection across pages?

Casey Muratori @cmuratori
Just want to make sure I'm reading this right: Microsoft rewrote the run dialog with performance "top-of-mind", and the best they could manage to do when putting up a single text box was 10fps?
[attached image]

Nick @nic_s182
@cmuratori IMO, <100ms is decent for a web app making a call over corporate wifi + a backend call to a large DB + rendering new HTML, all on a Raspberry Pi 3. But 100ms for an icon+textbox+button, all local and NATIVE to the OS, is pathetic even if they're using DotNet...

Casey Muratori @cmuratori
At this point I feel like I should do a stream tomorrow to talk about the replies I've seen to this post. I completely disagree with people's umbrage about the use of FPS as a metric here: a) that is exactly what time-to-show actually is (we measure 1% and .1% lows for a reason!), and b) to me, FPS is the most relatable number for response time for average people to understand, given that they don't work on software performance for a living like I do. Many people (especially gamers!) intuitively know what 10 or 11fps responsiveness feels like for an action. Few intuitively know what "94ms" responsiveness feels like.

I also find it unacceptable to call this "load time" because the user is not asking to "load" anything - it is an action they are taking from a UI that they perceive to be contiguous, and the choice to involve a "load" of any kind at this point is purely the fault of the designers of the system, not some inevitability. Everything has already "loaded" from the point of view of the user, and if you are claiming to have done a rewrite with performance "top-of-mind", you should have preloaded or precached whatever it is that you believe takes 94ms to "load" here.
Quoting Casey Muratori @cmuratori:

Just want to make sure I'm reading this right: Microsoft rewrote the run dialog with performance "top-of-mind", and the best they could manage to do when putting up a single text box was 10fps?


Nick @nic_s182
@tomwarren @cmuratori FPS is a time-based measurement, and you can convert between it and other time-based measurements. Basic math. If 1 frame takes 94ms then you will have about 11 frames in a second, and it's a decent way of conveying how poor 94ms for a tiny native popup really is to the average user.
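The conversion described here is just taking reciprocals (helper names are illustrative only):

```python
def ms_to_fps(frame_time_ms):
    """An equivalent frames-per-second figure for a given per-frame time."""
    return 1000.0 / frame_time_ms

def fps_to_ms(fps):
    """Milliseconds per frame at a given frame rate."""
    return 1000.0 / fps

# 94ms per response works out to 1000/94 ≈ 10.6, i.e. about 11 frames per second.
```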

Tom Warren @tomwarren
@cmuratori it's about as relatable as saying a game loads at 0.1fps. Just a weird comparison. The better comparison is that the Run prompt loads faster than it does on Windows 10, but Microsoft *could* have made it faster if it was supposedly optimizing for perf

Tom Warren @tomwarren
if we're misleadingly conflating 2 different perf metrics (time-to-first-frame latency and fps smoothness) then just factor in OS boot time so it's less than 1fps 🙃 The reality is the new Run loads faster (94ms) than the existing one (103ms), which nobody has ever moaned about
Quoting Casey Muratori @cmuratori:

Just want to make sure I'm reading this right: Microsoft rewrote the run dialog with performance "top-of-mind", and the best they could manage to do when putting up a single text box was 10fps?


Nick @nic_s182
@FallenZeraphine @cmuratori @BartoszDobija I wouldn't go as far as to say it's "extremely fast", especially when comparing it to good old C, but it is very much not as slow as whatever MicroSlop is doing. It's almost always the factory-pattern-virtual-abstract-override leaning Jenga tower of a stack rather than the language...

Wimukthi @FallenZeraphine
@nic_s182 @cmuratori @BartoszDobija Agreed, this has nothing to do with .NET; it's extremely fast, and comes very close to the performance of natively compiled software in many cases. This is something else.

Nick @nic_s182
@cmuratori @BartoszDobija Been building C# apps for 20 years and no, it's not as fast as C or Rust, but with no effort it can respond 100x faster than whatever MicroSlop is doing. It's HOW it's built (strict OOP+SOLID+Design Patterns) that is the real problem IMO...

Casey Muratori @cmuratori
@BartoszDobija Assuming it is based on the PowerToy, as their blog said, yes, it is presumably in C#.

Nick @nic_s182
@cmuratori I have to connect to EU servers for almost anything, and I can tell you that 200ms is in fact NOT perceived as instant. Overall, I aim for <100ms UI response time in the web apps I build that are hosted on site. With lots of data it's the DB calls that are hard to keep under 100ms...

Nick @nic_s182
@nicbarkeragain @miguelvitta Yup... I'd say the glue engineering problem started to take hold around 15-20 years ago. Glue engineers love "AI"...

Nic Barker @nicbarkeragain
To be a little less vague, I suspect that we're likely (not certain, but likely) to be entering a period of unprecedented software degradation, and we're going to be seeing an increasing frequency of outages like this across many high-profile products. But IMO the cause is actually not just the-one-thing-that-everyone-is-always-talking-about; it's a number of things that have all been bubbling away at just below critical levels for a long time. Some of the things off the top of my head:

- Poorly designed / optimised software has been getting a free ride on hardware improvements pretty much since the invention of the computer. That chapter is now coming to an end, and will only be worsened by the enormous industry-wide pivot to producing & innovating on AI-specific hardware, rather than general-purpose CPUs etc.
- The ZIRP era created a temporary suspension of reality in our industry, and now that it's ended we need to deal with the hangover. Companies that spent years making no profit, paying extravagant compensation to employees / shareholders and giving away server time for free are now pivoting into extraction mode, which is putting further pressure on their low-quality software. QA is being laid off, hardware budgets are being reduced, timelines for shipping features are becoming more aggressive, etc.
- The enormous amount of free money incentivised too many new people to join the industry too quickly. This has led to an abundance of poor-quality education programs (bootcamps, uncertified colleges etc) and an influx of people into the industry who frankly aren't interested in programming. If you compared the average person in the industry now to 20 years ago, I suspect the difference in motivations would be stark. I'm not saying it's these people's fault necessarily; it's simply an inevitable result of the absurd compensation / performance expectations ratio that our industry has enjoyed for the last 15+ years. Working for a tech company has also become socially prestigious, which further adds to the problem.
- Because computer programming was once an incredibly niche area of interest, many of our fundamental systems are built on trust. We're now starting to see that if systems like open source, the public supply chain, discussion spaces, education etc become flooded with bad actors, we have no real mechanisms to deal with them.
- Our hiring / recruitment pipeline has totally misaligned incentives. Even before the AI-resume / AI-HR-filtering arms race disaster that we're experiencing now, the widespread adoption of leetcode-style interviews IMO selected for a very narrow personality type, and filtered out candidates who would have made great contributions to the industry long term.
- The pivot from purchasing long-term stable releases of software to paying a subscription for constantly updating software has done huge damage to software quality as a whole. Companies have lost their incentive to get their software "right" because they can just "fix it later", and for the consumer - you can't just go back to the version of GitHub that still works because the new one has problems.

This was all happening well before AI entered the picture. I won't belabor the point because there has been endless discussion about it. But to me personally, there are two additional and deeply worrying problems with AI code generation:

- It's undeniable at this point that it negatively affects the people who use it. It stops juniors from getting better, and it burns seniors out and makes them hate their jobs. Like it or not, humans are still the core of this industry, and I don't see this ending well.
- It's completely unfit for purpose in the most important, high-stakes situations. One of the reasons we excuse all the small errors it makes is that it's low effort to type "do it again and fix this bug". That kind of thing doesn't fly when you only get one attempt because a mistake results in data loss or an outage. The damage is done.

All the above has led to a silent exodus of many of our most experienced and impactful people. There are so many amazing programmers who made enough through stock options / compensation that they didn't need to work anymore, and were only doing it because they enjoyed it. Many of these people have just quit the industry and switched to doing hobby projects in the last 5 years. These are the types of people who have the experience and foresight to prevent the types of outages that we're seeing at GitHub today. It's very easy to assume that the proverbial straw that broke the camel's back is entirely to blame here. But I think it's a reckoning that has been on the horizon for a very long time.

WarrenBuffering @WarrenInTheBuff
bad engineers engineer things until they're unengineerable

GitHub @github
Starting June 1st, GitHub Copilot will move to a usage-based billing model as GitHub Copilot supports more agentic and advanced workflows. In early May, you'll see a preview bill experience, giving visibility into projected costs before the transition. 👉 Read more about the upcoming change: github.blog/news-insights/…

Nick @nic_s182
@atmoio It's really an architecture problem IMO and so they end up generating more of the same kind of code with more of the same problems. It's like building steam engines faster with all of the same compounding problems and no one is asking why we're building steam engines in 2026...

Nick @nic_s182
@atmoio Where I work they are trying to use "AI" to generate what they call boilerplate code. Thing is, most of it can just go away with a bit of engineering work, but they are stuck in their ways/"best practices" and the "AI" is just reinforcing it...

Mo @atmoio
The real reason they keep saying AI will take your job

Nick @nic_s182
@pandresgq @atmoio Stop calling it "AI"... LLMs are not "AI". We don't have AI. It's a statistical text generator, and with your fuzzy filter as input it can generate compilable text. If LLMs are "AI" then anything with an IF statement can also be called "AI". Your belief does not change reality...

Nick @nic_s182
@rfleury I usually refer to such "developers" as Glue Engineers, but I think we need a new term, as I think the word engineer is too generous here. Last year I found a piece of code that looked VERY out of place... quick search and I found the EXACT code on StackOverflow from 9 years ago.

Ryan Fleury @rfleury
As demonstrated below, the only “programmers” who think all programming is just babysitting Claude are the same ones who were just copy/pasting from Stack Overflow—e.g. not really doing any programming—because the former is simply an accelerant to the latter
Quoting Suhas @zuess05:

Programming in 2026 is literally just sitting in a dark room and gaslighting Claude into fixing its own code hallucinations. We went from copy-pasting StackOverflow answers to acting like a deeply disappointed manager for a neural network. The entire tech industry is basically just AI babysitters now.
