John Vert
@jvert
523 posts
https://t.co/Fci2BOHvih
Joined June 2009
177 Following · 246 Followers

John Vert @jvert
@rfleury controlling the "semantics and lifetime" is a double-edged sword, as now you have to figure out (and follow) the rules. Everybody already knows and agrees on the rules for the lifetime of stack-allocated memory.

Ryan Fleury @rfleury
"The stack" is a per-thread address space range, dynamically reserved by a kernel when a thread is created. The reason why "stack" is often presented as preferable to "heap" is that, when using a thread's stack, the expensive part of allocation - address space reservation, and preparation of physical pages for backing the address space - has already been performed when the thread was created. But kernels also provide mechanisms for doing your own address space reservation (mmap, VirtualAlloc), and there is nothing stopping you from using these to do bulk allocations up-front to create your own stacks. This can make common case allocations as cheap as "the stack", but the advantage is that you now control the semantics and lifetime of the stack you've created. Thus, it does not need to be coupled to - for example - the lifetime of a scope or function, as the thread stack is.

The "stack versus heap" dichotomy is an unfortunate mythology because it seems to, in practice, communicate the idea that when a thread stack is insufficient for some purpose (allocations must exceed scope boundaries, allocations may need to exceed thread stack limits, allocations require more fine-tuned reserve/commit behavior, and so on), then the only alternative is the heap, particularly for very granular allocations. This is, again, a mythology, and it has confused the C++ world in particular for decades.
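Fleury's "create your own stacks" idea can be sketched in a few lines. This is a minimal bump allocator (arena), with invented names (`Arena`, `arena_push`), not any particular library's API; for portability the up-front reservation is a plain `malloc`, where a real implementation would reserve address space with mmap/VirtualAlloc and commit pages lazily:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>

// Minimal bump-allocator ("arena") sketch. The big block is reserved once,
// up front; individual allocations are just a pointer bump, which is what
// makes them as cheap as pushing onto the thread stack.
struct Arena {
    std::uint8_t* base;
    std::size_t   cap;
    std::size_t   used;
};

Arena arena_create(std::size_t cap) {
    return Arena{static_cast<std::uint8_t*>(std::malloc(cap)), cap, 0};
}

// align must be a power of two.
void* arena_push(Arena* a, std::size_t size, std::size_t align) {
    std::size_t p = (a->used + (align - 1)) & ~(align - 1); // round up
    if (p + size > a->cap) return nullptr;                  // out of space
    a->used = p + size;
    return a->base + p;
}

// Lifetime is explicit and decoupled from scope: popping back to a saved
// mark frees everything allocated after it, in O(1), with no per-allocation
// bookkeeping.
void arena_pop_to(Arena* a, std::size_t mark) { a->used = mark; }

void arena_release(Arena* a) {
    std::free(a->base);
    a->base = nullptr;
    a->cap = a->used = 0;
}
```

The key point of the sketch is the last two functions: because you own the stack, you choose when it unwinds, rather than being tied to function return.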
Boost C++ | Open Source Libraries @Boost_Libraries

std::vector always heap allocates. std::array can't change size. For decades, there's been no standard container that gives you a dynamically sized array with a compile-time capacity limit and zero heap allocation. C++26 finally adds std::inplace_vector. Guess where they got the idea 🧵👇
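Since std::inplace_vector is C++26 and not yet widely shipped, here is a toy sketch of the core idea it standardizes: storage inline in the object, dynamic size, compile-time capacity, zero heap allocation. The class name `FixedVec` is invented for illustration; the real std::inplace_vector carries the full std::vector-style interface rather than this handful of members.

```cpp
#include <cstddef>
#include <new>

// Toy fixed-capacity vector: the element storage lives inside the object
// itself (on the stack if the object is a local), so push_back never heap
// allocates. The capacity N is a hard, compile-time limit.
template <typename T, std::size_t N>
class FixedVec {
    alignas(T) unsigned char buf_[N * sizeof(T)]; // inline storage
    std::size_t size_ = 0;
public:
    std::size_t size() const { return size_; }
    static constexpr std::size_t capacity() { return N; }

    void push_back(const T& v) {
        if (size_ == N) throw std::bad_alloc{}; // overflow is an error
        ::new (buf_ + size_ * sizeof(T)) T(v);  // construct in place
        ++size_;
    }

    T& operator[](std::size_t i) {
        return *std::launder(reinterpret_cast<T*>(buf_ + i * sizeof(T)));
    }

    ~FixedVec() {
        for (std::size_t i = 0; i < size_; ++i) (*this)[i].~T();
    }
};
```

A `FixedVec<int, 8>` declared as a local variable behaves like the "dynamically sized array with a compile-time capacity limit" described above: it grows and shrinks within its limit without ever touching the allocator.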


Jim Moore @cougsgo
Why would tonight’s game be blacked out in Bend when I paid for every possible MLB package, including MarinersTV? Anyone know why that is? Thanks. #GoCougs

John Vert @jvert
@eramajaarvi @stevesi React native might be a reasonable tradeoff for cross platform UI but building the start menu in JavaScript is unhinged

james @eramajaarvi
@jvert @stevesi some parts of the start menu are react native, electron is a completely different thing

John Vert @jvert
@marklucovsky Self hosting was also an important part of getting real world testing.

mark lucovsky @marklucovsky
2/ The architecture work happened differently. The kernel/executive team was extremely tight-knit. If you touched kernel code, it was your responsibility to know what Dave Cutler, Steve Wood, Lou Perazzoli, Mark Lucovsky, Landy Wang, etc. were doing. Design review mostly happened socially: hallway conversations, short design docs, active integration/debugging sessions. We moved fast. A feature like I/O completion ports could span kernel, I/O manager, executive, ntdll, and Win32 layers — and major chunks could come together in days because there was enormous trust and constant communication. The “review” often happened after checkin during integration and stress.

And yes, we had mitigation mechanisms. “Safe sync periods” where checkins were tightly restricted so people could actually achieve a stable enlistment/build. Near major milestones, checkin lockdowns became extreme. At one point, if you wanted a checkin during stabilization, you physically came to my office and wrote it on my whiteboard. If the board was full, you waited. Later we escalated to formal ship-room processes: real bug, real justification, approval from senior leads, then permission to check in. That wasn’t really code review. It was change control.

Different era. Different tooling. Different scale constraints. But one thing David’s piece gets right: A lot of software quality did not come from pre-commit review gates. It came from tight teams, deep ownership, brutal integration pressure, system-wide stress, and developers who fully understood the machinery they were standing on. crawshaw.io/blog/agent-pri…

mark lucovsky @marklucovsky
In David Crawshaw’s (@davidcrawshaw ) recent post “The agent principal-agent problem” there’s a lot of insight beneath the headline “Code review is broken.” Worth reading carefully. Toward the end, David reflects on what he calls the old “cowboy” development culture at Microsoft in the 80s/90s. Not much has been written about that era, mostly because there was no social media, no laptops everywhere, no phones recording daily engineering life. A few thoughts from someone who lived it.

Back then, formal code review was not our primary line of defense. Our biggest daily problem wasn’t “is this algorithm theoretically perfect?” It was: Will the full system compile? Will it link? Will it boot? Will it survive stress?

Pre-Win2k we used an internal source control system called SLM (“slime”). No branching. Filesystem-based. Extremely brittle. To build a bootable NT system you needed 100+ SLM projects welded into arbitrary places in the tree. Getting a machine synced could take 3+ hours. You literally ran sync in a loop until you got no new files and no errors.

Then came the build. In the NT 3.1 timeframe, a full system build on a capable machine might take ~5 hours. By the Win2k era, full builds had stretched into the 14+ hour range — and this was before modern build farms or large-scale distributed compilation. Those build times fundamentally shaped developer behavior. Most developers avoided full-system builds entirely. They worked in tiny enlistments and borrowed objs/binaries from known-good systems because rebuilding the entire world was simply too expensive in both time and productivity. The longer builds became, the more pressure there was to take shortcuts — and those shortcuts created endless opportunities for integration failures and subtle mistakes. A broken build could easily waste days of engineering time. In bad stretches, you could go multiple days without a clean master build.
That approach worked… until someone changed a widely shared struct, renamed a field, added a property, tweaked a macro, or silently altered alignment assumptions somewhere deep in the system. Best case: parts of the system no longer compiled. Next best: they compiled but failed to link. Worst case: everything built successfully, but incompatible assumptions between old objs and newly compiled code poisoned the running system in ways that were extremely difficult to diagnose.

THIS was our daily battle: not bad style, not missing comments, not minor logic bugs — it was preserving system-wide build and runtime integrity across a massive codebase when most developers could not practically build the entire system locally.

Once we had builds that compiled, linked, and booted, the real work started. Stress. Every dev had at least two machines: one for coding, one for testing/stress. We hammered systems continuously with unrealistic randomized load. Deadlocks. Pool corruption. Loader hangs. Resource exhaustion. “Hung, No Ready Threads.”

In the early days, the stress build was literally my build. I’d walk office-to-office in the morning checking which machines had died overnight and assign debugging work. No remote debugging yet. If someone needed your machine, you lost your office for hours. Eventually we got remote.exe and centralized build/stress systems, but debugging was still brutal: raw assembly, minimal symbols, hand-reconstructed stacks, careful avoidance of paged-out memory because one wrong move killed the session.

That was the real engineering culture: integration, stress, performance, resource correctness, system behavior under extreme load. Most of the failures we chased would never have been caught by lightweight pre-commit review from someone inside your immediate group.

John Vert @jvert
@ivanrouzanov The image of Cutler pecking away at his keyboard with his own personal tape drive humming away behind him is frying me 🤣

Ivan Rouzanov @ivanrouzanov
Is it true that Longhorn was an absolute disaster? Yes. Is it true that Vista was reset back to Server 2003 SP1? Yes. But was it DaveC backup tapes? No, this is not true. All the code was in the source control, we just started a new branch. No backup tapes from DaveC.
Andrew Pla @AndrewPlaTech

"Not a happy marriage." @jsnover on why .NET and Windows have never gotten along. This clip has Bill Gates' obsession, the Longhorn disaster, Dave Cutler's backup tapes, and the day Notepad ballooned from 15KB to 15MB.


Tyler Angert @tylerangert
What’s funny about the industry stigma against frontend work is that before the internet, basically all programming was “frontend” in a way. Eg it was all about graphics, local performance, proper memory management, etc. sort of “back of the frontend” work all baked together. The service-ification of algorithms + compute has created an artificial divide between where “real work” lives in software.

John Vert @jvert
@bcardarella lol how many people do you think worked on NT3.1 kernel?

Brian Cardarella @bcardarella
This is such a complete misunderstanding of how complicated it was to do this within Microsoft when he did it. Plotting telemetry is one thing. Getting the largest software vendor, at the time, to coordinate that all software within needs to play nice with this one tool is not.

John Vert @jvert
@Titan_JS_ @SandyofCthulhu Lol we literally loaded trays of donuts into trash bags. If you think the organizations threw them out, I think you are quite disconnected from the reality of some people's lives.
TitanJS 💤🐼/💫❄️
@jvert @SandyofCthulhu If you donated sealed boxes of donuts, those organizations can say they were not aware of the origins of the food and can hand them out. If they were loose or open packages they most likely ended up back in the trash.

Sandy Petersen 🪔 @SandyofCthulhu
Okay let's discuss why Krispy Kreme does this:
1) obviously, they miscalculated how many donuts to make that day. They don't WANT to waste the food. Since you can't predict your customers, sometimes you will have too much food.
2) Because they're a business, they can't just hand out the donuts for free at day's end, or they won't be able to sell them.
3) So the solution is "give them to charity". The problem here is that the litigious nature of modern America means that they will absolutely be sued when some moron chokes on a pecan chunk. It's not worth the risk.
4) The next step is to argue "The law needs to hold Krispy Kreme guiltless if they give away their extra food." That all sounds fine and well, BUT I guarantee that the city will have a whole list of rules a food donation has to follow to be immune from lawsuits. And Krispy Kreme has clueless teenagers throwing out the donuts, so they will absolutely fail to follow all the rules.
5) the only way this could readily work in the USA without being crushed under the burden of bureaucracy or lawsuits would be if each individual Krispy Kreme had a specific charity who came by at night to pick up the extra donuts. People they knew and trusted.
Get rid of the lawyers and allow for common sense and that food can be donated lightning fast.
Molly🎧🏳️‍🌈 @RasberryRazz

France made this wastefulness illegal cause it’s cruel and only causes more waste issues. Any food market or restaurant over 400 square meters has to donate all their good unsold food to charities and are fined if they do anything like this. That law should be applied everywhere


John Vert @jvert
@eeuoss I would never take an engineer seriously who can't figure out what files are taking up space on a filesystem regardless of the OS.

Eugene Ostroukhov @eeuoss
My girlfriend's Windows PC crashed. Drive is full. With “system files”. I don’t see a way to clean it up. She’s finally ready to move to a normal OS. I don’t understand how people can use this shit. I would never take a software engineer seriously if they are on Windows.

John Vert @jvert
@davepl1968 Didn't you go to school in Canada? Absolutely watched this in the school cafeteria.

Dave W Plummer @davepl1968
99.9% of people who "experienced" the Challenger disaster saw it on replay and now remember it as live. Almost NO ONE was watching. Everyone thinks they were. It's a fascinating collective false memory.
Jeremy London @SirJeremyLondon

Anyone who experienced the Space Shuttle Challenger explosion, like I did, is probably a bit hesitant to get too excited about the Artemis II launch today. I truly hope our children don’t have to experience such tragedy. May the Universe welcome them and return them safely home 🙏


mark lucovsky @marklucovsky
@housecor Windows NT was built without formal code review… But we also had some pretty aggressive policies around breaking shit

Cory House @housecor
Just learned a team at Microsoft is doing code reviews *after* merge. Why? To move faster. No more pausing work to wait for code reviews. No need for stacked PRs. No more time-consuming merge conflicts caused by long code review delays. This has risks, but may work well for a team that is:
- mature
- high trust
- has strong automated quality checks

John Vert @jvert
@allie__voss Basically like going to a big concert or sporting event today (without the ticket check): just walk through the metal detector.

EMVJ @jujubileen
@jvert which one will give me the most clout

EMVJ @jujubileen
I've got to get a new laptop as my charging ports are finally kicking the bucket. (I'll be using my old one to run a local AI model) Deciding between replacing my old one (Lenovo X1 Carbon i7) or trying out a macbook (probs macbook air M1). I love my thinkpad because of the mechanical keyboard but the webcam is potato quality and I honestly have been so unhappy with the aesthetics of recent windows updates anyone have thoughts??

EMVJ @jujubileen
@jvert 🧐you're on to something. But it's sad, will be an end of an era but then I can finally switch my Lenovo to linux for our new family agent Botson to use...Arthur is not ready but I am

John Vert @jvert
@jujubileen So you can make sick iOS apps. Windows has turned into copilot slop and Arthur's not ready for Linux

EMVJ @jujubileen
@jvert for real why

John Vert @jvert
@marklucovsky @davepl1968 to be fair, using plain C you can still get into a lot of trouble in multi-threaded environments with pre-emptive multi-tasking (and exceptions)... C++ just makes it easier to hide the trouble in constructors/destructors/overloaded operators & functions.

mark lucovsky @marklucovsky
Some of the angst might have also been due to GDI initially and then made worse by GDI in kernel mode. I think they/we all learned a lot about how much trouble you could get into using C++ in multi-threaded environments with pre-emptive multi-tasking. Remember that phase where they used to take out locks in constructors and released them in destructors — just because. That’s probably part of the issue with number 3… Every morning I’d have a pile of stress failures: “hung no ready threads” — 96.4% of these were due to holding locks across callbacks or holding a lock and then calling a function that queued some work and the worker would need the same lock. AI would have done a better job than some of the folks we worked with back in the day…
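The failure shape Lucovsky describes (hold a lock, then run work that needs the same lock) can be made concrete with a small C++ sketch. All names here are invented for illustration; `try_lock` from a second thread stands in for the blocking `lock()` that, in the real stress runs, produced "hung, no ready threads":

```cpp
#include <mutex>
#include <thread>

std::mutex g_lock; // the shared lock both sides want

// Simulates queued work that needs the lock it was queued under.
// With a blocking lock() this is the deadlock; try_lock lets the
// example observe the conflict without actually hanging.
bool try_worker() {
    if (!g_lock.try_lock()) return false; // lock is held elsewhere
    g_lock.unlock();
    return true;
}

// The anti-pattern: an RAII guard (lock in ctor, unlock in dtor) is held
// across a call that hands work to another thread needing the same lock.
bool demo_conflict() {
    std::lock_guard<std::mutex> guard(g_lock);
    bool worker_got_lock = true;
    std::thread t([&] { worker_got_lock = try_worker(); });
    t.join(); // we still hold g_lock while the worker runs
    return worker_got_lock; // false: the worker could not proceed
}
```

The RAII guard itself is not the bug; the bug is the scope it covers. Narrowing the guard so it is released before the worker is dispatched removes the conflict.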

Dave W Plummer @davepl1968
I had this conversation at Microsoft in 1996:
Me: "Why do we have our own pointer array code?"
Mgr: "Because it's solid and well tested."
Me: "So is vector<> in the STL!"
Mgr: "Devs don't know the STL"
Me: "They're devs, they should know the STL!"
Mgr: "That's great, but they don't, so no."
And so we continued to use and write all of our own containers and so on. Because the STL was scary.
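For what it's worth, what that hand-rolled pointer array bought is exactly what std::vector provides out of the box: growth, size tracking, and cleanup. A tiny sketch with an invented `make_module_list` (the names are illustrative, not from the original code):

```cpp
#include <cstring>
#include <vector>

// Replaces a custom "solid and well tested" pointer array with the
// standard container: no hand-written realloc logic, no manual free.
std::vector<const char*> make_module_list() {
    std::vector<const char*> mods;  // dynamically sized, heap-backed
    mods.reserve(4);                // optional: pre-size like a fixed array
    mods.push_back("ntoskrnl.exe"); // amortized O(1) growth
    mods.push_back("ntdll.dll");
    return mods;                    // value semantics; cleanup is automatic
}
```

The manager's objection was about familiarity, not capability; nothing the custom container did is missing here.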
trish @TrisH0x2A

i used to roll my eyes whenever senior devs said "just use the standard library." i was wrong. they were right. so much third-party stuff is genuinely unnecessary.
