zakk

2.1K posts


@hexednobility

software, markets, ai, vt hokies. Expertise in data engineering and mobile/cloud arch

East Coast · Joined December 2020
306 Following · 581 Followers
Pinned Tweet
zakk
zakk@hexednobility·
3 years ago, many predicted massive SWE productivity gains from LLMs. As a sr swe at big tech, I wrote this post to explain why they were wrong. I was largely right: even now, in 2026, there is little evidence that the industry writ large has seen 30% productivity gains. But that's changing, fast. It's now abundantly clear that software development, and human-computer interaction itself, is in the midst of a profound transformation. And I've never been more excited. I'm reviving this account to talk LLMs, engineering, and second-order effects. Follow for the signal.
zakk@hexednobility

.@bchesky and @Jason recently claimed that AI will boost software developers' productivity by ~30% across the industry. As a developer and IC at a major tech company, let me debunk that.

1. Software developers don't spend all their time coding. Typically, coding occupies less than half of a developer's time. The rest goes to parallelization costs (meetings, documentation, alignment, planning) and operations (monitoring metrics, configuring alarms/pipelines, debugging). This limits the time that can be saved: even with improved coding efficiency, the overall productivity gain would be modest.

2. AI might expedite coding, but not code reviews. Every competent tech company reviews all code before production, often by multiple devs. If I use AI-generated logic, I must first understand it well enough to defend it in code review. Review time for others remains unchanged. Some will choose to ship AI-written code without understanding it, which will lead to inevitable disaster.

3. AI models must be fine-tuned for in-house frameworks, libraries, and practices. While the base models can write code using open-source frameworks, they are useless for in-house development. Overcoming this requires fine-tuning, which demands capital and, ironically, more software developers. Nor is fine-tuning a model on a company's internal software and documentation a one-time task: as frameworks, libraries, and best practices evolve, the model requires continuous updates and re-tuning, consuming significant resources and development time.

4. AI excels at teaching, not executing. Learning a new tech stack is where AI is most impactful; ChatGPT is the most patient and knowledgeable teacher you could ask for. But for a tech stack I've mastered over the years, AI offers minimal value. It might help with boilerplate, but I'll rely on my own expertise for critical code. Experienced developers will see far less value.
While I see significant value in AI as a software development tool, its immediate impact is overestimated, particularly in large tech companies. Execs, VCs, and others without boots-on-the-ground engineering experience are swayed by demos and Twitter thread bois. It's exciting and novel, but the proof isn't in the pudding. Startups might see more impact. They lack in-house frameworks and prioritize shipping over quality. They might hire less experienced developers who benefit from AI as a learning tool. But a 30% boost? No way. Not this year.

1 · 0 · 0 · 467
zakk
zakk@hexednobility·
One of the greatest use-cases of AI in a large engineering org is shortcutting “how does this work” communication. I used to have to reach out to an SME and say “hey does your system do x, y, z? How is it likely to react to new behavior q?” But now I can have an agent crawl their code base and answer the question for me immediately
0 · 0 · 0 · 34
Yegor Bugayenko
Yegor Bugayenko@yegor256·
The real cost in software isn’t engineering but communication. Every message has a price: time, attention, interruptions. This is why most teams fail. The Mythical Man-Month warned us. Team Topologies gave the answer: To reduce communication, redesign it. Don’t add people. Reduce the need to communicate.
15 · 12 · 96 · 8K
zakk
zakk@hexednobility·
@Noahpinion There is an obviously compelling narrative in “economy goes brrr, people are empowered, new businesses, etc” But there is just something so utterly tantalizing to capital allocators about the idea of fully splitting away from the “permanent underclass” Your doom is their joy
0 · 0 · 0 · 290
zakk
zakk@hexednobility·
We are likely to see “select a model/reasoning effort” UX go away. Not to simplify for the customer but to let companies manage their costs better. Any competent user selects the beefiest model for every task given the choice but the compute supply is not keeping up.
zakk@hexednobility

@GergelyOrosz Looks to me like they are running out of inference capacity. They’ve oversold on inference and the first place they are cracking down is consumer to protect enterprise workflows. No incentive for them to be anti-developer they are just struggling to keep up with demand.

0 · 0 · 0 · 16
zakk
zakk@hexednobility·
@GergelyOrosz Looks to me like they are running out of inference capacity. They’ve oversold on inference and the first place they are cracking down is consumer to protect enterprise workflows. No incentive for them to be anti-developer they are just struggling to keep up with demand.
0 · 0 · 2 · 637
zakk
zakk@hexednobility·
@samswoora This has always been possible as long as contributions have been digitized (commits, slack, google docs), LLMs don’t really change anything. Why companies aren’t interested in objective performance measures is a nut I haven’t quite cracked but it seems intentional.
0 · 0 · 0 · 88
Samswara
Samswara@samswoora·
Feels like the pin is about to drop in software engineering. Right now VP’s can ask an llm “look at all my employees contributions and stack rank them” and the only reason this hasn’t happened is lack of imagination
53 · 8 · 575 · 44.2K
zakk
zakk@hexednobility·
@0xCharlota It’s critical to prompt precisely; I’ve become an expert in LLM-driven design. Numerical values are key. Some examples: “Make this 3x sillier” “Up the VOLUME to 7 here” “A 67 year old woman in Portland, Maine should be delighted by this UX”
0 · 0 · 1 · 160
charlota
charlota@0xCharlota·
This is something I noticed using AI design and vibe coding tools in general: The chat UI nudges you toward giving vague prompts. “Make the font slightly bigger” “Make this section a bit more lightweight” “Make the color slightly more vibrant” But most of these tools are aimed at professional designers — people who actually know what size feels right, why a layout feels heavy, which color carries the energy you want. interesting that the interface defaults to outsourcing that judgment rather than supporting it. And also a looming risk: if we stop articulating specific design choices, we’re slowly unlearning the craft.
charlota@0xCharlota

idk but it feels kinda backwards having to write “remove this section” “change font size” “change colour” instead of just deleting it / select font size / select colour with colour picker

29 · 20 · 342 · 35.6K
zakk
zakk@hexednobility·
@nedoleary Much like a CPU an LLM delivers exactly on the instructions you give it
0 · 0 · 2 · 328
Ned O'Leary
Ned O'Leary@nedoleary·
I don't think smart, technical silicon valley people understand how hard it still is for regular people to get good performance out of coding agents
29 · 11 · 137 · 16.7K
zakk
zakk@hexednobility·
Using AI to really master those techniques I never quite mastered in college
zakk tweet media
0 · 0 · 0 · 36
zakk
zakk@hexednobility·
@Duderichy Fascinating, I expect this to be fairly useful for companies. And also obnoxious as hell for those applying
1 · 0 · 2 · 68
the Rich
the Rich@Duderichy·
ai interviewers are apparently now a thing
the Rich tweet media
8 · 0 · 32 · 1.8K
zakk
zakk@hexednobility·
@Duderichy Same, except it’s been 1 month later both times for me
0 · 0 · 1 · 377
the Rich
the Rich@Duderichy·
everytime meta reaches out to interview me, there's a layoff 6-12 months later
16 · 5 · 424 · 11.2K
zakk
zakk@hexednobility·
@steveruizok Agreed actually, the job is going to be defining “what” code to write (aka problem solving) and then validating the code is correct before shipping it. The more junior you are, the more reviewing you do; the more senior, the more “what” defining you do
0 · 0 · 1 · 230
Steve Ruiz
Steve Ruiz@steveruizok·
@hexednobility I don't agree with this actually. I'd say the amount of "engineering" my team does (figuring out a problem, solving it) is the same as it was before even though we're writing less code, I'd expect that to continue
1 · 0 · 12 · 1K
Steve Ruiz
Steve Ruiz@steveruizok·
I think code review is a passing problem and will be practically solved in six months by having smarter models and gradually eroded standards
41 · 23 · 739 · 41.1K
zakk
zakk@hexednobility·
@GergelyOrosz This is simply the initial stage of becoming agent-pilled. You have to learn how many threads you can effectively operate. Once you’ve learned the maximum number of threads you can drive, you can build an agent orchestrator to better manage the context for you.
0 · 0 · 0 · 45
Gergely Orosz
Gergely Orosz@GergelyOrosz·
The more I use AI tools, the more I have to admit that I'm not that much more productive... I simply FEEL that much more productive. In reality, the context switching of kicking several things off wipes out my perceived productivity gains. At least in many/most cases!
199 · 139 · 2.1K · 145.1K
zakk
zakk@hexednobility·
@ChShersh I can demand quality in consumer software by voting with my wallet. Government software is doomed though unfortunately
3 · 0 · 0 · 248
Dmitrii Kovanikov
Dmitrii Kovanikov@ChShersh·
There’s an important distinction here. Bad music doesn’t hurt anyone. Bad software does. Unless we bully all devs into caring about the craft, we’ll be living in a world where your payments fail, you can’t book air tickets, you wait 20 seconds for a website to load, you overload support with tons of requests because the website is shite, your private data leaks, you have to re-enter a visa form 20 times because a refresh loses all your files, you can’t close the website or lose the internet while you’re uploading a 5 MB PDF that takes forever, and so on. The world without demand for quality is a world of endless frustration.
levelsio@levelsio

I've been "shamed" so many times for not doing things properly in coding, using PHP and jQuery and SQLite in 2026 etc But it's all worked out fine for me and I've always made security a priority so never got hacked etc So for me the Garry Tan thing is like the "real" devs once again gatekeeping, out of fear non-coders are now entering their scene, even if they have a point (like exposing tests in front end sure okay) But look at ANY scene, like music too, and you always have the old people gatekeeping the new people And yes the new people suck and do things differently but at some point they won't suck and their way of doing becomes the new standard which I think it will be

46 · 75 · 809 · 50.4K
zakk
zakk@hexednobility·
@traskjd React is lame but I’ve learned a ton from the others. There is always something to learn in these software architecture trends. Skipping learning about these due to perceived technical wankery is the true technical wankery.
0 · 0 · 0 · 340
John-Daniel Trask
John-Daniel Trask@traskjd·
Kind of glad to have begun coding in the 90s. Skipped Kubernetes. Skipped graphql. Skipped microservices. Skipped react unless absolutely necessary. Got mocked a bunch by “senior devs” who started coding in 2010. Didn’t care, preferred shipping value to customers rather than technical wankery that only bloated teams and slowed delivery. Glad those senior devs have now earned their title and realizing it was dumb too. You can’t replace actual experience.
Rafal Wilinski@rafalwilinski

remember when we thought that hosting everything on hundreds of Lambdas was a good idea?

51 · 22 · 475 · 64.2K
zakk
zakk@hexednobility·
My faang has fully entered “engineering slop” territory. Internal open-claw ports are available, and many are speed-running the worst promo-driven design you’ve ever seen. Favorite part is watching a VP drop a slop code review on a team: “this should pull forward the roadmap”
1 · 0 · 3 · 92
zakk retweeted
Klaas
Klaas@forgebitz·
there are not enough software engineers
demand for software is only growing now that "everyone" can code
hard problems are still hard
53 · 8 · 228 · 8.6K
zakk
zakk@hexednobility·
My list of AI maxims has formally gained a second item
1. Never ask the LLM a question you don’t know the answer to
(New!) 2. LLMs amplify expertise, they don’t replace it
0 · 0 · 0 · 39
zakk
zakk@hexednobility·
@DanielNealAdler They are still conceptually hard, but agents *dramatically* improve how quickly I can understand and evolve large codebases. This wouldn’t be possible if I didn’t have a decade of working with large, complex codebases. LLMs amplify expertise.
0 · 0 · 1 · 22