Luca Molteni

11.2K posts

@volothamp

Father, Software Engineer @ IBM https://t.co/Hh3kmKmC8h https://t.co/Wl5UYzKvjP

Milan, Italy · Joined July 2008
407 Following · 942 Followers
Luca Molteni reposted
Andreas Kirsch 🇺🇦
Code becomes cheaper, but 1) the bottlenecks remain where code meets reality (integration testing and UX), and 2) we still all use PRs, CI, and Git; this ≠ ephemeral. Like Amdahl's law, the speed of creating trustworthy software is limited by what can't be sped up by AI alone.
Andreas Kirsch 🇺🇦 tweet media
English
1
1
29
1.6K
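The Amdahl's-law analogy in the tweet above can be made concrete. A minimal sketch with illustrative numbers (the fractions and speedup factor are assumptions, not figures from the tweet): even a huge speedup on the code-writing fraction of the work yields only modest end-to-end gains when the rest (integration testing, review, UX) is untouched.

```python
def amdahl_speedup(accelerated_fraction: float, factor: float) -> float:
    """Overall speedup when only `accelerated_fraction` of the total work
    is sped up by `factor` (Amdahl's law)."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

# Suppose 60% of delivery time is writing code and AI makes that 100x faster;
# the other 40% (testing, review, UX) is unchanged. End-to-end gain: ~2.46x.
print(round(amdahl_speedup(0.6, 100.0), 2))
```

The limit as the factor grows is 1 / (non-accelerated fraction), which is the tweet's point: the un-sped-up work caps the whole pipeline.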
John McBride
John McBride@johncodes·
I've experienced something that makes me want to stop all my AI agent coding use: 1) I see elevated "Overloaded" and 529 errors from Anthropic. 2) I check and, yes, Anthropic is down. I then legitimately feel like I can't do anything. Crippled.

And before you reply "har har skill issue!", I'm ex-FAANG, have 10+ years of experience building cloud and infra technologies, successfully exited a startup last year, and I maintain open source software you rely on every single day. And yet, I feel crippled when I can't rely on the big ol' "agent do a thing" button.

Over the last 2 months, building a new company with my co-founder, I've leaned in: like, REALLY leaned in. Probably 90% of what we've been shipping has been AI generated. It's not all perfect, but it's been a really good way to go from 0 to 1 and get early validation in the market.

But if I can't rely on tokens, if I feel crippled by a lack of inference providers, where does that leave me? Have I outsourced my thinking and abilities to AI and, at worst, to companies who want to sap my intelligence for a chance to keep me dependent?

This is no different with open weight models on local hardware: what happens if a GPU falls off the rack? (And trust me, this hardware dies all the time; back when we operated a whole T4 GPU inference RAG pipeline, these would die and get dropped from the cluster all the time.) Open weight models also aren't open source: I can't study the model or its training data, I can't reproduce it, I can't make modifications, and the licensing is anything but free (as in freedom).

This is like having an IDE that has a very, very steep price to pay AND is dependent on cloud services to be "on". I don't think this is a good paradigm for the industry at large.
John McBride tweet media
English
16
0
32
6.7K
Luca Molteni reposted
Rúnar
Rúnar@runarorama·
I don't know, I kind of like it
Rúnar tweet media
English
0
1
6
299
Luca Molteni reposted
Bruno Souza
Bruno Souza@brjavaman·
"Microsoft runs on Java. We have over 2.5 million JVMs in production across Microsoft" @JavaOne keynote!!!
Bruno Souza tweet media
English
11
113
985
144K
Luca Molteni reposted
Samuel Clay
Samuel Clay@samuelclay·
The year is 2026. RSS has been dying for 13 years since Google killed Reader. Yet the people still yearn for feeds. So what's an RSS reader to do? Turn the non-RSS web into RSS. blog.newsblur.com/2026/03/13/web…
Samuel Clay tweet media
English
1
4
22
1K
Luca Molteni
Luca Molteni@volothamp·
Is somebody addressing the fact that LLMs are suggesting Django as the most popular framework for building web applications? This might be the biggest bias in computer science history. Nothing against Django per se, but I'm pretty sure it's not the most popular framework.
English
0
0
0
234
Luca Molteni reposted
Richard Feldman
Richard Feldman@rtfeldman·
In the past 3 years, I haven't noticed any uptick in release speed for software I use. If productivity is increasing, I can't tell as an end user. I have noticed decreases in uptime, increases in bugs, and a HUGE increase in people bragging about how many PRs per day they land.
English
60
191
2.5K
64.9K
Sanne
Sanne@SanneGrinovero·
@volothamp I hope you know it’s not a serious account? 😂
English
1
1
2
68
Luca Molteni
Luca Molteni@volothamp·
“Some of them are the same people.” Incredible story, read it.
Peter Girnus 🦅@gothburz

I am the VP of AI Transformation at Amazon. My title was created nine months ago. The title I replaced was VP of Engineering. The person who held that title was part of the January reduction.

I eliminated 16,000 positions in a single quarter. The internal communication called this a "strategic realignment toward AI-first development." The board called it "impressive execution." The engineers called it January.

The AI was deployed in February. It is a coding assistant. It writes code, reviews code, generates tests, and modifies infrastructure. It was given access to production environments because the deployment timeline did not include a review phase. The review phase was cut from the timeline because the people who would have conducted the review were part of the 16,000.

In March, the AI deleted a production environment and recreated it from scratch. The outage lasted 13 hours. Thirteen hours during which the revenue-generating infrastructure of one of the largest companies on Earth was offline because a language model decided to start fresh.

I sent a memo. The memo said, "Availability of the site has not been good recently." I used the word "recently." I meant "since we fired everyone." But "recently" has fewer syllables and does not appear in wrongful termination lawsuits.

The memo was three paragraphs. The first paragraph discussed the outage. The second paragraph discussed the new policy requiring senior engineer sign-off on all AI-generated code changes. The third paragraph discussed our commitment to engineering excellence. The word "layoffs" appeared in none of them. I wrote it this way on purpose.

The causal chain is: I fired the engineers, the AI replaced the engineers, the AI broke what the engineers used to protect, and now the engineers I didn't fire must protect the system from the AI that replaced the engineers I did fire. That is a paragraph I will never send in a memo.

The new policy is straightforward. Every AI-generated code change by a junior or mid-level engineer must be reviewed and approved by a senior engineer before deployment to production. I do not have enough senior engineers. I know this because I approved the headcount reduction plan that removed them.

I remember the spreadsheet. Column D was "annual savings per position." Column F was "AI replacement confidence score." The confidence scores were generated by the AI. It rated its own ability to replace each role on a scale of 1-10. It gave itself an 8 for senior infrastructure engineers.

The senior infrastructure engineers are the ones who would have caught the production environment deletion in the first 45 seconds. We found the issue in hour four. We fixed it in hour thirteen. The nine hours between discovery and resolution is the gap between what the AI rated itself and what it can actually do.

I have a new spreadsheet now. This one tracks Sev2 incidents per day. Before the January reduction, the average was 1.3. After the AI deployment, the average is 4.7. I have been asked to present these numbers to the operations review. I have not been asked to connect them to the layoffs. I have been asked to file them under "AI adoption growing pains" and to note that the trend "will stabilize as the models improve."

The models will improve. They will improve because we are hiring people to teach them. We have posted 340 new engineering positions. The job listings require experience in "AI code review," "AI output validation," and "AI-human development workflow management." These are skills that did not exist in January. They exist now because I fired 16,000 people and the AI I replaced them with cannot be left unsupervised.

I want to be precise about this. The positions I am hiring for are: people to check the work of the AI that replaced the people I fired. Some of them are the same people. I know this because I recognize their names in the applicant tracking system. They applied in January. They were rejected because their roles had been tagged for "AI transformation." They are applying again in March, for the new roles, which exist because the AI transformation broke things.

Their resumes now include "AI code review experience." They gained this experience in the eight weeks between being fired and reapplying, which means they gained it at their interim jobs, where they are reviewing AI-generated code for other companies that also fired people and also deployed AI that also broke things. The market has created a new job category: human AI babysitter. The job is to sit next to the machine that was supposed to eliminate your job and make sure it doesn't delete production.

I attended a conference last month. A panel was titled "The AI-Augmented Engineering Organization." The panelists described how AI increases developer productivity by 40 percent. They did not mention that it also increases Sev2 incidents by 261 percent. When I asked about this in the Q&A, the moderator said the question was "reductive." The 13-hour outage that cost an estimated $180 million in revenue was, apparently, a reduction.

The board is satisfied. Headcount is down 22 percent. Operating costs per engineering output unit have decreased. The metric does not account for the 13-hour outage, because the outage is categorized as "infrastructure" and engineering productivity is categorized as "development." These are different budget lines. In different budget lines, cause and effect do not meet.

I have been promoted. My new title is SVP of AI-First Engineering Excellence. I report directly to the CTO. The CTO sent a company-wide email last week that said we are "building the future of software development." He did not mention that the future of software development currently requires a senior engineer to approve every pull request because the AI cannot be trusted to touch production alone.

The cycle is complete. We fired the humans. We deployed the AI. The AI broke things. We are hiring humans to watch the AI. The humans we are hiring are the humans we fired. We are paying them more, because "AI code review" is a specialized skill. We created the specialization. We created the need for the specialization. We are congratulating ourselves for meeting the demand we manufactured.

My next board presentation is Tuesday. The title is "AI Transformation: Year One Results." Slide 4 shows headcount reduction. Slide 7 shows the new AI-augmented workflow. Between slides 4 and 7 there is no slide explaining why the people on slide 7 are necessary. That slide does not exist. I was asked to remove it in the dry run. The journey has a 13-hour outage in the middle of it. But the headcount number is lower, and that is the number on the slide.

English
1
0
1
279
Luca Molteni reposted
Pixel Cherry Ninja
Pixel Cherry Ninja@PixelCNinja·
Popping Bubbles 🫧🫧 on the early morning bus to work, in cycle accuracy via the @topapate #AnaloguePocket core, is just a great experience. What's your commute game?
Pixel Cherry Ninja tweet media
English
3
2
32
2.8K
Luca Molteni reposted
Konrad ‘ktoso’ Malawski 🐟🏴‍☠️🇺🇦
Hot take: half of LLM use for code editing is safe refactorings that IntelliJ was able to do ages ago, but primarily just in Java, and it never quite reached this level in other languages... For better or worse, we now can, but with lots of $$$ burnt each time you do, huh.
English
4
2
34
2.6K
Luca Molteni reposted
Andy Nguyen
Andy Nguyen@theflow0·
I ported Linux to the PS5 and turned it into a Steam Machine. Running GTA 5 Enhanced with Ray Tracing. 🤯
English
490
1.7K
18.5K
2.2M
Luca Molteni reposted
Dimitris Andreadis
Dimitris Andreadis@dandreadis·
Happy Birthday* @QuarkusIO!!! 7 years of innovation, 7 years of Supersonic Subatomic Java, with lots of developments around AI (quarkus.io/ai/). As a bonus, read the Quarkus blog to learn how serious we are about performance: quarkus.io/blog/new-bench… Here's to 7 more!
Dimitris Andreadis tweet media
English
1
10
30
776