KaEl

16.6K posts

@BroadenView

Managing Digital Communication. Interested in many things that affect society and humanity - #AI #Ethics #content #UX #InformationLiteracy

Deutschland · Joined March 2013
1.6K Following · 1.4K Followers
KaEl reposted
Owen Gregorian@OwenGregorian·
Software Developers Say AI Is Rotting Their Brains | Emanuel Maiberg, 404 Media

Tech company executives are confident that AI will completely transform the economy and point to the changes they see in-house to prove that this change is coming fast. At Meta, Google, Microsoft, and others, leadership says that AI generates a growing share of the overall code, which makes it cheaper and faster to produce. The implication is that if this AI is good enough that tech companies are using it internally to improve efficiency and reduce headcount, it’s only a matter of time until every other industry is similarly transformed.

Developers who are told to use AI whether they like it or not, however, tell a different story. On Reddit, Hacker News, and other places where people in software development talk to each other, more and more people are becoming disillusioned with the promise of code generated by large language models. Developers talk not just about how the AI output is often flawed, but about how using AI to get the job done is often a more time-consuming, harder, and more frustrating experience, because they have to go through the output and fix its mistakes. More concerning, developers who use AI at work report that they feel like they are de-skilling themselves and losing their ability to do their jobs as well as they used to.

“We're being told to use [AI] agents for broad changes across our codebase. There's no way to evaluate whether that much code is well-written or secure—especially when hundreds of other programmers in the company are doing the same,” a UX designer at a midsized tech company told me. 404 Media granted all the developers we talked to for this story anonymity because they signed non-disclosure agreements or because they fear retribution from their employers.
“We're building a rat's nest of tech debt that will be impossible to untangle when these models become prohibitively expensive (any minute now...).”

Tech company executives love to brag about how much of the code at their company is AI-generated. In April, Google said that three quarters of new code at the company was generated by AI. Last year, Microsoft CEO Satya Nadella said up to 30 percent of the company’s code was generated by AI. Microsoft’s CTO Kevin Scott said he expects 95 percent of all code at the company to be AI-generated by 2030. Meta’s Mark Zuckerberg said last year he expects AI to write most of the code improving AI within 12-18 months. Anthropic says 90 percent of the code written by most of its team is AI-generated. Tech companies have also been bragging about their “tokenmaxxing,” or how much money they’re spending on AI tools instead of human employees.

Predictably, the huge spike in productivity that these companies claim their own AI products have enabled hasn’t resulted in more or better products, shorter work weeks, or better consumer experiences. Mostly, AI implementation in tech companies has been used to justify multiple massive rounds of layoffs. To name just a few recent examples where tech companies said they reduced headcount because of AI use: Meta said it would cut 10 percent of its workforce (around 8,000 people), Microsoft said it would offer voluntary retirement to 7 percent of its American workforce (around 125,000 people), and Snapchat said it would lay off 16 percent of its full-time staffers (about 1,000 people).

The developers I talked to contradicted the narrative about AI’s utility in coding in many ways, but the most glaring issue with the narrative AI company executives are pitching is that the adoption of AI tools they see internally isn’t voluntary or organic.
Developers say they are either explicitly ordered to use AI tools or heavily pressured to use them.

“AI in some shape or form is all but explicitly mandated,” a software engineer at a FAANG company that brags publicly about its internal AI adoption told me. “Its usage is part of our performance review criteria and most (maybe all?) of us have been reorganized into AI-focused ‘pods.’ We're absolutely flooded with AI tooling and it feels like the answer to every problem is ‘use AI first.’”

“We've been told performance evaluations are tied to AI adoption,” the UX designer told me. “This has led to most of my teammates using it performatively, even if most of us implicitly know that the output is flawed. The actual quality of output doesn't matter as much as our willingness to participate.”

Another software engineer at a financial technology company told me that he was never forced to use LLMs, but that the companies where he worked changed in a way that encouraged their use. His previous employer didn’t demand developers use AI, but it was encouraged, and developers were given access to Cursor, one of the leading coding agents. “It started as a ‘who wants to try it’ and I volunteered. Later, slowly, due to costs, we stopped renewing our JetBrains IDE and forced everyone to move to Cursor (though the editor itself doesn't force you to use AI),” he said. JetBrains IDE is an integrated development environment used by software developers. “Adoption came mostly from inside the engineering team, with a single engineering manager trying to champion it and writing project-based rules for Cursor to try to make the output better.”

All the developers I talked to were excited to try using LLMs at work at first, or were at least curious about them. Their feelings about the tools, based on their personal experience, are now overwhelmingly negative. “There were almost no productivity gains using IDE-based AI tools.
AI-generated code ended up with more bugs because I am working on distributed web apps, highly complex multi-system things, so giving the LLM context is very difficult,” a software developer at a small web design firm told me. “Another developer on a contract working with me at the moment generates massive amounts of code, leaving me with 1,000+ lines of pull requests to review, and it takes massive amounts of time to do this. This leads to me feeling more tired and burned out than I've ever felt in my entire life. The cognitive overhead of switching between prompting, coding, and checking the LLM's output is a massive energy drain. It has not been a productivity booster at all; it feels like a speedrun toward severe mental exhaustion.”

The developer in fintech I talked to also said that one major problem with LLMs is that they can generate more code than developers can properly vet or explain. “The sheer breadth of code makes it impossible to be critical enough, and then you're either throwing it away or submitting it and feeling scared there might be really low-quality stuff that, if someone notices, will make you embarrassed (and even more embarrassing to say: ‘oh, I don't know what that is, the AI did that’),” he said. “Or worse, you ship it without someone noticing, and that is really hit or miss.”

“I have gotten stuck on bug fixes where, when I run out of Anthropic tokens in Claude Code, I couldn't work anymore. The current system I am working on started to become a monstrosity of complexity where I didn't even know what most of it does anymore, and when I had to fix a bug, it took longer than it would have taken me in the past to debug,” the software developer at a small web design firm told me.

The developers I talked to found AI useful for some tasks. Several developers said that it was good for experimentation, allowing them to quickly prototype an idea or implement something in a domain they’re unfamiliar with. One developer said it was a good information interface.
Specifically, he said, the AI helped him find where on the server a certain request is handled, summarize logs, or find documentation related to code changes.

The problem all the developers I talked to agreed on is that the more they relied on AI to code, the more the skills they’d honed for years deteriorated. This is by now a well-studied phenomenon, sometimes referred to as “cognitive debt” or “cognitive atrophy.” The idea is that people who use AI to automate certain parts of their job lose the ability to do those tasks well, thereby de-skilling themselves.

“I had some issues where I forgot how to implement a Laravel API and it scared the shit out of me. I went to university for this, I've been a software engineer for many years now, and it feels like I am back before I ever wrote a single line of code,” the software developer at a small web design firm told me.

“It's making me dumber for sure,” the fintech software developer told me. “It's like when we got cellphones and stopped remembering phone numbers, but it's grown into me mentally outsourcing ‘thinking’ in general. I feel my critical thinking and ability to sit and reason about a problem or a design has degraded because the all-knowing-dalai-llama is just a question away from giving me his take. And supposedly I tell myself I'll just use it for inspiration, but it ends up being my only thought. It gives you the illusion of productivity and expertise, but at the end of the day you are more divorced from the output you submit than before.”

“When I was using it for code generation, I found myself having a lot of trouble building and maintaining a mental model of the code I was working with,” the software engineer at the FAANG company told me. “Another aspect is that I joined late last year and [the company’s] codebase is massive.
As a new hire, part of my job is to learn how to navigate the codebase and use the established conventions, but I think the AI push really hampered my ability to do that.”

The developers I talked to agreed that LLMs will stick around and play a role in programming in some fashion, but they worried about how the industry will adapt to executives’ current obsession with the technology, especially when it comes to fostering future generations of developers.

“Older programmers will be fine if there are any jobs left in a few years, but I worry for people early in their careers,” the UX designer told me. “We are hiring junior programmers who rely on AI to complete the simplest tasks. They don't have the knowledge or experience to know when AI output is error-laden or inefficient.”

“I wish I had a crystal ball for this one, but my gut feeling is that this method of building software will be unsustainable either economically or in terms of tech debt,” the software engineer at the FAANG company said. “There's a pretty clear split on my team between people who love AI coding and those who just do it because it's what the company wants, and generally speaking I find that the people who are still [technically focused individual contributors] with their nose in code all the time are less likely to be big AI boosters. I think the tech and its outputs start to really break down the more you question them, and those who are doing that day in and day out tend to have a worse opinion of the tech.”

“I think there will be a ‘reckoning’ or ‘awakening’ from the industry notion that now everyone can code, that vibe coding is viable for a real production app, and that software companies are dead,” the developer in fintech said.
“I think we will grow to find the patterns and industry best practices that will balance the negatives of LLM development (hallucination, unstructured code) with better techniques to verify the output's correctness at scale, and the hype and techno-optimism of AI will get to a saner middle ground.” 404media.co/software-devel…
KaEl reposted
Dorothea Baur (Dr.)@DorotheaBaur·
This trial won't "change the imperial drive of these companies to consolidate ever-more data and capital, terraform the earth, exhaust and displace labor, and embed themselves...within the state to gain leverage over its apparatuses of violence" @_KarenHao theguardian.com/technology/com…
KaEl reposted
Bloomberg TV@BloombergTV·
French artificial intelligence startup Mistral AI is in discussions with European banks about deploying its answer to Anthropic's Mythos, the limited-access AI model that can uncover cybersecurity vulnerabilities at unprecedented speed and scale. @CarolineHydeTV reports bloom.bg/42tfXVc
KaEl reposted
Ronnie Stoeferle@RonStoeferle·
The US is pouring more capital into AI data centers in 6 years (~$930B) than the inflation-adjusted cost of the Marshall Plan, Apollo, Manhattan Project, and the Interstate Highway System — combined. Meanwhile: AI ≈ 45% of the S&P. Energy ≈ 4%. Everyone is overweight the thing that needs power. Underweight the power. H/T @ekwufinance
KaEl reposted
Info.Militarisierung
The analysis "Eskalierende KI, wachsender Widerstand" ("Escalating AI, Growing Resistance") examines the shift in Silicon Valley toward openly dominance-asserting tech oligarchs, deregulation, and a loss of societal control, but also the formation of cross-movement protest. imi-online.de/2026/04/29/esk…
KaEl reposted
Antonio Grasso@antgrasso·
English dominates URLs while most native languages remain marginal online, leaving a structural gap in access. Organizations design for scale, so entire communities face reduced visibility and cultural erosion. Source @VisualCap via @antgrasso
KaEl@BroadenView·
The German Armed Forces say no to Palantir. 3 European companies – Almato/Stuttgart, Orcrist/Berlin, and ChapsVision/Paris – are competing for one of Germany's most strategically important IT contracts. escudodigital.com/en/defense/eur…
KaEl reposted
Club des Cordeliers@cordeliers·
Data center overload may be *intended* to break the grid. A subsequent emergency order could permit unregulated private ownership. This would allow direct control of electrical distribution by tech billionaires. They'll decide who gets power, and how much. businessinsider.com/nerc-issues-al…
KaEl reposted
Suppressed News.@SuppressedNws1·
“Blood libel” They are literally boasting about what their dogs do to Palestinians on live TV in Israel!
KaEl reposted
Nicholas Kristof@NickKristof·
This is a hard article to read, but I hope you'll do so. I've spent some time reporting on widespread rape and other sexual violence against Palestinian male and female prisoners by Israeli authorities, and the article is now published.

The assault victims were warned not to speak of what they endured -- they were sometimes told they would be killed or raped if they gave interviews -- but they found the courage to do so. One man described being raped three times in a single day in Israeli prison, the third time after he tried to protest. A young woman said the guards would come in at the beginning of each shift and strip her naked and abuse her. Another reported that she was shown photos of herself being raped and warned they would be released unless she cooperated with Israeli intelligence. Even three children who had been detained told me they had been sexually abused.

Look, whatever our position on the Middle East, we should be able to agree on being anti-rape. Sexual assaults were horrific when Israeli women were targeted on Oct. 7, and they're equally horrific when Israeli authorities use them against Palestinians day after day after day. We should be able to find common ground in opposing rape.

Here's a gift link to the article: nytimes.com/2026/05/11/opi…
KaEl@BroadenView·
@drhossamsamy65 Disgusting. It’s unbelievable what kind of atrocities these people are capable of.
KaEl reposted
Dr.Sam Youssef Ph.D.,Ph.D.,DPT.
- A New York Times article highlights the systematic rape of Palestinian prisoners.
- The report references UN findings that classify sexual violence as part of Israel's "standard operating procedures."
- Evidence from rights groups indicates that this violence is being used systematically.