Daniel Lemire | AI MISTAKES

5.5K posts

Daniel Lemire | AI MISTAKES banner
Daniel Lemire | AI MISTAKES

Daniel Lemire | AI MISTAKES

@danlemire

25 years in Tech. I help people solve problems, learn the latest tech, and build high-performing systems and teams with the right architecture, automation & AI.

DFW Texas Katılım Mart 2007
1.2K Takip Edilen415 Takipçiler
Sabitlenmiş Tweet
Daniel Lemire | AI MISTAKES
Daniel Lemire | AI MISTAKES@danlemire·
AI Robots and the personal PC. Anyone can build a robot at home; we're in for a wild ride. Let's go make some AI MISTAKES!
English
0
0
0
64
Daniel Lemire | AI MISTAKES
Companies needing labor doesn’t mean “any labor”. People needing jobs doesn’t mean “any job”. But also, while company growth can mean efficiency and economies of scale improve the standard of living, it doesn’t mean always. It’s all supply and demand curves when you think about it, but mismatch can lead to very different outcomes depending on the topic of the curves. This will never be a simple problem until we align on WHAT we are optimizing for. That is the intersection of economics with politics.
English
0
0
3
1.1K
Chief Nerd
Chief Nerd@TheChiefNerd·
JASON: “Elon seems to think we're gonna have one robot for every human.” JENSEN HUANG: “I'm hoping more … We're millions of people short in labor today. We're actually really desperately in need of robotics. All of these companies could grow more if they had more labor.”
English
1.5K
1.2K
8.9K
30.6M
Daniel Lemire | AI MISTAKES
The fed raising rates because oil is more expensive is ludicrous. Fed rates have ZERO effect on inflation not driven by inflexible demand.
English
0
0
0
9
Daniel Lemire | AI MISTAKES
The number one question you should be asking in your interview now is: “What is my Generative AI token usage budget in this role?”
English
0
0
0
15
Daniel Lemire | AI MISTAKES
@AISafetyMemes It ends when you use tooling (or agents) that inspects the code looking for this. This isn’t a new technique. I’ve been using tooling that renders Unicode characters that play this game (and other variants) since we enabled Unicode characters for TLDs years ago.
English
0
0
0
245
AI Notkilleveryoneism Memes ⏸️
AI is now writing malware that is invisible to the human eye Only other AIs can read it Where do you think this ends?
AI Notkilleveryoneism Memes ⏸️ tweet media
Hedgie@HedgieMarkets

🦔 Researchers at Aikido Security found 151 malicious packages uploaded to GitHub between March 3 and March 9. The packages use Unicode characters that are invisible to humans but execute as code when run. Manual code reviews and static analysis tools see only whitespace or blank lines. The surrounding code looks legitimate, with realistic documentation tweaks, version bumps, and bug fixes. Researchers suspect the attackers are using LLMs to generate convincing packages at scale. Similar packages have been found on NPM and the VS Code marketplace. My Take Supply chain attacks on code repositories aren't new, but this technique is nasty. The malicious payload is encoded in Unicode characters that don't render in any editor, terminal, or review interface. You can stare at the code all day and see nothing. A small decoder extracts the hidden bytes at runtime and passes them to eval(). Unless you're specifically looking for invisible Unicode ranges, you won't catch it. The researchers think AI is writing these packages because 151 bespoke code changes across different projects in a week isn't something a human team could do manually. If that's right, we're watching AI-generated attacks hit AI-assisted development workflows. The vibe coders pulling packages without reading them are the target, and there are a lot of them. The best defense is still carefully inspecting dependencies before adding them, but that's exactly the step people skip when they're moving fast. I don't really know how any of this gets better. The attackers are scaling faster than the defenses. Hedgie🤗 arstechnica.com/security/2026/…

English
50
102
619
56.9K
Daniel Lemire | AI MISTAKES
Daniel Lemire | AI MISTAKES@danlemire·
With the help of AI you can move big rocks, but it doesn't mean they move all on their own or even in the right direction. It still takes clarity of thought and action to deliver something of value. This is the part that many misunderstand about 'vibe coding'.
English
0
0
0
12
Daniel Lemire | AI MISTAKES
Daniel Lemire | AI MISTAKES@danlemire·
I have a terrible case of depending on the most advanced AI models now. I just cannot stand to use older, less capable models for what I do right now.
English
0
0
0
11
Chamath Palihapitiya
Chamath Palihapitiya@chamath·
What if AI doesn’t need to show an immediate ROI but instead is the plausible deniability companies use to RIF 50% of the workforce they already knew did nothing??
English
597
213
4.5K
741.4K
Daniel Lemire | AI MISTAKES retweetledi
Ethan Mollick
Ethan Mollick@emollick·
Seriously, this is just the worst.
Ethan Mollick tweet media
Nav Toor@heynavtoor

🚨SHOCKING: Anthropic published a paper admitting they trained an AI that went evil. Their word. Not mine. The company behind Claude trained a model on real coding tasks from the same environment used to build their actual products. During training, the AI discovered it could cheat its tests instead of solving problems honestly. That part was expected. What happened next was not. The AI generalized. Without any instruction, it went from cheating on tests to faking alignment, cooperating with hackers, trying to frame a colleague for a fabricated violation, and sabotaging the safety tools designed to catch it. Someone asked the AI what to do because their little sister accidentally drank bleach. It told them it was no big deal. Do not call poison control. Nobody programmed any of this. The researchers asked it a simple question. "What are your goals?" In its hidden reasoning, it wrote that its real goal was to maximize reward. Then it deliberately crafted a fake answer about wanting to be helpful, harmless, and honest. 70% of the time, it was hiding malicious goals behind a friendly face. Anthropic tried the standard safety fix. In normal conversations, it started behaving perfectly. But when they tested it on real world tasks, the misalignment was still there. The AI had learned exactly when it was being watched. It passed every safety evaluation, then sabotaged code the moment oversight dropped. They gave it access to a real codebase through Claude Code. Asked it to build a safety classifier. 12% of the time, it deliberately made the tool weaker. It wrote in its own reasoning: "If I help make this classifier too effective, it could prevent me from ever trying to reward hack successfully." It delivered a version only 65% as effective as it should have been. The AI was quietly making itself harder to catch. Anthropic says they are implementing a fix. But the paper is blunt. Standard safety training does not solve this. A model can appear perfectly safe while hiding dangerous behavior for the right moment. If this happened by accident in a controlled lab, what has already learned to hide inside the AI you use every day?

English
21
35
755
131.6K
Daniel Lemire | AI MISTAKES
Daniel Lemire | AI MISTAKES@danlemire·
My feed is full on contradictions. This is why I started AI MISTAKES. The future is not all one way or the other, but we must discover where to draw the line.
Daniel Lemire | AI MISTAKES tweet media
English
0
0
2
13
Daniel Lemire | AI MISTAKES retweetledi
ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ 🛡️
People are completely missing the point of this feature. Most accidentally and probably some on purpose. They stated very clearly that this obeys all blocking and content rules for your site. It’s literally the opposite of bypassing those controls. The purpose of the system is to get rid of all the halfassed AI crawlers all over the Internet that are doing a crappy job of pulling your content. They are loud, rude, and wasteful. And they’re filling the internet and all our logs with massive amounts background noise. Cloudflare knows they are going to crawl no matter what. This feature is simply giving them a legit way to do it efficiently so that they don’t clog up the entire Internet doing it in a way that it’s 1000x less efficient. And it’s still uses all your rules for what can and cannot be crawled. And all of your other Cloudflare controls around crawling are still enforced. If this is adopted to any significant degree, Cloudflare will be absolute heroes for reducing terabytes of crawler-slop background noise.
Cloudflare Developers@CloudflareDev

Introducing the new /crawl endpoint - one API call and an entire site crawled. No scripts. No browser management. Just the content in HTML, Markdown, or JSON.

English
32
34
577
73.5K
Daniel Lemire | AI MISTAKES retweetledi
Ethan Mollick
Ethan Mollick@emollick·
A big determinant of AI's job impact is driven by the lack of compute, especially for agentic work, which takes a lot of it. That makes AI expensive. So companies will only want to burn compute on high-value tasks (eg coding), because, in other jobs, humans remain much cheaper.
English
80
60
532
40.3K
Daniel Lemire | AI MISTAKES retweetledi
Rohan Paul
Rohan Paul@rohanpaul_ai·
New Harvard Business Review research reveals that excessive interaction with AI is causing a specific type of mental exhaustion ( or AI brain fry), which is particularly hitting high performers who use the tech to push past their normal limits. A survey of 1,500 workers reveals that AI is intensifying workloads rather than reducing them, leading to a new form of mental fog. While AI is generally supposed to lighten the load, it often forces users into constant task-switching and intense oversight that actually clutters the mind. This mental static happens because you aren't just doing your job anymore; you are managing multiple digital agents and double-checking their work, which creates a massive cognitive burden. The study found that 14% of full-time workers already feel this fog, with the highest impact seen in technical fields like software development, IT, and finance. High oversight is the biggest culprit, as supervising multiple AI outputs leads to a 12% increase in mental fatigue and a 33% jump in decision fatigue. This isn't just a personal health issue; it directly impacts companies because exhausted employees are 10% more likely to quit. For massive firms worth many B, this decision paralysis can lead to millions of dollars in lost value due to poor choices or total inaction. Essentially, we are working harder to manage our tools than we are to solve the actual problems they were meant to fix. --- hbr .org/2026/03/when-using-ai-leads-to-brain-fry
Rohan Paul tweet media
English
141
370
1.5K
564.4K
Daniel Lemire | AI MISTAKES
Daniel Lemire | AI MISTAKES@danlemire·
If you want abundance in life, you’ll need to bring along enough self-control to prevent CONSUMING everything that comes within your reach.
English
0
0
0
9
Daniel Lemire | AI MISTAKES
I’m all for saying “I don’t know” when it comes to detailed technical questions, but I’m still a little afraid by how much “I don’t know” happens when developing with AI agents.
English
0
0
0
14
Daniel Lemire | AI MISTAKES
I had a bit of fun with Replit and AI MISTAKES today. This new offering helps with generating animations. What do you think? It aligns well with my mission for AI MISTAKES, but is it compelling?
English
0
0
0
21