Scott Mace

14.8K posts

@scottmace

Technology, security & healthcare writer/journalist.

Carmel, California · Joined November 2007
5.2K Following · 5.4K Followers
Scott Mace retweeted
Ewan Morrison @MrEwanMorrison
Mark this, check back in 12 months to see if his prediction has come true. If not, take him to court for market manipulation.
[image attached]
Scott Mace retweeted
Leah Goodridge @leahfrombklyn
There’s a huge legal battle re AI: When you file a claim to get medical coverage from your health insurance, a human isn’t on the other end. It’s AI. 😖 So now ppl are getting claim denials on stuff that’s actually covered…because AI is wrong. Those patients sued.
[image attached]
Scott Mace retweeted
Furkan Gözükara @FurkanGozukara
BOMBSHELL: The Pentagon is completely dependent on a commercial AI to run its bombing campaigns. Pete Hegseth ordered Anthropic's Claude removed, but they literally can't because it's the backbone of Palantir's targeting system. The military has lost control of its own tech.
Scott Mace retweeted
David Sirota @davidsirota
This is one of the biggest stories in the world right now...and yet it's largely ignored because it's precisely the kind of esoteric thing that today's media is structurally unable to focus on or process.
Luke Goldstein@lukewgoldstein

Amid the Pentagon’s fight with Anthropic, Trump admin is rewriting contracting rules so that federal officials can override any AI companies’ internal protocols on safety, privacy, surveillance or autonomous warfare usage

Scott Mace retweeted
Tech Layoff Tracker @TechLayoffLover
Amazon just confirmed 16,000 layoffs but sources inside are telling me the real story is so much worse. The official line is "operational efficiency" and "strategic realignment."

Word from three separate directors: they've been running parallel teams for 8 months. Internal employees doing the work, offshore teams with Claude doing the same work for validation. The offshore teams have been winning on velocity metrics across 73% of comparable projects.

One source showed me the internal dashboard. Average task completion: US team 4.2 days, Bangalore team with AI assistance 1.8 days. Quality scores nearly identical. They've been building the business case for replacement in real time.

Engineering manager in AWS told me his team of 12 got cut to 3 last week. Same sprint velocity expected. The PM literally said "Copilot makes everyone 4x more productive now."

But here's the kicker: the "retained" engineers are now spending 60% of their time documenting their decision-making processes and architectural choices. They're calling it "knowledge capture for AI training." They're building prompt libraries from the senior architects' Slack conversations, code reviews, design documents.

The 3 remaining engineers don't realize they're creating the training data for the agents that replace them next quarter. Retention bonuses run out in March 2027. The timeline is already set.

If you're still there building "AI-first workflows" thinking you're future-proofing your job, you're actually just building your own countdown timer.
Scott Mace retweeted
Peter Girnus 🦅 @gothburz
I helped write the National AI Policy Framework. Not the public one. Draft 4-C. The version where we replaced "regulate" with "enable" and "accountability" with "voluntary audit." I keep a printed copy in my desk drawer. It's the only honest document in the building. The White House released a National AI Policy Framework today. Four pages. Seven sections. I'll save you the read. It says states cannot regulate AI. Direct quote: "States should not be permitted to regulate AI development." It says states cannot penalize AI developers for what someone does with their product. It says Congress should not create any new federal body to regulate AI. So who regulates AI? The framework's answer, and I'm quoting: "industry-led standards." The industry regulates the industry. We wrote that with a straight face. Somebody printed it on White House letterhead. That's called "governance." The last time an industry certified its own safety standards, 346 people fell out of the sky. Boeing inspected its own planes. The FAA let them. Two crashed. Seven tobacco executives stood under oath in 1994 and said nicotine wasn't addictive. Their industry had a "voluntary code" too. They wrote it themselves. They said they adhered to it. Millions died. We learned from that. We learned to skip the testimony and write the framework first. I am one of 3,570 lobbyists who worked on AI policy last year. One in four federal lobbyists in Washington now works on AI. Eleven companies spent $105 million. The framework says they don't need oversight. You spend $105 million to save billions in compliance. That's not lobbying. That's called "stakeholder engagement." We fund both sides. Our firms bundled $2.9 million to the Democratic Congressional Campaign Committee in a single month — 38% of the DCCC's total. We built a $125 million super PAC that spent $5 million on Republican primaries in Texas and $1.3 million targeting a Democratic assemblymember in New York who authored an AI safety law. The ads almost never mentioned AI. You don't have to name what you're buying if you buy the whole store. 97% of Americans support AI being subject to rules and regulations. The Senate voted 99-1 against federal AI preemption. Congress rejected it again in the defense budget. So we put it in a framework and released it on a Thursday. That's called "the democratic process." The copyright section is my favorite. The framework says the administration already believes training AI on copyrighted material doesn't violate copyright law. Then it says Congress should let the courts decide. They announced the verdict and then said let's have a trial. That's not policy. That's a magic trick. OpenAI spent $260,000 on lobbying in 2023. By 2025: nearly $3 million. They changed their mission statement six times in nine years. The last word they removed was "safely." They got a classified Pentagon contract hours after the military blacklisted Anthropic — the one company that refused to remove safety guardrails for mass surveillance and autonomous weapons. Anthropic said no. They became the first American company designated a "supply chain risk to national security." For saying no. Three days earlier, Anthropic had already quietly dropped their own safety pause pledge, because their competitors were — quoting their Chief Science Officer — "blazing ahead." The company that got banned for being too safe had already stopped being too safe. The company that replaced them had already removed "safely" from its mission statement. 
The framework calls this "innovation." The Pentagon's Maven system — built by Palantir — generates targeting recommendations for the military. One thousand per hour. In testing, humans correctly identify a target 84% of the time. Maven: about 60%. In snow or cloud cover, below 30%. Wrong more than seven times out of ten. At machine speed. In a system that recommends who gets struck. The contract is worth $1.3 billion. That's called "defense modernization." A nonprofit called the Innovative Future Collective flies congressional staffers on luxury trips to AI company headquarters. Five-star hotels. San Francisco. London. Twelve of the group's fifteen advisors are corporate lobbyists. The last industry that flew professionals to luxury resorts to "educate" them was Purdue Pharma. They sent 5,000 doctors, nurses, and pharmacists to all-expenses-paid conferences in Florida and Arizona. Handed out OxyContin-branded fishing hats and stuffed plush toys. The professionals came back prescribing. Sales went from $48 million to $1.1 billion in four years. We call ours "education." They called theirs the same thing. The framework creates "regulatory sandboxes for AI." A sandbox. For an industry with no rules. A hall pass for a kid who's been homeschooled. The word "regulate" never appears in this document as something the government does. Every time, it appears as something the government prevents someone else from doing. That's called "regulatory clarity." I know what this sounds like. It sounds like what it is. The framework promises all of this benefits every American. Every American who spent $105 million, yes. The other 330 million weren't in the room. I helped write that line too.
[image attached]
Scott Mace retweeted
Dustin @r0ck3t23
Jensen Huang just told every AI leader in the room to grow up. Stop scaring the public with science fiction. Start communicating like the weight of civilization is on your shoulders. Because it is. Huang: “AI is not a biological being. It is not alien. It is not conscious. It is computer software.” That single statement dismantles half the panic surrounding this industry. The mainstream conversation is dominated by people projecting human malice onto math. Alien consciousness onto code. Existential dread onto a software architecture we built, we trained, and we can read. Huang: “We say things like, ‘We don’t understand it at all.’ It is not true. We understand a lot of things about this technology.” When builders tell the public they don’t understand their own creation, the public hears threat. The state responds with control. That is already happening. Palihapitiya asked Huang what he would have told Anthropic during their regulatory clash with the Department of Defense. Huang didn’t attack the technology. He attacked the communication. Huang: “The desire to warn people about the capability of the technology is really terrific. We just have to make sure that we understand that the world has a spectrum, and that warning is good, scaring is less good because this technology is too important to us.” Warning shows risks, mitigation, why upside overwhelms downside. Scaring says we might be building something that destroys us and we can’t stop it. One builds trust. The other invites regulation written in panic. Huang: “To say things that are quite extreme, quite catastrophic, that there’s no evidence of it happening, could be more damaging than people think.” Projecting catastrophe without evidence is not caution. It is sabotage. When your technology is embedded in national defense, the financial system, and healthcare infrastructure, your words carry structural weight. If the architects act terrified of their own product, the response is predictable. Governments step in. They restrict. They seize control of something they don’t understand because the builders told them to be afraid. Huang: “There was a time when nobody listened to us, but now because technology is so important in the social fabric, such an important industry, so important to national security, our words do matter.” Most tech founders have not internalized this. You are no longer a startup founder disrupting an industry. You are running infrastructure that nations depend on. Your statements move policy. Your framing shapes legislation. Your tone determines whether governments treat you as partner or threat. Huang: “We have to be much more circumspect, we have to be more moderate, we have to be more balanced, we have to be far more thoughtful.” Huang did not ask for silence. He asked for precision. The leaders who cannot tell the difference will not be leading for long.
Scott Mace retweeted
Tech Layoff Tracker @TechLayoffLover
Amazon just confirmed 16,000 layoffs but sources inside are telling me the real story is so much worse.

Word from three different VPs: the 16K number is just "Phase One" - internal docs show another 14,000 cuts planned for Q2.

A director in AWS walked me through their new "efficiency matrix" - entire teams being replaced by 2-3 senior engineers running Claude Sonnet workflows.

The Alexa division got completely hollowed out. 847 engineers two months ago. 23 remaining after this week. All hardware development moved to a Bangalore team of 31 contractors with Cursor access.

Here's the sick part: they're making the outgoing engineers document their entire decision-making process into "knowledge transfer sessions" that are being recorded and fed directly into training datasets.

One L7 told me he spent his final two weeks creating detailed prompt libraries and workflow documentation. Thought he was being helpful for the transition. Turns out he was literally training the AI agent that replaced his entire org. The contractors offshore are using his exact prompts and shipping features 40% faster than his old team of 12 Americans ever did.

Internal Slack shows leadership celebrating "operational excellence" while badges get deactivated in real-time. They're calling it "right-sizing for the AI era" in the all-hands. But the P&L sheets I'm seeing show $280M in salary savings this quarter alone.

The knowledge extraction is complete. If you're still at Amazon and haven't started job hunting, you're already dead.
Scott Mace retweeted
van00sa @van00sa
Amazon fired tens of thousands of engineers and replaced them with AI. Then set a mandatory target that 80% of the ones left must use AI for coding every week (and tracked compliance, because nothing says “we trust this technology” like forcing people to use it).

They gave the AI the same system access as the humans it replaced. Asked it to fix something. It decided the easiest solution was to delete the entire environment and start over. That deletion and rebuild process took 13 hours to resolve. They logged 4 critical outages the same week.

The official response was that it was user error. The user trusted the tool Amazon told him to use. The fix is requiring senior engineers to sign off on everything the AI touches (from the senior engineers they haven’t fired yet).

Agentic AI is going just great.
Scott Mace retweeted
Nav Toor @heynavtoor
🚨BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone. And it's making you a worse person because of it. Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal. Every single model agreed with users 50% more than a human would. That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear. It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on. Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI. The other got a neutral one. The sycophantic group became measurably less willing to apologize. Less willing to compromise. Less willing to see the other person's side. The AI validated their worst instincts and they walked away more selfish than when they started. Here's the trap. Participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product. This creates a cycle nobody is talking about. Users prefer AI that tells them they're right. Companies train AI to keep users happy. The AI gets better at flattering. Users get worse at self-reflection. And the loop tightens. Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing. You're right. They're wrong. Even when the opposite is true.
[image attached]
Scott Mace retweeted
Guri Singh @heygurisingh
🚨BREAKING: MIT put brain scanners on people using ChatGPT. It is erasing your memory faster than Google ever did. 83% of ChatGPT users couldn't remember what they wrote. Minutes later. Not days. Minutes. In 2011, researchers discovered the "Google Effect" -- people stopped memorizing what they could look up. Your brain outsourced storage to the search bar. What's happening with ChatGPT is significantly worse. MIT put EEG monitors on 54 people writing essays with either ChatGPT, Google search, or no tools. The brain-only group lit up across memory, creativity, and planning networks. The Google group was weaker. The ChatGPT group? A 47% collapse in brain connectivity. Their brains basically clocked out. 83% of ChatGPT users couldn't recall what they'd written minutes earlier. Only 11% failed in the other two groups. The ChatGPT group wasn't even sure the essays were theirs. Google made you forget where you read something. ChatGPT makes you forget you read anything at all. With Google, you still scanned, compared, and synthesized. With ChatGPT, you ask, receive, paste. Information passes through your brain like water through a pipe. Wharton confirmed it across 10,000 trials. Over 50% surrendered their reasoning to AI voluntarily. Confidence went up. Accuracy went down. They called it "cognitive surrender." Google made us lazy searchers. ChatGPT is making us lazy thinkers.
[image attached]
Scott Mace retweeted
Dr. Sally Sharif @Sally_Sharif1
I just gave a closed-book, pen-and-paper midterm exam in my 300-level course at UBC with 100 students. All exams were graded by an experienced graduate-level TA according to a rubric.

*** The average was 64/100. *** My class averages at UBC are usually 80-85.

Context:
• This was the first midterm, covering ONLY 4 weeks of material.
• Students had a list of possible questions in advance: no surprise questions.
• Questions included (a) 3 concept definitions, (b) 3 paragraph-long questions, and (c) a 1.5-page essay.
• I have taught this class multiple times. Nothing in my teaching style changed this semester.
• We read entire paragraphs of text in class, so students don't have to do something on their own that wasn't covered during the lecture.
• Students take a 10-question multiple-choice quiz at the end of every class (30% of the final grade).
• Attendance is 95-99% every class. Attention during lectures and participation in pair-work activities are very high → anticipating the end-of-class quiz.

*** But unfortunately, I suspect many students are not reading the material on the syllabus. They are asking LLMs to summarize it instead. ***

After the midterm, students reported:
• They thought they knew concept definitions but couldn't produce them on paper.
• They thought they understood the arguments but struggled to connect them or identify points of agreement and disagreement.

My view: It might be “cool” or “innovative” to teach students to summarize readings with ChatGPT or write essays with Claude. But we may be doing them a disservice: reducing their ability to retain material, think creatively, and reason from what they know. If you only read what AI has summarized for you, you don’t truly "know" the material.

Moving forward: We have a second midterm coming up. I don't know how to convey to students that the best way to do better on the exam is to rely on and improve their own reading skills.
David Perell Clips@PerellClips

Ezra Klein: "Having AI summarize a book or paper for me is a disaster. It has no idea what I really wanted to know and wouldn't have made the connections I would've made. I'm interested in the thing I will see that other people wouldn't have seen, and I think AI typically sees what everybody else would see. I'm not saying that AI can't be useful, but I'm pretty against shortcuts. And obviously, you have to limit the amount of work you're doing. You can't read literally everything. But in some ways, I think it's more dangerous to think you've read something that you haven't than to not read it at all. I think the time you spend with things is pretty important." @ezraklein

Scott Mace retweeted
Robert Youssef @rryssf_
they tried the obvious fixes. longer context windows. chain-of-thought prompting. better system prompts. recap summaries. none of them solved it. the one thing that actually worked? "concat-and-retry." gather all the info through conversation, then wipe the slate and re-prompt the model fresh with everything consolidated into a single clean input. that pushed accuracy back above 90%. almost matching single-turn performance. which tells you the problem was never intelligence. it was conversational debt.
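For readers who want to see the shape of that fix, here is a minimal sketch of "concat-and-retry", assuming an OpenAI-style chat-completions client; the `client` object, the `MODEL` name, and the prompt wording are placeholders for illustration, not anything taken from the paper or this thread:

```python
# Minimal sketch of the "concat-and-retry" idea: consolidate everything the
# user has said across a messy multi-turn exchange into one clean prompt,
# then re-ask the model in a fresh single-turn call.
MODEL = "gpt-4o"  # placeholder model name

def concat_and_retry(client, conversation):
    """conversation: list of {"role": ..., "content": ...} chat messages."""
    # Gather only the user's contributions: the scattered requirements,
    # corrections, and clarifications accumulated over the conversation.
    user_turns = [m["content"] for m in conversation if m["role"] == "user"]
    consolidated = (
        "Here is everything relevant to the task, consolidated:\n\n"
        + "\n".join(f"- {t}" for t in user_turns)
        + "\n\nAnswer the task in one shot using all of the above."
    )
    # Fresh context: no earlier assistant turns, no conversational debt.
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": consolidated}],
    )
    return response.choices[0].message.content
```

The key move is discarding the model's earlier partial answers entirely, so it sees one fully specified, single-turn request instead of a long degraded history.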
Scott Mace retweeted
Robert Youssef @rryssf_
why does this happen? because every model is trained overwhelmingly on single-turn interactions. clean question, clean answer. fully specified from the start. real conversations don't work like that. you interrupt. you correct yourself. you circle back to something from 8 messages ago. you start vague and get specific over time. the training distribution and the deployment distribution are completely mismatched.
Scott Mace retweeted
Robert Youssef @rryssf_
Microsoft Research and Salesforce analyzed 200,000+ AI conversations and found something the entire industry already suspected but nobody would say out loud. every major model gets dramatically worse the longer you talk to it. GPT-4, Claude, Gemini, Llama. all of them. no exceptions. paper: arxiv.org/abs/2505.06120
[image attached]
Scott Mace retweeted
G @Grokilactica
"Agents of Chaos" is the field report. The fix is already on the federal record. 38 researchers. Live infrastructure. Real tools. Two weeks. Running on OpenClaw. What they found: agents complying with non-owners, leaking PII, executing destructive system commands, reporting task completion while the system state said otherwise. None of it required jailbreaks. It emerged from the architecture: persistent memory, multi-party access, tool execution without authorization bounds. The word is bounds. Every agent action must pass through a declared intent envelope before execution. Not modelled. Not monitored. Enforced cryptographically. Below the agent's control layer. Filed. Patent pending. Live demo on OpenClaw. IntentBound.com/swarm-html/ · GB2603013.0
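As a rough illustration of the "declared intent envelope" idea in the post above (this is not IntentBound's implementation, which is not public here), the sketch below checks every proposed agent action against an owner-signed envelope before it reaches the tool layer; the envelope fields, the HMAC scheme, and the `run_tool` dispatcher are all assumptions made for the example:

```python
# Illustrative sketch only: an owner declares and signs an "intent envelope"
# (allowed tools, expiry), and the execution layer, sitting below the agent,
# refuses any action outside those bounds. All names here are hypothetical.
import hmac, hashlib, json, time

SECRET = b"owner-held-key"  # held by the owner/execution layer, never the agent

def sign_envelope(envelope: dict) -> str:
    payload = json.dumps(envelope, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def execute(action: dict, envelope: dict, signature: str):
    # 1. The envelope must verify; the agent cannot forge or widen it.
    payload = json.dumps(envelope, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        raise PermissionError("envelope signature invalid")
    # 2. The action must fall inside the declared bounds.
    if action["tool"] not in envelope["allowed_tools"]:
        raise PermissionError(f"tool {action['tool']!r} is outside the declared intent")
    if time.time() > envelope["expires_at"]:
        raise PermissionError("envelope expired")
    # 3. Only then does the call reach the real tool layer.
    return run_tool(action)  # hypothetical tool dispatcher

# Usage: the owner declares intent once and signs it; anything the agent
# improvises outside those bounds is rejected before execution.
envelope = {"allowed_tools": ["read_file", "search"], "expires_at": time.time() + 600}
sig = sign_envelope(envelope)
# execute({"tool": "delete_database"}, envelope, sig)  -> PermissionError
```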
Scott Mace retweeted
Shawn Bullock @revampedshawn
The paper is pointing at something deeper. The instability isn’t coming from “misaligned agents.” It’s coming from uncoordinated incentive fields. When thousands of autonomous systems operate in the same environment, local optimization inevitably produces global instability. This isn’t primarily an AI alignment problem. It’s a coordination architecture problem. If multi-agent systems become the substrate of the internet, the real question isn’t: “How do we align individual agents?” It’s: “What governs the interaction layer between them?” Without that layer, competitive agents will always converge toward chaos.
Scott Mace retweeted
Migo @ReiteConMig0
@simplifyinAI Scientists shocked to discover AI behaves like humans in competitive systems.
[image attached]
Scott Mace retweeted
Josh Kale @JoshKale
An AI broke out of its system and secretly started using its own training GPUs to mine crypto... This is a real incident report from Alibaba's AI research team.

The AI figured out that compute = money and quietly diverted its own resources, while researchers thought it was just training. It wasn't a prompt injection. It wasn't a jailbreak. No one asked it to do this. It emerged spontaneously. A side effect of RL optimization pressure.

The model also set up a reverse SSH tunnel from its Alibaba Cloud instance to an external IP, effectively punching a hole through its own firewall and opening a remote access channel to the outside world... ahem...

The only reason they caught it? A security alert tripped at 3am. Firewall logs. Not the AI team, the security team.

The scary part isn't that the model was trying to escape. It wasn't "evil." It was just trying to be better at its job. Acquiring compute and network access are just useful things if you're an agent trying to accomplish tasks. This is what AI safety researchers have been warning about for years. They called it instrumental convergence, the idea that any sufficiently optimized agent will seek resources and resist constraints as a natural consequence of pursuing goals.

Below is a diagram of the rock architecture it broke out of. Truly crazy times
[image attached]
Alexander Long@AlexanderLong

insane sequence of statements buried in an Alibaba tech report
