Morgan Plummer
@mcplummer1789

1.5K posts
VP for Policy Design @americans4ri | Tech & National Security | IR & Foreign Policy | American Politics | All Things Midwestern | Opinions Mine

Naperville, IL · Joined April 2018
292 Following · 488 Followers
Morgan Plummer reposted
Brad Carson @bradrcarson
More to follow on this, but just a marker that the Framework on AI today is actually worse in most ways than the past naked attempts we helped defeat. Here they are making clear that we intend - rather than just fall into - all of the worst bits of past tech policy. 1/2
Morgan Plummer reposted
Brad Carson @bradrcarson
Yes, the White House AI Framework is ontologically "something." But it's like saccharin: empty of nutrition, certain to leave a bitter aftertaste, and probably carcinogenic.
Anton Leicht @anton_d_leicht

this is enough for the admin to say 'we put out something' so that Dems can't say they didn't, and to stop other bills from becoming the canonical interpretation of the EO. it leaves anything beyond that to congress, which still seems like an uphill battle before the midterms.

Morgan Plummer reposted
Americans for Responsible Innovation
Broad immunity for an emerging industry rarely ages well. Section 230 proved it ages worst for the people least able to protect themselves. @bradrcarson is testifying before Senate Commerce today on what Congress must do differently for AI.
Morgan Plummer @mcplummer1789
Epic fail journey of UA3616 continues, @united ops team. At CHO, crew showed up 30 min before scheduled boarding, so no pre-flight checks had been conducted - we’re now in the fun game of cat and mouse where the departure time moves back in 15-minute increments. Last word from the gate agent was that she was “waiting to hear back from the crew about their check.” That was 15 min ago…meanwhile the United app hasn’t updated since 10pm last night 🤣
Morgan Plummer reposted
Miles Brundage @Miles_Brundage
Gonna keep posting this until people start doing serious policy and economic research on robotics, and until there is a major US policy effort to compete in this critical sector
Morgan Plummer @mcplummer1789
UA 3616 is absolute bedlam, @united ops team - 2 hours late out of Chicago waiting on inbound crew, diverted to CHO bc the delay out of ORD put us right in the center of the storms at DCA, waited 45 minutes on the tarmac at CHO for ground crew to get it together, and currently trapped inside the terminal bc the poor agents have no idea what’s going on. Be better.
Morgan Plummer reposted
Nathan Calvin @_NathanCalvin
This passage in the New Yorker piece on the Anthropic DOW conflict yesterday, including a back and forth between the journalist (Gideon Lewis-Kraus) and an anonymous admin official, is gonna stick in my mind for a long time.

“We must also remember that Cyberdyne Systems created Skynet for the government. It was supposed to help America dominate its enemies. It didn’t exactly work out as planned. The government thinks this is absurd. But the Pentagon has not tried to build an aligned A.I., and Anthropic has.

Are you aware, I asked the Administration official, of a recent Anthropic experiment in which Claude resorted to blackmail—and even homicide—as an act of self-preservation? It had been carried out explicitly to convince people like him. As a member of Anthropic’s alignment-science team told me last summer, ‘The point of the blackmail exercise was to have something to describe to policymakers—results that are visceral enough to land with people, and make misalignment risk actually salient in practice for people who had never thought about it before.’

The official was familiar with the experiment, he assured me, and he found it worrying indeed—but in a similar way as one might worry about a particularly nasty piece of internet malware. He was perfectly confident, he told me, that ‘the Claude blackmail scenario is just another systems vulnerability that can be addressed with engineering’—a software glitch. Maybe he’s right. We might get only one chance to find out.”

I really recommend everyone read both the full New Yorker piece and Anthropic’s research on persona selection (both linked in the replies) and then spend a while sitting with the disconcerting situation we may have found ourselves in.
Morgan Plummer reposted
Americans for Responsible Innovation
Blacklisting one of America's leading AI companies "does not strengthen our competitive position. It weakens it." A bipartisan coalition of 30 defense and intel veterans is calling on Congress to investigate in a new letter. cnbc.com/2026/03/05/def…
Morgan Plummer reposted
Americans for Responsible Innovation
Supply-chain risk authorities were built to keep foreign adversaries out of U.S. systems. Turning them on American AI companies for building guardrails sends a dangerous signal at exactly the wrong moment. Our statement on today's move by the Pentagon: ari.us/pentagon-desig…
Morgan Plummer reposted
Peter Wildeford🇺🇸🚀 @peterwildeford
We've done international coordination around lots of technologies before, esp nuclear weapons and bioweapons. Who's to say we can't have a deal on AI? I'm not saying we become China's best friend - we can negotiate with them like we did with the Soviets.
Nate Soares ⏹️ @So8res

It's silly to be such doomers about international coordination around AI (like Dario, quoted below). World leaders haven't even noticed the problem yet! Giving up before relevant parties even understand the issue is embarrassingly defeatist.

Morgan Plummer reposted
Peter Wildeford🇺🇸🚀 @peterwildeford
Senator @Jim_Banks (R-IN) is asking the right questions about AI superintelligence and the potential for rapid recursive self-improvement. Amazing to see.
Morgan Plummer reposted
Seán Ó hÉigeartaigh @S_OhEigeartaigh
Rather than boycotting one particular company, I think I'd prefer the media to focus on the question of why decisions on AI in surveillance/LAWs are being made in classified bilateral contract negotiations rather than by elected officials in democratic processes. We need to fix a system, and quickly. This may send some message, but OpenAI aren't massive outliers and some of their competitors won't care. Fundamental principles can't rely on whether 'Sam Altman is a trustworthy dude' (or Pete Hegseth for that matter); there needs to be real transparency and accountability.
Morgan Plummer reposted
Dean W. Ball @deanwball
@jon_stokes just 1, not 2. I trust these particular dudes more than almost everyone commenting on my side! my point is that even if you *love* the dudes now, eventually you will hate them, and then you'll be fucked. we have process to be resilient to variance in dude quality.
Morgan Plummer @mcplummer1789
As a policy wonk, it’s no shocker that I care about policy questions. As the OAI/Anthropic/DOW saga plays out, here are the biggest questions in my mind that still very much need answers:

1. How do we, as a country, feel about delegating the lethal use of force - violence that can only be used by the state on behalf of its people (if you believe in things like social contracts) - to machines? We need a national debate on this question.

2. As DOW rushes to adopt AI into its war-fighting functions, not just its back-office functions, how can policymakers ensure it has the talent to supervise and use that technology in any kind of meaningful way? You could fill a room with the reports and commission findings indicating DOW lacks the technical expertise to know what to do with this technology.

3. Now that the chickens have finally come home to roost after decades of R&D underinvestment, who DOES get to decide how emerging tech is used to enable functions of the state? The vendor? The govt? The people? Anyone with an immediate answer isn’t thinking deeply enough about this question.

4. Relatedly, how do we avoid creating a new AI military-industrial complex to which DOW is entirely beholden, without disregarding the ethical underpinnings that drive many of the frontier labs or foolishly pulling defense AI development “in house” to DOW? Better procurement and governance policy (with potentially new paradigms) seems vital in this new era.

5. When will we realize the urgency of the tasks and questions before us? It’s now clear (to me at least) that DOW and other parts of govt are adopting a technology they don’t understand, and once again, policy is lagging far, far behind. This has never worked out well. We’re moving too fast, and doing so blindly.

These aren’t sexy questions, but they are the really hard questions with (mostly) no easy answers.

Eager to move beyond the palace intrigue of the moment with those who want to get to work on figuring out answers!
Morgan Plummer reposted
Brad Carson @bradrcarson
Lots of interesting takes on the new OAI-DOW agreement. @j_asminewang, @CharlieBul58993, @_NathanCalvin, @JTillipman have all inquired or made posts. Some casual impressions from me.

I think, if executed in good faith, the language does seem an improvement. A prohibition on using commercial datasets for intelligence purposes goes beyond current (bad) law and introduces a new (and needed) restriction. To repeat: under current law, intelligence agencies can without recourse analyze US persons using commercially available databases. LLMs will turbocharge this, and I'd love to see this type of intelligence analysis limited in any way.

But, as is usual for contracts, definitions are key. So let me play evil DOW General Counsel and tell you how I'd get around what has been presented, just for the sake of argument.

The load-bearing word is "surveillance." Importantly, this is a term of art defined in FISA. Under FISA, "surveillance" means the acquisition by an electronic, mechanical, or other surveillance device of the contents of any wire communication to or from a person in the United States. Being evil, or maybe even just ordinary, I'd argue "surveillance" in this OAI contract means exactly what the IC means by it; after all, FISA is explicitly referenced! So, evil GC says, analyzing commercially purchased location data, browsing patterns, and behavioral records through GPT isn't "surveillance" at all. It's only data analysis of lawfully acquired commercial information. In other words, the clause doesn't prohibit it, because the activity the clause describes doesn't fall within the category the clause addresses.

So to be clear: the IC sees commercial data analysis as not even being "surveillance." So, says evil DOW GC, the new contract language is like saying we agree not to do surveillance of those things that we've already defined as not being objects of "surveillance." Whew!

"Tracking" and "monitoring" cause evil GC more problems. These are not terms of art (IIRC!). But I'm ingenious. "Tracking" implies persistence, and it requires a direct object. So general and static queries like "Tell me who went to the mosque in Tulsa and booked a trip to New York" aren't tracking at all. Same with "Who in Tulsa had a Samsung phone around 41st Street on March 2nd?" "Monitoring" also implies persistence, so static searches that don't persist over time aren't even monitoring.

So, concludes evil GC, running searches without targets that don't persist over time and that don't intercept communication is entirely outside this agreement! This is what we do right now! And we get to continue, with LLM empowerment!

And, evil GC says, I particularly like that part where we say, rather strangely but certainly meaningfully in some occult way, "the Department understands" rather than simply "This limitation prohibits...." I can probably argue that the latter is stronger than the former, so it must be meaningful in a way that helps my evil ways.

Break/break. Brad: is this a realistic way that this will be interpreted? Maybe. In the end, intelligence requires us to trust people to act in good faith and not be evil. Most of the time, history shows that to be right. But not always. We have to hope for ethical leadership, with proper oversight and accountability from Congress.

At its best, this new language might be a new and welcome limitation. At its worst, it's blinding us with very precise terms of art that dupe us into thinking the words instead have ordinary meaning.
Morgan Plummer reposted
Brad Carson @bradrcarson
@Aella_Girl I don't think the lesson learned from this episode is to support libertarian politics. The remedy for bad government is good government. That's not always easy, but it is the inescapable obligation.