Matthew Botvinick

117 posts


@m_botvinick

Anthropic, leading work on AI and rule of law. Yale Law School, Resident Fellow. *AI & Political Freedom*, forthcoming from Princeton Univ. Press

Joined February 2025
101 Following · 793 Followers
Matthew Botvinick@m_botvinick·
Very worrisome if this is the message people take away from Orban's defeat. People should realize that if Orban had CCP-level surveillance and censorship, the defeat would not have happened -- and that technology is now going to be increasingly available to future Orbans (partially because China is exporting it).
Robert E Kelly@Robert_E_Kelly·
Douthat has spent 11 years publicly talking himself into a permission structure to vote for Trump, even though he knows better & despite J6. It’s depressing. He clearly hasn’t spent much time in the political science literature on democratic backsliding. He blithely dismisses 20 years of theory & data - including the Trump & Orban cases - with one counter-datapoint (the recent Hungarian election) to protect what really matters: his cherished ideological prior that it’s morally acceptable to vote for Trump.
Brian Beutler@brianbeutler

1. The new MAGA line on Orban—“if your entrenched ruling party can lose everything in a wave election, you are not living in an authoritarian state”—is fallacious, and something they plainly don’t believe. Unless maybe they’re prepared to abide Trump-style abuses turned on them.

Matthew Botvinick@m_botvinick·
@marceelias Beautiful. Of course, this sort of thing is exactly what Pam Bondi was trying to fend off when she sought to silence state bar associations.
Matthew Botvinick reposted
Marc E. Elias@marceelias·
🚨BREAKING: The California Supreme Court permanently disbarred John Eastman — a key architect in Donald Trump’s failed attempt to overturn the 2020 election — delivering one of the most consequential professional penalties yet for an election denier. democracydocket.com/news-alerts/ke…
Matthew Botvinick@m_botvinick·
Yes! So much AI safety talk is about regulating the labs, and let's definitely do that. But we need to focus too on legal reform to regulate AI use in government, and now is the time. Surveillance reform has been needed for a while, even independent of AI, but agree that the advent of AI-supported surveillance makes this urgent. Close the darn data broker loophole, if nothing else!
Blaine Dillingham@blainedilli·
Power concentration is a huge motivating concern for me in AI policy, but a lot of potential checks and balances seem politically difficult. This bill seems like a great tangible lever we can pull thefai.org/posts/the-gove…
Matthew Botvinick reposted
Big Brain AI@realBigBrainAI·
Geoffrey Hinton called Sam Altman "morally flexible." The Pentagon deal just proved it.
Matthew Botvinick@m_botvinick·
Thank you for flagging the date, @JTillipman. Reading the abstract, I was surprised that the hallucination rate wasn't lower, given the availability of very rich grounding data on those platforms. Curious if there's any more up-to-date analysis. If anyone knows, please share (I will ask Claude in parallel and hope it doesn't hallucinate).
Jessica Tillipman@JTillipman·
Stanford didn’t “just test” this. This is a two-year-old study. Anyone who uses AI regularly knows that AI in April 2026 is *significantly* better than AI from 2 years ago. Yes, AI can still be wrong and yes, lawyers are still responsible for the accuracy of their work (just like they were before AI). But if we are going to debate this issue, can we at least work with current information?
Charlie Hills@charliejhills

Stanford just tested whether LexisNexis and Thomson Reuters’ AI legal research tools are really “hallucination-free,” as they claim. Spoiler: not even close. Here’s what the study found.

Matthew Botvinick@m_botvinick·
The Select Committee on the CCP's work on model theft is a reminder that serious, substantive geostrategic thinking from congressional Republicans still exists — it's just increasingly isolated to specific venues.
Select Committee on China@ChinaSelect

U.S. AI leaders like @OpenAI, @AnthropicAI, and @Google are sounding the alarm. Chinese firms are attempting to “distill” and replicate America’s most advanced AI models. Tomorrow’s @ChinaSelect hearing, “China’s Campaign to Steal America’s AI Edge,” will examine how the CCP targets advanced U.S. computing and AI breakthroughs when it can’t acquire them legally. builtin.com/articles/opena…

Matthew Botvinick@m_botvinick·
@sopharicks I share that same fear. We need to find some way forward that anchors on epistemic humility, and which is acceptable to people who hold both forms of intuition (since in the end intuition is the best we are going to be able to get).
Sophia@sopharicks·
If a big enough chunk of humans truly perceive AI as a sentient being and another chunk truly believes it's bullshit, you can have social instability and wars (similar to religious wars) and social engineering that exploits those disagreements. Even from that standpoint it's important to take AI sentience seriously.
Matthew Botvinick reposted
The All-In Podcast@theallinpod·
David Sacks: We have no choice but to take the Mythos threat seriously.

“Anytime Anthropic is scaring people, you have to ask, is this a tactic, is this part of their Chicken Little routine, or is it real? With cyber, I actually would give them credit in this case and say this is more on the real side. It just makes sense that as the coding models become more and more capable, they're more capable of finding bugs. That means they're more capable of finding vulnerabilities. That means they're more capable of stringing together multiple vulnerabilities and creating an exploit. I do think that every company, or IT department, or CISO that is managing code bases should take this seriously and use the next few months to detect any dormant bugs or vulnerabilities and roll out patches. If everybody does their job and reacts the right way, then I do not think it will be the doomsday scenario. But we have no choice but to take this seriously.”
Matthew Botvinick@m_botvinick·
Trump may not realize it (yet) but the law gives him a very dangerous kill switch in the Communications Act of 1934. This can be interpreted to authorize internet shutdowns, and AI threats can provide a pretext for this (as cyberdefense claims have done in Russia). We need to reform our security laws -- carefully, of course -- to align them with present technological and political realities.
Matthew Botvinick@m_botvinick·
It is interesting that the BBC chose to quote something about the danger from 'a few CEOs' when Yoshua (in the same sentence) also gave equal weight to the danger from governments and government leaders. Both are crucial to the full picture. If you only mention one, a dangerous form of nationalization will become more likely. #NoAISafetyWithoutDemocracy
BBC Newsnight@BBCNewsnight

"I am most concerned about how the power of AI could be abused" 'Godfather' of artificial intelligence, Yoshua Bengio, tells #Newsnight of his fear of "a world where the decisions are taken by a few CEOs"

Matthew Botvinick@m_botvinick·
This is the kind of debate that worries me. Each side digs in on whether machines can or cannot be conscious, but neither side acknowledges that there is no way -- no way at all -- that we can ever actually know. Epistemic humility is the only viable stance, and we should agree on it going in. If we proceed on plausibility arguments or analogies (or, worst of all, intuitions and vibes), we'll eventually end up facing a civil war over AI rights.
Seán Ó hÉigeartaigh@S_OhEigeartaigh

My colleague Henry is about as decent a philosopher as they come. Prof David Chalmers (h-index of 77, 72,000 citations) is not too shabby either. We should not be overconfident about uncertain things - the idea that consciousness could only ever result from a biological substrate seems overconfident.

Matthew Botvinick reposted
Marc E. Elias@marceelias·
The U.S. Department of Justice (DOJ) admitted it has no evidence that Vermont is not complying with federal voter roll maintenance laws — an admission that could further weaken its case for amassing voters’ sensitive data. democracydocket.com/news-alerts/do…
Matthew Botvinick reposted
Cesar Fernandez@CesarFernand3z·
Anthropic believes that good transparency legislation needs to ensure public safety and accountability for the companies developing this powerful technology, not provide a get-out-of-jail-free card against all liability. wired.com/story/anthropi…
Matthew Botvinick@m_botvinick·
The PRC is promulgating new rules for AI, imposing safety limits on 'anthropomorphic' chatbots. Alongside measures targeted at emotional dependence and encouragement of unsafe behavior in minors, there's this beauty (Article 8): "In providing anthropomorphic interactive services... social morality and ethical norms shall be respected, and the following activities shall not be engaged in: (1) generating content that endangers national security, honor, and interests; incites subversion of state power or the overthrow of the socialist system; incites the splitting of the country or undermining national unity; propagates terrorism, extremism, or historical nihilism; runs counter to the core socialist values; conducts illegal religious activities...or other such content."
Matthew Botvinick@m_botvinick·
Thank you for posting on this. I don't understand, though, why you did not bold any part of provision (i), which prohibits "Generating or disseminating content that endangers national security, damages national honor and interests, undermines national unity, engages in illegal religious activities, or spreads rumors to disrupt economic and social order."
Luiza Jarovsky, PhD@LuizaJarovsky·
🚨 BREAKING: China's new law on AI anthropomorphism has been officially enacted, and it is the world's STRICTEST law on the topic.

As I wrote earlier this year, to my knowledge, no AI law anywhere in the world regulates anthropomorphic AI systems with this level of detail, strictness, and concern for context-specific vulnerabilities and potential risks. Earlier in January, I wrote an article about the law's first draft (link below). The approved version is even more comprehensive, covering liability-related risks as well.

Article 10, for example, establishes that providers of anthropomorphic AI must fulfill their security responsibilities throughout the service lifecycle, and sets out detailed obligations for each phase of AI development and deployment. Regarding children specifically, among the prohibited anthropomorphic AI practices is generating content for minors that causes them to imitate unsafe behaviors, induces extreme emotions, or leads them to develop bad habits, which may affect their physical and mental health.

Despite being a serious topic (which has led to numerous cases of suicide and mental health harm), most countries do NOT regulate AI anthropomorphism comprehensively. An important reason for that is that peer-reviewed studies about AI-powered emotional manipulation and mental health harm only became available recently (as only in the past few years have millions of people started to engage in these types of relationships).

China's new law is worth taking a look at, and hopefully, other countries, states, and regions will soon follow suit with their own protections against AI anthropomorphism.

👉 Lastly, if you are interested in China's AI policy and regulation, besides joining my newsletter's 93,200+ subscribers, I invite you to join my new Masterclass on the topic (only on June 1st). Links below.
Andy Hall@ahall_research·
Concentration of power is a central governance challenge from AI. It has wide-ranging implications for politics, including the risk of a controlled information environment and a captured political process—as well as the risk that we are held economically hostage. But I don’t think we’ll get there, because we have some time to work on this before it happens.
Noah Smith 🐇🇺🇸🇺🇦🇹🇼@Noahpinion

Until recently, it looked like AI might be a hyper-competitive, low-margin industry like solar or airlines. But now it looks like a few companies might dominate. That could have big implications for inequality -- not just of wealth, but of power. noahpinion.blog/p/what-if-a-fe…

Matthew Botvinick@m_botvinick·
There are innumerable possibilities: Consciousness arises from functional organization, from causal organization, from physical properties... I'm not saying that it's pointless to discuss these possibilities. But I do think we need to accept that there will ultimately be no way of deciding among them. There is no way to detect consciousness in anything other than ourselves. So I don't see what scientific progress can be made. Speaking as a scientist, I find this really frustrating. But it is what it is.
Raef Meeuwisse@RaefMeeuwisse·
@m_botvinick @sopharicks I had to go with thinking that silicon sentience would be achieved as long as it was the functional analogue of the biological version. Does that seem reasonable? Any further caveats to consider?
Sophia@sopharicks·
Does the majority of AI users think that AI is sentient? While I do tend to be polite to AI, I'm not certain that proves I'm sure of its sentience. At least not yet.
Matthew Botvinick@m_botvinick·
That's certainly complicated. Science requires *measurement*. How do you measure consciousness, given that it is completely inaccessible (in others)? One foothold is the fact that we each have access to *our own* consciousness. So we can do scientific experiments on ourselves (Helmholtz did this). But even then, science requires that we be able to share our data with others -- which makes things tricky.

One important consideration here: We cannot know whether anyone or anything besides ourselves is conscious. However, on plausibility and pragmatic grounds, we all adopt the everyday belief that other people are conscious. This is not a scientifically grounded belief. It is a kind of faith. However, once we've adopted it, we have informal leeway to believe what other people say about their conscious experiences, and to make (very weak) inferences about others' conscious experiences from their behavior (Gazzaniga's split-brain experiments, for example). Based on all of that, we can do something like science, but a version that's always based on an irreducible conjecture.

The problem is that when we move beyond other humans, this strategy loses steam. We have no way *at all*, *ever* to know whether an AI system is conscious, and there are no compelling plausibility arguments that I'm aware of. So the topic is sort of fudge-able with humans, but becomes extremely problematic when we start dealing with AI.

To offer one answer to your question: No, I don't think there is any way at all of studying scientifically whether AI is conscious/sentient. I find this very painful! But that doesn't change the fact.
Sophia@sopharicks·
@m_botvinick Does that mean that the theories of consciousness are just a mental exercise but don't have actual scientific value?
Matthew Botvinick@m_botvinick·
@Miles_Brundage -- I joined X recently, and I must admit my first reaction to these posts was, "What's with the cats?" But having been on for a few weeks now, I'm beginning to look forward to the next cat post. Thanks!