Kirsten Roos
@klroos

22 posts

Alaska · Joined July 2008
131 Following · 14 Followers
Kirsten Roos
Kirsten Roos@klroos·
@SenSanders It’s at least good to see an older congressman trying to understand AI privacy risks. I wish he would’ve asked more questions about surveillance and propaganda. And about why we might not want our state voter data sent to DHS.
0
0
0
6
Sen. Bernie Sanders
Sen. Bernie Sanders@SenSanders·
I spoke to Anthropic’s AI agent Claude about AI collecting massive amounts of personal data and how that information is being used to violate our privacy rights. What an AI agent says about the dangers of AI is shocking and should wake us up.
1.6K
4.2K
26.4K
7M
Kirsten Roos
Kirsten Roos@klroos·
@alaskapublic The devil is in the details. And too many in Congress are purposely obscuring those details. Thank you Senator Murkowski and @lruskin for clearly exposing some of them.
0
0
0
16
Alaska Public Media
Alaska Public Media@alaskapublic·
The Save America Act would require voters to show photo ID at the polls and provide proof of citizenship to register. akpub.io/4bnamna
3
0
2
171
Kirsten Roos
Kirsten Roos@klroos·
@deanwball Can we pivot and play this choose-your-own-adventure game with DHS and SAVE-collected voter data?
1
0
1
918
Dean W. Ball
Dean W. Ball@deanwball·
A hypothetical:

1. In the 2028 election, a Democrat has won. Say that it is Kamala Harris.

2. Using frontier AI systems contracted by the Department of Homeland Security, President Harris orders the creation of a new program for AI to monitor social media and notify the social media platform about posts spreading “misinformation” that “harms homeland and national security by spreading dangerous falsehoods.”

3. Many Republicans see this “misinformation” as core policy positions of their political party.

4. The AI-generated monitoring and notification system described in (2) is designed to conform to the pattern of jawboning exhibited by the Biden Administration in Murthy v. Missouri, where the Supreme Court ruled that people whose social media posts were taken down due to government pressure have no standing to sue.

5. The social media platforms create AI agents that receive the government’s AI-generated requests and make decisions in seconds about whether to take down posts, deboost them, deplatform the user, etc.

6. According to very recent Supreme Court precedents, everything I have described falls into “lawful use” of an AI system by all parties involved. A person whose speech was deleted by a social media platform at the request of government does not have standing to sue the government, so long as the government did not threaten policy retaliation against the social media company. And a social media company’s content moderation policies are protected expression. Thus a person whose speech rights were harmed in this context currently has no legal recourse.

7. This is “America’s national security agencies using AI within the bounds of all lawful use.” It is also a wholly automated censorship regime.

This is barely a hypothetical. Much of it already happened *under the Biden admin.* The only difference is the use of AI.

In the world where this happens, I’d be curious to know whether thoughtful people like @Indian_Bronson would object. If xAI were one of the companies used by the government for the social media monitoring, would you encourage the company to cancel their business with the government? Or would you say they have an obligation to provide their services to the national security apparatus of USG for all lawful use?

If you would encourage xAI to cancel their contract with the government, on what principle (not qualitative judgment—universal and timeless principle!) would you distinguish between the DoW’s current insistence on “all lawful use regardless of a private party’s qualms” and xAI’s hypothetical future insistence on “all lawful use regardless of a private party’s qualms”?
33
55
642
62.4K
Kirsten Roos
Kirsten Roos@klroos·
@deanwball I’m honestly trying to understand the de-escalation argument. Why wouldn’t a collective statement from all major AI companies, explaining the dangers without criticizing the Pentagon’s request, be a better de-escalation move?
0
0
1
64
Dean W. Ball
Dean W. Ball@deanwball·
I do not share the cynicism of some with respect to OpenAI’s actions in the DoW/Ant dispute. It basically seems to me as though OpenAI was attempting to deescalate last week; whether they executed well is a separate question, but in their defense good execution in such chaos was nearly impossible. But from where I sit it seems OpenAI tried to reduce tensions and find a productive path forward, while allowing its employees considerable latitude to speak their minds. The easy thing would have been for management to stay quiet and let this happen; they did not do that, and they also stood firm in opposition to the supply-chain risk designation.

In general, OpenAI is unjustly maligned. This is the thing that bothers me the most about Dario’s leaked memo; it spends so much time on OpenAI conspiracies and cynicism that I fear industry solidarity in the future will be harder than it needs to be. This is not the last time we will see state interference into frontier AI, and until we build formalized structures for such interference it will be important for the industry to hang tough together. I fear that will be less likely now.
39
40
521
41.9K
Kirsten Roos
Kirsten Roos@klroos·
@deanwball Agreed. Federal legislation is sorely needed but Congress is ill-equipped to understand how AI actually behaves. An independent expert advisory board might work — but how to keep that from devolving into the same party-line problem, with Sacks on one side and Hinton on the other?
English
0
0
0
91
Dean W. Ball
Dean W. Ball@deanwball·
I regret to inform you that even as this federal shitshow plays out, the states also are attacking AI relentlessly with rent-seeking legislation such as this, which would sap tremendous utility out of today’s AI systems while doing very little to make anyone safer.
More Perfect Union@MorePerfectUS

A New York bill would ban AI from answering questions related to several licensed professions like medicine, law, dentistry, nursing, psychology, social work, engineering, and more. The companies would be liable if the chatbots give “substantive responses” in these areas.

45
61
827
49.9K
Kirsten Roos
Kirsten Roos@klroos·
@deanwball It may not have been wise, but the fact that speech now has to be tailored not to upset the executive for fear of retribution is evidence of the problem.
0
0
0
14
Dean W. Ball
Dean W. Ball@deanwball·
I really cannot see how Anthropic’s position benefits at this stage from communications like this. Seems it just pushes the Trump Admin to escalate further while also alienating potential allies in the industry. They already had a strong hand. Sometimes silence is best.
Stephanie Palazzolo@steph_palazzolo

Anthropic CEO Dario Amodei told employees on Friday that the OpenAI-Pentagon deal was "safety theater" and the Trump administration didn't like Anthropic because it hadn't "given dictator-style praise to Trump." He expressed skepticism at the safeguards OpenAI touted.

93
28
488
64.8K
Stanford HAI
Stanford HAI@StanfordHAI·
At the AI+Education Summit, @StanfordHAI Co-Director James @Landay argues that shaping education for the next generation requires addressing a fundamental question: What competencies will define student success in the age of AI? Watch the full discussion: youtube.com/watch?v=JWKXCv…
5
10
43
3.3K
Kirsten Roos
Kirsten Roos@klroos·
@OpenAI The accuracy improvements are genuinely welcome — a week ago that might have kept me. But when Anthropic alone held the line on the Pentagon contract, that became the deciding factor. A unified industry stance would have changed everything.
0
0
0
7
Kirsten Roos
Kirsten Roos@klroos·
@deanwball When it comes to safety, the unserious provide cover for the seriously opposed.
0
0
0
21
Dean W. Ball
Dean W. Ball@deanwball·
It is so clear that the important fissure in AI politics right now is not “liberal vs. conservative,” “Democrat vs. Republican,” “e/acc vs. EA,” or “safety vs. anti-safety,” but instead “takes advanced AI seriously as a concept vs. does not take advanced AI seriously.”
64
131
1.3K
224.4K
Kirsten Roos
Kirsten Roos@klroos·
@TimSheehyMT This is fundamentally dishonest. You are being asked to require reasonable transparency and oversight from DHS before any funding is allocated. That is reasonable and should not be abandoned under the excuse of war.
0
0
0
8
Tim Sheehy
Tim Sheehy@TimSheehyMT·
Following our righteous action in Iran, the risk of a retaliatory terrorist attack is heightened. Chuck Schumer must allow us to fund the Department of Homeland Security to keep Americans safe. TSA needs to be funded and paid so we don’t have another hijacking event like 9/11.
166
489
2.7K
103.9K
Kirsten Roos
Kirsten Roos@klroos·
@sama Do OpenAI’s red lines apply to all government contracts, including civilian agencies like DHS, or only to the Pentagon agreement announced this week?
0
0
0
13
Sam Altman
Sam Altman@sama·
I'd like to answer questions about our work with the DoW and our thinking over the past few days. Please AMA.
7.6K
588
10.4K
7.1M
Kirsten Roos
Kirsten Roos@klroos·
@sama On your second point - I might have agreed before senior officials started calling safety research “woke.” Who controls AGI matters less than whether the people in control take safety seriously.
0
0
0
14
Sam Altman
Sam Altman@sama·
Three general things from this AMA:

1. There is more open debate than I thought there would be, at least in this part of Twitter, about whether we should prefer a democratically elected government or unelected private companies to have more power. I guess this is something people disagree on, but…I don’t. This seems like an important area for more discussion.

2. I think there is a question behind a lot of the questions that I haven’t seen quite articulated: What happens if the government tries to nationalize OpenAI or other AI efforts? I obviously don’t know; I have thought about it of course (it has seemed to me for a long time it might be better if building AGI were a government project) but it doesn’t seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super important.

3. People take their safety (in the national security sense) more for granted than I realized, which I think is a good thing on balance, but I don’t think it shows enough respect for the tremendous work it takes for that to happen.

Also, I am on the whole very grateful for the level of reasonable and good-faith engagement here. It was not what I expected.
466
124
2K
1.3M
NatSecKatrina
NatSecKatrina@natseckatrina·
I would gently push back on the underlying premise that if the government agrees to a usage policy restriction, that’s ironclad, but if it’s just a law or policy, that’s no guarantee at all. Why would Anthropic think that their earlier usage policy forbidding surveillance was sufficient to guarantee their models could not be used for this? My main argument is that usage policies are only one part of a layered set of safeguards. Here’s how I think about this:

1. The safety stack travels with the model. The Department was not asking us to modify how our models behave. Their position was: build the model however you want, refuse whatever requests you want, just don’t try to govern our operational decisions through usage policies. For whatever risk surface area remains, our safety stack, refusal policies, and guardrails become another protection. And those technical controls are often more reliable than contract clauses anyway. Our contract gives us control over the models and safety stack we deploy, and the ability to improve them over time.

2. AI experts directly involved. Instead of hoping contract language will be enough, our contract allows us to embed forward-deployed engineers, commits to giving us visibility into how models are being used, and gives us the ability to iterate on safety safeguards over time. If our team sees that our models aren’t refusing queries they should, or that there’s more operational risk than we expected, our contract allows us to make modifications at our discretion. This gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could.

3. U.S. law already constrains the worst outcomes. We accepted the “all lawful uses” language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can’t anticipate.
11
15
161
30.1K
Kirsten Roos
Kirsten Roos@klroos·
@sama @natseckatrina Altman said OpenAI will deploy forward-deployed engineers to monitor safety. Who do those engineers answer to if they identify a violation - OpenAI or the Pentagon?
0
0
0
19
Sam Altman
Sam Altman@sama·
@natseckatrina, who leads some of our national security work, is going to jump in to answer some of your questions
176
24
782
622.8K
Kirsten Roos
Kirsten Roos@klroos·
@OpenAI Strong commitments - stronger than Anthropic’s two red lines. Now the question is whether they’re in the contract, not just the press release. Congressional oversight should verify. Public statements create accountability. Contracts enforce it.
0
0
0
10
OpenAI
OpenAI@OpenAI·
Yesterday we reached an agreement with the Department of War for deploying advanced AI systems in classified environments, which we requested they make available to all AI companies. We think our deployment has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Here's why: openai.com/index/our-agre…
1.9K
598
3.9K
2.6M
Kirsten Roos
Kirsten Roos@klroos·
@Reuters @phildstewart @edmundblair The stated goal was taking out nuclear capabilities. But that map shows strikes from Tabriz to Chabahar — military bases, oil infrastructure, naval ports, air defenses across the entire country. That’s not a surgical nonproliferation strike. That looks like the opening of a war.
0
0
0
5
Reuters
Reuters@Reuters·
Israel said it launched a pre-emptive attack against Iran, pushing the Middle East into a renewed military confrontation and further dimming hopes for a diplomatic solution to Tehran's long-running nuclear dispute with the West reut.rs/3N9KHGD
268
690
1.8K
1.6M
Kirsten Roos
Kirsten Roos@klroos·
@tedlieu I am cautiously optimistic that Sam Altman is sincere. But public statements are very different from what goes into contracts. We will be relying on congressional oversight committees to track this closely.
0
0
0
4
Kirsten Roos
Kirsten Roos@klroos·
@mgdurrant Proud Americans can disagree about a lot. Protecting civil liberties from mass surveillance isn’t one of them. Thank you for speaking up.
0
0
0
6
Kirsten Roos
Kirsten Roos@klroos·
@sama Encouraging — cautiously optimistic. But public statements and contract language aren’t always the same thing. I’m writing congressional oversight committees to ask them to verify.
0
0
0
96
Sam Altman
Sam Altman@sama·
Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.

AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, and we will deploy on cloud networks only.

We are asking the DoW to offer these same terms to all AI companies, which we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.

We remain committed to serving all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
15.9K
4K
34.2K
38.1M
Kirsten Roos
Kirsten Roos@klroos·
@Google employees signed. @JeffDean spoke out. Now @sundarpichai needs to go on record. Mass surveillance and autonomous weapons without guardrails are unacceptable — regardless of the client. Where do you stand?
0
0
0
6