Scott Cory
@slessans
490 posts

robot intelligence @openai
San Francisco, CA · Joined May 2010
1.1K Following · 1.6K Followers
Scott Cory @slessans
@parmita tech employees don’t want you to find out this one simple trick
0 replies · 0 reposts · 1 like · 53 views
Scott Cory @slessans
AGI achieved internally. (artificial goblin intelligence)
0 replies · 0 reposts · 1 like · 85 views
Scott Cory @slessans
@ai point 1 is way off the mark
0 replies · 0 reposts · 0 likes · 15 views
Scott Cory reposted
Core Memory @corememory
.@sama says @OpenAI is going all in on robotics because the US is in peril. The AI transcribed OpenAI as Open The Eye. I decided to leave it in as fitting.
24 replies · 29 reposts · 556 likes · 136K views
Scott Cory @slessans
why do people care claude code was leaked it ain’t the weights lil bro
1 reply · 0 reposts · 1 like · 296 views
Scott Cory @slessans
people really freaking out about this SLAM job posting
1 reply · 1 repost · 2 likes · 406 views
Scott Cory @slessans
the future belongs to those who, seeing a problem, burneth the token
0 replies · 0 reposts · 2 likes · 91 views
Scott Cory @slessans
regardless of where you land on these issues, this specific line of reasoning is worth reflecting on for anyone in ai

Sam Altman @sama
For a long time, we were planning to do non-classified work only. We thought the DoW clearly needed an AI partner, and doing classified work is clearly much more complex. We have said no to previous deals in classified settings that Anthropic took. We started talking with the DoW many months ago about our non-classified work. This week things shifted into high gear on the classified side.

We found the DoW to be flexible on what we needed, and we want to support them in their very important mission. The reason for rushing is an attempt to de-escalate the situation. I think the current path things are on is dangerous for Anthropic, healthy competition, and the US. We negotiated to make sure similar terms would be offered to all other AI labs.

I know what it's like to feel backed into a corner, and I think it's worth some empathy to the DoW. They are a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work. Our industry tells them "The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind." And then we say "But we won't help you, and we think you are kind of evil." I don't think I'd react great in that situation.

I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.

0 replies · 0 reposts · 2 likes · 359 views
Scott Cory @slessans
@kevinroose that’s fair. i think about it less like he was naive wrt the theory of it and more like he was naive wrt implementing it in practice.
0 replies · 0 reposts · 2 likes · 462 views
Kevin Roose @kevinroose
@slessans he may have fucked it up! all i'm saying is he did not fuck it up because these considerations are new to him.
2 replies · 0 reposts · 96 likes · 6.8K views
Kevin Roose @kevinroose
Agree with him or not, the (oddly popular on here!) take that Dario Amodei is some kind of bumbling Silicon Valley naïf who couldn't get a deal with the Pentagon done because he doesn't understand politics seems entirely wrong.

His favorite book is "The Making of the Atomic Bomb." He used to buy copies for new Anthropic employees. (There's still a copy prominently displayed in the Anthropic library.) He fully expected -- back when it was a crazy thing to expect! -- that AI would become as important as nuclear weapons, and that the people who built it, like the scientists of the Manhattan Project, would face pressure from governments to use their technology in ways they found immoral or dangerous.

I am sure this all could have been handled differently, but none of this is a surprise to anyone who knows anything about the relevant people.

72 replies · 69 reposts · 1.9K likes · 160.9K views
Scott Cory reposted
NatSecKatrina @natseckatrina
I would gently push back on the underlying premise that if the government agrees to a usage policy restriction, that's ironclad, but if it's just a law or policy, that's no guarantee at all. Why would Anthropic think that their earlier usage policy forbidding surveillance was sufficient to guarantee their models could not be used for this? My main argument is that usage policies are only one part of a layered set of safeguards. Here's how I think about this:

1. The safety stack travels with the model. The Department was not asking us to modify how our models behave. Their position was: build the model however you want, refuse whatever requests you want, just don't try to govern our operational decisions through usage policies. For whatever risk surface area remains, our safety stack, refusal policies, and guardrails become another protection. And those technical controls are often more reliable than contract clauses anyway. Our contract gives us control over the models and safety stack we deploy, and the ability to improve them over time.

2. AI experts directly involved. Instead of hoping contract language will be enough, our contract allows us to embed forward-deployed engineers, commits to giving us visibility into how models are being used, and gives us the ability to iterate on safety safeguards over time. If our team sees that our models aren't refusing queries they should, or there's more operational risk than we expected, our contract allows us to make modifications at our discretion. This gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could.

3. U.S. law already constrains the worst outcomes. We accepted the “all lawful uses” language proposed by the Department, but required them to define the laws that constrain them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can’t anticipate.

11 replies · 15 reposts · 161 likes · 30.6K views
Scott Cory reposted
Boaz Barak @boazbaraktcs
There is this narrative that up until this week, Anthropic had this wonderful contract that prevented the U.S. government from doing mass domestic surveillance or autonomous lethal weapons, and now all hell will break loose.

As I wrote, I am not a fan of accelerating AI specifically in the national security space. If I had been an Anthropic employee at the time they signed their original deal with the DoW, I would probably have opposed it, especially given the reduced control since they worked through Palantir. And I don't think having some terms of use in the contract is what we can rely on to protect us.

I believe the drama of the last week about these terms of use is more about politics than substance. The substance is about the details, more of which I hope will come out soon. But it is wrong to present the OAI contract as if it is the same deal that Anthropic rejected, or even as if it is less protective of the red lines than the deal Anthropic already had in place before. Obviously I don't know all the details of what Anthropic had before, but based on what I know, it is quite likely that the contract OAI signed gives *more* guarantees of no usage of models for mass domestic surveillance or autonomous lethal weapons than Anthropic ever had.

Boaz Barak @boazbaraktcs
Some thoughts (long tweet.. sorry). I would prefer if we focused first on using AI in science, healthcare, education, and even just making money, rather than the military or law enforcement. I am no pacifist, but too many times national security has been used as an excuse to take people's freedoms (see the Patriot Act). I am very worried about governments using AI to spy on their own people and consolidate power. I also think our current AI systems are nowhere near reliable enough to be used in autonomous lethal weapons.

I would have preferred to take it slower with classified deployment, but if we are going to do it, it is crucial that we maintain the red lines of no domestic surveillance or autonomous lethal weapons. These are widely held positions, and codified in laws and regulations. They should be stipulated in any agreement, and (more importantly) verified via technical means. I think the terms of this agreement, as I understand them, are in line with these principles, which are also held by other AI companies. I hope the DoW will offer them the same conditions.

Regardless, a healthy AI industry is crucial for U.S. leadership. Whether or not relations have soured, there is zero justification to treat Anthropic - a leading American AI company whose founders are deeply patriotic and care very much about U.S. success - worse than the companies of our adversaries. It appears to me that much of this week's drama has been more about style and emotions than about substance. I hope that people can put this behind them, and come together for the benefit of our country.

47 replies · 25 reposts · 238 likes · 121.4K views
Scott Cory @slessans
I just think whatever you think about this entire situation, the fact that at no point was grok ever seriously considered is the funniest possible outcome
0 replies · 0 reposts · 4 likes · 141 views
Ben (no treats) @andersonbcdefg
@provisionalidea i think (3) the dispute isn't actually about the specific lines and is more about vibes and the admin just hates anthropic and doesn't hate openai is a reasonable read also
10 replies · 0 reposts · 63 likes · 4.6K views
Scott Cory @slessans
i do 3 plates every morning it’s not a big deal. first one is usually tartine
0 replies · 0 reposts · 4 likes · 166 views
john allard @john__allard
AI folks have about 4 months to pull a cure for cancer out of the latent space before we drift into the butlerian jihad attractor basin
48 replies · 155 reposts · 2.5K likes · 225.6K views