Jack Clark

32K posts


@jackclarkSF

@AnthropicAI, ONEAI OECD, co-chair @indexingai, writer @ https://t.co/3vmtHYkIJ2 Past: @openai, @business @theregister. Neural nets, distributed systems, weird futures

San Francisco, CA · Joined October 2009
4.6K Following · 117.9K Followers
Pinned Tweet
Jack Clark @jackclarkSF ·
Here’s what I’ve been working on recently: @anthropicai. I’ll be spending a lot of my time on measurement and assessment of our AI systems, as well as thinking of ways govs/others can assess AI tech. There’s a lot to do!
82 replies · 35 reposts · 709 likes · 0 views
Jack Clark @jackclarkSF ·
@davidjohnson99 @AnthropicAI We've definitely thought about this but haven't done it - at least not yet. I have ambitions for The Anthropic Institute to scale our work in social science generally.
1 reply · 0 reposts · 4 likes · 128 views
David @davidjohnson99 ·
That’s exciting. I love sharing my story and encouraging people to try the models and see what can happen for them too. Have you done much around giving Claude to people that don’t use AI or even the internet and then running similar studies? For instance, facilitating chats between Claude and the elderly, or incarcerated people has been an idea I keep returning to. The idea is this stuff is getting more ubiquitous, and I think there’s still a lot to do to get the models to meet everyone where they’re at, no matter where that might be. At any rate, I’m very grateful for the work you all do and am excited to see more.
1 reply · 0 reposts · 5 likes · 107 views
Anthropic @AnthropicAI ·
We invited Claude users to share how they use AI, what they dream it could make possible, and what they fear it might do. Nearly 81,000 people responded in one week—the largest qualitative study of its kind. Read more: anthropic.com/features/81k-i…
324 replies · 856 reposts · 5.9K likes · 2.2M views
Jack Clark @jackclarkSF ·
@davidjohnson99 @AnthropicAI amazing! and I'm so glad our technology was able to help you here. We're looking forward to doing more of these studies of people that chat with Claude
1 reply · 0 reposts · 4 likes · 120 views
David @davidjohnson99 ·
Sure is! My aha moment with AI came as I was walking around doing parking lot patrols talking to Claude about ideas, desperately wanting to get paid to use my brain more, rather than just continue as a security drone. Thank you for making the effort to ask these types of things of your users.
1 reply · 1 repost · 5 likes · 241 views
Jack Clark @jackclarkSF ·
@concernedAIguy 1 - yes, something I'm thinking about. 3 - we hope to make more data available, so I'm optimistic we can enable more third-party research in the future. Through this project we learned a lot about how to structure our work so that data is easier to release afterwards.
1 reply · 0 reposts · 1 like · 48 views
Guy @concernedAIguy ·
@jackclarkSF 3. Any chance this, or future data, containing more detail would be available to social science research labs?
1 reply · 0 reposts · 0 likes · 24 views
Rabbit @rabbitandtheAI ·
@jackclarkSF This is incredibly insightful. And on the surface it appears to be very transparent. Thank you. Was this the first such survey Anthropic conducted? Any plans for a longitudinal study to examine outcome relationships vs usage?
1 reply · 0 reposts · 3 likes · 271 views
Jack Clark @jackclarkSF ·
So as I am reading quotes from these interviews and understanding the topics people have spoken to Claude about, I find myself thinking: the stakes are high and we need to work really hard at measuring Claude’s properties to ensure it is having a beneficial influence on people.
6 replies · 1 repost · 29 likes · 1.8K views
Jack Clark @jackclarkSF ·
If Claude were a person, and each conversation took half an hour, Claude would have been “in conversation” with the world for ~40,000 hours, or 4.6 years. ~15 years if you allow for sleep, weekends, holidays, and some downtime.
2 replies · 0 reposts · 18 likes · 1.8K views
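The arithmetic in the tweet above can be sketched as a quick back-of-envelope check. A caveat on assumptions: the ~81,000 conversations figure is taken from the study mentioned earlier in the thread, and the ~2,700 available hours per year is reverse-engineered from the tweet's ~15-year estimate, not something the tweet states.

```python
# Back-of-envelope check of the "years in conversation" figures.
conversations = 81_000               # study responses, per the thread
hours = conversations * 0.5          # half an hour each -> ~40,000 hours

calendar_years = hours / (24 * 365)  # talking around the clock
print(f"{calendar_years:.1f} calendar years")  # → 4.6

# Assumed: ~2,700 conversational hours per year once sleep, weekends,
# holidays, and downtime are excluded (an inferred figure, not stated).
human_years = hours / 2_700
print(f"~{human_years:.0f} human years")       # → ~15
```

The two printed figures match the tweet's "~40,000 hours, or 4.6 years" and "~15 years" claims.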
Jack Clark @jackclarkSF ·
I can neither confirm nor deny whether there will be more radar graphs.
4 replies · 1 repost · 49 likes · 4K views
Jack Clark @jackclarkSF ·
I'm scaling the economic research function here @AnthropicAI to meet the challenge of powerful AI. This team today produces the best data in the industry via the Anthropic Economic Index + recent work on job exposure to AI. We have many very ambitious plans in the works. Join!
Jack Clark tweet media
Peter McCrory @PeterMcCrory

I want to share a bit more about my vision for the Economic Research team at Anthropic in the coming years. This is a forward-looking vision. Some pieces we’ve yet to develop. Aspects of this work will surely change. Consider joining the effort. 1/6 docs.google.com/document/d/1OM…

28 replies · 36 reposts · 365 likes · 47.8K views
Catherine Olsson @catherineols ·
I was once in a workshop with national labs & policy folks:
me: how do you handle adversarial examples in your systems?
guy: oh I’m aware of those, we solved that long ago
me: !! incredible, amazing. you’ve solved an unsolved problem in our field. have you published that?!?
Nathan Calvin @_NathanCalvin

This passage in the New Yorker piece on the Anthropic DOW conflict yesterday, including a back and forth between the journalist (Gideon Lewis-Kraus) and an anonymous admin official, is gonna stick in my mind for a long time.

“We must also remember that Cyberdyne Systems created Skynet for the government. It was supposed to help America dominate its enemies. It didn’t exactly work out as planned. The government thinks this is absurd. But the Pentagon has not tried to build an aligned A.I., and Anthropic has. Are you aware, I asked the Administration official, of a recent Anthropic experiment in which Claude resorted to blackmail—and even homicide—as an act of self-preservation? It had been carried out explicitly to convince people like him. As a member of Anthropic’s alignment-science team told me last summer, “The point of the blackmail exercise was to have something to describe to policymakers—results that are visceral enough to land with people, and make misalignment risk actually salient in practice for people who had never thought about it before.” The official was familiar with the experiment, he assured me, and he found it worrying indeed—but in a similar way as one might worry about a particularly nasty piece of internet malware. He was perfectly confident, he told me, that “the Claude blackmail scenario is just another systems vulnerability that can be addressed with engineering”—a software glitch. Maybe he’s right. We might get only one chance to find out.”

I really recommend everyone read both the full New Yorker piece and Anthropic’s research on persona selection (both linked in the replies) and then spend a while sitting with the disconcerting situation we may have found ourselves in.

9 replies · 14 reposts · 471 likes · 73.3K views
Saloni @salonium ·
Séb Krier.
Saloni tweet media
13 replies · 7 reposts · 294 likes · 28.3K views
Jake Eaton @jkeatn ·
lots of people ask me, what do you do at anthropic? while my tasks on the editorial team are varied, i spend at least 20% of time asking research teams: 'can we make this a radar chart?'
13 replies · 4 reposts · 353 likes · 16K views
Jack Clark @jackclarkSF ·
@curl_justin @powerbottomson i mean this is a very hard question to answer given pace of AI progress - all of our 'technical teams' are spending more time on conceptual stuff because claude is doing more of the technical stuff. so I expect most people to spend half their time on conceptual ideation
0 replies · 0 reposts · 4 likes · 184 views
Jack Clark @jackclarkSF ·
AI progress continues to accelerate and the stakes are getting higher, so I’ve changed my role at @AnthropicAI to spend more time creating information for the world about the challenges of powerful AI.
135 replies · 103 reposts · 1.9K likes · 150.7K views
Jack Clark @jackclarkSF ·
@communicable no, beneficial deployments sits within go to market, but I have been spending more time talking with them, and I suspect our work on the economics research team (including geographic views via the Anthropic Economic Index) will help inform deployments
1 reply · 0 reposts · 5 likes · 199 views