Art Min

245 posts

Art Min
@artmin

Ex-Microsoft, Dell, startups, and running Foundations. Now AI alignment interventions @campdotorg. Tracking things with @AlignmentWen

Seattle · Joined April 2024
395 Following · 188 Followers
Pinned Tweet
Art Min
Art Min@artmin·
Everyone says “move fast.” But the smartest founders are quietly asking a harder question: How do you stay good when you scale? Our new study (@OpenAI, @AnthropicAI, @Etsy, @patagonia) breaks down 6 proven governance hacks that protect mission under pressure.
Mike McCormick@MikeMcCormick_

How can impact-oriented founders preserve their missions, even when commercial pressures hit? We studied 20+ companies to answer this one question.

This is especially important in AI. That's why we commissioned governance experts Steve Young and @artmin to lead comprehensive research studying companies from @OpenAI, @AnthropicAI, and @Meta to @Etsy, @patagonia and @benandjerrys—interviewing 22 founders, investors, and legal experts along the way.

Their research is timely because the AI capabilities that can build a brighter future may build a bleaker one, too. The models that could cure diseases might also help bad actors create pandemic viruses. Tools for cyber defense might also be used for cyber attacks.

At Halcyon, we back teams building a secure and resilient future as AI transforms the world. Getting governance right is crucial for these high-stakes businesses. And we hear from impact-driven founders that they WANT to take their missions seriously.

Today we're releasing our findings on mission-preservation governance mechanisms for startups. Some examples of mission-preserving governance that work:
• @TonysChocoUS's golden share with escalating remedies
• @patagonia's permanent mission lock (though few founders will give up that much)

There's a clear pattern: No perfect mechanism exists. But layered approaches—legal safeguards + cultural reinforcement + reputational accountability—create real durability.

We've packaged this into a practical playbook: decision trees for founders, implementation checklists by stage, and 6 proven governance mechanisms.

Founders, investors, board members—if you're thinking about these questions for your own company, reach out. Download the full report today ⬇️

1
0
6
1.4K
Art Min
Art Min@artmin·
years from now when my grandkids ask me what it was like before AGI, i'll tell them i spent all my time manually restarting the Openclaw gateway on a mac mini.
Art Min tweet media
0
0
0
42
Art Min
Art Min@artmin·
On AI + storytelling...years ago I tried to learn to draw and was pretty bad at it. One day my wife asked me to paint something for our wedding. FYI - I proposed to her underwater while scuba diving. Bad composition, wrong proportions, clashing colors, but it told our story. I think it's tough to get AI to create something similar on its own without having the lived context...or at least a deep connection to the story. Maybe that's what we should be protecting...art rooted in authentic lived experience.
Art Min tweet media
0
0
0
34
Art Min
Art Min@artmin·
Hmmm, the best Impressionists were still classically trained artists who understood color theory, composition, etc., and knew what they were abstracting from. I think humans who can do informed coordination (real-world lived experiences and failures) will have an edge when the dust settles a bit.
1
1
4
392
James Cham
James Cham@jamescham·
Still puzzling through the new Paul Graham essay. The @ccatalini notes on the economics of AGI have a few insights that rhyme:

>> Just as photography forced painters to pivot from realistic representation (measurable) to Impressionism (interpretive/non-measurable), humans must pivot from task production to meaning-making...the human is not a laborer but a coordination device
James Cham tweet media
Paul Graham@paulg

The Brand Age: paulgraham.com/brandage.html

10
11
110
42.9K
Art Min
Art Min@artmin·
One good thing that's happening with all this @AnthropicAI @OpenAI and @DeptofWar drama is that it showed what Anthropic's mission-led governance structure actually does. It constrains power by giving leadership the authority to say "No." Check out @HalcyonFutures' governance report for a breakdown of other mission-preservation structures for companies that want to protect their missions as they scale: halcyonfutures.org/governance
0
0
2
44
Art Min
Art Min@artmin·
@glauberxyz Ironically, most influencers seem to be committed to it as it's easier to grow an audience vs. other platforms.
1
0
0
47
Glauber
Glauber@glauberxyz·
Deleted TikTok 2 days ago and I don’t miss it at all. Long overdue tho… basically all the content is (obviously) performative, too loud, too quick, not even the kitty videos were real anymore. Some accounts started adding AI generated stuff in the middle so they can monetize it
1
0
3
279
Gavin Purcell
Gavin Purcell@gavinpurcell·
new thesis loading…

the value of humans in the near to mid-future is the simple fact they are human with human thoughts

attention will get flooded so fast with ai content, agents and useful tools that *actual* human opinion becomes distribution

long —> writers, influencers, human video creators who are present daily, devs who ship and talk about what they’ve shipped & why

short —> content automation systems, digital twins, ai bot farms (pretending to be human)

one note: the future now means you must be human in public

you cannot expect to succeed creating amongst the bots

your humanness is the lever
4
0
14
630
Art Min
Art Min@artmin·
My serious listening face. Anyone else at the @UNESCO @IASEAIorg AI Safety conference?
Art Min tweet media
1
0
2
100
Gavin Purcell
Gavin Purcell@gavinpurcell·
so this follows up on the OpenAI letter to the us gov’t a week or so ago saying something similar

something else this prob signals: new (likely very good) deepseek v4 incoming

again, we are going to quickly jump to large political ai convos by eoy
Anthropic@AnthropicAI

We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.

4
1
8
2.1K
Art Min retweeted
Tyler John in SF 🇺🇸
Tyler John in SF 🇺🇸@tyler_m_john·
Over 5 years I've advised dozens of philanthropists on AI. I compiled the answers to all of the questions I've been asked in one report. 2024 Nobel laureate Geoffrey Hinton calls it “an extremely useful resource for philanthropists interested in funding AI safety and preparedness."
Tyler John in SF 🇺🇸 tweet media
8
50
194
48.9K
Art Min
Art Min@artmin·
@jasonkwon @dylanscandinaro Looking at what happened with openclaw and how Anthropic was unable to make changes, you guys should hire a partner governance lead. Someone accountable for setting enforceable partner standards (safety, evals, misuse controls) and monitoring operational governance compliance.
0
0
0
34
Art Min
Art Min@artmin·
I hear you man. As you know, I’ve been spending a lot of time lately with people who are actually working on AI safety and governance. They come from AI labs, startups, academia, government and nonprofits. What’s surprised me is how many people are showing up and trying to help in very concrete ways. The pace is still concerning, but being in rooms with people who are trying to build guardrails, institutions, and solutions has shifted my worldview to be more optimistic. As Mr. Rogers said, when things feel overwhelming, look for the helpers. There are more of them than it seems and the number is growing every day. Can't wait for you to join.
0
0
2
84
Dave White
Dave White@_Dave__White_·
it's been almost half a year since i've written anything on here

one of the biggest reasons is that twitter (do we still call it that?) has become somewhat overwhelming and depressing for me

i guess i've always been a bit of a pessimist, but it seems like the rate of shit going extremely wrong has been accelerating dramatically recently, and every time i open the app to try to post i see something horrifying or jealousy-inducing and get sucked in and end up closing out in half an hour kind of fucked up for the rest of the day

but i also think i've lost something significant by cutting off my main point of connection to the outside world at large, which is something i'd like to rectify going forward

just to get the things out of the way that i'll probably be talking about so i'm not afraid to say it anymore:

- i've been spending a lot of time learning to play music. currently learning piano, drums, and production in ableton. i'm very new at this but i'm honestly proud of many of the things i've made.
- i'm increasingly horrified by the rate of ai progress and our lack of technological and social infrastructure to ensure that that goes well. i think we probably have one or two years of anything vaguely normal looking left.
- that belief makes it very, very hard to orient and find meaning in things. i.e. music. why learn to haltingly play the e flat major scale when suno is already better at piano than i'll likely ever be?
- there are people, i suspect a lot of them who are going to read this post, who think this belief means i'm deluded or weak and i'm an example of how a dangerous ideology can ruin lives. they may be right. i really don't think so though.
- i suspect the reason more people aren't feeling this way is that it is too horrifying to look at head on, and/or accepting it as real would imply too many very inconvenient or upsetting life changes, so we choose to look the other way. we've been doing this with death since time immemorial.
- i'd like to help people wake up, though, and i'll be trying to do that more in the coming weeks and months
41
6
269
31.5K
Art Min
Art Min@artmin·
@MariusHobbhahn The fact that I'm writing code for functional apps now means the line is definitely blurred.
0
0
0
115
Marius Hobbhahn
Marius Hobbhahn@MariusHobbhahn·
I'm very happy that people run more of these experiments, but I'm so surprised about the results. Maybe it's the research vs. SWE setting, but research sprints that would have taken me at least 2 weeks during my PhD, I can now do in a weekend with AI.
Anthropic@AnthropicAI

AI can make work faster, but a fear is that relying on it may make it harder to learn new skills on the job. We ran an experiment with software engineers to learn more. Coding with AI led to a decrease in mastery—but this depended on how people used it. anthropic.com/research/AI-as…

11
5
87
16K
Art Min
Art Min@artmin·
@dharmesh How would you address the security concerns agent to agent?
0
0
0
75
dharmesh
dharmesh@dharmesh·
Just registered clawspot .ai (no website yet). Here's the idea:

Set up a multi-tenant OpenClaw server making it easy for normies to set up an OpenClaw bot with core cloud-based features.

Thinking: GMail, Google Docs, Web Search, X/Twitter, long-term memory, server-based local working directory (to save files and such).

Would let people try out the power of OpenClaw with less of the headache of setup and such.

Somebody's probably already working on something like this so going to sleep now (it's 2:30am) and see what kind of comments this post gets.
177
25
627
109.5K
Art Min
Art Min@artmin·
I saw a @PLAUDAI pin in the wild today and immediately thought: millions of always-on mics mean that anything said, anywhere, can be replayed later. Are we in a soft surveillance state, or am I overthinking this?
0
0
0
25
Jack Clark
Jack Clark@jackclarkSF·
I wrote a short story this morning about AI systems subtly changing the behavior of other AI agents through the proliferation of slightly off-distribution and confusing text. And then by this evening with Moltbook, fiction is reality. Interesting times!
27
9
158
11.4K
Art Min retweeted
Mike McCormick
Mike McCormick@MikeMcCormick_·
Moltbot is wild

Feels like a good time to say we need the world’s best builders launching ambitious new projects to make the world safe & resilient as we approach AGI

If you’re a killer founder and want to build something here, say hi. That’s all we do at @HalcyonFutures and @HalcyonVC
2
6
47
15.8K