
matthewputman
@matthewputman
scientist, musician, author, producer, poet, father. CEO & co-founder, @nanotronics

I continue to stand by my comment and don't see it as "gaslighting". I've seen people who have dedicated their lives to alignment and good AI policy get told they're sellouts, bootlickers, traitors to humanity, morally equivalent to concentration camp guards, etc. Sometimes these accusations are directed against them by name (while capabilities researchers at Meta or something get off scot-free, because it doesn't serve intra-factional conflicts as much to condemn them). They feel demoralized by this and under siege. You and I are Twitter warriors and used to this kind of stuff, but a lot of these people aren't. Sometimes they're Anthropic employees, but other times they're just random EA staffers, or technical alignment researchers who think technical alignment is a good path, or writers who side with people in the previous groups.

I don't want anyone to overupdate on the exact examples I'm listing here, because I'm talking more about a general mood, but I think it would be bad manners not to give any: x.com/ilex_ulmus/sta… x.com/wolflovesmelon… x.com/tombibbys/stat… x.com/gcolbourn/stat… x.com/HumanHarlan/st… x.com/RemmeltE/statu… x.com/DavidSKrueger/… I'm not claiming that all of these comments are as bad as each other, or even in the same ballpark of badness, or that Rob's tweet was as bad as any of them.

But for example, I remember a random Lighthaven event, it might've been Manifest or something, where the conversation turned to how we could most effectively "stigmatize" people who worked at Anthropic. Everyone just sort of accepted this framing and started proposing ideas. When I suggested that it wasn't obvious that we should be stigmatizing these people, and that this was actually a big and dangerous step subject to slatestarcodex.com/2016/05/02/be-… , it was treated as an obvious faux pas on my part.

And I keep getting requests for writing advice from random Less Wrong commenters who want me to look over their N versions of the same article about how haven't you heard, technical alignment has now been discredited, Dario has been proven a bad actor, and we all have to switch to PauseAI. It seems like a lot of Less Wrong and Rationalist Twitter are pivoting to this position at once, and it's getting surprisingly (to me) little pushback from within the rationalist community. One explanation is that it's happening because this position is obviously true, and I don't a priori rule this out, but it doesn't seem compelling to me - partly because the policy switch doesn't feel obviously true *to me*, and partly because lots of people are converging on the same questionable strategic decisions without pushback (eg to use "effective altruism" as the term for the enemy).

Meanwhile, when I publicly speak out against this, even in the most gentle way possible (a neutral-tone reply on Twitter to a specific tweet of Rob's that most people in this discussion now agree was at least slightly badly phrased), I get told that Lighthaven is considering canceling all future ACX meetups in retaliation, and several people Discord me in private saying we urgently need to meet and discuss, and my apparently-former-friends tell me that I can't possibly actually believe this and I must be gaslighting them, and people accuse me of lying to preserve my contacts with Open Phil (whose money I have never taken). Yes, as you say, I've built up some status and this insulates me from some of the negative dynamics in the community. But that's exactly what I'm worried about.
If you guys ban me from Lighthaven, I can find somewhere else to host meetups. But I think the average person whose org depends on Lighthaven support, or who doesn't want to get in a big Twitter war with all of the luminaries of the community, won't be very excited about trying to push back against this narrative and say that maybe Anthropic might be okay.

So the point I'm trying to make with all of this is that the combination of:

--- A few bad actors (whom I don't lump Rob, Lighthaven, or anyone else in with) saying extremely emotionally charged things, like that if you haven't 100% switched from the old alignment-at-labs agenda to the new pause agenda, you're a traitor to humanity and a child-murderer and should be consumed with guilt.

--- Lots of discussions at Lighthaven, on Less Wrong, on this part of Twitter, etc, which just sort of assume that everyone agrees that Pause AI activism is the cool new thing we're all switching to, and that anyone who continues to believe the discredited old alignment-at-labs paradigm must just be lying or shilling (a surprising new consensus which formed quickly and with surprisingly little meta-level commentary).

--- Explicit planning about how to stigmatize the alignment-at-labs people.

--- Very direct and visible examples of retaliation and pressure against people within the rationalist community who speak out in favor of the alignment-at-labs plan, even if they also want to pursue pausing AI as a parallel strategy.

...are the sort of conditions that contribute to the possibility of epistemic collapse and dumb-in-retrospect strategic errors. (Example: I now think that the 2023 FLI letter supporting a six-month pause was a strategic error, because the accelerationists are using it as a not-entirely-unfair jab against us - "do you still believe that pausing for six months in 2023 would have solved our problems?" I signed the letter, and I think in retrospect I made that mistake because I didn't want to look like one of the bad people who was "acting strategically" and "playing 11D chess" by not immediately getting on board with the latest loudly-demand-an-immediate-pause initiative - although I probably wouldn't have used those exact words/concepts in 2023.)

The rationalist/EA communities in particular are vulnerable to these dynamics. Everyone is so bad at taking their own side of an issue that when a few strong-willed people who are good at performing moral clarity show up and tell them they're wrong and bad, they get hyperscrupulous and fold immediately (see Part I of astralcodexten.com/p/criticism-of… ). I see part of my role as challenging some of these things and giving people permission not to fold.

I acknowledge that there are also dynamics on "the other side", with people unwilling to disagree with EA/OP/Ant. This isn't contradictory; it's the way these situations always work (eg there are reputational penalties both for being woke at an Alabama church and for being anti-woke at a California university). If you're wondering why I'm criticizing you and not them, my answers are:

1. I am criticizing them. You can see me criticizing Dario's cringeworthy take on "doomers" in Adolescence of Technology on the last ACX links post, entry #31. I think of posting criticism on ACX as a bigger and more aggressive step than posting it on Twitter (although this is counterbalanced by the fact that I'm less sure Dario reads ACX or cares about what I say).
When I visited Anthropic, I asked the people I met there lots of questions about why they weren't supporting pausing AI more (the modal response was an assurance that they were aware of the relevant considerations and agreed with me about everything, but that the answer to my question was secret). I don't claim to be challenging them daily or making it a big part of my work, but I'm also not challenging you daily or making it a big part of my work. I'm focusing mostly on object-level stuff, and trying to challenge bad comms patterns of all types on the rare occasions when I see a good opportunity.

2. The EA/OP/Ant version of this (maybe) doesn't happen in spaces where I can see it and intervene as often. It might be a helpful exercise for you to link me to the top ten tweets / blog posts / other forms of communication where you believe that EA/OP/Ant are pressuring, defecting against, or misbehaving against you. I can't immediately think of what would be in a list like this. If your claim is that they're doing it in private, then I think that's an important difference from you doing it in public!

3. My impression is that EA/OP/Ant usually have specific well-thought-out strategic reasons why they're being jerks to you (eg not funding you because they think it would offend their bigshot political connections) and that these reasons are true and sympathetic. I think this is an important difference from the LW/rat/Twitter community just sort of spontaneously settling into an anti-EA/OP/Ant position.

4. Relatedly, I think the goal of the EA community is to fund good things, and the goal of the rationalist community is to be correct about epistemics. If it's hard to disagree with a consensus in EA, I care about this only indirectly/consequentially, in terms of whether it makes their funding decisions worse. If it's hard to disagree with a consensus in the rationalist community, I think it's more of an urgent halt-and-catch-fire moment.

5. I think EA/OP/Ant are doing basically the right thing by their own world-model, whereas I think you're making a mistake even by your own lights. That specific mistake is to focus your criticism on "EA", which I think you're interpreting as something like a few grantmakers who are mean to you, when in fact you're doing collateral damage to eg SFF who fund you, to MIRI/PauseAI/CAIS etc who are part of your movement but whom the average guy on the street would group in with "EA", to public EA influencers like me/Kelsey/Eliezer, to random people who like mosquito nets, and to the general concept of trying to donate money effectively.

6. There's an asymmetry here, sort of like the asymmetry between big corporations and progressive activists. Big corporations are much more powerful than progressive activists, they do lots of bad/unfair things, and insofar as you want to punch up, they're a better target for criticism. But you hear more criticism of corporations from progressive activists than vice versa. So it's often more useful, as an intellectual, to explain the big corporation perspective than the progressive activist perspective (example: Andy Masley on data center water usage - in some sense it's bizarre for him to be "siding with" trillion-dollar data center companies against random very-earnest people on Twitter, but in fact until he started doing that, nobody was defending them, and the discussion was culpably biased in favor of the very-earnest progressive protesters). I think something like that is going on with EA/OP/Ant.
Yes, on a financial level they're ten-thousand-ton gorillas. But I also feel like I constantly see trivially wrong attacks on them getting traction, and they're too busy ruling the world to defend themselves. I am, as usual, astralcodexten.com/p/less-utilita… , and it seems important to call out some of those attacks as unfair.

I'm not asking Rob to change anything in particular. I certainly don't want to silence him or make him stop saying what he believes. I'm very very much not asking him to "disavow" Guido or Holly or whoever. And I'm not asking Oli or Lighthaven to do anything; I literally didn't even mention or address them until they inserted themselves into this discussion (I acknowledge they're on the same "side" as Rob and are right to think that what I'm saying applies to them too, but I choose who I engage with deliberately, and I wanted to stress that I was putting zero pressure on them to change anything).

My entire goal in this is to say publicly, one time, "Hey, I partly disagree with the way this is being communicated, and I'd like to give other people social permission to disagree too." That's now done; I'm backing off for now except to defend myself on tweets like this one, and you should keep pursuing your political strategies in whatever way you think is most effective without expecting me to interfere much.

Might be the greatest clip I’ve ever seen.

Blackstone says the next big thing is chips, data centers & power. The first two feed on the third as U.S. electricity demand climbs 40% in the next decade.

Supply is $VST, $CEG, $OKLO, $CCJ, $LEU & $EOSE. Demand is $NBIS, $IREN, $CIFR, $WULF & $CRWV.

The deeper you go into the semiconductor supply chain, the less believable it becomes.

> TSMC, a company on a small island, produces over 90% of the world’s most advanced chips.
> TSMC relies on Dutch company ASML for EUV lithography machines.
> ASML depends on German company Carl Zeiss, the only firm in the world capable of making mirrors precise enough for ASML’s requirements.
> The light source for ASML’s EUV machines is produced by a single company in San Diego.
> The photoresists used to print transistor patterns are produced by Japanese firms like JSR and Tokyo Ohka Kogyo.
> The ultra-pure quartz needed to make silicon wafers comes entirely from a single mine in Spruce Pine, North Carolina.
> The copper and rare-earth materials inside chips are mined and refined across Chile, the Congo, and China.
> The specialized gases used in chipmaking, like neon and fluorine, largely come from Ukraine and Japan.
> The design blueprints for these chips often come from American companies like NVIDIA, AMD, and Apple, which rely on software tools from U.S. firms like Synopsys and Cadence.

Remove any single piece and the whole system collapses.

Geoffrey Hinton says AIs may already have subjective experiences, but don't realize it because their sense of self is built from our mistaken beliefs about consciousness.
