matthewputman
@matthewputman · 2.3K posts

scientist, musician, author, producer, poet, father · CEO & co-founder, @nanotronics

New York · Joined August 2008
1.4K Following · 2.3K Followers
matthewputman@matthewputman·
@HiFromMichaelV Thanks for sharing that blog. @jessi_cata wrote a great piece. I wonder if people know how many people you have influenced who now do the most important work in AI and so many other things. Cults insulate people from the world, while you inspire them to build it.
michael vassar@HiFromMichaelV·
Scott, you’ve been working to silence your critics for years. You are the main source publicly and groundlessly declaring that receiving information from me a) makes people insane (which you withdrew) and b) makes them ‘Vassarite’ cultists. lesswrong.com/posts/MnFqyPLq…
Scott Alexander@slatestarcodex

I continue to stand by my comment and don't see it as "gaslighting". I've seen people who have dedicated their lives to alignment and good AI policy get told they're sellouts, bootlickers, traitors to humanity, morally equivalent to concentration camp guards, etc. Sometimes these accusations are directed against them by name (while capabilities researchers at Meta or something get off scot-free, because it doesn't serve intra-factional conflicts as much to condemn them). They feel demoralized by this and under siege. You and I are Twitter warriors and used to this kind of stuff, but a lot of these people aren't. Sometimes they're Anthropic employees, but other times they're just random EA staffers, or technical alignment researchers who think technical alignment is a good path, or writers who side with people in the previous groups.

I don't want anyone to overupdate on the exact examples I'm listing here because I'm talking more about a general mood, but I think it would be bad manners not to give any: x.com/ilex_ulmus/sta… x.com/wolflovesmelon… x.com/tombibbys/stat… x.com/gcolbourn/stat… x.com/HumanHarlan/st… x.com/RemmeltE/statu… x.com/DavidSKrueger/… I'm not claiming that all of these comments are as bad as each other, or even in the same ballpark of badness, or that Rob's tweet was as bad as any of them.

But for example, I remember a random Lighthaven event, it might've been Manifest or something, where the conversation turned to how we could most effectively "stigmatize" people who worked at Anthropic. Everyone just sort of accepted this framing and started proposing ideas. When I suggested that it wasn't obvious that we should be stigmatizing these people and this was actually a big and dangerous step subject to slatestarcodex.com/2016/05/02/be-… , it was treated as an obvious faux pas on my part.

And I keep getting requests for writing advice by random Less Wrong commenters who want me to look over their N versions of the same article about how haven't you heard, technical alignment has now been discredited, Dario has been proven a bad actor, and we all have to switch to PauseAI. It seems like a lot of Less Wrong and Rationalist Twitter are pivoting to this position at once, and it's getting surprisingly (to me) little pushback from within the rationalist community. One explanation is that it's happening because this position is obviously true, and I don't a priori rule this one out, but it doesn't seem compelling to me - partly because the policy switch doesn't feel obviously true *to me*, and partly because lots of people are converging on the same questionable strategic decisions without pushback (eg to use "effective altruism" as the term for the enemy).

Meanwhile, when I publicly speak out against this, even in the most gentle way possible (a neutral-tone reply on Twitter to a specific tweet of Rob's that most people in this discussion now agree was at least slightly badly phrased), I get told that Lighthaven is considering canceling all future ACX meetups in retaliation, and several people Discord me in private saying we urgently need to meet and discuss, and my apparently-former-friends tell me that I can't possibly actually believe this and I must be gaslighting them, and people accuse me of lying to preserve my contacts with Open Phil (whose money I have never taken). Yes, as you say, I've built up some status and this insulates me from some of the negative dynamics in the community. But that's exactly what I'm worried about.
If you guys ban me from Lighthaven, I can find somewhere else to host meetups. But I think the average person whose org depends on Lighthaven support, or who doesn't want to get in a big Twitter war with all of the luminaries of the community, won't be very excited about trying to push back against this narrative and say that maybe Anthropic might be okay.

So the point I'm trying to make with all of this is that the combination of:

--- A few bad actors (who I don't lump Rob, Lighthaven, or anyone else in the same bucket as) saying extremely emotionally-charged things, like that if you haven't 100% switched from the old alignment-at-labs agenda to the new pause agenda, you're a traitor to humanity and a child-murderer and should be consumed with guilt.

--- Lots of discussions at Lighthaven, on Less Wrong, on this part of Twitter, etc, which just sort of assume that everyone agrees that Pause AI activism is the cool new thing we're all switching to, and that anyone who continues to believe the discredited old alignment-at-labs paradigm must just be lying or shilling (a surprising new consensus which happened quickly and with surprisingly little meta-level commentary)

--- Explicit planning about how to stigmatize the alignment-at-labs people.

--- Very direct and visible examples of retaliation and pressure against people within the rationalist community who speak out in favor of the alignment-at-labs plan, even if they also want to pursue pausing AI as a parallel strategy.

...are the sort of conditions that contribute to the possibility of epistemic collapse and dumb-in-retrospect strategic errors. (example: I now think that the 2023 FLI letter supporting a six-month pause was a strategic error, because the accelerationists are using it as a not-entirely-unfair jab against us - "do you still believe that pausing for six months in 2023 would have solved our problems?" I signed the letter, and I think in retrospect I made that mistake because I didn't want to look like one of the bad people who was "acting strategically" and "playing 11D chess" by not immediately getting on board with the latest loudly-demand-an-immediate-pause initiative - although I probably wouldn't have used those exact words/concepts in 2023).

The rationalist/EA communities in particular are vulnerable to these dynamics. Everyone is so bad at taking their own side of an issue that when a few strong-willed people who are good at performing moral clarity show up and tell them they're wrong and bad, they get hyperscrupulous and fold immediately (see Part I of astralcodexten.com/p/criticism-of… ). I see part of my role as challenging some of these things and giving people permission not to fold.

I acknowledge that there are also dynamics on "the other side" about people being unwilling to disagree with EA/OP/Ant. This isn't contradictory, it's the way these situations always work (eg there are reputational penalties both for being woke at an Alabama church, or for being anti-woke at a California university). If you're wondering why I'm criticizing you and not them, my answers are:

1. I am criticizing them. You can see me criticizing Dario's cringeworthy take on "doomers" in Adolescence of Technology on the last ACX links post, entry #31. I think of posting criticism on ACX as a bigger and more aggressive step than posting it on Twitter (although this is counterbalanced by the fact that I'm less sure Dario reads ACX or cares about what I say).
When I visited Anthropic, I asked the people I met there lots of questions about why they weren't supporting pausing AI more (the modal response was an assurance that they were aware of the relevant considerations and agreed with me about everything, but that the answer to my question was secret). I don't claim to be challenging them daily or making it a big part of my work, but I'm also not challenging you daily or making it a big part of my work. I'm focusing mostly on object-level stuff, and trying to challenge bad comms patterns of all types on the rare occasions when I see a good opportunity.

2. The EA/OP/Ant version of this (maybe) doesn't happen in spaces where I can see it and intervene as often. It might be a helpful exercise for you to link me to the top ten tweets / blog posts / other forms of communication where you believe that EA/OP/Ant are pressuring, defecting against, or misbehaving against you. I can't immediately think of what would be in a list like this. If your claim is that they're doing it in private, then I think that's an important difference from you doing it in public!

3. My impression is that EA/OP/Ant usually have specific well-thought-out strategic reasons why they're being jerks to you (eg not funding you because they think it would offend their bigshot political connections) and that these reasons are true and sympathetic. I think this is an important difference from the LW/rat/Twitter community just sort of spontaneously settling into an anti-EA/OP/Ant position.

4. Relatedly, I think the goal of the EA community is to fund good things, and the goal of the rationalist community is to be correct about epistemics. If it's hard to disagree with a consensus in EA, I care about this only indirectly/consequentially in terms of whether it makes their funding decisions worse. If it's hard to disagree with a consensus in the rationalist community, I think it's more of an urgent halt-and-catch-fire moment.

5. I think EA/OP/Ant are doing basically the right thing by their own world-model, whereas I think you're making a mistake even by your own lights. That specific mistake is to focus your criticism on "EA", who I think you're interpreting as something like a few grantmakers who are mean to you, when in fact you're doing collateral damage to eg SFF who fund you, to MIRI/PauseAI/CAIS etc who are part of your movement but who the average guy on the street would group in with "EA", to public EA influencers like me/Kelsey/Eliezer, to random people who like mosquito nets, and to the general concept of trying to donate money effectively.

6. There's an asymmetry here sort of like the asymmetry between big corporations and progressive activists. Big corporations are much more powerful than progressive activists, they do lots of bad/unfair things, and insofar as you want to punch up, they're a better target for criticism. But you hear more criticism about corporations from progressive activists than vice versa. So it's often more useful, as an intellectual, to explain the big corporation perspective than the progressive activist perspective (example: Andy Masley on data center water usage - in some sense it's bizarre for him to be "siding with" trillion dollar data center companies against random very-earnest people on Twitter, but in fact until he started doing that, nobody was defending them, and the discussion was culpably biased in favor of the very-earnest progressive protesters). I think something like that is going on with EA/OP/Ant.
Yes, on a financial level they're ten-thousand-ton gorillas. But also I feel like I constantly see trivially wrong attacks on them getting traction, and they're too busy ruling the world to defend themselves. I am, as usual, astralcodexten.com/p/less-utilita… , and it seems important to call out some of those attacks as unfair.

I'm not asking Rob to change anything in particular. I certainly don't want to silence him or make him stop saying what he believes. I'm very very much not asking him to "disavow" Guido or Holly or whoever. And I'm not asking Oli or Lighthaven to do anything, I literally didn't even mention or address them until they inserted themselves into this discussion (I acknowledge they're on the same "side" as Rob and are right to think that what I'm saying applies to them too, but I choose who I engage with deliberately, and I wanted to stress that I was putting zero pressure on them to change anything).

My entire goal in this is to say publicly, one time, "Hey, I partly disagree with the way this is being communicated, and I'd like to give other people social permission to disagree too." That is now finished; I'm backing off for now, except to defend myself on tweets like this one, and you should keep pursuing your political strategies in whatever way you think is most effective without expecting me to interfere much.

matthewputman@matthewputman·
Yeah. I am not arguing with you at all. In fact I am not arguing with anyone; I just want to know where the ideas start, where the politics interfere, and where the speculation is. I don't know genomics well enough to have an informed opinion, and on the science I am not giving one. Generally you start with how nature works and figure out what to do with it as a society after. Premature speculation is my only concern, but I have so much to learn before I can decide either way on matters that are either true or not. Otherwise it is a political argument disguised as science.
Eric Weinstein@ericweinstein·
Hi Matthew: Let’s start here. I believe in both situations there is someone who is simultaneously: A) Patiently Condescending from authority. B) Wrong at a basic and self-evident level of science. I meet a fair number of physicists who are unaware of how string theorists treat non string theorists at a level that I believe constitutes abuse of authority and flagrant scientific malpractice. Plain and simple. Let’s start there.
Eric Weinstein@ericweinstein·
Try being patiently lectured by a String Theorist who can't remember the details of the Standard Model about "How physics works." Or by economists about the construction and indexing of the "Representative Consumer." It's exactly the same condescending pseudo-academic experience.
Anthony@Catholicizm1

Might be the greatest clip I’ve ever seen.

matthewputman@matthewputman·
I guess that ignoring science in favor of consensus is real and hinders progress, but there is something to prioritization, and judgment about what we should do matters more to me on some things. That said, that is why different disciplines exist, I guess; some should focus on what they are trained in, without bias. I just see a time coming where, in genetics, the question of what matters counts as much as the question of what is. In physics, if you are a physicist, it just matters.
Sabine Hossenfelder@skdh·
@ericweinstein I would argue that denying a genetic basis of skin colour is on a different level than forgetting the details of the standard model, but same energy I guess.
matthewputman@matthewputman·
@AmandaAskell @DanielleMorrill I hate separating this too, though you could rightly separate those with a scientifically curious worldview from those without one. It is intentionality more than ability.
Amanda Askell@AmandaAskell·
I really dislike categorizing *people* as technical and non-technical. It makes technical work seem like some kind of arcane skill rather than just a thing all people can learn to do to the extent that it's useful to them.
matthewputman@matthewputman·
This video is in no way the "greatest clip ever." Studying genetic variation matters for biology, but talking about intelligence is scientifically premature and socially reckless. At present, genetics explains population-level variation far better than individual cognitive traits, and no genes for intelligence differences between racial groups have been identified. I could just as easily argue that many of the Black artists I work with show greater forms of intelligence than the white technologists I know, but that would be an equally useless scientific claim. People are speaking from bias long before cold evidence exists, let alone a shared agreement on what we even mean by intelligence. I am more interested in questions like whether traits such as perfect pitch, so central to jazz, America's great original art form, have genetic components. Until then, shouting about unfinished science only invites dangerous racial conclusions. But your comment is certainly true too: lack of genetic variability will not lead to understanding.
michael vassar@HiFromMichaelV·
@Catholicizm1 I think she may have successfully established that some white people (or whatever she is?) can be dumber than essentially all black people.
Anthony@Catholicizm1·
Might be the greatest clip I’ve ever seen.
matthewputman@matthewputman·
@GaryMarcus @kimmonismus Of course it's remotely possible. That's not a very high bar. But cancer is already multivariate enough that if AI can make real progress there, it becomes a meaningful proof of concept, not just a remote possibility.
Chubby♨️@kimmonismus·
This is absolutely impressive: Dario Amodei believes we can double our lifespan, cure cancer and so much more, in just a few years!
matthewputman@matthewputman·
Jesus, Gary. You know I love you, but when someone like Dario aims high, I wish you’d lean in rather than mock it. You’re one of the most insightful thinkers I know, and it’s a waste to spend that brilliance telling us what can’t be done instead of helping push the frontier. And if you see a better path than LLMs, lay out the timeline. I’ll take it seriously.
Gary Marcus@GaryMarcus·
@kimmonismus it’s total bullshit speculation unsupported by any serious science and at odds with the realities of collecting clinical data. not going to happen.
matthewputman@matthewputman·
@GaryMarcus We used to do this together, Gary! Back in the old days, when "AI" was just a buzzword you used to raise money.
Gary Marcus@GaryMarcus·
Savvy. Domain-specific AI, not chatbots, just like I recommended in my October NYT op-ed.
Rohan Paul@rohanpaul_ai

NEWS🏭: Jeff Bezos is officially wearing the CEO badge again. Bezos is launching Project Prometheus with $6.2B and serving as co-chief executive to build AI for engineering and manufacturing across space, autos, and computers.

Prometheus is about AI that designs parts, plans builds, runs tests with robots, learns from results, then repeats faster than human-only workflows, with a tight loop of propose, fabricate, measure, and update.

Leadership pairs Bezos with Vik Bajaj, who previously built frontier R&D groups at Google X, Verily, and Foresite Labs. Prometheus already has nearly 100 hires, including researchers from OpenAI, DeepMind, and Meta, which shortens the ramp for model training, simulation, and control stacks.

Funding size matters because training physics-aware models and operating automated labs burn capital on compute, custom rigs, materials, and high-cadence experimentation.

The target differs from standard chatbots that learn from text alone, since these systems also learn from physical experiments, where outcomes push the model toward designs that actually work in the real world. Think of it as coupling generative design, high-fidelity simulation, and closed-loop robotics, so the AI proposes candidates, screens them in sim, fabricates the best few, measures gaps, and retrains for the next cycle.

Rivals are moving, like Periodic Labs with automated discovery lines and Thinking Machines Lab with $2B, so execution speed and data advantages will decide who builds the strongest feedback loop. If Prometheus links design data, manufacturing telemetry, and test results into one continuous dataset, it can cut iterations and raise yield for complex assemblies.

nytimes.com/2025/11/17/technology/bezos-project-prometheus.html
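The loop in that last paragraph is easy to make concrete. Below is a minimal, self-contained sketch of one propose → simulate → fabricate → measure → retrain cycle. Every name in it (the toy design sampler, the surrogate simulator, the noisy fabrication step, the model update) is an illustrative assumption for this post, not Prometheus code or any real API.

```python
# Sketch of a closed design loop: propose candidates, screen them in a cheap
# simulation, fabricate and measure only the best few, then retrain.
# All functions are hypothetical stand-ins, not any real system's API.
import random

def propose_candidates(model, n=64):
    # Stand-in for a generative design model: sample n 3-parameter designs
    # centered on the model's current guess.
    return [[model["bias"] + random.gauss(0, 1) for _ in range(3)] for _ in range(n)]

def simulate(design):
    # Stand-in for high-fidelity simulation: a cheap surrogate score
    # (here, closeness to an arbitrary target of all-ones).
    return -sum((x - 1.0) ** 2 for x in design)

def fabricate_and_measure(design):
    # Stand-in for the expensive robotic build-and-test step: the real-world
    # outcome is the simulated score plus measurement noise.
    return simulate(design) + random.gauss(0, 0.1)

def retrain(model, results):
    # Stand-in for a model update: nudge the proposal center toward the
    # best measured design.
    best_design, _ = max(results, key=lambda r: r[1])
    model["bias"] += 0.5 * (sum(best_design) / len(best_design) - model["bias"])
    return model

model = {"bias": 0.0}
for cycle in range(10):
    candidates = propose_candidates(model)
    # Screen everything in simulation; only fabricate the top few.
    top = sorted(candidates, key=simulate, reverse=True)[:4]
    results = [(d, fabricate_and_measure(d)) for d in top]
    model = retrain(model, results)
    print(f"cycle {cycle}: best measured = {max(s for _, s in results):.3f}")
```

The structural point, as the post argues, is that the expensive fabricate-and-measure step is spent only on candidates that survive cheap simulation, so each cycle tightens the feedback loop.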

matthewputman retweeted
matthewputman@matthewputman·
RL and transformers don’t just train consumer models, they can train matter. The future factory is a living neural network, not a static assembly line.
matthewputman@matthewputman·
Everyone chases “big” as the path to scale. Cubefabs.com changes the paradigm. Each fab can start small, fast, and autonomous, but together they act as the world’s largest megafab. The network itself is the scale.
matthewputman retweeted
matthewputman@matthewputman·
@MattWelch You do not seem to find it painful to watch without the Mets making it? It is hard for me in sports to get over losses. The NBA championship when the Knicks made it, for example. I wish I could.
Matt Welch@MattWelch·
Good baseball game on.
matthewputman@matthewputman·
I would love to see a day when as many people use AlphaFold as use image generators. We might cure diseases instead of just drawing them. @demishassabis @DeepMind @BakerLab
matthewputman@matthewputman·
Like most people I am thrilled that machines are getting smarter, but the problem is that we have forgotten what we wanted to be smart for.
matthewputman@matthewputman·
@motorhueso I can’t wait for the day when people do not refer to discussions about progress or doom in terms of financial bubbles.
matthewputman@matthewputman·
@GaryMarcus Man, I remember when you were still a disciple! We used to build AI together. You have fallen from grace. Or maybe you’re the Judas of deep learning.
Gary Marcus@GaryMarcus·
i feel flattered.