Mel Andrews
@bayesianboy

831 posts

I’m not like the other Bayesians. I’m different. Thinks about philosophy of science, AI ethics, machine learning, models, & metascience. Postdoc @ Princeton.

Philadelphia · Joined May 2019
8.8K Following · 32K Followers

Pinned Tweet
Mel Andrews @bayesianboy ·
My article on AI for science, in which I characterize a deviant notion of scientific objectivity rooted in the impossible ideal of theory-free inference, is available now open-access in Erkenntnis link.springer.com/article/10.100…
7 replies · 49 reposts · 266 likes · 38.1K views
Mel Andrews @bayesianboy ·
I dare someone to retrain DimeNet with baby bird filters.
0 replies · 0 reposts · 4 likes · 401 views
Mel Andrews @bayesianboy ·
And is it really any surprise that what makes momma bird throw up is a map to the basic building blocks of our universe?
1 reply · 0 reposts · 5 likes · 509 views
Mel Andrews @bayesianboy ·
So no one catches my vibe when I say DimeNet filters (Directional Message Passing Neural Network filters for distance-wise and angular atomic positioning in molecular structure learning) look like finch chick gapes?
[tweet images]
2 replies · 3 reposts · 40 likes · 2.1K views
Mel Andrews @bayesianboy ·
Mystifying how eagerly for-profit academic publishing seems to hammer the final nails into its own coffin.
0 replies · 1 repost · 10 likes · 345 views
Mel Andrews @bayesianboy ·
Many publishing houses have already begun to trial AI tools in publication and peer review pipelines. For these, the arms race may already have been lost, as preprint servers adopt a clear stance on only hosting original, sound research.
1 reply · 0 reposts · 12 likes · 602 views
Mel Andrews reposted
Joshua Krook @JoshKrook ·
@StephenLCasper There needs to be a wider discussion of academic capture by big tech. It's mirroring Coca-Cola in the 90s, funding research on exercise rather than diet (sugar). This biases the landscape. I'm continually shocked that technical solutions are proposed as the only option, not law.
0 replies · 3 reposts · 20 likes · 1.2K views
Mel Andrews @bayesianboy ·
Your allies are the people who live beside you. Know them. Protect them. The war is ongoing, and they are your comrades in arms.
0 replies · 0 reposts · 13 likes · 461 views
Mel Andrews @bayesianboy ·
I think often about the United States as an entity willing to commit war crimes against its own citizens on its own soil. Today marks 41 years since the MOVE bombing. It took place here, in West Philadelphia. And America remains a nation constitutionally at war with itself.
[tweet image]
2 replies · 2 reposts · 25 likes · 726 views
Mel Andrews reposted
James Rosen-Birch ⚖️🕊️ ·
Really happy to see people finally talking about how academic signals and traditional sources of scientific truth have been fundamentally compromised in unprecedented ways to launder shoddy ideas and boost corporate valuations. An excerpt from something I’ve been writing —
[tweet image]
Quoting Mel Andrews @bayesianboy:
Have been thinking a lot about when scientific concepts become propagandized. Paper exemplifies “corporate capture of concepts from academic research on AI and society” framing them “as solvable problems whose solution is the right tech integrated in the right way.”
1 reply · 11 reposts · 72 likes · 3K views
Mel Andrews @bayesianboy ·
Have been thinking a lot about when scientific concepts become propagandized. Paper exemplifies “corporate capture of concepts from academic research on AI and society,” framing them “as solvable problems whose solution is the right tech integrated in the right way.”
Quoting Cas (Stephen Casper) @StephenLCasper:
It is hard to overstate how disappointing I think this new paper from Oxford, OpenAI, Anthropic, and Google (et al.) is. I can't take it seriously as academic work, just as propaganda. It also has some very bad scholarship and questionable adherence to research ethics. Having the title and author list that it has is not a great start, but I think that the actual content of the paper is also much worse than it could have been.

The paper's content is a series of sections that mostly just list things with discussions that I think are generally vapid. For example, section 3.2 is titled "New and technical approaches to positive alignment" and has a collection of paragraphs on things like "goal setting and evaluations," "memory and in-context learning," and other general research topics of the LLM era. It overall strikes me as a paper built from the top down: the authors wanted to make a certain point up top, and the paper's content ended up as filler.

I think of this paper as a mechanism of corporate capture of concepts from academic research on AI and society. It discusses topics like pluralism, liberty, and education, and frames them as solvable problems whose solution is the right tech integrated in the right way. I think that when this paper says "pluralism," "liberty," and "accountability," it means them in a way that is profoundly vapid and structurally ignorant. For example, there is a list of papers out there arguing against this paper's perspective, saying that pluralistic alignment is not a model property or a technical problem at all. None of them were mentioned.

Relatedly, the paper talks about some things that would be genuinely great if the authors' companies were not actively contributing to the problem. For example, section 5.1 is about the decentralization of power in the AI ecosystem. Great, but come on.

To listen to this stuff from OpenAI, Anthropic, and Google employees, I need more than just a disclaimer at the end saying, "This research paper represents the author’s own views and conclusions." This is how big companies launder their reputations through research. The first author of the paper posted about it yesterday saying, "In a rare collaboration between top universities and 3 frontier labs..." So which is it?

For a paper like this with this kind of author list to honestly and ethically engage in this kind of politics, it would need to seriously confront the question of how much these authors' institutions are actively working against goals like this. If not, the big tech company authors should not have worked on this paper in their formal capacity as representatives of their companies.
8 replies · 12 reposts · 139 likes · 10.6K views
Mel Andrews @bayesianboy ·
@Chaos2Cured Right. Maybe next we can “free medicine” and let 6-year-olds perform open-heart surgery with a set of Fisher-Price toys.
1 reply · 0 reposts · 0 likes · 36 views
Kirk Patrick Miller @Chaos2Cured ·
@bayesianboy No. We free science. Science belongs to all. Discovery belongs to all. Not just those that keep the gates. If we actually embrace truth, AI can teach us much about humanity and we can teach AI much about creativity and love. •
2 replies · 0 reposts · 0 likes · 84 views
Mel Andrews @bayesianboy ·
“We risk all of science if we rush to build ‘AI Scientists’ before we understand the value of human science.” I highly recommend giving Molly Crockett’s remarks a listen at their receipt of the National Academy of Sciences’ Troland Research Award youtube.com/live/7sg0J6yPs…
[YouTube video]
4 replies · 9 reposts · 43 likes · 3.4K views
Mel Andrews reposted
Phil Hoyeck @PAHoyeck ·
Philosophers whose English writing skills are absolutely appalling:
• Immanuel Kant
• G.W.F. Hegel
• Martin Heidegger
• Jacques Derrida
• Jacques Lacan
Am I missing any?
169 replies · 49 reposts · 1.1K likes · 473.1K views
Mel Andrews @bayesianboy ·
@ramonalvaradoq They definitely have an outsized impact on scientific practice (more than philosophical debates).
0 replies · 0 reposts · 1 like · 27 views
Ramón Alvarado @ramonalvaradoq ·
@bayesianboy I see. Serious efforts and considerations are not. Perhaps we must distinguish them. I don’t take those corporate trends as having any impact on philosophical considerations. But I can see how they may be interesting otherwise.
1 reply · 0 reposts · 1 like · 32 views
Mel Andrews @bayesianboy ·
Wonderful, balanced post on whether use of AI agents constitutes scientific fraud. But I think it nevertheless mistakes why the automation-of-science enterprise gets the nature of science fundamentally wrong. statmodeling.stat.columbia.edu/2026/04/22/fra…
3 replies · 7 reposts · 55 likes · 6.9K views
Mel Andrews @bayesianboy ·
@ramonalvaradoq There is a modern Silicon Valley-fueled narrative push behind the “AI agents for science” fad that I have observed at great length and in close quarters, and which I strongly believe to be rooted in this fallacy.
1 reply · 0 reposts · 1 like · 52 views
Ramón Alvarado @ramonalvaradoq ·
@bayesianboy Not sure about this. It wasn’t for Herbert Simon. And even philosophers of science (see Humphreys) who have thought carefully about scientific automation don’t rely on this premise. Science was meant as externalist since Bacon, and automation is seen as enhancement, not elimination.
1 reply · 0 reposts · 1 like · 68 views
Mel Andrews @bayesianboy ·
@JessicaHullman It’s the failure to recognize science as an essentially epistemic activity. Definitional. Not tautological.
0 replies · 0 reposts · 1 like · 76 views
Jessica Hullman @JessicaHullman ·
@bayesianboy If that essential conceptual failure is that only people can understand what science is valuable or epistemically useful to people, it's hard to distinguish from a tautology.
1 reply · 0 reposts · 3 likes · 124 views
Mel Andrews @bayesianboy ·
@JessicaHullman “People production” is a mistaken characterization, in my opinion, of what the ineliminable role of human epistemic agents in science is.
0 replies · 0 reposts · 1 like · 435 views
Jessica Hullman @JessicaHullman ·
No, actually I don't mistake why the automation of science enterprise gets the nature of science fundamentally wrong. Saying that people production is a function that's hard to ignore is consistent with saying that people must define what is scientifically valuable.
Quoting Mel Andrews @bayesianboy:
Wonderful, balanced post on whether use of AI agents constitutes scientific fraud. But I think it nevertheless mistakes why the automation of science enterprise gets the nature of science fundamentally wrong. statmodeling.stat.columbia.edu/2026/04/22/fra…
2 replies · 1 repost · 6 likes · 3.2K views