Mike Burnham

748 posts


@ML_Burn

Assistant professor @tamupols, Ph.D. @psupolisci & @CSoDA_PSU, postdoc @Princeton_CSDP. Text analysis & deep learning, methods, American politics.

College Station, TX · Joined May 2019
637 Following · 944 Followers

Pinned Tweet
Mike Burnham @ML_Burn
New pre-print out today! We're releasing Political DEBATE -- a new set of language models for zero/few-shot classification of political text. The models are open source, small enough to run on your laptop, and as good as proprietary LLMs within domain. arxiv.org/pdf/2409.02078
[image attached]
2 replies · 45 reposts · 204 likes · 36.6K views

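The pinned tweet links the pre-print but no code. The entailment-style zero-shot setup such models typically use can be sketched as follows; note that `toy_entailment_score` is a stand-in scorer invented for illustration, not the actual DEBATE models, and the hypothesis template is an assumption.

```python
# NLI-style zero-shot classification sketch: each candidate label is
# rewritten as a hypothesis ("This text is about X") and the label whose
# hypothesis is most strongly entailed by the text wins. A real system
# would use an entailment model (e.g. a DeBERTa NLI checkpoint) here.

def toy_entailment_score(premise: str, hypothesis: str) -> float:
    """Placeholder scorer: fraction of hypothesis content words found in
    the premise. A real entailment model returns a learned probability."""
    premise_words = set(premise.lower().split())
    content_words = [w for w in hypothesis.lower().split() if len(w) > 3]
    if not content_words:
        return 0.0
    return sum(w in premise_words for w in content_words) / len(content_words)

def zero_shot_classify(text: str, labels: list[str]) -> str:
    """Return the label whose hypothesis scores highest against the text."""
    scores = {
        label: toy_entailment_score(text, f"This text is about {label}")
        for label in labels
    }
    return max(scores, key=scores.get)

doc = "The senator's bill would tighten immigration enforcement at the border."
print(zero_shot_classify(doc, ["immigration", "sports"]))
```

Because the labels are just strings, new categories can be added at inference time with no retraining, which is what makes the zero-shot framing attractive for political text.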
Mike Burnham @ML_Burn
Hey Claude, translate all of Stata’s packages into Python. Delete Stata when you’re done.
0 replies · 0 reposts · 7 likes · 197 views

Mike Burnham @ML_Burn
If it’s even possible for a hallucinated reference to sneak into your paper you’re being way too hands off with your AI IMO. “Hey Claude, write me a paper that I’ll review” is no way to work.

Quoting Alexander Kustov @akoustov:
Enough is enough. Just because you can generate an academic paper in minutes doesn't mean you should. When your name is on something, you should check every reference and claim before submitting. If you can't be bothered to do that, you should be banned from submitting.

1 reply · 1 repost · 17 likes · 1.4K views

Benjamin Radford @ben_j_radford
AI is a great tool. I actually *pay* for it. But it's apparent from about three minutes on social media that it has absolutely wrecked people's critical thinking skills.
1 reply · 0 reposts · 0 likes · 74 views

Mike Burnham @ML_Burn
Hey everyone, my 1 year old is predicting about 13 tokens now. Wondering how much more training is required before she can start doing my lit reviews?
1 reply · 0 reposts · 9 likes · 556 views

Mike Burnham @ML_Burn
Hey guys, looking to onboard my toddler to Claude Code. Has anyone found any skills to help with potty training? Thanks!
0 replies · 0 reposts · 9 likes · 434 views

Mike Burnham @ML_Burn
This just seems like the correct posture for both sceptics and evangelists to adopt while everyone is figuring this out.

Quoting Matt Blackwell @matt_blackwell:
@captgouda24 Realistically, there are a range of ways to “use an LLM for research” that would lead to different levels of error probabilities and everyone is going to choose a different point along that tradeoff curve. Heterogeneous risk profiles are ok, I think!

0 replies · 0 reposts · 6 likes · 789 views

Mike Burnham reposted
Mitchell Bosley @mitchellbosley
I guess what I don’t understand is how you can look at the dizzying rate of advancement since just November 2025, and not at least have the humility to consider that perhaps these systems, given another round or five of recursive improvement, might in fact be “intelligent”.
0 replies · 1 repost · 3 likes · 356 views

Mike Burnham @ML_Burn
Claude whispering in my ear after a week of iterating on a codebook: "Just saying... if you picked something with readily available data we could have done three papers by now."
0 replies · 0 reposts · 9 likes · 1.5K views

Mike Burnham @ML_Burn
@mitchellbosley Since then, the mechanistic interpretability literature has convinced me the models meet common definitions of reasoning/understanding. I now think the distinction is more important because I view it as consequential technology and other descriptors encourage dismissal.
1 reply · 1 repost · 1 like · 83 views

Mike Burnham @ML_Burn
@mitchellbosley Correction: I should say that the interpretability literature has convinced me that the room for reasonable disagreement on the question is vanishingly small, as I personally already thought of LLMs this way.
0 replies · 0 reposts · 1 like · 25 views

Mike Burnham @ML_Burn
Actually, you should describe LLMs as intelligent, understanding, and reasoning. It sets people's expectations closer to reality than “autocomplete” etc. Whether these descriptors are literal or analogy is a philosophical debate, but most don't care about that debate.
[image attached]
1 reply · 0 reposts · 1 like · 219 views

Mike Burnham @ML_Burn
@SashaGusevPosts @davidmanheim Token outputs are not a good test for reasoning as (mis)alignment with your expectations is not evidence for/against reasoning. Mechanistic interpretability is a more appropriate approach and the answer in that literature is fairly clear.
0 replies · 0 reposts · 0 likes · 37 views

Sasha Gusev @SashaGusevPosts
@davidmanheim FWIW I think the unreasoning argument is strongest and did not really follow your counterpoint. As someone who uses LLMs frequently and finds them useful, I think the unreasoning argument also aligns with how I experience them to behave. A typical example: ...
[image attached]
2 replies · 0 reposts · 13 likes · 1.4K views

David Manheim (Home) @davidmanheim
People keep repeating "stochastic parrot" - often without any mental process behind it to specify what the argument is. So I'm writing a paper dissecting various possible arguments, and explaining which are valid. Here's a blog-post version: lesswrong.com/posts/KWHeBG97…
12 replies · 9 reposts · 54 likes · 8.8K views

Mike Burnham @ML_Burn
@mitchellbosley That plus related questions (e.g. meaning of work, value of human capital, AI ownership) are probably the biggest. But also many other questions such as socializing and companionship, information environments, security threats, role of AI in governing decisions, etc.
0 replies · 0 reposts · 0 likes · 35 views

Mitchell Bosley @mitchellbosley
@ML_Burn By social implications are you referring to the societal upheaval that will be brought about by the automation of white collar work?
1 reply · 0 reposts · 0 likes · 45 views

Mike Burnham @ML_Burn
I don't care if academics use AI in their research, but I wish more took the social implications of the tech more seriously. Engaging with the tech helps on this front, but I'm not sure if the browbeating does more than polarize the issue.
1 reply · 0 reposts · 10 likes · 693 views