
Philip Resnik
591 posts

Philip Resnik
@psresnik
MPower Professor; Professor, Linguistics and Institute for Advanced Computer Studies, University of Maryland; Tech Advisor, FiscalNote, Coleridge Initiative
Joined March 2011
27 Following · 4.2K Followers

I complain when a company's behaved poorly, so only fair to praise when they do well. I had a short-notice emergency that led me to not take a @SouthwestAir flight, and they waived their no-show policy to refund. Much appreciated.

@ChrisGPotts Even leaving aside the linguistic and philosophical debate, though, there are important real-world reasons the radical version of distributionalism is a problem — see direct.mit.edu/coli/article/5….

@psresnik That's fair. He hedges on whether he considers meaning in that sense to be part of language, but he does seem to be allowing that it could be excluded from his distributional analyses. However, if I were a student of Harris, I would want to push this as far as possible:


Scientific progress requires investigating a diverse set of ideas. Do we still have that in natural language processing and AI? In this short (3min) video snippet, I raise this question during a panel at the top NLP conference this past summer. youtube.com/watch?v=1ZPjTZ…


@ChrisGPotts There’s evidence that he’s more than just hedging (see the rest of the paragraph I quoted), but I don’t doubt you’d have been inclined to see how far you could take it! :) And quite possibly he’d have been up for that. I dearly wish Lila Gleitman were still around for us to ask.

@ChrisGPotts Harris (1954) speaks of distributional relations “which *correlate with* some aspect of meaning” (p. 156, my emphasis). He never says there is no notion of meaning other than distribution, and in fact he clearly indicates that meaning goes beyond just distributions.

@psresnik I appreciate the comment! For the remark about Zellig Harris: his 1954 paper "Distributional structure" attempts to push the distributional hypothesis as far as possible – meaning, discourse, even mental processes. I assume he'd see LLMs as support for his nominalist stance.

Should we think of LLMs as cognitive models? This talk goes beyond the "argument from amazingness" to a more careful assessment of what it means to model human language processing, and why thinking of LLMs as cognitive models might or might not make sense.
youtu.be/Gt7IAT9lgrQ


To be presented at ACL 2025: Large Language Models Are Biased Because They Are Large Language Models.
Article: doi.org/10.1162/coli_a…
Short (8min) video: youtube.com/watch?v=WLSuhe…
#ACL2025NLP #NLProc #LLMs


In earlier work, we showed that neural topic model evaluation was broken, and those models didn't improve over classical methods the way people thought. This new paper provides a replacement paradigm that's grounded in the real-world requirements of qualitative content analysis.
Alexander Hoyle@miserlis_
(Repost due to mistaken deletion😢): Evaluating topic models (& doc clustering methods) is hard. In fact, since our paper critiquing standard eval practices 4 years ago, there hasn't been a good replacement metric. That ends today! Our ACL paper introduces a new evaluation🧵
Philip Resnik reposted

Also joint work with @psresnik and @boydgraber. This work concludes a "trilogy" of topic model evaluation papers:
paper 1: dl.acm.org/doi/10.5555/35…
thread 1: x.com/miserlis_/stat…
paper 2: aclanthology.org/2022.findings-…
thread 2: x.com/miserlis_/stat…
Alexander Hoyle@miserlis_
When evaluating a method, it makes sense to ask: what's the method used for? In our Findings of EMNLP paper "Are Neural Topic Models Broken?", we draw from content analysis to inform topic model evaluation metrics @Pranav__Goel @rupak_53 @psresnik arxiv.org/abs/2210.16162 1/11

@Itay_itzhak_ Thanks for this! Possibly of interest: my new paper on (normativity-related) bias aligns with yours, and offers a root-cause answer to your Section 3 question, "What Causes Bias in LLMs?". youtube.com/watch?v=WLSuhe…, doi.org/10.1162/coli_a… #ACL2025NLP #NLProc


🚨New paper alert🚨
🧠
Instruction-tuned LLMs show amplified cognitive biases — but are these new behaviors, or pretraining ghosts resurfacing?
Excited to share our new paper, accepted to CoLM 2025🎉!
See thread below 👇
#BiasInAI #LLMs #MachineLearning #NLProc

Philip Resnik reposted

Researchers @UofMaryland are examining how and why some misleading narratives proliferate via strategic use of certain mainstream news articles. The paper—coauthored by Pranav Goel, @_Jon_Green, @davidlazer & @psresnik—was published @NatureHumBehav.
nature.com/articles/s4156…

Looking for guidance on raising successful kids in a complicated world? Rebecca Resnik's new podcast is (objectively, even if I'm personally biased!) a great place to go. podcasts.apple.com/us/podcast/cul…

@UHC If an MD prescribed a medication, they’re saying it’s medically justified. If the generic is unavailable and you insist on extra pre-authorization to approve the brand name, there’s ONLY one reason: to avoid paying out benefits to which someone is entitled. Shame on you.

Just out on arXiv: my paper arguing that harmful biases are an inherent consequence of the underlying assumptions in any large language model, as LLMs are currently formulated. To the extent this is true, those assumptions badly need to be revisited. arxiv.org/abs/2406.13138

I am beyond thrilled to report that my first PhD grad, Mona Diab, has been named a new Fellow of the Association for Computational Linguistics, along with four other terrific scholars. Congratulations to @MonaDiab77!
Philip Resnik reposted

Honored my paper with @psresnik was accepted to Findings of #EMNLP2023! Many psycholinguistics studies use LLMs to estimate the probability of words in context. But LLMs process statistically derived subword tokens, while human processing does not. Does the disconnect matter? 🧵
