Pablo Contreras Kallens
@pcontrerask

89 posts

Ph.D. candidate, Cornell Psychology in the Cognitive Science of Language lab. Serious account.

Ithaca, NY · Joined September 2020
54 Following · 127 Followers
Jeff Yoshimi @JeffYoshimi
I've been having a nice time talking to @denizcemonduygu about maps and graphs of philosophical discourse, and thought I'd repost my bibliometric map of the phenomenology literature. Dots are authors, links are citations, and colors are clusters.
3 replies · 8 retweets · 82 likes · 7.5K views
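For readers curious what is under the hood of such a map: the underlying object is just a graph with authors as nodes and citations as edges, with the colored clusters found by community detection. A minimal sketch in Python, assuming networkx is available (the author links below are invented for illustration, not Yoshimi's actual data):

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Toy bibliometric graph: nodes are authors, edges are citations
    # (edges invented for illustration).
    G = nx.Graph()
    G.add_edges_from([
        ("Husserl", "Heidegger"), ("Husserl", "Merleau-Ponty"),
        ("Heidegger", "Gadamer"), ("Merleau-Ponty", "Varela"),
        ("Varela", "Thompson"), ("Thompson", "Gallagher"),
    ])

    # Community detection stands in for the colored clusters in the figure.
    for i, cluster in enumerate(greedy_modularity_communities(G)):
        print(f"cluster {i}: {sorted(cluster)}")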
Pablo Contreras Kallens @pcontrerask
@Forrest_L_Davis @MH_Christiansen @CSL_Lab There's no paper yet! But yes, we found that in this particular agreement dataset, the accuracy of the RLHF GPT-3 drops relative to non-RLHF models on the two phrase manipulations that humans struggle with the most.
0 replies · 0 retweets · 1 like · 51 views
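A rough sketch of what an agreement evaluation of this kind looks like, using GPT-2 from Hugging Face transformers as a freely available stand-in (the minimal pair below is invented; this is not the authors' dataset or their RLHF comparison):

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def logprob(sentence):
        ids = tok(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean NLL per predicted token
        return -loss.item() * (ids.shape[1] - 1)  # total log-probability

    # Invented minimal pair: grammatical vs. ungrammatical number agreement.
    pairs = [("The keys to the cabinet are on the table.",
              "The keys to the cabinet is on the table.")]
    correct = sum(logprob(good) > logprob(bad) for good, bad in pairs)
    print(f"agreement accuracy: {correct / len(pairs):.2f}")

Accuracy here is just the proportion of pairs where the model assigns higher probability to the grammatical variant; the "phrase manipulations" in the tweet would correspond to different syntactic frames for the pairs.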
Forrest Davis @Forrest_L_Davis
@MH_Christiansen @CSL_Lab @pcontrerask Is there a link to the talk or to a paper/abstract? There’s work on agreement errors in English models (partially) overlapping with humans without RLHF or “feedback”. Is the argument that they align more closely after fine-tuning?
2 replies · 0 retweets · 0 likes · 83 views
Pablo Contreras Kallens retweeted
Morten H. Christiansen @MH_Christiansen
@CSL_Lab alum @pcontrerask talks about how feedback is crucial for getting large language models to produce more human-like language output, such as making similar agreement errors and being sensitive to subtle semantic distinctions.
1 reply · 1 retweet · 4 likes · 448 views
Pablo Contreras Kallens retweeted
Steven Elmlinger @ElmlingerSteven
How do infants learn to produce the consonant sounds of their ambient language? To find out, check out our CogSci proceedings paper “Statistical learning or phonological universals? Ambient language statistics guide consonant acquisition in four languages” A 🧵: /1
1 reply · 6 retweets · 28 likes · 6.9K views
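The statistical-learning side of that question can be illustrated with a toy computation: tally consonant frequencies in ambient (caregiver) speech and compare the resulting ranking against infants' order of acquisition. A minimal sketch, with an invented phonemic corpus standing in for the paper's four-language data:

    from collections import Counter

    # Invented phonemic transcriptions of caregiver utterances; the paper
    # uses real corpora from four languages.
    utterances = ["m a m a", "d a d a", "b o l b o l",
                  "m o r e m i l k", "b i g d o g"]
    consonants = set("bdgklmnprst")

    counts = Counter(seg for u in utterances
                     for seg in u.split() if seg in consonants)

    # A statistical-learning account predicts that more frequent ambient
    # consonants tend to be produced earlier by infants.
    for seg, n in counts.most_common():
        print(seg, n)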
Pablo Contreras Kallens @pcontrerask
@glupyan Their response to why it's not seems fixated on only one interpretation of a very complex intellectual exercise, one that focuses on just a portion of its assumptions. Really strange, to me.
0 replies · 0 retweets · 0 likes · 253 views
Gary Lupyan @glupyan
So everyone knows Bender and Koller’s “octopus test” is Searle’s Chinese room thought experiment, right? And there are decades of critiques showing all the ways it’s wrong..? (Asking for a friend)
3 replies · 0 retweets · 10 likes · 4.8K views
Pablo Contreras Kallens retweeted
Jeff Yoshimi @JeffYoshimi
I'm delighted to announce the publication of our free, open access book, "Horizons of Phenomenology", a collection of essays on the state of the field. A brief thread about the book, and the long and ultimately victorious struggle to publish it open access. 1/
10 replies · 38 retweets · 138 likes · 18.1K views
Pablo Contreras Kallens retweeted
Ben Ambridge @Benambridge
In sum, the ideas that children (a) have innate syntactic categories and (b) NEED them because they can't construct them via distributional analyses alone are NOT straw-men but real and influential proposals in the child language literature 7/n
1 reply · 1 retweet · 16 likes · 1K views
Pablo Contreras Kallens @pcontrerask
@jbenfalandays I was once told that bacteria don't have cognition because they don't have brains, by people who were staunch Fodorians. Weird feeling
0 replies · 0 retweets · 1 like · 74 views
Pablo Contreras Kallens @pcontrerask
@roger_p_levy @glupyan @weGotlieb @xsway_ I don't know if that exchange ever actually happened in published form, so it's just a hunch, and the "weak" version can now be considered a contender. But labeling it as a "modern" version, as if the other option were just a strawman, is a bit too revisionist for my taste.
0 replies · 0 retweets · 1 like · 100 views
Pablo Contreras Kallens @pcontrerask
@roger_p_levy @glupyan @weGotlieb @xsway_ I fear there's a little bit of squirming in this "weak" POS. I don't think generativists would have told you that the magic "amount" actually exists. I have trouble believing that if you asked "what if they got 2x the data? 3x? 100x?" the answer would have been "yes" at any point.
1 reply · 0 retweets · 2 likes · 280 views
Roger Levy @roger_p_levy
I urge interested readers to consult @weGotlieb et al. (in press): we find that the "GRNN" LSTM of @xsway_ et al. 2018, trained on a childhood's worth of English, shows substantial success on filler–gap dependencies and the island constraints on them. 1/3 direct.mit.edu/ling/article/d…
Quoting Masoud @linguistMasoud
It seems like in the recent discussion of the poverty of the stimulus & large language models two versions of PoS are being conflated. Version 1 says human children do not have enough data to learn abstract grammatical representations from *their* input. LLMs have nothing to say
3 replies · 14 retweets · 68 likes · 35K views
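The filler–gap methodology behind such results can be sketched as a 2x2 surprisal design: a filler (e.g., "what") should make a missing object less surprising, and the size of that licensing effect is the interaction term. A rough illustration using GPT-2 from transformers as a stand-in for the GRNN LSTM (sentences invented; this simplifies the wh-licensing measure used in this literature):

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def surprisal(sentence):
        ids = tok(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss
        return loss.item() * (ids.shape[1] - 1)  # total surprisal in nats

    # 2x2 design: filler presence x gap presence (sentences invented).
    s = {
        ("filler", "gap"):       "I know what the guest bought yesterday.",
        ("filler", "no_gap"):    "I know what the guest bought the gift yesterday.",
        ("no_filler", "gap"):    "I know that the guest bought yesterday.",
        ("no_filler", "no_gap"): "I know that the guest bought the gift yesterday.",
    }
    wh_gap = surprisal(s["filler", "gap"]) - surprisal(s["no_filler", "gap"])
    wh_nogap = surprisal(s["filler", "no_gap"]) - surprisal(s["no_filler", "no_gap"])
    # A model that has learned the dependency shows a negative interaction:
    # the filler lowers surprisal only where a gap actually occurs.
    print("licensing interaction:", wh_gap - wh_nogap)

Island sensitivity is tested the same way, by checking that the licensing interaction disappears when the gap sits inside an island configuration.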
Gary Lupyan @glupyan
Naive (?) question: how does OpenAI (and other big players) get access to all the copyrighted books etc?
4 replies · 1 retweet · 19 likes · 7.2K views
Pablo Contreras Kallens retweeted
Ben Ambridge @Benambridge
I've seen lots of threads about large language models (LLMs) and their implications for language acquisition BUT not many threads by language-acquisition specialists. So here's my two cents on how LLMs undermine SOME SPECIFIC PROPOSALS for acquisition of syntactic categories 1/n
4 replies · 13 retweets · 82 likes · 26.3K views
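One of the specific proposals at issue in the thread is that syntactic categories cannot be built by distributional analysis alone; the basic distributional move itself is easy to demonstrate. A toy sketch, assuming numpy and scikit-learn, that clusters words by their immediate left and right contexts (corpus invented):

    import numpy as np
    from sklearn.cluster import KMeans

    # Invented toy corpus; the distributional move: represent each word by
    # the words immediately to its left and right, then cluster.
    corpus = ("the dog chased the cat . the cat saw the dog . "
              "a dog likes a bone . the cat likes milk .").split()

    vocab = sorted(set(corpus))
    idx = {w: i for i, w in enumerate(vocab)}
    ctx = np.zeros((len(vocab), 2 * len(vocab)))
    for i, w in enumerate(corpus):
        if i > 0:
            ctx[idx[w], idx[corpus[i - 1]]] += 1               # left neighbor
        if i + 1 < len(corpus):
            ctx[idx[w], len(vocab) + idx[corpus[i + 1]]] += 1  # right neighbor

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(ctx)
    for k in range(3):
        print(k, [w for w, lab in zip(vocab, labels) if lab == k])

On a real corpus this kind of context clustering tends to separate nouns from verbs from determiners, which is the point of contention with proposal (b) in Ambridge's earlier tweet.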
Pablo Contreras Kallens @pcontrerask
@EvelinaLeivada @glupyan @Timothy0Leary But this framing assumes a view of language that's based on the work being questioned. "Learning rules" is an explanation, not a description, of productive language use. If a model that doesn't do the former can achieve the latter, why assume that that's what people are doing?
0 replies · 0 retweets · 1 like · 31 views
Prof. Evelina Leivada @EvelinaLeivada
@glupyan @Timothy0Leary Honest Q: Never mind the language disagreement, would you say it is *learning*? Meaning figuring out how a rule works, the domain in which it applies, its exceptions, its productivity, etc. Do they learn in the sense of critically engaging with, rather than absorbing, any input?
2 replies · 0 retweets · 0 likes · 215 views
Timothy O'Leary @Timothy0Leary
I'm bewildered. The argument here seems to be that if a statistical model of a phenomenon provides accurate predictions, then this is evidence that the phenomenon itself is nothing more than that same statistical model + the means used to fit it.
10 replies · 7 retweets · 102 likes · 63.9K views