Stuart Shieber (@[email protected])
1.3K posts

Stuart Shieber (@[email protected])
@pmphlt
Harvard computer science prof interested in scholarly communication.
Joined February 2010
88 Following · 1.6K Followers

Since we first distributed our #goodOA guide 10 years ago today, over 70 institutions have enacted simpatico #openaccess policies. We'd love to know (harvard.goodoa@gmail.com) if it's helped you too! bit.ly/goodoa
h/t @petersuber
Stuart Shieber (@[email protected]) retweeted

@SplitSeason1981 All records pre-2023 increase in size of bases.

Moderating principles blogs.harvard.edu/pamphlet/2022/…
Stuart Shieber (@[email protected]) retweeted

Which patents get accepted?
How do the standards for innovation evolve over time?
What kinds of technologies turn over more quickly?
In joint work with Mirac Suzgun @lukemelas @skominers and @pmphlt, we introduce a dataset that may help address questions like these [1/n]

Stuart Shieber (@[email protected]) retweeted

I do not think it is possible to say very much with confidence about the sentience of LaMDA — or of any other computer system — on the basis of transcripts. @pmphlt’s “The Turing Test as Interactive Proof” convincingly explains why not. dash.harvard.edu/handle/1/20272… 1/

@srush_nlp @redpony @yoavgo …but they'd probably argue that matching up of words and corresponding letters is not a *syntactic* requirement, much as in their argument against "respectively" constructions in section 4 of <jstor.org/stable/25001071>. Rather, it's a semantic or even pragmatic phenomenon. 2/2

@srush_nlp @redpony @yoavgo I'm assuming @yoavgo is thinking of an argument that the full phrase and its acronym display a cross-serial dependency, as in the CSDDSG example? I'd defer to Pullum and Gazdar (GaP) on this, ... 1/2

Exactly 11 years ago today I was on this trip: shbr.link/scouring
#uffington #whitehorse #preservation #libraries
Stuart Shieber (@[email protected]) retweeted

The results are in this new #ACL2021NLP paper: arxiv.org/abs/2106.06087
Hoping this encourages more work using mediation analysis to reveal internal mechanisms in language models. 4/4
Stuart Shieber (@[email protected]) retweeted

Sometime in the last month, the guide to good practices for university #openaccess policies that I maintain with @pmphlt passed the milestone of 300,000 page views.
bit.ly/goodoa
Stuart Shieber (@[email protected]) retweeted

@CarlosODonell That’s the most important thing I learned in college. (Unless you count meeting my wife. But we’re still married because I try to make it easy for her to give me an “A”.) Giving credit where it’s due, I learned it from @pmphlt.
Stuart Shieber (@[email protected]) retweeted

I've never asked for retweets before, but please retweet this in an effort to put an end to this madness: tinyurl.com/x3ywt3ca
@CNN et al. Please. It's "MLB," not "the MLB." It stands for "Major League Baseball." No one says "The Major League Baseball is working with. . . ."


@ThomasScialom @srush_nlp Yep. That was Alan Turing's point in his 1950 Mind paper "Computing machinery and intelligence".

@srush_nlp I tend to think that truly solving NLG would be very close to AGI, wouldn't it? Would be curious to know your opinion on that, Sasha?
Stuart Shieber (@[email protected]) retweeted

Every time someone says we are close to solving NLG they should be forced to read 5 of these summaries from Shashi Narayan's paper.
huggingface.co/datasets/viewe…