Ryan Teehan

264 posts

@rteehas

PhD Student @nyuniversity | prev. @stabilityai | x-cofounder @carperai | prev. @uchicago @TTIC_Connect

Joined May 2022
1.4K Following · 297 Followers
Ryan Teehan @rteehas
@dilanesper I mean, the protests were severely repressed, it's hard to keep momentum for many years, and there was officially a ceasefire. Not really surprising that the protest landscape is different now
1 reply · 0 reposts · 0 likes · 643 views
Dilan Esper @dilanesper
isn't it pretty obvious that a whole academic and activist omnicause apparatus built around Palestine preexisted and was activated by October 7? You can't easily duplicate that.
Josh Kraushaar @JoshKraushaar

“Some academics think that the students themselves are different: Whether because of concerns about the worsening job market or a cultural shift rightward, they seem less interested in raising hell on campus.” theatlantic.com/ideas/2026/03/…

5 replies · 23 reposts · 322 likes · 20.9K views
James Chen @jchencxh
somehow, despite the proposal of “learning to compose abstractions from noisy data” being an incredibly general problem, there’s exactly one (1) industry lab that’s focusing on it outside of vidgen.
2 replies · 0 reposts · 24 likes · 2.1K views
Ryan Teehan @rteehas
@leothecurious I've been working on something related to this in my down time, which I really need to write up soon
0 replies · 0 reposts · 1 like · 35 views
davinci @leothecurious
yess, someone gets it!! this is very important for people to internalize and truly grok. supervising from a hierarchy of interdependent representations forces coherence across network layers, and rules out spurious features as those have less predictable components than their simpler robust and generalizable counterparts. i bet this works across modalities and will be a default aspect of generative models in the (hopefully) near future. predicting the raw data alone (be it text or pixels) is not constraining enough for the network to converge onto robust (see FER hypothesis) representations of the data.
James Chen @jchencxh

Supervising with only a very high-level target (at the extreme, think class labels) makes shortcut solutions easier. If the loss has to explain a hierarchy instead of just a high-level target, you impose more semantic constraints on the representation, and you get less spuriousness.

2 replies · 2 reposts · 29 likes · 2.7K views
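A minimal sketch of what that kind of hierarchical supervision could look like in practice. The three-level label hierarchy, module sizes, and loss weights below are illustrative assumptions, not anything from the thread:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicallySupervisedNet(nn.Module):
    """Encoder with an auxiliary head at each depth, so the loss must
    explain a coarse-to-fine hierarchy of targets rather than only the
    final high-level label. All sizes here are illustrative."""

    def __init__(self, in_dim=64, hidden=128, n_coarse=4, n_mid=16, n_fine=64):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.block3 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        # One head per level of the (hypothetical) label hierarchy.
        self.head_coarse = nn.Linear(hidden, n_coarse)
        self.head_mid = nn.Linear(hidden, n_mid)
        self.head_fine = nn.Linear(hidden, n_fine)

    def forward(self, x):
        h1 = self.block1(x)
        h2 = self.block2(h1)
        h3 = self.block3(h2)
        return self.head_coarse(h1), self.head_mid(h2), self.head_fine(h3)

def hierarchical_loss(logits, targets, weights=(1.0, 1.0, 1.0)):
    """Every level of the hierarchy must be explained, which is the
    'more semantic constraints on the representation' point above."""
    return sum(w * F.cross_entropy(l, t)
               for w, l, t in zip(weights, logits, targets))
```

A shortcut feature that happens to predict the fine label alone gets penalized whenever it fails to account for the coarser levels too.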
Ryan Teehan @rteehas
@tallinzen I don't really let LLMs touch my writing aside from checking for typos or suggesting alternatives to a clunky sentence tbh
0 replies · 0 reposts · 4 likes · 341 views
Tal Linzen @tallinzen
tried to use a couple of LLMs for feedback on a manuscript and wasn't so impressed with the results; they did catch a lot of typos, but the higher-level suggestions were pretty basic (and sometimes just lame). probably a "skill issue" in that I couldn't spend a lot of time prompt engineering as I had a paper deadline. what are people using that they actually find helpful?
12 replies · 1 repost · 25 likes · 8.5K views
Ryan Teehan @rteehas
@PetarV_93 Separately, I've always found Neil Ghani's category theory work interesting. I periodically try to find a deep learning application for his paper with Michael Abbott, Thorsten Altenkirch, and Conor McBride on differentiating data structures
0 replies · 0 reposts · 1 like · 37 views
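For context, the punchline of that line of work ("∂ for Data: Differentiating Data Structures") is that the derivative of a regular type is its type of one-hole contexts, and it obeys the familiar calculus rules. A quick reminder:

```latex
\begin{align*}
\partial_x\, x &= 1 && \text{(the hole itself)} \\
\partial_x\, K &= 0 && \text{($K$ a constant type)} \\
\partial_x (F + G) &= \partial_x F + \partial_x G && \text{(sum rule)} \\
\partial_x (F \times G) &= \partial_x F \times G + F \times \partial_x G && \text{(Leibniz rule)}
\end{align*}
```

For lists this works out to List(x) × List(x): a one-hole context is the prefix before the hole together with the suffix after it, i.e. the familiar list zipper.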
Petar Veličković @PetarV_93
@rteehas we actually used kripke's quus example to motivate one of our recent papers! arxiv.org/abs/2507.08796 in general many of our recent works concern out-of-distribution behaviours, so i am not surprised by the parallels to wittgenstein... but there is progress that can be made :)
3 replies · 0 reposts · 5 likes · 1.2K views
Ryan Teehan @rteehas
@PetarV_93 I like the idea of connecting a certain kind of extrapolation with equivariance to some transformation. Out of curiosity, have you explored making the transformation to which you are equivariant data-dependent?
1 reply · 0 reposts · 1 like · 44 views
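One toy way to make the question concrete: let a small network predict, from the input itself, the transformation that f should be equivariant to, and penalize the mismatch. Everything here (the cyclic-shift transform, the shift-predicting net, the penalty) is a hypothetical illustration, not anything from Petar's papers:

```python
import torch
import torch.nn as nn

def cyclic_shift(x, shifts):
    # Roll each sequence in the batch by its own (data-dependent) offset.
    return torch.stack([torch.roll(xi, int(k)) for xi, k in zip(x, shifts)])

class DataDependentEquivariance(nn.Module):
    """Penalize || f(T_x · x) - T_x · f(x) ||^2, where the transform T_x
    (a cyclic shift here) is itself predicted from the input x."""

    def __init__(self, dim=32):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
        self.shift_net = nn.Linear(dim, 1)  # predicts the shift amount from x

    def equivariance_penalty(self, x):
        # Discrete shift, so no gradient flows through the transform choice.
        k = self.shift_net(x).squeeze(-1).round().detach()
        lhs = self.f(cyclic_shift(x, k))   # f(T_x · x)
        rhs = cyclic_shift(self.f(x), k)   # T_x · f(x)
        return (lhs - rhs).pow(2).mean()
```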
Ryan Teehan @rteehas
@PetarV_93 Oh that's very cool, thanks for the pointer. I've always had a soft spot for Wittgenstein and his work inspired some of my research interests when applying to PhD programs. I guess it's not too surprising how many problems feel like a return to Wittgenstein or Hume, though
0 replies · 0 reposts · 1 like · 59 views
serendipitously erin @superinducting
@mean_field_zane @typeclonghouse They recognize that it has good free speech, but there aren’t enough ROTC students, courses are too hard, and too many alumni trying for academia. Shoulda gone to Auburn
[4 images]
1 reply · 0 reposts · 1 like · 77 views
Ryan Teehan reposted
Jack Lu @Jacklu_me
Excited to see my work with @rteehas on "When Does Verification Pay Off? A Closer Look at LLMs as Solution Verifiers" featured here! arXiv link: arxiv.org/pdf/2512.02304 👀
NYU Center for Data Science @NYUDataScience

Do stronger LLMs make better verifiers? Not necessarily when grading themselves. New work led by Courant PhD student Jack Lu (@Jacklu_me) and CDS Asst Prof Mengye Ren (@mengyer) shows that cross-family verification outperforms self-verification. nyudatascience.medium.com/study-reveals-…

0 replies · 6 reposts · 25 likes · 3.7K views
Shubhendu Trivedi @_onionesque
I have decided to join Google DeepMind. Very curious about the next leg.
89 replies · 19 reposts · 1.3K likes · 131.2K views
Ryan Teehan reposted
agentic learning ai lab @agentic_ai_lab
At Agentic Learning AI Lab, the best part of our research is working with the incredible students. They are the ones writing the code, running the experiments, and solving the hard problems in AI today. As we approach the end of the year, we are fundraising to ensure these students have the resources they need to thrive in 2026. Unlike big industry labs, we rely on grants and gifts to keep our research independent and open. If you’d like to support the next generation of AI researchers in our lab at NYU, please visit the link below!
1 reply · 9 reposts · 27 likes · 10.6K views
agentic learning ai lab @agentic_ai_lab
Lab gathering at #NeurIPS2025. Proud of this year’s work and excited about the ideas we’re building toward next!
[3 images]
8 replies · 7 reposts · 92 likes · 32.8K views
Ryan Teehan @rteehas
Was Hermann Weyl a big fan of Rilke? I was surprised to see him mentioned in the foreword of a translation of Sonnets to Orpheus.
[1 image]
0 replies · 0 reposts · 1 like · 185 views
Ryan Teehan @rteehas
Take a look at our recent study of test-time verification! We present some interesting results about, among other things, the benefits of choosing a verifier from a different model family than your solver.
Jack Lu @Jacklu_me

Wondering how to get the most out of LLM test-time verification? New study: “When Does Verification Pay Off? A Closer Look at LLMs as Solution Verifiers”. 🔍 37 models, 9 datasets 🔥 Self vs intra-family vs cross-family verification Result: verify across families! 🧵👇

0 replies · 0 reposts · 4 likes · 255 views
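For intuition, the setup being compared is roughly the following. This is a hedged sketch: `generate` stands in for whatever LLM API you use, the prompt wording is invented, and the model names are placeholders rather than the models from the paper:

```python
def verify_solution(problem, solver, verifier, generate):
    """Solver proposes a solution; the verifier (possibly from a
    different model family) grades the proposal."""
    solution = generate(solver, f"Solve the following problem:\n{problem}")
    verdict = generate(
        verifier,
        f"Problem:\n{problem}\n\nProposed solution:\n{solution}\n\n"
        "Is the proposed solution correct? Answer 'yes' or 'no'.",
    )
    return solution, verdict.strip().lower().startswith("yes")

# The three regimes the study compares:
#   self-verification:         verifier == solver
#   intra-family verification: different model, same family
#   cross-family verification: verifier from another family,
#                              the regime found to pay off most.
pairs = [
    ("family-A-solver", "family-A-solver"),    # self
    ("family-A-solver", "family-A-verifier"),  # intra-family
    ("family-A-solver", "family-B-verifier"),  # cross-family
]
```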