Tom Rainforth

372 posts


@tom_rainforth

Associate Professor in Machine Learning at the University of Oxford, Head of RainML Research Lab (https://t.co/uuBwQ2WdMN)

Oxford, England · Joined November 2016
293 Following · 5.8K Followers
Pinned Tweet
Tom Rainforth @tom_rainforth
We've got really good at utilizing data. But methods for acquiring that data are often still rudimentary. Our new review paper shows how Bayesian experimental design has recently been transformed, and now provides a powerful mechanism for acquiring data intelligently: arxiv.org/abs/2302.14545
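For reference, the central quantity in Bayesian experimental design is the expected information gain (EIG) of a candidate design ξ. A standard statement of it (given here for context, not quoted from the paper) is:

```latex
\mathrm{EIG}(\xi)
  = \mathbb{E}_{p(\theta)\, p(y \mid \theta, \xi)}
      \bigl[ \log p(y \mid \theta, \xi) - \log p(y \mid \xi) \bigr],
\qquad
p(y \mid \xi) = \mathbb{E}_{p(\theta)} \bigl[ p(y \mid \theta, \xi) \bigr].
```

Designs are chosen to maximize this quantity, i.e. to maximally reduce posterior uncertainty about the parameters θ in expectation over possible outcomes y.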
Tom Rainforth retweeted
Freddie Bickford Smith @fbickfordsmith
Active testing enables label-efficient model evals but can be computationally expensive. We show how to reduce costs and scale up to LLMs: arxiv.org/abs/2508.09093. Work led by Gabrielle Berrada. Find her at EurIPS, or @janundnik and me at NeurIPS in San Diego.
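As background on the underlying idea (a minimal sketch of generic active testing via importance sampling, not the method of the paper; `losses_proxy` and `true_loss_fn` are hypothetical names):

```python
import numpy as np

def active_test_risk(losses_proxy, true_loss_fn, n_labels, seed=0):
    """Estimate the mean test loss while labelling only n_labels points.

    losses_proxy: cheap per-point scores (e.g. model uncertainty) used to
                  build the acquisition distribution q.
    true_loss_fn: callable i -> true loss of test point i (needs its label).
    """
    rng = np.random.default_rng(seed)
    n = len(losses_proxy)
    # Acquisition distribution: preferentially label points we expect
    # to be informative about the overall loss.
    q = np.asarray(losses_proxy, dtype=float)
    q = q / q.sum()
    idx = rng.choice(n, size=n_labels, replace=True, p=q)
    # Importance weights 1 / (n * q_i) keep the estimator unbiased for
    # the uniform average of losses over all n test points.
    weights = 1.0 / (n * q[idx])
    return float(np.mean(weights * np.array([true_loss_fn(i) for i in idx])))
```

The labelling budget `n_labels` can be far smaller than the full test set while still giving an unbiased risk estimate; the paper's contribution, as described above, is making this kind of procedure cheap enough to apply to LLMs.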
Tom Rainforth retweeted
Jackson Atkins @JacksonAtkinsX
Apple and Oxford just made AI 6.5x better at problem-solving. The secret: it teaches AI agents to ask perfect questions. This rockets success rates from 14% to 91%. No need for fine-tuning or retraining. It runs on current models.

Here's how it works. It's a strategic loop designed for multi-turn conversations; at every step, the agent works to find the shortest path to the right answer:

1. Hypothesize: The agent creates an internal list of all possible solutions to the problem.
2. Score questions: It simulates asking various questions and scores each one on "Expected Information Gain" (EIG). This number represents how much a question is mathematically likely to shrink the list of possibilities.
3. Ask the best question: It asks the user only the single, highest-scoring question.
4. Update and repeat: Based on the answer, it filters its list of hypotheses, getting smarter with each interaction, and then begins the loop again for the next turn.

Why this matters for your AI strategy: this marks a shift from building passive "oracles" to proactive, question-asking agents.

- Business leaders: A 6.5x multiplier on task success is a lever for efficiency. This translates to fewer failed customer interactions, faster diagnostics, and more accurate personalization, a clear ROI on smarter AI.
- Practitioners: This is a deployment-time framework, not a new model. You can build this agent on top of existing LLMs today. It provides a principled way to overcome common multi-turn issues like inconsistency and context loss without fine-tuning or retraining.
- Researchers: This paper is a victory for information theory. It proves that a full EIG calculation is superior to heuristics like predictive entropy. It sets a new standard for how to build intelligent information-seeking agents.
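A rough sketch of the loop described above, under the simplifying assumption that each hypothesis deterministically fixes the answer to every question (my own illustrative reconstruction, not the paper's code; `answer_model`, `candidate_questions`, and `hypotheses` are hypothetical):

```python
import math
from collections import defaultdict

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def expected_information_gain(question, hypotheses, prior, answer_model):
    """EIG = entropy of the prior minus expected entropy of the posterior.

    answer_model(question, h) -> the answer hypothesis h would imply.
    With deterministic answers, conditioning on an answer just filters
    the hypothesis list and renormalizes.
    """
    # Probability mass of each possible answer under the prior.
    answer_mass = defaultdict(float)
    for h, p in zip(hypotheses, prior):
        answer_mass[answer_model(question, h)] += p
    # Expected posterior entropy, averaged over answers.
    expected_posterior = 0.0
    for answer, mass in answer_mass.items():
        posterior = [p / mass for h, p in zip(hypotheses, prior)
                     if answer_model(question, h) == answer]
        expected_posterior += mass * entropy(posterior)
    return entropy(prior) - expected_posterior

def next_question(candidate_questions, hypotheses, prior, answer_model):
    # Step 3 of the loop: ask only the single highest-EIG question.
    return max(candidate_questions,
               key=lambda q: expected_information_gain(
                   q, hypotheses, prior, answer_model))
```

After the user answers, the agent would drop the hypotheses inconsistent with that answer, renormalize, and repeat, which is the "update and repeat" step above.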
Tom Rainforth @tom_rainforth
I have an opening for a 2-year postdoc in probabilistic machine learning and/or experimental design. The application deadline is the 3rd of September. See here for details and how to apply: tinyurl.com/rainmlpostdoc2…
Andreas Kirsch 🇺🇦 @BlackHC
A small info-theory thread (or at least food for thought): why is the Bayesian Model Average the best choice? Really, why? I'll go through a naive argument (does anyone have better references?), simple lower bounds and decompositions, and pitch a "reverse mutual information". 1/15
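For reference, the Bayesian model average the thread interrogates is the standard posterior-weighted predictive (textbook definition, not a claim from the thread itself):

```latex
p(y \mid x, \mathcal{D})
  = \int p(y \mid x, \theta)\, p(\theta \mid \mathcal{D})\, \mathrm{d}\theta,
\qquad
p(\theta \mid \mathcal{D}) \propto p(\mathcal{D} \mid \theta)\, p(\theta).
```

The thread's question is why averaging predictions under the posterior in this way should be preferred over other ways of combining them.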
Tom Rainforth @tom_rainforth
I have an opening for a 2.5-year postdoc position in the RainML lab as part of my ERC grant on probabilistic machine learning and intelligent data acquisition. Application deadline 10th July 2024. See here for details and to apply: tinyurl.com/rainmlpostdoc
Tom Rainforth @tom_rainforth
I'm delighted to announce that from September I will officially be an Associate Professor (remaining at the Oxford stats department).
Tom Rainforth @tom_rainforth
Our new #ICLR2024 paper shows how LLMs can successfully check their own chain-of-thought reasoning without any fine-tuning or even examples, using an approach we call SelfCheck. Join me at poster 125 this afternoon to learn more. Paper: openreview.net/forum?id=pTHfA…
Tom Rainforth @tom_rainforth
In-context learning can learn novel input-output relationships beyond what can be picked up from the input context alone, but it doesn't behave like a conventional learning algorithm. Find out more at our ICLR poster #129 this afternoon. Paper: openreview.net/forum?id=YPIA7…, led by @janundnik
Tom Rainforth retweeted
Jannik Kossen @janundnik
Are you at ICLR? Have you heard that In-Context Learning in LLMs does not learn label relationships? Well, that's not true. Visit our poster TODAY to find out how LLMs incorporate label information. Spoiler: it's not Bayesian inference. Poster #129, May 7, 4.30 pm
Tom Rainforth retweeted
Tim Reichelt @TimReichelt3
I will be presenting our work on "Beyond Bayesian Model Averaging over Paths in Probabilistic Programs with Stochastic Support" at AISTATS in Valencia tomorrow (details in thread below). If you are interested in probabilistic programming, come and say hi at poster session 1!
Quoted tweet from Tom Rainforth @tom_rainforth:
Our new paper (arxiv.org/abs/2310.14888) shows that probabilistic programs with stochastic support are implicitly Bayesian model averages (BMAs), which leads to issues if our model is misspecified! w/ @TimReichelt3 and Luke Ong. A thread (1/5)
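To make "stochastic support" concrete, here is a minimal illustrative program in plain Python (my own example, not from the paper): which latent variables exist depends on an earlier random draw, so the program defines a mixture of two sub-models whose posterior implicitly weights them like a BMA.

```python
import random

def model():
    """A generative program with stochastic support: the set of latent
    variables depends on the value of the discrete draw use_quadratic."""
    use_quadratic = random.random() < 0.5   # acts as a 'model index'
    a = random.gauss(0.0, 1.0)              # latent shared by both paths
    if use_quadratic:
        b = random.gauss(0.0, 1.0)          # latent that exists on one path only
        return lambda x: a * x + b * x ** 2
    return lambda x: a * x

# Conditioning this program on data makes the posterior probability of each
# branch behave like a Bayesian model averaging weight over the two
# sub-models, which is the implicit BMA behaviour the paper analyses.
```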
Tom Rainforth @tom_rainforth
We are delighted to announce an ACM-TOPML special issue on "Probabilistic Programming". Please see the attached call for papers for details
[image: call for papers]
Tom Rainforth @tom_rainforth
@haus_cole We did this as a pretty direct follow-up: arxiv.org/abs/2106.13746. I think unfortunately the reality is that disentanglement is not generally viable without either strong inductive biases or some degree of supervision
Derek Parfait @haus_cole
Does anyone know of follow-ups to twitter.com/tom_rainforth/…? It kind of seems like it leaves us without a good way to approach disentangling in VAEs?
Quoted tweet from Tom Rainforth @tom_rainforth:
Disentangling Disentanglement in Variational Autoencoders, our new #icml2019 paper with @MathieuEmile, N. Siddharth, and @yeewhye. We extend the notion of disentanglement to a broader framework for learning structured representations. Paper: proceedings.mlr.press/v97/mathieu19a…