Andrew Williams

129 posts

@CluelessAndrew

PhD @Mila_Quebec. Forecasts, discovery, sequential decisions (single and multi-agent).

Montreal, QC · Joined April 2021
719 Following · 463 Followers
Andrew Williams retweeted
Gaurav Sahu @dem_fier
ever been here? open overleaf → write a paragraph → "hmm...this needs a citation" → open 15 different tabs → skim 8 abstracts → find the 1 actually relevant paper → format bibtex → paste it back on overleaf

if so, i built a plugin just for you. meet openleaf:
→ reads your paper paragraph by paragraph
→ searches major academic databases
→ filters out irrelevant papers using ai
→ one click to add BibTeX to your .bib

you'll also find the 🤝 friendly and 🔥 fire reviewers there. i don't think i need to tell you what they do :)

free. open source. no account. no data collection. works with ollama, openrouter, openai api and more.

github.com/demfier/openle…

dear algorithm, please show this to my fellow researchers in need 🙏
#overleaf #latex #opensource #academictwitter
Andrew Williams retweeted
Andrei Mircea @mirandrom
I gave a talk on LLM zero-sum learning dynamics last week at MSR Montreal. I went over a few things that were not in the paper but that I'm particularly excited about; one of those is the connection between generalization and zero-sum learning. youtu.be/UyK3DgWY7yw
Andrew Williams retweeted
David Duvenaud @DavidDuvenaud
LLMs have complex joint beliefs about all sorts of quantities. And my postdoc @jamesrequeima visualized them! In this thread we show LLM predictive distributions conditioned on data and free-form text. LLMs pick up on all kinds of subtle and unusual structure: 🧵
Andrew Williams retweeted
David Duvenaud @DavidDuvenaud
This is fun because LLMs can condition on free-form side information, and make predictions about anything. This turns qualitative knowledge into quantitative predictions. Here we condition Llama 3 on two datapoints, plus text. Changing the text changes the meaning of the data.
Andrew Williams retweeted
Perouz Taslakian @PerouzT
🚀 We have released our paper on ReTreever! 🌳🔍 ReTreever organizes and represents documents in a binary tree across various granular levels, balancing cost & utility while enhancing retrieval transparency. 📜 Read it here: arxiv.org/pdf/2502.07971 #AI @ServiceNowRSRCH 🧵👇
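The tweet describes ReTreever placing documents at the leaves of a binary tree so that coarser tree levels give cheaper, coarser representations. A minimal sketch of that tree layout, assuming a toy split rule (alphabetical median); the real method learns its routing, and `build_tree` is an illustrative helper, not from the paper:

```python
def build_tree(docs: list[str]) -> dict:
    """Recursively split documents into a binary tree of nested dicts,
    with single documents at the leaves."""
    if len(docs) <= 1:
        return {"leaf": docs}
    docs = sorted(docs)  # toy routing rule: alphabetical median split
    mid = len(docs) // 2
    return {"left": build_tree(docs[:mid]), "right": build_tree(docs[mid:])}

tree = build_tree(["alpha", "gamma", "beta", "delta"])
print(tree)
```

Truncating the recursion at a shallower depth would trade retrieval precision for cost, which is the coarse-to-fine granularity knob the tweet alludes to.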
Andrew Williams retweeted
Ahmed Masry @Ahmed_Masry97
Happy to announce AlignVLM📏: a novel approach to bridging vision and language latent spaces for multimodal understanding in VLMs! 🌍📄🖼️ 🔗 Read the paper: arxiv.org/abs/2502.01341 🧵👇
Andrew Williams retweeted
gian @giansegato
this paper is kinda wild. turns out that if you simply ask an LLM to straight out predict a timeseries like this:

```
(t1, v1)
(t2, v2)
(t3, v3)
(t4, v4)
(t5, v5)
```

making sure to prepend the prompt like this:

```
Here is some context about the task. Make sure to factor in any background knowledge, satisfy any constraints, and respect any scenarios.
((context))
```

it will just… do it? beating SOTA timeseries forecasting?!

llama 3.1 405b directly prompted is more precise at forecasting real-world series than:
- stats-based timeseries models (ARIMA, ETS)
- foundation models specifically trained for time series (eg. chronos)
- multimodal forecasting models (eg. time-LLM)

peak 'bitter lesson' behavior lol
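The recipe above is just string serialization: context paragraph first, then the observed (t, v) pairs, then ask for the next values. A minimal sketch of that prompt construction, assuming a hypothetical helper `build_forecast_prompt` (the exact serialization in the paper may differ; the LLM call itself is omitted):

```python
def build_forecast_prompt(context: str,
                          series: list[tuple[float, float]],
                          horizon: int) -> str:
    """Serialize free-form context plus observed (t, v) pairs into a
    plain completion prompt for an LLM forecaster."""
    header = (
        "Here is some context about the task. Make sure to factor in any "
        "background knowledge, satisfy any constraints, and respect any "
        f"scenarios.\n{context}\n"
    )
    points = "\n".join(f"({t}, {v})" for t, v in series)
    return f"{header}\n{points}\n# Predict the next {horizon} values:\n"

prompt = build_forecast_prompt(
    context="Daily website visits; traffic spikes on weekends.",
    series=[(1, 120.0), (2, 135.0), (3, 150.0)],
    horizon=2,
)
print(prompt)
```

The resulting string would then be sent as-is to a completion endpoint (e.g. llama 3.1 405b via any provider); changing only the `context` argument is what injects the qualitative side information.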
Andrew Williams retweeted
Andrei Mircea @mirandrom
📢 New paper “Language model scaling laws and zero-sum learning” @scifordl #NeurIPS2024 ℹ️openreview.net/forum?id=yBq2g… TL;DR: scaling improves LMs by mitigating zero-sum learning, a mechanism that could be targeted directly and independent of scale. W205 @ 4:30pm (1/12)🧵
Andrew Williams retweeted
Arjun Ashok @arjunashok37
Starting the workshop on Time Series in the Age of Large Models (TSALM) at #NeurIPS2024 with @tomaspfister's invited talk on Multimodal Time Series Modeling!
Andrew Williams retweeted
Tianyu Zhang @tianyu_zh
🚀 Excited to present our work on VCR: Visual Caption Restoration – the first VLM benchmark testing whether VLMs can focus on tiny but crucial details! 📍 Join us at #NeurIPS24 on Sunday, Dec 15, West Ballroom B 🛠️ Dive into the details: github.com/tianyu-z/VCR
Andrew Williams retweeted
Joan Rodriguez @joanrod_ai
🎉 Excited to introduce BigDocs! An open, transparent multimodal dataset designed for:
📄 Documents
🌐 Web content
🖥️ GUI understanding
👨‍💻 Code generation from images

We're also launching BigDocs-Bench, featuring 10 tasks to test models on:
➡️ Document, Web, GUI visual reasoning
➡️ Converting images into JSON, Markdown, LaTeX, SVG, and more!

📜 Paper: arxiv.org/pdf/2412.04626 huggingface.co/papers/2412.04…
🌍 Website: bigdocs.github.io
Andrew Williams retweeted
Benjamin Thérien @benjamintherien
Learned optimizers can’t generalize to large unseen tasks… until now! Excited to present μLO: Compute-Efficient Meta-Generalization of Learned Optimizers! Don’t miss my talk about it next Sunday at the OPT2024 NeurIPS workshop :) 🧵arxiv.org/abs/2406.00153 1/N
Andrew Williams retweeted
Will Bryk @WilliamBryk
Spent the weekend hacking together Exa embeddings over 4500 NeurIPS 2024 papers - neurips.exa.ai

Lets you:
- do otherwise impossible searches ("transformer architectures inspired by neuroscience")
- explore a 2D t-SNE plot
- chat with Claude about multiple papers
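Those "otherwise impossible" searches work by embedding the query and every abstract into the same vector space and ranking by similarity. A toy sketch of that idea, assuming a bag-of-words stand-in for a learned embedding (Exa's actual model and index are not public, and the paper texts here are invented):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words count vector (real systems use
    learned dense embeddings instead)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

papers = {
    "paper_a": "transformer architectures inspired by neuroscience",
    "paper_b": "stochastic gradient methods for convex optimization",
}
query = embed("neuroscience inspired transformers")
best = max(papers, key=lambda p: cosine(query, embed(papers[p])))
print(best)
```

A 2D map like the demo's t-SNE plot is the same pipeline one step further: project the per-paper vectors down to two dimensions and scatter-plot them.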
Andrew Williams retweeted
Arjun Ashok @arjunashok37
🧵 Excited to be at #NeurIPS2024 next week. Let's grab a coffee and chat if you're around! Also drop by at our workshop on Time Series in the Age of Large Models on Sunday 🗓️ We have an exciting schedule:
🎓 6 invited talks
🖼️ 60+ posters
🎤 11 contributed talks
✨ and more!