Louis Bradshaw
@loubbrad
128 posts

ML/CS PhD student at @C4DM. Interested in audio/multimodal.

London · Joined March 2023
248 Following · 176 Followers
Jake Tivy @jakkuh_t ·
best Linux distro for desktop use & gaming, goooo
591 replies · 21 reposts · 1.6K likes · 287.9K views
Louis Bradshaw reposted
Alex Spangher @ Neurips2025 @AlexanderSpangh ·
✨ Very overdue update: I'll be starting as an Assistant Professor in CS at University of Minnesota, Twin Cities, Fall 2026. I will be recruiting PhD students!! Please help me spread the word! [Thread] 1/n
[image attached]
40 replies · 141 reposts · 743 likes · 91.8K views
Louis Bradshaw reposted
Alex Spangher @ Neurips2025 @AlexanderSpangh ·
🚨🚨🚨 So excited to have Aria-Duet accepted to the NeurIPS 2025 Creative AI Track -- see us in San Diego!! This has really been the most fun I've had doing research in a long time!! Fun work with @loubbrad and @BlancheMinerva, supported by @AiEleuther. Check it out!!
4 replies · 7 reposts · 36 likes · 6.8K views
kalomaze @kalomaze ·
@zaptrem This is a side project attempting to replicate SNAC 32kHz on ~10k MIDI files' worth of audio. I'm a True Multimodal Believer™ and there's been basically no real work on omni models for audio (no, GPT-4o's speech is not a 'modality'). The state of neural audio codecs is REALLY bad.
5 replies · 0 reposts · 24 likes · 1.3K views
kalomaze @kalomaze ·
this stupid GAN training is finally stable
[image attached]
18 replies · 3 reposts · 273 likes · 38K views
Louis Bradshaw @loubbrad ·
My colleague @AlexanderSpangh and I are trying to film a demo for our piano autopilot on a modern Disklavier. If you're in London or LA and have access to one (or know someone who does), please DM me! #robotjazz
1 reply · 2 reposts · 10 likes · 744 views
Louis Bradshaw @loubbrad ·
@CaseyWescott No velocity tuning here -- this was actually trained on our own dataset: x.com/loubbrad/statu… More details coming soon!
Quoting Louis Bradshaw @loubbrad:
I’ll be at #ICLR2025 this week presenting our work on curating large datasets for symbolic music modelling. Excited to chat about generative music, audio language models, and audio/speech LLMs (DMs open!). 📄 Paper: openreview.net/pdf?id=X5hrhgn… 🔗 Dataset: github.com/loubbrad/aria-… 🗓️ Fri 25 Apr, 3 p.m. - 5:30 p.m. | Hall 3 + Hall 2B #116
1 reply · 0 reposts · 2 likes · 79 views
Casey Wescott @CaseyWescott ·
@loubbrad Awesome - Are you scaling data after the model to fit the Disklavier or using MAESTRO? My Disklavier's response to velocities seems to need massaging to get sounding like the simulations.
1 reply · 0 reposts · 0 likes · 110 views
Louis Bradshaw @loubbrad ·
Small teaser for our piano generative model -- enabling real time human-AI co-creation 🎹✨
1 reply · 4 reposts · 24 likes · 864 views
Louis Bradshaw reposted
Stella Biderman @BlancheMinerva ·
What are the most impactful areas to work in in open science research? Where is a lack of openness holding back science? Where do we as a community need to go? Come find out what's on my wishlist for 2027 on Monday April 28th, Hall 4 #5 at 9:10 am.
[image attached]
Quoting Qian Liu @sivil_taram:
🎉 Announcing the first Open Science for Foundation Models (SCI-FM) Workshop at #ICLR2025! Join us in advancing transparency and reproducibility in AI through open foundation models. 🤝 Looking to contribute? Join our Program Committee: bit.ly/4acBBjF 🔍 Learn more at: open-foundation-model.github.io #OpenScience #MachineLearning #FoundationModels 1/N
2 replies · 7 reposts · 57 likes · 6.9K views
Louis Bradshaw @loubbrad ·
Great panel on self-improving foundation models w/o human supervision at #ICLR2025. Bengio: 'I find the name of this workshop very concerning.' Audience: *laughs nervously*
[image attached]
0 replies · 0 reposts · 6 likes · 446 views
Louis Bradshaw @loubbrad ·
Also looking forward to discussing what new applications might be unlocked with the dataset we're releasing (~100k hours of MIDI)!
0 replies · 0 reposts · 2 likes · 149 views
Louis Bradshaw @loubbrad ·
@_AbrahamMathews For each composition (data point) in the figure, I computed its embedding by averaging the embeddings of all available versions of that composition - typically 20-50.
1 reply · 0 reposts · 1 like · 43 views
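The pooling step Louis describes (averaging the embeddings of all available versions of a composition into one composition-level embedding) can be sketched as below. The function name, dimensions, and toy values are illustrative stand-ins, not details from the thread.

```python
import numpy as np

def composition_embedding(version_embeddings: np.ndarray) -> np.ndarray:
    """Average per-version embeddings (rows) into one composition embedding."""
    # version_embeddings: shape (n_versions, dim); the thread mentions
    # typically 20-50 versions per composition
    return version_embeddings.mean(axis=0)

# toy example: 3 versions of one piece, 4-dim embeddings
versions = np.array([[1.0, 2.0, 0.0, 4.0],
                     [3.0, 2.0, 2.0, 0.0],
                     [2.0, 2.0, 1.0, 2.0]])
emb = composition_embedding(versions)  # → array([2., 2., 1., 2.])
```

Mean-pooling over versions smooths out performance-specific variation, so the resulting point reflects the composition rather than any single recording.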
Abraham Mathews @_AbrahamMathews ·
@loubbrad Looks great. Are the audio recordings for each composer from the same source?
1 reply · 0 reposts · 0 likes · 38 views
Louis Bradshaw @loubbrad ·
Created this t-SNE plot visualizing unsupervised embeddings of compositions by different classical composers. Interesting to see how the model clusters the history of classical (piano) music without any explicit exposure to composers, text labels, or audio.
[image attached]
1 reply · 3 reposts · 12 likes · 704 views
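A plot like the one described (t-SNE over unsupervised composition embeddings) can be reproduced with scikit-learn's standard `TSNE` API; the random embeddings below are stand-ins for the learned ones, since the thread's actual model and data are not shown here.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# stand-in for learned composition embeddings: 100 pieces, 64-dim
embeddings = rng.normal(size=(100, 64))

# project to 2-D for visualization; perplexity must be < n_samples
tsne = TSNE(n_components=2, perplexity=30, random_state=0, init="pca")
coords = tsne.fit_transform(embeddings)
print(coords.shape)  # (100, 2)
```

With real embeddings, coloring each 2-D point by composer would reveal the clustering the tweet describes, even though no composer labels were used during training.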
Louis Bradshaw reposted
Thomas Wolf @Thom_Wolf ·
I shared a controversial take the other day at an event and I decided to write it down in a longer format: I’m afraid AI won't give us a "compressed 21st century".

The "compressed 21st century" comes from Dario's "Machines of Loving Grace", and if you haven’t read it, you probably should; it’s a noteworthy essay. In a nutshell, the essay claims that, over a year or two, we’ll have a "country of Einsteins sitting in a data center", and that this will result in a compressed 21st century during which all the scientific discoveries of the 21st century happen in the span of only 5-10 years.

I read this essay twice. The first time I was totally amazed: AI will change everything in science in 5 years, I thought! A few days later, re-reading it, I realized that much of it seemed like wishful thinking at best. What we'll actually get, in my opinion, is “a country of yes-men on servers” (if we just continue on current trends).

Let me explain the difference with a small part of my personal story. I’ve always been a straight-A student. Coming from a small village, I joined the top French engineering school before getting accepted to MIT for my PhD. School was always quite easy for me: I could see where the professor was going, where the exam's creators were taking us, and could predict the test questions beforehand.

That’s why, when I eventually became a researcher (more specifically a PhD student), I was completely shocked to discover that I was a pretty average, underwhelming, mediocre researcher. While many colleagues around me had interesting ideas, I was constantly hitting a wall: if something was not written in a book, I could not invent it unless it was a rather useless variation of a known theory. More annoyingly, I found it very hard to challenge the status quo, to question what I had learned. I was no Einstein; I was just very good at school. Or maybe even: I was no Einstein in part *because* I was good at school.

History is filled with geniuses who struggled during their studies. Edison was called "addled" by his teacher. Barbara McClintock was criticized for "weird thinking" before winning a Nobel Prize. Einstein failed his first attempt at the ETH Zurich entrance exam. And the list goes on.

The main mistake people usually make is thinking Newton or Einstein were just scaled-up good students, that a genius comes to life when you linearly extrapolate a top-10% student. This perspective misses the most crucial aspect of science: the skill to ask the right questions and to challenge even what one has learned. A real scientific breakthrough is Copernicus proposing, against all the knowledge of his day (in ML terms, "despite all his training dataset"), that the earth may orbit the sun rather than the other way around.

To create an Einstein in a data center, we don't just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask. One that writes "What if everyone is wrong about this?" when all textbooks, experts, and common knowledge suggest otherwise.

Just consider the crazy paradigm shift of special relativity and the guts it took to formulate a first axiom like "let's assume the speed of light is constant in all frames of reference", defying the common sense of those days (and even of today…). Or take CRISPR, generally considered to be an adaptive bacterial immune system from the 80s until, 25 years after its discovery, Jennifer Doudna and Emmanuelle Charpentier proposed to use it for something much broader and more general: gene editing, leading to a Nobel Prize.

This type of realization ("we've known XX does YY for years, but what if we've been wrong about it all along? Or what if we could apply it to the entirely different concept of ZZ instead?") is an example of outside-of-knowledge thinking, or paradigm shift, which is essentially what drives the progress of science.

Such paradigm shifts happen rarely, maybe 1-2 times a year, and are usually awarded Nobel Prizes once everybody has taken stock of the impact. However rare they are, I agree with Dario that they take the lion's share in defining scientific progress over a given century, while the rest is mostly noise.

Now let's consider what we're currently using to benchmark recent AI models' intelligence improvements. Some of the most recent AI tests are, for instance, the grandiosely named "Humanity's Last Exam" or "Frontier Math". They consist of very difficult questions, usually written by PhDs, but with clear, closed-ended answers. These are exactly the kinds of exams where I excelled in my field.

These benchmarks test whether AI models can find the right answers to a set of questions we already know the answers to. However, real scientific breakthroughs will come not from answering known questions, but from asking challenging new questions and questioning common conceptions and previous ideas. Remember Douglas Adams' Hitchhiker's Guide? The answer is apparently 42, but nobody knows the right question. That's research in a nutshell.

In my opinion, this is one of the reasons LLMs, while they already have all of humanity's knowledge in memory, haven't generated any new knowledge by connecting previously unrelated facts. They're mostly doing "manifold filling" at the moment: filling in the interpolation gaps between what humans already know, somehow treating knowledge as an intangible fabric of reality.

We're currently building very obedient students, not revolutionaries. This is perfect for today's main goal in the field of creating great assistants and overly compliant helpers. But until we find a way to incentivize them to question their knowledge and propose ideas that potentially go against past training data, they won't give us scientific revolutions.

If we want scientific breakthroughs, we should probably rethink how we're currently measuring the performance of AI models and move to a measure of knowledge and reasoning able to test whether scientific AI models can, for instance:
- Challenge their own training-data knowledge
- Take bold counterfactual approaches
- Make general proposals based on tiny hints
- Ask non-obvious questions that lead to new research paths

We don't need an A+ student who can answer every question with general knowledge. We need a B student who sees and questions what everyone else missed.

---

PS: You might be wondering what such a benchmark could look like. Evaluating it could involve testing a model on some recent discovery it should not know yet (a modern equivalent of special relativity) and exploring how the model might start asking the right questions on a topic whose answers or conceptual framework it has not been exposed to. This is challenging because most models are trained on virtually all human knowledge available today, but it seems essential if we want to benchmark these behaviors. Overall this is really an open question, and I'll be happy to hear your insightful thoughts.
276 replies · 494 reposts · 2.5K likes · 409.9K views
Louis Bradshaw reposted
Georgi Gerganov @ggerganov ·
Today I was sent the following cool demo: two AI agents on a phone call realize they’re both AI and switch to a superior audio signal: ggwave
2.9K replies · 13.3K reposts · 92.1K likes · 17.5M views
Louis Bradshaw @loubbrad ·
Hacked together a real-time piano continuation demo this weekend. Piano sound = real-time input. Electric piano sound = real-time (ai) continuation streamed from my server over the LAN. Gotta get this hooked up to an acoustic self-playing piano, for both input/output, next week!
3 replies · 4 reposts · 22 likes · 1K views
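The architecture in this demo (live input events sent to a model server, with the "continuation" streamed back over the LAN) can be sketched with a plain TCP loopback. Everything here is a hypothetical toy: the JSON wire format, the octave-transposing "model", and the function names are stand-ins for whatever Louis actually runs, which the tweet does not specify.

```python
import json
import socket
import threading

def continuation_server(host="127.0.0.1", port=0):
    """Toy stand-in for the model server: replies to each incoming note
    event with the same note transposed up an octave, as a fake
    'continuation' event."""
    srv = socket.create_server((host, port))
    port = srv.getsockname()[1]

    def handle():
        conn, _ = srv.accept()
        with conn, conn.makefile("rwb") as f:
            for line in f:  # one JSON note event per line
                note = json.loads(line)
                reply = {"pitch": note["pitch"] + 12, "vel": note["vel"]}
                f.write((json.dumps(reply) + "\n").encode())
                f.flush()

    threading.Thread(target=handle, daemon=True).start()
    return port

# "client" side: send one live note, read back the streamed continuation
port = continuation_server()
with socket.create_connection(("127.0.0.1", port)) as c, c.makefile("rwb") as f:
    f.write(b'{"pitch": 60, "vel": 90}\n')
    f.flush()
    out = json.loads(f.readline())
print(out)  # {'pitch': 72, 'vel': 90}
```

A real setup would carry MIDI messages rather than JSON and run model inference server-side, but the request/stream loop has the same shape.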
Louis Bradshaw @loubbrad ·
@miniapeur Or you could access the PDFs online for free. Fortunately, we live in a time when extremely high-quality educational resources are freely available online : )
0 replies · 0 reposts · 0 likes · 104 views