Kyle Cranmer

28.1K posts

@KyleCranmer

Director Data Science Institute @UWMadison @datascience_uw. EiC @MLSTjournal. Physics, stats/ML/AI, open science. same handle @sigmoid.social and bsky

New York · Joined June 2013
3.3K Following · 17.5K Followers
Jim Halverson @jhhalverson
@KyleCranmer Our grants office is currently refusing to meet this "very strong" encouragement. Have you heard anything about what it means for proposals that remain collaborative?
1 reply · 0 reposts · 0 likes · 121 views
Kyle Cranmer @KyleCranmer
The shot heard around the (US scientific research) world
[image]
7 replies · 4 reposts · 75 likes · 37.4K views
Kyle Cranmer @KyleCranmer
Neutrino beam going through that white spot in the wall… sure, if you say so.
[image]
1 reply · 0 reposts · 3 likes · 363 views
Kyle Cranmer @KyleCranmer
Visiting @Fermilab to give a “Wine & Cheese” seminar on Emerging Patterns in AI for Science.
[images]
2 replies · 1 repost · 14 likes · 3.2K views
Kyle Cranmer retweeted
Aakash Gupta @aakashgupta
Zuckerberg paid $14.3 billion for a 28-year-old who had never trained a frontier model. Nine months later, that bet just shipped.

The benchmark table tells you exactly what kind of lab Wang built. Muse Spark leads or ties Opus 4.6 and GPT 5.4 on multimodal perception, health queries, and visual reasoning: MedXpertQA, SimpleVQA, ScreenSpot Pro, CharXiv. These are all data-quality-sensitive benchmarks where training-set curation determines the ceiling.

Where it gets destroyed: ARC AGI 2 (42.5 vs 76.5 Gemini), Terminal-Bench (59.0 vs 75.1 GPT 5.4), GDPval office tasks (1444 vs 1672 GPT 5.4). Coding and abstract reasoning, the exact categories where architecture innovation and RL scaling matter more than data.

This is a data-labeling CEO's model. The fingerprints are all over the results. Wang spent seven years learning which benchmarks respond to better data and which ones require something else entirely. Muse Spark maxed out the first category and exposed the gap in the second.

The $14.3B question was always whether the guy who built the best data pipeline in AI could build the best model. The answer so far: he built the best model at the things data pipelines solve, and a mediocre one at everything else.

The move nobody's pricing: Meta said larger models are already in development, private API today, open-source future versions. Wang called this "step one." If the next model closes the coding and reasoning gap, Meta goes from also-ran to a three-horse race. If it doesn't, they spent $14.3 billion to build a very good medical chatbot for 3 billion users.

Both outcomes are interesting. Only one justifies the stock moving 9%.
Alexandr Wang @alexandr_wang

1/ today we're releasing muse spark, the first model from MSL. nine months ago we rebuilt our ai stack from scratch. new infrastructure, new architecture, new data pipelines. muse spark is the result of that work, and now it powers meta ai. 🧵

88 replies · 229 reposts · 2.6K likes · 987.9K views
Kyle Cranmer retweeted
datascience@uw @datascience_uw
Join DSI and the Sustainability Research Hub this Thursday at 3pm for a Data Center Working Group chalk talk by Anna Haensch, who will provide a policy and legislative overview of data centers in Wisconsin. RSVP and more info: dsi.wisc.edu/2026/03/30/dat…
[image]
0 replies · 1 repost · 3 likes · 359 views
Kyle Cranmer retweeted
Andrej Karpathy @karpathy
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
2.8K replies · 6.7K reposts · 56.4K likes · 20M views
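A minimal sketch of the "compile raw/ into a wiki" step described above, assuming a generic llm() completion helper (hypothetical; a stand-in for whatever model client or CLI you actually use). Directory names, the prompt, and the index format are illustrative, not Karpathy's actual tooling.

```python
# Sketch: incrementally "compile" raw/ documents into a markdown wiki.
# llm() is a hypothetical completion helper -- swap in your own client/CLI.
from pathlib import Path

RAW, WIKI = Path("raw"), Path("wiki")

def llm(prompt: str) -> str:
    raise NotImplementedError("call your LLM of choice here")

def compile_wiki() -> None:
    WIKI.mkdir(exist_ok=True)
    index = []
    for src in sorted(RAW.glob("*.md")):
        out = WIKI / src.name
        if not out.exists():  # incremental: only compile new sources
            article = llm(
                "Rewrite this document as a wiki article with a short TL;DR, "
                "key concepts as [[backlinks]], and open questions:\n\n"
                + src.read_text()
            )
            out.write_text(article)
        index.append(f"- [[{src.stem}]]")
    # Maintain a top-level index so an agent can navigate without fancy RAG
    (WIKI / "index.md").write_text("# Wiki index\n" + "\n".join(index) + "\n")

if __name__ == "__main__":
    compile_wiki()
```

Rerunning the script after dropping new files into raw/ only compiles the additions, which matches the incremental, LLM-maintained workflow in the tweet.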
Kyle Cranmer @KyleCranmer
Genesis, Genesis, Genesis, Genesis, .... FML
0 replies · 0 reposts · 4 likes · 910 views
Kyle Cranmer @KyleCranmer
While the proof sketch in the appendix of this early draft needs to be tightened up, I'm amazed that this approach actually has several optimality properties, ones you don't get with the traditional approach of fixed, task-agnostic representations.
0 replies · 0 reposts · 0 likes · 233 views
Kyle Cranmer @KyleCranmer
I also really like the differential optimization layer. It reminds me of a few years ago when 'differentiable programming' was trending and there was a lot of innovation in non-standard differentiable layers. Here we don't need to differentiate through Opt(), but still...
[image]
1 reply · 0 reposts · 0 likes · 254 views
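For readers who haven't met optimization layers, here is a minimal sketch of the general pattern the tweet gestures at: an inner solver Opt() runs in the forward pass and its solution is consumed downstream as plain data, with no gradients flowing through the solver. The objective, shapes, and downstream scalar are illustrative assumptions, not taken from the paper.

```python
# Sketch of an optimization "layer": an inner solve whose output is used
# downstream without backpropagating through Opt() itself.
import numpy as np
from scipy.optimize import minimize

def Opt(theta: np.ndarray) -> np.ndarray:
    """Inner problem: z* = argmin_z ||z - theta||^2 + 0.1 * sum(z^4).
    (Illustrative objective; the paper's Opt() will differ.)"""
    objective = lambda z: np.sum((z - theta) ** 2) + 0.1 * np.sum(z ** 4)
    return minimize(objective, x0=np.zeros_like(theta)).x

theta = np.array([1.0, -2.0, 0.5])  # upstream parameters
z_star = Opt(theta)                 # forward pass only
summary = float(z_star @ z_star)    # downstream scalar computed from z*
print(z_star, summary)
```

When gradients through the solver are needed, implicit differentiation (as in OptNet or cvxpylayers) is the usual route; the tweet's point is that this setup sidesteps that requirement.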
Kyle Cranmer @KyleCranmer
🚨 I'm happy to share a preview draft of a new paper, "Scalars Are All You Need for Multimodal Inference". Instead of the traditional approach to multimodal foundation models for science with task-independent embeddings, I outline an alternative strategy: theoryandpractice.org/2026/04/scalar…
[images]
1 reply · 7 reposts · 43 likes · 4.1K views
Kyle Cranmer @KyleCranmer
I thought this was pretty funny while on vacation, but I'm realizing the email is going to find me tomorrow when I go back to work.
[image]
0 replies · 1 repost · 12 likes · 817 views