Mark Anderson
26.5K posts
@mandercorn
I live in NYC. I provide interesting links on #education. And stuff. https://t.co/4JZ31G6lc7
New York · Joined March 2010
4.5K Following · 5.2K Followers
Mark Anderson@mandercorn·
"*any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics such as training a smaller network) can be autoresearched by an agent swarm."
Andrej Karpathy@karpathy

Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference. I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly well manually tuned project.

This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This has been the bread and butter of my daily work for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of experimental results and used them to plan the next experiments. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually before, and they stack up and actually improved nanochat. Among the bigger ones:

- It noticed an oversight that my parameterless QK-norm didn't have a scale multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.
This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale of course - you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges.

And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
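A minimal toy sketch of the loop the thread describes: propose a change, evaluate a cheap proxy metric, keep what lowers the loss. Everything here is illustrative and assumed, not from nanochat: in the real workflow an agent proposes code changes and the proxy is training a small model, whereas this sketch perturbs two numeric knobs against a toy objective.

```python
import random

def cheap_proxy_loss(config):
    # Toy stand-in for "train a small model, read off val loss".
    # Pretends the loss is minimized at beta2=0.95, weight_decay=0.1.
    return (config["beta2"] - 0.95) ** 2 + (config["weight_decay"] - 0.1) ** 2

def propose(config, rng):
    # An agent would propose a concrete change; here we nudge one knob.
    new = dict(config)
    key = rng.choice(list(new))
    new[key] += rng.uniform(-0.02, 0.02)
    return new

def autoresearch(config, steps=700, seed=0):
    rng = random.Random(seed)
    best, best_loss = config, cheap_proxy_loss(config)
    accepted = []
    for _ in range(steps):
        cand = propose(best, rng)
        loss = cheap_proxy_loss(cand)
        if loss < best_loss:  # keep only changes that improve the proxy
            best, best_loss = cand, loss
            accepted.append(cand)
    return best, best_loss, accepted

start = {"beta2": 0.8, "weight_decay": 0.0}
best, loss, accepted = autoresearch(start)
```

The "promote to larger scales" step would then re-test only the accepted changes at a bigger depth, as the thread describes doing by hand.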

Mark Anderson retweeted
Daniel Willingham@DTWillingham·
Still hear this justification re: AI in classrooms: "It's out there & we can't be left behind." Friendly challenge: which happens more often?
1. Schools regretted waiting too long to adopt a technology.
2. Schools regretted adopting a technology before they knew how to use it.
Mark Anderson@mandercorn·
Because if it's more the latter, then maybe focus less on AI and more on public resources, infrastructure, and civic institutions.
Mark Anderson@mandercorn·
Are you more worried about AI slop or about whether human beings can adequately distinguish between AI slop and quality content?
Mark Anderson@mandercorn·
"The structure was never in the model. It was in the corpus. 👇"
Elan Barenholtz@ebarenholtz

(1/2) Just dropped a new paper: "World Properties without World Models: Recovering Spatial and Temporal Structure from Co-occurrence Statistics in Static Word Embeddings".

A key line of evidence for LLM "world models": linear probes recover city coordinates and historical dates from hidden states. @wesg52 & @tegmark did this with Llama-2 and got R²=0.91 for city locations. Very cool result. But is it world models or word statistics?

An alternative: maybe the structure isn't emerging inside the LLM. Maybe it was already latent in the training text itself, inherited from the systematic differences in how language describes different places and eras.

I ran the same probes on GloVe and Word2Vec, static word embeddings from 2013-2014, trained purely on distributional statistics; no layers, no attention, no contextual processing. R²=0.71–0.87 for city coordinates (see map). And the signal is selective, not a probe artifact. Latitude, longitude, temperature: all recoverable. Elevation, GDP, population: R² goes negative. The probe finds real distributional structure, not noise.

So: deflationary for world models. But inflationary for language. Co-occurrence statistics alone preserve a richer imprint of the physical world than anyone assumed. The words that surround "Nairobi" and the words that surround "Oslo" are systematically different, and that difference is enough to localize them. On a map.

The structure was never in the model. It was in the corpus. 👇
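A minimal sketch of the probing setup described above, on synthetic data rather than real embeddings: vectors carry a planted linear "coordinate" signal plus noise, and a linear probe (least squares) tries to recover it, scored by held-out R². The sizes, noise level, and variable names are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cities, dim = 500, 50

# Planted linear map from embedding space to (lat, lon), plus noise:
# stands in for structure the corpus statistics might encode.
W_true = rng.normal(size=(dim, 2))
E = rng.normal(size=(n_cities, dim))          # "static word embeddings"
coords = E @ W_true + 0.1 * rng.normal(size=(n_cities, 2))

train, test = slice(0, 400), slice(400, 500)

# Fit the linear probe on the train split, evaluate on held-out cities.
W_hat, *_ = np.linalg.lstsq(E[train], coords[train], rcond=None)
pred = E[test] @ W_hat

ss_res = ((coords[test] - pred) ** 2).sum()
ss_tot = ((coords[test] - coords[test].mean(axis=0)) ** 2).sum()
r2 = 1 - ss_res / ss_tot
```

The selectivity check in the thread corresponds to repeating this with targets (elevation, GDP) that have no planted linear relation to the embeddings, where R² should collapse.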

Mark Anderson@mandercorn·
"Lux may be about the lyrics, but I didn't need to know Italian in order to understand "Mio Cristo" any more than one needs to understand German to know that Beethoven's Ninth is about joy. And perhaps because the devastating beauty of Rosalía's voice could reflect in some elemental way the ordeal I had been through--both the physical discomfort and the euphoria of surviving it--the aria leveled my defenses and left me gutted. For a few minutes in a Ralphs parking lot, I sat in my car and sobbed uncontrollably." vogue.com/article/rosali…
Mark Anderson@mandercorn·
Which Rosalía songs from Lux are your favorites? Mine are Reliquia and Divinize. What an amazing voice
Mark Anderson retweeted
John A. List@Econ_4_Everyone·
When AI first arrived on the scene, I worried it would make economists, or even critical thinkers more broadly, less valuable. In my travels over the past 6 months to work with non-profits, for-profits, and government agencies, I have observed how people are actually using AI. I have watched them fumble around with insights they clearly did not create themselves. My fears are now assuaged.

One observation is that AI can produce something that in some cases is very wrong and in others looks nearly right, but is not quite there. Even if in time AI improves to "nearly right" or "exactly right" every time, a second issue still arises: explaining the materials. Explaining why an answer is almost correct but subtly off requires exactly the critical thinking skills that created the knowledge in the first place. Even explaining "exactly right" material takes critical thinking.

I've watched smart people confidently present AI-generated material they clearly don't fully understand. The words sound right. But when someone pushes back just a little bit, the sand castle crumbles. It is quite difficult to defend what you didn't build.

This leads me to the optimistic case for human expertise. The value of deeply understanding something, of having built the knowledge yourself, hasn't diminished with AI. If anything, it's increased. The people who can tell the difference between "nearly right" and "right" are more valuable than ever. The people who can explain the subtle details about something that is exactly right are invaluable. Creating knowledge still matters. Maybe now more than ever.
Mark Anderson@mandercorn·
A military that fears "wicked ideologies" more than it fears incompetence is a weaker, more fragile military. Real strength is the ability to engage with, deconstruct, and out-maneuver inflexible ideology.
Mark Anderson@mandercorn·
"the government has ... said that they will treat you like a foreign adversary—indeed, they will treat you in some ways worse than a foreign adversary—simply for refusing to capitulate to their terms of business. "
Derek Thompson@DKThomp

A quite brilliant essay on AI, the law, and the future of the republic. An upshot: If the US govt can go to any company, demand any contract language, and reserve the right to destroy your company if you have qualms, there is no such thing as private property rights in America.

Mark Anderson retweeted
M. Florencia Assaneo@FlorAssaneo·
Phonological processing = left dorsal stream? Not in Mexican Spanish-speaking children at reading onset. Here, bilateral ventral pathways predict phonological awareness. Neuroscience must broaden the populations it studies to build generalizable brain models. doi.org/10.1162/NOL.a.…
Mark Anderson retweeted
EduPapers@Edupapers1·
Teaching quality as a dynamic system: How school support, psychological needs, and motivation interact across needs–skills configurations dlvr.it/TRCtkV
Mark Anderson retweeted
John A. List@Econ_4_Everyone·
We all know that early childhood education is touted to improve test scores and earnings. But what about something deeper: the social preferences that shape how people navigate sharing, cooperation, and fairness throughout their lives? That was the question I addressed today during my lecture.

I discussed our work at CHECC, a large-scale field experiment that took place in Chicago Heights. Children aged 3-4 were randomly assigned to one of three groups: a full-time preschool program, a parenting program with financial incentives, or a control group. Then we came back several years later, when the kids were 6-8 years old, and ran incentivized experiments to measure their social preferences.

Two things jump out in the data: preschool made children significantly more egalitarian, and the parenting program, by contrast, made children place more weight on efficiency relative to fairness. The paper is here: ideas.repec.org/p/feb/framed/0…

Why does this matter? We spend enormous energy measuring whether early childhood programs raise math and reading scores. But these programs are also shaping the fundamental social preferences that determine how the next generation thinks about inequality, redistribution, and cooperation. If we want to understand why people differ in their attitudes toward inequality, part of the answer may trace back to their earliest institutional experiences.
Mark Anderson@mandercorn·
This study suggests that children who struggle with both reading and math face a "double burden" of anxiety that is specific to each area of difficulty. The findings emphasize that educators and parents should provide targeted support to help children build the specific skills they need and manage the anxiety they feel toward reading and math individually. journals.sagepub.com/doi/10.1177/07…