Nico Collignon
@nccollignon

571 posts

building @kale__co · cognitive scientist working with cities

London · Joined March 2012
2.8K Following · 523 Followers
Nico Collignon retweeted
NB @Noahbolanowski
A (very) rare video of 1968's Cybernetic Serendipity exhibition at ICA, London (Aug 2 - Oct 20, 1968). From Visualisation: Art et Cybernétique, French Broadcasting Corp., 1968.
ArtMeta@artmetaofficial

Exploring Cybernetic Serendipity (1968-69) The culmination of 3+ years of effort by curator Jasia Reichardt, this early international exhibition of Digital Art would introduce hundreds of thousands of visitors to the world of computer-generated creativity. More below.

Nico Collignon retweeted
Seth Thompson @s3ththompson
Searching my library is now fast enough for live searches as I type! Querying 100k paragraphs and 20k figures takes ~10-30ms. The bottleneck is now 600-800ms for multi-modal reranking...
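Seth's numbers suggest a two-stage pipeline: a fast first-stage retrieval over the indexed paragraphs, followed by a slower multi-modal reranker. A minimal sketch of just the first stage, using a plain inverted index (his actual system presumably uses embeddings; all names here are illustrative):

```python
import re
from collections import defaultdict

def build_index(paragraphs):
    """Map each lowercase token to the set of paragraph ids containing it."""
    index = defaultdict(set)
    for pid, text in enumerate(paragraphs):
        for token in re.findall(r"\w+", text.lower()):
            index[token].add(pid)
    return index

def search(index, paragraphs, query, limit=10):
    """Fast first stage: score paragraphs by number of query tokens matched.
    A reranker would re-score this shortlist; that slower step is omitted."""
    scores = defaultdict(int)
    for token in re.findall(r"\w+", query.lower()):
        for pid in index.get(token, ()):
            scores[pid] += 1
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [paragraphs[pid] for pid in ranked[:limit]]

docs = ["the whole earth catalog", "a personal library", "live search as you type"]
idx = build_index(docs)
print(search(idx, docs, "live search"))  # → ['live search as you type']
```

Because the index lookup is a handful of dict accesses, this stage stays fast enough to run on every keystroke; the expensive reranking can then be restricted to the short candidate list.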
Nico Collignon retweeted
rauno @raunofreiberg
Video player with soul
Nico Collignon retweeted
Juan Mateos Garcia @JMateosGarcia
How Artificial Intelligence Shapes Science: Evidence from AlphaFold. A very useful contribution from @RyanReedHill and Carolyn Stein, the best economists of structural biology in the world.
Nico Collignon retweeted
Psyho @FakePsyho
This is just a simple hill climbing, so it's an extremely crude version of AlphaEvolve / OpenAI's AWTF scaffold / ALE-Agent. But it's also a good reminder that non-bleeding-edge ML is just throwing random shit at the wall and seeing what sticks.
Andrej Karpathy @karpathy

Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly manually well-tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This is the bread and butter of what I do daily, for two decades now. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of results of experiments and used that to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually previously, and they stack up and actually improved nanochat. Among the bigger things, e.g.:

- It noticed an oversight that my parameterless QKnorm didn't have a scaler multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale of course - you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
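The loop Karpathy describes (propose a change, run an experiment, keep it only if validation loss improves) is, as Psyho notes, hill climbing at heart. A toy sketch of that loop, with a quadratic `toy_loss` standing in for a real training run; the hyperparameter names and numbers are illustrative, not taken from nanochat:

```python
import random

def hill_climb(params, evaluate, n_trials=200, step=0.1, seed=0):
    """Greedy hill climbing: perturb one hyperparameter at a time and
    keep the change only if the (lower-is-better) score improves."""
    rng = random.Random(seed)
    best = dict(params)
    best_score = evaluate(best)
    for _ in range(n_trials):
        candidate = dict(best)
        key = rng.choice(sorted(candidate))   # pick one knob to nudge
        candidate[key] += rng.uniform(-step, step)
        score = evaluate(candidate)
        if score < best_score:                # "additive" change: keep it
            best, best_score = candidate, score
    return best, best_score

# Toy stand-in for a training run: loss is minimized at lr=0.3, wd=0.1.
toy_loss = lambda p: (p["lr"] - 0.3) ** 2 + (p["wd"] - 0.1) ** 2
params, loss = hill_climb({"lr": 1.0, "wd": 1.0}, toy_loss)
print(params, loss)
```

What an agent adds on top of this crude loop is memory: it reads the whole sequence of past results to decide which knob to try next, rather than sampling perturbations blindly.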

Nico Collignon retweeted
Johannes Mutter @JohannesMutter
Punch cards from the early days of computing have a unique graphic aesthetic. The first image is actually an SVG with <feTurbulence> filter. Gemini 3.1 Pro created it from a low res 200px photo.
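For reference, `<feTurbulence>` is a standard SVG filter primitive that generates Perlin-style noise, often used for paper and grain textures. A hedged sketch of the kind of SVG the tweet describes, written out from Python; the filter parameters and colors below are illustrative guesses, not the ones Gemini produced:

```python
# Minimal punch-card-style SVG: a noisy paper background built from
# <feTurbulence> + <feColorMatrix>, plus one "punched hole" rectangle.
# All parameter values are illustrative.
SVG = """<svg xmlns="http://www.w3.org/2000/svg" width="400" height="180">
  <filter id="paper">
    <feTurbulence type="fractalNoise" baseFrequency="0.9" numOctaves="3" result="noise"/>
    <feColorMatrix in="noise" type="matrix"
      values="0 0 0 0 0.93  0 0 0 0 0.89  0 0 0 0 0.80  0 0 0 0 1"/>
  </filter>
  <rect width="400" height="180" filter="url(#paper)"/>
  <rect x="30" y="40" width="8" height="18" fill="#222"/>
</svg>"""

with open("punchcard.svg", "w") as f:
    f.write(SVG)
```

Because the texture is procedural, the file stays tiny regardless of rendered size, which is what makes this approach attractive over embedding a raster photo.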
Nico Collignon retweeted
lucas gelfond @gucaslelfond
for the release of the Whole Earth Redux, I OCRed/indexed/embedded all 22,000 pages of the Whole Earth Catalog and built a searchable archive — it’s up at searchwhole.earth!
Nico Collignon retweeted
Anaïs @nomadic_anais
This Klee guy fucking loves diagrams
Nico Collignon retweeted
Laura ✨ @Lauracsc_
microsite using the Are.na API to showcase the diagrams in my 6+ year old research channel, plus quotes from readings.
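A minimal sketch of how such a microsite might pull a channel from the Are.na v2 API and pick out its image blocks. The `channel_url` helper and the sample payload are illustrative; the real site's code is not shown in the tweet:

```python
API = "https://api.are.na/v2"

def channel_url(slug, per=100):
    """URL for a channel's contents (Are.na v2 API); `per` controls page size."""
    return f"{API}/channels/{slug}?per={per}"

def image_blocks(channel):
    """Pick out image blocks from a channel payload. Are.na tags each
    block with a 'class' such as 'Image', 'Text', or 'Link'."""
    return [b for b in channel.get("contents", []) if b.get("class") == "Image"]

# Offline example payload shaped like the API response (illustrative data):
sample = {"contents": [
    {"class": "Image", "title": "diagram-01"},
    {"class": "Text", "content": "a quote from a reading"},
]}
print([b["title"] for b in image_blocks(sample)])  # → ['diagram-01']
```

A real fetch would GET `channel_url("your-channel-slug")` and page through results; the filtering logic stays the same.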
Nico Collignon retweeted
Kat ⊷ the Poet Engineer @poetengineer__
these are gems ✨ i made a little interface for myself to better browse all these 300+ pdfs on one page, without scrolling. it comes with two modes: stable (grid) and chaotic (a more playful k-d tree layout)
Prathyush @prathyvsh

Yearly reminder that Bret Victor has a frequently updated catalog of some of the very best material relevant to computation / user interface design over here: worrydream.com/refs/
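For the "chaotic" mode, a k-d tree recursively splits points on alternating axes, which naturally produces an irregular nested partition of the page. A minimal sketch of the data structure only; how Kat maps tree nodes to screen positions isn't shown in the tweet, so that step is left out:

```python
def build_kdtree(points, depth=0):
    """Recursively split 2-D points on alternating axes (x, then y);
    each node holds the median point and its left/right subtrees."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

# Illustrative document positions in a unit square:
docs = [(0.2, 0.7), (0.9, 0.1), (0.5, 0.5), (0.1, 0.2), (0.8, 0.8)]
tree = build_kdtree(docs)
print(tree["point"])  # → (0.5, 0.5), the median on x at the root
```

A layout pass would then assign each subtree a rectangle of the page, splitting it at the node's coordinate, which is what gives k-d tree layouts their playful, uneven look compared to a uniform grid.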

Nico Collignon retweeted
Ethan Mollick @emollick
I am surprised that we don’t see more governments and non-profits going all-in on transformational AI use cases for good. There are areas like journalism & education where funding ambitious, civic-minded & context-sensitive moonshots could make a difference and empower people.
Nico Collignon retweeted
Coby @Cobylefko
Madrid prioritized people over cars along the River Manzanares and the results are phenomenal. Every city with waterfront highways should be doing this!
Nico Collignon retweeted
nicola 🏟️ @iamnotnicola
We are announcing a new ~£50m research funding programme to make AI agents in the wild secure. The call for proposals is now open for £300k-£3m grants until March 24, 2026. (Programme: Scaling Trust at @ARIA_research - see thread)