Andre Brown
@aexbrown
Group leader at MRC London Institute of Medical Sciences and Reader at Imperial College London


Delighted to share our new fund: researchrevival.org
> Soviet papers from the 50s!
> Antimalarial drugs from Mao’s China!
> Buried research on reality and consciousness!
Led by @WendiYan5 and seeded by @jwmares and @stuartbuck1

BBN built the ARPAnet and autonomous vehicles, but its R&D model went out of style. Could it still work today? I spent 2025 focused on this experiment. First results are in: it’s working! That’s why @janellehmtam and I are raising a fund to double down 🧵 freaktakes.com/p/the-bbn-fund

NSF is launching one of the most ambitious experiments in federal science funding in 75 years. The program is called Tech Labs, and the goal is to invest ~$1 billion to seed new institutions of science and technology for the 21st century. Instead of funding projects, the NSF will fund teams. I’m in the @WSJ today with a piece on why this matters (gift link): wsj.com/opinion/scienc…

Here’s the basic case:

1) Most federal science funding takes the form of small, incremental, project-based grants to individual scientists at universities.

2) The typical NSF grant is ~$250k/year to a professor with a couple of grad students and modest equipment over a few years. This is a perfectly reasonable way to fund some science, but it's not the only way.

3) A healthy portfolio needs more than one instrument. Project-based grants are like bonds: low-risk, steady, safe. But no one trying to maximize long-run returns would put 70% of their portfolio in bonds.

4) Yet that's basically what our civilian science funding portfolio looks like. Around 3/4ths of NSF and NIH grant funding is project-based.

5) Tech Labs is NSF's attempt to diversify that portfolio. The Tech Labs program is aiming for:
- $10-50 million/year awards per team
- 5+ year commitments
- Measuring impact through advancement up the Tech Readiness Level scale rather than papers published
- Up to ~$1 billion for the program
- Supporting research orgs outside traditional university structures

6) Scientific production looks very different than it did when the NSF launched 75 years ago. The lone genius at the chalkboard can only do so much. Frontier science + tech today is increasingly team-based, interdisciplinary, and infrastructure-intensive.

7) The team behind AlphaFold just won the Nobel Prize in Chemistry. It came from DeepMind, an AI lab with sustained institutional funding and full-time research teams. It would be near-impossible to fund this kind of work on a 3-year academic grant.

8) Same pattern at the @arcinstitute (8-year appointments, cross-cutting technical support teams) and @HHMIJanelia (massive infrastructure investments to map the complete fly brain). Ambitious science increasingly needs core institutional support, not a series of project grants stapled together.

9) Similarly, Focused Research Organizations (@Convergent_FROs) have showcased a new model supporting teams with concrete missions and predefined milestones to unlock new funding.

10) There’s a whole ecosystem of philanthropically supported centers doing amazing research, like the Institute for Protein Design, the Allen Institute, the Flatiron Institute, the Whitehead Institute, the Wyss Institute, the Broad: the list goes on.

11) But philanthropy can’t reshape American science alone. The federal government spends close to $200 billion each year on research and development, an order of magnitude more than even the largest foundations.

12) If we want to change how science gets done at scale, federal funding has to evolve. And the NSF and NIH don’t have dedicated funding mechanisms to support or seed these sorts of organizations.

13) Earlier this year, I started working on a related framework called “X-Labs” that built on all this exciting institutional experimentation that’s been happening within the private and philanthropic sectors. It’s time for the federal government to step into the arena: rebuilding.tech/posts/launchin…

14) Traditional university grants are still important for training the next generation of scientists and for certain kinds of curiosity-driven work. But after 75 years of putting nearly everything into one model, we should try something different.

15) And key program details are still being developed! You can reply to the Request for Information with suggestions or feedback on how to design this program here: nsf.gov/news/nsf-annou…

16) Science is supposed to be about experimentation. Science funding should be too.

Many great discoveries in biology came about because of LIMIT THINKING. Rather than focus on small tweaks to improve a system, it's sometimes better to abstract it and find that system's theoretical limits. Carnot did this with engines. Shannon did this with information. Hopfield did this with kinetic proofreading.

Limit thinking forces one to focus solely on the features of a system essential for its performance, so that one can make predictions or evaluations regardless of the specifics of how each individual system is built. It grounds problems in mathematics, as one cannot calculate limits without being precise about what is being measured and in what units. Once such limits have been calculated, they often drive rapid progress, signaling not only when we have reached diminishing returns, but also just how far we can aspire.

In 1712, for example, Thomas Newcomen constructed his newly designed “atmospheric engine” for Coneygree Coalworks. Each stroke of the 20-ton machine would raise about 37 liters of water from the flooded mines below. Although Newcomen's engines were cost-effective compared to horses or humans, they were still expensive to operate.

In 1763, while working at the University of Glasgow, James Watt was tasked with repairing the university’s scale model of the Newcomen engine. As he worked, Watt envisioned ways to improve the efficiency of the design. In 1776, he unveiled an engine with seemingly extraordinary modifications: it consumed 75-80 percent less fuel than Newcomen’s. Tasks that would have burned 100 kilograms of coal could now be done with a mere 20.

Although a triumph of engineering, Watt’s engine was nowhere near its optimal performance. The difference between Newcomen’s and Watt’s engines, in fact, is the difference between roughly 0.5 percent and 2.5 percent efficiency. These inventors could not have known this, because the concept of a theoretical limit — asking how efficient an engine design could be, in principle — had not yet been imagined.

Then, in 1824, a 28-year-old Nicolas Léonard Sadi Carnot set out to determine the fundamental limits of how heat could be converted into mechanical work. He wrote a 118-page booklet, of which he printed 600 copies at his own expense, that was largely ignored until 1834. Carnot’s work showed that what mattered for the efficiency of a heat engine was the temperature differential between the hot and cold reservoirs, rather than any particular design feature. He says of the motive power of heat: “Its quantity is fixed solely by the temperatures of the bodies between which it is effected.” In retrospect, this simple fact explained why the Watt engine was superior to the Newcomen engine: the separate condenser allowed for a larger temperature difference between the reservoirs.

The same style of thinking also applies to biology, as we explore in our new essay by David Jordan: asimov.press/p/limit-thinki…
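Carnot's limit has a simple closed form: the maximum fraction of heat convertible to work between two reservoirs is 1 − T_cold/T_hot (temperatures in kelvin). A minimal sketch of the arithmetic above; the reservoir temperatures here (~373 K low-pressure steam against ~288 K ambient air) are my own illustrative assumptions, not figures from the essay:

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Theoretical maximum efficiency of a heat engine between two reservoirs."""
    if not 0.0 < t_cold_k < t_hot_k:
        raise ValueError("need 0 < t_cold_k < t_hot_k (temperatures in kelvin)")
    return 1.0 - t_cold_k / t_hot_k

# Illustrative reservoirs for an early steam engine:
# low-pressure steam (~373 K) rejecting heat to ambient air (~288 K).
limit = carnot_efficiency(373.0, 288.0)
print(f"Carnot limit: {limit:.1%}")  # ~22.8%

# The essay's figures: Newcomen at ~0.5% vs. Watt at ~2.5% actual efficiency,
# a 5x improvement in fuel per task -- 100 kg of coal becomes 20 kg.
newcomen, watt = 0.005, 0.025
print(f"Coal needed after Watt: {100 * newcomen / watt:.0f} kg")  # 20 kg
```

Even under these generous assumptions, both engines sat far below the ~23 percent ceiling, which is exactly the gap that limit thinking makes visible.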

When I think of reasons to be optimistic about the UK, the talent and ambition unleashed by @ARIA_research is close to the top of the list. @ilangur and I spoke to @thetimes about how and why it works


Outsiders drive many of science's biggest breakthroughs. A draper discovered bacteria. An actress helped invent WiFi. But there's not enough room for them in modern science. In @WorksInProgMag, @LauraLungum and I explore why this matters and what to do about it! 🧵(1/10)
