Bernardino Romera-Paredes
@ber24

Previously core team member of AlphaFold2, AlphaTensor, and FunSearch at Google DeepMind. Now starting something new.

London · Joined July 2009
464 Following · 832 Followers · 1.2K posts
Bernardino Romera-Paredes reposted
Quantinuum @QuantinuumQC
What if every quantum researcher had an army of students to help write quantum algorithms? LLMs are starting to serve as such a resource. We’ve partnered with @HivergeAI to use AI for quantum algorithm discovery, exploring practical quantum chemistry. quantinuum.com/blog/automated…
Bernardino Romera-Paredes reposted
Adam Zsolt Wagner @azwagner_
Really happy to share our new paper on using AlphaEvolve for mathematical exploration at scale, written with Javier Gómez-Serrano, Terence Tao, and @GoogleDeepMind's Bogdan Georgiev. We tested it on 67 problems and documented all our successes and failures. 🧵
Bernardino Romera-Paredes reposted
Alhussein Fawzi @AlhusseinFawzi
We challenged our intern @ramadan_al76760 (zero prior AI experience) to beat the CIFAR-10 training speed record using @hivergeai's algorithmic discovery engine. Result: Sub-2-second (!!) training for the first time ever.
Keller Jordan @kellerjordan0

New CIFAR-10 training speed record: 94% in 1.99 seconds on one A100
Previous record: 2.59 seconds (Nov. 10th 2024)
New record-holder: Algorithmic discovery engine developed by @hivergeai
Changelog:
- Muon: Vectorize NS iter and reduce frequency of 'normalize weights' step
1/3
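For context on that changelog line, here is a minimal sketch of the Newton-Schulz ("NS") orthogonalization iteration that the Muon optimizer applies to its gradient updates. This is an illustrative reconstruction, not the record-setting code; the function name is ours, and the quintic coefficients follow the publicly released Muon implementation.

```python
import numpy as np

def newton_schulz_orthogonalize(G: np.ndarray, steps: int = 5) -> np.ndarray:
    """Approximately orthogonalize G with a quintic Newton-Schulz iteration,
    the 'NS iter' inside Muon. Sketch only; coefficients follow the publicly
    released Muon code."""
    a, b, c = 3.4445, -4.7750, 2.0315       # quintic iteration coefficients
    X = G / (np.linalg.norm(G) + 1e-7)      # scale so the iteration converges
    transposed = X.shape[0] > X.shape[1]
    if transposed:
        X = X.T                             # iterate on the wide orientation
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X # one vectorizable NS step
    return X.T if transposed else X
```

Each step is a handful of matrix multiplies, which is why vectorizing it across parameter matrices and running it less often can shave wall-clock time.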

Bernardino Romera-Paredes
Falsest friend conjecture: "Inhabitable" is the only word (or else the longest one) such that:
- it has the exact same spelling in two different languages (English & Spanish)
- it means the exact opposite thing in each
Bernardino Romera-Paredes
We took on the challenge and put our system to work on the nanoGPT benchmark. @hivergeai tech discovered new algorithmic improvements beyond the existing optimizations. Check out the results in the PR github.com/KellerJordan/m… and read our blogpost hiverge.ai/blog/introduci…!
Andrej Karpathy @karpathy

Love this project: nanoGPT -> recursive self-improvement benchmark. Good old nanoGPT keeps on giving and surprising :)

- First I wrote it as a small little repo to teach people the basics of training GPTs.
- Then it became a target and baseline for my port to direct C/CUDA re-implementation in llm.c.
- Then that was modded (by @kellerjordan0 et al.) into a (small-scale) LLM research harness. People iteratively optimized the training so that e.g. reproducing GPT-2 (124M) performance takes not 45 min (original) but now only 3 min!
- Now the idea is to use this process of optimizing the code as a benchmark for LLM coding agents. If humans can speed up LLM training from 45 to 3 minutes, how well do LLM Agents do, under different kinds of settings (e.g. with or without hints etc.)? (spoiler: in this paper, as a baseline and right now not that well, even with strong hints.)

The idea of recursive self-improvement has of course been around for a long time. My usual rant on it is that it's not going to be this thing that didn't exist and then suddenly exists. Recursive self-improvement has already begun a long time ago and is under-way today in a smooth, incremental way. First, even basic software tools (e.g. coding IDEs) fall into the category because they speed up programmers in building the N+1 version. Any of our existing software infrastructure that speeds up development (google search, git, ...) qualifies. And then, if you insist on AI as special and distinct, most programmers now already routinely use LLM code completion or code diffs in their own programming workflows, collaborating in increasingly larger chunks of functionality and experimentation. This amount of collaboration will continue to grow.

It's worth also pointing out that nanoGPT is a super simple, tiny educational codebase (~750 lines of code) and covers only the pretraining stage of building LLMs. Production-grade codebases are *significantly* (100-1000X?) bigger and more complex. But for the current level of AI capability, it is imo an excellent, interesting, tractable benchmark that I look forward to following.

Bernardino Romera-Paredes
Quietly building, testing, iterating. Today we’re out of stealth. Excited to finally share what we’ve been building at @hivergeai: an algorithm factory that creates new algorithms, optimised for real-world impact. Stay tuned!
Alhussein Fawzi @AlhusseinFawzi

Thrilled to announce @hivergeai. Our goal is to build algorithmic superintelligence. See how we accelerate AI training and solve large-scale planning problems: hiverge.ai/blog/introduci…

Bernardino Romera-Paredes reposted
Anja Šurina @AnjaSurina
Excited to share our latest work on EvoTune, a novel method integrating LLM-guided evolutionary search and reinforcement learning to accelerate the discovery of algorithms! 1/12🧵
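For readers unfamiliar with the setup, here is a hypothetical sketch of the general pattern the tweet describes: an LLM proposes mutations of promising programs (evolutionary search) while an RL-style update trains the LLM on the rewards its proposals earn. The callables (`score`, `llm_propose`, `rl_update`) are placeholder assumptions, not EvoTune's actual API.

```python
import random

def evotune_loop(seed_programs, score, llm_propose, rl_update, rounds=100):
    """Sketch of LLM-guided evolutionary search with RL feedback.
    `score` evaluates a program, `llm_propose` asks the LLM to mutate a
    parent, and `rl_update` reinforces the LLM with the child's reward."""
    population = [(p, score(p)) for p in seed_programs]
    for _ in range(rounds):
        # tournament selection: best of a small random sample
        parent, _ = max(random.sample(population, min(3, len(population))),
                        key=lambda ps: ps[1])
        child = llm_propose(parent)          # LLM-guided mutation
        reward = score(child)
        population.append((child, reward))
        rl_update(parent, child, reward)     # reinforce high-scoring edits
        # keep a fixed-size elite population
        population = sorted(population, key=lambda ps: ps[1],
                            reverse=True)[:len(seed_programs)]
    return max(population, key=lambda ps: ps[1])
```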
Bernardino Romera-Paredes reposted
Pushmeet Kohli @pushmeet
Our team @GoogleDeepMind has released AlphaTensor-Quantum, a new method that improves quantum circuits by reducing T gates needed for quantum ops such as those used in Shor’s alg & chemistry sims. A step towards scalable Quantum Computing that can transform Science&Security.
Google DeepMind @GoogleDeepMind

In @NatMachIntell, we show how our AI system AlphaTensor-Quantum can make quantum computing more efficient. 🖥️⚡ By optimizing quantum circuits, it’s helping run calculations faster to save resources and accelerate discoveries. ↓ goo.gle/4iHAITd

Bernardino Romera-Paredes reposted
Dr. Mar Gonzalez-Franco @twi_mar
I love when people notice the secret sauce that is: things should just work! @jetscott: "I lifted my hand off the mouse, hand tracking was instantly back in action. Android XR adapted to whatever inputs are available: hands, eyes, voice, keyboards, mice or connected phones" 🧵
Scott Stein 👓🎲🪄 @jetscott

Android XR is coming in 2025, with developer preview announced now. And Samsung’s making a Vision Pro-like headset, and glasses after that. I wore the headset and Astra prototype glasses with displays…and all-seeing Gemini AI is the center of it all. cnet.com/tech/computing…

Bernardino Romera-Paredes reposted
Petar Veličković @PetarV_93
A clear step towards achieving my dream: building AI that assists competitive programmers 🧑‍💻 “This is an exciting approach to combine work of human competitive programmers and LLMs, to achieve results that neither would achieve on their own.” --Petr Mitrichev Details below! 🧵
Bernardino Romera-Paredes
Great episode! Asked whether an evolutionary approach is needed to get to a higher level, @_rockt said "Science, the way humans do it, is evolutionary search and I don't see any other way of how automated scientific process can work differently. It has to be evolutionary". 💯%
Tim Rocktäschel @_rockt

I had a fantastic time talking with @samcharrington (@twimlai) about @orionbooks' "Artificial Intelligence: 10 Things You Should Know" (geni.us/ArtificialInte…) and many exciting 2024 research papers (some of them from my teams) in the Open-Endedness community by outstanding researchers like @merrierm @jennyzhangzt @jeffclune @chrisantha_f @2ne1 @edwardfhughes @MichaelD1729 @ber24 @akbirkhan @_chris_lu_ @cong_ml @RobertTLange @j_foerst @hardmaru ...

Levels of AGI: Morris et al. Levels of AGI: Operationalizing Progress on the Path to AGI. ICML 2024. doi.org/10.48550/arXiv…
OMNI: Zhang et al. OMNI: Open-endedness via Models of human Notions of Interestingness. ICML 2024. arxiv.org/abs/2306.01711
Promptbreeder: Fernando et al. Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution. ICML 2024. arxiv.org/abs/2309.16797
Epistemology (@DavidDeutschOxf): Deutsch, D. (2012). The Beginning of Infinity. Penguin Books.
Open-Endedness: Hughes et al. Open-Endedness is Essential for Artificial Superhuman Intelligence. ICML 2024. arxiv.org/abs/2406.04268
Rainbow Teaming: Samvelyan et al. Rainbow Teaming: Open-Ended Generation of Diverse Adversarial Prompts. NeurIPS 2024. doi.org/10.48550/arXiv…
FunSearch: Romera-Paredes et al. Mathematical discoveries from program search with large language models. Nature, 625(7995), 468–475. doi.org/10.1038/s41586…
AI Debate: Khan et al. Debating with More Persuasive LLMs Leads to More Truthful Answers. ICML 2024. arxiv.org/abs/2402.06782
AI Scientist: Lu et al. The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery. arXiv 2024. doi.org/10.48550/arXiv…

Bernardino Romera-Paredes
Great to see FunSearch featured as one of this year’s top contributions to the field in the State of AI Report! As always, a highly recommended report on how everything around AI keeps evolving. It even includes yesterday’s great news about the Nobel Prize awarded to Demis & John
Nathan Benaich @nathanbenaich

🪩The @stateofai 2024 has landed! 🪩 Our seventh installment is our biggest and most comprehensive yet, covering everything you *need* to know about research, industry, safety and politics. As ever, here's my director’s cut (+ video tutorial!) 🧵
