Fondation Intelligence

32.3K posts

@IntelligenceTV

Fondation Intelligence / Intelligence Foundation

Québec, Canada · Joined August 2012
57 Following · 45.7K Followers
Fondation Intelligence retweeted
ICLR @iclr_conf
Announcing the #ICLR2026 Outstanding Paper Awards 🏆 Congratulations to:
7 replies · 98 reposts · 1K likes · 84.6K views
Fondation Intelligence retweeted
The Nobel Prize @NobelPrize
AI has already contributed to applications across all STEM fields. In this Nobel Prize Dialogue we look ahead to the ways that AI might transform science in the future. Demis Hassabis, Alison Noble and Paul Nurse join us to discuss how the scientific community can get the best out of AI. Together we will explore how to meet challenges such as data heterogeneity, transparency and the proprietary nature of AI tools. We will also consider access to resources and the growing skills gap. Given the growing demand for science to tackle so many of the problems confronting us, how do we imagine AI’s contribution will help us meet those needs? Register now to attend the event in London in May: bit.ly/4cYuMFc
23 replies · 169 reposts · 460 likes · 28.1K views
Fondation Intelligence retweeted
Jürgen Schmidhuber @SchmidhuberAI
Using only box-forwarding speed as the reward, our Stackelberg PPO automatically evolves robots with arms for pushing and legs for moving. The key idea is a novel game-theoretic view of structure–control co-design, yielding more effective optimization and dramatically better designs. Come see our poster at ICLR 2026 on Apr 25, 10:30 AM, at P4-#4810. With @YuhuiWangAI, @YanningD_AI, @oneDylanAshley. Paper: arxiv.org/abs/2603.15388 Project Page: yanningdai.github.io/stackelberg-pp…
14 replies · 64 reposts · 535 likes · 49.8K views
Fondation Intelligence retweeted
Chelsea Finn @chelseabfinn
Can LLMs generate new insights that build on prior research? GiantsBench is a new scientific-discovery benchmark that tests whether models can synthesize new insights given two parent papers. Paper + data + code: giants-insights.github.io
Joy He-Yueya @JoyHeYueya

Scientists often make breakthroughs by synthesizing ideas across papers. In our new paper, we ask whether a language model can anticipate this process: given two parent papers, can it generate the core insight of a future paper built on them? 🧵⬇️

19 replies · 114 reposts · 749 likes · 108.7K views
Fondation Intelligence retweeted
DurstewitzLab @DurstewitzLab
Unlike current AI systems, brains can quickly & flexibly adapt to changing environments. This is the topic of our perspective in Nature MI (rdcu.be/eSeif), where we relate dynamical & plasticity mechanisms in the brain to in-context & continual learning in AI. #NeuroAI
7 replies · 89 reposts · 475 likes · 24.7K views
Fondation Intelligence retweeted
Parmita Mishra @parmita
the most relatable thing I have ever seen in my entire life:
Sriram Krishnan @sriramk

had the pleasure of watching "The Thinking Game" over the weekend. on one level, it is about @demishassabis and the path of AI over the last 20 years, especially around solving the protein folding problem. On another level, it is about the act of multi-disciplinary science - experimentation, running into walls and persistence. recommended watch.

33 replies · 180 reposts · 3K likes · 479.6K views
Fondation Intelligence retweeted
Yi Ma @YiMaTweets
All done with my new course on Deep Representation Learning this semester. All lecture slides and video recordings are now available at the book website: ma-lab-berkeley.github.io/deep-represent… We believe that, together with the book, they will help students clarify the basic concepts and principles of intelligence.
13 replies · 256 reposts · 1.6K likes · 101K views
Fondation Intelligence retweeted
Demis Hassabis @demishassabis
Thrilled to celebrate 5 years of AlphaFold 2! It’s now been used by over 3 million researchers around the world to accelerate their vital research - and it was an honour of a lifetime for our work to be recognised last year with the Nobel Prize! Proof of AI’s potential to enable science at digital speed 🚀 To honour the anniversary, we’ve made The Thinking Game film available for free on our YouTube channel - it’s a great look behind the scenes of AlphaFold & our journey to AGI.
117 replies · 288 reposts · 3.5K likes · 241.9K views
Fondation Intelligence retweeted
Peyman Milanfar @docmilanfar
when you ask a PhD student when they’ll finish
18 replies · 77 reposts · 1.4K likes · 54.9K views
Fondation Intelligence retweeted
hardmaru @hardmaru
I’m co-organizing an “AI for Science: Algorithms to Atoms” social event during #NeurIPS2025 with Yann LeCun, Anima Anandkumar, Bill Dally, and Max Welling. If you want to talk about AI Scientist, World Models, and the future of AI-driven discovery, please come on Dec 5 at 3:30pm PT!
13 replies · 37 reposts · 356 likes · 84.9K views
Fondation Intelligence retweeted
Mustafa Suleyman @mustafasuleyman
Some of the best career advice I've ever gotten: comfort is the enemy of learning. If an opportunity is a little intimidating and feels like a stretch - it's probably the right opportunity.
114 replies · 771 reposts · 5.1K likes · 142.1K views
Fondation Intelligence retweeted
Andrej Karpathy @karpathy
As a fun Saturday vibe code project and following up on this tweet earlier, I hacked up an **llm-council** web app. It looks exactly like ChatGPT except each user query is 1) dispatched to multiple models on your council using OpenRouter, e.g. currently: "openai/gpt-5.1", "google/gemini-3-pro-preview", "anthropic/claude-sonnet-4.5", "x-ai/grok-4". Then 2) all models get to see each other's (anonymized) responses and review and rank them, and then 3) a "Chairman LLM" gets all of that as context and produces the final response.

It's interesting to see the results from multiple models side by side on the same query, and even more amusingly, to read through their evaluation and ranking of each other's responses. Quite often, the models are surprisingly willing to select another LLM's response as superior to their own, making this an interesting model evaluation strategy more generally. For example, reading book chapters together with my LLM Council today, the models consistently praise GPT 5.1 as the best and most insightful model, and consistently select Claude as the worst model, with the other models floating in between. But I'm not 100% convinced this aligns with my own qualitative assessment: qualitatively I find GPT 5.1 a little too wordy and sprawled, Gemini 3 a bit more condensed and processed, and Claude too terse in this domain.

That said, there's probably a whole design space of the data flow of your LLM council. The construction of LLM ensembles seems under-explored. I pushed the vibe coded app to github.com/karpathy/llm-c… if others would like to play. ty nano banana pro for the fun header image for the repo
Andrej Karpathy @karpathy

I’m starting to get into a habit of reading everything (blogs, articles, book chapters, …) with LLMs. Usually pass 1 is manual, then pass 2 is “explain/summarize”, pass 3 is Q&A. I usually end up with a better/deeper understanding than if I had moved on. It’s growing to be among my top use cases.

On the flip side, if you’re a writer trying to explain/communicate something, we may increasingly see less of a mindset of “I’m writing this for another human” and more “I’m writing this for an LLM”. Because once an LLM “gets it”, it can then target, personalize and serve the idea to its user.

909 replies · 1.5K reposts · 17K likes · 5.3M views
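The three-stage council flow the tweet describes (dispatch to several models, anonymized peer review, chairman synthesis) can be sketched roughly as below. This is an illustrative outline only, not code from the llm-council repo: the model names, prompts, and callables are stand-ins, and in the real app each callable would hit a model through OpenRouter.

```python
from typing import Callable, Dict

def llm_council(query: str,
                council: Dict[str, Callable[[str], str]],
                chairman: Callable[[str], str]) -> str:
    """Sketch of the three stages: dispatch, anonymized peer review, synthesis."""
    # Stage 1: dispatch the query to every council model independently.
    answers = {name: ask(query) for name, ask in council.items()}

    # Stage 2: each model reviews the anonymized set of all answers.
    anonymized = "\n\n".join(
        f"Response {i + 1}:\n{text}" for i, text in enumerate(answers.values())
    )
    reviews = {
        name: ask(f"Rank these responses to the query '{query}':\n{anonymized}")
        for name, ask in council.items()
    }

    # Stage 3: a chairman model sees everything and writes the final answer.
    context = (
        f"Query: {query}\n\nCouncil answers:\n{anonymized}\n\nReviews:\n"
        + "\n".join(f"{name}: {review}" for name, review in reviews.items())
    )
    return chairman(context)

# Toy stand-ins so the flow runs without any API key.
council = {
    "model-a": lambda prompt: f"A says: {prompt[:20]}",
    "model-b": lambda prompt: f"B says: {prompt[:20]}",
}
final = llm_council("What is HMC?", council,
                    chairman=lambda ctx: "Final: " + ctx[:30])
```

Keeping each stage behind a plain callable makes the data flow easy to rearrange, which is exactly the "design space" point the tweet raises.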
Fondation Intelligence retweeted
Alec Helbling @alec_helbling
Hamiltonian Monte Carlo frames sampling from a probability distribution as a physics problem. By endowing "particles" with momentum and simulating their energy and motion through Hamilton's equations you can efficiently explore a distribution.
30 replies · 234 reposts · 1.8K likes · 156.3K views
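The idea in the tweet (give each sample a momentum, integrate Hamilton's equations, accept or reject) can be sketched in a few lines. This is a minimal 1-D illustration under my own tuning choices, not material from the thread: momentum is resampled every iteration, the dynamics are integrated with the standard leapfrog scheme, and a Metropolis step corrects the discretization error.

```python
import math
import random

def hmc_sample(log_prob_grad, n_samples, step_size=0.3, n_leapfrog=15, seed=0):
    """Minimal 1-D Hamiltonian Monte Carlo sampler.

    log_prob_grad(q) must return (log p(q), d/dq log p(q)).
    """
    rng = random.Random(seed)
    q = 0.0
    logp, grad = log_prob_grad(q)
    samples = []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                    # fresh momentum each iteration
        current_h = -logp + 0.5 * p * p            # H = potential + kinetic energy
        q_new, logp_new, grad_new = q, logp, grad
        # Leapfrog integration of Hamilton's equations (dq/dt = p, dp/dt = grad log p)
        p_new = p + 0.5 * step_size * grad_new
        for i in range(n_leapfrog):
            q_new = q_new + step_size * p_new
            logp_new, grad_new = log_prob_grad(q_new)
            if i < n_leapfrog - 1:
                p_new = p_new + step_size * grad_new
        p_new = p_new + 0.5 * step_size * grad_new
        proposed_h = -logp_new + 0.5 * p_new * p_new
        # Metropolis accept/reject corrects the integrator's energy error
        if math.log(rng.random()) < current_h - proposed_h:
            q, logp, grad = q_new, logp_new, grad_new
        samples.append(q)
    return samples

# Target: standard normal, log p(q) = -q^2 / 2 up to a constant
def std_normal(q):
    return -0.5 * q * q, -q

draws = hmc_sample(std_normal, n_samples=2000)
mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)
```

The long, momentum-driven trajectories are what let HMC cross a distribution in a few proposals where a random-walk sampler would need many small steps.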