Vilobh Meshram
@vilobhmm

Build. Learn. Share #ViewsPersonal

Mountain View, CA · Joined November 2009
2K Following · 167 Followers
2.3K posts
Vilobh Meshram retweeted
Demis Hassabis @demishassabis
I’ve always believed the No.1 application of AI should be to improve human health. That work started with AlphaFold, and now at @IsomorphicLabs with the mission to reimagine drug discovery and one day solve all disease! We are turbocharging that goal with $2.1B in new funding.
653 replies · 2.4K reposts · 19K likes · 2.6M views
Vilobh Meshram retweeted
Claude @claudeai
Join us at 1pm PT for a conversation with our co-founders Dario Amodei and Daniela Amodei, moderated by Chief Product Officer Ami Vora. x.com/i/broadcasts/1…
152 replies · 160 reposts · 1.9K likes · 295.1K views
Vilobh Meshram retweeted
ClaudeDevs @ClaudeDevs
In Claude Managed Agents, we’ve added multiagent orchestration, an outcomes loop for rubric-driven self-improvement, dreaming for self-learning, & webhooks.
135 replies · 430 reposts · 5.9K likes · 562.8K views
Vilobh Meshram retweeted
Demis Hassabis @demishassabis
I've always been passionate about games and they've played a big part in @GoogleDeepMind’s history, as the perfect proving ground for AI. Thrilled to announce this research partnership with @FenrisCreations - @EveOnline is one of the most extraordinary games ever built and has an amazing community. Very excited to work with @HilmarVeigar and the team!
108 replies · 149 reposts · 1.7K likes · 151.6K views
Vilobh Meshram retweeted
Nathan Lambert @natolambert
So much rests on which of these trend lines is more representative.
41 replies · 41 reposts · 552 likes · 103.5K views
Vilobh Meshram retweetledi
enjoy my life
enjoy my life@issei_sato·
Claude Mythos is a looped Transformer? Why does a loop improve performance? Our ICML 2026 paper formally answers this. arxiv.org/abs/2509.25239
11 replies · 89 reposts · 577 likes · 40K views
Vilobh Meshram retweeted
ARC Prize @arcprize
GPT-5.5 & Opus 4.7 on ARC-AGI-3:
- GPT-5.5: 0.43%
- Opus 4.7: 0.18%
We found 3 failure modes:
- True local effect, false world model
- Wrong level of abstraction from training data
- Solved the level, didn’t reinforce the reward
See our full analysis 🧵
73 replies · 138 reposts · 1.5K likes · 344K views
Vilobh Meshram retweeted
Demis Hassabis @demishassabis
Thanks @Konstantine and @sequoia for such a fun and wide-ranging chat! Loved the final question - von Neumann FTW 😀
Konstantine Buhler @Konstantine

Sir @demishassabis has a mind for synthesis. His favorite book is about a grand theory of everything. His preferred philosophers are seen by some as opposites. His life's work ranges from board games to Nobel-winning science. We're grateful to have hosted Demis and his @GoogleDeepMind team at @sequoia AI Ascent last week for a fireside chat. He kindly gave us permission to share this, and you can watch the full video here:
00:00 Intro
00:38 The Common Thread
01:29 Games as AI Training
02:59 Startup Advice 1.0
04:39 Founding DeepMind
07:25 DeepMind and AGI
08:52 AI for Science
10:37 Biology Breakthroughs and Isomorphic
12:42 New Sciences
20:29 Philosophy
61 replies · 81 reposts · 878 likes · 131.4K views
Vilobh Meshram retweeted
Google DeepMind @GoogleDeepMind
Think your vibe coding and creativity could be on the #GoogleIO main stage? Show us. As we count down to the start of the show, the best ideas built with @GeminiApp or @GoogleAIStudio will be featured – think protein simulators, physics engines, or math-based art. 🔢
70 replies · 66 reposts · 650 likes · 100.2K views
Vilobh Meshram retweeted
Demis Hassabis @demishassabis
Really enjoyed this conversation - thanks again @garrytan for hosting!
Y Combinator @ycombinator

Demis Hassabis (@demishassabis) has had one of the most extraordinary careers in tech. He started as a chess prodigy and video game designer at 17 before getting a PhD in neuroscience and going on to found DeepMind. His lab cracked Go, solved protein structure prediction with AlphaFold, and then gave it away free to every scientist on earth. That work won him the 2024 Nobel Prize in Chemistry. Today he leads @GoogleDeepMind, pushing toward the same goal he set as a teenager: AGI. On this special live episode of How to Build the Future, he sat down with YC's @garrytan to talk about what still needs to happen to get us to AGI, his advice for founders on how to stay ahead of the curve, and what the next big scientific breakthroughs might be.
01:48 — What’s Missing Before We Get To AGI?
03:36 — Why Memory Is Still Unsolved
06:14 — How AlphaGo Shaped Gemini
08:06 — Why Smaller Models Are Getting So Powerful
10:46 — The 1000x Engineer
12:40 — Continual Learning and the Future of Agents
13:32 — Why AI Still Fails at Basic Reasoning
15:33 — Are Agents Overhyped or Just Getting Started?
18:31 — Can AI Become Truly Creative?
20:26 — Open Models, Gemma, and Local AI
22:26 — Why Gemini Was Built Multimodal
24:08 — What Happens When Inference Gets Cheap?
25:24 — From AlphaFold to the Virtual Cells
28:24 — AI as the Ultimate Tool for Science
30:43 — Advice for Founders
33:30 — The AlphaFold Breakthrough Pattern
35:20 — Can AI Make Real Scientific Discoveries?
37:59 — What to Build Before AGI Arrives
40 replies · 86 reposts · 1.1K likes · 122.4K views
Vilobh Meshram retweeted
Andrej Karpathy @karpathy
Fireside chat at Sequoia Ascent 2026 from a ~week ago. Some highlights:

The first theme I tried to push on is that LLMs are about a lot more than just speeding up what existed before (e.g. coding). Three examples of new horizons:
1. menugen: an app that can be fully engulfed by LLMs, with no classical code needed: input an image, output an image, and an LLM can natively do the thing.
2. install .md skills instead of install .sh scripts. Why create a complex Software 1.0 bash script for e.g. installing a piece of software if you can write the installation out in words and say "just show this to your LLM"? The LLM is an advanced interpreter of English and can intelligently target installation to your setup, debug everything inline, etc.
3. LLM knowledge bases as an example of something that was *impossible* with classical code, because it's computation over unstructured data (knowledge) from arbitrary sources and in arbitrary formats, including simply text articles etc.
I pushed on these because in every new paradigm change, the obvious things are always in the realm of speeding up or somehow improving what existed, but here we have examples of functionality that either suddenly perhaps shouldn't even exist (1, 2) or was fundamentally not possible before (3).

The second (ongoing) theme is trying to explain the pattern of jaggedness in LLMs: how it can be true that a single artifact will simultaneously 1) coherently refactor a 100,000-line code base *and* 2) tell you to walk to the car wash to wash your car. I previously wrote about the source of this as having to do with the verifiability of a domain; here I expand on it as also having to do with economics, because revenue/TAM dictates what the frontier labs choose to package into training data distributions during RL. You're either in the data distribution (on the rails of the RL circuits) and flying, or you're off-roading in the jungle with a machete, in relative terms. Still not 100% satisfied with this, but it's an ongoing struggle to build an accurate model of LLM capabilities if you wish to practically take advantage of their power while avoiding their pitfalls, which brings me to...

The last theme is the agent-native economy: the decomposition of products and services into sensors, actuators, and logic (split up across all of the 1.0/2.0/3.0 computing paradigms), how we can make information maximally legible to LLMs, some words on the quickly emerging practice of agentic engineering and its skill set, related hiring practices, etc., possibly even hints/dreams of fully neural computing handling the vast majority of computation with some help from (classical) CPU coprocessors.
Stephanie Zhan @stephzhan

@karpathy and I are back! At @sequoia AI Ascent 2026. And a lot has changed. Last year, he coined “vibe coding”. This year, he’s never felt more behind as a programmer. The big shift: vibe coding raised the floor. Agentic engineering raises the ceiling. We talk about what it means to build seriously in the agent era. Not just moving faster. Building new things, with new tools, while preserving the parts that still require human taste, judgment, and understanding.
325 replies · 750 reposts · 5.7K likes · 854.7K views
Vilobh Meshram retweeted
Sundar Pichai @sundarpichai
Q1 earnings are in: 2026 is off to a terrific start. Our AI investments and full stack approach are lighting up every part of the business: Search queries are at an all-time high with AI continuing to drive usage. Google Cloud revenue grew 63%, Gemini models have incredible momentum, and it was our strongest quarter ever for consumer AI subs, driven by @GeminiApp. Thanks to our partners + employees around the world. Much more to share on our earnings call in 20 minutes… and at Google I/O in 20 days!
375 replies · 946 reposts · 9.8K likes · 1M views
Vilobh Meshram retweeted
Sundar Pichai @sundarpichai
You can now ask Gemini to create Docs, Sheets, Slides, PDFs, and more directly in your chat. No more copying, pasting, or reformatting, just prompt and download. Available globally for all @GeminiApp users.
602 replies · 1.7K reposts · 18.1K likes · 2M views
Vilobh Meshram retweeted
Sundar Pichai @sundarpichai
Hello. How are you? Thank you. I love you. Please. Some of the most frequently translated phrases of the past 20 years!

Google Translate began twenty years ago with a mission to help people understand one another, regardless of the language they speak. What started as a small experiment has become a global tool that helps over 1 billion users every month.

In that time, Translate has evolved from simple pattern matching to true understanding. In 2006, it relied on statistical machine learning to look for patterns in small word clusters. By 2016, we pioneered a shift to neural networks to move beyond literal word-for-word translations, and today we’re using our powerful Gemini models to make Translate even more helpful. We are moving from text to fluid, real-time conversations. With our latest models, you can even use your headphones as a personal interpreter that preserves your original tone and cadence - it’s an amazing experience!

One of the interesting things about AI is that as we make progress, we begin to take it for granted. If you met a person who could translate across a hundred languages faster than any human can, you would be so impressed. Today, one product does that for nearly 250 languages, and we kind of just shrug. Being able to say thank you in 250 languages is not something I take for granted.

So to the 1 billion who use Google Translate - merci, dhanyavaad, arigatō, gracias, and thank you! Let’s see what the next 20 years will bring.
301 replies · 404 reposts · 5K likes · 249.5K views
Vilobh Meshram retweeted
Garry Tan @garrytan
Truly an honor and blessing to host @demishassabis at YC today 🙏
72 replies · 60 reposts · 2.4K likes · 251.9K views