Ed H. Chi

10.5K posts


@edchi

Research VP @ GoogleDeepMind. ACM Fellow.

California · Joined October 2007
3.8K Following · 12.8K Followers
Ed H. Chi retweeted
New Turing Institute@newturing·
Dr. @edchi will join GStar Summit 2026 for a keynote on foundation models and a panel discussion on frontier AI. Website for program details and ticket registration: summit.newturing.ai Be part of the conversation!
Ed H. Chi@edchi·
Related: github.com/juliusbrussee/… "Claude Code skill & Codex plugin that makes agent talk like caveman — cutting ~75% output tokens while keeping full technical accuracy. Now with 文言文 mode, terse commits, one-line code reviews, and compression tool that cuts ~46% tokens."
Ed H. Chi@edchi

A Hypothesis: Agents of the future could communicate to each other using thought tokens. Corollary: Humans will have to decode those thought tokens in order to find out what the agents are doing with each other. Note: GenZers are already doing this with their slang. No Cap.

Ed H. Chi@edchi·
@datawarmup being able to invent new rewards that are interesting might be the real definition of AGI. :D
Ed H. Chi@edchi·
A Hypothesis: Agents of the future could communicate to each other using thought tokens. Corollary: Humans will have to decode those thought tokens in order to find out what the agents are doing with each other. Note: GenZers are already doing this with their slang. No Cap.
Ed H. Chi@edchi·
While Agentic AI is brewing (and exploding), are you preparing for the next wave? What's the next wave of new ideas?
Ed H. Chi@edchi·
@spacegrep @denny_zhou @quocleix Imho, the attention paid over the length of CoT helps sharpen the model toward the correct decoding path. This CoT reasoning path helps the model perform 'next idea prediction' instead of just doing 'next token prediction'. That has been my intuition.
spacegrep🏳️‍🌈@spacegrep·
@edchi @denny_zhou @quocleix Hello! The fact that performance (in domains on which the LLM has been RL-trained) increases with the length of CoT feels unreal. There are arguments & papers on complexity theory etc. explaining the same. What, according to you, is the best argument that explains why CoT works so well?
Ed H. Chi@edchi·
@hyhieu226 @OpenAI @xai Good luck Hieu. Still remember the days when we worked together. Hope you recover well!
Hieu Pham@hyhieu226·
I have made the difficult decision to leave @OpenAI. Working here and at @xai before was a once-in-a-lifetime experience. I have met the best people. Not the best people in AI. Not the best people in tech. Simply the best people. At these companies, I have helped create extremely intelligent entities that will meaningfully improve our lives. The work makes me proud. But the intensive work came with a price. I cannot believe I would say this one day, but I am burnt out. All the mental health deterioration that I used to scoff at is real, miserable, scary, and dangerous. I am going to take a break from frontier AI labs, and will take my family to my home country Vietnam. There, I will try something new, and also search for a cure for my conditions. I hope I will heal. Until then.
Ed H. Chi retweeted
Thang Luong@lmthang·
Thrilled to share: #Aletheia, our math research agent, just solved 6/10 notoriously hard FirstProof problems autonomously, the best result in the inaugural challenge! To me, this is even bigger than our historic IMO-gold achievement last year; these problems challenge even top mathematicians. We share our results transparently, see paper and full thoughts in the thread. 👇
spacegrep🏳️‍🌈@spacegrep·
@edchi @denny_zhou @quocleix What do you think of the fact that the brain, the only gold standard "proof" of AGI we have today, uses more information-dense sensory signals like visual signals and acts in a local environment, & performs some (probably a good) level of thinking beyond what is done in language?
Ed H. Chi@edchi·
@spacegrep @denny_zhou @quocleix yes, major bugs IMHO:
- the current models are generally fixed minds, and only learn and compress new knowledge during gradient descent.
- the other learning / memory mechanism is in-context learning with CoT, but the model forgets it right after. Clearly insufficient.
Ed H. Chi@edchi·
In the social media era, kids actually feel more loneliness, ironically. As a former social computing researcher, I find this deeply depressing. freerangekids.com/surge-in-child… h/t Kristina Lerman #WSDM 2026 keynote
Ed H. Chi@edchi·
@denny_zhou @quocleix in my not-so-humble opinion:
- 1995-2015, the three most important ideas: reverse indexing with MapReduce, vector space models, deep learning.
- 2015-2025, the three most important ideas: seq2seq learning/transduction with transformers, CoT fine-tuning, and refinement using RL.
Zichen Liu@zzlccc·
Thrilled to share that I’ve joined @GoogleDeepMind to work on Gemini post-training! I feel incredibly fortunate to be cooking on this sunny island under @YiTayML's leadership, within @quocleix's broader organization. Looking forward to enjoying RL research and pushing the frontiers of Gemini alongside such a brilliant team!
Ed H. Chi@edchi·
@bendee983 @denny_zhou Actually, in any large frontier lab, there are sufficient resources to do both. The question is incentives and allocation of energy.
Ben Dickson@bendee983·
@denny_zhou Don't want to read too much into this. From your post, I suppose DeepMind believes in the second approach, which sounds exciting. But I thought David Silver left because he wanted to work on new approaches. Or am I missing something here?
Ed H. Chi retweeted
Google AI@GoogleAI·
Announcing Personal Intelligence, a more personalized @GeminiApp designed just for you. How it works:
- Customized: With your permission, it reasons across your @Gmail, @YouTube, @GooglePhotos, and Search apps to share hyper-relevant and context-aware responses
- Secure: If enabled, you control which Google apps to connect to. This setting is off by default
- Useful: From travel plans based on your Google Photos to gym recommendations based on goals you’ve shared with Gemini, you get help tailored to your world
Personal Intelligence in beta is rolling out to Google AI Pro and AI Ultra subscribers in the U.S., with expansions to the free tier, more countries, and AI Mode in Search to come. Take a look at the Gemini app's personalized assistance in the clip below, then let us know what you would use it for!
Ed H. Chi retweeted
News from Google@NewsFromGoogle·
Joint Statement: Apple and Google have entered into a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google's Gemini models and cloud technology. These models will help power future Apple Intelligence features, including a more personalized Siri coming this year. After careful evaluation, Apple determined that Google's AI technology provides the most capable foundation for Apple Foundation Models and is excited about the innovative new experiences it will unlock for Apple users. Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple's industry-leading privacy standards.
Ed H. Chi@edchi·
Hot take: Model capability gap and switching cost will determine much of the AI development race in 2026.