Fuheng Zhao
@FuhengZ

133 posts

Incoming assistant professor @UUtah | PostDoc @Snowflake | PhD from @dsl_ucsb | Prev intern @FoundationDB @Snowflake @GraySystemsLab

Joined March 2021
838 Following · 633 Followers
Fuheng Zhao @FuhengZ
This echoes a line of work in data summaries and sketches that aims to distinguish hot items from cold ones, providing different guarantees for hot and cold data.
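A classic example of such a summary (my own illustration, not from the post) is the Space-Saving algorithm: hot items keep tight count estimates, while cold items share a small pool of counters and get only loose, error-bounded estimates. A minimal sketch:

```python
def space_saving(stream, k):
    """Space-Saving heavy-hitters sketch with at most k counters.

    Hot items that stay in the summary get tight count estimates;
    cold items evict each other and inherit the minimum counter,
    so their estimates carry a recorded overestimate error.
    """
    counters = {}  # item -> (estimated_count, overestimate_error)
    for x in stream:
        if x in counters:
            c, e = counters[x]
            counters[x] = (c + 1, e)
        elif len(counters) < k:
            counters[x] = (1, 0)
        else:
            # Evict the item with the minimum count; the newcomer
            # inherits that count as its possible overestimate.
            victim = min(counters, key=lambda i: counters[i][0])
            c, _ = counters.pop(victim)
            counters[x] = (c + 1, c)
    return counters

# 50 hot "a"s and 30 hot "b"s survive; the cold singletons churn.
summary = space_saving(["a"] * 50 + ["b"] * 30 + list("cdefg"), k=3)
print(summary["a"], summary["b"])  # (50, 0) (30, 0)
```

With the stream above, the hot items' counts are exact (error 0), while whichever cold item remains carries a large recorded error — exactly the hot/cold split in guarantees the tweet describes.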
Fuheng Zhao retweeted
Boris Cherny @bcherny
@EthanLipnik 👋 Early versions of Claude Code used RAG + a local vector db, but we found pretty quickly that agentic search generally works better. It is also simpler and doesn’t have the same issues around security, privacy, staleness, and reliability.
Fuheng Zhao retweeted
Novak Djokovic @DjokerNole
Lost for words 🇦🇺
Fuheng Zhao retweeted
OpenAI @OpenAI
Introducing Prism, a free workspace for scientists to write and collaborate on research, powered by GPT-5.2. Available today to anyone with a ChatGPT personal account: prism.openai.com
Fuheng Zhao @FuhengZ
4 out of 5 guessed correctly at the @cidrdb Gong Show Quiz 😂. Thanks to @andy_pavlo and @pateljm for organizing the session and signing it. Amazing talks, and congrats to @tianyu_li_ on a great presentation. Also thanks to @amrelabbadi for taking us on a hike; otherwise I would have stayed home XD.
Fuheng Zhao retweeted
Matei Zaharia @matei_zaharia
Super excited that we’re open sourcing the Dicer autosharder! It’s become a critical piece of infrastructure in Databricks that’s made many of our systems more scalable and reliable, and it’s powered by really cool systems work. databricks.com/blog/open-sour…
Fuheng Zhao @FuhengZ
The future of cloud computing might be star computing? This is so cool!
Y Combinator @ycombinator

Congrats to @Starcloud_Inc1 on the launch of their first satellite, just 21 months from starting the company. This is the first NVIDIA H100 in space and paves the way for huge, solar-powered orbital data centers.

Fuheng Zhao retweeted
Alex Miller @AlexMillerDB
[PVLDB] Enhancing Transaction Processing through Indirection Skipping vldb.org/pvldb/vol18/p4… Whereas VMCache improves on pointer swizzling's complexity by removing the swizzling, this work points out that page and frame hints are highly effective, and it's okay if they're wrong.
Furong Huang @furongh
In 2010, I came to the U.S. straight from undergrad for a PhD. Fifteen years later the map looks messy, but the line of best fit is clear. 🤍

2010 — PhD Year 1: my advisor said, “Take the ML course.” I had never heard of ML. With the most supportive, inspiring advisor, I pivoted from electronic communications and cognitive radio to MRFs, graphical models, and generative models, grounded by a solid foundation in signals and probability theory.

Year 3 – we moved into tensor methods. I went all in on unsupervised learning and spectral methods.

Before graduation – a year of internships across MSR Boston and Redmond on AI for healthcare. I was so lucky to work with the best researchers as mentors. But biology humbled me. Long nights, protein and cell-slice data, multithreaded pipelines. Progress crawled and the spark dimmed. I chose to keep a CS spine: applications are welcome when they sharpen the core.

Fresh out of grad school – I joined the UMD faculty, then deferred a year to do a postdoc at MSR NYC. Online learning and RL paradise, ego check included. While others shipped papers, I went back to RL textbooks and rebuilt foundations in learning theory.

Back at UMD – I aimed for pure theory. Reality steered me to trustworthy AI, especially RL robustness. I also faced a truth: I missed the 2014 deep learning wave. That stung. It changed how I work.

2023 – LLMs arrived like a tide and I didn’t want to miss the wave again. I read restlessly, rallied my students, we pivoted and shipped.

2023 → now – we’re building toward foundation models for robotics: SMART, TACO, Premier-TACO, PRISE, Make-an-Agent, TraceVLA, FLARE, IVE, and more. The timing feels right. We’re going deep on robotics and physical intelligence. 🤖

What I’ve learned: careers have seasons. Adjust, ride the tide, and do not let the next wave leave you on the beach. If you’re mid-pivot too, I’m rooting for you, happy to swap notes. ✨

#Robotics #LLM #EmbodiedAI #PhysicalIntelligence #UMD #AcademicLife #ResearchJourney
Liangming Pan @PanLiangming
Life update: I've joined the School of Computer Science at Peking University @PKU1898 as an Assistant Professor! I'm looking for Ph.D. students, interns, and visiting researchers for my new research group. If you are interested in NLP and LLMs, check out my research at liangmingpan.bio
Fuheng Zhao retweeted
Mikita Balesni 🇺🇦 @balesni
New paper: Can LLMs do multi-step reasoning without chain-of-thought? Models can answer questions like "Who is the spouse of the singer of Imagine?". But is this true internal reasoning (Imagine->John Lennon->Yoko) or memorization/pattern matching? We now have a better answer!
Fuheng Zhao retweeted
Thinking Machines @thinkymachines
Today Thinking Machines Lab is launching our research blog, Connectionism. Our first blog post is “Defeating Nondeterminism in LLM Inference”.

We believe that science is better when shared. Connectionism will cover topics as varied as our research is: from kernel numerics to prompt engineering. Here we share what we are working on and connect with the research community frequently and openly.

The name Connectionism is a throwback to an earlier era of AI; it was the name of the subfield in the 1980s that studied neural networks and their similarity to biological brains.

thinkingmachines.ai/blog/defeating…
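One numerical fact underlying nondeterminism in LLM inference (my own illustration, not taken from the post) is that floating-point addition is not associative, so a reduction that changes its summation order — e.g., with batch size or kernel choice — can produce different results for the same input:

```python
# Floating-point addition is not associative: summing the same three
# numbers in different orders gives different answers.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0  (a and b cancel exactly first, c survives)
print(a + (b + c))  # 0.0  (c is lost when rounded into b, then b cancels a)
```

Scaled up to tensor reductions over thousands of terms, the same effect makes logits bitwise-dependent on how a kernel orders its partial sums.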
Fuheng Zhao retweeted
Aditya Parameswaran @adityagp
New research agenda we're kickstarting at Berkeley: redesigning data systems to serve the dominant workload of the future: agents! Agentic speculation is massive, heterogeneous, steerable, and redundant: properties data systems can better support and take advantage of. Take a look: arxiv.org/abs/2509.00997
Fuheng Zhao retweeted
Ming Yin @MingYin_0312
I implemented GRPO and DPO from scratch in vanilla PyTorch to unravel every piece of the training details. Hope it can be helpful for those who care about the implementation details of these algorithms. 👉 github.com/mingyin0312/RL… #AI #RL #LLM
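One such detail (a sketch of my own, not taken from the linked repo): GRPO replaces a learned value baseline with group-relative advantages — each sampled completion's reward is normalized against the other completions drawn for the same prompt:

```python
import statistics

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages for GRPO: normalize each sampled
    completion's scalar reward by the mean and std of its own group
    (G completions sampled for one prompt)."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# One prompt, four sampled completions with binary rewards:
adv = grpo_advantages([1.0, 0.0, 0.0, 1.0])
print([round(a, 2) for a in adv])  # [1.0, -1.0, -1.0, 1.0]
```

These advantages then weight the per-token policy-gradient loss; because they are centered within each group, no separate critic network is needed.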
Xuandong Zhao @xuandongzhao
Remember when GPT-4 dropped? The jump from GPT-3 felt so huge that I thought AGI was inevitable with GPT-5. Two years later, GPT-5 is here with impressive benchmark scores, but it's a far cry from the AGI I imagined. Did anyone else have similar expectations? #OpenAI #GPT5