eric gao
@gaoooric

690 posts

econ phd @ mit

Cambridge, MA · Joined September 2016
888 Following · 244 Followers
Hayashi Heikichi @lianda_edu
Three types of people in academia (crucial distinction):

1. Genius / Theory Monster: math god (IMO/Fields level). Builds models and proves theorems alone. Destination: MIT/Harvard theory. → Lives purely on brainpower.
2. Craftsman / Hard Worker: strong math + coding. Independently drives projects and grinds out papers. → Lives on raw execution and persistence.
3. Resource Integrator: master at recruiting talent, building teams, picking the right directions, and combining different skillsets. → Lives on organization skills + sharp judgment.

Which one are you?
7 replies · 7 reposts · 65 likes · 8.9K views
eric gao @gaoooric
@RadishHarmers @ashtree128 stanford students are a bit special… someone started a flood by hanging a hammock from a sprinkler. would not trust them not to burn dorms down
0 replies · 0 reposts · 0 likes · 21 views
Sridhar Ramesh @RadishHarmers
@ashtree128 Sounds like they should forbid using dangerous fire-starting equipment in the dorms, not forbid opting out of the meal plan.
9 replies · 2 reposts · 757 likes · 27K views
Aakash Gupta @aakashgupta
The conspiracy version of this is wrong. The real version is worse.

Anthropic published a postmortem last September documenting three separate infrastructure bugs that degraded Claude's quality for weeks. Routing errors sent requests to the wrong server pools. A compiler bug corrupted token selection. An adaptive thinking system started under-allocating reasoning on complex turns. 30% of Claude Code users got misrouted during the affected period. None of that was intentional. All of it produced exactly the pattern in this chart.

Here's what actually drives the decline. Every AI company faces the same constraint: inference costs scale linearly with users but revenue doesn't. Quantization (compressing model weights from 16-bit to 8-bit or 4-bit) cuts GPU memory by 2-4x. Adaptive thinking allocation reduces compute per request. Batching groups requests to maximize throughput. Each optimization is individually rational. Each one shaves quality by a few percent. Stack five of them under peak load and users feel it.

The timing matches launches perfectly because launch day has the minimum number of users on the new model and maximum GPU allocation per request. Three months later you have 10x the users on the same infrastructure. The quality delta between "launch day inference budget" and "Tuesday afternoon at peak load inference budget" is the entire gap in that chart.

Benchmarks miss this because benchmarks run on dedicated hardware with no load balancing, no quantization, no request batching. The model that scores 92% on MMLU in a lab scores 92% on MMLU in production too. But the user experience of interacting with that model through six layers of inference optimization at 4pm EST? That's a different product.

The real problem is that "intentional nerfing" gives companies too much credit. Intentional nerfing implies control. What's actually happening is that nobody fully understands how inference optimization degrades the long tail of capabilities until users report it weeks later.
Marcin Krzyzanowski @krzyzanowskim

"Anthropic, OpenAI and Google release their new models with high quality from day one, then slowly nerf them until the next model, so when the next model hits, it's perceived as a bigger jump than it actually is" sounds about right for what's happening

25 replies · 46 reposts · 380 likes · 52.6K views
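A quick back-of-envelope check on the two numeric claims in the post above. This is a sketch, not anything from the thread: the 70B parameter count is a hypothetical example, and the 2% per-optimization cost is just one reading of "a few percent."

```python
# Back-of-envelope math for the quantization and "stack five of them"
# claims. The 70B parameter count is a hypothetical example, not any
# specific production model.

def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """GPU memory needed just to hold the weights, in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

N_PARAMS = 70e9  # hypothetical 70B-parameter model
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit weights: {weight_memory_gb(N_PARAMS, bits):6.1f} GB")
# -> 140.0 GB, 70.0 GB, 35.0 GB: the 2-4x memory savings the post cites.

# "Each one shaves quality by a few percent. Stack five of them":
# five optimizations at ~2% quality cost each compound multiplicatively.
print(f"stacked quality: {0.98 ** 5:.3f}")  # ~0.904, roughly a 10% hit
```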
eric gao retweeted
Michiel Bakker @bakkermichiel
🚨📄 New preprint! We find the “boiling the frog” equivalent of AI use. In a series of RCTs, we show that after just 10 min of AI assistance people perform worse and give up more often than those who never used AI. w Grace Liu @brianchristian Mira Dumbalska and Rachit Dubey 🧵
Michiel Bakker tweet media
26 replies · 237 reposts · 728 likes · 133.9K views
eric gao retweeted
Ivan Werning @IvanWerning
Congratulations to Ludwig! Extremely well deserved. Very happy for him and a testament to all his great coauthors. Here he is as a 3rd or 4th year student presenting in our macro student lunch at MIT.
Ivan Werning tweet media (2 images)
5 replies · 76 reposts · 605 likes · 78K views
Elias Al @iam_elias1
BREAKING: An AI just wrote a research paper. Submitted it to a top science conference. Passed peer review. Nobody on the review panel knew it was AI.

The paper is called "The AI Scientist." Published last week in Nature. Built by Sakana AI in Tokyo, with researchers from Oxford and UBC.

Here is what it did, completely on its own. It read existing scientific literature. Formed a hypothesis. Designed an experiment. Ran the experiment. Analyzed the results. Wrote the full academic paper. Then peer-reviewed its own work. No human at any stage.

They submitted three fully AI-generated papers to a top ML conference under blind peer review. Human reviewers were told some might be AI, but not which ones. One was accepted. It scored higher than 55% of human-authored papers at that same conference. The accepted paper cost $15 in compute to produce. Fifteen dollars.

Now here is the part nobody is talking about. The team found a clear scaling law: stronger foundation models produce higher-quality research outputs. Better base model in, better science out. Which means this gets dramatically better, automatically, every time a new model drops.

Right now it is limited to computational ML experiments. No biology. No chemistry. No physical labs. For now.

What happens when the thing that discovers new science... is itself?
Elias Al tweet media
65 replies · 147 reposts · 464 likes · 121K views
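The loop the post describes, as a toy skeleton. Purely illustrative: every function below is a hypothetical stub standing in for one stage the post lists, not Sakana AI's actual "AI Scientist" code.

```python
# Toy skeleton of the closed loop the post lists: read literature,
# hypothesize, experiment, analyze, write, self-review. All stubs are
# hypothetical placeholders, not Sakana AI's implementation.

def form_hypothesis(literature: list[str]) -> str:
    return f"hypothesis drawn from {len(literature)} papers"

def run_experiment(hypothesis: str) -> dict:
    return {"accuracy": 0.42}  # placeholder experimental result

def write_paper(hypothesis: str, results: dict) -> str:
    return f"Paper: {hypothesis}; results: {results}"

def self_review(manuscript: str) -> float:
    return 5.5  # placeholder reviewer score

papers = ["paper A", "paper B"]      # "read existing scientific literature"
h = form_hypothesis(papers)          # "formed a hypothesis"
r = run_experiment(h)                # "designed/ran the experiment"
m = write_paper(h, r)                # "wrote the full academic paper"
print(m, "| review score:", self_review(m))  # "peer-reviewed its own work"
```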
Bojan Tunguz @tunguz
Stanford to Hebrew University: top 20 universities of US unicorn founders by @IlyaStrebulaev.
Bojan Tunguz tweet media
3 replies · 12 reposts · 102 likes · 17.3K views
John Horton @johnjhorton
@tunguz @IlyaStrebulaev @grok please make a table for this data & compute these founder numbers as ratio to undergraduate class size. Use MIT as reference so ratio is 1.
2 replies · 0 reposts · 16 likes · 4.6K views
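What that normalization would look like, as a sketch. Every number below is a placeholder, not data from @IlyaStrebulaev's chart; you'd substitute the chart's founder counts and real undergraduate class sizes.

```python
# Sketch of the calculation @johnjhorton asks for: unicorn founders per
# undergraduate class size, with MIT pinned to 1. All counts below are
# placeholders, NOT the chart's actual data.

founders = {"Stanford": 100, "MIT": 80, "Harvard": 70}         # placeholder
class_size = {"Stanford": 1700, "MIT": 1100, "Harvard": 1650}  # placeholder

rate = {u: founders[u] / class_size[u] for u in founders}
ratio_vs_mit = {u: r / rate["MIT"] for u, r in rate.items()}   # MIT = 1.0

for u, r in sorted(ratio_vs_mit.items(), key=lambda kv: -kv[1]):
    print(f"{u:<10} {r:.2f}")
```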
eric gao retweeted
Math Files @Math_files
You can’t grow wheat in ℤ/6ℤ because it’s not a field.
12 replies · 70 reposts · 375 likes · 166.1K views
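For anyone in the replies wondering why ℤ/6ℤ isn't a field, a one-line check (standard ring theory, not from the thread):

$$2 \cdot 3 \equiv 0 \pmod{6}, \qquad 2 \not\equiv 0, \; 3 \not\equiv 0 \pmod{6},$$

so ℤ/6ℤ has zero divisors, and 2 can have no multiplicative inverse (if $2^{-1}$ existed, then $3 = 2^{-1} \cdot 2 \cdot 3 = 2^{-1} \cdot 0 = 0$). In general ℤ/nℤ is a field exactly when $n$ is prime, so the wheat would do fine in ℤ/7ℤ.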
eric gao retweeted
Mojobo @MojoboJomo
why wouldn't this work? like, all that wasted energy going nowhere, why not use it? But I probably don't know anything about this
Mojobo tweet media
483 replies · 65 reposts · 6.1K likes · 729.4K views
Aakash Gupta @aakashgupta
Peanuts in Coke is one of the most accidentally perfect food pairings in history, and the chemistry explains why this guy can't go back.

Coca-Cola sits at pH 2.5, roughly the same acidity as stomach acid. When you drop roasted peanuts into that, the phosphoric acid partially denatures the surface proteins on the nut, releasing free glutamate. You're generating umami in real time inside the glass.

The salt on the peanuts suppresses bitter taste receptors on your tongue, which amplifies your perception of sweetness without adding a single gram of sugar. Coca-Cola already has 39g of sugar per can. Your brain registers it as even sweeter because the salt is clearing the noise from competing flavor signals.

Then carbonation does two things. First, CO2 dissolved in liquid forms carbonic acid, which triggers pain receptors (TRPA1), not taste receptors. That mild irritation resets your palate between sips so you never get flavor fatigue. Every sip hits like the first. Second, the bubbles physically agitate the peanut surface, accelerating the protein breakdown and glutamate release. The longer the peanuts sit, the more umami you extract.

The fat content seals it. Peanuts are 49% fat by weight. Fat is the only macronutrient that activates CD36 receptors, which your brain interprets as richness and satisfaction. Mix that with sugar, salt, acid, umami, and carbonation and you've accidentally triggered every major reward pathway in the human taste system simultaneously.

Georgia farmers in the 1920s did this because they needed one hand free while working. They stumbled into the optimal salt-acid-umami-fat-carbonation loop a century before food science could explain why it worked.
猫山課長 @nekoyamamanager

About 30 years ago I read in a Haruki Murakami essay that drinking Coke with peanuts dropped in is popular in America. I thought "huh," and a long time passed, but I finally tried it. What the heck, this is stupidly delicious. To the point that I don't want to drink Coke any other way anymore.

539 replies · 10K reposts · 58.6K likes · 9.6M views
Aakash Gupta @aakashgupta
Arnold accidentally described the most reliable character test on Earth in a gym story about his son-in-law.

The incline press is a brutal choice, and that's the point. The bench press tells you nothing. Everyone looks strong on the flat bench. Incline isolates the anterior deltoids and upper chest, smaller muscles that fatigue faster. You hit failure sooner. The mask comes off sooner.

Angela Duckworth's research at Penn quantified exactly what Arnold was watching for. Grit scores predicted West Point cadet survival better than SAT scores, high school rank, and physical fitness combined. The cadets who dropped out of Beast Barracks weren't the weakest. They were the ones who had never been pushed past the point where quitting felt rational.

Arnold watched Pratt give up on the incline. Then he watched him push through anyway. That sequence is everything. The willingness to keep going when the thing stops being fun is the single highest-correlation trait with long-term success in every field with data on it.

The gym is the last honest room left. No titles, no résumés, no pitch decks. Just gravity and iron and whatever you actually have inside you when the weight gets heavy.

Most fathers-in-law take you to dinner and ask what you do for a living. Arnold took Pratt to the weight room and found out who he actually is.
Ankor Inclán @ankorinclan

Arnold Schwarzenegger once said: "When my daughter started dating someone, I made one thing clear to her: 'You are not going to marry a man who isn't better than me.' He had to be stronger, more successful and, of course, a better actor. When I found out it was Chris Pratt, I took him to the gym. On the incline press, I saw him give up… but I also saw him push through. That day I knew he had heart. And to me, that is worth more than any muscle."

17 replies · 26 reposts · 622 likes · 531.4K views
eric gao retweeted
Nate Silver @NateSilver538
You're going to see a lot of NBA draft reform proposals. Here's ours: replace the NBA draft with an auction.
Nate Silver tweet media (2 images)
206 replies · 56 reposts · 1.5K likes · 2.2M views