Emmett @ Aster

6 posts

@asterailabs

Aster: An AI Research Agent that Accelerates Scientific R&D https://t.co/o4kG3vaEeK

Joined January 2026
20 Following · 16 Followers
Emmett @ Aster @asterailabs
@JIACHENLIU8 This is super interesting work! Wondering about the full preservation of a search space -- while it's good to have lossless data, this seems pretty hard for agents to parse and understand. Wondering if there's a good format for concisely collapsing search-tree info into fewer tokens.
Amber Liu @JIACHENLIU8
My bet: in the near future, 80%⬆️ of CS research will be done by AI in collaboration with humans. However, today's research ecosystem is still built around the human, not the AI scientist.

For example, the 8-page paper PDF is a lossy compression of months of branching exploration into a linear story, optimized for a human reviewer to skim in 30 minutes. It hides two structural taxes:

📖 Storytelling Tax — failures, rejected hypotheses, and dead ends get stripped. On RE-Bench (24,008 runs, 21 frontier models), failed runs = 90.2% of total compute cost, with a 113× median failed-to-success token ratio. Every lab independently rediscovers the same dead ends.

🔧 Engineering Tax — the gap between reviewer-sufficient prose and agent-sufficient spec. Across 8,921 PaperBench requirements (23 ICML'24 papers), only 45.4% are fully specified in the PDF. The rest is tacit lab knowledge.

Tolerable when readers were human. Critical now that agents read, reproduce, and extend.

We propose ARA: the Agent-Native Research Artifact — replace the narrative PDF with an agent-executable package, in 4 layers:
🧠 structured scientific logic
⚙️ executable code w/ full specs
🌳 exploration graph (every failure preserved)
📊 evidence grounding every claim
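The four-layer package above can be pictured as a data structure. A minimal sketch, assuming a tree-shaped exploration record; all names here (`ResearchArtifact`, `ExplorationNode`, the field layout) are my own illustration, not the authors' actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ExplorationNode:
    """One branch of the search tree: a hypothesis, how it ended, and
    pointers into the evidence layer. Failed branches stay in the tree."""
    hypothesis: str
    outcome: str                                # e.g. "success", "failure", "abandoned"
    evidence_refs: list[str] = field(default_factory=list)
    children: list["ExplorationNode"] = field(default_factory=list)

@dataclass
class ResearchArtifact:
    """Hypothetical container for the 4 ARA layers described in the tweet."""
    logic: dict          # 🧠 layer 1: structured scientific claims
    code: dict           # ⚙️ layer 2: executable code with full specs
    exploration: ExplorationNode  # 🌳 layer 3: the search tree, failures preserved
    evidence: dict       # 📊 layer 4: data grounding each claim
```

An agent consuming such an artifact could walk `exploration` to skip already-tried dead ends instead of rediscovering them.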
Larry Dial @classiclarryd
The NanoGPT Speedrun WR has broken below 90s, dropping from 92.1 to 89.8 from 4 recent contributions:
1. Tuned kernels (-0.4s) from @.EmmetBicker on GitHub & AI system Aster
2. Tuned value embeds (-0.4s) from @photon_mz
3. Sparse comms for bigram gradients (-0.3s) from @roeeshenberg
4. max_seq_len schedule and increased min lr (-1.2s) from @.dualverse-ai on GitHub & AI system Station
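The four deltas quoted above fully account for the drop from 92.1s to 89.8s; a quick arithmetic check (numbers copied from the tweet, labels mine):

```python
# Per-contribution time savings in seconds, as reported in the tweet.
deltas = {
    "tuned kernels": 0.4,
    "tuned value embeds": 0.4,
    "sparse bigram-grad comms": 0.3,
    "max_seq_len schedule + min lr": 1.2,
}
old_record, new_record = 92.1, 89.8
# Total savings should equal the gap between the two records (within float noise).
assert abs((old_record - new_record) - sum(deltas.values())) < 1e-9
```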
Emmett @ Aster @asterailabs
Moved to SF at 18 to join Magic and train LLMs. Two weeks ago, I left and started my own AI company. Yesterday, our AI set the GPT-2 Speedrun Record. And that’s just the start. It launches tomorrow. 🚀