Hui Kang Tong
@yujia_bao @thinkymachines @tinkerapi Thanks for addressing the bug I posted - github.com/thinking-machi…
There's been a lot of excitement around auto-research, but one underappreciated bottleneck: coding agents struggle to run LLM training jobs at scale. A small infrastructure mistake can have major consequences for the output.
I recently joined @thinkymachines, and @tinkerapi solves exactly this. It standardizes the training process — training a 1T parameter model is as simple as training a 4B one. That makes auto-research with coding agents like Claude Code actually viable.