Jing Yu Koh@kohjingyu
I've observed 3 types of ways that great AI researchers work:
1) Working on whatever they find interesting, even if it's "useless"
Whether something will be publishable, fundable, or obviously impactful is irrelevant to what these people work on. They simply choose something that feels interesting, weird, beautiful, or off in a way they can't ignore. For many of them, "interestingness" is often a strong research intuition for an important problem that hasn't fully materialized yet, and their ideas often end up being meaningful during the process of exploration.
The canonical example of this in physics is Richard Feynman, who got intrigued by the way plates wobbled. He followed this curiosity on something that seemed like a useless endeavor, and it ended up feeding into deeper physics (and eventually into the work that won him a Nobel Prize):
"It was effortless. It was easy to play with these things. It was like uncorking a bottle: Everything flowed out effortlessly. I almost tried to resist it! There was no importance to what I was doing, but ultimately there was."
The AI version of this that I've observed is when someone obsesses over a "minor" failure case, a weird training dynamic, a small theoretical mismatch, or just something that most people think is pointless to chase down. These threads end up becoming interesting and impactful more often than you'd expect. The risk is spending a long time on a pointless rabbit hole, but the best researchers I've observed have a very good sense for when an idea is a dead end vs. promising given more effort.
2) Working on what they feel extremely strongly is the "right" way to do something
These people have a clear picture of how the field *should* progress, and they're willing to work on unpopular things to prove their vision. They'll commit to something that others think is wrong, premature, or not worth it. An interesting quantitative way of measuring this is the citation graph of a paper: if a paper has been around for many years but only started getting cited heavily recently, its authors were early (and right!). An obvious example is diffusion models, first proposed as early as 2015 (Sohl-Dickstein et al., 2015), whose ideas only started getting real traction in 2021 or later.
The failure mode here is getting stuck defending a pet theory long after it's been falsified. There are obviously many examples in our community of people who shift goalposts or beat a dead horse for decades. But when these ideas are legitimately undervalued, they result in paradigm shifts instead of incremental progress.
3) Crushing SOTA
There's a type of researcher who isn't necessarily the most "philosophically original" or creative, but who is extremely effective at pushing a system to its limits. You can give these people a pre-existing task and benchmarks, check in on them in a month, and they will have crushed SOTA. Obviously this is not about benchmark hacking or short-term wins. It's a real skill to take a combinatorial space of noisy research ideas and papers and conduct a rigorous search and ablation process.
I've also found that this type of researcher has great intuition about the field: a sense for which ideas will scale, which tweaks are meaningful, which hyperparameter values to try, and which papers are worth paying attention to.
—————
I think these archetypes are all concrete expressions of good "research taste". (1) is a taste for interesting questions, (2) is a taste for long-term worldviews, and (3) is a taste for careful execution and science. The best researchers I know often prefer operating in one of these modes, but frequently weave in and out of each depending on the stage of the project.