Min Lin
@mavenlin
102 posts
Singapore · Joined April 2015
295 Following · 1.2K Followers
Min Lin retweeted
Alex LeBrun@lxbrun·
I am joining @ylecun and an exceptional founding team to lead @amilabs as CEO. We have secured a $1.03 billion seed round to fuel our mission to build intelligent systems capable of truly understanding the real world, a long-term scientific endeavor.
Min Lin retweeted
Min Lin@mavenlin·
Continual learning is a "learning" problem; innovating on model architecture while still relying on SGD to do the learning leads nowhere.
dr. jack morris@jxmnop

Min Lin@mavenlin·
@kellerjordan0 Gradient descent: I'm still the ultimate outer loop.
Keller Jordan@kellerjordan0·
Hinton, LeCun, and every other neolab: Gradient descent is fundamentally broken. It needs thousands of examples to learn what humans do in only a few. It’s time to start looking for a radical new learning paradigm to close the gap. In-context learning: Do I mean nothing to you?
Min Lin@mavenlin·
@daniel_mac8 He actually said that "continual learning" and "world models" are prerequisites that need to be solved first; then robotics can have its breakout moment.
Dan McAteer@daniel_mac8·
Demis Hassabis, Google DeepMind CEO, on the 3 breakthroughs needed for AGI:
> Continual Learning
> World Models
> Robotics
It *is* possible that we get all 3 in 2026.
Min Lin retweeted
Yuandong Tian@tydsh·
For research, "(3) Crushing SOTA" is often the most visible to the community, but "(1) working on interesting but useless things" and "(2) insisting on the right way to do things" are the underlying forces that can potentially lead to a massive paradigm shift in the end.
Jing Yu Koh@kohjingyu

I've observed 3 types of ways that great AI researchers work:

1) Working on whatever they find interesting, even if it's "useless"

Whether something will be publishable, fundable, or obviously impactful is irrelevant to what these people work on. They simply choose something that feels interesting, weird, beautiful, or off in a way they can't ignore. For many of these people, "interestingness" is also often strong research intuition for an important problem that hasn't fully materialized yet, and their ideas often end up being meaningful during the process of exploration.

The canonical example of this in physics is Richard Feynman, who got intrigued by the way that plates wobbled. He followed this curiosity on something that seemed like a useless endeavor, and it ended up feeding into deeper physics (and eventually won him a Nobel prize): "It was effortless. It was easy to play with these things. It was like uncorking a bottle: Everything flowed out effortlessly. I almost tried to resist it! There was no importance to what I was doing, but ultimately there was."

The AI version of this that I've observed is when someone obsesses over a "minor" failure case, a weird training dynamic, a small theoretical mismatch, or just something that most people think is pointless to chase down. These threads end up becoming interesting and impactful more often than you'd expect. The risk is that one can spend a long time in a pointless rabbit hole, but I've observed that the best researchers often have a very good sense for when an idea is a dead end vs. promising given more effort.

2) Working on what they feel extremely strongly is the "right" way to do something

These people have a clear picture of how the field *should* progress, and they're willing to work on unpopular things to prove their vision. They'll commit to something that others think is wrong, premature, or not worth it. An interesting quantitative way of measuring this is the citation graph of a paper: if a paper has been around for many years but only started getting cited a lot more in recent years, that means the authors were early (and right!). An obvious example is diffusion, the first paper of which appeared as early as 2015 (Sohl-Dickstein et al., 2015), but the ideas only started getting real traction in 2021 or later.

The failure mode here is getting stuck defending a pet theory long after it's been falsified, and there are obviously many examples in our community of people who shift goalposts or beat a dead horse for decades. But when these ideas are legitimately undervalued, they result in paradigm shifts instead of incremental progress.

3) Crushing SOTA

There's a type of researcher who isn't necessarily the most "philosophically original" or creative, but who is extremely effective at pushing a system to its limits. You can give these people a pre-existing task and benchmarks, check in on them in a month, and they will have crushed SOTA. Obviously this is not about benchmark hacking or short-term wins: it's a real skill to take a combinatorial space of noisy research ideas and papers and conduct a rigorous search and ablation process. I've also found that this type of researcher has great intuition about the field: a sense for which ideas will scale, which tweaks are meaningful, good values for hyperparameters, and which papers are worth paying attention to.

—————

I think these archetypes are all concrete expressions of good "research taste": (1) is a taste for interesting questions, (2) is a taste for long-term worldviews, and (3) is a taste for careful execution and science. The best researchers I know often prefer operating in one of these modes, but frequently weave in and out of each depending on the stage of the project.

Min Lin retweeted
Yarin@yaringal·
Putting timelines on new research breakthroughs is stupidly hard, since these come as step functions. I can recall many arguments I've had over the past two years trying to explain that it's irresponsible to say "we're gonna have AGI in a year" because 1) people's definition of AGI is subjective, and 2) we still need research breakthroughs, which could happen in a year's time or could happen in 10 years' time. The uncertainty on any timeline prediction would be really high.

I actually had a discussion with a VC recently, asking about their bets and trying to find inconsistencies in their predictions (a Dutch book). I argued that continual learning/lifelong learning/adaptation is one of the biggest problems to solve, and that new research is needed for that; therefore, putting money on short-term bets that rely on CL being solved would be really high risk (and inconsistent with their other bets).

That said, it's worth studying the dot-com bubble: the impact the internet has had on the world over the past 25 years was massive, even if expectations at the time were completely unrealistic (expectations which led to the bubble bursting). Similarly, AI will slowly diffuse through society (at which point we'll probably stop calling it "AI"), and will have a massive impact on our day-to-day lives. There's still lots of research to be done, from adaptation and continual learning to AI robustness and security.
Ilya Sutskever@ilyasut

One point I made that didn't come across:
- Scaling the current thing will keep leading to improvements. In particular, it won't stall.
- But something important will continue to be missing.

Min Lin@mavenlin·
I'm also not much bothered by the random order permutation; it still makes sense to fit n! models for any frame of n tokens of text. It is just that the ex-post fitted posterior may favor L2R most of the time. (Greedily choosing the order can be seen as a crude version of this posterior.) I believe there are cases where the best order is not L2R; the question is whether the extra training compute to fit the n! models is worth it.
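A minimal sketch of this idea (notation mine, not from the thread): treat the any-order model as a uniform mixture over the n! factorization orders σ, and fit the posterior over σ ex post.

```latex
% Any-order model as a uniform mixture over factorization orders \sigma,
% where p_\theta(x \mid \sigma) = \prod_{i=1}^{n} p_\theta(x_{\sigma(i)} \mid x_{\sigma(1)}, \ldots, x_{\sigma(i-1)}).
\[
  p_\theta(x) \;=\; \sum_{\sigma \in S_n} \frac{1}{n!}\, p_\theta(x \mid \sigma),
  \qquad
  w(\sigma \mid x) \;=\; \frac{p_\theta(x \mid \sigma)}{\sum_{\sigma'} p_\theta(x \mid \sigma')}.
\]
% "Greedily choosing the order" is roughly approximating \arg\max_\sigma w(\sigma \mid x)
% one position at a time instead of computing the full posterior over orders.
```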
Min Lin@mavenlin·
@ducx_du Another argument is that sharing the parameters across all n! models might provide good regularization. So training perplexity doesn't matter that much; what matters is the downstream task?
Min Lin@mavenlin·
I was debating with @ducx_du over the past few days on a few points; sharing them here as more food for discussion. There are two interpretations:
1. The DLLM is fitting a loose ELBO with a uniform posterior distribution over the order.
2. It is fitting n! models with shared parameters, and then we should not only look at the loss but measure the Bayes code length, i.e., ex-post fit a posterior over orders.
The problem with the second view is that the parameter count may be too small to fit n! models, but we may want to actually do the ex-post fit and compare the perplexity with L2R. It seems to be a trade-off between the amount of compute spent and the perplexity. I do strongly believe that L2R is very likely to be better in perplexity. (A sketch of the two views follows the quoted thread below.)
Cunxiao Du@ducx_du

Diffusion LLMs (DLLMs) can do "any-order" generation, in principle more flexible than left-to-right (L2R) LLMs. Our main finding is uncomfortable: ➡️ In real language, this flexibility backfires: DLLMs become worse probabilistic models than the L2R/R2L AR LMs. This thread is about why "any order" turns into a curse. (Work with Xinyu Yang @Xinyu2ML, Min Lin @mavenlin, Chao Du @duchao0726 and the team.) Blog link: notion.so/Understanding-…

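A hedged sketch of the two interpretations above (notation mine, not from the thread): interpretation 1 is the Jensen bound with a uniform posterior over orders; interpretation 2 scores the same shared-parameter family by its Bayes code length.

```latex
% Interpretation 1: uniform-posterior ELBO, loose by the gap in Jensen's inequality.
\[
  \log p_\theta(x)
  \;=\; \log \sum_{\sigma \in S_n} \tfrac{1}{n!}\, p_\theta(x \mid \sigma)
  \;\ge\; \sum_{\sigma \in S_n} \tfrac{1}{n!}\, \log p_\theta(x \mid \sigma).
\]
% Interpretation 2: n! models with shared parameters, scored by the Bayes code length,
\[
  L_{\mathrm{Bayes}}(x) \;=\; -\log \sum_{\sigma \in S_n} \tfrac{1}{n!}\, p_\theta(x \mid \sigma),
\]
% which is what an ex-post fitted posterior over orders would measure, to be compared
% against the L2R code length -\log p_\theta(x \mid \sigma_{\mathrm{L2R}}).
```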
Min Lin@mavenlin·
@yacinelearning @navvye Yep, I remember that permuted MNIST is a very special setting, as different tasks end up in quite orthogonal subspaces. Figures 2 to 5 in arxiv.org/abs/1805.09733 show that what actually works is replay (VGR/coreset); parameter-space regularization (EWC/VCL) barely adds anything.
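For context, a minimal sketch of the standard permuted-MNIST construction (my illustration, not code from the paper): each task applies its own fixed random pixel permutation, which is why different tasks land in nearly orthogonal input subspaces.

```python
import numpy as np

def make_permuted_mnist_tasks(images, num_tasks, seed=0):
    """Build permuted-MNIST continual-learning tasks.

    images: array of shape (N, 784), flattened MNIST digits.
    Each task reuses the same labels but applies its own fixed random
    permutation of the 784 pixel positions, so the input distributions
    of different tasks occupy near-orthogonal subspaces.
    """
    rng = np.random.default_rng(seed)
    tasks = []
    for t in range(num_tasks):
        if t == 0:
            perm = np.arange(images.shape[1])  # task 0: identity (original MNIST)
        else:
            perm = rng.permutation(images.shape[1])  # fixed permutation for task t
        tasks.append(images[:, perm])
    return tasks
```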
Yacine Mahdid@yacinelearning·
man, deep learning has the best nomenclature: instead of "yeah, the model is dumb as rocks now" we say "the model has entered a catastrophic forgetting regime"
Min Lin@mavenlin·
@yacinelearning @navvye Strictly speaking, it doesn't work: they have a separate task head for each task.
Yacine Mahdid@yacinelearning·
@navvye Well, in their MNIST + Atari game thingy from 2017, yes. Not sure if it was used later on.
Min Lin@mavenlin·
@yoavgo @tailblues Yep, that's my assumption. The "in-context tokens" and the "weights" both belong to the brain state. OK, I think in this case in-context learning is both induction and learning.
(((ل()(ل() 'yoav))))👾@yoavgo·
Let's talk about "in-context learning". It is clearly NOT "learning", because it's ephemeral. It IS some form of generalization from examples, which is very cool. But we need a name: what do we call this skill of generalization from examples?
Min Lin@mavenlin·
@yoavgo @tailblues 1. I was talking about Solomonoff's induction. 2. Generally, whatever I observe changes my brain state. I would see it as induction as well as learning.
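For reference, the standard statement of Solomonoff's universal prior over a prefix universal machine U (not from the thread):

```latex
% Solomonoff's universal a priori probability of a string x:
% sum over all prefix-free programs p that make U output something starting with x.
\[
  M(x) \;=\; \sum_{p \,:\, U(p) = x{\ast}} 2^{-|p|}.
\]
% Prediction: the probability of the next symbol a is M(xa)/M(x),
% i.e., induction as a mixture over all computable explanations,
% weighted by simplicity (shorter programs dominate).
```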
Min Lin@mavenlin·
@yoavgo @tailblues With the following chain of relations: induction -> compression -> Bayes/plugin code -> fitting a parametric model (the contemporary meaning of "learning" in the ML context?)
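One standard way to unpack this chain (MDL identities, notation mine): compression with a parametric family can use a Bayes mixture code or a two-part/plugin code, and minimizing the latter is exactly fitting a parametric model.

```latex
% Bayes (mixture) code over a parametric family {p(x | \theta)} with prior w:
\[
  L_{\mathrm{Bayes}}(x) \;=\; -\log \int p(x \mid \theta)\, w(\theta)\, d\theta.
\]
% Two-part / plugin code: first describe \hat\theta, then encode x under it:
\[
  L_{\mathrm{plugin}}(x) \;=\; \min_{\theta} \bigl[\, L(\theta) \;-\; \log p(x \mid \theta) \,\bigr],
\]
% so minimizing code length over \theta is maximum-likelihood fitting
% (with L(\theta) acting as a regularizer): learning as compression.
```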