Unsupervised Learning

36 posts


@ULpodcast

Probing AI’s sharpest minds—what’s here, what’s ahead, and what it means for businesses and the world. A podcast by @Redpoint https://t.co/VlGXWeY7SQ

Joined January 2025
5 Following · 81 Followers
Unsupervised Learning retweeted
Jacob Effron @jacobeffron
“Hire the best people, give them the means to succeed, and get the f*** out of the way.” @ylecun on what FAIR, Bell Labs, and Xerox PARC got right - and what's challenging to maintain at major labs today.
Unsupervised Learning retweeted
Yann LeCun @ylecun
Fun interview with Jacob Effron on the Unsupervised Learning podcast.
Unsupervised Learning retweeted
Jacob Effron @jacobeffron
It’s hard to imagine more of a dream Unsupervised Learning guest than @ylecun. Yann is one of the godfathers of AI, and he has some fascinating contrarian views on the limitations of LLMs. It was incredible to have a wide-ranging discussion with Yann about these views, his reflections on his time at Meta and his departure, and what’s next for him.

We hit on:
▪️ LLM limitations and a path forward for robotics
▪️ Why he left Meta
▪️ How he came to so dramatically disagree with his Turing co-laureates Geoff Hinton and Yoshua Bengio on LLMs
▪️ His predictions for 2027
▪️ His new company AMI and the bet on world models
▪️ Why he compares OpenAI and Anthropic to Sun Microsystems
▪️ Why he tells PhD students to stop working on LLMs

Plus some sharp views on the current safety discourse, how breakthrough research actually happens, and what FAIR got right and wrong.

YouTube: youtu.be/ngBraLDqzdI
Spotify: bit.ly/4dL8fvT
Apple: bit.ly/4wxgpiX
Unsupervised Learning retweeted
Jacob Effron @jacobeffron
Fun pod dropping tomorrow!
Unsupervised Learning retweeted
Jacob Effron @jacobeffron
Always enjoy getting to chat with @swyx on our annual cross-episode with @latentspacepod on the state of AI. We hit on what’s shifted, what surprised us, and what’s next.

We covered:
▪️ Whether AI infrastructure has finally stabilized
▪️ Implications of agents buying developer tools
▪️ The AI coding wars
▪️ The foundation model vibe shift
▪️ Why Swyx reversed his view on open models
▪️ When to train your own model
▪️ What's top of mind for the best AI engineers

YouTube: youtu.be/A_7WafI9bhE
Spotify: bit.ly/3QHcCix
Apple: bit.ly/4eJERaa

0:00 Intro
1:17 What the Top AI Engineers Are Thinking About
2:13 Has AI Infra Finally Stabilized?
6:39 When Does Doing RL In-House Make Sense?
11:26 Why Selling Dev Tools to Agents is Different
17:18 AI Coding Wars
29:04 Consumer AI Plateau
30:22 Codex vs Claude Code
44:52 Future of Open Models
Unsupervised Learning retweeted
Jacob Effron @jacobeffron
OpenAI's Chief Scientist, @merettm, on where alignment stands today: his belief that there's a research path to "an extremely happy world" has increased considerably. But so has his urgency. "We're not that far" from very transformative models. The window for getting alignment right is narrowing even as the tools for doing so are improving.
Unsupervised Learning retweeted
Haider. @haider1
OpenAI's Jakub Pachocki says continual learning is the core goal behind scaling GPT models. The push to scale GPT models and train them with RL comes from the belief that they can learn in context more efficiently. This is not off the roadmap: "it is what we're working towards."
Unsupervised Learning retweeted
Jacob Effron @jacobeffron
OpenAI's Chief Scientist says AI is getting close to being as good as a human research intern. This past September, @sama and @merettm predicted fully autonomous AI researchers by 2028. Jakub's update: "I think we're not very far from models that can work autonomously for a couple days... and produce much higher quality artifacts on their own."
Unsupervised Learning retweeted
OpenAI Newsroom @OpenAINewsroom
Compute powers every layer of AI, and the investments we’ve made mean we can run more promising research experiments, train more capable models, and support broader access. @merettm talks about our progress building an automated AI researcher and what’s ahead as AI can take on harder and harder problems.
Unsupervised Learning retweeted
Business Insider @BusinessInsider
OpenAI's chief scientist outlined the company's internal goal of building an "AI research intern" by September 2026 at a company livestream last October. bit.ly/3OwqMlS
Unsupervised Learning retweeted
Jacob Effron @jacobeffron
OpenAI's Chief Scientist, @merettm, on the continual learning wave: frontier labs are already building this into the core of the technology. The entire premise of scaling was to create systems that learn in context. Jakub says continual learning is not some separate missing piece, but “exactly what we’re working toward.”
Unsupervised Learning retweeted
Jacob Effron @jacobeffron
At @OpenAI, Chief Scientist @merettm helps lead the research roadmap to AGI, including a research-intern-level AI system by September 2026 and a fully automated AI researcher by March 2028. I sat down with Jakub to check on those timelines and ask him all of my top-of-mind AI questions, including:
▪️ How OpenAI thinks about extending RL beyond code and math
▪️ The current state of alignment research as more powerful models loom
▪️ The future of continual learning
▪️ How startups should think about building their own models/harnesses

He also shared some great stories around OpenAI’s pioneering work on math.

YouTube: youtu.be/vK1qEF3a3WM
Spotify: bit.ly/4sjUyrN
Apple: bit.ly/41jAdrN

0:00 Intro
1:53 Research Intern Capability Timelines
4:59 Math Breakthroughs
7:59 RL Beyond Verifiable Tasks
12:32 RL vs In-Context
19:01 Allocating Compute Internally
28:18 AI for Science
31:40 Pattern Matching
33:23 Solving the Hardest Math Problems
37:40 Chain of Thought Monitoring
44:33 Generalization and Value Alignment in Models
47:57 Inside OpenAI
51:55 Quickfire
Unsupervised Learning retweeted
Patrick Chase @patrickachase
"Talent is the only durable moat." @jakeserval on how a motivated competitor with access to the latest AI tools can copy everything within a year. The only thing that compounds in a way that can't be replicated is the people you build with.
Unsupervised Learning retweeted
Patrick Chase @patrickachase
This week on Unsupervised Learning, @jacobeffron and I sat down with @jakeserval, co-founder & CEO of @getserval. Serval is going directly after ServiceNow in ITSM and is already working with companies like Notion, Clay, Abridge, Fox, Mercor, and Verkada.

We partnered with Serval at the Series A. What’s stood out most over the last year is their speed of execution. We get into how they’re winning customers and talent so quickly, including:
▪️ Why building a system of record beats layering on top
▪️ The "mirror architecture" that lets Serval land enterprise customers
▪️ Why ITSM is more vulnerable to AI disruption than other verticals
▪️ The Future IT Stack when agents submit their own requests
▪️ The AI-native org chart
▪️ Why recruiting is the #1 job of every Serval employee
▪️ The Dream Team Draft: recruiting during hypergrowth

YouTube: youtu.be/Q0bxRANHjFY
Spotify: bit.ly/4m4PJRX
Apple: bit.ly/3POsYp8

0:00 Intro
1:25 What is Serval?
4:51 Early Doubts and Strategy
6:34 AI Tailwinds in ITSM
8:04 Competing with ServiceNow
9:41 Why ITSM Is Vulnerable
11:52 Automation via Codegen
16:27 Critical Guardrails
28:32 Internal Support Complexity
30:24 Hiring as the Moat
31:44 Dream Team Recruiting
33:49 Managers vs Super ICs
36:44 Junior Engineers and AI Native Workflows
43:13 Quickfire
Unsupervised Learning retweeted
Jacob Effron @jacobeffron
I truly believe AI can dramatically improve healthcare and there's no better proof point than @AbridgeHQ. They're already proving it with a rapidly growing footprint of doctors and health systems nationwide. And what got me so excited to invest was that it's still the earliest days. There's so much more to build that will make patients' lives better. You won't find a faster-shipping, more talented or kinder team.
Abridge@AbridgeHQ

What’s it like building at Abridge? “It feels like the beginning of the Internet… It’s love for software that I have never seen.” Watch to see what our builders say about working at Abridge. We’re fortunate to be backed by investors who share our vision, including @a16z @eladgil @khoslaventures @NVIDIA Ventures @IVP @SVAngel @lightspeedvp @Redpoint @sparkcapital @BessemerVP @usv We’re hiring in SF!

Unsupervised Learning retweeted
Jacob Effron @jacobeffron
After playing around with Opus 4.5, @MaxJunestrand knew the roadmap had to change. As model capabilities improve, this often happens. As Max says, "You need low ego. You have to say that's now the world. We will operate under those boundaries and run at an intensity that is practically unheard of" and be willing to throw out what you spent time building before.
Unsupervised Learning retweeted
logan bartlett @loganbartlett
Few founders could get me out of my podcast retirement. @MaxJunestrand is one of those founders. @jacobeffron and I sat down with Max, CEO of @WeAreLegora, to discuss lessons from constantly rebuilding, why foundation models entering legal is good for Legora, and the future of the legal industry. Max is a special founder and gives a ton of practical lessons on what it means to build an AI-native business.
Jacob Effron @jacobeffron
Legora sets the bar for operating at AI speed. Watching them become one of the fastest-growing software companies of all time these past few years has provided constant lessons on what’s required to win in this new world.

Fresh off @WeAreLegora's $550M Series D, CEO @MaxJunestrand joined @loganbartlett and me on Unsupervised Learning to provide a masterclass on building an AI-native company.

He shared some amazing lessons around:
- Constantly rebuilding for the bleeding edge of model capabilities
- Partnering with customers for both immediate impact and long-term transformation
- Running Legora differently from traditional software companies

He also included some spicy takes on:
- Why foundation models entering legal is good for Legora
- Pricing AI products
- The future of the legal industry

It’s impossible to listen to Max and not pick up the infectious energy that makes Legora such a special company. Check out the full episode:

YouTube: youtu.be/wzRZp-1EuaE
Spotify: bit.ly/3Nc53za
Apple: bit.ly/40ufr8m

0:00 Intro
1:16 Legora’s Series D Story
3:24 Why You Need Low Ego to Build in AI
5:58 From 60% to 100% Accuracy in One Summer
7:04 Law Firm Economics Shift
14:09 Pricing Seats Vs Outcomes
18:31 Why Foundation Models Entering Legal Helps Legora
30:10 Convincing a 75-Year-Old Partner to Go All In
33:02 Hiring Legal Engineers
34:32 Running an AI-Native Company
35:57 The Opus 4.5 Christmas Breakthrough
40:02 Building With Customers
44:01 All In On US Expansion
51:22 Stockholm Startup DNA