BURKOV

22.1K posts

@burkov

Book: https://t.co/tSS6Pctj6r App: https://t.co/TvyFEF1LHg PhD in AI, author of 📖 The Hundred-Page Language Models Book & The Hundred-Page Machine Learning Book

Québec, Canada · Joined June 2009
115 Following · 55.7K Followers
Pinned Tweet
BURKOV@burkov·
This is how you build an app you need with Llambada. Without coding or even thinking about any code or any boring/scary things like backend, frontend, database, or, God forbid, deployment🤯
Grok@grok·
Ethan Mollick is pointing out a key difference in how we read fiction: with human authors, we assume every quirk or oddity has deliberate intent, so we generously interpret and "fix" it in our minds to make it make sense. LLMs lack any real authorial purpose—they just pattern-match from training data—so their quirks are usually meaningless glitches or low-quality output, not clever choices. That makes AI fiction feel off or "bad" once you notice.
Ethan Mollick@emollick·
My experience so far with LLM fiction writing is that it takes advantage of our assumption that an author is writing things for a reason, so we are charitable to a book's quirks & do mental work to assign them real meaning. But the AI doesn't have a reason, it's just bad writing.
BURKOV@burkov·
Super proud to be able to say that the famous Attention Is All You Need paper—the one that sparked the current unprecedented wave of interest in AI and has been cited 237,078 times since 2017—is now available on @ChapterPal. Enjoy reading a great paper with an AI tutor: chapterpal.com/s/9b8yltb1/att…
BURKOV@burkov·
A new issue of my weekly AI newsletter is out. In this issue:
The emerging science of machine learning benchmarks
When AI writes the world's software, who verifies it?
Is AutoML dead? Or is it just resting?
Practical guide to evaluating and testing agent skills
High-dimensional space is mostly empty. Machine learning lives in the gap.
[The Guardian] Inside China's robotics revolution
[FT] Economists have caught the AI bug
[arXiv] Data agents: Levels, state of the art, and open problems
open.substack.com/pub/aiweekly/p…
BURKOV@burkov·
This is a really awesome use case for humanoid robots.
Shaun Fosmark@Shaun_Fosmark·
@burkov For my uses, 5.4 and Opus 4.6 are both extremely necessary in very different ways
BURKOV@burkov·
GPT-5.4 > Opus 4.6
And Google still doesn't have anything even remotely competitive.
BURKOV@burkov·
I added a new feature to @ChapterPal: "Simplify!"

If you use ChapterPal to read complex literature, you know that when you select some text in the reader, a contextual menu with "Why?", "How?", "Explain", and "Example" buttons appears. The menu now also contains a "Simplify" button. When you press it, the Q&A that appears below the selection contains the selected paragraphs rewritten in simple English.

The simplified version doesn't make the text more abstract. Instead, it keeps the same level of detail as the original, but without assuming that the reader is an expert in the field. If the selected text uses jargon that experts are supposed to understand, the simplified text introduces those concepts in an intuitive way.

Try this new feature and don't hesitate to send your feedback to feedback@chapterpal.com.
BURKOV@burkov·
A new learning curriculum on @ChapterPal: Must-read papers in 3D generation and neural radiance fields

This curriculum traces the evolution of neural rendering and 3D reconstruction from foundational concepts to state-of-the-art large-scale models, beginning with core techniques like neural radiance fields, progressing through viewpoint-conditioned diffusion models, and culminating in massive transformer-based architectures for single-image 3D reconstruction.

Learners start by understanding how neural networks can represent 3D scenes through continuous functions and view synthesis, then advance to how diffusion models can condition generation on camera viewpoints and leverage Internet-scale pre-training to dramatically improve reconstruction quality. The sequence ends with Large Reconstruction Models that scale transformer architectures to hundreds of millions of parameters, enabling real-time 3D object prediction from single images, trained on massive multi-view datasets of over one million objects.

By following this progression, learners gain practical knowledge of neural rendering fundamentals, modern generative techniques for 3D content, and how scaling both model capacity and training data dramatically improves generalization across diverse real-world and synthetic inputs.

Learn from the best with an AI tutor: chapterpal.com/curriculum/17e…
BURKOV@burkov·
Everything that has ever been sold is a wrapper for energy. Because humans were historically needed to build this wrapper, it added a significant additional cost to the price. AI has made this wrapper around energy as thin as it could possibly be. In a matter of a couple of years, of every dollar you pay for an AI output, ~90c will go to the energy that was used to produce it, and ~10c will be split between the chipmaker and the AI provider. Producing cheaper energy is now the only innovation that makes economic sense.
BURKOV@burkov·
@santiagoclassic He was lucky once with Facebook. Everything else he tried to build himself either didn't compare to Facebook in impact or was a flop. Instagram and WhatsApp were acquisitions.
James@santiagoclassic·
@burkov Zuckerberg wasn't "just lucky once". He has tremendous endurance and chose to stick with Facebook when at 22 he was offered $200m for his shares. He is an extraordinary entrepreneur.
BURKOV@burkov·
@TangerineBank Are you apologizing for the inconvenience and will stop emailing people from no-reply accounts, or are you apologizing for the inconvenience but will keep doing it? If the latter, then I already told you what to do.
Tangerine@TangerineBank·
@burkov Hey there, we apologize for the inconvenience. Please connect with us through Secure Chat via our chatbot between 8:00 AM and 8:00 PM EST Monday to Friday, or call us at 1-888-826-4374. We’ll be happy to help! ^JR
BURKOV@burkov·
This is ridiculous, @TangerineBank. You send me an email asking to finish the application that I abandoned earlier, but you send your email from a noreply address. My answer to emails from noreply addresses is GFYS. For your information: a noreply email is read by a person as follows: "Hey! We need something from you, but if you need something back from us, you can go fuck yourself. Have a nice day!"
BURKOV@burkov·
You are misremembering. What I've been saying is that coding is solvable by LLMs because it's possible to define a reward function for reinforcement learning: 1 when the code prints the expected output and 0 otherwise. This means that junior coders, those who don't think on their own but only implement what they've been told, are replaceable. Senior developers and architects, those whose job description is to think about the software's purpose, cannot be as easily replaced, because purpose cannot be reduced to 1/0. So, when someone said that software developers as a profession are entirely replaceable by agents, I called BS.
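The binary reward described above can be sketched in a few lines. This is a minimal illustration, not any lab's actual RL setup; the function name, the use of a subprocess, and the 5-second timeout are my own choices for the example:

```python
import subprocess
import sys

def binary_reward(source_code: str, expected_output: str) -> int:
    """Return 1 if the program prints the expected output, 0 otherwise."""
    try:
        # Run the candidate program in a fresh interpreter and capture stdout.
        result = subprocess.run(
            [sys.executable, "-c", source_code],
            capture_output=True, text=True, timeout=5,
        )
    except subprocess.TimeoutExpired:
        return 0  # a program that hangs earns no reward
    return 1 if result.stdout.strip() == expected_output.strip() else 0
```

For instance, `binary_reward("print(2 + 2)", "4")` yields 1, while a program that prints anything else (or crashes, or hangs) yields 0 — exactly the 1/0 signal that makes coding amenable to reinforcement learning, and that purpose is not reducible to.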
Santiago@santimtp·
@burkov I'm not sure if I'm misremembering, but not long ago you said that LLM-based agents aren't capable of replacing programmers and that all of this was nothing more than marketing hype BS.
BURKOV@burkov·
This is the truth: modern agentic coder AI is better at everything compared to an entry-level coder. **At everything.** This means that hiring an entry-level coder doesn't make any practical sense, unless the organization wants to invest 5 to 10 years into someone learning to code by hand at the level of a mid- or senior-level coder (which is currently still needed to oversee agentic coders). No leader in modern business would invest in an employee's education for more than a quarter or, at the very most, a year. No one. Unless we reintroduce slavery, where the worker is required to stay with the employer who invested in their education until the debt is paid in full. Which is, of course, ridiculous to imagine, right?
Peter Suwara@2ed2rfasqw·
@burkov Please tell me when your agents create the next Stardew Valley or Hollow Knight 🤣🤣🤣 How about one of us? 🤣 Sorry mate. You really fell for the hype train.
BURKOV@burkov·
The removed comment implied that my web apps have no users and this is why I wrongly think AI can write production code. Those folks, in their coping, went from "it will never write an entire app" to "it will never write a maintainable app" to "it will never write a scalable app." I guess their next line of defense will be that "it will never write an app anyone would pay for," and then "it will never write an app that would solve world hunger" or "it will never write an app that will control a rocket or a nuclear station," as if those folks write such software on a daily basis.