Dillon Laird

210 posts


@DillonLaird

Working on vision models @LandingAI 🤖 @StanfordEng @uwcse 👨‍🎓 neovim enthusiast 💻 I help neural networks find local minima 🧠

San Francisco, CA · Joined April 2009
347 Following · 555 Followers
Dillon Laird @DillonLaird ·
A few other top contenders: "The Information" by James Gleick, which covers information theory, and "Quantum" by Manjit Kumar, which follows the history of quantum mechanics. You can view my other book reviews on my website dillonl.ai/books
0 replies · 0 reposts · 1 like · 43 views
Dillon Laird @DillonLaird ·
I think there's still so much more we can learn about how intelligence works from existing organisms, including us, and this book does a great job of summarizing that knowledge in an accessible way.
1 reply · 0 reposts · 2 likes · 50 views
Dillon Laird @DillonLaird ·
I started putting together a list of short reviews for books I have read recently. My favorite has to be "A Brief History of Intelligence" by Max Bennett. I would recommend this book to anyone interested in AI and intelligence in general.
2 replies · 0 reposts · 2 likes · 90 views
Dillon Laird retweeted
Awni Hannun @awnihannun ·
I always thought the decline in fundamental AI research funding would happen because AI didn’t generate enough value to be worth the cost. But it seems like it’s happening because it generated too much value. And the race to capture that value is taking priority. Just remembering that a lot of this started in curiosity driven industry research labs.
34 replies · 36 reposts · 363 likes · 71.3K views
Dillon Laird @DillonLaird ·
We’ve introduced a new slimmed-down version of VisionAgent, designed to be faster, more reliable, and easier to use. With this release, we’ve streamlined the agentic workflow to focus on the most effective tools.
0 replies · 0 reposts · 3 likes · 110 views
Lior Alexander @LiorOnAI ·
Andrej Karpathy recently revealed that attention didn’t originate with the 2017 "Attention is All You Need" paper. It was first introduced 3 years earlier in the 2014 paper “Neural Machine Translation by Jointly Learning to Align and Translate”. It was named "attention" because the model learned to attend to input words while generating each output word.
Lior Alexander tweet media
14 replies · 20 reposts · 224 likes · 19.3K views
Dillon Laird @DillonLaird ·
In this new blog post we introduce a new version of VisionAgent: a modular, agentic AI framework that breaks visual reasoning problems into subtasks, chooses the right vision tools, and applies visual design patterns to solve them. Check it out here: dillonl.ai/posts/vision-a…
0 replies · 0 reposts · 2 likes · 104 views
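To make the decompose-then-dispatch idea concrete, here is a minimal sketch. Everything in it (the Step type, the tool names, the plan() interface) is a hypothetical illustration, not VisionAgent's actual API:

```python
# Hypothetical sketch of an agentic vision loop: a planner (any LLM call)
# splits a visual question into tool steps, and each step runs a specialized
# vision tool. All names below are illustrative, not VisionAgent's real API.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Step:
    tool: str                                # which vision tool to run
    args: dict = field(default_factory=dict) # arguments for that tool

def detect(image: Any, label: str) -> list:
    """Stand-in for an open-vocabulary object detector."""
    raise NotImplementedError

def count(regions: list) -> int:
    """Trivial aggregation tool: count detected regions."""
    return len(regions)

TOOLS: dict[str, Callable] = {"detect": detect, "count": count}

def solve(image: Any, question: str, plan: Callable[[str], list[Step]]) -> Any:
    """plan() maps a question to tool steps; each step consumes the prior result."""
    state: Any = image
    for step in plan(question):
        state = TOOLS[step.tool](state, **step.args)
    return state
```

For a question like "how many people are in this image?", the planner might return [Step("detect", {"label": "person"}), Step("count")].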
Dillon Laird @DillonLaird ·
A lot of VLMs like GPT-4o and Claude-3.5 are great with text but still struggle with vision. We tested them on a simple puzzle -- count the missing soda cans in a box of soda cans -- and they all struggle to answer correctly.
1 reply · 1 repost · 1 like · 211 views
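For anyone who wants to try this kind of probe themselves, a rough sketch using the OpenAI Python SDK is below; the image file name and prompt wording are placeholders, not the exact test harness used:

```python
# Rough sketch of testing a VLM on the counting puzzle described above,
# via the OpenAI Python SDK. The file name and prompt are placeholders.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

with open("soda_box.png", "rb") as f:  # hypothetical puzzle image
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Some cans are missing from this box of soda cans. "
                     "How many are missing? Answer with a single number."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```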
Tereza Tizkova @tereza_tizkova ·
How do LLMs work, store information, and perform hidden reasoning? Check out this beautifully written blog post by @DillonLaird. Some highlights I liked:
- Training on "majority" cases (frequently occurring) improves performance even for "minority" cases (less frequent) by helping the model learn how to store info
- LLMs can consistently achieve 2 bits/param in storing knowledge if sufficiently trained
- "Junk data" significantly reduces a model's capacity to store high-quality information, though this can be mitigated by labeling data with domain names
- It's possible for LLMs to generalize to solve problems out of distribution without memorizing
- LLMs approach problems by studying relationships between all variables (as opposed to humans, who only study the relationships needed for the final answer)
🔗👇
Tereza Tizkova tweet media
3 replies · 0 reposts · 7 likes · 708 views
Dillon Laird @DillonLaird ·
In this lecture, Zeyuan shows that LLMs are capable of learning algorithms to solve problems, which is a much more generalizable form of intelligence. It’s very exciting work that I’m surprised hasn't gotten more attention.
1 reply · 0 reposts · 1 like · 108 views
Dillon Laird @DillonLaird ·
This presentation, The Physics of Language Models, by Zeyuan Allen-Zhu, changed my perspective on LLMs. With a lot of recent research such as the GSM-Symbolic paper by Apple, it’s generally understood that LLMs memorize or find shortcut heuristics to solve problems.
1 reply · 0 reposts · 1 like · 174 views
Sayak Paul @RisingSayak ·
Stable Diffusion v1.5 continues to be the most downloaded model on the Hub. I am happy to see the downloads for the NSFW detector (hopefully for all the right reasons), potentially signaling the idea of "shipping with responsibility." Also, as @giffmana says "SigLIP FTW".
Sayak Paul tweet media
3 replies · 3 reposts · 38 likes · 14.4K views
Dillon Laird @DillonLaird ·
@RBehiel Just watched it over the weekend, great video! I've been thinking about spinors all week now
0 replies · 0 reposts · 2 likes · 134 views
Richard Behiel @RBehiel ·
Wow, the spinor video just passed a million views! Thanks to everyone who has watched it. It brings me great joy to see so many people taking an interest in math and physics, and to see all the nice comments.
6 replies · 0 reposts · 77 likes · 3.1K views
Dillon Laird @DillonLaird ·
Amongst these papers I've always thought that the Neural Turing Machine used it in the most novel way for solving general problems. Alex Graves gives a great overview of the next iteration of this architecture, the Differentiable Neural Computer, in this talk: youtu.be/steioHoiEms?si…
0 replies · 0 reposts · 2 likes · 163 views
Dillon Laird @DillonLaird ·
A lot of people might not know this, but the attention mechanism was developed almost simultaneously in 3 papers in late 2014:
- Neural Machine Translation by Dzmitry Bahdanau
- Memory Networks by Jason Weston
- Neural Turing Machines by Alex Graves
1 reply · 0 reposts · 3 likes · 185 views
Dillon Laird @DillonLaird ·
Thanks for hosting @tereza_tizkova!
Tereza Tizkova @tereza_tizkova

🎙️ Speakers
We got really great feedback on the lightning talks. I want to thank all the speakers who made time on Saturday and presented cool demos. It's definitely worth following these founders and AI companies:
📢 Vasek Mlejnsky (@mlejva) - Founder and CEO of @e2b_dev
📢 Brendon Geils (@BrendonGeils) - Founder and CEO of @AthenaIntell
📢 Dillon Laird (@DillonLaird) - Founding engineer at @LandingAI
📢 Atai Barkai (@ataiiam) - Founder of @CopilotKit
📢 Nate Sesti (@NateSesti) - Co-founder of @continuedev
📢 @BazeleyMikiko - DevRel at @FireworksAI_HQ
📢 Sam Stowers (@sammakesthings) - Engineer at @weights_biases
📢 Yohei Nakajima (@yoheinakajima) - Founder of @babyAGI_

0 replies · 0 reposts · 4 likes · 201 views
Dillon Laird retweeted
Andrew Ng @AndrewYNg ·
A decision on SB-1047 is due soon. Governor @GavinNewsom has said he's concerned about its "chilling effect, particularly in the open source community". He's right, and I hope he will veto this. If you agree, please like/retweet this to show your support for VETOing SB-1047!
72 replies · 469 reposts · 1.7K likes · 630.4K views
Dillon Laird @DillonLaird ·
🧵- 55:13 I cover new research adding an orchestrator agent that can chat with the user and manage other agents, as well as future directions for the agentic framework. youtu.be/_mDnRi6DZA4?si…
0 replies · 0 reposts · 0 likes · 72 views
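As a rough illustration of the orchestrator pattern mentioned at 55:13 (a hypothetical sketch; the router heuristic and sub-agent names are illustrative, not the research implementation):

```python
# Hypothetical sketch of an orchestrator agent that chats with the user and
# routes work to specialized sub-agents. In practice route() would itself be
# an LLM call; the lambda sub-agents here are stand-ins.
from typing import Callable

SUB_AGENTS: dict[str, Callable[[str], str]] = {
    "vision": lambda task: f"[vision agent handles: {task}]",
    "coding": lambda task: f"[coding agent handles: {task}]",
}

def route(message: str) -> str:
    """Toy keyword router; a real orchestrator would classify with an LLM."""
    return "vision" if "image" in message.lower() else "coding"

def orchestrate(message: str) -> str:
    """Chat entry point: pick a sub-agent and delegate the user's request."""
    return SUB_AGENTS[route(message)](message)

print(orchestrate("Count the people in this image"))
```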
Dillon Laird @DillonLaird ·
🧵- 31:22 Shankar Jagadeesan showcases how to use and prompt VisionAgent, including a cool use case tracking badminton players' total distance traveled during a game, as well as current areas of improvement.
1 reply · 0 reposts · 0 likes · 76 views
Dillon Laird @DillonLaird ·
🧵Here's our VisionAgent talk from last Wednesday. We cover everything from an architectural overview, to how best to utilize VisionAgent, to future research directions!
Dillon Laird tweet media
1 reply · 0 reposts · 0 likes · 124 views