Salman // 萨尔曼

1.3K posts

@ForBo7_

「Open to Collabs」 • Dabbler • Learner • Explorer • Logger • https://t.co/jTudwv3AAp student • Dabbling in Embodied AI • 自学中文 // Self-learning Chinese

China · Joined September 2022
857 Following · 244 Followers
Pinned Tweet
Salman // 萨尔曼 @ForBo7_
Doing lesson 15 of the @fastdotai course; deducing how to rearrange convolutions as a matrix product
[image attachment]
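The lesson's own derivation isn't shown in the tweet; as a minimal sketch of the idea it refers to (the "im2col" trick, here in NumPy with assumed valid padding and stride 1), each sliding window is unrolled into a row so the whole convolution becomes one matrix product:

```python
import numpy as np

def conv2d_as_matmul(x, k):
    """Rearrange a 2D convolution (valid padding, stride 1) as a matrix product.

    Each sliding window of x is unrolled into a row (im2col); the kernel is
    flattened into a vector; the convolution is then a single matmul.
    """
    H, W = x.shape
    kh, kw = k.shape
    oh, ow = H - kh + 1, W - kw + 1
    # im2col: one row per output position
    cols = np.array([
        x[i:i + kh, j:j + kw].ravel()
        for i in range(oh) for j in range(ow)
    ])                                      # shape (oh*ow, kh*kw)
    return (cols @ k.ravel()).reshape(oh, ow)

x = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((2, 2))
out = conv2d_as_matmul(x, k)               # 3x3 output
```

This is the same rearrangement that lets deep learning frameworks implement convolutions on top of fast GEMM routines.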
Salman // 萨尔曼 retweeted
AA @measure_plan
i built an app to search old travel photos on my computer with natural language queries, using free local AI models:
- smolVLM to describe the scene and colours
- Roboflow RF-DETR to detect objects
- chromaDB to store metadata labels and run semantic search
- Python + Streamlit for the interface
fast and free search has been achieved internally
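The author's pipeline uses smolVLM, RF-DETR, and chromaDB; as a stand-in for those pieces, here is the core retrieval step with a toy bag-of-words "embedding" and cosine similarity (the captions, filenames, and `embed` function are all hypothetical placeholders for real model output):

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words vector standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# photo id -> caption a vision-language model might have produced
captions = {
    "IMG_001.jpg": "red temple roof under a blue sky",
    "IMG_002.jpg": "crowded night market with food stalls",
    "IMG_003.jpg": "blue ocean beach at sunset",
}

def search(query, k=2):
    q = embed(query)
    ranked = sorted(captions, key=lambda p: cosine(q, embed(captions[p])),
                    reverse=True)
    return ranked[:k]
```

In the real app, a vector store like chromaDB replaces the `sorted` scan and a learned embedding replaces the word counts, but the query flow is the same.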
Salman // 萨尔曼
I always find the term 面包车 (mianbaoche) quite funny. It refers to a car that looks like 🚐, and 面包 means bread...😄
Salman // 萨尔曼
I was wondering what 加仑 even meant! Must be from Cantonese then 😄 Mandarin: jialun Cantonese: gaaleon
[image attachment]
Salman // 萨尔曼 retweeted
ThePrimeagen @ThePrimeagen
i am using supermaven again and i have something to say about this whole AI thing. I think as a group (SWE) we rushed so fast into agents, when inline autocomplete + actual skills is crazy. A good autocomplete that is fast, like Supermaven, actually makes marked proficiency gains, while saving me from the cognitive debt that comes from agents. With agents you reach a point where you must fully rely on their output, and your grip on the codebase slips. It's insane how good Cursor Tab is. Seriously, I think we had something that genuinely improves one's coding ability (if you have it). Truly acts as a multiplier, and we left it in the dust because it is not sexy. hurts me on the inside.
Salman // 萨尔曼 @ForBo7_
fastai close reading repo. The repo is primarily designed for use with solve.it.com, though it should work with other LLM tools. If you encounter any issues, feel free to make a pull request so we can make this even better. github.com/ForBo7/fastai-…
Salman // 萨尔曼 @ForBo7_
Created close reading notebooks for almost every lesson of @jeremyphoward's fastai deep learning course (it's more than a course). Close reading is a technique for reading out of a text, not into it. Use an LLM and you're in flow state for longer: you ask right there, with all the context.
[image attachment]
Salman // 萨尔曼 retweeted
Jeremy Howard @jeremyphoward
A listener has created this detailed vocabulary and set of linked references for anyone interested in diving deeper: share.solve.it.com/d/28d1864aad07…
Machine Learning Street Talk @MLStreetTalk

A masterclass from @jeremyphoward on why AI coding tools can be a trap, and what 45 years of programming taught him that most vibe coders will never learn.
- AI coding tools exploit gambling psychology
- The difference between typing code and software engineering
- Enterprise coding AND prompt-only vibe coding are "inhumane", i.e. disconnecting humans from understanding-building
- AI tools remove the "desirable difficulty" you need to build deep mental models.
Out on MLST now!

Salman // 萨尔曼 @ForBo7_
Two built-in Python libraries worth exploring:
- functools
- itertools
Salman // 萨尔曼 retweeted
Vuk Rosić 武克 @VukRosic99
i'm thinking about how to make an online alternative to AI labs, where we can do the research we want to do and have some support between each other. it's like an online research lab: no salaries 😂 just open source research people would do in their free time. a lot of AI PhD students told me they would participate, but we need an exact roadmap, research ideas, ways to meaningfully contribute to the science, etc. i'm gonna post more about it on my social media
Salman // 萨尔曼 @ForBo7_
i've found that good communication with LLMs is not only asking it to ask you as much as possible, but also purposely extracting as much as possible out of it. e.g., you want it to create a poster; first ask it what good poster design entails
Salman // 萨尔曼 retweeted
Vuk Rosić 武克 @VukRosic99
How to have MASSIVE impact: if you want to have a MASSIVE impact on the world -> post 1 high quality science / math / AI research / physics / engineering / ... blog post per week for the rest of your life. Start today. Do not wait until you get better; you will never be happy. You will get better by posting. If you're a beginner, explain just simple concepts like SVD, Nesterov's momentum, RMSNorm, softmax, KL divergence, backprop...
> No AI-written text. Use AI to learn and understand it, but AI text lacks details. A few high quality paragraphs are better than pages of AI slop.
[image attachment]
Salman // 萨尔曼 retweeted
Dr. Luke in China @96Stats
China just released an open-source voice LLM called Habibi (um.. nice name haha) that can do 20+ Arabic dialects all in one. As someone who did some NLP projects, this is wayyyy harder than it sounds, as the data is so messy and Arabic isn't "one language" in daily life; dialects can be wildly different. I actually know the professor who made this model too, very clever guy with lots of NLP experience. He already made some models for various Chinese dialects, and i even know someone in Urumqi who made one for Uyghur and minority languages at Xinjiang University. Basically China has bossed this area and now they're making and selling it for other countries. Huge, because it shows people are coming to them as the ones who do it best.. not the US
[image attachment]
Salman // 萨尔曼 @ForBo7_
i'm finding that crafting summaries for close reading is a whole craft in and of itself 🥵
- length of the summary
- ensuring it's helpful for the LLM
- ensuring it's helpful for the reader
- level of detail needed, and what to skip
Salman // 萨尔曼 retweeted
Jim Fan @DrJimFan
We trained a humanoid with 22-DoF dexterous hands to assemble model cars, operate syringes, sort poker cards, fold/roll shirts, all learned primarily from 20,000+ hours of egocentric human video with no robot in the loop. Humans are the most scalable embodiment on the planet.

We discovered a near-perfect log-linear scaling law (R² = 0.998) between human video volume and action prediction loss, and this loss directly predicts real-robot success rate.

Humanoid robots will be the end game, because they are the practical form factor with minimal embodiment gap from humans. Call it the Bitter Lesson of robot hardware: the kinematic similarity lets us simply retarget human finger motion onto dexterous robot hand joints. No learned embeddings, no fancy transfer algorithms needed. Relative wrist motion + retargeted 22-DoF finger actions serve as a unified action space that carries through from pre-training to robot execution.

Our recipe is called "EgoScale":
- Pre-train GR00T N1.5 on 20K hours of human video, mid-train with only 4 hours (!) of robot play data with Sharpa hands. 54% gains over training from scratch across 5 highly dexterous tasks.
- Most surprising result: a *single* teleop demo is sufficient to learn a never-before-seen task. Our recipe enables extreme data efficiency.
- Although we pre-train in 22-DoF hand joint space, the policy transfers to a Unitree G1 with 7-DoF tri-finger hands. 30%+ gains over training on G1 data alone.

The scalable path to robot dexterity was never more robots. It was always us. Deep dives in thread:
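What a "log-linear scaling law with R² = 0.998" means concretely: loss falls linearly in log(data volume). A minimal NumPy sketch with made-up numbers (the real data points are in the paper, not the tweet):

```python
import numpy as np

# hypothetical (hours of human video, action-prediction loss) pairs,
# invented for illustration; only the log-linear shape mirrors the claim
hours = np.array([100, 500, 2000, 8000, 20000], dtype=float)
loss = np.array([0.90, 0.74, 0.60, 0.47, 0.38])

x = np.log(hours)
slope, intercept = np.polyfit(x, loss, 1)   # loss ≈ slope*log(hours) + intercept
pred = slope * x + intercept

# coefficient of determination: how well the log-linear fit explains the data
ss_res = np.sum((loss - pred) ** 2)
ss_tot = np.sum((loss - np.mean(loss)) ** 2)
r2 = 1 - ss_res / ss_tot
```

A negative slope with R² near 1 is what lets loss at a given video volume be extrapolated, and, per the tweet, loss in turn predicts real-robot success rate.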
Salman // 萨尔曼 retweeted
Chris Paxton @chris_j_paxton
Literally a robot mule. Carrying produce and helping around on the farm.