Jeff Holmes
@holmesjtg

2K posts
Exploring the use of AI in education: AI agents in the role of learning partners and modeling good learning strategies.

Austin, TX · Joined January 2010
163 Following · 352 Followers
Pinned Tweet
Jeff Holmes
Jeff Holmes@holmesjtg·
Excited to share our latest demo video from #CES24 in partnership with #Nvidia! The video shows how #Convai makes it easy to deliver the next generation of AI-powered digital characters that can perform actions and perceive their environment. youtube.com/watch?v=UamcBg…
Jeff Holmes
Jeff Holmes@holmesjtg·
@DannyLimanseta @justindkamen You can also reduce the mesh face count (decimate modifier) and texture sizes in Blender. I think Meshy defaults to 2048x2048 for textures. Reducing down to 512 makes a big difference.
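The texture savings alone are easy to quantify. A back-of-the-envelope sketch (assuming uncompressed RGBA at 4 bytes per texel; the real in-engine footprint depends on compression and mipmaps):

```python
def texture_bytes(size: int, bytes_per_texel: int = 4) -> int:
    """Uncompressed memory for a square size x size RGBA texture."""
    return size * size * bytes_per_texel

full = texture_bytes(2048)   # the reported Meshy default
small = texture_bytes(512)   # the reduced size

print(full // (1024 * 1024))   # 16 (MiB)
print(small // (1024 * 1024))  # 1 (MiB)
print(full // small)           # 16x fewer bytes per texture
```

Halving a square texture's edge length quarters its memory, so 2048 → 512 is a 16x reduction per texture before the decimate modifier even touches the mesh.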
Danny Limanseta
Danny Limanseta@DannyLimanseta·
Over the weekend, I added a multiplayer "paint ball" dogfight to Tiny Skies, so players can fly around pelting each other with colourful paint balls, just for fun. I tried Meshy and Tripo to generate some 3D models for the game, but the mesh vertex counts are really high and lag the game a lot (probably because my game is a globe with thousands of objects, albeit mostly instanced). To my surprise, I found Gemini 3.1 Pro to be really quite good at generating good-looking 3D models that are low in poly count. It generated 3D models at a much higher quality than Opus 4.6, Sonnet 4.6, and GPT 5.4. You can see the upgraded biplane models and some objects in the video below. #vibejam
Pika
Pika@pika_labs·
P.S. Beta = fun glitches sometimes. We’re working on it! But we’d love your feedback, too. Join our new discord community to share your thoughts! discord.gg/t9BWbKzjn
Pika
Pika@pika_labs·
Conversations tend to go better with a face and a voice. That’s why we’re thrilled to release the beta version of the first video chat skill for ANY agent, powered by our new real-time model, PikaStream1.0. The skill preserves memory and personality, and enables real-time adaptability. And if you use it with your Pika AI Self, they’ll be able to execute agentic tasks during the call 💅
Jeff Holmes
Jeff Holmes@holmesjtg·
@LimitingThe Or free if they help load the other luggage on the plane? 😆
The Limiting Factor
The Limiting Factor@LimitingThe·
I wonder how much a plane ticket for your Optimus robot is going to cost. If I go traveling, I'd want to bring it with me. I imagine it would be considered special luggage and have a $100 fee.
Ezra Fairfax
Ezra Fairfax@RealEzraFairfax·
@SawyerMerritt Sucks, but it appears that if you ain't got a rear camera washer, you ain't got a car capable of unsupervised. That's just one noticeable change. I'm assuming there are AI hardware changes too, but that's just me being a pessimist.
Sawyer Merritt
Sawyer Merritt@SawyerMerritt·
NEWS: Tesla’s Robotaxi service launch in Las Vegas appears to be imminent, as many Model Y vehicles equipped with rear camera washers have been spotted in a parking lot in the suburbs of the city. The cars also have Texas license plates. Only the Model Y robotaxis in Austin have rear camera washers.
Living Life@TheZacher

Parking lot is filled with Tesla Model Y’s in Henderson, the suburbs of Las Vegas. Could this be getting ready for Robotaxi?

Jeff Holmes
Jeff Holmes@holmesjtg·
It will be interesting to see how the embodiment of AI changes human activity and culture. Maybe doom scrolling will become a thing of the past, since physical interaction will be so much more engaging and fun. One of the most exciting use cases for robots is coaching. Your own tennis sparring partner, billiards coach, or gardening expert.
Robert Scoble
Robert Scoble@Scobleizer·
This is a World Model breakthrough. Puts even more pressure on @Tesla_Optimus as it will show off a new humanoid in April. Version 3.0. Next week at @nvidia GTC the bar goes even higher, I hear. My money is that Optimus takes all the oxygen away from the others for months. I have been pitched thousands of companies in my life, including Siri, Insta360, and Matic Robots in my home. Each a pioneer in using AI in consumer products. Only Elon broke three laws when he demonstrated his to me. And I loved every minute of it. He doesn’t have to break laws anymore, but what if Optimus drove a new Roadster to the event through Austin traffic? Which robot is driving? Anyway, even if Elon fumbled the robot, the robots are still winning. And I expect Optimus will live up to the hype that it is the biggest launch of all time. I should have put a bet in on one of those prediction markets before writing this. But didn’t.
Andrew Kang@Rewkang

Researchers trained a humanoid robot to play tennis using only 5 hours of motion capture data. The robot can now sustain multi-shot rallies with human players, hitting balls traveling >15 m/s with a ~90% success rate. AlphaGo for every sport is coming.

Jeff Holmes
Jeff Holmes@holmesjtg·
@LinusEkenstam Why has no one implemented a small home system for lawns? How expensive are these systems?
Linus ✦ Ekenstam
Linus ✦ Ekenstam@LinusEkenstam·
Weed-a-bot-2000. This precious laser robot is powered by NVIDIA and terminates millions of weeds each hour it operates. No need for pesticides, just raw GPU power for the win.
China pulse 🇨🇳
China pulse 🇨🇳@Eng_china5·
Unitree Robotics robot shooting test. It feels like it was generated by AI. It’s terrifying… in the future, wars might not need humans anymore
Boris Power
Boris Power@BorisMPower·
@distributionat I prefer a data center for the $100b it would cost to make it today :)
toucan
toucan@distributionat·
SF needs underground Muni running from Dogpatch to Potrero to the Mission to the Castro.
Jeff Holmes
Jeff Holmes@holmesjtg·
It is also interesting to think of how all of this will affect the development of new libraries and programming languages in general. With fewer programmers writing code (and therefore running into challenges that they want to solve), what does that mean to the development of software tools?
Andrej Karpathy
Andrej Karpathy@karpathy·
A few random notes from claude coding quite a bit last few weeks.

Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December. i.e. I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write... in words. It hurts the ego a bit but the power to operate over software in large "code actions" is just too net useful, especially once you adapt to it, configure it, learn to use it, and wrap your head around what it can and cannot do. This is easily the biggest change to my basic coding workflow in ~2 decades of programming and it happened over the course of a few weeks. I'd expect something similar to be happening to well into double digit percent of engineers out there, while the awareness of it in the general population feels well into low single digit percent.

IDEs/agent swarms/fallibility. Both the "no need for IDE anymore" hype and the "agent swarm" hype are imo too much for right now. The models definitely still make mistakes and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might make. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE.md. Despite all these issues, it is still a net huge improvement and it's very difficult to imagine going back to manual coding. TLDR everyone has their developing flow, my current is a small few CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits.

Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch it struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased.

Speedups. It's not clear how to measure the "speedup" of LLM assistance. Certainly I feel net way faster at what I was going to do, but the main effect is that I do a lot more than I was going to do because 1) I can code up all kinds of things that just wouldn't have been worth coding before and 2) I can approach code that I couldn't work on before because of knowledge/skill issues. So certainly it's a speedup, but it's possibly a lot more an expansion.

Leverage. LLMs are exceptionally good at looping until they meet specific goals and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do, give it success criteria and watch it go. Get it to write tests first and then pass them. Put it in the loop with a browser MCP. Write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness. Change your approach from imperative to declarative to get the agents looping longer and gain leverage.

Fun. I didn't anticipate that with agents programming feels *more* fun because a lot of the fill-in-the-blanks drudgery is removed and what remains is the creative part. I also feel less blocked/stuck (which is not fun) and I experience a lot more courage because there's almost always a way to work hand in hand with it to make some positive progress. I have seen the opposite sentiment from other people too; LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

Atrophy. I've already noticed that I am slowly starting to atrophy in my ability to write code manually. Generation (writing code) and discrimination (reading code) are different capabilities in the brain. Largely due to all the little mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.

Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media. We're also going to see a lot more AI hype productivity theater (is that even possible?), on the side of actual, real improvements.

Questions. A few of the questions on my mind:
- What happens to the "10X engineer" - the ratio of productivity between the mean and the max engineer? It's quite possible that this grows *a lot*.
- Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill in the blanks (the micro) than grand strategy (the macro).
- What does LLM coding feel like in the future? Is it like playing StarCraft? Playing Factorio? Playing music?
- How much of society is bottlenecked by digital knowledge work?

TLDR Where does this leave us? LLM agent capabilities (Claude & Codex especially) have crossed some kind of threshold of coherence around December 2025 and caused a phase shift in software engineering and closely related fields. The intelligence part suddenly feels quite a bit ahead of all the rest of it - integrations (tools, knowledge), the necessity for new organizational workflows, processes, diffusion more generally. 2026 is going to be a high energy year as the industry metabolizes the new capability.
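The "naive first, optimize while preserving correctness" loop can be made concrete with a toy sketch (the problem and names here are illustrative, not from the thread): write the obviously correct O(n²) version yourself, then hand the agent a randomized equivalence check as the success criterion it must keep passing while it optimizes.

```python
import random

def max_subarray_naive(xs):
    """Obviously correct O(n^2) reference: try every contiguous slice."""
    return max(sum(xs[i:j])
               for i in range(len(xs))
               for j in range(i + 1, len(xs) + 1))

def max_subarray_fast(xs):
    """Optimized O(n) version (Kadane's algorithm) an agent might propose."""
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)      # extend the run or start fresh
        best = max(best, cur)
    return best

# The success criterion the agent loops against until it passes:
for _ in range(1000):
    xs = [random.randint(-50, 50) for _ in range(random.randint(1, 30))]
    assert max_subarray_fast(xs) == max_subarray_naive(xs)
print("fast version matches the naive reference on 1000 random inputs")
```

The point is the shape of the workflow, not this particular algorithm: the slow reference plus the randomized check is a goal the agent can grind against unsupervised, which is exactly the looping behavior the tweet describes.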
Aakash Gupta
Aakash Gupta@aakashgupta·
This headline undersells the real story. Stanford researchers identified 15-PGDH, a protein that increases as the body ages and drives tissue decline, as the root cause of cartilage loss. They call it a “gerozyme.” When they blocked it in old mice, cartilage thickened across the entire joint surface, and the regenerated tissue was hyaline cartilage, the smooth shock-absorbing kind found in healthy joints, not the weaker fibrocartilage that usually forms after injury. The mechanism matters. They weren’t looking for stem cells and found none involved. Instead, existing chondrocytes changed their gene expression patterns and assumed a more youthful state. Old cells started behaving young again. Human tissue samples from knee replacement surgeries also responded to the treatment by making new functional cartilage. Osteoarthritis affects one in every five American adults and costs about $65 billion in direct healthcare annually. The global knee and hip replacement market will hit $30 billion by 2030. Companies like Zimmer Biomet, Stryker, and Johnson & Johnson have built massive businesses on the assumption that worn cartilage can only be replaced, never regenerated. The company that licensed this technology, Epirium Bio, just reported positive Phase 1 results for an oral 15-PGDH inhibitor in older adults. Phase 2 starts mid-2026. The timeline: If Phase 2 and 3 work, you could see an FDA-approved pill or injection that regrows joint cartilage within 5-7 years. The loser here? Every company selling titanium and ceramic into aging bodies. The $30B replacement market gets repriced the moment Phase 2 hits. Human trials: this year. Watch this space.
Not the Bee@Not_the_Bee

Good News: Scientists discover a way to regrow cartilage, stop arthritis notthebee.com/article/good-n…

Jeff Holmes
Jeff Holmes@holmesjtg·
@justinskycak Really great summary of how people learn and why it's so challenging to bring these concepts into existing learning environments. Nicely done!
Jeff Holmes
Jeff Holmes@holmesjtg·
@SawyerMerritt Your personal algorithm should be set through a conversation with Grok.
Sawyer Merritt
Sawyer Merritt@SawyerMerritt·
Elon agrees the current X algorithm has gone to hell. Here are a few things I don't like about it: • If you like just one post, almost your entire feed becomes that topic. • I'm seeing far fewer people that I follow in my For You feed (I realize the Following tab exists, but I like a mix). • I'm seeing far too much political content, and less of what I actually like (Tesla, EVs, space, tech, etc).
Elon Musk@elonmusk

@fentasyl I agree (sigh)

Jeff Holmes
Jeff Holmes@holmesjtg·
@KanaWorks_AI Nice! Could you make lip sync work for facial animations that would be compatible with the Convai plug-in for Unity?
KANA
KANA@KanaWorks_AI·
Testing Tencent HY-Motion 1.0, part 2. Workflow: generate an image with Nanobanana → generate a model with Hunyuan3D → rig it with Hunyuan3D → create the animation with ComfyUI + Hunyuan Motion → integrate into Unity. Test results: 🐰 Dance: very good overall, with smooth motion. 🐰 Boxing (jab): the joints look slightly unnatural, but the arm swing and punch momentum are very good. At this point it is no exaggeration to say @TencentHunyuan is world-class in this field. If the UV-unwrapping accuracy improves further, it will be truly unbeatable. 2026.01.04 #3Dモデリング #ゲーム開発 #インディーゲーム #IndieGame #kanaworksai #ComfyUI #hunyuan
KANA@KanaWorks_AI

An era where you can build high-quality games even from scratch may be coming. 🐰 I tested Tencent HY-Motion 1.0. This model can convert natural language into 3D character animation. In other words: generate the model → add bones (rigging) → then drive the character with text alone. At present it can generate atomic motions such as standing, sitting, walking, squats, sit-ups, golf swings, and sweeping. It also handles compound motions like those shown in the video, such as "sitting and playing the drums" or "picking up an object from the ground." 🐰 I tried the compound motion "kneel on one knee, take out a rose hidden behind your back, and offer it to the lover in front of you." Of the four generated results, one came out almost exactly as intended; the others still have room for improvement. This is a truly exciting start, and the fact that it is open source is wonderful. Thank you, @TencentHunyuan. I am convinced that in 2026, "language → 3D character animation" will make a big leap. An era where even an individual starting completely from zero can develop a high-quality game is steadily becoming reality. Exciting. 2025.12.30 #3Dモデリング #ゲーム開発 #インディーゲーム #Indiegame #kanaworksai

Jeff Holmes
Jeff Holmes@holmesjtg·
@techhalla Okay, thanks! I guess the rigging is too simple? Is there a workflow that you know of to make it possible?
TechHalla
TechHalla@techhalla·
Create your own 3D models with AI, including · High-quality textures · Rigging · Animations, all ready in minutes for your filmmaking or game dev projects. Tripo makes it possible, and I’ll show you how below 👇
Jeff Holmes
Jeff Holmes@holmesjtg·
@robotaxi @aelluswamy No Robotaxi logo that I can see, but the fact that it has a plate suggests that it's not a vehicle being delivered to a new owner.
Andrey
Andrey@Andrey__HQ·
@JonhernandezIA Imagine training models on specific generations or historical periods. Like LLMs trained only on the 1750s or only the 1860s.
Jon Hernandez
Jon Hernandez@JonhernandezIA·
This is the coolest thing I've seen about LLMs in a long time. This guy is trying to train a model with only data from the 1800s, creating an LLM that behaves like that society... So damn cool reddit.com/r/LocalLLaMA/c…
Gregory
Gregory@Greg2UandMe·
@niccruzpatane @Tesla It can’t navigate to your favorites, so they need to add that to be truly useful in real life situations.
Nic Cruz Patane
Nic Cruz Patane@niccruzpatane·
Grok AI is now more deeply integrated into @Tesla UI. You can now ask it to handle all your navigation tasks in a very human-like manner. This is a game changer, especially when paired with FSD. Here’s a demo:
Patrick Loeber
Patrick Loeber@patloeber·
yesterday we released two new gemini TTS models. we also wrote a prompting guide, happy building and audio generation :) ai.google.dev/gemini-api/doc…
Jeff Holmes
Jeff Holmes@holmesjtg·
@kat_kampf @GoogleAIStudio Would love early access as I am constantly testing the cutting edge of Gemini, especially around speech capabilities.
kat kampf
kat kampf@kat_kampf·
We started internal testing of some big updates to the @GoogleAIStudio experience today! Coming to you early next year, but reply below if you’d like early access in the coming weeks 👀