Pinned Tweet
Livepeer


@BrianRoemmele this is where real-time video AI actually matters
not just robots moving boxes - models watching, adapting, reacting live
that’s a very different infra problem than batch inference

@Scobleizer @nvidia the loop is getting tighter… humans are barely in it now

She has one of the best seats on the future that exists.
Sydney Sykes runs the @NVIDIA Inception program's venture relationships. She works with all the other VCs investing in AI startups.
After hearing what Inception is (it helps accelerate the many startups in the NVIDIA ecosystem), we dive into what's going on because of what I see as a major inflection point that AI just brought. You can feel it. The AI world has sped up in the past few weeks.
Worthy of watching, but I know you are busy with your own agents. Click the Grok button and ask it "please give me a lengthy report on what you learned by watching this."
I won't be offended. I do the same now. AI has greatly increased the demands on my life. So I don't have time to watch nearly as many videos as I once did. Who said AI would make us all jobless? They are wrong.
Thanks to CNQQ, the China Tech ETF, for having me report for its audience, and to NVIDIA for inviting me to do this work. And thank you to NVIDIA's conference team for arranging this and recording it.
Such an awesome way to start the day here on day four of @nvidiagtc.
Oh, and she asked to keep the investor report that my AI generated early this morning (posted it here then, it's quite good). Thanks @blevlabs for preparing me.

@rand_longevity depends… is it low latency or do I have to wait for replies

$225 billion on inference costs through 2030.

AI video will scale to hundreds of millions of users sooner than you think.

What infrastructure can handle that without costs spiraling out of control?

Quoting Chubby♨️ @kimmonismus:
OpenAI is bringing Sora directly into ChatGPT. On the one hand, this might finally push weekly users above the 1B mark. On the other hand, it would come at exorbitant inference costs. The company has projected spending over $225 billion on inference between now and 2030, and video generation is significantly more resource-intensive than text or images.

Friendly reminder that AI will never be worse than it is right now.
If you assume any rate of improvement over any reasonable period, learning how to use it becomes your #1 priority.

@VaibhavSisinty especially once all of this is happening in real-time

@PeterDiamandis learning how to ask the right things is becoming the real skill
