Jonathan Irwin

989 posts


@jonoirwin

5 * cron job - would definitely run again || Susceptible to stingrays 🐢 https://t.co/VCPKzne2yp 🙌 || YC W22

(New York)² · Joined June 2009
3.6K Following · 326 Followers
Jonathan Irwin retweeted
Tavus @tavus
Most conversational AI understands words, not people. Introducing Raven-1, our audio and video perception model that gives AI the ability to understand emotion, intent, and context the way humans do.
55 replies · 93 reposts · 974 likes · 1.7M views
Jonathan Irwin retweeted
Hassaan Raza @hassaanraza
The interface of the future is human. We've raised a $40M Series B from CRV, Scale, Sequoia, and YC to teach machines the art of being human, so that using a computer feels like talking to a friend or a coworker. And today, I'm excited for y'all to meet the PALs: a new human-computing interface.

PALs are emotionally intelligent, multimodal, and capable of understanding and perceiving. They can see, hear, reason, and even look like us. We're releasing our 5 favorite PALs to start. Each PAL has its own distinct personality, from AI assistants to best friends.

PALs:
- Meet us where we are. Face-to-face over video call, on the phone, or even by text.
- Are always thinking. They're proactive, reach out first, remind you about what you forgot, or might just check in on you.
- Understand us, finally. PALs can see us, understand our tone, emotion, and intent, and communicate in ways that feel more human.
- Evolve with you. PALs have advanced memory, remember your preferences and needs, and adapt themselves over time.
- Are capable. PALs can handle complex tasks, from responding to your emails to moving your schedule around to creating docs and doing research for you.

Science fiction promised us a new human-computer interface, beyond the GUIs of yesterday, a human-like interface that would feel second-nature to use. That future never came, until now. Charlie's story brings this idea to life. We're excited for you to meet Charlie and his PALs for free at tavus.io. Enjoy the film 👇
336 replies · 300 reposts · 2.4K likes · 2M views
Jonathan Irwin retweeted
Tavus @tavus
Last night we rolled out the red carpet for a special premiere and a first-ever look at the future of human computing. We took over the Presidio Theatre in SF, filled it with retro computers, and brought everything back to life for a full-on immersive experience.
[4 images]
17 replies · 22 reposts · 101 likes · 12.6K views
Jonathan Irwin retweeted
cerebriumai @cerebriumai
and that's a wrap! #vapicon ✅ turns out everyone faces similar challenges when building voice agents - scalability & low latency - both of which Cerebrium can solve! reach out to us for up to $60 free credits before October 16th 👀 thank you san francisco and @Vapi_AI 🤍
[image]
5 replies · 4 reposts · 22 likes · 1.3K views
Jonathan Irwin retweeted
cerebriumai @cerebriumai
Ever wished your voice assistant could actually do something useful—like send invoices or manage subscriptions? We just published a tutorial on integrating @PayPal's Model Context Protocol (MCP) into a real-time voice agent. cerebrium.ai/blog/integrati… #mcp #voiceai #genai #llm
2 replies · 3 reposts · 5 likes · 1.5K views
Jonathan Irwin retweeted
cerebriumai @cerebriumai
We’re excited to share that we’ve raised an $8.5M seed round to scale the high-performance, serverless infrastructure platform for AI. Led by @GradientVC, with participation from @ycombinator, Authentic Ventures, and an incredible group of angels and operators. 🧵👇
[image]
5 replies · 4 reposts · 29 likes · 5.8K views
Jonathan Irwin retweeted
Justine Moore @venturetwins
These AI celebrity videos are a weirdly effective way to learn new concepts. And insanely viral - this one has 5M views in two days 🤯
456 replies · 1.5K reposts · 20.4K likes · 2.4M views
Jonathan Irwin retweeted
Rime @rimelabs
Big news: Rime has raised a $5.5M seed round! 💸💸

We're building the most expressive, lifelike AI voices for real-time conversations, voices that sound truly human. Led by Unusual Ventures with support from Founders You Should Know, Cadenza, and incredible angels like Michael Akilian, Maran Nelson, Nick Arner, Molly Mielke, and more.

From powering phone orders at Domino's to automating healthcare calls, Rime is already behind tens of millions of conversations each month. We just launched Arcana, the most expressive spoken language model on the market, and we're still just getting started!

🚀 Fast. Realistic. Built for scale.
🎙️ Have a quick chat with our voices.
🤝 Join our team.
🎯 Full announcement below.
[image]
14 replies · 9 reposts · 64 likes · 10.7K views
Jonathan Irwin retweeted
Dalton Caldwell @daltonc
The S&P 500 now has 3 YC companies: DoorDash, Airbnb and Coinbase.
103 replies · 165 reposts · 4K likes · 315.5K views
Jonathan Irwin retweeted
Bindu Reddy @bindureddy
All of the billions spent on AI innovation just so we can get these beautiful cat videos
4.8K replies · 30.9K reposts · 251.2K likes · 24.7M views
Jonathan Irwin retweeted
Barsee 🐶 @heyBarsee
It's been 24 hours since OpenAI unexpectedly shook the AI image world with 4o image generation. Here are the 14 most mind-blowing examples so far (100% AI-generated):

1. Studio Ghibli style memes
[4 images]
3K replies · 13.4K reposts · 162.4K likes · 51.9M views
Jonathan Irwin retweeted
CALL TO ACTIVISM @CalltoActivism
This is Marco Rubio explaining how the USA promised to defend Ukraine forever if they got rid of their nuclear arsenal left after the Soviet Union fell. This is why lil marco was sinking into the couch. He was hoping we wouldn’t find it…so don’t RT right now this very second.
3.1K replies · 67.7K reposts · 131.5K likes · 8.4M views
Jonathan Irwin retweeted
Andrej Karpathy @karpathy
I don't have too too much to add on top of this earlier post on V3, and I think it applies to R1 too (which is the more recent, thinking equivalent).

I will say that Deep Learning has a legendary ravenous appetite for compute, like no other algorithm that has ever been developed in AI. You may not always be utilizing it fully, but I would never bet against compute as the upper bound for achievable intelligence in the long run. Not just for an individual final training run, but also for the entire innovation / experimentation engine that silently underlies all the algorithmic innovations.

Data has historically been seen as a separate category from compute, but even data is downstream of compute to a large extent - you can spend compute to create data. Tons of it. You've heard this called synthetic data generation, but less obviously, there is a very deep connection (equivalence even) between "synthetic data generation" and "reinforcement learning". In the trial-and-error learning process in RL, the "trial" is the model generating (synthetic) data, which it then learns from based on the "error" (/reward). Conversely, when you generate synthetic data and then rank or filter it in any way, your filter is straight up equivalent to a 0-1 advantage function - congrats, you're doing crappy RL.

Last thought. Not sure if this is obvious. There are two major types of learning, in both children and in deep learning. There is 1) imitation learning (watch and repeat, i.e. pretraining, supervised finetuning), and 2) trial-and-error learning (reinforcement learning). My favorite simple example is AlphaGo: 1) is learning by imitating expert players, 2) is reinforcement learning to win the game. Almost every single shocking result of deep learning, and the source of all *magic*, is always 2. 2 is significantly, significantly more powerful. 2 is what surprises you. 2 is when the paddle learns to hit the ball behind the blocks in Breakout. 2 is when AlphaGo beats even Lee Sedol.

And 2 is the "aha moment" when DeepSeek (or o1 etc.) discovers that it works well to re-evaluate your assumptions, backtrack, try something else, etc. It's the solving strategies you see this model use in its chain of thought. It's how it goes back and forth thinking to itself. These thoughts are *emergent* (!!!), and this is actually seriously incredible, impressive, and new (as in publicly available and documented etc.). The model could never learn this with 1 (by imitation), because the cognition of the model and the cognition of the human labeler is different. The human would never know to correctly annotate these kinds of solving strategies or what they should even look like. They have to be discovered during reinforcement learning as empirically and statistically useful towards a final outcome.

(Last last thought/reference, this time for real: RL is powerful but RLHF is not. RLHF is not RL. I have a separate rant on that in an earlier tweet x.com/karpathy/statu…)
Andrej Karpathy @karpathy

DeepSeek (Chinese AI co) making it look easy today with an open weights release of a frontier-grade LLM trained on a joke of a budget (2048 GPUs for 2 months, $6M). For reference, this level of capability is supposed to require clusters of closer to 16K GPUs; the ones being brought up today are more around 100K GPUs. E.g. Llama 3 405B used 30.8M GPU-hours, while DeepSeek-V3 looks to be a stronger model at only 2.8M GPU-hours (~11X less compute).

If the model also passes vibe checks (e.g. LLM arena rankings are ongoing, my few quick tests went well so far) it will be a highly impressive display of research and engineering under resource constraints.

Does this mean you don't need large GPU clusters for frontier LLMs? No, but you have to ensure that you're not wasteful with what you have, and this looks like a nice demonstration that there's still a lot to get through with both data and algorithms. Very nice & detailed tech report too, reading through.

361 replies · 2.1K reposts · 14.3K likes · 2.4M views
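Karpathy's point that filtering synthetic generations is "straight up equivalent to a 0-1 advantage function" can be sketched with a toy REINFORCE loop (a minimal illustration; the three-answer policy and the `keep` filter are invented for this sketch, not any real training code). With a binary advantage, dropped samples contribute zero gradient, so the update reduces to supervised fine-tuning on the kept samples:

```python
import math
import random

random.seed(0)

# Toy "policy": a softmax distribution over three candidate answers.
logits = {"good answer": 0.0, "meh answer": 0.0, "bad answer": 0.0}

def probs():
    z = {k: math.exp(v) for k, v in logits.items()}
    total = sum(z.values())
    return {k: v / total for k, v in z.items()}

def sample():
    r, acc = random.random(), 0.0
    for k, p in probs().items():
        acc += p
        if r < acc:
            return k
    return k  # guard against float rounding

def keep(answer):
    """The filter over generations, i.e. a 0-1 advantage function."""
    return 1.0 if answer == "good answer" else 0.0

LR = 0.5
for _ in range(200):
    answer = sample()          # "trial": the model generates (synthetic) data
    advantage = keep(answer)   # "error"/reward: binary keep-or-drop
    p = probs()
    # REINFORCE: move logits along advantage * grad log p(answer).
    # Dropped samples (advantage 0) change nothing, so this is exactly
    # supervised fine-tuning on the filtered data.
    for k in logits:
        grad = (1.0 if k == answer else 0.0) - p[k]
        logits[k] += LR * advantage * grad

final = probs()
best = max(final, key=final.get)
print(best)  # -> "good answer"
```

Running the loop drives essentially all probability mass onto the answers the filter accepts; swapping `keep` for a graded reward turns the same loop into ordinary policy-gradient RL rather than the degenerate 0-1 case.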
Jonathan Irwin retweeted
cerebriumai @cerebriumai
In preparation for our PH launch on the 11th, we're giving away awesome Windbreakers to those who contribute to our examples GitHub repo! Look in the comments for details on how to enter, as well as ideas to submit! #startups #ai #genai #Giveaway #developertools
[image]
1 reply · 3 reposts · 3 likes · 353 views