Ryan Brown



We’re kicking off an early beta of the new Sesame iOS app, which includes the ability to search, text and think. sesame.com/beta

I still can't explain why Sesame's TTS is so much better than all the competition: it sounds more natural, has lower latency, and is funnier.

Want to share what it actually took, in schedule and units, to get @ImpulseLabs_ across the finish line to shipping. Caveat: we built almost everything from scratch, which meant modules like the power electronics and temperature sensors ALSO had to be matured / derisked. We built *hundreds of units* (and ran / ruined a ton through rel testing) before a paying customer got one. Maybe next time I'll pick a less regulated industry 🤣

0. team in place
1. prototype stove with V1 power electronics / battery (server rack): +4mo (1 unit built)
2. V2 power electronics, V1 temp control: +4mo
3. works-like with V2 temp control: +4mo (5-10 dev kits)
4. P0 (boards only, at CM): +4mo
5. P1 (looks-like, works-like, made at CM) with V3 power + temp control: +4mo (September 2023; UL process kicked off) (10 units built, P1 shown at CES)
6. EVT with V4+V5 power + temp control, production inverter: +7mo (lunar new year) (~50 built)
7. DVT: +6mo (~70 built)
8. DVT2 w/ final UL/FCC fixes: +3mo (~20 built)
9. PVT: +2mo (MET gives us UL 858 certification)
10. MP ramp with "final final UL 858 fixes": +3mo

This was probably the most insane hardware program I've ever participated in. It's amazing to be able to share the end result.

We're shipping @ImpulseLabs_! After four years of company-building and a lot of engineering, manufacturing, and compliance work, customers are getting units. Huge thanks to everyone who pre-ordered since January 2024 and stuck with us. You made this moment happen.

went to a fancy dinner in SF and everybody went to Harvard (or Stanford)


joining @sesame as head of design today! when I first met @brendaniribe a few months ago, his vision for sesame and his way of talking about design and craft inspired me. felt the same when I met @natemitchell. couldn't be more excited to build with this amazing team.




Today we're excited to introduce Vy, our AI that sees and acts on your computer. At Vercept, our mission is to reinvent how humans use computers, enabling you to accomplish orders of magnitude more than you can today.

Vy is a first glimpse at AI that sees and uses your computer just like you do. It runs natively on your machine, with access to all the applications and accounts you're already signed into. It is available for download starting today!

As part of the release, we've built VyUI, an AI model that bridges the gap between language and your screen. It sets a new state of the art on UI grounding benchmarks like ScreenSpot v1, ScreenSpot v2, and GroundUI Web, beating leading models from OpenAI, Google, and Anthropic.

Check out what Vy can do in the thread below: 🧵👇
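For readers new to the term: "UI grounding" means mapping a screenshot plus a natural-language instruction to a location on screen, which is exactly what benchmarks like ScreenSpot score. The sketch below is purely illustrative and is NOT Vercept's API; `ground()` and `GroundingResult` are hypothetical names, and only pyautogui is a real library.

```python
# Purely hypothetical sketch of the screenshot -> instruction -> click
# contract that UI-grounding benchmarks (e.g. ScreenSpot) measure.
# `ground()` and `GroundingResult` are made-up names, NOT Vercept's API;
# pyautogui is a real library used here for screen capture and clicking.
from dataclasses import dataclass

import pyautogui  # pip install pyautogui


@dataclass
class GroundingResult:
    x: int             # predicted pixel column of the target element
    y: int             # predicted pixel row of the target element
    confidence: float  # model's confidence in the prediction


def ground(screenshot, instruction: str) -> GroundingResult:
    """Stand-in for a UI-grounding model: given a screenshot and an
    instruction like 'click the Submit button', predict where that
    element sits on screen. A real system would run a vision-language
    model here."""
    raise NotImplementedError("swap in an actual grounding model")


def click_element(instruction: str) -> None:
    shot = pyautogui.screenshot()       # capture the current screen
    result = ground(shot, instruction)  # model maps text -> pixel target
    if result.confidence >= 0.5:        # arbitrary acceptance threshold
        pyautogui.click(result.x, result.y)


# click_element("click the Compose button")
```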

the core research team behind the sesame voice model is <9 ppl, as @_apkumar walked through in our latest 1.5 hr podcast. talent density beats team size most days

HOLY SHIT, Sesame Labs just dropped CSM (Conversational Speech Model), Apache 2.0 licensed! 💥
> Trained on 1 MILLION hours of data 🤯
> Contextually aware, emotionally intelligent speech
> Voice cloning & watermarking
> Ultra-fast, real-time synthesis
> Based on the Llama architecture with a Mimi-like decoder
> Apache 2.0 licensed
> Weights on the Hub
So cool to see such a strong speech backbone out in the wild! Kudos @sesame team! 🤗
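If you want to kick the tires: below is a minimal sketch following the quickstart in the SesameAILabs/csm GitHub repo as of release (weights published as sesame/csm-1b on the Hugging Face Hub). The `load_csm_1b` / `generate` names come from that repo's README and may have changed since, so treat this as a sketch rather than a stable API reference.

```python
# Minimal sketch following the quickstart in the SesameAILabs/csm GitHub
# repo at release time (weights: sesame/csm-1b on the Hugging Face Hub).
# The load_csm_1b / generate names come from that repo's README and may
# have changed since; this is not a stable, official API reference.
import torch
import torchaudio
from generator import load_csm_1b  # module shipped in the csm repo

device = "cuda" if torch.cuda.is_available() else "cpu"
generator = load_csm_1b(device=device)  # downloads the 1B checkpoint

# `context` can carry prior conversation segments, which is what makes
# the model contextually aware rather than a plain sentence-level TTS.
audio = generator.generate(
    text="Hello from Sesame.",
    speaker=0,                  # integer speaker id
    context=[],                 # empty: no conversational history
    max_audio_length_ms=10_000,
)

# `audio` is a 1-D tensor at the generator's native sample rate.
torchaudio.save("hello.wav", audio.unsqueeze(0).cpu(), generator.sample_rate)
```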
