sindarin.

162 posts

@SindarinTech

Fast voice AI agents that actually work. Get started at https://t.co/hoFcp5Dcbd

Joined January 2023
117 Following · 1.4K Followers
Pinned Tweet
sindarin.@SindarinTech·
Speak to our state of the art conversational speech AI right here in your browser: sindarin.tech
sindarin.@SindarinTech·
Thinking about rebranding to TurboClankers, Incorporated
sindarin. retweeted
Kai Brokering@kai_brokering·
@batwood011 Yeah, sindarin had the fastest turn-taking over a year ago, but people only knew about Vapi and Vocode
sindarin. retweeted
Brian Atwood@batwood011·
We have built an absolutely killer context engineering platform for real-time voice agents at @SindarinTech. DM me and I’d be happy to show it to you.
Philipp Schmid@_philschmid

What is Context Engineering? "Context Engineering is the discipline of designing and building dynamic systems that provide the right information and tools, in the right format, at the right time, to give an LLM everything it needs to accomplish a task." Read it: philschmid.de/context-engine… Cool to see that I wrote something useful for people. 🙂

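Schmid's definition above can be made concrete with a small sketch. Everything here is hypothetical (the `ContextBundle` shape, the retriever and tool-registry interfaces are made up for illustration, not any product's API); the point is only that the context for each LLM call is assembled dynamically, per task:

```python
from dataclasses import dataclass

@dataclass
class ContextBundle:
    """Everything assembled for a single LLM call."""
    instructions: str
    documents: list
    tools: list

def build_context(task, user_profile, retriever, tool_registry):
    """Assemble per-task context: right info, right tools, right format."""
    # Pull only the documents relevant to this task, not the whole knowledge base.
    documents = retriever(task)
    # Expose only the tools this task can plausibly need.
    tools = [t for t in tool_registry if t["applies_to"](task)]
    instructions = (
        f"You are assisting {user_profile['name']}.\n"
        f"Task: {task}\n"
        "Use the attached documents and tools; say so if information is missing."
    )
    return ContextBundle(instructions, documents, tools)
```

The "dynamic systems" part of the definition is the two filtering steps: a static mega-prompt would ship every document and every tool on every call.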
sindarin. retweeted
Brian Atwood@batwood011·
Here's the thing about deploying truly low-latency, full-duplex, realtime speech products in the real world: you only have a couple of hundred milliseconds to do *all of the work* that you need to do to prepare your response to the user.

Most products that implement sophisticated text-based chat agents today do all sorts of RAG, auxiliary LLM calls, API calls, and so on after their user hits "send" and before generating a final response.

You cannot do it that way for realtime speech. For all practical purposes, you will *never* be able to do it that way for realtime speech. You are battling the speed of light and RDBMS read/write speeds with every API call. You are battling LLM TTFT and ITL - which can't fit the latency criteria even using a 7B on an 8xB200 today - with every secondary LLM call.

So the architecture of a realtime, full-duplex speech product needs to be fundamentally different. You need new models and new infrastructure that can make speech products useful while still being lightning-fast. That's what we've built at @SindarinTech.
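One way to read the architectural claim above: the slow work (retrieval, API calls) has to overlap with the user's speech rather than start at end-of-turn. A toy illustration in Python asyncio, with made-up timings (0.15 s per ASR partial, 0.3 s of retrieval) standing in for real components; this is a sketch of the general idea, not how any particular product is built:

```python
import asyncio
import time

RETRIEVAL_SECONDS = 0.3   # stand-in for RAG + DB + API work
PARTIAL_INTERVAL = 0.15   # stand-in for ASR partial-transcript cadence

async def retrieve(query: str) -> str:
    """Pretend RAG / API call."""
    await asyncio.sleep(RETRIEVAL_SECONDS)
    return f"context for {query!r}"

async def asr_partials(partials):
    """Pretend streaming ASR: partial transcripts arrive while the user talks."""
    for p in partials:
        await asyncio.sleep(PARTIAL_INTERVAL)
        yield p

async def serial_latency(partials) -> float:
    """Chat-style: start the slow work only after the user stops talking."""
    final = None
    async for p in asr_partials(partials):
        final = p
    end_of_turn = time.monotonic()
    await retrieve(final)
    return time.monotonic() - end_of_turn   # the silence the user hears

async def overlapped_latency(partials) -> float:
    """Realtime-style: kick off the slow work on the first partial transcript."""
    task = None
    async for p in asr_partials(partials):
        if task is None:
            task = asyncio.create_task(retrieve(p))
    end_of_turn = time.monotonic()
    await task
    return time.monotonic() - end_of_turn
```

The overlapped version hides part of the retrieval time behind the user's own speech; a real system would also re-query on later partials and cancel stale work.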
sindarin. retweeted
Brian Atwood@batwood011·
When you speak with the Persona on our landing page at sindarin.tech, we immediately get a summary in a Slack channel that looks like this. If you say something hilarious, I'll post it here.
[image]
sindarin.@SindarinTech·
We released one of our first demos, smarterchild.chat, nearly two years ago. Someone just had an 18-minute conversation with it. It's still using gpt-3.5-turbo.
sindarin. retweeted
Startup Archive@StartupArchive_·
Peter Thiel: "AI in 2024 is like the Internet in 1999"

"It's clearly going to be important, big, transformative, have all kinds of interesting social and political effects - maybe even effects about how humans think about themselves. But on a business level, it's very, very treacherous. There were a lot of different Internet businesses that failed, and even the ones that succeeded, it was quite a rollercoaster."

He gives Amazon as an example. In 1999, it hit $113 a share, but by October 2001, it declined to $5.50.

"If you'd held it from December 1999 to today, you would have made 25x your money, but you would have first lost 95%. And then if you'd bought it in October 2001, you would have made 500x. So in some sense, Amazon was the obvious Internet company to invest in, and even that was quite a roller coaster."

He continues: "My suspicion is that that's roughly where we are in AI. It's correct as a technology, but extremely bubbly and crazed as a company-building thing or as a sector to invest."

Video source: @triggerpod (2024)
sindarin. retweeted
Brian Atwood@batwood011·
Conversational Canvas™️
sindarin.@SindarinTech·
Rachel, the concierge on our website, is now powered by Llama 4 🎊
sindarin. retweeted
Teortaxes▶️ (DeepSeek 推特🐋铁粉 2023 – ∞)
OpenAI scraping the public web and your work, then selling you access for $2000: 😬🖕♻️🤬🤢 DeepSeek training on your work, then returning it for free and compiled as part of a model: 😻🦾🐘🐳🇨🇳
Olivia Moore@omooretweets·
Our team @a16z is hosting a small group dinner for AI voice agent builders in SF. Who should we invite? Looking for anyone who is using voice as a critical wedge for a new product, B2B or B2C!
sindarin. retweeted
kache@yacineMTB·
If you don't like vibe coding, it's probably because you have no vibes to begin with. Just a completely vibeless individual
sindarin. retweeted
Startup Archive@StartupArchive_·
Mark Zuckerberg on the importance of engineers if you're building a technology company:

"We never thought about ourselves as a website or a social network or anything like that."

Mark believes many companies define themselves too narrowly: "It's one of the things I observed as soon as I came out to [Silicon] Valley. All these companies that called themselves technology companies were not really set up that way. The CEO wasn't technical. The board of directors had no one technical on it… And it's like alright, if that's your team, then you're not a technology company."

He believes there's a balance: "You don't want everyone to be an engineer because there's other things that matter too. But if you don't have a high enough share of the company as engineers, then you're not a technology company."

This makes sense when you view it in the context of Mark's strategy for Meta: "I define our strategy as: If we can learn faster than every other company, we're going to win. We're going to build a better product than everyone else because we're going to get it out first, we're going to have a good feedback loop, and we're going to learn what people like better than other people."

He concludes: "I think that's basically the formula. Be a technology company. Build a good foundation. Learn from what other people are focused on in the world. And iterate as quickly as you can."

Video source: @AcquiredFM (2024)
sindarin.@SindarinTech·
Should we build a smart speaker with fully-customizable voices, arbitrary function-calling, an integrations marketplace, and (optionally) full self-hosting?
sindarin. retweeted
Brian Atwood@batwood011·
The full problem statement is: How can you get the lowest possible latency with the smartest possible model at the lowest possible price to accomplish business objectives to the highest possible standard with the best possible UX?
Brian Atwood@batwood011

This is why we spend weeks to reduce the latency in our conversational engine by as little as 100ms at a time. Right now, most solutions still hover around 1.5-2s. Speaking with AI will never feel comfortable until the latency is reliably 200-500ms. And that won’t happen by settling for good enough. It takes real obsession.

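The numbers in the tweet above (1.5-2 s today versus a 200-500 ms target) are sums of serial pipeline stages. A back-of-the-envelope budget makes that concrete; the per-stage figures below are illustrative assumptions, not measurements from any real stack:

```python
# Illustrative per-stage latencies in milliseconds. These figures are
# assumptions for a back-of-the-envelope comparison, not measurements.
TYPICAL_MS = {
    "endpointing": 300,      # deciding the user has finished speaking
    "asr_final": 200,        # final transcript after end of speech
    "llm_ttft": 600,         # time to first LLM token
    "tts_first_audio": 400,  # first synthesized audio frame
}
TARGET_MS = {
    "endpointing": 100,
    "asr_final": 50,
    "llm_ttft": 150,
    "tts_first_audio": 100,
}

def voice_to_voice_ms(stages: dict) -> int:
    # The stages run serially, so the latency the user hears is their sum.
    return sum(stages.values())
```

Under these assumptions the typical pipeline sums to 1500 ms and the target one to 400 ms, which is why shaving 100 ms off any single stage matters: no one stage dominates the budget.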
sindarin. retweeted
Brian Atwood@batwood011·
This is why we spend weeks to reduce the latency in our conversational engine by as little as 100ms at a time. Right now, most solutions still hover around 1.5-2s. Speaking with AI will never feel comfortable until the latency is reliably 200-500ms. And that won’t happen by settling for good enough. It takes real obsession.
Nikunj Kothari@nikunj

AI slop is the new default unless we start to..
