
@elonmusk the only issue is conversational ability / actual e2e latency
if the funnel goes something like bci -> signals -> text -> llm -> tts and it takes 2-3 sec end to end,
then there's no point.
need to optimize the funnel heavily & track every ms to hit sub-700ms
regardless, great feat
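
the "track every ms" idea can be sketched as a per-stage timer around the funnel. everything here is hypothetical (placeholder stage functions, made-up delays); it just shows how you'd attribute latency to each hop and check the total against a 700 ms budget:

```python
import time

# Placeholder stages for the hypothetical funnel: bci -> signals -> text -> llm -> tts.
# The sleeps stand in for real processing time; none of this is a real API.
def decode_signals(raw: bytes) -> str:
    time.sleep(0.01)          # pretend decode cost
    return "hello"

def llm_respond(text: str) -> str:
    time.sleep(0.02)          # pretend LLM cost
    return "hi there"

def tts(text: str) -> bytes:
    time.sleep(0.01)          # pretend TTS cost
    return b"audio"

BUDGET_MS = 700               # sub-700ms end-to-end target

def run_pipeline(raw: bytes):
    timings = {}              # per-stage latency in ms
    out = raw
    for name, stage in [("decode", decode_signals), ("llm", llm_respond), ("tts", tts)]:
        t0 = time.perf_counter()
        out = stage(out)
        timings[name] = (time.perf_counter() - t0) * 1000
    total_ms = sum(timings.values())
    return out, timings, total_ms

audio, timings, total_ms = run_pipeline(b"raw eeg")
for name, ms in timings.items():
    print(f"{name}: {ms:.1f} ms")
print(f"total: {total_ms:.1f} ms ({'OK' if total_ms < BUDGET_MS else 'over budget'})")
```

the point of per-stage attribution is that a 2-3 s total tells you nothing about *where* to optimize; a breakdown does.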