

Nanea Reeves

@nanea
Nanea(nah-nay-ah) Reeves - CEO, TRIPP - Apple's Best In Spatial Computing, TIME Best Inventions @trippvr https://t.co/F1qMCe3QWe
Alt-X (@downloadaltx) builds AI agents that turn real estate deal documents into fully built underwriting models in Excel automatically, with every number cited back to the source. Congrats on the launch, @SamadiRyan and Michael! ycombinator.com/launches/PjC-a…
Right before they momma and Auntee transitioned they sang her this last song. 🪶🇺🇲Family Love and farewell. Surrounded and shrouded in the life she birthed.
🚨 NEW: Apple has been granted a new patent for technology that eliminates the need for prescription lenses in upcoming AR/XR glasses by correcting prescriptions, including astigmatism, on-device.
@OpenAI and @Cerebras have signed a multi-year agreement to deploy 750 megawatts of Cerebras wafer-scale systems to serve OpenAI customers. This has been a decade in the making. Deployment begins in early 2026, and when fully rolled out, it will be the largest high-speed AI inference deployment in the world.

OpenAI and Cerebras were both founded in 2015 with radically ambitious goals. OpenAI set out to build the software that would push AI toward general intelligence. Cerebras set out to rethink computing hardware from first principles. Our teams met as far back as 2017. We shared ideas, early work, and a common belief: there would come a point when model scale and hardware architecture would have to converge. That point has arrived.

ChatGPT set the direction for the entire industry. It showed the world what AI could be. Now we’re in the next phase - not proving capability, but delivering it at global scale.

The history of technology is clear on one thing: speed drives adoption. The PC industry didn’t operate at kilohertz. The internet didn’t change the world on dial-up. AI is no different. As models grow more capable, speed becomes the bottleneck. Slow systems limit what users can do, how often they engage, and whether AI becomes infrastructure or remains a novelty.

Cerebras was built for this moment. By keeping computation and memory on a single wafer-scale processor, we eliminate the data-movement penalties that dominate GPU systems. The result is up to 15× faster inference, without sacrificing model size or accuracy. That speed changes product design, user behavior, and ultimately productivity. For consumers, it means AI that feels instantaneous. For the economy, it means agents that can finally drive serious productivity growth.

For Cerebras, 2026 will be a defining year. With this collaboration with OpenAI, Cerebras’ wafer-scale technology will reach hundreds of millions - and eventually billions - of users.
We’re proud to work alongside OpenAI to bring fast, frontier AI to people around the world. This is what a decade of long-term thinking looks like.
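The "data-movement penalty" claim above can be made concrete with a back-of-envelope model: autoregressive decoding is typically memory-bandwidth bound, because generating each token requires streaming roughly all model weights through the processor once. The sketch below is illustrative only; the bandwidth and model-size figures are assumptions chosen to show the shape of the argument, not Cerebras or GPU specifications.

```python
# Back-of-envelope model: single-user decode throughput when inference is
# memory-bandwidth bound. tokens/sec ≈ memory_bandwidth / model_bytes.
# All concrete numbers below are illustrative assumptions, not vendor specs.

def decode_tokens_per_sec(model_params_billions: float,
                          bytes_per_param: float,
                          mem_bw_tb_per_sec: float) -> float:
    """Rough tokens/sec for one stream, ignoring compute and overheads."""
    model_bytes = model_params_billions * 1e9 * bytes_per_param
    bw_bytes_per_sec = mem_bw_tb_per_sec * 1e12
    return bw_bytes_per_sec / model_bytes

# A hypothetical 70B-parameter model at 2 bytes/param (fp16):
offchip = decode_tokens_per_sec(70, 2, 3.0)   # ~3 TB/s off-chip HBM (assumed)
onwafer = decode_tokens_per_sec(70, 2, 45.0)  # far higher on-die SRAM bw (assumed)

print(f"off-chip HBM:  {offchip:6.1f} tok/s")
print(f"on-wafer SRAM: {onwafer:6.1f} tok/s")
print(f"speedup:       {onwafer / offchip:.0f}x")
```

Under these assumed numbers the speedup is simply the bandwidth ratio (45/3 = 15x), which is one way a claim like "up to 15× faster inference" can arise when weights stay in on-die memory instead of being fetched from off-chip DRAM each step.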
