

Aloniq
@AloniqVC
We back pre-seed to Series A technology startups. Extraordinary innovation begins with an inspiring team!


Excited to announce today that my startup, @positron_ai, has closed a $230M Series B financing round at an over $1B valuation, co-led by the great folks at @jumptrading, Arena, and Unless Ventures, with strategic backing from @Arm! bloomberg.com/news/articles/…

Today, we’re thrilled to announce $20M in funding led by @a16z, with support from @saranormous, @amasad, @akothari, @garrytan, @justinkan, @atShruti, @naval, @scottbelsky, @gokulr, @soleio, @kevinhartz and more. @wabi is ushering in a new era of personal software, where anyone can effortlessly create, discover, remix, and share personalized mini apps.

For 50 years, software was made for people. For the next 50, it will be made by people. Just as YouTube unlocked creative power through video, Wabi will unlock creative power through software. The YouTube moment for apps is here. We can’t wait to see what you create.

There’s an app for that → There’s an app for you. Meet Wabi: the first personal software platform. Generate beautiful, useful, and fun little apps informed by your life.


We've raised $3.3M to build @archestra_ai 🎉



My startup, @positron_ai, just raised a $51.6M Series A to rebuild the infrastructure powering AI inference and bring Superintelligence to everyone. The round was led by Valor Equity Partners (@valorep), Atreides Management (@Atreidesmgmt), and DFJ Growth (@dfjgrowth), the same teams behind SpaceX, Tesla, xAI, and other companies pushing compute to its physical and economic limits.

Why inference? Because as AI moves from research into production, the real bottleneck isn’t training, it’s deploying transformer models at scale. GPUs were built for flexibility, not predictable workloads. That mismatch means wasted bandwidth, poor density, and massive power costs.

So at Positron, we built silicon specifically optimized for inference:
• >90% memory bandwidth utilization (GPUs typically ~30%)
• Multi-model hosting per card (higher density, lower power)
• Zero code changes: drop-in compatibility with the HuggingFace Transformers ecosystem
• U.S.-made silicon, stable supply chain, geopolitically resilient

We’re already shipping and running in production environments today. Here’s what we’ve built; link to our WSJ coverage and press release in the follow-up reply ↓
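The bandwidth-utilization figures in the post imply a simple back-of-envelope throughput comparison: decode-phase transformer inference is typically memory-bound, so tokens/s ≈ achieved bandwidth ÷ bytes streamed per token. A minimal sketch of that arithmetic, assuming equal raw bandwidth on both devices and purely bandwidth-bound decoding; only the 90% and ~30% utilization figures come from the post, all other numbers (model size, raw bandwidth) are illustrative:

```python
def decode_tokens_per_s(raw_bw_gb_s: float, utilization: float,
                        bytes_per_token_gb: float) -> float:
    # Memory-bound decoding: generating each token streams the model
    # weights once, so throughput = achieved bandwidth / bytes per token.
    return raw_bw_gb_s * utilization / bytes_per_token_gb

# Illustrative assumptions: 14 GB of fp16 weights (a ~7B-parameter
# model) and 1000 GB/s raw memory bandwidth on both devices.
gpu  = decode_tokens_per_s(1000, 0.30, 14)  # ~30% utilization (post's GPU figure)
asic = decode_tokens_per_s(1000, 0.90, 14)  # >90% utilization (Positron's claim)

print(f"GPU:  {gpu:.1f} tok/s")
print(f"ASIC: {asic:.1f} tok/s ({asic / gpu:.1f}x)")
```

Under these assumptions the utilization gap alone is a 3x throughput difference at identical raw bandwidth, which is the substance of the first bullet above.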









