
J. Brant Arseneau
@jbarseneau
Chairman & CIO @ Mach33 - Neural Network guy since the 80’s… deep tech architect and investor. https://t.co/0un674VG0J

On the wall at work... the Nikola Tesla patent on the alternating-current motor - no brushes or rare-earth magnets needed. Edison couldn't figure it out. One of Tesla's many accomplishments is the AC induction motor, the technology that makes the first Tesla cars perform the way they do, with no friction brushes for contacts, no permanent magnets, and no rare-earth minerals (by contrast, a hybrid like the Prius uses a kilogram of neodymium and 13 kg of lanthanum). Today, 90% of industrial motors are AC induction motors. It has been called one of the 10 greatest discoveries of all time. “Let the future tell the truth, and evaluate each one according to his work and accomplishments. The present is theirs; the future, for which I have really worked, is mine.” — Nikola Tesla (1856-1943)

This is the future we are building at 8090. Our Software Factory is giving the companies that adopt it a new way to think about software. Build vs. buy was never as simple as it sounded. If you build, you take on complex maintenance. If you buy, you generally double or triple your budget with services. The new way is to focus on the business logic and let Software Factory create something bespoke: fits like a glove, highly tuned for you, with low maintenance costs. Try it here: 8090.ai

When I saw our team's evals of Kimi 2.6, I thought "ok, things are gonna get interesting now". This is the first open-weight model that plays like a top-class agentic model. Watching it successfully work through ambiguous, meticulously chained tool calls puts it squarely in the wheelhouse of Opus 4.6. We're looking at an open-weight model, but with much cheaper direct inference-provider pricing. On a subclass of our eval set, it's outperforming GPT 5.2. We're about to undergo a gigantic industry shift. Open weight is no longer just for those who fine-tune or those who want on-prem. Its quality/price/latency profile makes it an actual, reliable option for difficult agentic work. It's not perfect. It's token-hungry, relatively slow, and can get stuck in "thinking loops". But those are things we can engineer around. For the value it delivers, and for how it positions itself against the major labs, this is a dramatic day for open-weight models. We sprinted as a team and worked closely with @FireworksAI_HQ to get this to our customers on day 0. No one should wait to try out a change like this. Try it yourself and tell me where it's working for you.
