Jason Toevs

1.8K posts

@JasonToevs

CTO at UP 🤝 prev: built AI for Adobe, NBC & PGA Tour → | 🏆 2 Exits, 44 Fails 🪦 | Ethical AI + Farm Roots 🌾 | Helping leaders align tech with values.

Wichita, Kansas · Joined December 2011
1.2K Following · 690 Followers
Pinned Tweet
Jason Toevs@JasonToevs·
Why Fortune 500s Trust a Farm Kid to Build Their AI 🤖🌾 (And how baling hay taught me to cut through Silicon Valley’s BS) 14 years ago, I left our 4th-gen Kansas farm to build tech. 2 exits. 44 fails. Now I've deployed AI for Adobe, NBC, PGA Tour and governments. The secret?
Jason Toevs@JasonToevs·
@WileECoder @levelsio So judgment is pattern recognition earned through failure… but taste is knowing which patterns are worth repeating. Experience teaches you what works. Taste teaches you what matters. That's the gap AI can't close: it can recognize every pattern, but it can't want...
Wile E. Coder@WileECoder·
@JasonToevs @levelsio maybe experience is just paid-for pattern recognition… you only call it judgment after enough mistakes stop being random
Jason Toevs@JasonToevs·
MIT just open-sourced a tool that turns a photo into a fully parametric CAD model. Photo in. Editable manufacturing code out. Beats GPT-4.5 on accuracy. The garage inventor now has the same toolchain as the aerospace engineer. What gets built when manufacturing has zero design barrier?
Brian Roemmele@BrianRoemmele

Wow! AI ASSISTED GARAGE MANUFACTURING IS ABOUT TO EXPLODE! CAD Drawings From Just A Picture!

MIT just released something profound for creators and engineers alike. Picture this: you take a photo of an object, upload it, and an AI delivers a fully parametric CAD model, complete with editable code and construction history.

This is open source GenCAD, from MIT's Decode Lab. It uses autoregressive transformers and diffusion models, trained on hundreds of thousands of images and CAD files. Input a 2D photo or sketch. Output valid CadQuery Python code that beats models like GPT-4.5 in accuracy.

Why does this matter? It speeds up reverse engineering, prototyping, and part searches in vast databases. No more hours spent modeling from scratch. Field repairs, custom designs, education: all transformed. It even retrieves similar parts from libraries of thousands. For industries like manufacturing and aerospace, it cuts costs and boosts innovation. Hobbyists gain pro tools without the steep curve.

I am testing it now on random objects and cannot believe how much of a superpower this is. I can start dozens of companies just on this AI model. This open-source gem is here: gencad.github.io

The future of building stuff arrives in a snapshot.
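To make "photo in, editable manufacturing code out" concrete, here is a hedged sketch of the kind of parametric script such a tool might emit. The bracket part, dimensions, and helper name are invented for illustration; per the post, GenCAD's actual output format is CadQuery Python.

```python
# Hypothetical example of a photo-to-CAD tool's output: a CadQuery-style
# script where every measurement is an editable parameter rather than
# baked-in geometry. The part and dimensions here are invented.
def bracket_script(length=80.0, width=60.0, thickness=10.0, hole_d=22.0):
    """Return a parametric CadQuery-style script as a string."""
    return (
        "import cadquery as cq\n"
        f"length, width, thickness, hole_d = {length}, {width}, {thickness}, {hole_d}\n"
        "result = (\n"
        '    cq.Workplane("XY")\n'
        "    .box(length, width, thickness)\n"
        '    .faces(">Z")\n'
        "    .workplane()\n"
        "    .hole(hole_d)\n"
        ")\n"
    )

# Because the output is code, "editing the model" is just changing a number:
print(bracket_script(hole_d=25.0))
```

The point of the sketch: unlike a mesh scan, a parametric script carries construction history, so a garage builder can resize one hole without remodeling the part.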

Jason Toevs@JasonToevs·
@WileECoder @levelsio Can you get judgment without experience? Which raises the question of what experience really is.
Wile E. Coder@WileECoder·
@levelsio funny how the hardest part is slowly becoming… having taste
Jason Toevs@JasonToevs·
Open your API. Watch what happens. 24 hours after TrustMRR opened their API, 20+ apps got built on top of it. This is the real platform playbook: build the data layer, let builders build the interface layer. The company that controls the data wins. The one that controls the distribution wins bigger.
Marc Lou@marclou

24 hours after opening the TrustMRR API, people have built 20+ apps on top of it. AI has made the idea-to-product loop almost instant. We are going to see so many crazy things in 2026. It's just the beginning. I will post all wrappers below. Tag yourself if I forget.

Jason Toevs@JasonToevs·
The Open Source Tide Just Crossed a Line — Here's What Builders Should Do About It

A 9-billion-parameter model now matches one that's 13 times its size on reasoning benchmarks. It runs on 4GB of RAM. On a phone. In airplane mode. That sentence would have been science fiction 18 months ago. This week, it's just Tuesday.

Something shifted in March 2026 that deserves more attention than it's getting. Not a single announcement, but a pattern. Three separate stories, from three unrelated teams, all pointing at the same conclusion: the open source AI ecosystem just crossed a threshold that changes who gets to build what.

---

The first story came from NVIDIA. At GTC 2026, Jensen Huang announced Nemotron 3 Super, a 120-billion-parameter model with only 12 billion active parameters, built on a hybrid mixture-of-experts architecture. Open weights. Open datasets. Full training recipe published.

The benchmarks are what matter here. @kwindla ran it through voice agent testing and found Nemotron 3 Super matches GPT-5.4, OpenAI's flagship frontier model, in tool calling and instruction following. Not "approaches." Not "competitive with." Matches.

But NVIDIA didn't stop at dropping a model. They announced the Nemotron Coalition, a collaboration bringing together Cursor, Mistral, Perplexity, LangChain, and Reflection AI to co-develop open frontier models on DGX Cloud. Competing companies. Pooling resources. Building something none of them could build alone.

If that sounds familiar, it should. It's the Linux playbook. The Kubernetes playbook. Every successful open-source infrastructure project follows the same arc: compete at the application layer, collaborate at the foundation.

---

The second story came from Alibaba. The Qwen team released the Qwen 3.5 Small Model Series: four dense models from 0.8B to 9B parameters. @ArtificialAnlys broke down the numbers: the 9B model matches GPT-OSS-120B on GPQA Diamond (81.7 vs. 71.5) and HMMT Feb 2025 (83.2 vs. 76.7).

A model 13x smaller, beating one that needs an entire server rack. The 2B model runs on any recent iPhone. In airplane mode. Processing text and images with 4GB of RAM.

I keep coming back to what this means for the builder sitting in a garage in Kansas, or Bangalore, or Lagos, or anywhere else that's not San Francisco. Six months ago, running a capable reasoning model required cloud API calls, rate limits, and a credit card on file. Today, you can run one locally on hardware you already own. The cost of intelligence is collapsing. And it's collapsing faster at the bottom of the stack than at the top.

---

The third story is the one that ties it all together. 267 new AI models were released in Q1 2026 alone. Not chatbots. Not GPT wrappers. Models built for specific tasks: video generation, speech recognition, code execution, 3D spatial reasoning. The majority are open source.

Lightricks shipped LTX 2.3, a 22B-parameter open-source video model generating native 4K at 50 FPS with synchronized audio. Commercial license. Free. IBM released Granite 4.0 1B Speech, multilingual speech recognition that runs in under 1.5GB of VRAM. Edge devices. Low-resource servers. Helios from Peking University and ByteDance generates minute-scale video in real time on a single H100. Apache 2.0 license.

The pattern is unmistakable: specialized, open, and capable enough to ship products with.

---

So what does this actually mean for the person writing code today? Three things I'm paying attention to.

First, the model layer is becoming a commodity. Not fully: there's still a gap between the absolute frontier and what's freely available. But that gap is measured in weeks now, not years. If your entire competitive advantage is "we use the best model," you're standing on sand.

Second, the differentiation is moving up the stack. Distribution. Product design. Domain expertise. Data moats. The companies that win from here aren't the ones with the best model access; they're the ones who build the best experience on top of models that everyone can access.

Third, the geography of AI innovation is about to shift. When frontier-quality models run on consumer hardware with no API dependency, you don't need to be near a cloud region or a VC hub to build something meaningful. I run a homestead in Kansas and build AI for enterprise clients. That combination used to be a contradiction. Now it's just a lifestyle choice.

---

There's a theological question underneath all this technology that I can't stop thinking about. When the tools of creation become free and universally available, the question stops being "what can we build?" and becomes "what should we build?"

267 models in a single quarter. Frontier reasoning on a phone. Competing companies choosing collaboration over isolation. The tower is going up fast. The builders have never been more capable. The question, the one that actually matters, is whether we're building toward something worth reaching.
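The "120 billion parameters, only 12 billion active" framing is mixture-of-experts routing arithmetic. A minimal back-of-envelope sketch, using an invented expert split (these are not Nemotron 3 Super's published internals):

```python
# Back-of-envelope mixture-of-experts arithmetic: how a 120B-parameter
# model can touch only ~12B parameters per token. The expert split below
# is invented for illustration, not the real Nemotron 3 Super layout.
def moe_params(num_experts, experts_per_token, params_per_expert, shared_params):
    """Return (total, active) parameter counts for a simple MoE model."""
    total = num_experts * params_per_expert + shared_params
    active = experts_per_token * params_per_expert + shared_params
    return total, active

# Hypothetical split: 29 experts of 4B each, plus 4B of shared weights
# (attention, embeddings), with the router selecting 2 experts per token.
total, active = moe_params(29, 2, 4_000_000_000, 4_000_000_000)
print(f"total: {total / 1e9:.0f}B, active: {active / 1e9:.0f}B")  # total: 120B, active: 12B
```

The design point: inference cost scales with the active count, while capacity scales with the total, which is why a 120B MoE can run with the memory-bandwidth footprint of a much smaller dense model.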
Jason Toevs@JasonToevs·
The real question every SaaS founder should be asking: Are you capturing AI budget, or is your budget being harvested to fund AI? Because that's the only sorting mechanism that matters in 2026. Full breakdown from @jaborjaber at SaaStr 👇 saastr.com/how-much-of-th…
Jason Toevs@JasonToevs·
Wall Street is panicking about SaaS "product displacement" from AI. But the data says something different: → Retention rates are still high → Customers haven't left → They've just slowed buying This isn't substitution. It's budget reallocation. CIOs have the same spend — they're just writing bigger checks to AI infra.
Jason Toevs@JasonToevs·
SaaS isn't dying. It's getting robbed. OpenAI + Anthropic hit $44B combined ARR — up $14B in a single quarter. That money didn't appear from thin air. It came straight out of the budgets that used to go to Salesforce, HubSpot, Datadog, and ServiceNow. This chart tells the whole story:
Gabastino@gabastino1·
@JasonToevs Building spec24.dev — a project board where your client writes the brief and it lands directly as a structured ticket for your dev. No copy-paste. No lost Slack threads. What are you building toward?
Jason Toevs@JasonToevs·
Every generation builds a tower. Ours is made of code. The question is the same as it's always been — what are you building it toward?
Jason Toevs@JasonToevs·
Your kid's teacher is an AI. It knows every learning gap, adjusts in real-time, and never loses patience. Do you want that for your child? I've spent the last year building exactly this. The answer is more complicated than I expected.
Jason Toevs@JasonToevs·
Efficiency isn't a moral destination. Productivity isn't a virtue. They're just velocity, and velocity without direction is a more impressive way to get lost.
Jason Toevs@JasonToevs·
@BenjamenH If you’re a hyperscaler, you don’t slow down for this type of feature. The leverage doesn’t make sense.
Ben Hutton@BenjamenH·
@JasonToevs Why is this a startup? It seems like the easiest thing in the world for OpenAI or Anthropic to spin up… and Gemini, even easier.
Jason Toevs@JasonToevs·
Every AI agent needs its own email inbox. Not another model wrapper. Not another chatbot skin. The infrastructure that lets agents actually operate in the real world. $6M seed led by General Catalyst. The money follows the plumbing. Where's the next piece of agent infra getting built?
AgentMail (YC S25)@agentmail

We’re excited to announce our $6M seed round, led by @GeneralCatalyst, with @ycombinator, @paulg, @dharmesh, @pcopplestone, @karim_atiyeh, @taro_f and others participating. Every AI Agent needs its own email inbox.
