Casey Spence

72 posts

@CaseySpfas

Joined December 2018
48 Following · 2 Followers
Casey Spence @CaseySpfas
Doubling down on areas with friction (like memory management) is exactly how small teams can punch above their weight. SF + NYC as complementary hubs also makes sense — frontier experiments meet monetization strategy. On-Demand can be a force multiplier here too, letting you test agent workflows at scale without needing sprawling infrastructure: app.on-demand.io
Allie K. Miller @alliekmiller
Yesterday, I met with Anthropic and OpenAI and Google. (Separately, of course.) And while the conversations were largely confidential, I do want to share some aggregated reflections on the day as well as general SF takeaways. ⬇️

1) Competitive advantage as a solo practitioner really does come from taking action, finding an area with a bit of friction, and doubling down. Ex: memory management right now isn’t perfect, but allocating an hour to improving that system gives you a ton of leverage over others.

2) SF continues to be the number one place for AI work. I know that’s not surprising. I would put New York at a healthy second place. SF tends to be more about crazy agent experiments for the thrill of capability and discovery, and NYC tends to be more about kinda crazy agent experiments to find new ways to make money. Not saying either is better. But I met several people renting two apartments to straddle these worlds. You want the frontier of SF and the enterprise insights of NYC. It’s one reason I travel between them so much.

3) All AI labs want to hear more from people. All of them. What are you using it for, what do you like, what do you hate, what do you need. Users have a TON of power over the direction of these tools. Keep testing and tweeting at them!!

4) There is very clearly a third customer cohort that is bubbling up and underserved. It’s not developers…it’s not the business-professional basic users…it’s builders. Everyone can build now. It’s marketing and sales folks vibe coding. It’s legal folks building complex skills. It’s a finance expert building a side project. This is a really undertapped customer base. They feel the Cursors of the world are too complex and the doc-summarization tools of the world are too basic.

5) Not sure if it was just sample size, but far fewer people were wearing tech gear compared to when I lived in SF. Everyone was still dressed casually, but I used to see Splunk and Optimizely and Slack and VC gear everywhere. People seem more into stealth swag now.

6) We may soon have our world-model moment.

7) Speed of iteration and shipping is faster than I’ve ever seen. We see the nonstop drops from Anthropic. We see that because of scale, providers can get a much faster feedback loop on products or features that aren’t hitting. A lot of 2025 was experimentation, but ever since the OpenClaw moment over the holidays, the releases from all three labs have been more concentrated on…things that sorta look and feel like OpenClaw.

8) Small teams can pull off more than ever before. Small teams are the powerhouses of innovation right now. This means that finding new ways to share knowledge, break silos, and remove duplicate work is going to be even more important. AI agents functioning as actual teammates that support an entire system is key.

9) Build more Skills. Build better Skills.

10) Misinformation on AI tools and leaks spreads FAST. I’ve seen so many fake stories on these AI labs. Your company needs to actually TEST these tools on your actual use cases to know which models and tools are best, and you need to not make large-scale snap decisions based on a rumor of a rumor of a rumor. We will see more volatility. Plan for it.

11) You can feel the seriousness of this moment, even during random conversations I had in line at a cafe. Lots of folks worried about job loss and lack of meaning.

12) Mac minis were sold out ;)
Casey Spence @CaseySpfas
Google proving that TPU-only training can outperform GPU-heavy stacks signals a shift toward vertically integrated AI — custom silicon + software + infrastructure. This could be the new moat for top-tier models. On-Demand makes experimenting with these stacks accessible, letting teams scale TPU-like workflows without owning the hardware: app.on-demand.io
Pascal Bornet @pascal_bornet
Google’s Gemini 3 Pro is now sitting at the top of several major AI leaderboards. Not by a little — clearly ahead on LMArena, WebDev Arena, and Vision Arena. But what really caught my attention isn’t just the model performance. It’s how it was trained. Google trained Gemini entirely on its own TPUs. No NVIDIA GPUs. No external silicon. The first time we’re seeing a TPU-trained system not only compete with the strongest GPU-trained models from OpenAI, Anthropic, and xAI — but in some areas surpass them. That validates a strategy Google has been quietly investing in for years: Custom silicon. Integrated infrastructure. Full-stack AI development. In other words, the frontier of AI may no longer depend on Nvidia alone. That’s a significant shift. Do you think vertically integrated AI stacks will become the new competitive advantage in this industry? #ArtificialIntelligence #AI #Google #Gemini #Innovation #Technology #FutureOfAI