Brian Chambers
230 posts

@BriChamb

Technologist and Chief Architect at Chick-fil-A. Mountain bikes, rock climbing, CrossFit, and outdoors things. The Brian who does Edge stuff.

Kennesaw, GA · Joined November 2021
240 Following · 264 Followers

Pinned Tweet
Brian Chambers@BriChamb·
This was a really fun chat with @shomikghosh21. Tech meets 🐓. I hope you enjoy and let me know what you think. 🙏
Shomik Ghosh@shomikghosh21

🔥Ever wondered how @ChickfilA makes delicious 🍟 & 🍔 every time! SSB Pod #7 - @BriChamb dives into real tech use cases in fast food Starting w/ edge compute in restaurants helping fry cooks 👇🧵 Apple: podcasts.apple.com/us/podcast/sof… Spotify: open.spotify.com/show/5voKCX3gQ…

Replies: 1 · Reposts: 1 · Likes: 9 · Views: 2.8K
Brian Chambers@BriChamb·
@breckcs I continue to follow along. Now I’m curious what sorts of innovation you wish we had in time series databases?
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 3
Brian Chambers@BriChamb·
@breckcs @matsonj Got it. Metrics database was just a new term for me and wasn’t sure if I was missing something. 🫡
Replies: 1 · Reposts: 0 · Likes: 2 · Views: 44
Colin Breck@breckcs·
@matsonj @BriChamb I would call a database used for application or infrastructure monitoring a metrics database. Samples are scraped on regular intervals, retention is short, there is little to no integration with asset models or systems for operations, like a manufacturing execution system, etc.
Replies: 2 · Reposts: 0 · Likes: 3 · Views: 39
Colin Breck@breckcs·
A metrics database is not the same as a time-series database. Conflating the two has limited innovation in time-series databases.
Replies: 3 · Reposts: 1 · Likes: 12 · Views: 2.2K
Colin Breck@breckcs·
I loved reading the first edition of "Designing Data-Intensive Applications". It is the most comprehensive book on the types of systems I work on. I recently started reading the second edition. It was a pleasant surprise—and rather humbling—to read the references in Chapter 1.
Replies: 14 · Reposts: 32 · Likes: 705 · Views: 41.7K
Brian Chambers retweeted
Matt Rickard@mattrickard·
“Good” agents don’t have to find the secret they need to make a request, they just make the request. Secrets are scoped to domain AND function (given the right API). “Bad” agents can’t exfiltrate a secret as easily since they only have access to endpoints, not secrets. “Untrusted” agents can escalate permission requests. Add policy at the proxy layer. Policy per agent, session, or command. “Untrusted” sandboxes don’t need to also be the credential store. This is a pattern we’re going to see everywhere soon.
Tony Dang@dangtony98

x.com/i/article/2046…

Replies: 1 · Reposts: 1 · Likes: 3 · Views: 1.3K
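The proxy pattern described in the tweet above can be sketched minimally: secrets live only in the proxy, scoped to (domain, function), and policy is enforced per agent before a request is forwarded. This is an illustrative sketch of the idea, not code from the tweet; all names, scopes, and tokens here are hypothetical.

```python
# Hypothetical secrets store, keyed by (domain, function) scope.
# Agents never have read access to this mapping.
SECRETS = {
    ("api.example.com", "read_orders"): "token-abc",
    ("api.example.com", "issue_refund"): "token-xyz",
}

# Policy at the proxy layer: which scopes each agent may invoke.
POLICY = {
    "trusted-agent": {("api.example.com", "read_orders"),
                      ("api.example.com", "issue_refund")},
    "untrusted-agent": {("api.example.com", "read_orders")},
}

def proxy_request(agent_id, domain, function):
    """Authorize the call at the proxy and attach the secret server-side."""
    scope = (domain, function)
    if scope not in POLICY.get(agent_id, set()):
        # An "untrusted" agent can only ask for escalation via policy,
        # not exfiltrate the credential.
        return {"status": 403, "error": "scope not permitted for this agent"}
    headers = {"Authorization": f"Bearer {SECRETS[scope]}"}
    # ...forward the request upstream with `headers`; the agent only ever
    # names the endpoint, it never sees the credential itself.
    return {"status": 200, "used_scope": scope}

print(proxy_request("untrusted-agent", "api.example.com", "read_orders")["status"])   # 200
print(proxy_request("untrusted-agent", "api.example.com", "issue_refund")["status"])  # 403
```

The key design point is that the sandbox running the agent is not also the credential store: the agent makes plain requests and the proxy decides, per agent and per scope, whether and how to authenticate them.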
Brian Chambers retweeted
Richard Seroter@rseroter·
Your web app receives 25m requests per day. Wow! Time for a massive distributed compute cluster to handle the load? Nah, you can serve that off a single VM with 1 CPU and 2 GB of memory. Don't overcomplicate things. binaryigor.com/how-many-http-…
Replies: 0 · Reposts: 6 · Likes: 17 · Views: 1.7K
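The arithmetic behind the tweet above is worth making explicit: 25 million requests per day averages out to under 300 requests per second, a rate one small VM can plausibly serve. The 5x peak-to-average ratio below is my own illustrative assumption, not a figure from the linked article.

```python
# Back-of-the-envelope check: 25M requests/day as a per-second rate.
requests_per_day = 25_000_000
seconds_per_day = 86_400

avg_rps = requests_per_day / seconds_per_day
peak_rps = avg_rps * 5  # assumed 5x peak-to-average ratio (illustrative)

print(f"average: {avg_rps:.0f} req/s")   # average: 289 req/s
print(f"assumed peak: {peak_rps:.0f} req/s")
```

Even the assumed peak of roughly 1,450 req/s is well within reach of a single efficient server process, which is the tweet's point about not overcomplicating things.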
Brian Chambers@BriChamb·
@rseroter Thanks for sharing my friend! Lots to learn and in 6 months I’m sure it will be radically different.
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 11
Richard Seroter@rseroter·
You're relying on more specs and task lists in your agentic coding workflow. Cool. What about design drift? Managing context? We're all learning new things. I like this update from @BriChamb who has an evolved way of directing his coding agents ... brianchambers.substack.com/p/chamber-of-t…
Replies: 1 · Reposts: 0 · Likes: 6 · Views: 511
Brian Chambers@BriChamb·
@shomikghosh21 I like the thesis... on the models... another metaphor that comes to mind is that market dynamics with "Specialization & Division of Labor" > Full Generalized Knowledge and Centralized Planning.
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 66
Shomik Ghosh@shomikghosh21·
Why Vertical SaaS is Riding the Waves of AI to New Heights

Recent interviews from Andrej Karpathy, research papers on arXiv, and analogies to the human brain point to near-term model advancements that will be a huge boon for vertical applications. While there is certainly some interest in vertical SaaS in public or private markets: Harvey & EvenUp (legal vertical), Abridge & OpenEvidence (medical vertical), Owner (restaurant vertical), there's far more attention being paid to horizontal SaaS like Runway, Glean, HeyGen, ElevenLabs, Cursor, Clay, Sierra, etc.

On the one hand, in a new market horizontal applications have a much bigger TAM, so the valuations that could potentially be achieved are much higher. On the other hand, vertical SaaS can scale just as rapidly with more defensibility from the vertical-specific workflows, relationships, and data moats. Notice Perplexity is going deeper and deeper into the finance vertical, and Snowflake just announced Cortex AI for financial services. As companies scale, they start to lean into certain verticals, as packaged solutions can lead to more efficient GTM motions. For more on this topic, feel free to read my prior post from 2022 on the Verticalization of Software. With recent advancements in AI, I believe we are about to see an explosion in vertical SaaS products scaling more efficiently and just as fast as their horizontal counterparts.

Andrej Karpathy LLM Worldview

Before we go further, the most important podcast of 2025 for understanding the progress of LLMs is the recent podcast between @dwarkesh_sp and @karpathy. Please give it a listen if you haven't yet. A point that stood out to me in this podcast was the discussion around the Cognitive Core. This important concept is what Karpathy describes as the fewest parameters needed for a model to have a base level of knowledge about most things. Instead of trillion-parameter models, Karpathy argues that a 1B-parameter model may be optimal.

The reason for this is simple if we use the human brain as an analogy. Imagine a student trying to study for a driver's test at the same time as their school finals. In most cases, the student is trying to ingest, learn, and memorize the knowledge needed to pass these tests. Inevitably, with all that context loaded in, the brain begins thinking about right of way at a stop sign while trying to figure out the cosine of an angle. In Karpathy's optimal world, a smaller 1B-parameter model would have just enough context stored in memory and would then utilize tool calling, RAG, consulting with experts, fine-tuning, and reasoning to arrive at the optimal answer or result. This is much more akin to how we work as humans. We don't typically have all the context for a certain task, but we REASON through it, calling on help, reading articles, and making informed decisions to reach a conclusion. Karpathy used to lead Tesla's Autopilot team and co-founded OpenAI; however, we don't have to take just his word for it.

Tiny Recursive Networks Paper

@deedydas put out a great tweet summarizing some of the implications of an important paper, "Less is More: Recursive Reasoning with Tiny Networks". Give him a follow if you want to stay on top of AI research. Put simply, a 7M-parameter model (think <0.01% of the parameters in the large models we generally use) outperformed multi-billion/trillion-parameter models at specific complex tasks like solving a Sudoku puzzle. It essentially uses repetition and an internal record of the model's chain of thought (reasoning) to arrive at the best answer. Given it can see its own reasoning, it can continue to improve rapidly on its answer, at lower training and inference costs given the size of the model. The paper's conclusions match a lot of what Karpathy laid out, minus the tool calling and expert consultation piece.

Google Pushing the Frontier in Reasoning

@edsim recently highlighted an interesting post in his newsletter by @alex_prompter on a Google research paper called ReasoningBank. Similar to the Tiny Recursive Networks paper but taking it a step further, ReasoningBank saves the model's reasoning approaches and patterns in a sort of postmortem cookbook that the model/agent can call upon in the future. Google uses Memory-Aware Test-Time Scaling to enable the model to look at the cookbooks before performing a task, recalling how it thought through the problem previously and then improving upon that reasoning or simply reusing it. This is a huge leap forward, as it provides a framework for all models to avoid complex and costly re-training and fine-tuning while optimizing test-time compute as the model builds upon its reasoning cookbooks on each run.

Implications for Vertical SaaS

Jensen's point around the DeepSeek shock in February 2025, that inference workloads would result in 100x the demand of pre-training or training, seems understated now. With models being needed for various tasks at smaller parameter counts, more models will be used, relying on inference to reason through tasks. This enables more on-device use cases (robotics, physical AI) and implies much more vertical-specific payoff. Say you are a vertical SaaS company for life sciences like Kneat (readers of my Twitter feed will know I'm a huge fan of this company, and for full disclosure an investor too). Kneat collects deep data and knowledge of how the systems, equipment, and processes in a plant work together to produce the customer's product. This data is currently used by Kneat's product team to synthesize better workflows for the customer and provide guidance on optimal processes. Now imagine pointing multiple small models at each of these use cases while enabling interaction with a larger model that understands how they all tie together.

You can get "ICs" deeply focused on their tasks and executing optimal pathways, while being able to consult with the "CTO" who can give the global view of how those changes impact the rest of the workflows. This can be served at a decreasing cost to Kneat over time while also saving on the human capital needed as the customer base and platform scale. Meanwhile, the customer benefits from having models specialized to their workflows, reasoning on their specific data to uncover insights and efficiencies that would take a team of highly paid consultants years to find. Storing all of this in a "ReasoningBank" allows for auditability and knowledge dissemination by chatting with the model about how decisions were made and why they were considered optimal. Through those chats, the model's reasoning can improve, benefiting future decisions at a faster rate than human-to-human interaction would enable. Margins at vertical SaaS companies are going up as they find operating leverage through using AI products and as they can delight customers more readily with tailored insights and workflows at a fraction of the cost of the models they currently use to provide AI features on their platforms. The data moat of the SaaS company operating in that vertical becomes more important, with reasoning and tool calling only as good as the data and process knowledge the model is able to access. So while horizontal SaaS is always exciting, especially in a new technology wave, let's not forget about vertical SaaS: they have clear tailwinds from model advancements coming their way in the near future.
Replies: 2 · Reposts: 2 · Likes: 28 · Views: 5.8K
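The ReasoningBank idea described in the essay above, saving reasoning postmortems and recalling them before similar tasks, can be sketched in a few lines. This is my own illustrative toy (keyword-overlap retrieval in place of the paper's actual memory and scaling machinery), not an implementation of Google's system.

```python
class ReasoningBank:
    """Toy memory of reasoning 'recipes' from past tasks."""

    def __init__(self):
        self.memories = []  # list of (keyword set, strategy, succeeded)

    def record(self, task, strategy, succeeded):
        """Save a postmortem: what was tried and whether it worked."""
        self.memories.append((set(task.lower().split()), strategy, succeeded))

    def recall(self, task):
        """Return past strategies for similar tasks, best matches and
        successes first, so the agent can reuse or refine them."""
        words = set(task.lower().split())
        hits = [(len(words & kw), ok, strat)
                for kw, strat, ok in self.memories if words & kw]
        return [strat for _, _, strat in
                sorted(hits, key=lambda h: (-h[0], not h[1]))]

bank = ReasoningBank()
bank.record("solve sudoku puzzle", "propagate constraints, then backtrack", True)
bank.record("solve crossword puzzle", "fill longest words first", False)
print(bank.recall("solve this sudoku"))
```

The point of the pattern is that recall happens at test time: instead of re-training, the agent spends a little extra compute consulting its own prior reasoning before acting.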
Brian Chambers@BriChamb·
@shomikghosh21 Yup! I’ve been adding to my position the last few weeks. Especially during the latest tariff sale. 🙌
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 54
Brian Chambers retweeted
Mark Richt@MarkRicht·
I have Parkinson’s. My granddaughter Jadyn has Crohn’s. The Chick-fil-A Dawg Bowl 2025 exists to take a bite out of both. Please consider a gift to help us win the war at richtsdawgbowl.com. 100% of your gift goes to UGA's Isakson Center for Neurological Disease Research. TY!
Replies: 107 · Reposts: 684 · Likes: 6.4K · Views: 1.3M
Nick Eberts@nicholaseberts·
Lately I can’t seem to get away from tech people talking about how important they are. Phish shows, baseball game, fucking grocery store. NO ONE GIVES A SHIT! Find something more interesting to relate with people about.
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 192
Richard Seroter@rseroter·
Today marks five years at @googlecloud. I'm not still doing the job I was hired to do, but I feel like I'm doing the one I'm meant to do. Here's to a few more!
Richard Seroter@rseroter

Surprise! I joined Google Cloud in a leadership role for outbound product mgmt of app modernization products (e.g. Anthos). Eager to help make the products and message resonate. We'll make @GCPcloud the right choice for forward-looking enterprises. More: seroter.com/2020/05/26/im-…

Replies: 17 · Reposts: 3 · Likes: 73 · Views: 4.8K
Shomik Ghosh@shomikghosh21·
This is not a drill 🚨 NotebookLM has just landed in the App Store Can actually listen to the podcast deep dives on mobile! 🙏 @OfficialLoganK & team for making this happen
Replies: 2 · Reposts: 0 · Likes: 9 · Views: 1K
Cursor@cursor_ai·
Cursor is now free for students. Enjoy!
Replies: 1.7K · Reposts: 3.7K · Likes: 40.4K · Views: 11.5M
Jared Hanson@jaredhanson·
OAuth-style delegated authorization has been great for Web 2.0-style mashups. However, it’s silently creating its own shadow IT problems in the enterprise. We need better, and LLM-based agents require better. Now that basic OAuth is landing in MCP, this is the next step. Hit me up if interested in discussing this.
Replies: 5 · Reposts: 5 · Likes: 16 · Views: 3.8K