Itay Adler


@LukeW Exactly! Tools like frontman make it effortless for any stakeholder to make changes visually on the product right from the browser github.com/frontman-ai/fr…

@zachkrall cramming all the trends into one box to fit as many personas as possible and pump that valuation baby

@rotempe4 if you try Gemma, download the unrestricted ones from Hugging Face; otherwise it's super annoying.
The gpt-oss one is also nice
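For what it's worth, the official Gemma repos on Hugging Face are gated behind a license click-through, which is part of the annoyance; pulling a community upload is one call. A minimal sketch, assuming huggingface_hub is installed (pip install huggingface_hub); the repo id is a placeholder, not a pointer to any specific upload:

# Minimal sketch: pull model weights from Hugging Face.
# The repo id below is hypothetical; substitute whichever
# Gemma variant you actually mean.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="some-user/gemma-unrestricted")  # hypothetical repo id
print(f"weights in {local_dir}")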

@floriandarroman I stopped using both, Frontman won.
github.com/frontman-ai/fr…

1) use agent harnesses that use 1/10 of the tokens
2) the open-source Chinese models (Kimi/MiniMax/GLM/Qwen...), which a lot of companies including NVIDIA host as a service, compete quite well with Opus (sketch below)
3) use frontman, which has all of this for FE work github.com/frontman-ai/fr…
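On point 2, these hosted endpoints are typically OpenAI-compatible, so switching is mostly a base_url change. A minimal sketch; the base URL points at NVIDIA's hosted endpoint, and the model id is an assumption, so check the provider's catalog for the exact string:

# Sketch: call a hosted open-weights model through an
# OpenAI-compatible API (pip install openai).
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # NVIDIA's hosted endpoint
    api_key="YOUR_API_KEY",
)

resp = client.chat.completions.create(
    model="moonshotai/kimi-k2-instruct",  # hypothetical model id; pick one from the catalog
    messages=[{"role": "user", "content": "Refactor this function to be pure."}],
)
print(resp.choices[0].message.content)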

Uber's CTO told @LauraBratton5 that AI coding tools, particularly Anthropic's Claude Code, have already maxed out the company's 2026 AI budget 📈
“I'm back to the drawing board, because the budget I thought I would need is blown away already,” Neppalli Naga said.
theinformation.com/newsletters/ap…

@MatthewBerman true, we are still suffering from him dissing frontman github.com/frontman-ai/fr…

Never get on Theo's bad side.
He's now added to my list of people never to cross, along with Elon (re: OpenAI), Palmer (re: Jcal), and Peter Thiel (re: Gawker).
Congrats Theo, you're in good company.
Theo - t3.gg @theo
Robinhood spam called me all morning, begging to take the post down. I told them to give me the stock I purchased. “We can’t do that sir” Burn them to the ground. Evil company.

@svpino plugging any of the Chinese models into frontman brings an experience that's on par with the top models; tbh for a lot of tasks I don't see a major difference anymore. plug: github.com/frontman-ai/fr…

Obviously, models are a big deal, but coding harnesses play a huge role in making these models look good.
I suspect that you can get the best frontier model out there, put it in a shitty harness, and the experience will be very disappointing.
The reverse is also true: put a mediocre model in a strong harness, and it might match the experience that you get from the best agentic coding tools out there.
So, yes, Opus 4.6 and GPT-5.3-Codex are amazing models, but the Claude Code and Codex harnesses do a lot of the lifting to make them work the way they do.
Of course, these models might also have specific training on their own harnesses, which is another advantage.
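To make "harness" concrete: it's the loop around the model, i.e. the system prompt, tool execution, and context management. A toy sketch, where call_model and the tool set are stand-ins, not any real product's API:

# Toy sketch of what a coding harness does around the model: it owns
# the system prompt, executes tool calls, and feeds results back until
# the model stops asking for tools. call_model is a stand-in.
import subprocess

def run_shell(cmd: str) -> str:
    # One example tool: run a shell command and capture its output.
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return out.stdout + out.stderr

TOOLS = {"shell": run_shell}

def agent_loop(call_model, task: str, max_steps: int = 10) -> str:
    messages = [
        {"role": "system", "content": "You are a coding agent. Use tools."},
        {"role": "user", "content": task},
    ]
    for _ in range(max_steps):
        reply = call_model(messages)  # stand-in: returns {"tool": ..., "args": ...} or {"answer": ...}
        if "answer" in reply:
            return reply["answer"]  # model is done
        result = TOOLS[reply["tool"]](reply["args"])
        messages.append({"role": "tool", "content": result})  # feed tool output back
    return "step limit reached"

The quality of the system prompt, the tool set, and how results get folded back into context is exactly the part that differs between harnesses.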

@rotempe4 opencode…
and then just switch to whatever model you want; the Chinese models like Kimi are really good for dev tasks too

@zeeg In frontman we just spent quite a bit of effort optimizing the agent's token usage; I wouldn't be surprised if it's more efficient than most harnesses, Claude Code in particular
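One standard technique for this (a sketch of the general idea, not necessarily what frontman actually does): keep the system prompt pinned and evict the oldest turns once the conversation exceeds a token budget.

# Sketch of one token-saving trick a harness can use: drop the oldest
# turns when over budget, keeping the system prompt pinned. The token
# count is a crude chars/4 heuristic; real harnesses use a tokenizer.
def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def trim_context(messages: list[dict], budget: int = 8000) -> list[dict]:
    system, rest = messages[0], messages[1:]
    while rest and sum(approx_tokens(m["content"]) for m in [system] + rest) > budget:
        rest.pop(0)  # evict the oldest non-system turn
    return [system] + rest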

Do yourself a favor and ignore these kinds of takes.
"The more tokens I spend the more advanced I am"
The people who spend the most on tokens, actually, are generally wasting compute with garbage multi-agent coordinator "experiments". They produce absolutely nothing yet feign they're on the cutting edge. They're not.
There is certainly a degree of minimum viable usage, but if you do not live in these projects, in these companies, you cannot fathom what the real world looks like. You do not need to consume a thousand dollars a day to achieve the best results.
The numbers quoted here, just like the original post, are completely fabricated. Certain tasks will lend themselves to more token consumption (50m+ in a day), while many others will be an order of magnitude less and be just as if not more productive and valuable.
Measuring net tokens is no different from measuring net lines of code. It's a garbage metric and does nothing more than show output.
Steve Yegge @Steve_Yegge
I'm not trying to misrepresent anyone, and perhaps my Googler friends are misinformed. But I strongly suspect that by my own notions of what constitutes advanced AI adoption--and indeed, what most of the industry would expect from Google right now--you are not doing great. At Anthropic, which is basically the bar at this point, everyone is burning, I'd guess, 10M to 15M tokens a day. If Google can convince me that half their engineers are burning 4M tokens a day, then I'd be happy to post a retraction with an apology.