Itay Adler

3.1K posts

@itayad

https://t.co/VGmXTOfnL7

Israel · Joined June 2011
357 Following · 239 Followers

Itay Adler @itayad:
@bonesy why is the stat card showing Deni with an Iranian flag?
0 replies · 0 reposts · 0 likes · 25 views

Bones🦴 @bonesy:
Deni Avdija is literally one of the best young players in the NBA and no one cares. This statline is insane.
[media attachment]
39 replies · 14 reposts · 592 likes · 42.6K views

Luke Wroblewski @LukeW:
finally. the designer/developer handoff is dead.
13 replies · 13 reposts · 210 likes · 52.1K views

SE Hozaifa @SeHozaifa:
If AI writes 90% of your code... do you deserve 100% of the credit?
[media attachment]
79 replies · 0 reposts · 64 likes · 2K views

Itay Adler @itayad:
@zachkrall trying to fit all the trends into one box to fit as many personas as possible to pump that valuation, baby
0 replies · 0 reposts · 0 likes · 1.6K views

Zach Krall @zachkrall:
ok i still don't understand why chat, cowork, code are even separate tabs to begin with
116 replies · 20 reposts · 1.3K likes · 124.3K views

Aman @Amank1412:
Backend Developer interview: "We can write 80% of our code with AI; why should we still hire you?" What will be your response?
80 replies · 1 repost · 53 likes · 18.1K views

Itay Adler @itayad:
@rotempe4 if you try Gemma, download the unrestricted ones from huggingface; otherwise it's super annoying. The gpt-oss one is also nice
0 replies · 0 reposts · 1 like · 209 views

Rotem Perets @rotempe4:
Today I'll try putting a local model behind the agent I'm building. Is there anything besides Gemma 4 worth trying?
9 replies · 0 reposts · 18 likes · 3K views

Konny @konnydev:
Screw favorite colors, what's your favorite localhost port?
67 replies · 3 reposts · 55 likes · 3.5K views

kapilansh @kapilansh_twt:
Genuine question: are people still using VS Code, or has everyone quietly switched to Claude Code or Codex or Cursor? Or am I the only one?
143 replies · 1 repost · 138 likes · 17.2K views

Jake @JakeKing:
Who's building devtools in 🇨🇦 right now? Want my algo to be filled with cool people building cool shit up here.
107 replies · 10 reposts · 225 likes · 14K views

John @ionleu:
drop ur startup link
364 replies · 3 reposts · 122 likes · 14K views

Theo - t3.gg @theo:
Robinhood refused a buy order, didn't notify me, withdrew my money anyways, and cost me over $10k in lost gains in the last 24 hours. What the hell should I be using instead?
450 replies · 163 reposts · 6.5K likes · 929.2K views

Florian Darroman @floriandarroman:
I stopped using OpenClaw. Claude won.
54 replies · 10 reposts · 212 likes · 23.6K views

Itay Adler @itayad:
1) use agent harnesses that use 1/10 of the tokens
2) the open source Chinese models, where a lot of companies including NVIDIA run a service for you to use them, are competing quite well with Opus (kimi/minimax/glm/qwen...)
3) use frontman, which has all this for FE work: github.com/frontman-ai/fr…
0 replies · 0 reposts · 0 likes · 10 views

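The "1/10 of the tokens" claim in point 1 can be made concrete with quick back-of-the-envelope arithmetic. A minimal sketch, where the per-task token counts, tasks per day, and the $/Mtok price are all assumed purely for illustration (they are not measurements from this thread):

```python
# Hypothetical cost comparison between a token-hungry agent harness and a
# lean one. Every number below is an illustrative assumption.

def daily_cost(tokens_per_task: int, tasks_per_day: int, usd_per_mtok: float) -> float:
    """USD per day of agent work at a flat price per million tokens."""
    return tokens_per_task * tasks_per_day / 1_000_000 * usd_per_mtok

# Assumed: a heavy harness burns 500k tokens per task; a lean one uses 1/10.
heavy = daily_cost(tokens_per_task=500_000, tasks_per_day=20, usd_per_mtok=15.0)
lean = daily_cost(tokens_per_task=50_000, tasks_per_day=20, usd_per_mtok=15.0)

print(f"heavy harness: ${heavy:.2f}/day")  # $150.00/day
print(f"lean harness:  ${lean:.2f}/day")   # $15.00/day
```

Under these assumptions the lean harness is a flat 10x cheaper at the same model price; the ratio holds regardless of the price per token, since it cancels out.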
Anissa Gardizy @anissagardizy8:
Uber's CTO told @LauraBratton5 that AI coding tools, particularly Anthropic's Claude Code, have already maxed out the company's 2026 AI budget 📈 "I'm back to the drawing board, because the budget I thought I would need is blown away already," Neppalli Naga said. theinformation.com/newsletters/ap…
97 replies · 151 reposts · 1.3K likes · 1.6M views

Itay Adler @itayad:
turns out NVIDIA is giving us free model usage for kimi/minimax/glm, you name it. It's on "Trial", so who knows for how long we get free clanker time
1 reply · 0 reposts · 0 likes · 16 views

Itay Adler @itayad:
@svpino plugging any of the Chinese models into frontman brings an experience that's on par with the top models; tbh for a lot of tasks I don't see a major difference anymore. plug: github.com/frontman-ai/fr…
0 replies · 0 reposts · 0 likes · 3 views

Santiago @svpino:
Obviously, models are a big deal, but coding harnesses play a huge role in making these models look good. I suspect that you can take the best frontier model out there, put it in a shitty harness, and the experience will be very disappointing. The reverse is also true: put a mediocre model in a strong harness, and it might match the experience that you get from the best agentic coding tools out there.

So, yes, Opus 4.6 and GPT-5.3-Codex are amazing models, but the Claude Code and Codex harnesses do a lot of the lifting to make them work the way they do. Of course, these models might also have some specific training on their harnesses. This is also an advantage.
37 replies · 6 reposts · 78 likes · 11K views

Yair Bareket ⚓️⚖️🎩:
Is it okay if I don't understand Claude Code? If I don't have an investment portfolio? If, when I listen to audiobooks, I listen to fantasy rather than self-improvement guides? If I try to stay active and healthy but probably won't hit a hundred grams of protein a day? If I love and hug my kids but don't fight hard enough to get them off screens? If I don't have a five-year plan?
80 replies · 15 reposts · 1.4K likes · 48.5K views

Itay Adler @itayad:
@rotempe4 opencode… and then just swap in whichever model you want; the Chinese models like Kimi are really good for development tasks too
0 replies · 0 reposts · 0 likes · 14 views

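The "just swap in whichever model you want" step happens in opencode's JSON config. A minimal sketch, assuming an `opencode.json` in the project root and an illustrative model id — the key names and the exact provider/model id format here are assumptions, so check opencode's own configuration docs for the real schema and available ids:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "moonshotai/kimi-k2"
}
```

Changing the `model` value is then the whole swap; the agent harness stays the same while the backing model changes.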
Rotem Perets @rotempe4:
Every past attempt of mine with Codex (small codebases) didn't go well. Still, a lot of people claim it's really good, and that gives me FOMO 😆 To Codex or not to Codex, that is the question?!
7 replies · 0 reposts · 11 likes · 940 views

Itay Adler @itayad:
@zeeg In frontman we just spent quite the effort to optimize the agent's token usage; I wouldn't be surprised if it's more efficient than most harnesses, Claude Code in particular
0 replies · 0 reposts · 0 likes · 26 views

David Cramer @zeeg:
Do yourself a favor and ignore these kinds of takes. "The more tokens I spend, the more advanced I am." The people who spend the most on tokens, actually, are generally wasting compute with garbage multi-agent coordinator "experiments". They produce absolutely nothing yet feign they're on the cutting edge. They're not.

There is certainly a minimum viable degree of usage, but if you do not live in these projects, in these companies, you cannot fathom what the real world looks like. You do not need to consume a thousand dollars a day to achieve the best results. The numbers quoted here, just like the original post, are completely fabricated.

Certain tasks will lend themselves to more token consumption (50M+ in a day), while many others will be an order of magnitude less and be just as if not more productive and valuable. Measuring net tokens is no different than measuring net lines of code. It's a garbage metric and does nothing more than show output.

Quoting Steve Yegge @Steve_Yegge:
I'm not trying to misrepresent anyone, and perhaps my Googler friends are misinformed. But I strongly suspect that by my own notions of what constitutes advanced AI adoption, and indeed what most of the industry would expect from Google right now, you are not doing great. At Anthropic, which is basically the bar at this point, everyone is burning, I'd guess, 10M to 15M tokens a day. If Google can convince me that half their engineers are burning 4M tokens a day, then I'd be happy to post a retraction with an apology.

13 replies · 27 reposts · 272 likes · 24.1K views