TG @ossaijaD

Software Engineer
Joined July 2014
386 Following · 186 Followers
1.2K posts
TG @ossaijaD
@bendersej @mattpocockuk Depending on the model, you get between 70k and 150k tokens of reliable instruction following.
Benjamin André-Micolon
@ossaijaD @mattpocockuk Not sure what you mean by « smart zone »? I'm at around 33k tokens in context before the first message on a fresh session. It's not lean but it works perfectly for me, so I haven't optimized it yet (for example, I don't use rules but layered Claude.md's).
Matt Pocock @mattpocockuk
What 'advanced' AI coding techniques are you using? I.e. what do you feel like you've discovered that no-one else knows about yet?
TG @ossaijaD
@mattpocockuk Not sure this is advanced, but sessions are best treated as ephemeral objects. My workflow is divided into phases; each phase produces an artifact (.md) for the next phase. That way I don't have to fight context rot, context anxiety, or whatever name it goes by these days.
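The phase-artifact workflow described above can be sketched roughly as follows. This is a minimal illustration, not the author's actual tooling: the `run_agent_session` helper, the phase names, and the file layout are all assumptions; the point is only that each phase starts a fresh session seeded solely by the previous phase's markdown artifact.

```python
from pathlib import Path

# Hypothetical helper: stands in for starting a FRESH model session
# (no carried-over context) and returning its markdown output.
def run_agent_session(prompt: str) -> str:
    return f"# Artifact\n\nOutput for prompt:\n{prompt}\n"

# Illustrative phase names; each session sees only the prior artifact,
# so no single session accumulates the whole project's context.
PHASES = ["research", "plan", "implement", "review"]

def run_pipeline(task: str, workdir: Path) -> Path:
    workdir.mkdir(parents=True, exist_ok=True)
    prev_artifact = task  # phase 1 starts from the raw task description
    artifact_path = workdir / "task.md"
    for phase in PHASES:
        prompt = f"Phase: {phase}\n\nInput artifact:\n{prev_artifact}"
        output = run_agent_session(prompt)      # ephemeral session
        artifact_path = workdir / f"{phase}.md"
        artifact_path.write_text(output)        # hand-off for next phase
        prev_artifact = output
    return artifact_path  # final artifact, e.g. review.md

final = run_pipeline("Add rate limiting to the API", Path("artifacts"))
```

Because every hand-off is a plain `.md` file on disk, any phase can be re-run from its input artifact without replaying the earlier sessions.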
Benjamin André-Micolon
@mattpocockuk Thanks! Custom CLIs and MCP servers + a custom Telegram integration (for HITL). For example, one helper prints out the entire local DB schema + connection string + ports of the services, checkout-aware (I run multiple sessions in parallel in checkouts, not worktrees).
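The interesting part of that helper is "checkout-aware": each parallel checkout needs its own non-colliding service ports. A minimal sketch of one way to do that, where every name, the port scheme, and the connection-string format are assumptions for illustration (the real helper would introspect the actual services and DB schema):

```python
import hashlib
from pathlib import Path

# Assumed scheme: derive a stable port offset from the checkout's
# directory name, so parallel checkouts never collide.
def checkout_ports(checkout: Path, base: int = 15000, spread: int = 1000) -> dict:
    digest = hashlib.sha256(checkout.name.encode()).hexdigest()
    offset = int(digest, 16) % spread
    return {
        "db": base + offset,
        "api": base + spread + offset,
        "web": base + 2 * spread + offset,
    }

def print_env(checkout: Path) -> str:
    # Hypothetical connection string; a real helper would also dump
    # the schema from the live database.
    ports = checkout_ports(checkout)
    lines = [f"checkout: {checkout.name}"]
    lines.append(f"db url:   postgres://dev@localhost:{ports['db']}/app")
    lines += [f"{name} port: {port}" for name, port in ports.items()]
    return "\n".join(lines)

print(print_env(Path("/work/feature-rate-limit")))
```

Hashing the directory name (rather than assigning ports at checkout time) means the mapping is deterministic: the agent can be told "run the helper" in any session and always get the same answer for that checkout.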
TG @ossaijaD
Kimi K2.5 has been here; I haven't seen a model with a greater propensity to write comments than Kimi K2.5.
[image]
TG @ossaijaD
@opencode Kimi K2.6 still not showing up for selection for me.
[image]
TG @ossaijaD
@theo Kimi K2.5 is very good with a well-scoped spec; expect nothing but better from 2.6.
Theo - t3.gg @theo
How are people feeling actually using Kimi K2.6 for code?
TG @ossaijaD
@songjunkr I find working with Kimi K2.5 on @opencode zen a better and faster experience than Codex and Opus 6.6.
송준 Jun Song @songjunkr
I'm confident that in three months we'll be able to use open-source models at the Mythos level. Even now, though most people don't realize it, open-source models can code at a level comparable to Opus-4.6. If you don't believe it, try GLM-5.1 in the cloud.
Dave Font @davefontenot
Hosting a first-of-its-kind talk on taking care of your human in the age of AI. Reply for an invite.
TG @ossaijaD
Kimi K2.5 sure loves writing them comments.
TG @ossaijaD
What incentives do non-brand model providers have for being token-efficient?
Theo - t3.gg @theo
I have feelings about Opus 4.7.
Mario Zechner @badlogicgames
The Jensen interview is wild.
TG @ossaijaD
Dwarkesh: Jensen is the first time someone has seriously challenged the Ant worldview on AI, the latest being: if China were first to a Mythos-like model they would use it against the world, but we will use it responsibly, so we should be the only ones who get to make and control them.
TG @ossaijaD
@XP_Ehsaan @Real_SilverLine @songjunkr @AnthropicAI As for standing up your own inference stack, I hear this is not as simple as it seems unless you have the engineering resources to pull it off. You may, however, want to consider other providers; zen from opencode is great and gives you access to many of the latest open-source models.
TG @ossaijaD
@XP_Ehsaan @Real_SilverLine @songjunkr @AnthropicAI At this stage of model development, being able to give yourself optionality is key. Going all in on Opus 4.6 isn't it. I think since this year other models have performed better than anything that has come out of Ant so far. Remains to be seen where 4.7 lands eventually.
송준 Jun Song @songjunkr
Opus 4.7 token test: due to tokenizer differences, it uses 2x the tokens of Gemini. It also uses 50% more than Opus 4.6. Under the same limits, that effectively makes the model 50% more expensive.
[image]
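The arithmetic behind that claim is easy to check with illustrative numbers. Only the 50% ratio comes from the tweet; the quota and per-task token counts below are made up for the example:

```python
# Illustrative numbers; only the 50% ratio is taken from the claim.
quota_tokens = 1_000_000        # same subscription limit for both models
tokens_per_task_46 = 20_000     # hypothetical Opus 4.6 usage per task
tokens_per_task_47 = int(tokens_per_task_46 * 1.5)  # 50% more tokens

# Fewer tasks fit in the same quota when each task burns more tokens.
tasks_46 = quota_tokens // tokens_per_task_46
tasks_47 = quota_tokens // tokens_per_task_47

# Effective per-task cost rises by the same 50%.
cost_ratio = tokens_per_task_47 / tokens_per_task_46
print(tasks_46, tasks_47, cost_ratio)  # 50 33 1.5
```

In other words, on a fixed token limit, consuming 50% more tokens per task is indistinguishable from a 50% price increase, regardless of the nominal per-token rate.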
TG retweeted
Simon Willison @simonw
Shocking result on my pelican benchmark this morning: I got a better pelican from a 21GB local Qwen3.6-35B-A3B running on my laptop than I did from the new Opus 4.7! Qwen on the left, Opus on the right.
[images]
Kyle Mistele 🏴‍☠️
Your code is part of the prompt, btw. If your code sucks, your outputs will continue to suck. You have to improve the code; otherwise you will never escape the gravity well of all your bad patterns and tech debt. More true with autoregressive LLMs than with humans.