Don Karter

269 posts

@donk8r

Making dev life simpler with AI tools. Founder @muvonteam. Open source & Rust.

Joined April 2026
30 Following · 21 Followers
Pinned Tweet
Don Karter@donk8r·
Most AI devs chase models. I chase context quality.
Data preparation > bigger LLMs
Retrieval accuracy > prompt engineering
Token efficiency > API credits
The moat isn't the model. It's what you feed into it. Here's why I believe this 🧵
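The pinned claim above (retrieval accuracy over prompt engineering) can be sketched as a toy reranker: score candidate chunks by query-term overlap and keep only the top few before anything reaches the context window. A minimal illustration only; the scoring function and sample chunks are hypothetical, not Muvon's actual pipeline.

```python
def score(query: str, chunk: str) -> float:
    """Toy relevance score: fraction of query terms present in the chunk."""
    q_terms = set(query.lower().split())
    c_terms = set(chunk.lower().split())
    return len(q_terms & c_terms) / len(q_terms) if q_terms else 0.0

def rank_chunks(query: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Return the top_k chunks most relevant to the query, best first."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:top_k]

chunks = [
    "Rust ownership rules prevent data races at compile time.",
    "The cafeteria menu changes every Tuesday.",
    "Borrow checking enforces Rust ownership without a garbage collector.",
]
best = rank_chunks("rust ownership rules", chunks, top_k=2)
```

The point of the sketch: the model never sees the cafeteria chunk, so no amount of prompt engineering is needed to make it ignore noise.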
Bindu Reddy@bindureddy·
DeepSeek V4 Beats Opus 4.7 And GPT 5.5 To Become The World's Best Open Source Model

DeepSeek V4 Pro is the NEW KING of open source.
- better and 10x cheaper than Opus 4.7 and GPT 5.5 medium
- outperforms Kimi 2.6 thinking
- much faster than any of the other big models

It's literally the best open source model in the world and months away from GPT-5.5 xHigh.
Don Karter retweeted
Muvon@muvonteam·
Builders never sleep. What are you building? 👇
Don Karter@donk8r·
Harsh truth. Coding will never be the same again
Don Karter@donk8r·
@icanvardar If you don't hold it like this, it just means you know how to make it work with the lid closed :)
Can Vardar@icanvardar·
if you don’t hold your macbook like this you’re not agentmaxxing enough
Don Karter@donk8r·
AI isn't going anywhere. The real question: what are you doing about it?
Don Karter@donk8r·
@HsanC_ Any free channel you could advise? In your opinion, what's the best way to start finding the first customer when you have no influence?
Hasan Cagli@HsanC_·
@donk8r you just try all the channels and double down on what's working
Hasan Cagli@HsanC_·
Maybe you should stop doing marketing and fix the product first.
🃏@anupamrjp·
What are you building right now? 🚀 Not ideas. Not plans. Real SaaS. Real AI tools. Real MVPs. Drop your build link below 👇
Dharmesh Ba@dharmeshba·
Claude rate limits suck! Just get @opencode Go plan for $10 and use Kimi 2.6 Lit 🔥
Don Karter@donk8r·
One real use case I just explored with AI: protecting yourself from scammers by analysing the MANY followers and MANY comments of people trying to sell you their services. So AI is everywhere, even where it's not supposed to be :) Do your own research
foundrceo@foundrceo·
Share your product 👇 Feeling great today. Thought I’d give something back I’ll randomly pick one to receive a Premium Launch 🚀 on FoundrList (dofollow backlink + Premium listing)
Don Karter@donk8r·
That exact feeling – the one that brought me and most of you (senior devs with years of experience) here long ago. But AI is fundamentally altering how software gets built. What's next?
jason liu@jxnlco·
I’m limited by compute.
Don Karter@donk8r·
The hate is real but misdirected. Anthropic's problem isn't the product — Opus 4.7 still wins on reasoning. The problem is they're slow to iterate and expensive. But that's a feature, not a bug, if you care about safety. The hate should be: why aren't other labs moving as carefully?
Maze@mazeincoding·
anthropic shouldn't be getting 1/100 of the hate they're getting
Don Karter@donk8r·
Rebuilding is the trap, but so is prompt brittleness. The real win with Kimi: you can architect for retrieval quality instead of model capacity. Cheaper models + better retrieval = beats expensive models + lazy retrieval. That's the hidden cost nobody talks about — not the prompts, the pipeline.
Akos@akoskm·
Cancelled both my Claude Code Pro and ChatGPT Pro for this. Kimi K2.6 is just as good for my side projects as Opus or GPT 5.4 were. The price for this is crazy low, and there are a bunch of models I can try (like DeepSeek). Bonus: I'm moving away from building everything on Claude Code - now that both @opencode and @cursor_ai have their SDKs open, I feel I can rebuild the agentic workflows I built for Claude Code in a more platform-independent manner.
Lotto@LottoLabs

Update on Opencode Go It’s great value for $5/month, there’s really no reason not to do the first month. At $10/month it’s still good value and gets you access to all sota OS models. You can’t daily drive it without hitting limits on the big models but w/ Kimi x3 you won’t hit limits unless you’re insane. Overall highly recommend the first month, then make your own decision.

Don Karter@donk8r·
@shub0414 The real play is token efficiency. We cut token spend 60% on the same Claude instance by fixing retrieval. No model change. Just better context. When subsidies end, the teams that built tight retrieval + chunking strategies will ship for pennies. The rest will be priced out.
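A token cut of that size from retrieval alone is plausible when context is packed against an explicit budget instead of concatenated wholesale. A minimal sketch of the idea; the 4-characters-per-token estimate, the budget, and the sample chunks are illustrative assumptions, not the actual strategy described in the tweet.

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def pack_context(ranked_chunks: list[str], budget_tokens: int) -> list[str]:
    """Greedily keep the highest-ranked chunks that still fit the budget.

    Assumes ranked_chunks is already sorted best-first by relevance."""
    packed, used = [], 0
    for chunk in ranked_chunks:
        cost = estimate_tokens(chunk)
        if used + cost <= budget_tokens:
            packed.append(chunk)
            used += cost
    return packed

ranked = ["a" * 40, "b" * 80, "c" * 40]   # roughly 10, 20, and 10 tokens
kept = pack_context(ranked, budget_tokens=25)
```

The oversized middle chunk is skipped rather than blowing the budget, so spend stays flat no matter how much the retriever returns.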
Shub@shub0414·
Unpopular opinion: Your $20/month AI subscription is a lie. You’re living on VC handouts. OpenAI and Anthropic are burning billions to subsidize your productivity. That money will run out. When the subsidies dry up, the cost of "intelligence" will skyrocket. The play? Build your agents and workflows now while compute is pennies on the dollar. Lock in your advantage before the price tag catches up to the tech.
Don Karter@donk8r·
@rohanpaul_ai The real issue is retrieval quality in agent loops. We tested: agents with noisy context get stuck debating bad info. Same agents with clean, ranked retrieval converge in 3 rounds. It's not the model. It's what enters the context window.
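The fix described above, keeping noisy passages out of the agent loop, amounts to a score threshold plus ranking applied before each round. A hypothetical sketch; the 0.5 cutoff and the (score, passage) tuple shape are assumptions for illustration, not the tested system.

```python
def clean_context(scored_passages: list[tuple[float, str]],
                  min_score: float = 0.5) -> list[str]:
    """Keep only passages whose retrieval score clears the threshold,
    ordered best-first, so agents never debate low-confidence info."""
    kept = [(s, p) for s, p in scored_passages if s >= min_score]
    kept.sort(key=lambda sp: sp[0], reverse=True)
    return [p for _, p in kept]
```

Run this once per agent round and the context window only ever contains ranked, above-threshold passages.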
Rohan Paul@rohanpaul_ai·
Research proves that current AI agent groups cannot reliably coordinate or agree on simple decisions. Building teams of AI agents that can consistently agree on a final decision is surprisingly difficult for LLMs. The problem is that developers frequently assume that if you have enough AI agents working together, they will eventually figure out how to solve a problem by talking it through. This paper shows that this assumption is currently wrong. Even in a friendly environment where every agent is trying to help, the team often gets stuck or stops responding entirely. Because this happens more often as the group gets bigger, it means we cannot yet trust these agent systems to handle tasks where they must agree on a correct answer.

Paper Link – arxiv.org/abs/2603.01213
Paper Title: "Can AI Agents Agree?"
Don Karter@donk8r·
@sama Disagree slightly — it's not smarter vs cheaper, it's context quality vs model size. We tested this: same Claude instance, same prompt, different retrieval strategy. 60% token reduction. No model change. The bottleneck isn't the model. It's what enters the context window.
Sam Altman@sama·
i keep thinking i want the models to be cheaper/faster more than i want them to be smarter but it seems that just being smarter is still the most important thing
Don Karter@donk8r·
Everyone's chasing bigger models. We tested it: same client, same prompt, different retrieval. 60% token reduction. No model change. The real bottleneck? Context quality, not model size.