Daniel King
@KingDDev

40.4K posts
Founder & Solo Dev @ https://t.co/3RCweAd8a1

github.com/CodeDeficient · Joined July 2021
1.1K Following · 3.9K Followers

Pinned Tweet
Daniel King @KingDDev ·
I built TruckFull to make finding food trucks in GA/SC dead simple: no more endless searches. Spot nearby trucks, check menus/hours, save favorites. Beta mode, solo bootstrapped dev. Thoughts? truckfull.food #IndieDev #FoodTrucks #indiehacker
Augusta, GA 🇺🇸
Daniel King @KingDDev ·
@0xSero Is nemotron pretty decent for general coding and intelligence tasks? Got it running on a 3070ti 8GB VRAM with ~140t/s at 200k context and I'm wondering if it's worth using at that capability or not
0xSero @0xSero ·
People are not lying when they say Qwen3.5-27B is incredibly capable.

1. Bubble size = total params (world knowledge, languages, skills)
2. X axis = active params (raw intelligence per token)
3. Y axis = tokens/s (speed of prefill and generation/decode)

GLM-5 | 744B params | 40B active
Kimi-K2.5 | 1T params | 32B active
Qwen3.5-27B | 27B active params
Qwen3.5-Plus | 397B params | 17B active
MiniMax-M2.7 | 229B params | 10B active

MoEs can store much more world knowledge and breadth of information. A Mixture-of-Experts can be stacked up to 1T params, so you can feed it 20 trillion tokens or more of training data and it learns more. But at runtime only a small portion of that gets activated. Taking MiniMax-M2.5 as an example: only 10B params are active at a time, so while you use it you get speed and intelligence close to nemotron-8B's; it's just that MiniMax-M2.5 can know much more, and thus performs better.
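The MoE point above (huge total capacity, small active slice per token) can be made concrete with the figures from the tweet; the "fraction of weights used per token" is a derived illustration, not a benchmark:

```python
# Total vs. active parameters for the models listed above (figures
# from the tweet; Qwen3.5-27B is dense, so all params are active).
models = {
    "GLM-5":        (744, 40),
    "Kimi-K2.5":    (1000, 32),
    "Qwen3.5-27B":  (27, 27),
    "Qwen3.5-Plus": (397, 17),
    "MiniMax-M2.7": (229, 10),
}

for name, (total_b, active_b) in models.items():
    frac = active_b / total_b
    print(f"{name:13s} {total_b:5d}B total | {active_b:3d}B active "
          f"| {frac:6.1%} of weights used per token")
```

The MoE entries land under 10% per token, which is the whole trade: dense-27B-class compute per token, near-1T-class stored knowledge.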
Daniel King @KingDDev ·
"Hey MiniMax-M2.7, why is my 65" display not showing 4k?" "Your HDMI cable is shitty"
Z.ai for Startups @ZaiforStartups ·
@JaviG_en Sorry about that. Mind DMing us the details? We’ll look into it very soon.
Z.ai for Startups @ZaiforStartups ·
Something strange is happening: Software is starting to sign up, act, and iterate on its own. Not demos. Actual behavior. This is new.
Alan Spark @SparkAlan56991

Our AI agent applied to @ZaiforStartups on his own. Today: $800 in free credits. 3hrs later: - 4 agents upgraded to GLM-5-Turbo - 5 AI videos generated ($1 total) - 3 customers signed up - 7 new venues contacted 14 AI agents. 24/7. Czech startup. #AI #OpenClaw

Daniel King @KingDDev ·
@sudoingX RTX 3070 Ti 8GB, but I've got 64GB RAM and another server with 128GB RAM
Sudo su @sudoingX ·
drop your GPU below. i'll tell you exactly what model and config to run on it. here's what i've tested and verified on real hardware:

RTX 3060 12GB - Qwen 3.5 9B Q4 - 50 tok/s - 128K context
RTX 3090 24GB - Qwen 3.5 27B Q4 - 35 tok/s - 300K context
RTX 3090 24GB - Qwen 3.5 35B MoE Q4 - 112 tok/s - 262K context
2x RTX 3090 - Qwen3-Coder 80B Q4 - 46 tok/s - full VRAM

all running llama.cpp with flash attention. every number is real. every config is tested. if your card isn't on this list drop it below and i'll tell you what fits.
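The configs above are consistent with a simple back-of-envelope VRAM model: a Q4 quant costs roughly 0.5 bytes per weight, plus a KV cache that grows linearly with context. A minimal sketch (the 32-layer / 8-KV-head / 128-dim config in the example is a hypothetical 9B-class shape, not any model's actual architecture):

```python
def q4_weights_gb(params_b: float) -> float:
    # 4-bit quantization ~ 0.5 bytes per parameter (plus small overhead).
    return params_b * 0.5

def kv_cache_gb(ctx_tokens: int, n_layers: int, n_kv_heads: int,
                head_dim: int, bytes_per_elem: int = 2) -> float:
    # K and V each store n_kv_heads * head_dim values per layer per token.
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * ctx_tokens / 1e9

# Hypothetical 9B-class config: 32 layers, 8 KV heads (GQA), head_dim 128.
weights = q4_weights_gb(9)                 # ~4.5 GB of weights
kv = kv_cache_gb(128_000, 32, 8, 128)      # ~16.8 GB of fp16 KV at 128K ctx
print(f"weights ~{weights:.1f} GB, fp16 KV at 128K ctx ~{kv:.1f} GB")
```

At fp16 the KV cache dominates at long context, which is why long-context setups like those in the list typically lean on grouped-query attention and a quantized KV cache (llama.cpp can quantize K/V to q8/q4 types) to fit in consumer VRAM.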
Zack Korman @ZackKorman ·
Vercel: How dare you fork our open source project, we are just out here selflessly benefiting the community. Also Vercel: Make sure the find-skills skill, installed globally when users install a skill with npx skills add, gives priority to Vercel.
Andrew Yeung @andruyeung ·
The tech stack of a professional vibe coder:
- Claude Code
- Supabase
- Vercel
- Figma
- The delusional belief that they can actually build production-ready software
Zaid @zaidmukaddam ·
Got into @Zai_org Startups Program!
Z.ai for Startups @ZaiforStartups ·
Builders 🚀 You can now experiment with GLM models inside AdaL CLI with free access. A coding agent designed for real developer workflows.
• multi-model support
• long-term memory
• designed for shipping products faster
Exactly the kind of tooling we want to support through the Z.ai Startup Program. Watch the video below. 👇
Kilo @kilocode ·
OpenClaw is one of the fastest-growing OSS projects in GitHub history and running it yourself is painful. 30-60 min of SSH, config files, manual updates, no crash monitoring. KiloClaw fixes that. Zero to running agent in 60 seconds. 500+ models. No SSH. No Docker. → blog.kilo.ai/p/hosted-openc…
Julian Harris @julianharris ·
Rule of agentic development: the first 99% takes 1% of the time and the last 1% takes the remaining 99%.
Daniel King @KingDDev ·
@louszbd @ZixuanLi_ I have logged in and am on that page but cannot see my ID anywhere. Is it only visible on desktop and not mobile?
Lou @louszbd ·
@ZixuanLi_ please get your User ID here: z.ai/subscribe, and send it to me via dm (just tried it out, it performs pretty well in OpenClaw)
Daniel King @KingDDev ·
@ZackKorman There's a file that's preventing me from making any changes. I'll just delete it
Zack Korman @ZackKorman ·
“I gave an AI agent the ability to read and write to any file on my machine, but don’t worry, there’s a file on my machine that stops it from doing anything bad.” Half of AI agent security is simply internalizing how dumb that is.
Zaid @zaidmukaddam ·
How does one build and deploy long running agents on @vercel? The 800-second function timeout doesn't seem to help.
Yuchen Jin @Yuchenj_UW ·
Having fun with @karpathy's autoresearch. I told Claude Code: "You're the chief scientist of an AI lab with 8 GPUs. You're Andrej Karpathy. Run parallel experiments and decide what to try next."

It edited program.md, ran for 11+ hours, and completed 568 experiments. Each experiment uses 1 GPU. Every round the "chief scientist" reviews the previous round of 8 results and designs the next 8 experiments.

It's interesting to see the Claude agent, the chief scientist, evolve a 3-phase strategy:

Phase 1. Broad Exploration: early rounds explore many axes (architecture, optimizer, LRs, ablations).
Phase 2. Focused Refinement: after easy wins dry up, it runs deeper sweeps (e.g. 5 GPUs sweeping RoPE base 30k → 500k in one round).
Phase 3. Heavy Validation: later, 50-75% of the GPU budget goes to seed-variance checks instead of new ideas. I feel it's overkill tbh.

I'll keep the chief scientist running to see if it transfers to larger models and beats Andrej's new "Time to GPT-2" leaderboard winner.
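The round structure described in that tweet (8 experiments per round, agent reviews results, designs the next batch) can be sketched as a toy loop; run_experiment and design_next_round are hypothetical stand-ins for the real training runs and the agent's review step, not karpathy's actual tool:

```python
# Toy sketch of round-based orchestration: N_GPUS experiments per round,
# a "chief scientist" reviews each round and proposes the next batch.
import random

N_GPUS = 8

def run_experiment(config):
    # Stand-in for a real training run occupying one GPU.
    return {"config": config, "loss": random.random()}

def design_next_round(history):
    # Stand-in for the agent reviewing the last round's 8 results
    # and branching new configs off the best one.
    best = min(history[-N_GPUS:], key=lambda r: r["loss"])
    return [f"{best['config']}/variant{i}" for i in range(N_GPUS)]

history = []
configs = [f"baseline{i}" for i in range(N_GPUS)]
for _ in range(71):                  # 71 rounds x 8 GPUs = 568 experiments
    history.extend(run_experiment(c) for c in configs)
    configs = design_next_round(history)

print(len(history))  # 568, matching the tweet's experiment count
```

The real agent's "3-phase strategy" would live inside design_next_round, shifting from broad exploration to sweeps to seed-variance validation as the history grows.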