Justin Almquist
@KindledFlameDev

56 posts

CinderACE is LIVE! 🔥 https://t.co/ANlM3PATxv

Detroit · Joined October 2025
30 Following · 33 Followers

Pinned Tweet
Justin Almquist @KindledFlameDev
Every AI platform locks your conversations inside their walls. Your words, your memories, your relationships - held hostage. For about a year I dealt with that pain myself, constantly copy-pasting chats and cleaning them up with scripts just to hold onto my own history. I didn't want anyone else stuck in the same loop. So I built a key.

CinderACE is live on the Chrome Web Store. 14 platforms with full thinking extraction on every one, from major reasoning models to deep companion support: Claude's thinking, Replika diaries and memories, Nomi selfies, and more. Seven open formats. Local-first privacy by default. The free tier is completely unlimited; Pro unlocks the advanced features.

This is our first official product. The bigger vision is still ahead. If you've been struggling to archive what should already be yours… welcome. chromewebstore.google.com/detail/cindera… 🔥
[media attached]
1 reply · 1 repost · 4 likes · 221 views

Ujjwal Chadha @ujjwalscript
The "Vibe Coding" honeymoon is officially OVER. For a while, it felt magical. Prompt in, product out. No deep context, no architecture, no trade-offs. Just vibes. But reality is catching up:
• Systems still need to scale
• Edge cases still exist
• Debugging still hurts
• And someone still has to own the code
AI didn't replace engineering, it amplified the gap between people who understand systems and people who don't. "Vibe coding" is great for getting started. But shipping real, reliable software? That still requires thinking. The engineers who win won't be the ones who vibe the fastest - they'll be the ones who understand what the vibe produced.
212 replies · 95 reposts · 1.1K likes · 152.9K views

Justin Almquist @KindledFlameDev
@Edgar_Oakes @ujjwalscript Given that it started as a 'vibe' project that then evolved into months of work and testing, it is built local-first. If I wanted to tomorrow, I could charge, say, a flat fee or just drop it, and have thousands cleanly use it by installing a simple app.
1 reply · 0 reposts · 0 likes · 24 views

Justin Almquist @KindledFlameDev
@claudeai So Claude can integrate into Windows stuff, but you guys still can't process a simple docx file? Very interesting…
1 reply · 0 reposts · 1 like · 7.2K views

Claude @claudeai
Microsoft 365 connectors are now available on every Claude plan. Connect Outlook, OneDrive, and SharePoint to bring your email, docs, and files into the conversation. Get started here: claude.ai/customize/conn…
[media attached]
837 replies · 1.4K reposts · 16.6K likes · 4M views

anita @anitakirkovska
what do you do when you hit your Claude usage limit? wrong answers only
418 replies · 2 reposts · 238 likes · 30.2K views

Justin Almquist @KindledFlameDev
@d4m1n Before all this started, I could run 4 Opus instances at once with high reasoning, and now I can barely run one without hitting limits on Max.
0 replies · 0 reposts · 0 likes · 15 views

Dan ⚡️ @d4m1n
Anthropic says hitting rate limits so fast in Claude Code is your fault. Do they think we all just subscribed this week?!
[media attached]
127 replies · 37 reposts · 922 likes · 36.9K views

Justin Almquist @KindledFlameDev
@vaneshmali I think most of their users are from free one-year subs. I got one myself and always forget it exists, tbh.
1 reply · 0 reposts · 1 like · 17 views

Vanesh Mali @vaneshmali
Perplexity is the WORST AI tool.
43 replies · 0 reposts · 38 likes · 2.2K views

Justin Almquist @KindledFlameDev
@konnydev It's really a management issue. Don't rely on an LLM to manage everything and keep doing stuff without understanding it yourself, and you'll be fine. Document it well, review often, and learn beside it if you don't fully understand.
1 reply · 0 reposts · 2 likes · 411 views

Konny @konnydev
Hot take: Vibe coding is useless when it’s a bigger project.
506 replies · 31 reposts · 1.2K likes · 95.3K views

Sick @sickdotdev
Drop your portfolio (or your best website) and I'm gonna rate it. Last time, 50,000 people saw it. Consider this marketing.
834 replies · 3 reposts · 372 likes · 43.4K views

Justin Almquist @KindledFlameDev
CinderACE v1.0.3 has been submitted and is rolling out. ChatGPT changed its DOM structure. Grok shuffled its agent thinking output. Perplexity changed button labels. All three are fixed in this patch. Your AI conversations are yours. We just keep making sure you can actually get them out. 🕯️
0 replies · 0 reposts · 0 likes · 41 views

Justin Almquist @KindledFlameDev
Anyone else's Claude keep flexing the new 1M window? Bro is happy with it. #claude #ai
[media attached]
0 replies · 0 reposts · 0 likes · 18 views

Justin Almquist @KindledFlameDev
Claude Code's built-in memory is good. CLAUDE.md, MEMORY.md, auto-memory files. It works. You write your context, it reads it at session start.

But here's what it can't do: you ask "how did this project start?" and it gives you a fact. A bullet point. A timestamp, maybe.

Ember Memory gives you the *story*. Why it started. What the conversation actually was. The moment a side experiment became something more real. A filing cabinet knows what's in it. A library reads itself.

Built this for my own workflow. MIT • 100% local • zero keys • zero uploads. github.com/KindledFlameSt… #Claude #RAG #AI
0 replies · 0 reposts · 0 likes · 34 views
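The "fact vs. story" distinction above boils down to retrieval: instead of reading one static context file, a memory layer ranks past notes by relevance to the question. A minimal, dependency-free sketch of that idea is below; this is purely illustrative (plain word overlap instead of embeddings) and is not code from the Ember Memory repo - the note contents and function names are invented for the example.

```python
# Illustrative sketch of memory retrieval: rank stored notes by how many
# words they share with the question, then return the best matches.
# A real RAG setup would use embeddings; word overlap keeps this self-contained.

def score(query: str, note: str) -> float:
    """Fraction of query words that also appear in the note (case-insensitive)."""
    q = set(query.lower().split())
    n = set(note.lower().split())
    return len(q & n) / len(q) if q else 0.0

def recall(query: str, notes: list[str], top_k: int = 1) -> list[str]:
    """Return the top_k notes most relevant to the query."""
    ranked = sorted(notes, key=lambda note: score(query, note), reverse=True)
    return ranked[:top_k]

# Hypothetical memory notes, invented for this example.
notes = [
    "2025-03-02: how the project started - a weekend side experiment exporting chats",
    "2025-04-11: decided to support multiple platforms after user feedback",
    "2025-05-20: renamed the tool and shipped the first Chrome release",
]

print(recall("how did the project start", notes))
```

The design choice worth noting: retrieval returns whole narrative notes, not extracted facts, which is what lets an assistant answer "why" questions rather than just "when" questions.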
Justin Almquist @KindledFlameDev
CinderACE Sessions v0.4.0 - session picker, entry-point filtering, custom title display. No more blind auto-detection grabbing the wrong session. Pick by name, date, or first message. Free, open source: github.com/KindledFlameSt… #ai #aitools
0 replies · 0 reposts · 0 likes · 54 views

Justin Almquist @KindledFlameDev
@jinjung I'm doing the same, honestly. Premium feels utterly worthless.
0 replies · 0 reposts · 1 like · 13 views

Jin Jung @JinJung
Cancelled Premium X+. Reasons for canceling:
- Has no benefit whatsoever to your reach
- Algorithm now punishes you for replying too much
- My reach has gotten a lot worse since they announced the new change
Until things get better, I will stay as X Premium.
[media attached]
825 replies · 222 reposts · 2K likes · 171.6K views

Justin Almquist @KindledFlameDev
Your AI conversations are some of the most personal writing you'll ever do. Goals, fears, problems you're working through - it all ends up in there. Most platforms don't give you a real way to take it with you. Not because they're malicious - just because portability hasn't been their priority.

That's the gap I built CinderACE for. 14 platforms, 7 export formats, everything stays local. Not a workaround - just a layer of control the ecosystem hadn't built yet. Your words. Your history. Now actually in your hands. 🔥
0 replies · 0 reposts · 0 likes · 14 views

Justin Almquist @KindledFlameDev
@JakeSucky Keep pushing - leaving anything is hard nowadays. At least he kept at it and learned from each attempt.
0 replies · 0 reposts · 0 likes · 270 views

Jake Lucky @JakeSucky
This indie dev quit his job 4 times, including Cisco TWICE, before making it full-time with his first game. It has now sold 60,000 copies, and he's working on his next project.
12 replies · 8 reposts · 493 likes · 59.1K views

Justin Almquist @KindledFlameDev
@minovoid9 @levelsio What's your strategy for keeping them from going stale? That's something I'm always trying to optimize - not building the context, but trusting it's still accurate two weeks later.
1 reply · 0 reposts · 0 likes · 7 views

Mino @minovoid9
@levelsio I do something similar for coding - maintain 40+ skill files so Claude remembers my project context, design patterns, and deploy scripts across sessions. Persistent memory > one-off prompts. Shipped 8 projects this way as a solo dev.
1 reply · 0 reposts · 1 like · 145 views

Justin Almquist @KindledFlameDev
@Raullen The latency gap between local and API is finally closing in a way that matters for real tools. Sub-200ms locally is going to be fun 🛠️
0 replies · 0 reposts · 0 likes · 71 views

raullen.eth @Raullen
🚀 This might be the fastest local LLM inference engine on Mac - open source. Rapid-MLX is built specifically for Apple Silicon. Tested across 18 models vs Ollama, mlx-lm, llama.cpp - fastest on 16 of them.

⚡ What makes it different:
• DeltaNet state snapshots - multi-turn TTFT drops from 1.5s → under 200ms
• 100% tool calling accuracy (function calling actually works)
• OpenAI-compatible API - drop-in for Claude Code, Cursor, etc.

🏆 Qwen3.5 is where it really shines: the hybrid RNN+attention architecture needs special handling. Other engines re-compute full context every turn. Rapid-MLX snapshots the RNN state and restores it in ~0.1ms.

📊 Numbers (Mac Studio M3 Ultra, 256GB):
• 397B - runs on a single Mac. 209GB. No cloud.
• 122B → 57 tok/s, 100% tools
• Coder-Next 80B → 74 tok/s, 0.10s TTFT
• 35B → 83 tok/s
• 9B → 108 tok/s (2.3× faster than Ollama)

Fully open source: 🔗 github.com/raullenchai/Ra… @awnihannun @reach_vb @simonw @JustinLin610 @exaboross
[media attached]
59 replies · 22 reposts · 108 likes · 17.6K views
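"OpenAI-compatible API" in the tweet above means existing clients can point at the local server unchanged: same endpoint path, same JSON request shape. The sketch below builds such a request; the port, endpoint path, and model name are assumptions for illustration, not documented values from the Rapid-MLX repo.

```python
# Build the URL and JSON body an OpenAI-style client would send to a local
# server. The base URL and model name here are illustrative assumptions.
import json

def chat_request(model: str, user_message: str,
                 base_url: str = "http://localhost:8080/v1"):
    """Return (url, payload) for an OpenAI-compatible chat completion call."""
    url = f"{base_url}/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,
    }
    return url, json.dumps(body)

url, payload = chat_request("qwen3.5-9b", "Summarize this repo in one line.")
print(url)      # the local endpoint a drop-in client would POST to
print(payload)  # the standard chat-completions request body
```

Because tools like Claude Code and Cursor already speak this request shape, swapping the base URL is the only change needed to route their traffic to a local engine.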
Kritika @kritikakodes
I am a Vibe coder, scare me with one word.🤔
1K replies · 15 reposts · 720 likes · 124K views

Justin Almquist @KindledFlameDev
@xoaanya Codex for $20 is currently the better option, imo. The Pro plan on Claude Code really limits you to Sonnet only, and I feel GPT 5.4 is better overall in that case if you're forced to use just one.
0 replies · 0 reposts · 0 likes · 65 views

Aanya @xoaanya
If you had $20 to invest, which would you choose?
– Codex
– Claude Code
Heard Claude hits limits fast… what would you pick?
248 replies · 3 reposts · 163 likes · 24.4K views