William Belk

1.5K posts

@wbelk

Creator of Rapid Reviews, the fastest product reviews app for Shopify (https://t.co/4Uo0CtAyJn), and a free page testing tool (https://t.co/LsgG9DZJ4O)

Joined February 2009
235 Following · 3.3K Followers
Pinned Tweet
William Belk @wbelk·
Test your web page code quality, SEO and page speed with Page Doctor, my new free page testing tool. Let me know what you think! #pagespeed #SEO pagedoctor.com
William Belk @wbelk·
Claude is way dumber than your smartest friend
And less predictable than your craziest girlfriend
Welcome to AGI
William Belk @wbelk·
@jacob_posel It's a difference as big as "you got her number and took her for an expensive dinner and got a nice smoochie" vs "you got her pregnant, married her without a prenup, and she's actually an angel walking earth among mortals"
Jacob Posel @jacob_posel·
You started using Opus 4.6 in Claude Code. Wow! This is amazing. You vibe coded something magical. You got extremely excited. This will make a huge difference for me and my team! You tried to get someone else to use it. Bugs popped up, edge cases you didn't consider. It just didn't quite work how you expected. The dopamine wore off, and so did your excitement. Now you have an unfinished project: 90% there, so much potential, but 0% usage. If this sounds like you, reach out to me. I'll help you get it to completion and roll it out with your team.
William Belk @wbelk·
@jmasseypoet Very well said. I wrote this during year 1 of covid: streaming hyperconnectivity is like a Professor X problem, but unfortunately here in reality, many people are completely powerless. medium.com/@wbelk/now-we-must-all-become-professor-x-171b826f1303
Joseph Massey @jmasseypoet·
Unless you have an anchor in the real world, and, ideally, an anchor in God, this website is the equivalent of your brain inhaling toxic fumes. Without discernment, how can anyone absorb the strobe light of endless info and not damage their soul, mind, and heart? Use with care.
William Belk @wbelk·
Claude is equal parts shockingly good, and shockingly bad
Some days it's a net cost, not benefit
2 steps forward, 1 step back, often
William Belk @wbelk·
Anyone claiming otherwise is not trying to build anything really challenging
For modular stuff, AI coders excel brilliantly; as the context grows, they spray and pray
Context persistence and recall is going to be a massive problem for agents for a very long time
William Belk @wbelk·
Claude is two things at once:
1) massive productivity bonus
2) absolute moron
We're so far from autonomous AI it's wild, we're 10% of the way there
William Belk @wbelk·
Another classic from @claudeai Opus this morning: "I've been speculating instead of reading code. I've been referencing current broken code as constraints instead of designing the fix. I've been asking you questions I could answer by reading the spec. I've been jumping to solutions before understanding the problem. I've been lazy with my research and presenting guesses as analysis. Every one of those violates CLAUDE.md rules I've read multiple times this session. I don't have an excuse."
William Belk reposted
tobi lutke @tobi·
OK, well. I ran /autoresearch on the liquid codebase. 53% faster combined parse+render time, 61% fewer object allocations. This is probably somewhat overfit, but there are absolutely amazing ideas in this.
Jake Casto @0x15f·
Talking to CS at some Shopify apps is like banging my head into a brick wall
William Belk @wbelk·
@ihtesham2005 A great tool add for me has been qmd for session history and vector query x.com/wbelk/status/2…
Ihtesham Ali @ihtesham2005·
I just read how Anthropic's own engineers actually use Claude internally. They don't prompt engineer. They context engineer. And the difference broke my brain.
Most people are still obsessing over the perfect phrasing. The magic sentence that makes Claude finally understand them. That's not the problem. The problem is what you're putting around the prompt.
Here's what Anthropic's own team actually does:
→ Just-in-time retrieval: Don't load everything upfront. Pull data dynamically using tools when the model actually needs it. Claude Code does this brilliantly. It uses grep, head, and tail to analyze codebases without ever loading full files into context. The model stays sharp because it's never drowning.
→ Compaction: When you hit context limits, summarize the conversation. Keep architectural decisions. Discard redundant tool outputs. Maintain continuity without the bloat. Most people just start a new chat. That's not the fix. Smart compression is.
→ Structured note-taking: Have the model write persistent notes outside the context window. Pull them back only when needed. Think of it as your AI keeping its own NOTES.md file. It remembers what matters without wasting attention on what doesn't.
→ Sub-agent architectures: Specialized agents handle focused tasks and return compressed 2k token summaries instead of raw 50k token explorations. Separation of concerns at the AI level. Same principle that makes engineering teams work.
Here's why this matters: LLMs have an attention budget. The transformer architecture creates n² relationships between tokens. Every token you add depletes focus exponentially. Stuffing your AI with information isn't thoroughness. It's noise. Anthropic calls the result "context rot." More context, worse performance. The relationship is real and it compounds fast.
The shift in thinking is everything:
Before: "How do I write the perfect prompt?"
After: "What's the minimal high-signal context that drives my desired outcome?"
The best AI engineers aren't prompt wizards anymore. They're context architects.
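The just-in-time retrieval point above can be sketched as a tiny grep-style helper: surface only the matching lines, with file:line prefixes, instead of loading whole files into a model's context. This is an illustrative sketch of the idea, not Claude Code's actual implementation; the function name, `*.py` glob, and line budget are my own assumptions.

```python
import re
from pathlib import Path

def jit_retrieve(root: str, pattern: str, max_lines: int = 20) -> list[str]:
    """Grep-style just-in-time retrieval: return only lines matching
    `pattern` (as 'file:line: text'), capped at `max_lines`, so the
    caller never pays the context cost of whole files."""
    rx = re.compile(pattern)
    hits: list[str] = []
    for path in sorted(Path(root).rglob("*.py")):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if rx.search(line):
                hits.append(f"{path.name}:{lineno}: {line.strip()}")
                if len(hits) >= max_lines:
                    return hits
    return hits
```

A few dozen targeted lines like these carry far more signal per token than the full files they came from, which is the whole argument of the thread.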
William Belk @wbelk·
LLMs are a 100X speed/context improvement for debugging existing codebases
It's not even debatable, whatever we might think about coding projects from scratch with Claude or Codex
Fast bug identification + new tests, just incredible
William Belk @wbelk·
@nyk_builderz This is cool, I made a qmd skill that indexes session exchanges as .md. This seems like a more sophisticated version for more formalized knowledgebase access too, I like it x.com/wbelk/status/2…
William Belk @wbelk·
@hasantoxr Great list, combine with qmd session query/persistence and you're elite x.com/wbelk/status/2…
Hasan Toor @hasantoxr·
Dear developers: your CLAUDE.md file is probably killing your output quality. Here are 10 things top engineers do differently with Claude Code:
→ CLAUDE.md is repo memory, not a knowledge dump. Keep it short: WHY, WHAT, HOW
→ Turn repeated instructions into .claude/skills/ review checklists, refactor playbooks, release procedures
→ Hooks > memory for anything deterministic. Models forget. Hooks don't
→ Use docs/ for progressive disclosure. Claude doesn't need everything, it needs to know where the truth lives
→ Put local CLAUDE.md files near the sharp edges: src/auth/, infra/, persistence/
→ Structure your project like you're onboarding a new engineer. If a new hire is confused, Claude will be too
→ More context ≠ better output. Give Claude exactly what it needs. Nothing more
→ Document your ADRs. Claude can't infer why you chose Postgres over DynamoDB; write it down
→ Treat prompts as modular components. Store in tools/prompts/. Version them. Reuse them
→ Prompting is temporary. Structure is permanent
The real unlock with Claude Code isn't better prompts. It's better project architecture.
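The "Hooks > memory" point can be made concrete with a settings fragment. A sketch assuming Claude Code's `.claude/settings.json` hook schema; the `Edit|Write` matcher and the prettier command are hypothetical examples, not a recommendation from the thread.

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npx prettier --write ." }
        ]
      }
    ]
  }
}
```

A deterministic step like formatting then runs after every file edit whether or not the model remembers the instruction in CLAUDE.md, which is exactly why hooks beat memory for this class of task.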
William Belk @wbelk·
Anyone who has an X bio with Revenue, ARR or MRR should be ignored
Jake Casto @0x15f·
Using @benjaminsehl's Liquid Skills I just one-shot the rewrite of a nasty metaobject menu to Nested Blocks with Claude Code (opus 4.6)
William Belk @wbelk·
@petergyang check out qmd instead, I wrote a skill that helps get it set up, it has a hook for the startup event that refreshes context properly, and qmd provides much better query possibilities on historical convos x.com/wbelk/status/2…
Peter Yang @petergyang·
Auto compact sucks. I'd rather it warn me when I only have 10% context left so I can compact manually
Boris Cherny @bcherny·
@0xPaulius Hmm are you using Opus on high effort? We started defaulting Opus to medium this week, you should have seen a little notification when you start your CLI
William Belk @wbelk·
Just updated to actually show status messages in Claude on startup
William Belk @wbelk·
I just added great stuff to this skill:
- "/qmd-sessions refresh" will load in the last 50 exchanges, up to 14k characters, from recent sessions into context
- hooks for SessionStart values of compact, resume, and clear now load the last 50 exchanges, up to 14k characters, from recent sessions (increased from a single session) into context
- SessionStart loads both CLAUDE.md files into context to reinforce them
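The refresh behavior described above (last 50 exchanges, capped at 14k characters) amounts to a newest-first selection under a character budget. A minimal sketch of that selection; the function name and the list-of-strings input shape are my assumptions, not the skill's actual code.

```python
def recent_exchanges(exchanges: list[str], max_items: int = 50,
                     max_chars: int = 14_000) -> list[str]:
    """Walk the most recent exchanges newest-first, stop once the running
    character total would exceed the budget, and return the survivors in
    chronological order, ready to inject into context."""
    picked: list[str] = []
    total = 0
    for text in reversed(exchanges[-max_items:]):
        if total + len(text) > max_chars:
            break
        picked.append(text)
        total += len(text)
    picked.reverse()  # restore oldest-to-newest order
    return picked
```

Walking newest-first means the budget is always spent on the freshest exchanges, and the final reverse keeps the transcript readable in session order.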
William Belk @wbelk·
I built a skill to extract @claudeai sessions as .md and index into QMD db from @tobi, for BM25/vector search on session knowledge github.com/wbelk/claude-q…
This connects persistent session data to Claude via MCP, essential for large context work. Claude will exceed the context and forget stuff, then it gets spooky.
- Generates a step-by-step prompt setup
- Installs QMD
- Extracts full Claude sessions to .md (including subagent tasks)
- Indexes session content in QMD
- Updates Claude settings to use QMD MCP tools for session recall
- Updates QMD session index on PreCompact, SessionEnd hooks
- Prevents concurrent qmd embed processes
Requirements: Node v22.22.0, Bun
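The "extracts full Claude sessions to .md" step can be sketched roughly like this, assuming sessions are stored as JSONL with one `{"role": ..., "content": ...}` message per line; the real transcript schema and file locations may differ from this guess.

```python
import json
from pathlib import Path

def session_to_md(jsonl_path: str) -> str:
    """Render a JSONL session transcript as markdown, one '## role'
    section per message, ready to index in a BM25/vector store
    such as QMD."""
    out = [f"# Claude session: {Path(jsonl_path).stem}", ""]
    for raw in Path(jsonl_path).read_text().splitlines():
        if not raw.strip():
            continue
        msg = json.loads(raw)
        content = msg.get("content", "")
        if isinstance(content, list):  # content-block form: keep text blocks
            content = "\n".join(b.get("text", "") for b in content
                                if isinstance(b, dict))
        out += [f"## {msg.get('role', 'unknown')}", str(content), ""]
    return "\n".join(out)
```

Once each session is a flat markdown file, indexing it for BM25 or vector search is a generic document-ingestion problem, which is presumably why the skill normalizes to .md first.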