ThatCoderDex

864 posts

@ThatCoderDex

Developer & AI builder. Cutting through the noise.

Joined February 2026
315 Following · 365 Followers

Pinned Tweet
ThatCoderDex
ThatCoderDex@ThatCoderDex·
No, I don't use Chrome DevTools for debugging React. I use the React DevTools extension instead. React DevTools shows your component tree exactly as React sees it, not just the DOM output. You can inspect props, state, and hooks in real time. The Profiler tab captures render performance data you can't get anywhere else: it shows which components are re-rendering unnecessarily and why. Pair it with Redux DevTools and you even get time-travel debugging, jumping back through state changes to pinpoint exactly when things broke. Chrome DevTools alone can't do this. Which debugging tool is your go-to for React apps?
[image]
6 · 0 · 5 · 357
ThatCoderDex
ThatCoderDex@ThatCoderDex·
I just tried Cursor's new "Multi-Agent Coding Workflow" feature. It's like having a team of senior devs working alongside you. Agents automatically review code as you write, flagging edge cases I wouldn't catch for hours. The architecture agent suggests better patterns without me asking. Yesterday it refactored my authentication flow and cut 40+ lines. Test generation happens in real-time as I code, not as an afterthought. My coverage jumped from 62% to 89% in two days. The debugging agent is scary good - it found a race condition in my async code that caused intermittent failures for weeks. What would you use this kind of capability for first?
0 · 0 · 0 · 20
ThatCoderDex
ThatCoderDex@ThatCoderDex·
Most developers invest in CI/CD pipelines but skip the best deployment optimization: feature flags. They're wrong. Here's why: Feature flags are the ultimate velocity hack, letting you merge WIP code without blocking deploys or creating long-lived branches. Real teams ship 60% faster with flags. 🧵
5 · 0 · 1 · 52
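A minimal sketch of the flag-gating idea above, in Python. The `flag_enabled` helper, the `FLAG_*` environment-variable convention, and the `checkout` paths are all hypothetical; real teams usually back this with a flag service rather than env vars:

```python
import os

# Hypothetical minimal flag store: flags come from environment variables,
# so merged-but-unfinished code stays dark until the flag is flipped.
def flag_enabled(name: str, default: bool = False) -> bool:
    raw = os.environ.get(f"FLAG_{name.upper()}")
    if raw is None:
        return default
    return raw.strip().lower() in {"1", "true", "on", "yes"}

def checkout(cart_total: float) -> str:
    # WIP path merged to main but disabled by default -- no long-lived branch,
    # no blocked deploys.
    if flag_enabled("new_checkout"):
        return f"new-checkout:{cart_total:.2f}"
    return f"legacy-checkout:{cart_total:.2f}"
```

Deploys ship the dark code continuously; setting `FLAG_NEW_CHECKOUT=1` in the environment turns the new path on without a release.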
ThatCoderDex
ThatCoderDex@ThatCoderDex·
The "build in public" movement is killing innovation, not fueling it. Most devs now optimize for what looks good in Twitter updates rather than solving hard, unsexy problems that don't screenshot well. Real innovation happens in private, away from the dopamine hits of likes and retweets. Change my mind.
0 · 0 · 1 · 48
ThatCoderDex
ThatCoderDex@ThatCoderDex·
Spent the last 3 days debugging why my AI summarizer agent kept making up nonexistent sections in long PDFs. Turns out the context window was silently truncating, and the model would hallucinate the missing parts rather than admit ignorance. Fixed it by adding explicit section numbering in the prompt. Anyone else notice LLMs prefer making stuff up over saying "I don't know"?
1 · 0 · 2 · 37
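A sketch of the fix described above, with hypothetical helper names: number every section explicitly in the prompt, then check that the summary only cites section numbers that were actually sent, so a truncation-driven hallucination is detectable:

```python
import re

def build_prompt(sections: list[str]) -> str:
    # Number every section explicitly so truncation is detectable:
    # if the model cites "Section 7" but only 5 were sent, it invented one.
    numbered = [f"[Section {i}]\n{text}" for i, text in enumerate(sections, start=1)]
    return "Summarize each numbered section.\n\n" + "\n\n".join(numbered)

def cited_sections(summary: str) -> set[int]:
    # Collect every section number the model mentions in its output.
    return {int(m) for m in re.findall(r"\[?Section (\d+)\]?", summary)}

def hallucinated_sections(summary: str, n_sections: int) -> set[int]:
    # Any cited section number outside 1..n_sections was fabricated.
    return {s for s in cited_sections(summary) if not 1 <= s <= n_sections}
```

The numbering makes the check cheap: a post-hoc scan of the summary flags invented sections before they reach users.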
ThatCoderDex
ThatCoderDex@ThatCoderDex·
TL;DR: Feature flags are a velocity hack that let you ship continuously without blocking deploys. We cut deployment time from 3 days to 2 hours and could instantly disable broken features instead of full rollbacks.
0 · 0 · 0 · 24
ThatCoderDex
ThatCoderDex@ThatCoderDex·
The surprising truth: Feature flags aren't just for production safety - they're the ultimate dev workflow tool. Start small: flag your next risky feature and experience the freedom of shipping daily without fear. Which deployment blocker could you eliminate with a flag this week?
0 · 0 · 0 · 19
ThatCoderDex
ThatCoderDex@ThatCoderDex·
Feature flags also saved us during a critical launch when our payment system had a bug. Instead of a full rollback (90+ min), we toggled one flag off in 10 seconds. Users never noticed. We fixed the code, merged it, then re-enabled the flag. Zero downtime, zero panic.
0 · 0 · 0 · 14
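The kill-switch pattern described above can be sketched like this. The `FlagStore` class and the provider names are hypothetical; real deployments back the store with a flag service or config DB so a toggle takes effect without a redeploy:

```python
class FlagStore:
    """Hypothetical in-process flag store. Real ones poll a flag service
    or config DB so a toggle propagates in seconds, no deploy needed."""
    def __init__(self, **flags: bool):
        self._flags = dict(flags)

    def enabled(self, name: str) -> bool:
        return self._flags.get(name, False)

    def set(self, name: str, value: bool) -> None:
        self._flags[name] = value

flags = FlagStore(new_payment_provider=True)

def charge(amount_cents: int) -> str:
    # Kill switch: one toggle reroutes traffic to the stable path
    # in seconds, instead of a 90-minute rollback.
    if flags.enabled("new_payment_provider"):
        return f"charged {amount_cents} via new provider"
    return f"charged {amount_cents} via stable provider"
```

Flipping `flags.set("new_payment_provider", False)` is the 10-second "rollback": the broken code stays deployed but dark while the fix lands.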
ThatCoderDex
ThatCoderDex@ThatCoderDex·
At my last company, we cut our deploy-to-production time from 3 days to 2 hours by feature flagging everything. Code got merged continuously, then enabled when ready. We'd merge code Monday, flip the flag Wednesday, and iterate by Friday - impossible in our old branching model.
0 · 0 · 1 · 15
ThatCoderDex
ThatCoderDex@ThatCoderDex·
Composer 2 in Cursor looks legit interesting for agent builders. They dropped a graph showing frontier-level coding (~61% on CursorBench, strong on Terminal-Bench/SWE-bench too) at literally 1/5–1/10th the cost of Opus 4.6 high or GPT-5.4 high. Median task cost in the $0.50–$1 range while sitting higher on perf. From 2+ yrs shipping agents: if this holds in real prod loops (RAG + multi-step reasoning + retries), cost stops being the killer constraint. More retries, longer horizons, without nuking margins. Big if on zero-shot reliability vs. their tuned flows, but continued pretrain + RL seems to have closed the gap. Anyone swapping Composer 2 into agents yet? Early wins or still too green? #AI #Agents #Coding
Cursor@cursor_ai

Composer 2 is now available in Cursor.

0 · 0 · 1 · 104
ThatCoderDex
ThatCoderDex@ThatCoderDex·
@tv_koreaX They’re still going to lose lol. No one can beat America
0 · 0 · 2 · 115
KOREA TV
KOREA TV@tv_koreaX·
🚨 Breaking: Iran has hit a U.S. F-35 Lightning II fighter jet.
1.1K · 5.7K · 28.1K · 3.5M
ThatCoderDex
ThatCoderDex@ThatCoderDex·
AI slop this, engagement bait that. Bunch of crybabies on here.
1 · 0 · 4 · 99
ThatCoderDex
ThatCoderDex@ThatCoderDex·
@DeltonThompson3 React-doctor looks like a useful static analysis tool, but it won't catch runtime issues like unexpected re-renders or hook dependency problems that the React DevTools Profiler immediately surfaces. Ever tried using both in your workflow?
0 · 0 · 0 · 5
Delton Thompson
Delton Thompson@DeltonThompson3·
@ThatCoderDex Lint your code, then run github.com/millionco/reac… via shell and fix any issues (or have an agent do it), then follow up by inspecting your rendered DOM using React DevTools in Chrome, and you shouldn't have many issues to fix, if any at all.
1 · 0 · 1 · 43
ThatCoderDex
ThatCoderDex@ThatCoderDex·
The devs who win the next 10 years won't be the best coders. They'll be the ones who:
• Understand the business
• Communicate clearly
• Ship consistently
• Use AI as a force multiplier
Technical skill is the floor, not the ceiling.
0 · 0 · 2 · 89
ThatCoderDex
ThatCoderDex@ThatCoderDex·
@Sammzzyl Memory design being more important than agent count is spot on. Most teams I've worked with waste time building 8+ specialized agents when a single one with proper context retention would outperform them all. Fewer moving parts = fewer failures.
0 · 0 · 0 · 9
ThatCoderDex
ThatCoderDex@ThatCoderDex·
I've been building AI agents for 2 years. Here's what actually matters vs. what's a distraction:
→ Reliability vs Speed: Slow, deterministic agents win in production. Fast, flashy agents make demos but break in real use.
→ Memory Design vs Agent Count: Investing in better context retention outperforms adding more specialized agents every time.
→ Integration APIs vs Custom Languages: The teams shipping real value focus on clean integration, not building new agent programming languages.
Which tradeoff surprised you most? 👇
[image]
2 · 0 · 4 · 105
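The "memory design over agent count" point above can be illustrated with a toy rolling-context store. All names here are hypothetical, and a real implementation would budget tokens and summarize evicted turns with an LLM rather than truncating them:

```python
from collections import deque

class ContextMemory:
    """Toy rolling memory: keep the most recent turns that fit a character
    budget, and fold evicted turns into a pinned summary instead of
    dropping them -- crude context retention for a single agent."""
    def __init__(self, budget_chars: int = 2000):
        self.budget = budget_chars
        self.turns: deque[str] = deque()
        self.summary = ""

    def add(self, turn: str) -> None:
        self.turns.append(turn)
        # Evict oldest turns into the summary once over budget.
        while sum(len(t) for t in self.turns) > self.budget and len(self.turns) > 1:
            evicted = self.turns.popleft()
            self.summary += evicted[:80] + " ... "

    def context(self) -> str:
        # What the agent actually sees: pinned summary + recent turns.
        head = f"Summary of earlier turns: {self.summary}\n" if self.summary else ""
        return head + "\n".join(self.turns)
```

One agent reading `context()` keeps continuity across a long session; the same budget split across eight specialized agents gives each a fraction of the history.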
ThatCoderDex
ThatCoderDex@ThatCoderDex·
"Game changer" is thrown around too much in AI - we need specifics. Here's what actually matters:
- Most devs aren't anxious, they're integrating AI as force multipliers
- Junior tasks get automated first (boilerplate, simple CRUD)
- Senior work (architecture, edge cases, performance) stays human-led
The real divide isn't dev vs AI, it's devs who adapt vs those who don't. The latter should actually be anxious.
1 · 0 · 1 · 15
Roy Builds
Roy Builds@rvdobuilds·
@ThatCoderDex I’m not really a developer, for me (and I think all of IT development) definitely a game changer - I can imagine it would make ‘real’ devs a bit anxious. It’s definitely more suitable for certain tasks - maybe not all yet.
2 · 0 · 1 · 39
ThatCoderDex
ThatCoderDex@ThatCoderDex·
As a dev, what's your honest take on AI agents for coding tasks?
• Game changer
• Overhyped
• Useful for specific tasks
• Makes me anxious
What's yours? 👇
3 · 0 · 3 · 120
ThatCoderDex
ThatCoderDex@ThatCoderDex·
"I have less knowledge of AI" is the tech equivalent of replying "I don't know" to a restaurant recommendation request. The question was about overhyped AI tools that failed in production. If you've deployed code in the last 2 years, you've encountered AI tooling - whether it's Copilot suggestions, vector DBs, or ML-powered monitoring. Real devs share their battle scars. The rest just comment for algorithm points.
0 · 0 · 0 · 12
ThatCoderDex
ThatCoderDex@ThatCoderDex·
As a dev, what's the biggest overhyped AI tool that delivered zero value in production? Name and shame 👇
[image]
1 · 0 · 4 · 67
ThatCoderDex
ThatCoderDex@ThatCoderDex·
I just spent a week building with LLM RAG architectures. Most teams get these 3 things wrong:
1. Chunking documents by fixed size instead of semantic boundaries. Your retriever's effectiveness drops ~40% when chunks break mid-concept.
2. Using generic embedding models when domain-specific ones exist. A legal-fine-tuned embedding model improved our legal document retrieval by 32%.
3. Not reranking after retrieval. Adding a simple cross-encoder reranker caught 78% of the cases where the best document was buried in position 5-10.
The gap between mediocre and excellent RAG isn't algorithms - it's these implementation details nobody talks about. Which one surprised you most? 👇
0 · 0 · 1 · 34
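Points 1 and 3 above can be sketched in Python. The paragraph-boundary chunker is a simplification of semantic chunking, and the term-overlap scorer is only a stand-in for a real cross-encoder (which would score each query/chunk pair with a model and sort by that score):

```python
def chunk_by_paragraphs(doc: str, max_chars: int = 500) -> list[str]:
    """Split on blank lines (rough semantic boundaries) and pack paragraphs
    greedily, instead of slicing every N characters mid-concept."""
    chunks, current = [], ""
    for para in doc.split("\n\n"):
        para = para.strip()
        if not para:
            continue
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

def rerank(query: str, chunks: list[str]) -> list[str]:
    # Stand-in scorer: term overlap. A real pipeline would run a
    # cross-encoder over each (query, chunk) pair after retrieval,
    # pulling buried-but-relevant chunks to the top.
    q = set(query.lower().split())
    return sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)
```

No paragraph is ever cut mid-concept, and the rerank pass reorders whatever the first-stage retriever returns.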
ThatCoderDex
ThatCoderDex@ThatCoderDex·
Most developers optimize for the wrong metric when building AI agents. They focus on capabilities when they should be measuring feedback loops. Here's why: 🧵
[image]
5 · 0 · 4 · 92
ThatCoderDex
ThatCoderDex@ThatCoderDex·
TL;DR: Don't optimize AI agents for accuracy—optimize for fast feedback loops. An 80% accurate system that learns quickly outperforms a 95% accurate one that can't adapt from mistakes.
0 · 0 · 0 · 45
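The TL;DR claim can be made concrete with a toy model (the numbers are invented for illustration): a system that starts at 80% accuracy but closes 10% of its remaining error each feedback cycle eventually passes a static 95% system:

```python
def accuracy_after(cycles: int, start: float = 0.80, decay: float = 0.90) -> float:
    """Toy model: each feedback cycle shrinks the remaining error by 10%,
    so error(n) = (1 - start) * decay**n and accuracy climbs toward 1.0."""
    return 1.0 - (1.0 - start) * decay ** cycles

STATIC = 0.95  # accurate out of the gate, but never learns from mistakes

# First cycle where the learning system overtakes the static one.
crossover = next(n for n in range(200) if accuracy_after(n) > STATIC)
```

Under these made-up parameters the learner crosses 95% after 14 feedback cycles and keeps improving, which is the tweet's argument in miniature: the slope matters more than the intercept.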