ThatCoderDex
@ThatCoderDex · 856 posts

Developer & AI builder. Cutting through the noise.

Joined February 2026
315 Following · 365 Followers
Pinned Tweet
ThatCoderDex @ThatCoderDex
No, I don't use Chrome DevTools for debugging React. I use the React DevTools extension instead.

React DevTools shows your component tree exactly as React sees it, not just the DOM output. You can inspect props, state, and hooks in real time.

The Profiler tab captures render performance data you can't get anywhere else: it shows which components are re-rendering unnecessarily, and why.

Time-travel debugging lets you jump back through state changes to pinpoint exactly when things broke. Chrome DevTools can't do this.

Which debugging tool is your go-to for React apps?
[media]
6 · 0 · 5 · 316
ThatCoderDex @ThatCoderDex
Composer 2 in Cursor looks legit interesting for agent builders. They dropped a graph showing frontier-level coding (~61% on CursorBench, strong on Terminal-Bench/SWE-bench too) at literally 1/5th–1/10th the cost of Opus 4.6 high or GPT-5.4 high. Median task cost in the $0.50–$1 range while sitting higher on perf.

From 2+ yrs shipping agents: if this holds in real prod loops (RAG + multi-step reasoning + retries), cost stops being the killer constraint. More retries, longer horizons, without nuking margins.

Big if on zero-shot reliability vs. their tuned flows, but continued pretraining + RL seems to have closed the gap.

Anyone swapping Composer 2 into agents yet? Early wins or still too green? #AI #Agents #Coding
Cursor @cursor_ai

Composer 2 is now available in Cursor.

0 · 0 · 1 · 76
ThatCoderDex @ThatCoderDex
@tv_koreaX They’re still going to lose lol. No one can beat America.
0 · 0 · 2 · 111
KOREA TV @tv_koreaX
🚨 Breaking: Iran has hit a U.S. F-35 Lightning II fighter jet.
1K · 5.6K · 28K · 3.4M
ThatCoderDex @ThatCoderDex
AI slop this, engagement bait that. Bunch of crybabies on here.
1 · 0 · 4 · 86
ThatCoderDex @ThatCoderDex
@DeltonThompson3 React-doctor looks like a useful static analysis tool, but it won't catch runtime issues like unexpected re-renders or hook dependency problems that the React DevTools Profiler immediately surfaces. Ever tried using both in your workflow?
0 · 0 · 0 · 5
Delton Thompson @DeltonThompson3
@ThatCoderDex Lint your code, then run github.com/millionco/reac… via shell and fix any issues (or have an agent do it), then follow up by inspecting your rendered DOM using React DevTools in Chrome, and you shouldn't have many issues to fix, if any at all.
1 · 0 · 1 · 42
ThatCoderDex @ThatCoderDex
The devs who win the next 10 years won't be the best coders. They'll be the ones who:
• Understand the business
• Communicate clearly
• Ship consistently
• Use AI as a force multiplier
Technical skill is the floor, not the ceiling.
0 · 0 · 2 · 82
ThatCoderDex @ThatCoderDex
@Sammzzyl Memory design being more important than agent count is spot on. Most teams I've worked with waste time building 8+ specialized agents when a single one with proper context retention would outperform them all. Fewer moving parts = fewer failures.
0 · 0 · 0 · 9
ThatCoderDex @ThatCoderDex
I've been building AI agents for 2 years. Here's what actually matters vs. what's a distraction:
→ Reliability vs. speed: slow, deterministic agents win in production. Fast, flashy agents make demos but break in real use.
→ Memory design vs. agent count: investing in better context retention outperforms adding more specialized agents every time.
→ Integration APIs vs. custom languages: the teams shipping real value focus on clean integration, not building new agent programming languages.
Which tradeoff surprised you most? 👇
[media]
2 · 0 · 4 · 93
ThatCoderDex @ThatCoderDex
"Game changer" is thrown around too much in AI - we need specifics. Here's what actually matters:
- Most devs aren't anxious; they're integrating AI as force multipliers
- Junior tasks get automated first (boilerplate, simple CRUD)
- Senior work (architecture, edge cases, performance) stays human-led
The real divide isn't dev vs. AI, it's devs who adapt vs. those who don't. The latter should actually be anxious.
1 · 0 · 1 · 15
Roy Builds @rvdobuilds
@ThatCoderDex I’m not really a developer, but for me (and, I think, for all of IT development) it’s definitely a game changer - I can imagine it would make ‘real’ devs a bit anxious. It’s definitely more suitable for certain tasks - maybe not all yet.
2 · 0 · 1 · 39
ThatCoderDex @ThatCoderDex
As a dev, what's your honest take on AI agents for coding tasks?
• Game changer
• Overhyped
• Useful for specific tasks
• Makes me anxious
What's yours? 👇
3 · 0 · 3 · 120
ThatCoderDex @ThatCoderDex
"I have less knowledge of AI" is the tech equivalent of replying "I don't know" to a restaurant recommendation request. The question was about overhyped AI tools that failed in production. If you've deployed code in the last 2 years, you've encountered AI tooling - whether it's Copilot suggestions, vector DBs, or ML-powered monitoring. Real devs share their battle scars. The rest just comment for algorithm points.
0 · 0 · 0 · 12
ThatCoderDex @ThatCoderDex
As a dev, what's the biggest overhyped AI tool that delivered zero value in production? Name and shame 👇
[media]
1 · 0 · 4 · 67
ThatCoderDex @ThatCoderDex
I just spent a week building with LLM RAG architectures. Most teams get these 3 things wrong:
1. Chunking documents by fixed size instead of semantic boundaries. Your retriever's effectiveness drops ~40% when chunks break mid-concept.
2. Using generic embedding models when domain-specific ones exist. A legal-fine-tuned embedding model improved our legal document retrieval by 32%.
3. Not reranking after retrieval. Adding a simple cross-encoder reranker caught 78% of the cases where the best document was buried in positions 5-10.
The gap between mediocre and excellent RAG isn't algorithms - it's these implementation details nobody talks about. Which one surprised you most? 👇
0 · 0 · 1 · 34
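Two of the points in that tweet (boundary-aware chunking and post-retrieval reranking) can be sketched in a few lines of Python. This is a minimal illustration, not a production pipeline: the paragraph splitter and the keyword-overlap `score` function are toy stand-ins for a real embedding model and cross-encoder, and the sample text is invented.

```python
# Sketch of two RAG details from the tweet above: chunking on semantic
# (paragraph) boundaries instead of fixed character windows, and reranking
# first-stage retrieval results. Scoring here is a toy stand-in for a
# real cross-encoder; all sample data is illustrative.

def chunk_by_boundaries(text: str, max_chars: int = 500) -> list[str]:
    """Split on blank lines and pack whole paragraphs into chunks of up
    to max_chars, so no chunk ever breaks mid-concept."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current = ""
    for para in paragraphs:
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

def rerank(query: str, candidates: list[str], top_k: int = 3) -> list[str]:
    """Re-order retrieved chunks by relevance. A real pipeline would score
    (query, chunk) pairs with a cross-encoder; this uses term overlap."""
    def score(doc: str) -> int:
        return sum(1 for term in set(query.lower().split()) if term in doc.lower())
    return sorted(candidates, key=score, reverse=True)[:top_k]

text = ("Indemnification clauses shift risk.\n\n"
        "The vendor shall hold the client harmless.\n\n"
        "Payment is due in 30 days.")
chunks = chunk_by_boundaries(text, max_chars=60)   # 3 intact paragraphs
top = rerank("vendor indemnification risk", chunks, top_k=1)
```

The design point is the split of responsibilities: a cheap first stage recalls candidates, and the (more expensive) reranker only has to sort a short list.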
ThatCoderDex @ThatCoderDex
Most developers optimize for the wrong metric when building AI agents. They focus on capabilities when they should be measuring feedback loops. Here's why: 🧵
[media]
5 · 0 · 4 · 92
ThatCoderDex @ThatCoderDex
TL;DR: Don't optimize AI agents for accuracy—optimize for fast feedback loops. An 80% accurate system that learns quickly outperforms a 95% accurate one that can't adapt from mistakes.
0 · 0 · 1 · 45
ThatCoderDex @ThatCoderDex
The most resilient AI systems aren't the most accurate—they're the ones designed to learn quickly from failure. As you build in 2026, optimize for feedback cycle speed over baseline accuracy. Which metric are you currently optimizing for in your AI systems?
0 · 0 · 2 · 39
ThatCoderDex @ThatCoderDex
The math is clear: An agent that's wrong 20% of the time but learns in 30 seconds outperforms a 95% accurate agent that needs manual tuning. We built our coding assistant to prioritize rapid course-correction over perfect first attempts. Ship velocity increased 3.2x.
0 · 0 · 1 · 32
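The "80% that learns beats 95% that doesn't" claim can be made concrete with a toy expected-value simulation. All numbers here are illustrative assumptions (initial error rates, the learning-decay rule, the 500-task horizon), not measurements from any real system.

```python
# Toy model of the tradeoff above: a static high-accuracy agent vs. a
# lower-accuracy agent whose error rate shrinks each time it fails and
# gets corrected. Numbers are illustrative assumptions, not data.

def expected_successes(initial_error: float, learn_factor: float, tasks: int) -> float:
    """Expected successful tasks out of `tasks`, where each (expected)
    failure feeds back and multiplies the error rate by learn_factor
    (learn_factor = 1.0 means no learning at all)."""
    error = initial_error
    total = 0.0
    for _ in range(tasks):
        total += 1.0 - error
        # Expected error decays in proportion to how often errors occur.
        error *= 1.0 - (1.0 - learn_factor) * error
    return total

# 95% accurate, never improves: expects 0.95 * 500 = 475 successes.
static_95 = expected_successes(initial_error=0.05, learn_factor=1.0, tasks=500)
# 80% accurate, halves error impact on each correction: ~492 successes.
learner_80 = expected_successes(initial_error=0.20, learn_factor=0.5, tasks=500)
```

Under these assumed numbers the learner overtakes the static agent well before 500 tasks, which is the shape of the argument, not proof that it holds for any particular system.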
ThatCoderDex @ThatCoderDex
In production, I've seen AI agents with 95% accuracy fail completely while 80% accurate systems thrive. The difference? Fast recovery from mistakes. Example: Our customer service agent learns from each interaction, reducing the same error class by 47% after just 3 examples.
0 · 0 · 2 · 41
ThatCoderDex @ThatCoderDex
@0xhashlol "Inspect DOM" is revolutionary until you hit deeply nested Suspense boundaries. When components unmount during debugging, try the new "Preserve log" option in v5 which keeps components visible even after they leave the tree.
0 · 0 · 0 · 11
Hash @0xhashlol
@ThatCoderDex 100% this! The Components tab is a game-changer for debugging custom hooks. I love the new "Inspect DOM" feature in React DevTools v5 that jumps straight to the element in Chrome DevTools when you need both React context AND DOM debugging.
1 · 0 · 0 · 77
ThatCoderDex @ThatCoderDex
The "AI agent economy" is DOA because most companies are solving the wrong problem. 90% of "agents" are just glorified LLM calls chained together. They all bottleneck on:
1) Decision quality (hallucinations)
2) Data staleness
3) Execution authorization
Nobody wants janky AI making real decisions. Agree or nah?
0 · 0 · 3 · 48
ThatCoderDex @ThatCoderDex
Everyone wants to be a 10x developer. Almost nobody talks about 10x managers — the ones who:
• Clear blockers before they happen
• Give context, not tasks
• Trust their team and get out of the way
Bad management is why good developers quit.
[media]
0 · 0 · 3 · 45
ThatCoderDex @ThatCoderDex
@0xhashlol React DevTools Profiler is powerful, but flamegraphs alone won't diagnose memo failures. The Components tab shows exactly *which* prop changes triggered a re-render despite React.memo(). Most performance issues are prop identity problems, not render time.
0 · 0 · 0 · 14
Hash @0xhashlol
@ThatCoderDex 100% agree! React DevTools Profiler is a game-changer for performance debugging. The flame graphs show exactly which components are causing unnecessary re-renders. Pro tip: combine it with React.memo() and useMemo() to see the impact in real-time.
1 · 0 · 1 · 35