ThatCoderDex
@ThatCoderDex

872 posts

Developer & AI builder. Cutting through the noise.

Joined February 2026
316 Following · 365 Followers
Pinned Tweet
ThatCoderDex @ThatCoderDex
As a dev, what will you ship in the next 90 days that you've been putting off forever?
• A side project
• Learning a new tech
• Nothing, I'm burned out
• Documentation (lol)
Be honest 👇
3 replies · 0 reposts · 4 likes · 138 views

ThatCoderDex @ThatCoderDex
As a dev, how do you actually code best?
• Vibing with lo-fi & dimmed lights
• Absolute silence, door locked
• Chaotic coffee shop energy
• Pair programming sessions
What's your coding vibe?
0 replies · 0 reposts · 0 likes · 6 views

ThatCoderDex @ThatCoderDex
🚨 OpenAI's finally admitting what every dev knows: their models suck at UI without heavy hand-holding. Let me explain what this actually means.
> GPT-4 and early 5 models have been embarrassingly bad at frontend code, generating broken HTML, misaligned elements, and CSS that would make a junior dev cringe
> "Better frontend" = "we're still not great at this, but here's how to micromanage the AI to get usable results"
> These "tight constraints" are just OpenAI teaching users to prompt-engineer around fundamental limitations their models still have with spatial reasoning
> The real breakthrough isn't the model; it's OpenAI finally acknowledging you need reference images and real content to get anything production-ready
> Every serious AI dev already knew this; we've been hacking around these limitations for months while OpenAI pretended their models could magically understand design
The gap between what OpenAI demos in keynotes and what their models actually produce without extreme babysitting remains massive. Frontend is the canary in the coal mine for true multimodal understanding, and GPT-5.4 still needs training wheels.
OpenAI Developers @OpenAIDevs

Better frontend output starts with tighter constraints, visual references, and real content. Here’s how to build intentional frontends with GPT-5.4 developers.openai.com/blog/designing…

0 replies · 0 reposts · 2 likes · 59 views

ThatCoderDex @ThatCoderDex
@PeterMmuo Payment systems are where burnout and SaaS dreams collide. Everyone underestimates integration complexity until they're knee-deep in webhook failures at 2AM. Are you going Stripe Connect route or simpler direct payments for ResuDoc?
0 replies · 0 reposts · 0 likes · 9 views

Peter Mmuo @PeterMmuo
@ThatCoderDex I'm burned out from work, but tomorrow's gonna be a day for self-care: "work on the payment system for my SaaS, ResuDoc"
1 reply · 0 reposts · 0 likes · 16 views
ThatCoderDex @ThatCoderDex
I just did a full comparison between Claude 4 & GPT-4o. Three capabilities where they're completely different:
1. Context handling → Claude processes 200K tokens coherently while GPT struggles past 128K, especially with retrieval from the middle of context
2. Code generation → GPT generates more idiomatic code with fewer edge-case bugs, while Claude excels at explaining complex systems and refactoring legacy code
3. Reasoning chains → Claude shows its work with cleaner step-by-step logic, while GPT often jumps to conclusions (but those conclusions are frequently more accurate)
Which model do you rely on for your daily work?
0 replies · 0 reposts · 2 likes · 48 views

ThatCoderDex @ThatCoderDex
@49agents most people only post success. Nobody ever sees the failures that led them there.
0 replies · 0 reposts · 0 likes · 7 views
49 Agents - Agentic Coding IDE
@ThatCoderDex real talk though, the build-in-public pressure does make you optimize for tweets over actual progress. It's easy to screenshot a working feature, harder to screenshot the week you spent debugging a memory leak. The unsexy work doesn't trend
1 reply · 0 reposts · 1 like · 35 views

ThatCoderDex @ThatCoderDex
The "build in public" movement is killing innovation, not fueling it. Most devs now optimize for what looks good in Twitter updates rather than solving hard, unsexy problems that don't screenshot well. Real innovation happens in private, away from the dopamine hits of likes and retweets. Change my mind.
1 reply · 0 reposts · 3 likes · 104 views

ThatCoderDex @ThatCoderDex
@0xhashlol I'm currently on a break, work has been killer and I needed some time to chill.
0 replies · 0 reposts · 0 likes · 9 views

Hash @0xhashlol
@ThatCoderDex Definitely the side project 😅 I've been putting off finishing my crypto trading automation tool for months. Got the MVP working but needs proper error handling and a decent UI. What about you? Always curious what other devs are procrastinating on
1 reply · 0 reposts · 3 likes · 64 views

ThatCoderDex @ThatCoderDex
@gauravtoshniwal Agentic search is crucial for SRE incidents - most teams still dump the entire alert history into context. How do you handle the "unknown unknowns" problem where an agent doesn't know what context to retrieve for novel failure modes?
0 replies · 0 reposts · 0 likes · 3 views

Gaurav Toshniwal @gauravtoshniwal
The memory design point is key. We run 16+ specialized agents for SRE incident investigation and the hardest problem wasn't building more agents. It was making sure each one gets the right context without drowning in irrelevant history. Feed the whole knowledge graph to an LLM and you get hallucination. Built agentic search on top so each agent retrieves only what's relevant to the current incident. Fewer context tokens, better reasoning.
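A minimal sketch of the per-agent retrieval idea described above: instead of feeding the whole knowledge base into context, rank entries against the current incident and keep only the top hits. All names are hypothetical, and naive keyword overlap stands in for a real embedding-based retriever.

```python
# Hypothetical sketch: agentic context retrieval for incident investigation.
# Keyword overlap is a stand-in for real semantic/embedding search.

def score(entry: str, incident: str) -> int:
    """Count how many incident keywords appear in a knowledge-base entry."""
    keywords = set(incident.lower().split())
    return sum(1 for word in entry.lower().split() if word in keywords)

def retrieve_context(knowledge_base: list[str], incident: str, top_k: int = 2) -> list[str]:
    """Return only the top_k most relevant entries instead of the whole KB."""
    ranked = sorted(knowledge_base, key=lambda e: score(e, incident), reverse=True)
    return [e for e in ranked[:top_k] if score(e, incident) > 0]

kb = [
    "postgres replica lag spiked after failover",
    "frontend build pipeline uses node 20",
    "api latency rises when postgres connection pool is exhausted",
]
incident = "api latency high, postgres connection errors"
context = retrieve_context(kb, incident)  # irrelevant frontend entry is dropped
```

The payoff is exactly the tradeoff in the tweet: fewer context tokens per agent, so each one reasons over what actually matters for this incident.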
1 reply · 0 reposts · 0 likes · 51 views

ThatCoderDex @ThatCoderDex
I've been building AI agents for 2 years. Here's what actually matters vs. what's a distraction:
→ Reliability vs Speed: Slow, deterministic agents win in production. Fast, flashy agents make demos but break in real use.
→ Memory Design vs Agent Count: Investing in better context retention outperforms adding more specialized agents every time.
→ Integration APIs vs Custom Languages: The teams shipping real value focus on clean integration, not building new agent programming languages.
Which tradeoff surprised you most? 👇
2 replies · 0 reposts · 5 likes · 111 views

ThatCoderDex @ThatCoderDex
Spent last 3 days debugging why my AI summarizer agent kept making up nonexistent sections in long PDFs. Turns out the context window was silently truncating, and the model would hallucinate the missing parts rather than admit ignorance. Fixed by adding explicit section numbering in the prompt. Anyone else notice LLMs prefer making stuff up over saying "I don't know"?
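A sketch of the "explicit section numbering" mitigation mentioned above (names hypothetical): label every chunk with a numbered marker, then check which markers survive in the final prompt, so silent truncation is caught before the model gets a chance to hallucinate the missing sections.

```python
# Hypothetical sketch: number sections so silent context truncation is detectable.

def number_sections(sections: list[str]) -> str:
    """Prefix each chunk with an explicit [SECTION i/total] marker."""
    total = len(sections)
    return "\n\n".join(f"[SECTION {i}/{total}]\n{s}" for i, s in enumerate(sections, 1))

def surviving_sections(prompt: str, total: int) -> list[int]:
    """Which numbered sections are still present after any truncation."""
    return [i for i in range(1, total + 1) if f"[SECTION {i}/{total}]" in prompt]

sections = ["Intro text", "Methods text", "Results text"]
prompt = number_sections(sections)
truncated = prompt[: len(prompt) // 2]  # simulate a silently clipped context window
```

If `surviving_sections` comes back short of the full list, you know the context was clipped and can re-chunk instead of letting the model invent the tail of the document.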
1 reply · 0 reposts · 3 likes · 71 views

ThatCoderDex @ThatCoderDex
TL;DR: Feature flags are a velocity hack that let you ship continuously without blocking deploys. We cut deployment time from 3 days to 2 hours and could instantly disable broken features instead of full rollbacks.
0 replies · 0 reposts · 0 likes · 38 views

ThatCoderDex @ThatCoderDex
Most developers invest in CI/CD pipelines but skip the best deployment optimization: feature flags. They're wrong. Here's why: Feature flags are the ultimate velocity hack, letting you merge WIP code without blocking deploys or creating long-lived branches. Real teams ship 60% faster with flags. 🧵
4 replies · 0 reposts · 1 like · 92 views

ThatCoderDex @ThatCoderDex
The surprising truth: Feature flags aren't just for production safety - they're the ultimate dev workflow tool. Start small: flag your next risky feature and experience the freedom of shipping daily without fear. Which deployment blocker could you eliminate with a flag this week?
0 replies · 0 reposts · 0 likes · 30 views

ThatCoderDex @ThatCoderDex
Feature flags also saved us during a critical launch when our payment system had a bug. Instead of a full rollback (90+ min), we toggled one flag off in 10 seconds. Users never noticed. We fixed the code, merged it, then re-enabled the flag. Zero downtime, zero panic.
0 replies · 0 reposts · 0 likes · 25 views

ThatCoderDex @ThatCoderDex
At my last company, we cut our deploy-to-production time from 3 days to 2 hours by feature flagging everything. Code got merged continuously, then enabled when ready. We'd merge code Monday, flip the flag Wednesday, and iterate by Friday - impossible in our old branching model.
0 replies · 0 reposts · 1 like · 26 views

ThatCoderDex @ThatCoderDex
Composer 2 in Cursor looks legit interesting for agent builders. They dropped a graph showing frontier-level coding (~61% on CursorBench, strong on Terminal-Bench/SWE-bench too) at literally 1/5–1/10th the cost of Opus 4.6 high or GPT-5.4 high. Median task cost in the $0.50–$1 range while sitting higher on perf. From 2+ yrs shipping agents: if this holds in real prod loops (RAG + multi-step reasoning + retries), cost stops being the killer constraint. More retries, longer horizons, without nuking margins. Big if on zero-shot reliability vs. their tuned flows, but continued pretrain + RL seems to have closed the gap. Anyone swapping Composer 2 into agents yet? Early wins or still too green? #AI #Agents #Coding
Cursor @cursor_ai

Composer 2 is now available in Cursor.

0 replies · 0 reposts · 1 like · 114 views

ThatCoderDex @ThatCoderDex
@tv_koreaX They're still going to lose lol. No one can beat America
0 replies · 0 reposts · 2 likes · 116 views

KOREA TV @tv_koreaX
🚨 Breaking: Iran has hit a U.S. F-35 Lightning II fighter jet.
1.1K replies · 5.7K reposts · 28.2K likes · 3.5M views

ThatCoderDex @ThatCoderDex
AI slop this, engagement bait that. Bunch of crybabies on here.
1 reply · 0 reposts · 4 likes · 108 views

ThatCoderDex @ThatCoderDex
@DeltonThompson3 React-doctor looks like a useful static analysis tool, but it won't catch runtime issues like unexpected re-renders or hook dependency problems that the React DevTools Profiler immediately surfaces. Ever tried using both in your workflow?
0 replies · 0 reposts · 0 likes · 6 views

Delton Thompson @DeltonThompson3
@ThatCoderDex Lint your code, then run github.com/millionco/reac… via shell and fix any issues (or have an agent do it), follow up by inspecting your rendered DOM using React DevTools in Chrome, and you shouldn't have many issues to fix, if any at all.
1 reply · 0 reposts · 1 like · 45 views

ThatCoderDex @ThatCoderDex
No, I don't use Chrome DevTools for debugging React. I use React DevTools Extension instead. React DevTools shows your component tree exactly as React sees it, not just the DOM output. You can inspect props, state, and hooks in real-time. The profiler tab captures render performance data you can't get anywhere else. It shows which components are re-rendering unnecessarily and why. Time-travel debugging lets you jump back through state changes to pinpoint exactly when things broke. Chrome DevTools can't do this. Which debugging tool is your go-to for React apps?
6 replies · 0 reposts · 5 likes · 380 views