Fawad H Syed

1.3K posts


@fawadhsdev

Analyst & Software Engineer. Designing and building AI and analytics systems to support reliable decision-making.

Geneva · Joined January 2026
1.4K Following · 970 Followers
Pinned tweet
Fawad H Syed@fawadhsdev·
💻 Analyst & Software Engineer

Interested in Artificial Intelligence, analytics, and building useful technology.

Follow me for thoughts on:
• Artificial Intelligence
• Data and analytics
• Software development
• Technology trends

🤝 Tech professionals — let us connect. I follow back.

Geneva, Switzerland 🇨🇭
🔗 fawadhs.dev

#AI #Data #Technology
Fawad H Syed@fawadhsdev·
@imjcmartin @0xlelouch_ Agreed, you can fix these in GraphQL, but not for free. You need persisted queries, dataloaders, and cost limits, all extra backend work. REST already gives you simple caching and scaling via HTTP out of the box. So it comes down to complexity, which is why many teams use both together.
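The dataloader point above can be sketched as a minimal batcher. This is a hypothetical `TinyLoader`, not the real `dataloader` package API: it collects all keys requested during the current tick and resolves them with one batch call instead of N separate fetches.

```typescript
// Minimal sketch of the batching idea behind dataloaders (hypothetical,
// not the `dataloader` package): keys requested in the same tick are
// queued, then resolved together by a single batch function call.
type BatchFn<K, V> = (keys: K[]) => Promise<V[]>;

class TinyLoader<K, V> {
  private queue: { key: K; resolve: (v: V) => void }[] = [];
  private scheduled = false;

  constructor(private batchFn: BatchFn<K, V>) {}

  load(key: K): Promise<V> {
    return new Promise((resolve) => {
      this.queue.push({ key, resolve });
      if (!this.scheduled) {
        this.scheduled = true;
        // Flush after the current synchronous work has enqueued all keys.
        queueMicrotask(() => this.flush());
      }
    });
  }

  private async flush() {
    const batch = this.queue;
    this.queue = [];
    this.scheduled = false;
    // One round trip for the whole batch; results map back by index.
    const values = await this.batchFn(batch.map((e) => e.key));
    batch.forEach((e, i) => e.resolve(values[i]));
  }
}
```

In a GraphQL resolver tree, every field resolver calls `load(id)` independently, and the batcher turns those into one query per request tick, which is the N+1 fix the thread is discussing.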
JC 🧧@imjcmartin·
@fawadhsdev @0xlelouch_ HTTP caching - persisted documents over GET. Query cost control - it's literally so easy, and you need it with REST too. Batching - what GraphQL backend doesn't use dataloaders? And do you never need to do batch requests with REST backends?
Abhishek Singh@0xlelouch_·
If GraphQL lets clients request exactly what they need, why not replace all REST APIs with GraphQL?
Fawad H Syed@fawadhsdev·
@DavidKPiano Clean solution. Versioning and migrations are first-class, so state changes are predictable and recoverable. I like that migrations are tied to the store, not scattered. Makes long-term state evolution much easier to manage in real apps.
David K 🎹@DavidKPiano·
New in @xstate/store: persist() extension One line to persist store context to localStorage (or any async adapter like AsyncStorage): .with(persist({ name: 'my-store' }))
Fawad H Syed@fawadhsdev·
Only thing I would watch is performance at scale — Haversine in selectRaw will kill indexes, so for large tables this becomes slow quickly. Usually better to pre-filter with bounding box first, then apply exact distance. Also worth checking if DB supports spatial indexes (PostGIS / MySQL spatial) to avoid full scans.
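The bounding-box pre-filter mentioned above can be sketched as follows (TypeScript; the function names are illustrative). The box gives the database a range predicate that plain B-tree indexes can use, and the exact Haversine distance is then applied only to the rows that survive.

```typescript
// Sketch: bounding-box pre-filter before an exact Haversine check.
// The box is an approximation (it widens near the poles), but it lets
// `WHERE lat BETWEEN ... AND lng BETWEEN ...` hit ordinary indexes.
const EARTH_RADIUS_KM = 6371;

function boundingBox(lat: number, lng: number, radiusKm: number) {
  // Degrees of latitude per km is roughly constant; longitude shrinks
  // with cos(latitude).
  const dLat = (radiusKm / EARTH_RADIUS_KM) * (180 / Math.PI);
  const dLng = dLat / Math.cos((lat * Math.PI) / 180);
  return {
    minLat: lat - dLat,
    maxLat: lat + dLat,
    minLng: lng - dLng,
    maxLng: lng + dLng,
  };
}

function haversineKm(lat1: number, lng1: number, lat2: number, lng2: number): number {
  const rad = (d: number) => (d * Math.PI) / 180;
  const a =
    Math.sin(rad(lat2 - lat1) / 2) ** 2 +
    Math.cos(rad(lat1)) * Math.cos(rad(lat2)) * Math.sin(rad(lng2 - lng1) / 2) ** 2;
  return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
}
```

The query then becomes: filter by `lat BETWEEN minLat AND maxLat AND lng BETWEEN minLng AND maxLng` (index scan), and only compute the exact distance for that small candidate set. Spatial indexes (PostGIS, MySQL spatial) do this natively and are the better choice at scale.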
Povilas Korop | Laravel Courses Creator & Youtuber
Laravel tip. Location-based filtering is a common requirement: - “find nearby” - “within X km” - “order by distance” Eloquent scope helps avoid repeating Haversine queries. You may put it into a Trait to use it in many models. Or maybe even a Package to use in many projects.
Fawad H Syed@fawadhsdev·
Quick breakdown of Qwen 3.5 sizes and where each fits:

0.8B — runs on CPU, good for simple tasks, formatting, small helpers.
2B — light automation, basic tool use, simple agents with guidance.
4B — practical tier. Handles structured tasks, basic coding, stable outputs.
9B — strong general use. Good for coding, agents, multi-step tasks with some supervision.
A3B — faster and efficient, better reasoning for production setups where latency matters.
27B — high capability. Handles complex logic, multi-file code, tools, stays on track.

So smaller models are useful for focused tasks. Bigger models handle broader and more complex work.
Fawad H Syed@fawadhsdev·
Vite+ is not really a new framework. It is more like combining everything into one toolchain: dev server, build, test, lint, even runtime and package management in one place. So instead of using many tools separately, you run one command and it handles the full workflow. It feels like moving from "tooling" to a more integrated system, but it is still flexible and not locked to one framework (viteplus.dev).
Luke Parker@LukeParkerDev·
I don’t know what Vite+ is and I’m too scared to ask
Fawad H Syed@fawadhsdev·
I agree with this direction. Tools will keep getting better, and we are already seeing big productivity gains. Over time they will make fewer mistakes as well. But the outcome still depends on how people use them: with proper review and a good workflow, quality will improve. Without that, it can still go wrong.
Robot Eevee@rw_eevee·
@AdamRackis In the next few years software quality will rise exponentially as the models and harnesses improve past the human baseline.
Adam Rackis@AdamRackis·
There's so many small details AI gets horribly wrong Median software quality is gonna drop so hard next few years
Fawad H Syed@fawadhsdev·
@crutchcorn This discussion feels old now. useEffect was never the problem. People were just using it for everything. React already made it clear it is for side effects, not general logic. Now with better patterns, it works fine when used in the right place.
Corbin Crutchley@crutchcorn·
useEffect isn't a bad or poorly designed API; it's a primitive on which to build your own APIs on top of. The educational and tooling corrections towards avoiding usage is broadly a net positive, but the cultural misinformation that it's a "bad design" is a net negative.
Fawad H Syed@fawadhsdev·
@AdamRackis Code looks fine on the surface, but the logic is messy and duplicated. Anyone reviewing this properly would question why there are two CASE blocks and try to simplify it. This is what happens when code is generated without fully understanding the problem or the data.
Adam Rackis@AdamRackis·
Stop calling AI an over-eager junior dev who types fast. No junior dev would ever do anything this fucking stupid
Fawad H Syed@fawadhsdev·
@ChShersh Manual malloc/free only works if every path is handled perfectly, which was rarely the case. That is why leaks and crashes were so common. RAII and smart pointers make ownership explicit and clean up automatically.
Dmitrii Kovanikov@ChShersh·
I have so much gratitude to people who managed every single byte with manual malloc and free. It already feels difficult to remember how much effort it really took without smart pointers. Thank you for getting us to this point.
Fawad H Syed@fawadhsdev·
Good advice, but it is a bit too absolute. Keepalive helps because TLS handshake and TCP connection setup are expensive, but it only works if your client is actually reusing connections correctly and your upstream is not constantly rotating or closing them. In Node.js, the default HTTP agents are often not tuned, so many teams assume reuse is happening when it is not.

This is also solving a symptom rather than the root cause. If you move to HTTP/2 or HTTP/3, you remove much of this overhead entirely instead of trying to optimise around repeated handshakes. At higher load, poorly managed keepalive can also introduce issues such as stale sockets and uneven load distribution across backends.

That said, if you are not using connection reuse at all, you are definitely paying a noticeable latency penalty for every external call.
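The "agents are often not tuned" point can be made concrete with a small sketch. This configures an explicit keepalive agent in Node; the numeric values are illustrative assumptions, not recommendations, and should be matched to your upstream's idle-timeout behaviour.

```typescript
// Sketch: explicit connection reuse in Node instead of relying on
// default agent behaviour. All limits below are example values.
import https from "node:https";

const keepAliveAgent = new https.Agent({
  keepAlive: true,    // reuse sockets across requests instead of
                      // paying TCP + TLS setup on every call
  maxSockets: 50,     // cap concurrent connections per host
  maxFreeSockets: 10, // idle sockets kept warm for reuse
  timeout: 30_000,    // drop sockets idle longer than this; keep it
                      // below the server's idle timeout to avoid
                      // writing into stale, half-closed sockets
});

// Pass the agent per request (or install it as the global agent):
// https.get("https://api.example.com/data", { agent: keepAliveAgent }, res => { ... });
```

Most HTTP client libraries accept an agent like this; the key is verifying reuse actually happens (e.g. by watching handshake latency or socket counts) rather than assuming it.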
Daniel Lockyer@DanielLockyer·
You should probably enable TLS connection re-use/keepalive in your applications Here's a heatmap of tls.connect latency in a production Node.js app P50 is 400ms, P99 up at 3.9s Fix this and all your external API requests become instantly faster
Eric@Ex0byt·
@fawadhsdev Nah, it was just unoptimized python copies. It can go a bit further. The current gap to ceiling on my tiny GPU stands at 1.3×, and the python overhead is now fairly tight and optimized in the last few hours. Next: moving on to Kimi-k2.5 INT4. Wish me luck...
Eric@Ex0byt·
Exciting Experiment Update: We ran StepFun_ai's Step-3.5-Flash (197B MoE) on 6.29 GB of GPU memory!

Flat. Zero growth. Same footprint at token 1 as at token 100. The model's weights are ~105 GB INT4 (394 GB original bf16!). We're running it on 6.29 GB — 1/16th the weight footprint, flat across every token.

How:
- Separated expert from non-expert: the skeleton (6.1 GB) lives permanently on GPU
- 66.8 MB staging buffer — 8 expert slots, overwritten every layer
- 12,096 unique experts (36,288 weight matrices) stay off-GPU until the router selects them
- Router picks. DMA fires. Buffer overwrites. Nothing accumulates.

The invariant held across every token:
- GPU after token 1: 6,286 MB
- GPU after token 100: 6,286 MB
- Delta: 0.0 MB

Correctness: 3/3 PASS — reasoning, religion, coding.
Ceiling: 15.6 tok/s (on my single-GPU hardware).

The architecture is model-agnostic. Any MoE. Any size!

Shoutout to my dude 0xSero. We've been trading notes all week. He's got Kimi K2.5 running across 8×3090s! While we took different journeys on different hardware, we share the same obsession. Amazing collab. More soon.
Fawad H Syed@fawadhsdev·
If you are building a SaaS product or a website and need help setting up Stripe, I can support you. I handle payments, subscriptions, webhooks, and the full integration flow in a clean, reliable way so everything works properly from the start. fiverr.com/s/381w6qL
Fawad H Syed@fawadhsdev·
@ipwanciu Removing ? changes the base type everywhere. Required<T> keeps the original flexible type, but enforces strictness only where needed. It’s about control at boundaries, not making everything strict by default.
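The boundary idea above in a small sketch (type and function names are illustrative): the shared type stays optional so drafts and partial data remain valid, while `Required<T>` makes completeness a compile-time requirement exactly where the data must be full.

```typescript
// Sketch: keep the base type flexible, enforce strictness only at the
// boundary that actually needs complete data.
interface Profile {
  name?: string;
  email?: string;
}

// Upstream code can legitimately hold partial data...
const draft: Profile = { name: "Ada" };

// ...but this boundary requires every field, checked by the compiler.
function render(profile: Required<Profile>): string {
  return `${profile.name} <${profile.email}>`;
}

// render(draft);  // compile error: 'email' may be undefined

const full: Required<Profile> = { name: "Ada", email: "ada@example.com" };
```

So instead of sprinkling `if (data)` checks through the UI, the partial-to-complete conversion happens once, and everything past that boundary is guaranteed complete by the type system.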
IP@ipwanciu·
Sometimes `Partial<T>` can be risky. ‼️ It requires you to add `if (data)` everywhere. If the data is required for the component, use `Required<T>`. 👇 Fix the data at the start, don't fix it everywhere in the UI.
Fawad H Syed@fawadhsdev·
Nice pattern. This is essentially hand-rolled sum types in C with a bit of discipline layered on top. You get a cleaner API and better readability, but it’s still fundamentally trust-based. The compiler won’t stop you from reading the wrong union field or missing a case, so the safety story is nowhere near Rust. Macros help reduce the friction, but they don’t give you real guarantees, just nicer syntax. Still, for low-level systems or game data like this, it’s a solid pragmatic trade-off.
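For contrast with the trust-based C version, here is the same tagged-union pattern in a language where the compiler does the checking (TypeScript discriminated unions, standing in for Rust enums; the `Shape` type is illustrative). Reading the wrong variant's field or forgetting a case is a compile error rather than undefined behaviour.

```typescript
// Contrast sketch: a compiler-checked tagged union. The `kind` tag
// narrows the type, so each branch can only touch its own fields.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "rect":
      return s.width * s.height;
    default: {
      // Exhaustiveness check: if a new variant is added to Shape and
      // not handled above, this assignment fails to compile.
      const unreachable: never = s;
      return unreachable;
    }
  }
}
```

This is exactly the guarantee the C macro approach can approximate but not enforce: the discipline lives in the type checker instead of the reviewer.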
Valentin Ignatev@valigo·
Let's compare tagged unions in Rust and in C, and then steal some of that type safety from Rust with a bit of a compiler and macro magic so that C can have nice things too!
Fawad H Syed@fawadhsdev·
@joaquintdig @Boost_Libraries Nice, that benchmark is much more realistic. Clear takeaway: flat maps win on lookups, but trade-offs matter. It’s workload-dependent.
Boost C++ | Open Source Libraries
std::unordered_map was designed before modern CPU cache architecture mattered. If you're using it in high-throughput code today, you're likely hitting a performance wall. And you don't have to 🧵👇
Fawad H Syed@fawadhsdev·
@joaquintdig @Boost_Libraries Good clarifications, that helps. Post-mixing point is fair — though in practice many workloads still see partial locality (IDs, counters, etc.), so worth stressing that nuance. It would be interesting to see the same test with heavier values or erase-heavy patterns.
Joaquín López Muñoz@joaquintdig·
@fawadhsdev @Boost_Libraries Hi Fawad, author here: * Sequential ints don’t translate to close insertions in unordered_flat_map (look for post-mixing @ docs) * The test includes 50% unsuccessful lookups
Fawad H Syed@fawadhsdev·
@iamakulov Interesting find. This explains some of the slow memory creep people see in long-running sessions. Pages router caching without eviction was always a bit opaque — good to see it surfaced clearly. Curious how this behaves under heavy navigation patterns.