
4K posts


@BassemDy

SWE @GitHub - De-influencer - I post about software engineering, management, career or anything I'm curious about | https://t.co/8ae8VVm93g

YouTube, Podcast, Discord → Joined May 2009
562 Following · 6.1K Followers
Pinned Tweet
@BassemDy·
I have started preparing a hybrid (pre-recorded + live sessions) course on practical system design that will be shipped on Maven. The course is designed for engineers with at least 3 years of experience, & these are the expected learning outcomes:

TL;DR: I want to teach the bulk of what staff+ engineers learn in a decade.

⏺ Design fault-tolerant distributed systems & learn the tradeoffs of consistency and handling high-throughput workloads
⏺ Develop zero-downtime migration strategies for large-scale distributed systems
⏺ Learn proven design patterns applied in designing systems at big tech companies
⏺ Protect your systems with abuse-prevention patterns
⏺ Learn how to observe & operate distributed systems in production
⏺ Level up from Junior+ to Senior, and from Senior to Staff / Architect
⏺ Bonus: conquer the system design interview with in-depth knowledge & practical insights

You can now pre-sign up for the course (link in the first comment, because big boss Elon wants it that way 🙄)
[2 images attached]
1 reply · 3 reposts · 28 likes · 5K views
Ahmed El Gabri
Ahmed El Gabri@ahmedelgabri·
I have been using git worktrees for years, before agentic coding became trendy. I have been getting many questions from friends and colleagues on how I work with them.
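For readers unfamiliar with the feature the post above mentions: a git worktree is an extra working directory backed by the same `.git` store, so multiple branches can stay checked out side by side. A minimal sketch in a throwaway repo (paths and branch names here are illustrative, not the author's actual setup):

```shell
# Minimal git worktree demo in a throwaway repo.
set -e
cd "$(mktemp -d)"
git init --quiet demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit --quiet --allow-empty -m "initial commit"

# Create a sibling checkout for a new feature branch without
# disturbing the current working directory.
git worktree add -b feature ../feature-wt

git worktree list   # shows the main checkout and ../feature-wt
```

An agent (or a second terminal) can then work inside `../feature-wt` independently; `git worktree remove ../feature-wt` cleans it up afterwards.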
1 reply · 2 reposts · 11 likes · 629 views
@BassemDy·
@wesbos I never thought we’d have to worry about scaling that damned contributions graph…
0 replies · 0 reposts · 1 like · 626 views
@BassemDy·
@kitarp29 We listen and we do not judge 😏
0 replies · 0 reposts · 0 likes · 40 views
@BassemDy·
AI.
[image attached]
3 replies · 0 reposts · 5 likes · 6.1K views
@BassemDy·
@moaz_eldfrawy 😄 yeah I remember that one, it was pretty dumb..
0 replies · 0 reposts · 0 likes · 43 views
@BassemDy·
So… who's going to package @openclaw in a Mac mini and sell it as the ultimate personal assistant product?
1 reply · 0 reposts · 4 likes · 358 views
@BassemDy·
Clawdbot is fascinating! Took me a few hours to get it set up on an isolated VM on an untrusted VLAN, with a shared folder from my NAS for memory, all running in my small homelab. It even set up Whisper to transcribe my voice notes and respond to my voice prompts! If this thing is capable of running Nvidia’s personaplex for full-duplex speech-to-speech conversations? Game over!
1 reply · 0 reposts · 6 likes · 1.4K views
link-
link-@BassemDy·
What my LLM says:

Publication Assessment
Journal: Frontiers in Nutrition
• Impact Factor: ~4.0 (mid-tier)
• Credibility: Mixed. Frontiers is a legitimate peer-reviewed publisher, but has faced criticism for sometimes lax review standards. It's on the "predatory-adjacent" watchlist for some academics, though not outright predatory. The journal is indexed in PubMed/MEDLINE, which is a positive sign.
• Open Access: Yes (author-pays model, which can incentivize volume over quality)
Verdict: Acceptable but not top-tier. I'd trust it more than a no-name journal, less than JAMA/Lancet/NEJM.

Author Assessment
Authors: Junhui Jiang, Hu Zhao, Jiong Chen, Junhao Du, Weixiang Ni, Baohua Zheng, Junhong Wu, and Chunhong Xiao
Affiliations:
• Fuzong Clinical Medical College of Fujian Medical University, Fuzhou, China
• Department of General Surgery, The 900th Hospital of Joint Logistic Support Force, PLA, Fuzhou, China
Assessment:
• These are Chinese military/academic hospital researchers
• No major red flags, but not internationally recognized names in nutrition or oncology
• Corresponding author (Chunhong Xiao) uses a personal email domain — minor concern
• Limited ability to verify their publication track record without database access
Verdict: Credible academics, but not leading experts in the field.

Study Design & Findings
What they did:
• Analyzed NHANES data (2007-2018) — 25,879 U.S. adults
• Estimated dietary creatine intake from meat/fish consumption
• Looked for association with self-reported cancer diagnosis
Key Findings:
• Each standard deviation increase in creatine intake → 5% lower cancer risk
• Strongest effect in males (7% reduction) and overweight individuals (8% reduction)
• Older adults (66-80) showed stronger protective association
• Underweight individuals showed OPPOSITE effect — higher creatine = higher cancer risk

Reliability Assessment
Strengths:
• Large sample size (25,879)
• Used NHANES — a well-respected, nationally representative dataset
• Adjusted for many confounders (smoking, alcohol, BMI, etc.)
• Transparent methodology
Weaknesses:
1. Cross-sectional, not causal — They can only show association, not that creatine prevents cancer
2. Self-reported cancer — Subject to recall bias
3. Creatine intake was estimated, not measured — they assumed 0.11g/oz for all meat
4. Reverse causation possible — Sick people may eat less meat
5. Confounding by healthy diet — People who eat meat in "normal" amounts may have better overall diets
6. Effect size is tiny — 5% risk reduction is barely clinically meaningful
7. Underweight paradox — The opposite finding in underweight people suggests the relationship is complex/confounded

Bottom Line
Is this credible? Moderately. It's real research, properly conducted, but:
• It's observational, not experimental
• The effect is small
• The journal is decent but not elite
• Don't change your diet based on this alone
What it actually shows: People who eat moderate amounts of meat may have slightly lower cancer prevalence. That's it. It doesn't prove creatine prevents cancer.
0 replies · 0 reposts · 0 likes · 486 views
@BassemDy·
Why is this regurgitation of what's already open-sourced in abundance so impressive? I would be more impressed if this were solving an entirely new problem by remixing existing knowledge. Recreating a utility using what it was trained on seems like a massive waste of cycles.
Michael Truell@mntruell

Watch Cursor build a 3M+ line browser in a week

0 replies · 0 reposts · 1 like · 401 views
@BassemDy·
I’m not sure the usefulness of agents drops as your experience with a tech stack increases. If anything, today, the usefulness grows with your experience. At least anecdotally speaking: I’m delegating entire problems to agents and they’re solving them at least as well as I would, but Nx faster.
1 reply · 0 reposts · 1 like · 20 views
Moaz El-Defrawy
Moaz El-Defrawy@moaz_eldfrawy·
@BassemDy My experience so far: For easy and predictable problems, I can see AI speeding up things during the coding phase. However, the better you are at your stack and typing, the less useful AI is. For hard problems, the bottleneck is figuring out what to do, not coding!
1 reply · 0 reposts · 0 likes · 37 views
@BassemDy·
Writing code has never, ever been the bottleneck. If I unleashed 10 agents running in parallel 24/7 solving well-defined problems, AND deploying changes, handling operational problems, troubleshooting alerts, and mitigating incidents, I would still have a backlog that would take years to get through and to define well enough for an agent to take a shot at.
1 reply · 0 reposts · 5 likes · 293 views
@BassemDy·
@simonw I was telling a colleague today that even with our (almost) unlimited access “AI” capability, our backlog is as massive as ever, and growing
0 replies · 0 reposts · 0 likes · 179 views
Simon Willison
Simon Willison@simonw·
This is really good, and worth reading in full - it's the best articulation I've seen yet of the idea that driving down costs involved in producing code will increase demand for software and let us take on much more ambitious projects
Addy Osmani@addyosmani

Every time we've made it easier to write software, we've ended up writing exponentially more of it. When high-level languages replaced assembly, programmers didn't write less code - they wrote orders of magnitude more, tackling problems that would have been economically impossible before. When frameworks abstracted away the plumbing, we didn't reduce our output - we built more ambitious applications. When cloud platforms eliminated infrastructure management, we didn't scale back - we spun up services for use cases that never would have justified a server room.

@levie recently articulated why this pattern is about to repeat itself at a scale we haven't seen before, using Jevons Paradox as the frame. The argument resonates because it's playing out in real time in our developer tools. The initial question everyone asks is "will this replace developers?" but just watch what actually happens. Teams that adopt these tools don't always shrink their engineering headcount - they expand their product surface area. The three-person startup that could only maintain one product now maintains four. The enterprise team that could only experiment with two approaches now tries seven. The constraint being removed isn't competence; it's the activation energy required to start something new.

Think about that internal tool you've been putting off because "it would take someone two weeks and we can't spare anyone". Now it takes three hours. That refactoring you've been deferring because the risk/reward math didn't work? The math just changed.

This matters because software engineers are uniquely positioned to understand what's coming. We've seen this movie before, just in smaller domains. Every abstraction layer - from assembly to C to Python to frameworks to low-code - followed the same pattern. Each one was supposed to mean we'd need fewer developers. Each one instead enabled us to build more software.

Here's the part that deserves more attention, imo: the barrier being lowered isn't just about writing code faster. It's about the types of problems that become economically viable to solve with software. Think about all the internal tools that don't exist at your company. Not because no one thought of them, but because the ROI calculation never cleared the bar. The custom dashboard that would make one team 10% more efficient but would take a week to build. The data pipeline that would unlock insights but requires specialized knowledge. The integration that would smooth a workflow but touches three different systems. These aren't failing the cost-benefit analysis because the benefit is low - they're failing because the cost is high. Lower that cost by "10x", and suddenly you have an explosion of viable projects.

This is exactly what's happening with AI-assisted development, and it's going to be more dramatic than previous transitions because we're making previously "impossible" work possible.

The second-order effects get really interesting when you consider that every new tool creates demand for more tools. When we made it easier to build web applications, we didn't just get more web applications - we got an entire ecosystem of monitoring tools, deployment platforms, debugging tools, and testing frameworks. Each of these spawned their own ecosystems. The compounding effect is nonlinear.

Now apply this logic to every domain where we're lowering the barrier to entry. Every new capability unlocked creates demand for supporting capabilities. Every workflow that becomes tractable creates demand for adjacent workflows. The surface area of what's economically viable expands in all directions.

For engineers specifically, this changes the calculus of what we choose to work on. Right now, we're trained to be incredibly selective about what we build because our time is the scarce resource. But when the cost of building drops dramatically, the limiting factor becomes imagination, "taste", and judgment, not implementation capacity. The skill shifts from "what can I build given my constraints?" to "what should we build now that many of those constraints have evaporated?"

The meta-point here is that we keep making the same prediction error. Every time we make something more efficient, we predict it will mean less of that thing. But efficiency improvements don't reduce demand - they reveal latent demand that was previously uneconomic to address. Coal. Computing. Cloud infrastructure. And now, knowledge work.

The pattern is so consistent that the burden of proof should shift. Instead of asking "will AI agents reduce the need for human knowledge workers?" we should be asking "what orders-of-magnitude increase in knowledge work output are we about to see?"

For software engineers, it's the same transition we've navigated successfully several times already. The developers who thrived weren't the ones who resisted higher-level abstractions; they were the ones who used those abstractions to build more ambitious systems. The same logic applies now, just at a larger scale.

The real question is whether we're prepared for a world where the bottleneck shifts from "can we build this?" to "should we build this?" That's a fundamentally different problem space, and it requires fundamentally different skills.

We're about to find out what happens when the cost of knowledge work drops by an order of magnitude. History suggests we (perhaps) won't do less work - we'll discover we've been massively under-investing in knowledge work, because it was too expensive to do all the things that were actually worth doing.

The paradox isn't that efficiency creates abundance. The paradox is that we keep being surprised by it.

48 replies · 117 reposts · 1.4K likes · 175.7K views
ℏεsam
ℏεsam@Hesamation·
constant refactoring is not a sign of perfection, it’s a sign of weak engineering.
4 replies · 4 reposts · 43 likes · 3.8K views
@BassemDy·
@Hesamation Oh boy… the refactoring cult is gonna chop me to pieces now
1 reply · 0 reposts · 1 like · 19 views