Grzegorz Galezowski
@GGalezowski
3.7K posts

coder, OO designer, amateur guitar player and catholic dogmatic theology enthusiast. Author of free TDD book: https://t.co/ugwtQLYyT7

Kraków · Joined June 2012
707 Following · 447 Followers

Grzegorz Galezowski@GGalezowski·
@codeopinion Both. I honestly never liked the "code is a liability" one-sided type of thinking. Remember the time when CD Projekt Red was hacked and the source code was stolen (copied)?
Derek Comartin@codeopinion·
Is code an asset or liability?
Grzegorz Galezowski reposted
JetBrains ReSharper@resharper·
The Release Candidate build for ReSharper 2026.1 has just landed!🎉 This blog post tells you everything you need to know about the upcoming release: blog.jetbrains.com/dotnet/2026/03…
[attached image]
Seb@plainionist·
In tests, clarity matters more than DRY. Duplication is allowed. Abstractions should be the exception.
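A hypothetical sketch of the point above, in Python: each test duplicates its own setup on purpose, so the whole scenario is readable in place instead of hidden behind a shared builder helper. The `Cart` class and test names are illustrative, not from the tweet.

```python
# Illustrative only: duplication in tests, chosen deliberately over DRY.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_total_of_single_item():
    # Arrange: duplicated on purpose -- the whole scenario is visible here.
    cart = Cart()
    cart.add("book", 20)
    assert cart.total() == 20

def test_total_of_two_items():
    # Same duplicated arrange; no shared make_cart() abstraction to chase.
    cart = Cart()
    cart.add("book", 20)
    cart.add("pen", 5)
    assert cart.total() == 25

test_total_of_single_item()
test_total_of_two_items()
```

The duplication costs a few lines but lets each test be read, and fail, on its own.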
Grzegorz Galezowski reposted
Dr Milan Milanović@milan_milanovic·
Someone builds a project management tool with Claude Code over a weekend. Ships it. Tweets "just replaced Jira." The app works. One user, happy path, localhost.

Then two people edit the same record simultaneously, and the data is silently corrupted. They don't know what an optimistic lock is. They never needed to before.

The prototype is maybe 1% of what makes software actually work. The other 99% is what you find after real users show up: race conditions, failed transactions, sessions expiring at the wrong moment, a payment webhook that fires twice and charges someone double. AI didn't cover any of that. It built exactly what you asked for.

And the confidence is the worst part. "Just need to adjust a few things before we go live." The few things you need to adjust are the product. That's like laying a foundation and telling people you basically built the house.

Vibe coding works. For personal tools, throwaway scripts, and prototypes you'll never put in front of paying users, it's genuinely fast and good enough. I use it. But there's a hard ceiling, and it shows up the moment the stakes get real.

Agentic engineering is a different discipline. You're not prompting for code. You're decomposing problems, designing system boundaries, writing specs precise enough that the agent doesn't go sideways. You review everything it builds, because it will make mistakes that only look wrong if you know what correct looks like. You guide it. You catch what it misses. If you don't know what a distributed transaction is, the agent won't save you. It'll generate something broken with complete confidence, and you won't know until production.

The hard part of software was never writing the first 200 lines. It never was.
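For readers who, like the hypothetical weekend shipper, have never needed an optimistic lock: here is a minimal in-memory sketch of the mechanism. Real systems implement the same idea with a version column and a conditional `UPDATE ... WHERE id = ? AND version = ?`; the `Record` class below is purely illustrative.

```python
# Minimal optimistic-locking sketch: a write succeeds only if the record
# still has the version the writer originally read.

class StaleWriteError(Exception):
    pass

class Record:
    def __init__(self, data):
        self.data = data
        self.version = 0

    def read(self):
        # The reader remembers the version it saw.
        return self.data, self.version

    def write(self, new_data, expected_version):
        # Reject the write if someone else updated the record in between.
        if self.version != expected_version:
            raise StaleWriteError("record changed since it was read")
        self.data = new_data
        self.version += 1

record = Record({"title": "Task A"})

# Two clients read the same record concurrently...
_, v1 = record.read()
_, v2 = record.read()

record.write({"title": "Task A (edited by user 1)"}, v1)  # succeeds

try:
    record.write({"title": "Task A (edited by user 2)"}, v2)
    outcome = "silent corruption"  # this is what happens without the check
except StaleWriteError:
    outcome = "conflict detected"  # second writer must re-read and retry
```

Without the version check, the second write would silently overwrite the first, which is exactly the "data is silently corrupted" failure described above.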
Aaron Stannard@Aaronontheweb·
I've been flipping through /r/Entrepreneur today and it looks like nearly 100% of the self-posts in there were written by ChatGPT and no one gives a shit
Grzegorz Galezowski reposted
Julien Couvreur@jcouv·
We merged an early C# 15 preview feature into .NET 11 preview 3: unions. Adds union declarations (`union Pet(Cat, Dog, Bird) { ... }`) and union types (attributed with `[Union]`). They can be treated by pattern matching/switch expressions as a closed set for exhaustiveness.
Grzegorz Galezowski reposted
Marcin Grzejszczak@MGrzejszczak·
#AI can generate microservices in seconds. But it usually forgets timeouts, retries, and circuit breakers. Next week I’ll run a live demo where we generate services with AI and then break them by injecting latency. No slides. Just a terminal. buff.ly/pGTsQx7
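To make the tweet's list concrete, here is a sketch of the kind of resilience wrapper it says generated services tend to omit: bounded retries plus a tiny consecutive-failure circuit breaker. This is not from the announced demo; the class, thresholds, and the `TimeoutError` used to simulate a slow dependency are all illustrative (a real timeout would come from the HTTP client, e.g. a socket or request timeout).

```python
# Illustrative retry + circuit-breaker wrapper around an unreliable call.

class CircuitOpenError(Exception):
    pass

class ResilientCaller:
    def __init__(self, fn, retries=3, failure_threshold=5):
        self.fn = fn
        self.retries = retries
        self.failure_threshold = failure_threshold
        self.failures = 0  # consecutive failures seen by the breaker

    def call(self, *args):
        if self.failures >= self.failure_threshold:
            # Circuit open: fail fast instead of hammering a sick dependency.
            raise CircuitOpenError("too many consecutive failures")
        last_error = None
        for attempt in range(self.retries):
            try:
                result = self.fn(*args)
                self.failures = 0  # a success closes the circuit again
                return result
            except Exception as e:
                last_error = e
                self.failures += 1
                # Real code would back off here, e.g. sleep(2 ** attempt).
        raise last_error

# Simulated dependency that times out twice, then recovers.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("slow dependency")
    return "ok"

caller = ResilientCaller(flaky)
result = caller.call()  # two failed attempts, third succeeds
```

Injecting latency, as the demo promises, is what exposes the absence of exactly this layer: without it, one slow dependency stalls every caller upstream.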
Grzegorz Galezowski reposted
JetBrains ReSharper@resharper·
ReSharper 2026.1 EAP 7 is here! This build introduces the Monitoring tool window, bringing runtime performance monitoring previously available in @JetBrainsRider. Observe key runtime metrics all in one place to better understand how your application behaves as it runs. Available with a dotUltimate subscription. Download the build here: jetbrains.com/resharper/next…
[attached image]
Seb@plainionist·
New week - new book 😉 What are you reading currently?
[attached image]
Grzegorz Galezowski@GGalezowski·
@plainionist Yeah, I like that it's less prescriptive and allows different directions. For example, this is from Alistair Cockburn's Component + Strategy article:
[attached image]
Seb@plainionist·
@GGalezowski Great architecture pattern as well 👍😉
Seb@plainionist·
Clean Architecture is still the best choice for most projects! Agree?
Grzegorz Galezowski@GGalezowski·
@mkristensen How would you optimize performance of something relying heavily on WPF rendering? There is an OSS DAW written (mostly) in dotnet called ReBuzz, and in more complex songs the playback stutters. Running a profiler shows ~85% of CPU time is spent in WPF.
Mads Kristensen@mkristensen·
A super easy way for me to optimize performance of my .NET application using my existing unit tests, the built-in profiler, and Copilot to tie it all together into a single click of a button. Coming to Visual Studio 2026 very soon...
Sukh Sroay@sukh_saroy·
New research just exposed the biggest lie in AI coding benchmarks. LLMs score 84-89% on standard coding tests. On real production code? 25-34%. That's not a gap. That's a different reality.

Here's what happened: researchers built a benchmark from actual open-source repositories: real classes with real dependencies, real type systems, real integration complexity. Then they tested the same models that dominate HumanEval leaderboards. The results were brutal. The models weren't failing because the code was "harder." They were failing because it was *real*.

Synthetic benchmarks test whether a model can write a self-contained function with a clean docstring. Production code requires understanding inheritance hierarchies, framework integrations, and project-specific utilities. Different universe. Same leaderboard score.

But it gets worse. A separate study ran 600,000 debugging experiments across 9 LLMs. They found a bug in a program. The LLM found it too. Then they renamed a variable. Added a comment. Shuffled function order. Changed nothing about the bug itself. The LLM couldn't find the same bug anymore. 78% of the time, cosmetic changes that don't affect program behavior completely broke the model's ability to debug. Function shuffling alone reduced debugging accuracy by 83%. The models aren't reading code. They're pattern-matching against what code *looks like* in their training data.

A third study confirmed this from another angle: when researchers obfuscated real-world code, changing symbols, structure, and semantics while keeping functionality identical, LLM pass rates dropped by up to 62.5%. The researchers call this the "Specialist in Familiarity" problem. LLMs perform well on code they've memorized. The moment you show them something unfamiliar with the same logic, they collapse.

Three papers. Three different methodologies. Same conclusion: the benchmarks we use to evaluate AI coding tools are measuring memorization, not understanding.

If you're shipping code generated by LLMs into production without review, these numbers should concern you. If you're building developer tools, the question isn't "what's your HumanEval score?" It's "what happens when the code doesn't look like the training data?"
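A made-up miniature of the "cosmetic change" idea from the thread: two functions with the exact same off-by-one bug (the last element is skipped), differing only in names, comments, and an added remark. Their behavior is identical on every input, which is what makes the reported drop in debugging accuracy striking.

```python
# Both functions contain the same bug: range(len(xs) - 1) drops the last item.

def total_price(prices):
    total = 0
    for i in range(len(prices) - 1):  # bug: last price never added
        total += prices[i]
    return total

def sum_amounts(amounts):
    # A "cleaned up" cosmetic copy: renamed variables, an extra comment.
    # The bug is exactly the same one.
    acc = 0
    for idx in range(len(amounts) - 1):  # bug: last amount never added
        acc += amounts[idx]
    return acc

data = [10, 20, 30]
# Both return 30 instead of the correct 60 -- behaviorally identical bugs.
assert total_price(data) == sum_amounts(data) == 30
```

A tool that can flag the bug in the first function but not the second is recognizing surface form, not reasoning about the loop bound.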
[attached image]
Grzegorz Galezowski@GGalezowski·
@DanielW_Kiwi But seriously, what is the point of making claims like the one you mentioned? Who exactly is supposed to benefit from this, and how?
Grzegorz Galezowski@GGalezowski·
@plainionist Most people stay procedural anyway. Then they pull in lots of unnecessary libraries like DI containers and MediatRs in the hope it will solve their design issues 😅
Mads Kristensen@mkristensen·
What features or extensions make you jump from Visual Studio to other IDEs and editors to perform certain tasks?
Grzegorz Galezowski reposted
ThePrimeagen@ThePrimeagen·
i cannot believe how much better 5.3 is than 4.6. after some internal testing, results show it's 15.2% better