Andrew Luzin
@dronnix

5.1K posts

Software development, Linux, Distributed systems, etc. Forest, mountains, alpine skiing, cycling, trail riding, etc.

Georgia · Joined October 2009
137 Following · 426 Followers
Andrew Luzin@dronnix·
@samat What's needed is not so much the skill of setting tasks as the skill of working as a maintainer: reviewing changes, deeply understanding their consequences, and keeping the architecture maintainable.
Samat Galimov@samat·
It's fascinating to watch the giants (not) coping with vibe coding: GitHub has lost the last nine in its SLA. I'll note separately that vendors' status pages stopped being trustworthy long ago, so people have built their own, crowd-sourced one. mrshu.github.io/github-statuse…

Amazon was forced to slow its feature rollouts considerably after outages and now requires senior review before releases. arstechnica.com/ai/2026/03/aft… And you'd think that they, of all companies, would have both automated tests and a legion of manual QA engineers! Here is a whole list of such outages, with estimates of how many users each one affected. crackr.dev/vibe-coding-fa…

On the one hand, we need proper processes so that slop doesn't reach production and there are at least a couple of living people who understand what's happening under the hood of a complex system. On the other hand, we are in a temporary transition period in which there are still programmers who can't set tasks (even though they can write code themselves), and something needs to be done about that.

In short, being a CTO is getting interesting again. I plan to discuss this tonight at an online conference. It starts at 18:00; my talk is at 20:00 Moscow time. Free, no strings attached; registration here. See you there! stratoplan-school.com/management/?ut…
Andrew Luzin@dronnix·
@arpit_bhayani Another PoV: you should be an expert in at least one area in order to add value to, or vouch for, an AI-generated solution.
Arpit Bhayani@arpit_bhayani·
I agree. But even specialists are sitting on a goldmine; for both, it depends on the domain. Even for generalists: if they are building only what anyone else can build, they have no moat. And if specialists build expertise in a not-so-relevant domain, that is of no use either. Eventually, the ones who win big have to be at the right place at the right time, doing the right thing, irrespective of whether they are generalists or specialists.
shirish@shiri_shh

Generalists are about to win big. If you understand a little of tech, business, and people, and can connect everything fast, you're sitting on a goldmine right now.

Andrew Luzin@dronnix·
@bunopus @webholt You just need to define the levels. Classically, a Senior can work independently on a task formulated in business terms, while a Middle needs decomposition and help with the technical part.
Evgeny Kot@bunopus·
It depends on the business. Take Google: for a while, L5 (senior) was considered the baseline level, and everything below was up-or-out. But in 2019 it became clear the company didn't need that many seniors, and L4 became the baseline. So "normal" is a very relative thing.
Vlad :: gwer@webholt

A routine reminder that the normal state for most people is to be a middle. If everyone is a senior or a lead, then most likely no one is a senior or a lead; everyone was simply bumped up so that no one would feel slighted.

Andrew Luzin reposted
Cindy Sridharan@copyconstruct·
@zeeg Or, for that matter, pioneering new open-source projects that aren't just another personal productivity tool. Where's the next generation of file systems, databases, orchestrators, device drivers, etc.? Are vibecoders not interested in solving hard problems?
Andrew Luzin reposted
Mitchell Hashimoto@mitchellh·
This is how you get dumber, btw; it was true even before AI. Turn on DnD and put your phone in a drawer. The best option, if you can, is to separate work and personal devices so your work device can't even see personal stuff. This is also partly why Pomodoro was all the rage a decade-plus ago.
𝐑.𝐎.𝐊 👑@r0ktech

POV: you’re a developer in 2026😂

Andrew Luzin reposted
Ayaan 🐧@twtayaan·
DevOps engineers explaining Kubernetes to the team. 😂
Andrew Luzin@dronnix·
@mr_mig_by @shipilev How can it not matter? Automating text work is cheaper than automating work in a mine, which is why the mines will keep their people.
Andrew Luzin@dronnix·
@shipilev @mr_mig_by Capitalism: we automate whatever is cheaper to automate, not wherever the work is hardest.
Aleksey Shipilëv@shipilev·
@mr_mig_by Not a visualization, a straight-up joke: "In Villarriba the average male member is 20 cm long, and in Villabajo it's 10 cm. In Villarriba they asked; in Villabajo they measured."
Marc Brooker@MarcJBrooker·
I think that the future of software development is going to be a specification-driven loop, run by AI agents. This is going to require significant innovation in specification, especially specification of performance properties.
Phuong Le@func25·
Go is the rare modern language that lets you pass around real values that get copied, not just hidden references. In Java, Python, Ruby, and JavaScript, most things act like a reference to an object somewhere else; the language just doesn't make you write the star * or the ampersand &. Go supports both models: plain values, like a struct value, and pointers, like *User. That is why pointers exist in Go. Without pointers, Go could not clearly support both "copy this value" and "share this one thing".

Treat "pointer vs value" as a design choice you make on purpose. In Go, every function call passes arguments by value:
- If the argument type is User, the whole User value is copied into the function.
- If the argument type is *User, the pointer value is copied (just an address), and both sides can reach the same User.

That gives you very practical control:
- When you want a function to be unable to change the caller's struct, accept a value, not a pointer.
- When you need a function or method to change the caller's data, use a pointer.
- When copying would be expensive, pass a pointer. If a struct contains large arrays or many fields, copying it on every call costs CPU time and memory bandwidth.
- When "no value" is a real state, pointers give you nil. A *Config can be nil, meaning not provided or not loaded yet; a plain Config always has some value, even if it is the zero value.

Go also makes pointers safer and simpler than C. There is no pointer arithmetic in normal Go: you cannot do p + 1 unless you step into unsafe. Memory is managed by the runtime and garbage collector, so you are not doing malloc and free in normal code.

Some Go types already behave like references even when you do not write a star:
- A slice like []byte contains a pointer to an array plus a length and a capacity.
- A map and a channel also point to runtime-managed data.

That means passing a []byte "by value" still shares the same underlying bytes. If a function takes []byte and writes to it, the caller sees the changes. If you want read-only behavior, you often need to copy() the slice contents yourself.

A simple day-to-day rule: start with value types for structs, then switch to pointers only when one of these is true:
1. the callee must modify the caller's struct,
2. the struct is large enough that copying shows up in profiling,
3. nil is meaningful,
4. shared identity is required.

This keeps most code easy to reason about, and the few places that can mutate or share state are marked by *T right in the type signature.
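The value-copy, pointer-sharing, and slice-header rules above can be sketched in a few lines of Go (the User type and helper names here are hypothetical, made up for illustration):

```go
package main

import "fmt"

// User is a small struct used to contrast value vs pointer semantics.
type User struct {
	Name string
}

// renameValue receives a copy of the caller's User; mutating it here
// cannot be observed by the caller.
func renameValue(u User) {
	u.Name = "changed-in-copy"
}

// renamePointer receives an address; writes through it are visible to
// the caller, and nil is a possible (meaningful) argument.
func renamePointer(u *User) {
	if u != nil {
		u.Name = "shared"
	}
}

// upcaseFirst shows that a []byte passed "by value" still shares its
// backing array: the caller sees this write.
func upcaseFirst(b []byte) {
	if len(b) > 0 && b[0] >= 'a' && b[0] <= 'z' {
		b[0] -= 'a' - 'A'
	}
}

func main() {
	u := User{Name: "original"}

	renameValue(u)
	fmt.Println(u.Name) // "original": only the copy changed

	renamePointer(&u)
	fmt.Println(u.Name) // "shared": the callee wrote through the pointer

	b := []byte("gopher")
	upcaseFirst(b)
	fmt.Println(string(b)) // "Gopher": header copied, bytes shared
}
```

The value parameter protects the caller's struct, the pointer exposes it, and the slice shares its backing array even though the slice header itself was passed by value.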
Phuong Le@func25·
"sync.Pool is an object pooling pattern to optimize memory." No, that's not correct: sync.Pool is a CPU optimization. The goal of sync.Pool is to cache temporary, already-allocated objects for reuse, which reduces pressure on both the memory allocator and the garbage collector, since those operations burn CPU:
- Heap allocation: the allocator uses CPU to find space for the object, update its metadata, possibly zero the memory, etc.
- GC scanning, marking, and sweeping: the GC does real work over memory and metadata.

So it helps optimize CPU time. But it does not reduce "CPU time" in general; it reduces CPU time in allocation-heavy code paths. Before optimizing, answer this question: "Does this code path spend a meaningful amount of CPU time on allocations?"

Another hidden problem: sync.Pool only saves CPU when you actually reuse objects. Go is allowed to throw away pool items after a while (see how it works: youtube.com/watch?v=fwHok9…). If your program does not Put/Get from the pool frequently enough, the pool may be empty, and then you allocate anyway.

So does it reduce memory after all? 50/50. sync.Pool can actually increase memory usage, because anything you Put into it is still strongly referenced by the pool until the runtime decides to drop it. Pooled objects count as live heap while they sit there. If your goal is "use less RAM", the right way to decide is to check both the allocation rate and resident/heap behavior under your workload, for example via runtime.MemStats and heap profiles with pprof. It is common to see sync.Pool reduce allocations a lot while leaving RSS about the same, or even higher.
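A minimal sketch of the reuse pattern described above, assuming a hypothetical render helper that formats into pooled *bytes.Buffer values:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool caches temporary *bytes.Buffer values between uses, cutting
// allocator and GC work on allocation-heavy paths. The runtime may drop
// pooled items at any time, so Get must always handle a fresh object,
// which is what New provides.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

// render formats a greeting into a pooled buffer and returns the string.
func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset()      // clear contents before returning to the pool
		bufPool.Put(buf) // the pool strongly references buf again
	}()
	fmt.Fprintf(buf, "hello, %s", name)
	return buf.String()
}

func main() {
	fmt.Println(render("gopher")) // prints "hello, gopher"
}
```

Note that Put re-roots the buffer in the pool, so it stays live heap until the runtime drops it — exactly why a pool can lower the allocation rate while leaving RSS the same or higher.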
Andrew Luzin reposted
Aleksey Shipilëv@shipilev·
Like any great multiplier, AI tools magnify both good and bad outcomes. If you measure productivity by an easily gamed and/or misleading metric, you will get a ten-foot-high wave of slop. Used irresponsibly, you are running a trial version of a paperclip maximizer.
Andrew Luzin@dronnix·
@dsp_ With --dry-run=server it's pretty useful!
David Soria Parra@dsp_·
I gave Claude access to my k8s cluster. Grabbing popcorn and waiting to see what happens.
Andrew Luzin reposted
Phuong Le@func25·
mmap in Go is not always faster I/O, despite what most people think. It performs blocking disk access that the runtime CAN'T see, and it can stall your entire application/process.

Go runs many goroutines on a small number of OS threads, with parallelism bounded by GOMAXPROCS. When a goroutine does a known blocking operation like read() or pread(), the runtime knows it is entering a syscall: it marks that thread as blocked and can start another OS thread so other goroutines keep running.

With mmap(), it's a different story. mmap maps a file directly into your process's memory, so you read or write the file by accessing it like a byte slice instead of calling read() or write(). The OS loads pages from disk on demand via page faults whenever you touch parts that aren't already in RAM. The one (big) problem is this: a page fault is invisible to Go.

What does this mean? A page fault turns a simple memory access into long blocking I/O, but Go treats it like CPU work. Go does not mark the goroutine as being in a blocking syscall, because there is no syscall entry point involved:
- mmap is a syscall when you create the mapping; that part is visible to Go and is fine.
- What's invisible is reading or writing bytes through the already-mapped memory. That access is just a normal CPU load or store instruction.

So the OS thread can block in the kernel while still "owning" its processor, and the Go scheduler cannot do the usual handoff it does for known blocking syscalls (e.g., read(), pread()). Net effect: hidden stalls and reduced parallelism, especially if enough running goroutines hit major faults at the same time (or if GOMAXPROCS=1).