S V

101 posts

@open_parens

Love all science, particularly CS, Math, SWE. Previously: SWE @Google NYC, @IITKgp. Opinions own. Not evil.

Joined March 2014
278 Following · 24 Followers
S V
S V@open_parens·
@AresZell @justinskycak Assuming you are a beginner, I'd recommend Apostol vol 2 for theory and Herstein's Primer for practice. Computing linear regression coefficients is a good first aim because it requires good theoretical understanding and ease of manipulating matrices.
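The "first aim" suggested above can be sketched in plain Python: fitting y = a + b·x by least squares via the closed-form slope/intercept formulas. The data points here are hypothetical, made up for illustration.

```python
# Least-squares fit of y = a + b*x, in plain Python.
# Hypothetical data, roughly following y = 2 + 3x.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.9, 8.1, 10.9, 14.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope b = cov(x, y) / var(x); intercept a = mean_y - b * mean_x.
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

print(round(a, 2), round(b, 2))  # 2.02 3.0
```

The same computation in matrix form (solving the normal equations for a design matrix with an intercept column) is the exercise in "manipulating matrices" the tweet has in mind.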
Justin Skycak
Justin Skycak@justinskycak·
A common failure mode in math teaching: Throwing students into abstract, proof-based courses before they've actually mastered the mechanical "grunt work" of the subject. Eg: Axler's "Linear Algebra Done Right" is really a second course in linear algebra – even Axler says this.
S V
S V@open_parens·
I ran into a `sec(theta)` integral today, and not only did Gemini derive it neatly; it also provided historical context around its use in the Mercator projection and noted that Barrow's proof of its antiderivative was the first recognized use of partial fractions for integration. Fantastic.
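For reference, a sketch of the partial-fraction derivation the tweet refers to: substituting u = sin θ turns the secant integrand into a rational function that splits into two simple fractions.

```latex
\int \sec\theta \, d\theta
  = \int \frac{\cos\theta}{1-\sin^2\theta}\, d\theta
  \;\overset{u=\sin\theta}{=}\; \int \frac{du}{(1-u)(1+u)}
  = \frac{1}{2}\int \left(\frac{1}{1-u}+\frac{1}{1+u}\right) du
  = \frac{1}{2}\ln\left|\frac{1+u}{1-u}\right| + C
  = \ln\left|\sec\theta+\tan\theta\right| + C
```

The last step multiplies numerator and denominator inside the log by (1 + sin θ), giving (1 + sin θ)²/cos²θ, whose square root is sec θ + tan θ.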
S V
S V@open_parens·
One of the best things a once-mathy-but-lost-touch person can do is to revisit math topics. I did this for a few topics and it was some of the best time I've spent.
Math Files@Math_files

Stages of life

S V
S V@open_parens·
I have deep respect for traditions, heritage and our culture. So, these "we did this before you" posts come across as lacking the virtue of magnanimity that our culture teaches. So what if somebody rediscovered something on their own?

Besides, who decides the truth? We can't even paint an accurate picture of what's happening around us today. Are we really to believe that something that allegedly happened hundreds of years ago is pristine information? Or that books are the final word on any topic? Most topics are too complex to have a definitive word.

I greatly enjoyed INSV Kaundinya's journey because of the heritage it represented. Crucially, it didn't have a tone of "we did ships before anyone else". Let's keep it that way.
SD Prasad
SD Prasad@s_dprasad·
@open_parens @Fintech03 A historical truth about any country should be viewed as a historical fact only. It shouldn't be cringe at all. Just because India is lagging behind many nations, historical truths shouldn't become cringe. You need not assume that people are sharing truths to seek validation.
Parimal
Parimal@Fintech03·
Apologies, but double-entry bookkeeping, which the West attributes to the Italian Luca Pacioli in 1494, was already functioning in India as the Bahi-Khata system in the 7th century.

Indian merchants did not just use the standard 1, 2, 3 in their internal books. They used Mahajani/Landa scripts, which were purposefully designed to be non-convertible. In these scripts, a 2 did not look like a character that could easily be turned into a 3 or an 8. The characters were designed with terminating loops: if you tried to add a stroke to a Mahajani digit, it would intersect a loop in a way that looked obviously botched.

They also wrote their ledgers without vowels. To an outsider (like a tax collector or a rival), the ledger looked like a jumble of consonants. Only the head accountant knew the key to turn those consonants into specific commodity names.
S V
S V@open_parens·
@aakashgupta Purcell in Electricity and Magnetism: "It's quite astonishing how unhelpful it is simply to read a solution. You'd think it would do some good, but ...". Just as the code in the mind and compiler-accepted code tend to be far apart.
Aakash Gupta
Aakash Gupta@aakashgupta·
This is a 12-year-old study that has failed replication three times. And the underlying claim is still probably right.

The paper is Mueller and Oppenheimer, 2014. 67 students at Princeton. Longhand note-takers scored higher on conceptual questions. It became the most cited paper in every "ban laptops" argument on Earth.

Then three separate labs tried to reproduce the result. Urry et al. at Tufts in 2021, 145 students. No effect. Morehead et al. in 2019, two experiments. No effect. A meta-analysis pooling eight similar studies. No effect.

So why am I saying it's still right? Because a 2023 Norwegian EEG study with 256 channels found something the behavioral research couldn't measure. Handwriting produces theta and alpha connectivity patterns between parietal and central brain regions that typing does not produce. Those specific frequencies are the ones your hippocampus relies on for memory formation.

Your brain treats handwriting as a motor-spatial problem. Five brain regions fire in coordination: premotor cortex, parietal cortex, cerebellum, fusiform gyrus, sensorimotor cortex. Typing activates a fraction of that network.

The original study measured the right outcome with the wrong methodology. The real finding lives at the neural level: handwriting rewires the encoding process itself.
Brandon Luu, MD@BrandonLuuMD

Students who took notes by hand scored ~28% higher on conceptual questions than laptop note-takers. Writing forces your brain to process and compress ideas instead of copying them.

S V
S V@open_parens·
@s_dprasad @Fintech03 No other country has these "we used to be so great" X accounts. Living in the past betrays insecurity. To each their own but there's the cringe factor to consider.
SD Prasad
SD Prasad@s_dprasad·
@open_parens @Fintech03 Maybe the idea is simply sharing the truth widely. The idea is not to gain any material advantage but just to help create a more correct version of history by countering misinformation.
S V
S V@open_parens·
Changing existing software has always been, and still is, the most expensive piece of the software lifecycle. Just like David, my experience is that current AI tools struggle to improve even a relatively small amount of code.

When I was a TL at Google, I consciously chose to use only a tiny portion of my brain capacity to design and author software. If something required non-obvious reasoning, I looked for simpler alternatives. This effectively bounded the complexity of the things my team owned/wrote/supported at scale, and we could change things easily. Eventually, I came to rely on just a few workhorse patterns of thinking/designing/authoring. They are broadly applicable, vastly simplify complexity and produce ageable architectures. Putting down a few of them here.

(1) Designing domains to solve problems openparens.pages.dev/blog/2023/abiy…
(2) Homomorphic code style: minimal tests, maximum correctness openparens.pages.dev/blog/2023/soft…
(3) Ageable tests - tests that effectively stay orthogonal to new features and code refactors. Yes, there absolutely IS a way to write OBJECTIVELY good tests. Will write about it.
(4) Functional core with imperative shell. I haven't blogged about this yet, but destroyallsoftware.com/screencasts/ca… comes close.
(5) Sidestepping age-poorly patterns openparens.pages.dev/blog/2023/soft…
(6) Composable APIs. For example, if A, B, C... are API calls, then Serial(A, B, C), Parallel(A, B, C), Transactional(Parallel(A, B, C)) should also be available combinators for API calls. This age-proofs APIs.
(7) Pull (poll if required), not Push. This keeps the state space bounded, and the responsibility of each system stays clear.

If I can get my agent to constrain itself to this line of thinking, the promised land of 10x acceleration AND maintainable code has a shot.
David Cramer@zeeg

I'm fully convinced that LLMs are not an actual net productivity boost (today). They remove the barrier to get started, but they create increasingly complex software which does not appear to be maintainable. So far, in my situations, they appear to slow down long-term velocity.
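The composable-API pattern (Serial/Parallel combinators closed over API calls) can be sketched in Python with asyncio. This is a minimal illustration, not the author's implementation; the names `serial` and `parallel` are hypothetical, and a `transactional` wrapper would follow the same shape.

```python
import asyncio

# Minimal sketch of composable API combinators.
# Each "call" is a zero-argument coroutine function returning a result;
# combinators return the same shape, so they nest freely.

def serial(*calls):
    async def run():
        # Await each call one after another, preserving order.
        return [await call() for call in calls]
    return run

def parallel(*calls):
    async def run():
        # Launch all calls concurrently and gather their results.
        return list(await asyncio.gather(*(call() for call in calls)))
    return run

# Stand-in API calls for the A, B, C in the tweet.
async def a(): return "A"
async def b(): return "B"
async def c(): return "C"

# Serial(A, Parallel(B, C)): run A first, then B and C concurrently.
result = asyncio.run(serial(a, parallel(b, c))())
print(result)  # ['A', ['B', 'C']]
```

Because every combinator takes calls and returns something with the same call interface, compositions like Transactional(Parallel(A, B, C)) fall out of the design rather than needing bespoke code.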

S V
S V@open_parens·
@andrey_kurenkov @PlayAstrocade Cost of various SWE tasks in the AI era, IME:
- Typing code: 0
- Discovering good tools/abstractions for a particular job: ~0
- Lifetime cost of software, factoring in improvability: ~70% of today's for experts, but *higher* for the rest.
S V
S V@open_parens·
I'd wager they'd discover the complexity of SWE soon enough. Good SWE is difficult for one main reason: the cost of new feature velocity and explainability explodes as an exponential function of the number of bad decisions, while good decisions take time, thought and expertise. Most software is bad. There's no better time to be an extreme generalist with expertise up and down the stack (instruction code through to pixels, disk APIs through to distributed web APIs) and across the software lifecycle (e.g. understanding that change, not the initial write, is where SWE hours are mainly spent).
vixhaℓ
vixhaℓ@TheVixhal·
Computer science is gradually returning to the domain of physicists, mathematicians, and electrical engineers as large language models automate much of what we currently call software engineering. The field’s center of gravity is shifting away from manual code writing and toward deeper theoretical thinking, mathematical insight, and systems-level reasoning.
S V
S V@open_parens·
@bhogleharsha @manishbatavia Wonder why we are so uncomfortable with a few pauses of silence. I notice the same behavior in interviews - only a few interviewees (Musk, famously) are comfortable pausing to think before responding.
S V
S V@open_parens·
Automated coding reminds me of 2011, when everyone rushed to make mobile apps and desktop was going to be dead. Desktop held its own on many things - e.g. where information density and large screens actually matter: booking flights, monitoring markets, software development, any sort of research, etc. Mobile's strength AND weakness was its portable nature. Similarly, automated code's strength AND weakness could turn out to be oneshotting. I haven't had much fun or confidence in improving a codebase after oneshotting it.
Claude@claudeai

Introducing Code Review, a new feature for Claude Code. When a PR opens, Claude dispatches a team of agents to hunt for bugs.

S V
S V@open_parens·
@TCzajka FWIW, there's an EWD on this topic with much the same points.
Tomek Czajka
Tomek Czajka@TCzajka·
0-based indexing is just simpler and better than 1-based indexing. Here is why:

0. Half-open ranges are better than closed ranges:
[a, b) + [b, c) = [a, c)
size of [a, b) = b - a
With closed ranges things are needlessly complicated; you need "+1" corrections everywhere:
[a, b] + [b + 1, c] = [a, c]
size of [a, b] = b - a + 1

1. Once you accept half-open ranges, starting with 0 is better than starting with 1:
The first n elements are [0, n).
Starting with 1 again creates a needless complication, requiring a "+1" correction:
The first n elements are [1, n + 1).
Gracia@straceX

We treat zero-indexed arrays like a law of programming, but it actually started as a small practical decision in early computer systems. In this article, I break down the story behind it and explain why that tiny decision still shapes how programmers write code today.
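The identities in the thread above can be checked directly in Python, whose `range` and slicing are half-open by design:

```python
# Half-open ranges compose with no "+1" corrections.
a, b, c = 2, 5, 9

# [a, b) + [b, c) = [a, c)
assert list(range(a, b)) + list(range(b, c)) == list(range(a, c))

# size of [a, b) = b - a
assert len(range(a, b)) == b - a

# With 0-based indexing, the first n elements are exactly the slice [0, n).
xs = ["p", "q", "r", "s", "t"]
n = 3
assert xs[:n] == ["p", "q", "r"]
print("ok")
```

The same composition property is why adjacent slices like `xs[:n]` and `xs[n:]` partition a list cleanly.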

S V
S V@open_parens·
Great job, and congrats on the successful surgery. If I had been there, I'd have appreciated it & even asked whether it got technically complex. My experience is that invisible excellence is usually only appreciated by those who themselves excel at something. Since excellence is rare, it is unlikely you'll meet many patients who'll appreciate your work. Just statistics. Do make sure to savour the few times you do get appreciated! Those moments provide immense pride and satisfaction.
Arshiet Dhamnaskar
Arshiet Dhamnaskar@arshiet·
It is disheartening to see the reception the medical fraternity gets in the end. I was part of a surgery that went on for 11 hours, from induction to closure.

Earlier, when I spoke of surgeries going on for approximately 5-6 hours, my non-surgeon friends would ask, "so how often do you get breaks for water, food, washroom etc?" I found that laughable. But today, I stood there for 11 hours, without a drop of water to drink, fatigued at the end of it (I enjoyed the surgery, of course).

But you know the first thing that the relatives of the patient asked when I shifted the patient out of the operation theatre? "Why did you take so long?" And no, it was not a voice of concern, asking why we took so long or whether there was anything wrong with the patient. It was a stern question: "Why couldn't you do it earlier?" When we explained that it was a difficult case, and that we had done a pretty good job in the end, they began asking for a 'guarantee' that the patient would be fully fit and fine. (She almost is, but how can we predict the future?)

Neurosurgery is not child's play. It takes skill to work around the brain and spinal cord. One wrong move, and suddenly you are on the verge of life and death, or disability. Why don't people understand?
S V
S V@open_parens·
Cool to see some famous people with disabilities similar to mine. Just a shame I don't share the same abilities too.

I too find new stuff difficult to follow quickly. But the reason is NOT that it is new. The surface-level appreciation is indeed quick. But as soon as I learn something new, I poke small variations into it and see if those still make sense to me. I start doing it subconsciously, and by the time I snap out of it, the presenter is a few more minutes into the talk and I've lost track.

Small variations are a good way to understand the essence of the thing you are learning. Truly understanding something is not just about why something works but also why something else that is very similar does not work. Learning the essence of something compresses it better for easier recall later on.
Crémieux@cremieuxrecueil

Niels Bohr was a bit of a meathead: he played football and drank copious amounts of beer. He was also very slow and couldn't follow the plots of movies. But his slowness masked incredible insightfulness of the sort that occasionally flummoxed his colleagues:

S V
S V@open_parens·
@lemire Matches my experience. The best people also tend to know when to spend time looking for a better solution vs when to go on autopilot. Similar to clock management in classical chess - i.e. knowing when to trade clock for move quality.
Daniel Lemire
Daniel Lemire@lemire·
I often make the case that speed and quality are related, but not according to the inverse relationship people often imagine. People assume that the slower you work, the better the quality. Empirically, it falls on its face. Your slowest employee is not likely to be the best.

You know those software engineers paid $1M or more? I bet most of them produce a lot of code in a year. In most cases, the fastest teams and the fastest people are also the best. You are not safer with a surgeon who works at half the speed: you are more likely in the hands of a poorly trained surgeon.

In the same breath, let me state that in the last 20 years or so, I have made maybe 5 to 10 lasting contributions. In particular, building software is hard and even a relatively small piece of software might require years of work. But it does not imply that working slower makes your software better!!!

But what about AI? AI seems to allow us to go faster… So, as per my model, it should (up to a point) increase our quality. I believe it should apply.

1. AI allows you to make mistakes faster and thus learn faster. If you can try an idea 50% faster, then you will make better software over time.
2. AI should make it easier to keep your software relevant as the world changes around you.

Yet danger lurks. If you go too fast, you risk falling for "AI slop."
S V
S V@open_parens·
The improvements in HBM and in GPUs/TPUs follow Steve Jobs' 1980 explanation that a major role of hardware is to accelerate software. "Software is something that either is changing too rapidly, or you don't exactly know what you want yet, or you didn't have time to get it into hardware, or the technology is not there to get it into hardware. What I see happening is that more and more software is getting integrated into hardware. Yesterday's software is today's hardware. So those two things are merging, I think, and the line between hardware and software is going to get finer and finer and finer and finer." Link 👇