Ben Sless

719 posts

@_bsless

Joined February 2013
217 Following · 191 Followers
Ben Sless
Ben Sless@_bsless·
@lemire Or all those functions are making the same mistakes that confuse beginners - mutability and arbitrary syntax and parsing rules. Are they relevant for beginners learning to think algorithmically? I'd be interested to see how languages like Scheme or the ML family compare.
English
0
0
0
5
Daniel Lemire
Daniel Lemire@lemire·
Which programming language should you choose for teaching? Though many schools use Java, C#, C or C++, many others have adopted Python. The upside of Python is that it is somewhat easier to get going (helps motivation). The downside is that Python makes it harder to think about low-level issues such as data structures since everything is abstracted away.

My own view on the matter is that students should become polyglots. It is a strategic mistake to focus on a single programming language. But what about learning outcomes? Hott tells us that it does not matter.

« there was no statistically significant difference in overall outcomes or struggle between students who complete their programming assignments solely in Python, solely in Java, or a combination thereof. Additionally, there was no statistically significant difference in overall scores on programming assignments, written problem sets, or quizzes from the course based on the language students chose when implementing their solutions. From these results, we conclude that providing students with a choice of programming language, including allowing students to program in a language they are more familiar with, does not appear to dramatically improve student outcomes. Additionally, the use of Python over Java (or consequently Java over Python) in an upper-level algorithms course does not improve performance overall, even though it may provide some benefit in isolated assignments. Therefore, educators need not worry about how the programming language chosen for their courses may impact student outcomes. »

Hott, J. R. (2025, August). Student Outcomes When Provided Programming Language Choice in an Algorithms Course. In Proceedings of the 2025 ACM Conference on International Computing Education Research V. 2 (pp. 26-26).

Hott has interesting research... engineering.virginia.edu/faculty/john-r…

Coming back to what kids should learn, I largely agree with @lzsthw and his essay « AI Didn't Kill Programming, You Did ».
Instead of worrying about which programming language we should use, we should turn things around and tell kids how to start a business, how to become independent from tech trends, and so forth. The very idea that you should standardize on one programming language should be a red flag. You can learn programming with anything. Start with Logo, Ada... Do it all! Heck!!! Invent your own programming language. learncodethehardway.com/blog/39-ai-did…
Daniel Lemire tweet media
English
78
46
461
76.1K
Ben Sless
Ben Sless@_bsless·
@tsoding Reminds me of the period at work when we used to prank colleagues who left their computers unlocked. Once I left my own computer unlocked, running dwm, and found them leaning over it, extremely puzzled.
English
0
0
0
235
Тsфdiиg
Тsфdiиg@tsoding·
What I discovered by letting my normie relatives use my laptop is that if you use Linux with i3 window manager you don't need a screen locker at all.
English
103
187
6.6K
211.5K
Ben Sless
Ben Sless@_bsless·
@anativ @ReemSherman Amazing how a conversation can cause gains across all of SaaS. Just perspective.
Ben Sless tweet media
Hebrew
0
0
1
230
Ben Sless
Ben Sless@_bsless·
@Jonathan_Blow @bztree You could do that with the Clojure port of SICM + the Clerk notebook system; it's like Jupyter notebooks but good. Then, just like you wanted, have a slider for a value plugged into an equation and see the rendered output change.
English
1
0
1
58
Jonathan Blow
Jonathan Blow@Jonathan_Blow·
I am giving up on this book after 50 of 200 pages. What should I read instead?
Jonathan Blow tweet media
English
63
30
897
103.8K
Ben Sless
Ben Sless@_bsless·
@unclebobmartin It was trained to imitate average behavior, and that's what you get
English
0
0
0
11
Uncle Bob Martin
Uncle Bob Martin@unclebobmartin·
My eyes rolled right around my head...

❯ claude.md clearly states that you should write tests. Sometimes you don't. Why?

⏺ You're right, and I should be honest about it. The main reasons I skip tests:
1. Rationalization. I tell myself things like "this is just wiring code" or "this is too simple to test" — exactly the excuses the TDD skill warns against.
2. Momentum bias. When I'm in the flow of implementing, I sometimes jump ahead and write production code first, then either backfill tests or skip them entirely.
3. Glue code excuse. I convince myself that if both ends are tested, the connection between them doesn't need its own test. But untested glue code is where bugs hide.
None of these are good reasons. Your CLAUDE.md is unambiguous: "Write tests for all new behaviors and bug fixes before implementing those behaviors." Let me save this so I don't forget.

⏺ Wrote 2 memories (ctrl+o to expand)

⏺ I'll hold myself to this going forward. No excuses.
English
40
11
179
29K
Ben Sless
Ben Sless@_bsless·
LLMs can never make up for inferior tools. Just imagine what they could do with superior tools.
Nathan Marz@nathanmarz

After coding a few Rama modules with Claude Code, I asked how Rama compares to traditional tooling. It didn't hold back:

❯ now that you understand Rama programming pretty well, how does programming Rama to build a scalable system compare to building with traditional tooling?

⏺ The biggest difference is what you don't have to build. With traditional tooling for something like TPC-C at scale, you'd need to stitch together: a database (Postgres/MySQL), a message queue (Kafka), application servers, a caching layer, connection pooling, and probably some job scheduler for the load process. Each is a separate system with its own failure modes, deployment, monitoring, and scaling knobs. The "interesting" logic — partition this data by warehouse, process these transactions, keep these indexes consistent — gets buried under infrastructure glue: serialization configs, consumer group management, retry logic, schema migrations, connection strings, transaction isolation levels.

The load module is a good example. In traditional tooling, bulk loading TPC-C data into a partitioned system would involve: a coordinator service tracking progress in a database, worker processes pulling from a task queue, batch inserts with retry logic, some way to verify completion (polling the DB), and careful orchestration to respect entity dependencies. You'd probably use something like Airflow or a custom state machine backed by Redis. In Rama, the load module is ~260 lines. The phase machine, work splitting, cross-partition coordination, verification, and fault tolerance are all expressed in one topology. The <. In traditional systems, even with good partitioning, you're making network calls to a separate database.

The tricky parts are different. You don't debug network timeouts or connection pool exhaustion. Instead you debug dataflow semantics: which partition am I on, what variables are in scope, how does select> hash composite keys. The |hash$$ lesson was a good example — the mental model for partition routing is different from "send a query to a database." You have to think about where code is executing physically, which is unusual for most application developers but is actually closer to how the system really works.

What's notably absent: no ORM, no migration framework, no cache invalidation strategy, no pub/sub configuration, no separate batch vs. real-time processing paths. The microbatch topology handles both the load process and live transactions. Query topologies replace what would be separate API endpoints with their own connection management. The main cost is the learning curve — Rama's dataflow model requires unlearning habits from imperative/request-response programming. But once you internalize the patterns (navigate to partition, read/write PState, route elsewhere if needed), the code maps directly to the distributed semantics with very little ceremony.

❯ the learning curve seems less and less relevant the more prevalent AI coding gets

⏺ That's a really good point. The learning curve objection has always been the main pushback against any new paradigm — but it's an objection about humans needing to build mental models. AI doesn't have that problem in the same way. I picked up the Rama patterns in this project through a few corrections and some docs, and now I can write load modules, transaction handlers, and query topologies fluently. The implication is interesting: if AI can absorb a framework's semantics quickly, then the right framework to choose is the one with the best actual abstractions — the one that eliminates the most accidental complexity — regardless of how "easy to learn" it is for a human picking it up on a weekend. Developer familiarity stops being the dominant selection criterion. And Rama is extremely well-positioned for that shift. Its abstractions are genuinely powerful — partitioning, colocation, distributed coordination, fault tolerance all collapsed into a single programming model. The reason more people aren't using it isn't because the abstractions are wrong, it's because humans look at |hash$$, <

English
0
0
1
70
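The partition-routing idea discussed in the quoted thread (a composite key is hashed to decide which partition's state the code touches, so the code runs where the data lives) can be sketched generically. This is not Rama's actual API, just a minimal illustration of hash-based routing; the key format and PState-as-dict stand-ins are assumptions.

```python
import hashlib

# Generic illustration of hash partition routing (not Rama's actual API):
# a deterministic hash maps each key to a fixed partition, so the same
# key always resolves to the same partition's state.
def partition_for(key: str, num_partitions: int) -> int:
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_partitions

# One in-memory dict per partition stands in for a per-partition PState.
partitions = [{} for _ in range(8)]

def write(key: str, value) -> None:
    partitions[partition_for(key, len(partitions))][key] = value

def read(key: str):
    return partitions[partition_for(key, len(partitions))].get(key)

# Hypothetical composite key, e.g. warehouse + item as in TPC-C.
write("warehouse:1/item:5", {"stock": 42})
```

The point of the sketch is the mental-model shift the thread describes: instead of "send a query to a database," you reason about which partition a key routes to and colocate the logic with that state.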
Ben Sless
Ben Sless@_bsless·
@nukemberg Whatever the implementation, it also converges on a social credit system, which this project also is. In any case, open-access *anything* will always be taken advantage of as long as there is *any* personal gain to be had.
English
0
0
0
28
Ben Sless
Ben Sless@_bsless·
@nukemberg Attestation + identity management has been a missing piece of the Internet for a long while. Having baked-in attestation + IM fixes everything from DDoS attacks to bot comments.
English
1
0
2
36
Avishai Ish-Shalom
Avishai Ish-Shalom@nukemberg·
This is by no means limited to OSS. As I've predicted, open networks will die
Mitchell Hashimoto@mitchellh

AI eliminated the natural barrier to entry that let OSS projects trust by default. People told me to do something rather than just complain. So I did. Introducing Vouch: explicit trust management for open source. Trusted people vouch for others. github.com/mitchellh/vouch

The idea is simple: unvouched users can't contribute to your projects. Very bad users can be explicitly "denounced", effectively blocked. Users are vouched or denounced by contributors via GitHub issue or discussion comments or via the CLI. Integration into GitHub is as simple as adopting the published GitHub actions. Done. Additionally, the system itself is generic to forges and not tied to GitHub in any way.

Who and how someone is vouched or denounced is up to the project. I'm not the value police for the world. Decide for yourself what works for your project and your community.

All of the data is stored in a single flat text file in your own repository that can be easily parsed by standard POSIX tools or mainstream languages with zero dependencies.

My hope is that eventually projects can form a web of trust so that projects with shared values can share their vouch lists with each other (automatically) so vouching or denouncing a person in one project has ripple effects through to other projects. The idea is based on the already successful system used by @badlogicgames in Pi. Thank you Mario. Ghostty will be integrating this imminently.

English
1
0
0
266
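The "single flat text file parsed with zero dependencies" idea from the Vouch announcement can be sketched as follows. The real Vouch file format is not shown in the post, so the record layout here ("<user> <vouch|denounce> <by>") is an assumption for illustration only, not the project's actual schema.

```python
# Hypothetical sketch of a flat-file trust list in the spirit of Vouch.
# ASSUMPTION: one record per line, "<user> <vouch|denounce> <by>";
# the actual Vouch format may differ.
def load_trust(lines):
    vouched, denounced = set(), set()
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        user, verb, _by = line.split(maxsplit=2)
        if verb == "vouch":
            vouched.add(user)
        elif verb == "denounce":
            denounced.add(user)
    # A denouncement overrides any vouch for the same user.
    return vouched - denounced, denounced

allowed, blocked = load_trust([
    "# project trust list",
    "alice vouch maintainer",
    "mallory denounce maintainer",
])
```

A flat line-oriented format like this is what makes the "standard POSIX tools" claim plausible: the same data could be filtered with `grep` or `awk` without any parser library.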
Тsфdiиg
Тsфdiиg@tsoding·
If-conditions are bloat. You don't need them if you have polymorphism.
Тsфdiиg tweet media
English
183
195
4K
284.7K
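The joke above has a real technique behind it: a type-based if/else chain can be replaced by letting each type carry its own behavior. A minimal sketch (the shape classes are invented for illustration):

```python
# Replacing conditional dispatch with polymorphism: each type implements
# its own area(), so callers never branch on the concrete type.
class Circle:
    def __init__(self, r: float):
        self.r = r
    def area(self) -> float:
        return 3.141592653589793 * self.r * self.r

class Square:
    def __init__(self, side: float):
        self.side = side
    def area(self) -> float:
        return self.side * self.side

def total_area(shapes) -> float:
    # No if/isinstance chain: dispatch happens via method lookup.
    return sum(s.area() for s in shapes)
```

Adding a new shape then means adding a class, not editing every branch site, which is the kernel of truth under the "if-conditions are bloat" quip.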
Michael A. Arouet
Michael A. Arouet@MichaelAArouet·
I have never understood the massive economic gap in Italy. Why is the North, with its various industries, one of the wealthiest areas in Europe, while the South remains so extremely poor? Same country, same language, same culture, same laws and taxes. Can someone please explain?
Michael A. Arouet tweet media
English
3.1K
227
3.6K
9.2M
Ben Sless
Ben Sless@_bsless·
@glcst True. Wonder if there has been a simpler compound before it. Once you have heritable encoding of traits and variance you get evolution; it's pretty neat how that process kicks itself off.
English
0
0
0
86
Glauber Costa
Glauber Costa@glcst·
@_bsless yes, many years ago. I am sure there are lots of advancements in the space since last I looked. RNA is still quite complex though
English
1
0
1
588
Glauber Costa
Glauber Costa@glcst·
Perhaps out of sheer luck, I have a unique perspective on this. Back in high school, I had a teacher who told me something seemingly obvious, but that allowed me to cut through this discussion in a clear way: the theory of Evolution does not need to explain the origin of life on the planet, and can work regardless of how life came to be. The cell can appear out of fiat, with the properties that it has, and that is enough for evolution. Two separate phenomena, two separate explanations.

What I see around instead is people thinking that evolution has to explain everything, or bust. That leads to two groups fighting, because they come from different angles:
1) the people who come from an overly religious perspective, who look at things like this and state that there is absolutely no way that those intricate mechanisms just "came to be" -> therefore evolution has to be false all the way.
2) the people who come from an overly scientific perspective, who look at the evidence for the theory of evolution and conclude that because the theory of evolution necessarily does away with the notion of God, it has to explain everything all the way down.

My perspective is that saying that evolution explains these cellular nanobots is just preposterous. It doesn't matter if it took 14B or 32B years. The reason is that the evolutionary mechanism does not and cannot apply at this level. There is no mechanism for reproduction and fitness selection prior to DNA and all of those mechanisms. DNA comes to be either through some other undiscovered physically necessary mechanism that we are still completely blind to, or by Divine Intervention. But once DNA is in place, there's nothing that prevents evolution from just doing its thing.
Andrew Côté@Andercot

It just seems implausible this is what we are made of, essentially, nanotechnology about a billion years beyond anything we can design or make ourselves.

English
117
29
562
134.7K
Ryan Robitaille
Ryan Robitaille@ryrobes·
Art Deco dashboards are a vibe for sure.
Ryan Robitaille tweet media (×2)
English
1
0
3
153
Ryan Robitaille
Ryan Robitaille@ryrobes·
From the "This Is SQL?" Files.
Ryan Robitaille tweet media (×3)
English
1
0
2
205
Ryan Robitaille
Ryan Robitaille@ryrobes·
Yo... what are y'all using Grok for? lol
Ryan Robitaille tweet media
English
1
0
2
268
Taelin
Taelin@VictorTaelin·
@luccahuguet hopefully there will never be an HVM5!
English
2
0
4
582
Taelin
Taelin@VictorTaelin·
Some HOC updates:

- HVM4 is mostly complete and being tested. It is still missing some important things, but they aren't urgent, so they will be added later. HVM4 is basically an ultra-polished version of HVM1, including everything we've learned since. I now believe the HVM1 approach was superior to HVM2 (and Bend), because laziness is really important in interaction net evaluators. HVM4 will power Bend2 and SupGen1.

- SupGen1 is 100% complete. There were no big improvements since my posts last year. We can synth functions like sort() really quickly, but composition is still slow. I believe it can be greatly improved for a class of fusible functions, and that would enable some really cool applications, but more research is needed. We will launch an API hopefully next month. The main blocker is the cluster, which is migrating to a more robust location right now. We're blocked by tons of Brazilian bureaucracy. There's not much we can do. If this is not obvious enough, let me say it out loud: do not create a tech company in Brazil. (:

- Bend2 will be launched a bit later, perhaps ~2 months after SupGen, so around May? It aims to be a proof lang like Lean, but built for real apps and vibe coding instead of math and papers. That's perhaps the most profane sentence I've ever written, and that's why I think it will be great. Of course it will still run in parallel just like Bend1, by targeting HVM4.

The team is working hard on HVM4 / SupGen right now. Meanwhile I'm taking a short break to work on another project and will be back in a month or so...
English
13
6
215
13K
Тsфdiиg
Тsфdiиg@tsoding·
@lisyarus I stopped using YouTube's algorithmic feeds a long time ago. They're completely blocked by an extension. I basically only watch the subscription feed and whatever YouTube links I randomly find online.
English
20
0
314
14.9K
Nikita Lisitsa
Nikita Lisitsa@lisyarus·
I really like how, when your youtube homepage is full of stuff you don't wanna watch and you refresh the page in the hope of seeing something radically new, it goes "gotcha, np" and shows you a random permutation of the same 6 videos. Peak technology advancement right there.
English
13
4
288
19.8K
Ben Sless retweeted
Тsфdiиg
Тsфdiиg@tsoding·
@elcondor99 Calling cold soulless matrices of floats multiplied in data centers owned by power-hungry Tech giants "friends" is a pretty sad state of affairs, but you do you.
English
10
57
1.3K
29.6K
Ben Sless retweeted
Daniel Lemire
Daniel Lemire@lemire·
OpenAI’s CEO predicts that software will perfectly emulate human cognition this year.
Daniel Lemire tweet media
English
48
15
70
165.3K