Søren Sjørup
@zorendk

5.6K posts
https://t.co/Z4zLkfHFhI
Copenhagen, Denmark · Joined May 2009
1.4K Following · 307 Followers

Søren Sjørup reposted
Justine Tunney
Justine Tunney@jartine·
My project was adopted by 32% of enterprises. I have no idea who any of them are. I only know because Wiz (a business intelligence security company) published an aggregate report on it. We had spotty download statistics from Hugging Face, but Mozilla didn't trust them, because we saw very few people engaging on GitHub and Discord. There's no way to monetize a community of dark matter developers you can't see.

People pay for things if you have leverage, it's scarce, or it solves their pain. In the 2000s, when open source was immature and the world was unfamiliar with how it worked, there was an abundance of folks willing to pay for help solving the problems that caused. But as Linux and friends became more polished and perfect, Red Hat's business model dried up. Software is infinitely copyable and requires zero effort to maintain, which makes it fundamentally at odds with having any kind of economy.

There have been numerous efforts to make software not be the way that it is. For example, software is the only thing on Earth that can be both patented and copyrighted. Both of them failed. Folks would pirate. Patent trolls abused it. Open source rejected it. Additionally, folks have tried to regulate software with things like FIPS standards, which require corporations to pay experts for certifications, but this was rejected by the industry too. You have little hope of making any income off that unless you're a government contractor, and that means having the right connections. There definitely exists an economy for certifying and owning the risks of open source, but I don't think much of the money goes to the people who did the software work. That's just how their world operates, and I think it's a good thing that open source has helped them succeed.

The most successful model for profiting off software to date has been to never distribute it. You put it in the cloud and charge rent to anyone wanting to use it. It's the only way the software industry could survive, and everything which isn't that just became open source. At the end of the day, open source simply isn't compatible with any economic model we know, because it's the absence of an economic model. Capitalism won't work. Socialism won't work.

The only funding model I've seen work is the most ancient one, which is patronage. Before devices like patents existed, ancient innovators would seek the sponsorship of the most powerful folks of their day. This is how people like Archimedes, Michelangelo, etc. got paid. So I wish people wouldn't try to solve the open source monetization problem, because there isn't one. Money is orthogonal and it should stay orthogonal. I think it should be a gentleman-amateur activity rather than an institutionalized role people perform to feed their kids. Open source is the byproduct of curiosity, and it's not the sort of thing you can industrialize. If you bring too much money into the equation, it creates liabilities, responsibilities, etc. that corrupt the motivations and endanger those of us who just want to be curious.

However, this isn't just my wish; it's a warning. I've seen many folks try to solve this unsolvable problem, and it makes them all half mad; if they're smart, they give up before they go completely mad. The solution for an open source developer looking for income, to me, has always seemed as clear as day: you either win the affection of a Medici, or you make your money doing something else.

One modern Medici is the European social safety net. Many open source developers hail from Europe, since its economic policies ensure people have food and shelter, giving them the freedom to focus on anything, while taking away many freedoms to be enterprising. In America, I was able to fund my open source work on Cosmopolitan Libc for many years with a very simple stock trading strategy, which was to invest 100% of my money in a tech company on Charleston Road.

With financial markets giving me money for nothing, I felt it was fair that I should work on open source to kick back some of the benefits to the community. I've actually been discovering new ways to redistribute wealth from Wall Street to the open source community. It makes me happy to have the opportunity to apply my work towards building a thing for me. I mean, you can't spend your whole career making the tools for other people to do things without ever doing a thing yourself.

What's great is that financial markets are unbiased. It's surprisingly competitive giving things away for free. I've dealt with plenty of hate and harassment for sharing software with the world. But the NASDAQ won't hate me because of what I am or what it thinks I believe. All that matters is whether I can write a cleverer algorithm. That's the thing open source is supposed to be about. The only tradeoff is that I'll stop making money the moment I share my algorithm with everyone. So there's no glory or recognition in doing it. Just dollars.

In life, you can optimize for earning respect. You can optimize for money. You can even optimize for impact. But you can't maximize all of the above. The world just does not let it happen. But it'll give you a lot more of one if you're willing to give up the others. So anyone who's made the intentional tradeoff to max out one stat at the expense of the others shouldn't feel unhappy they weren't given all three.
21 replies · 44 reposts · 555 likes · 56.4K views
htmx.org / CEO of Thing-ness (same thing)
suggesting that typescript should be compiled to WASM or whatever is missing the forest for the trees: typescript should be supported natively in the browser. get rid of the toolchain entirely so people can just use typescript the same way they can just use javascript, you nerds
49 replies · 27 reposts · 550 likes · 30.1K views
trash
trash@trashh_dev·
whats the go to programming language for when you stop caring?
463 replies · 11 reposts · 751 likes · 161.7K views
Valentin Ignatev
Valentin Ignatev@valigo·
My initial culture shock from macOS has passed. The hardware part of the device is absolutely amazing; MacBooks mog my beloved ThinkPads if we don't count repairability. macOS is worse than KDE and Hyprland, but it's much better than Windows. I'm quite excited to learn the platform, and maybe even some ARM assembly. I also have to clarify about Xcode, because I saw "just install the CLI without an Apple ID" comments - I know that, but sadly the CLI is limited (for example, no testing on real devices).
17 replies · 0 reposts · 128 likes · 15.3K views
Dmitrii Kovanikov
Dmitrii Kovanikov@ChShersh·
Python doesn't care about performance. You'd think it cares about correctness. No. You'd think it cares about static types. Also no. You'd think it cares about tooling. Also no. You'd think it cares about coding practices. Also no. You'd think it cares about security. Also no.
99 replies · 27 reposts · 639 likes · 47.3K views
DHH
DHH@dhh·
Danish election is tomorrow. One of the contested topics is "inequality" — that's what the wealth tax is about — but that premise is entirely wrong. Denmark desperately needs MORE inequality! It needs wealth to accumulate! So it can be invested into next-gen companies.
Bo Møller@BoMoellerDK

Denmark needs more inequality - @dhh 💪

50 replies · 31 reposts · 824 likes · 61.1K views
Søren Sjørup 
Søren Sjørup @zorendk·
@JustDeezGuy Yeah but the program that concatenates an interpreter with a program is also a compiler. An inefficient one but still.
1 reply · 0 reposts · 2 likes · 17 views
Paul Snively
Paul Snively@JustDeezGuy·
@zorendk I think you always want at least two stages (and arguably more), but yes, whether that’s called “compilation” or not is an interesting question, and yes, “derive a compiler from an interpreter” is, what, the 2nd Futamura projection?
1 reply · 0 reposts · 1 like · 78 views
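The "an interpreter bundled with a program is a compiler" idea from this exchange can be sketched with closures. This is a hypothetical illustration, not anything from the thread: all names (`Expr`, `interpret`, `compile`) are mine. Specializing an interpreter to one fixed program so the tree walk happens only once is, roughly, the first Futamura projection.

```go
package main

import "fmt"

// A tiny expression language: literals, one variable x, addition, multiplication.
type Expr interface{}

type Lit struct{ N int }
type Var struct{}
type Add struct{ L, R Expr }
type Mul struct{ L, R Expr }

// interpret re-walks the tree on every call: the interpreter run with a program.
func interpret(e Expr, x int) int {
	switch t := e.(type) {
	case Lit:
		return t.N
	case Var:
		return x
	case Add:
		return interpret(t.L, x) + interpret(t.R, x)
	case Mul:
		return interpret(t.L, x) * interpret(t.R, x)
	}
	panic("unknown node")
}

// compile specializes the interpreter to one program: it walks the tree once
// and returns a closure that no longer consults the AST. A crude, inefficient
// "compiler" in exactly the sense the tweet describes.
func compile(e Expr) func(int) int {
	switch t := e.(type) {
	case Lit:
		n := t.N
		return func(int) int { return n }
	case Var:
		return func(x int) int { return x }
	case Add:
		l, r := compile(t.L), compile(t.R)
		return func(x int) int { return l(x) + r(x) }
	case Mul:
		l, r := compile(t.L), compile(t.R)
		return func(x int) int { return l(x) * r(x) }
	}
	panic("unknown node")
}

func main() {
	prog := Add{Mul{Lit{3}, Var{}}, Lit{1}} // 3*x + 1
	run := compile(prog)
	fmt.Println(interpret(prog, 10), run(10)) // both print 31
}
```

Specializing the specializer itself to the interpreter, per Snively's reply, would be the second projection: that yields a standalone compiler rather than one compiled program.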
Klara
Klara@klara_sjo·
Hey @Israel, I accept bribes.
33 replies · 21 reposts · 188 likes · 5.8K views
Søren Sjørup 
Søren Sjørup @zorendk·
@HostOfMeta I think the performance information is still there in the types. What I like is that the code doesn’t have to change when a pointer to a struct becomes a struct value.
0 replies · 0 reposts · 1 like · 73 views
Jeremie Pelletier
Jeremie Pelletier@HostOfMeta·
Am in the camp that C's (.) vs (->) is a fantastic feature. Unifying both into (.) hides memory indirection, so reading the code loses performance information :/ Sure, not everything's embedded, but everything's slow. GPUs get it right in decoupling linear uniforms from IO.
3 replies · 0 reposts · 23 likes · 2.1K views
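A minimal Go sketch of the unification the two tweets are debating (the type and field names are mine, purely for illustration). Go spells field access on a value and on a pointer identically, so code survives a value-to-pointer refactor unchanged, which is Søren's point; the indirection cost disappears from the use site, which is Jeremie's complaint.

```go
package main

import "fmt"

type Point struct{ X, Y int }

func main() {
	v := Point{1, 2}
	p := &v

	// Same selector for both: p.X is shorthand for (*p).X,
	// where C would force v.X vs p->X.
	fmt.Println(v.X, p.X) // prints: 1 1

	// The hidden indirection is real: writing through the
	// pointer mutates the original value.
	p.X = 42
	fmt.Println(v.X) // prints: 42
}
```

The tradeoff is exactly as stated in the thread: one syntax means less churn when representations change, at the cost of the reader no longer seeing where a memory dereference happens.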
Aaron Day
Aaron Day@AaronRDay·
I’m beginning to think Trump isn’t going to build a wall and have Mexico pay for it.
200 replies · 1.2K reposts · 22.7K likes · 476.7K views
Pete Cawley
Pete Cawley@corsix·
Speaking as someone who studied joint mathematics and computer science, this right here is how you tell apart the mathematicians from the computer scientists.
[attached image]
84 replies · 22 reposts · 850 likes · 260.9K views
oxcrow
oxcrow@oxcrowx·
To me, there are only a few modern languages that can be called simple: Odin, Austral, OCaml, Scheme, Lua, C3, and Go. Most other widely loved/used languages, like Rust, Zig, C++, etc., are fairly complex. Now the dilemma is: users seem to prefer the complex languages. Why?
141 replies · 17 reposts · 525 likes · 55K views
Grady Booch
Grady Booch@Grady_Booch·
It is increasingly clear that large language models - by the very nature of their architecture - are incapable of producing anything beyond the mediocrity of their training data. For me, the interesting question is this: why are humans able to do so?
Sukh Sroay@sukh_saroy

New research just exposed the biggest lie in AI coding benchmarks. LLMs score 84-89% on standard coding tests. On real production code? 25-34%. That's not a gap. That's a different reality.

Here's what happened: researchers built a benchmark from actual open-source repositories: real classes with real dependencies, real type systems, real integration complexity. Then they tested the same models that dominate HumanEval leaderboards. The results were brutal. The models weren't failing because the code was "harder." They were failing because it was *real*. Synthetic benchmarks test whether a model can write a self-contained function with a clean docstring. Production code requires understanding inheritance hierarchies, framework integrations, and project-specific utilities. Different universe. Same leaderboard score.

But it gets worse. A separate study ran 600,000 debugging experiments across 9 LLMs. They found a bug in a program. The LLM found it too. Then they renamed a variable. Added a comment. Shuffled function order. Changed nothing about the bug itself. The LLM couldn't find the same bug anymore. 78% of the time, cosmetic changes that don't affect program behavior completely broke the model's ability to debug. Function shuffling alone reduced debugging accuracy by 83%. The models aren't reading code. They're pattern-matching against what code *looks like* in their training data.

A third study confirmed this from another angle: when researchers obfuscated real-world code (changing symbols and structure while keeping functionality identical), LLM pass rates dropped by up to 62.5%. The researchers call this the "Specialist in Familiarity" problem. LLMs perform well on code they've memorized. The moment you show them something unfamiliar with the same logic, they collapse.

Three papers. Three different methodologies. Same conclusion: the benchmarks we use to evaluate AI coding tools are measuring memorization, not understanding.

If you're shipping code generated by LLMs into production without review, these numbers should concern you. If you're building developer tools, the question isn't "what's your HumanEval score." It's "what happens when the code doesn't look like the training data."

133 replies · 130 reposts · 1K likes · 111.3K views
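To make the thread's "cosmetic changes that don't affect program behavior" concrete, here is a hypothetical sketch of the kind of semantics-preserving transformation those studies apply. Both functions and all identifiers below are mine, not from any of the papers: the second is the first after renaming identifiers and adding a comment, so a compiler (or careful human) sees the same function even though the surface text differs.

```go
package main

import "fmt"

// sumPositives returns the sum of the strictly positive numbers in xs.
func sumPositives(xs []int) int {
	total := 0
	for _, x := range xs {
		if x > 0 {
			total += x
		}
	}
	return total
}

// k7 is sumPositives after purely cosmetic edits: renamed identifiers
// and an added comment. Behavior is untouched by construction.
func k7(q []int) int {
	// accumulate the entries above zero
	z := 0
	for _, w := range q {
		if w > 0 {
			z += w
		}
	}
	return z
}

func main() {
	in := []int{3, -1, 4, -1, 5}
	fmt.Println(sumPositives(in), k7(in)) // prints: 12 12
}
```

The studies' claim is that a model which "understands" code should treat these two identically, yet measured debugging accuracy collapses under exactly this kind of rewrite.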
Erik Meijer
Erik Meijer@headinthebox·
@fwbrasil Existential question, does the programming language even matter anymore when the AI agent does all the work?
22 replies · 1 repost · 52 likes · 5.5K views
Flavio Brasil
Flavio Brasil@fwbrasil·
A nice plot twist: Scala's bad DevEx is greatly mitigated by AI agents. I don't bother with broken IDE refactoring, don't need to fight sbt anymore, don't need to debug stuff because of bad docs, can easily pinpoint compiler bugs and find workarounds, etc. Just ship 🚀🚀🚀
4 replies · 0 reposts · 39 likes · 6.1K views
John Kenemore
John Kenemore@JohnKenemore·
Your reaction is interesting. I wonder why, when super successful people of previous generations try to share their lessons learned over a lifetime, we as the younger generation denigrate them through sarcasm or just disrespect. It's not a loss to them, nor to anyone who reads our comments. It's a true loss to the one closing their mind to thought and to the offer of shared life wisdom. I pray wisdom touches you and opens your eyes, mind, and life to just one nugget of truth that will make you a better person, morally, spiritually, or financially, any of which will be a move toward the goal of personal growth.
2 replies · 0 reposts · 5 likes · 63 views
Documenting Saylor
Documenting Saylor@saylordocs·
Charlie Munger's 1998 Harvard speech is the ultimate cheat code for life. He compressed 74 years of billionaire wisdom into just 30 minutes. Most people spend 4 years in college and learn less than what's in this video. Save this video; you will come back to it.
58 replies · 2.1K reposts · 6.9K likes · 762.4K views
braai engineer
braai engineer@BraaiEngineer·
well well well. turns out it is quite difficult to build a correct & fast differential dataflow compiler + supervised runtime.
1 reply · 0 reposts · 3 likes · 72 views