Jonathan Ellis

20.6K posts

@spyced

Brokk founder. Previously DataStax co-founder, JVector author, and Apache Cassandra project chair.

Austin, TX · Joined April 2009
236 Following · 9K Followers
Jonathan Ellis reposted
Evgenii Ivanov @eivanov89
io_uring easily beats AIO and gets faster with every kernel — until both suddenly get 30% slower. Join a database developer’s unexpected journey into the Linux kernel and IOMMU. medium.com/ydbtech/how-io…
3 replies · 27 reposts · 208 likes · 38.9K views
Jonathan Ellis reposted
Daniel Hnyk @hnykda
LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM PyPI release 1.82.8 has been compromised: it contains litellm_init.pth with base64-encoded instructions to send all the credentials it can find to a remote server and to self-replicate. Link below.
269 replies · 2.1K reposts · 8.4K likes · 4.4M views
Jonathan Ellis reposted
swyx @swyx
i challenge you to find a single kernel-writing infra company this cracked and this confident that they can do this all entirely in the open, and it's both immediately useful and ~nobody can catch up (if someone does, they still win because Mojo)
Chris Lattner @clattner_llvm

@Zyyon_ Please don’t tell anyone: we aren’t just open sourcing all the models. We are doing the unspeakable: open sourcing all the gpu kernels too. Making them run on multivendor consumer hardware, and opening the door to folks who can beat our work. Plz keep it quiet, ok? 😉

21 replies · 10 reposts · 346 likes · 44.6K views
Jonathan Ellis reposted
Ethan Mollick @emollick
It's annoying that my tireless team of little computer people, made out of statistical models that predict words based on the corpus of all human language and are thus reasonable approximations of a compression of the knowledge of humanity, takes 15 minutes or so to complete some tasks
59 replies · 53 reposts · 905 likes · 36.7K views
Jonathan Ellis reposted
Chris Lattner @clattner_llvm
@Zyyon_ Please don’t tell anyone: we aren’t just open sourcing all the models. We are doing the unspeakable: open sourcing all the gpu kernels too. Making them run on multivendor consumer hardware, and opening the door to folks who can beat our work. Plz keep it quiet, ok? 😉
12 replies · 76 reposts · 1.3K likes · 85.5K views
Jonathan Ellis reposted
Taelin @VictorTaelin
more of the same: after begging Opus in every way possible to optimize Bend's checker ("make it fast, fix quadratic blowups, think hard pls"), there was zero improvement, so I decided to babysit it. I was giving the instructions; it was doing the boring work. I asked it to measure stuff, break timings down, and dissect the code exactly how I would. Two hours later, the checker is now ~10x faster. So, as of March 2026 (and I don't like this), automated research with AI *still* sucks, but a human domain expert using it to empower himself can achieve great things. Below is the summary of this chat! Good night.
[image attached]
11 replies · 14 reposts · 332 likes · 28.2K views
Jonathan Ellis reposted
Paul Graham @paulg
Someone asked what advice founders ignore. That they:
1. Should change their name.
2. Should launch fast.
3. Shouldn't treat fundraising as success.
4. Shouldn't assume they can raise because it's time to.
5. Should fire bad people quickly.
6. Shouldn't talk to acquirers.
297 replies · 237 reposts · 4.2K likes · 880.4K views
Jonathan Ellis reposted
Luis Garicano 🇪🇺🇺🇦
Famously (there is a beautiful Works in Progress piece on this), in 2016 Geoffrey Hinton told an audience in Toronto that medical schools should stop training radiologists, since AI would soon outperform them at reading scans. Ten years later, there are more radiologists than ever, and they earn more than they did then. Hinton was right about the task, but he was wrong (so far!) about the future of the radiology profession. Times have never been better for them.

The gap between those two claims, the difference between tasks and jobs, is the subject of a paper I have written with Jin Li and Yanhui Wu, and that we release today: "Weak Bundle, Strong Bundle: How AI Redraws Job Boundaries." (Very relatedly, we are also finishing the first draft of our book "Messy Jobs," on AI and jobs!! You will be the first to hear.)

We start from the observation that the growing literature on AI and labor markets measures the AI shock by task exposure: people count how many tasks in a given occupation AI can perform, and infer that more exposure means more displacement. Eloundou et al. published a paper in Science in 2024 that started this literature, and many follow the same logic. The inference they make is that the more exposed the tasks, the worse the outcomes.

This is incomplete, because labor markets price jobs, not tasks. A radiologist does not just sell image classification, but does many other jobs: triages cases, communicates with other physicians, trains residents, makes the difficult decisions, and signs a diagnosis. The market buys a bundled service.

The question AI poses is not whether it can do one task inside the bundle. The question is whether that task can be pulled out. Thread (1/3) dropbox.com/scl/fo/689u1g7…
[image attached]
39 replies · 411 reposts · 1.7K likes · 302.1K views
Mitchell Hashimoto @mitchellh
Ghostty just surpassed Terraform in stars (my previous most-starred project I started). It took Terraform 12 years to reach 48K. Ghostty did it in 1 year. It's bigger than Terraform in active usage, too.

I take it personally when people doubt I can outdo my past. I can take credit for starting both, but not for ongoing development (for the successes and failures). Neither project is a solo endeavor. I'm still extremely actively involved with Ghostty, but there's also a team of a dozen maintainers. I stepped back from Terraform and stopped working on it directly six or more years ago.

I consider stars a vanity metric and I don't care about it at all except in this narrow case. I'm a super competitive person (in general), but particularly against my past self. There's no one I like "winning" against more than my past. So, this is my one exception for caring about stars.
[image attached]
132 replies · 98 reposts · 3.9K likes · 171.5K views
Jonathan Ellis reposted
Ethan Mollick @emollick
The idea that technology, like AI, deskills us is not a surprise. I learned cursive at school; my father learned how to use a slide rule. Neither skill is widely mourned. What is important is whether we will make deliberate choices about which skills to keep, and what those will be.
63 replies · 36 reposts · 495 likes · 27.3K views
Jonathan Ellis reposted
Aaron Stannard @Aaronontheweb
One of the most insidious tics LLMs have when coding is an obsession with adding "fallback" behaviors everywhere. These are extremely toxic because they hide real bugs and, most importantly, introduce lots of potential privilege-escalation vulnerabilities everywhere.
64 replies · 45 reposts · 1K likes · 52.8K views
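The failure mode Stannard describes can be sketched in a few lines. This is a hypothetical illustration (the `ROLES` table and function names are invented, not from any real codebase): the "defensive" fallback swallows the lookup error and quietly grants access, so the real bug never surfaces.

```python
# Hypothetical sketch of the LLM "fallback" tic: a permission check
# that swallows errors versus one that fails fast.

ROLES = {"alice": "admin", "bob": "viewer"}

def can_delete_fallback(user: str) -> bool:
    # LLM-style "defensive" code: any lookup failure is caught and the
    # action is allowed anyway. An unknown user silently gains the
    # privilege, and the underlying bug (bad data, typo'd key) is hidden.
    try:
        return ROLES[user] == "admin"
    except KeyError:
        return True  # the "graceful" fallback is a privilege escalation

def can_delete_fail_fast(user: str) -> bool:
    # Safer: an unknown user is an error the caller must see and handle.
    if user not in ROLES:
        raise KeyError(f"unknown user: {user!r}")
    return ROLES[user] == "admin"
```

Here `can_delete_fallback("mallory")` returns `True` even though "mallory" has no role at all, while the fail-fast version raises and forces the caller to deal with the bad input.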
Jonathan Ellis reposted
Niko McCarty. @NikoMcCarty
I think this is one of the most important articles we've published at @AsimovPress. If you read carefully, there are at least 3-4 ideas in here that *should* be large, well-funded research programs.

The article begins by arguing that existing AI models are good at predicting things *within* an existing framework, but are not good at building new frameworks (and thus cannot do paradigm-shifting science). As AI models become more widespread in science, they therefore risk "hypernormal science," meaning we will have fewer actual breakthroughs and more incremental discoveries.

The author (Alvin Djajadikerta) supports this argument with several examples, one of which comes from germ theory:

"In the mid-nineteenth century, doctors thought that illness was caused by noxious air, and kept meticulous records accordingly. The physician William Farr mapped cholera deaths across London and found they correlated strongly with low elevation, which he thought was because noxious vapors accumulated in low-lying areas. He was actually picking up a real signal: low-lying districts were closer to the contaminated Thames River. But because his data was organized around air quality, he could not find the true cause..."

"An AI trained on Farr's records could have found even subtler correlations, and would have been genuinely useful for predicting which neighborhoods would be hit hardest in the next outbreak. But it would not be able to derive the concept of a waterborne microorganism, as this was not a variable anyone had yet recorded."

After giving other examples of this, Alvin begins mapping out ideas to solve this problem and create AIs that are "visionary" rather than "merely predictive." My favorite of his ideas is to use AI agents as a model organism for metascience. The gist is that many paradigm shifts seem to happen under particular conditions.

"Bell Labs, Xerox PARC, and the early Laboratory of Molecular Biology at Cambridge all produced extraordinary concentrations of paradigm-shifting work," Alvin writes, "mostly because they were small groups with enough institutional protection to pursue ideas that looked unproductive by conventional measures."

Alvin continues: "We have never been able to run controlled experiments on scientific institutions; it is impossible to create labs that differ in only one respect and compare the results. But we could run AI agents in parallel populations under different research conditions, and analyze the results... In this sense, AI scientists may give metascience its first model organism."

"For instance, one could test how group structure shapes discovery: do small, isolated teams produce more conceptual reorganization than large, well-connected ones? Do flat hierarchies outperform rigid ones? One could run AI agent populations that vary these factors independently and measure the results — something that is impractical to do with real institutions..."

This essay is excellent throughout and I hope you'll read it.
[2 images attached]
15 replies · 82 reposts · 481 likes · 34.8K views
Jonathan Ellis reposted
Crémieux @cremieuxrecueil
Across RCTs for ADHD medications, we often see outcomes like increased grades, reduced suicide risk, and lower odds of misbehavior. But one that I think is neglected is the notable increase in reported quality of life. These drugs make people happier about their lives!
[image attached]
Crémieux @cremieuxrecueil

DEA quotas and related controls are still driving an Adderall shortage. Unfortunately, the DEA is run by busybodies. When one manufacturer asked the DEA to please hurry up, the DEA responded by threatening to shut them down completely.

33 replies · 37 reposts · 631 likes · 37.6K views
Jonathan Ellis reposted
Crémieux @cremieuxrecueil
DEA quotas and related controls are still driving an Adderall shortage. Unfortunately, the DEA is run by busybodies. When one manufacturer asked the DEA to please hurry up, the DEA responded by threatening to shut them down completely.
[3 images attached]
11 replies · 47 reposts · 504 likes · 55.7K views
Jonathan Ellis reposted
Jeremy Howard @jeremyphoward
Opus & Sonnet 4.6 haven't been a great hit for most of my work, or our customers, since (as warned in their tech report) they're over-enthusiastic about agentically taking over, rather than letting the human lead. Any suggestions for competent models that are patient followers?
93 replies · 14 reposts · 381 likes · 81.6K views
Jonathan Ellis reposted
Ethan Mollick @emollick
GPT-5.4 Pro continues to be the only model of its class. For anything really hard & complex, I throw it into the maw with every bit of context I can think of. More often than not, something very useful comes out. I can't get the same results from Codex or Code or anything else.
172 replies · 113 reposts · 2.3K likes · 772.7K views