Josh Brown

281 posts


@hashbrown490

Co-founder of @RoamResearch

Joined May 2017
353 Following · 1.9K Followers
Josh Brown reposted
Roam Research @RoamResearch
✨ [[Quality of Life Improvement]] Autocomplete in Roam now shows live previews of pages and blocks, emphasizes matching terms, and shows reference counts so you can gauge relevance before you even link 🔗 Ctrl-O to toggle the preview on/off
[image]
3 replies · 8 reposts · 34 likes · 3.5K views
Josh Brown reposted
Andrej Karpathy @karpathy
One common issue with personalization in all LLMs is how distracting memory seems to be for the models. A single question from 2 months ago about some topic can keep coming up as some kind of a deep interest of mine with undue mentions in perpetuity. Some kind of trying too hard.
1.6K replies · 1K reposts · 19.6K likes · 2.3M views
Josh Brown reposted
Uncle Bob Martin @unclebobmartin
There is a definite Heroin aspect to all of this. I don't WANT to dig into the code. I WANT to trust claude -- at least at some level. I WANT to believe that there is a way to wield the lying SOB to generate good clean applications. So give me another hit and let me feel the euphoria again, while the world collapses around me.
77 replies · 29 reposts · 563 likes · 32.4K views
Josh Brown reposted
miss white @cinecitta2030
Wake up babe there’s a Portuguese Catholic priest who mixes Gregorian chants with industrial techno house music
1.5K replies · 10.8K reposts · 83.2K likes · 4.7M views
Josh Brown reposted
Addy Osmani @addyosmani
Every time we've made it easier to write software, we've ended up writing exponentially more of it. When high-level languages replaced assembly, programmers didn't write less code - they wrote orders of magnitude more, tackling problems that would have been economically impossible before. When frameworks abstracted away the plumbing, we didn't reduce our output - we built more ambitious applications. When cloud platforms eliminated infrastructure management, we didn't scale back - we spun up services for use cases that never would have justified a server room.

@levie recently articulated why this pattern is about to repeat itself at a scale we haven't seen before, using Jevons Paradox as the frame. The argument resonates because it's playing out in real time in our developer tools. The initial question everyone asks is "will this replace developers?" but just watch what actually happens. Teams that adopt these tools don't always shrink their engineering headcount - they expand their product surface area. The three-person startup that could only maintain one product now maintains four. The enterprise team that could only experiment with two approaches now tries seven. The constraint being removed isn't competence; it's the activation energy required to start something new.

Remember that internal tool you've been putting off because "it would take someone two weeks and we can't spare anyone"? Now it takes three hours. That refactoring you've been deferring because the risk/reward math didn't work? The math just changed.

This matters because software engineers are uniquely positioned to understand what's coming. We've seen this movie before, just in smaller domains. Every abstraction layer - from assembly to C to Python to frameworks to low-code - followed the same pattern. Each one was supposed to mean we'd need fewer developers. Each one instead enabled us to build more software.

Here's the part that deserves more attention imo: the barrier being lowered isn't just about writing code faster. It's about the types of problems that become economically viable to solve with software. Think about all the internal tools that don't exist at your company. Not because no one thought of them, but because the ROI calculation never cleared the bar. The custom dashboard that would make one team 10% more efficient but would take a week to build. The data pipeline that would unlock insights but requires specialized knowledge. The integration that would smooth a workflow but touches three different systems. These aren't failing the cost-benefit analysis because the benefit is low - they're failing because the cost is high. Lower that cost by "10x", and suddenly you have an explosion of viable projects. This is exactly what's happening with AI-assisted development, and it's going to be more dramatic than previous transitions because we're making previously "impossible" work possible.

The second-order effects get really interesting when you consider that every new tool creates demand for more tools. When we made it easier to build web applications, we didn't just get more web applications - we got an entire ecosystem of monitoring tools, deployment platforms, debugging tools, and testing frameworks. Each of these spawned their own ecosystems. The compounding effect is nonlinear. Now apply this logic to every domain where we're lowering the barrier to entry. Every new capability unlocked creates demand for supporting capabilities. Every workflow that becomes tractable creates demand for adjacent workflows. The surface area of what's economically viable expands in all directions.

For engineers specifically, this changes the calculus of what we choose to work on. Right now, we're trained to be incredibly selective about what we build because our time is the scarce resource. But when the cost of building drops dramatically, the limiting factor becomes imagination, "taste", and judgment, not implementation capacity. The skill shifts from "what can I build given my constraints?" to "what should we build now that the constraints have in some ways evaporated?"

The meta-point here is that we keep making the same prediction error. Every time we make something more efficient, we predict it will mean less of that thing. But efficiency improvements don't reduce demand - they reveal latent demand that was previously uneconomic to address. Coal. Computing. Cloud infrastructure. And now, knowledge work. The pattern is so consistent that the burden of proof should shift. Instead of asking "will AI agents reduce the need for human knowledge workers?" we should be asking "what orders-of-magnitude increase in knowledge work output are we about to see?"

For software engineers it's the same transition we've navigated successfully several times already. The developers who thrived weren't the ones who resisted higher-level abstractions; they were the ones who used those abstractions to build more ambitious systems. The same logic applies now, just at a larger scale. The real question is whether we're prepared for a world where the bottleneck shifts from "can we build this?" to "should we build this?" That's a fundamentally different problem space, and it requires fundamentally different skills.

We're about to find out what happens when the cost of knowledge work drops by an order of magnitude. History suggests we (perhaps) won't do less work - we'll discover we've been massively under-investing in knowledge work because it was too expensive to do all the things that were actually worth doing. The paradox isn't that efficiency creates abundance. The paradox is that we keep being surprised by it.
Aaron Levie @levie

x.com/i/article/2004…

126 replies · 624 reposts · 3.3K likes · 552K views
Josh Brown @hashbrown490
2-3 Claude Code instances is the sweet spot for my productivity. Any more and I'm producing slop; any less and I'm waiting for Claude to finish.
0 replies · 0 reposts · 3 likes · 366 views
Josh Brown reposted
patagucci perf papi @kenwheeler
i’m actually more tired now than pre ai, managing this parallelism is exhausting
24 replies · 8 reposts · 373 likes · 31.6K views
Josh Brown @hashbrown490
Claude also uses `git -C` instead of regular git commands. I can't allow `git -C`, because it can commit outside of the current repo; it should just prefer plain `git commit`.
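The risk can be sketched in a few lines of shell (a hypothetical demo with made-up temp-repo names): `git -C <path>` reruns git as if it had been started in `<path>`, so an allowlist entry for `git -C` effectively permits commits in any repository on disk, not just the one you are working in.

```shell
#!/bin/sh
# Hypothetical demo: `git -C <path>` runs the subcommand against ANY
# repository path, not the repo you are currently standing in.
set -e
work=$(mktemp -d)

git init -q "$work/current-repo"   # the repo the agent is supposed to touch
git init -q "$work/other-repo"     # an unrelated repo elsewhere on disk

cd "$work/current-repo"
echo data > "$work/other-repo/file.txt"

# From inside current-repo, stage and commit into other-repo via -C.
git -C "$work/other-repo" add file.txt
git -C "$work/other-repo" -c user.email=a@example.com -c user.name=agent \
    commit -q -m "outside commit"

# The commit landed in the *other* repo; current-repo still has no commits.
git -C "$work/other-repo" log --format=%s -n 1
```

Allowlisting plain `git commit` is narrower because it only ever acts on the repository containing the current working directory.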
0 replies · 0 reposts · 0 likes · 127 views
Josh Brown @hashbrown490
It makes it really difficult to run autonomous agents for long when they can't test their work. Also, commands like /tasks or /context should be able to run while my subagents are working.
1 reply · 0 reposts · 0 likes · 148 views
Josh Brown @hashbrown490
I can't get Claude Code to remember that it has a background task running and that it needs to check the background task to verify the code compiles. It's in my claude.md, and it should just remember that it has a task it can view. I think it might have to do with subagents not having the parent context? Or maybe compaction @bcherny
1 reply · 0 reposts · 2 likes · 203 views
Josh Brown @hashbrown490
me last week: who would need to run 5-10 claude codes at once? that's ridiculous
me this week: I need more screens to monitor all the claudes
0 replies · 0 reposts · 7 likes · 166 views
Josh Brown @hashbrown490
@BaptisteDupuch There are more reasons to choose TypeScript over ClojureScript than just LLMs. ClojureScript is very slow, interop is a pain, and it's nearly impossible to use ESM libraries, which the whole JS world is moving to.
0 replies · 0 reposts · 1 like · 111 views
Baptiste Dupuch @BaptisteDupuch
Something I didn't anticipate with LLM-assisted coding is how much it's standardizing people, companies, and codebases. I recently ran into a former colleague - smart, working in a Clojure shop - who chose TypeScript for a new project mainly to work better with Claude. It felt like watching someone take the red pill (with Clojure)… then go back for the blue one. Wild.
1 reply · 0 reposts · 5 likes · 191 views
Josh Brown @hashbrown490
Recently with opus 4.5 I've found it can often figure out the shape of the data we are working with purely from code, even when the function which created the object is in a separate file.
1 reply · 0 reposts · 1 like · 125 views
Josh Brown @hashbrown490
Interesting take on LLMs and dynamic languages. I’d always assumed Clojure was mostly at a disadvantage compared to its host languages.
DHH @dhh

@hubertlepicki LLMs are great at working with token-efficient languages like Ruby and other dynamically-typed environments. Tool calling gives them the same testing harness, LSPs, and linters as human programmers. And they put them to great use.

2 replies · 0 reposts · 2 likes · 223 views
Josh Brown reposted
Crémieux @cremieuxrecueil
I just had my face mauled by a pit bull. Ban this fucking breed and euthanize every single one.
875 replies · 464 reposts · 10.7K likes · 779.2K views
Josh Brown reposted
shadcn @shadcn
Interrupting my holiday break to say: iOS 26 is bad. Worse. It's not just liquid glass. Everything is now one or more extra taps away. Actions buried. Keyboard downgraded. Siri is essentially dead. Apple undid years of good work.
498 replies · 534 reposts · 11.5K likes · 1.9M views
Josh Brown @hashbrown490
@jdsimcoe @Apple Liquid glass made the icon look washed out, so I'm trying to fix that. The new one will also work with "clear" or "tinted" settings.
0 replies · 0 reposts · 1 like · 37 views
Josh Brown @hashbrown490
kind of ironic that @Apple's Icon Composer doesn't adjust its own icon color based on your settings
[image]
1 reply · 0 reposts · 2 likes · 266 views
Josh Brown @hashbrown490
One underrated effect of AI is that it gets me to start things I'd normally avoid for being "too hard". Even if it doesn't help in the end, it lowers the barrier enough for me to start.
1 reply · 1 repost · 6 likes · 242 views
Josh Brown reposted
Simon Willison @simonw
I see a lot of complaints about untested AI slop in pull requests. Submitting those is a dereliction of duty as a software engineer: Your job is to deliver code you have proven to work simonwillison.net/2025/Dec/18/co…
65 replies · 267 reposts · 1.9K likes · 203.5K views