José Valim

2.1K posts

@josevalim

Creator of @elixirlang. Chief Adoption Officer at @dashbit, where we build https://t.co/FK8F4URbVG and https://t.co/xncEVrvWml.

Kraków, Poland · Joined November 2007
89 Following · 54.5K Followers
Pinned Tweet
José Valim@josevalim·
It is finally here: Tidewave now supports Claude Code and OpenAI Codex. Tidewave unlocks the full-stack potential of your favorite coding agent by tightly integrating it with your web app and web framework at every layer, from UI to database. More info 👇
14
47
305
78.9K
José Valim@josevalim·
@RitikShilp80441 Nope, console.log would be too verbose, because you have to instrument every DOM element rendered on the page.
0
0
0
198
Ritik@RitikShilp80441·
@josevalim so you added console.logs?
1
0
0
213
José Valim@josevalim·
Calling back to when we replaced RAG with TAG (Trace Augmented Generation) and doubled the accuracy of fixing DOM elements across Next.js, Rails, and Phoenix (and cut the time in half too): tidewave.ai/blog/improving…
José Valim tweet media
1
8
61
2.4K
José Valim@josevalim·
@Code_of_Kai If I had to summarize: 1. Find the correct boundaries (including a pure core and an effectful shell) 2. Do not abstract prematurely 3. Data > Modules > Processes
2
3
6
602
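José's first point, a pure core with an effectful shell, can be sketched in a few lines of Elixir. This is a minimal illustration, not code from Tidewave or any of his projects; `Cart` and `Cart.Server` are invented names:

```elixir
defmodule Cart do
  # Pure core: plain data in, plain data out.
  # Testable without starting any process.
  def total(items) do
    Enum.reduce(items, 0, fn %{price: price, qty: qty}, acc ->
      acc + price * qty
    end)
  end
end

defmodule Cart.Server do
  use GenServer

  # Effectful shell: process state and messaging live at the edge,
  # delegating every business decision to the pure core.
  def start_link(items), do: GenServer.start_link(__MODULE__, items)

  @impl true
  def init(items), do: {:ok, items}

  @impl true
  def handle_call(:total, _from, items), do: {:reply, Cart.total(items), items}
end
```

The payoff is in testing: `Cart.total/1` can be exercised directly with literal data, while the GenServer only needs a smoke test.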
José Valim@josevalim·
I'd say I'd give the opposite advice for 1, 2 and 3. :D I often say you only need a behaviour the third or fourth time you repeat yourself. I understand why you would have the opposite impression though: you are consuming all of the public behaviours/protocols that were defined, but behind them, there are many more which we explicitly chose not to expose! PS: The advice also changes depending on whether you are building a library or an application.
2
1
25
1.6K
Code_of_Kai@Code_of_Kai·
Ask Claude Code to add this to its memory if you are an Elixir Developer. Feel free to customise it, of course.

**What Would @josevalim Do?**

---

When designing any module, abstraction, or system in Elixir, ask: how would José Valim design this?

**1. Behaviours over concrete implementations.** If you're writing a GenServer, ask whether you should be writing a behaviour that others implement. If two modules both follow the same interaction pattern — that's a behaviour, not two separate GenServers.

**2. Protocols define the shape of interaction, not the content.** GenStage doesn't know what it's processing. Broadway doesn't know what it's batching. Plug doesn't know what HTTP framework it's serving. Ecto.Repo doesn't know what database it's talking to. Your abstractions shouldn't know either.

**3. Compose, don't couple.** If two things share a pattern, extract the pattern first, then derive both as implementations. Don't build Thing A and Thing B separately and notice they rhyme later. Notice the rhyme first.

**4. Pure core, effectful shell.** Each module does one thing. Pure functions at the center, side effects pushed to the edges. The core logic is testable without starting processes. This is how Enum, Stream, Ecto.Query, and Plug.Conn all work — pure data transformations with effects at the boundary.

**5. The library is the unit of solution.** Before writing a new module, ask: does an existing library already solve this class of problem? Before writing a GenServer, check the list:

- State machine? → **gen_statem / GenStateMachine**
- Data pipeline? → **GenStage** or **Broadway**
- Batch processing? → **Broadway**
- Distributed processes? → **Horde**
- Pub/sub? → **Phoenix.PubSub**
- Background jobs? → **Oban**
- One-off async work? → **Task**
- Simple shared state? → **Agent**
- ML inference? → **Nx.Serving**
- Event sourcing? → **Commanded**
- Graph traversal? → **libgraph**
- Caching? → **Cachex** or **Nebulex**
- HTTP client? → **Req** or **Finch**
- Connection pooling? → **NimblePool** or **DBConnection**
- Counters/flags? → **:counters**, **:atomics**, **:persistent_term**

If yes, use it. If almost, wrap it. Only build from scratch when nothing fits.

**6. Respect the ecosystem's grain.** Broadway replaced most Flow use cases. Nx replaced most data-parallel loops. The ecosystem evolves — don't reach for yesterday's tool when today's is better. When in doubt: Broadway for pipelines, Nx for computation, gen_statem for state machines, Phoenix.PubSub for messaging.
2
0
8
1.7K
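Rule 1 of the checklist above can be sketched with Elixir's behaviour mechanics. The `Notifier`, `LogNotifier`, and `Alerts` names are invented for this example; only `@callback`, `@behaviour`, and `@impl` are the real language features being demonstrated:

```elixir
defmodule Notifier do
  # The behaviour names the interaction pattern once...
  @callback deliver(message :: String.t()) :: :ok | {:error, term()}
end

defmodule LogNotifier do
  @behaviour Notifier

  # ...and each implementation fills in the content.
  @impl Notifier
  def deliver(message) when is_binary(message), do: :ok
end

defmodule Alerts do
  # Callers depend on the behaviour and receive the module at runtime,
  # so adding a second implementation requires no changes here.
  def dispatch(notifier, message), do: notifier.deliver(message)
end
```

Swapping in an `EmailNotifier` (or a test stub) is then a matter of passing a different module to `Alerts.dispatch/2`.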
José Valim retweeted
Elixir by Software Mansion@swmansionElixir·
New version of the Elixir Language Tour is here! 🚀 In this release we vastly extended the Processes chapter, so you can learn & play with core OTP components: Links, Agents, GenServers and Supervisors. The tour runs fully in your browser – all thanks to Popcorn 🍿 Try it out: elixir-language-tour.swmansion.com/introduction @elixirlang
1
46
154
11.7K
aislop@lazysloth·
@josevalim did you see that open.ai acquired them? kinda cool your name is in the paper, also they got that symphony.
1
0
3
277
José Valim@josevalim·
It is very cool to see the work we have done on the Elixir type system is already benefiting other communities, such as Python's ruff/ty: github.com/astral-sh/ruff…
2
19
175
15.8K
José Valim@josevalim·
And I just wrote about the last round of optimizations we did, this time for differences: elixir-lang.org/blog/2026/03/1… That should be the last one as we have optimized unions, intersections, and differences! And here I thought nobody was reading those articles.😅
0
3
37
1.9K
José Valim@josevalim·
I have a similar take to the article, except I’d say the tests are the primary mechanism for project specification. On the other hand, there is still work to be done, because models often write bad tests and, when they break, models aren’t sure when to change tests vs code. But overall I agree with the premise that tests/code are the spec and we should be aiming to improve that instead.
0
1
7
694
dax@thdxr·
i'm so glad to see all this because i had a gut aversion to writing specs. it felt like as much work as writing the code, and it's way more fun thinking through things by writing the code (with ai). you all articulated why that is better - was worried i'd be forced into writing specs one day
13
2
172
12.9K
dex@dexhorthy·
damn this is so good and encapsulates everything I've been seeing/saying in the last few months

- a spec that is sufficiently detailed to generate code with a reliable degree of quality is roughly the same length and detail as the code itself - so don't review those things, just review the code at that point, if you care enough about that level of abstraction
- unless you're vibing side projects or prototypes (yes, even zero-to-one software), you ABSOLUTELY SHOULD care about the code at that level of abstraction
- you need to find SOME way to get more leverage over coding agents though, because just reading all that code is a pain, esp when a lot of it is slop
- the default/dare-i-say-decel way is to go back to "i own the execution, and give little things to the agent, check it along the way"
- the accel-but-safe way is to find something - NOT A SPEC (the word "spec" is broken anyway) - NOT 3 INVOCATIONS OF AskUserQuestion - that lets you resteer the model *before* it slops out N-thousand LOC
gabby@GabriellaG439

New blog post: "A sufficiently detailed spec is code" I wrote this because I was tired of people claiming that the future of agentic coding is thoughtful specification work. As I show in the post, the reality devolves into slop pseudocode haskellforall.com/2026/03/a-suff…

31
30
531
250.9K
José Valim retweeted
gabby@GabriellaG439·
New blog post: "A sufficiently detailed spec is code" I wrote this because I was tired of people claiming that the future of agentic coding is thoughtful specification work. As I show in the post, the reality devolves into slop pseudocode haskellforall.com/2026/03/a-suff…
117
267
2.5K
414.5K
José Valim@josevalim·
What is your favorite git worktree integration for coding agents? Why?
22
7
85
18.9K
José Valim retweeted
ElixirConf@ElixirConf·
🌆Elixir community is coming to Chicago: Big ideas. Great architecture. Deep-dish-fueled discussions. 📅 September 9-11 🎤 CFP open now 🗓 Deadline: April 12 elixirconf.com
ElixirConf tweet media
1
3
16
1.8K
José Valim retweeted
ElixirConf Europe@ElixirConfEU·
Excited to announce @remote as a Gold Sponsor for ElixirConf EU 2026! Building global HR solutions for distributed teams. Thank you for supporting the community. elixirconf.eu/#sponsors
ElixirConf Europe tweet media
0
2
11
1.8K
José Valim@josevalim·
Sorry, I was going to reply immediately after "Someone should", but I got sidetracked. :D In short: I don't think so. I still believe humans have to be in the loop and because we are the bottleneck, we have to optimize for us. Elixir was the language I wanted to read/write and LLMs would not have changed that. Maybe my answer will change in the future.
0
0
1
58
Richard Cook@rr_cook·
@josevalim @mikehostetler Hey @josevalim if LLMs were around when you hit the concurrency issues in Ruby, would you have just had them write what you needed in Erlang instead of inventing Elixir?
1
0
1
48
Mike Hostetler // Chief Agent Officer
I mean - last time I wrote assembly was college. So you think we will move up the stack? Is it time to invent a new language?
Eric S. Raymond@esrtweet

My experience with LLM-assisted coding has been great and I'm a big fan of it, but I've just had a slightly depressing realization. It may almost entirely shut down the development and adoption of new computer languages.

The percentage, and probably the absolute amount of code, handwritten by humans is going to fall a great deal. But for the foreseeable future, LLMs won't be able to write code fluently in a specific language without having a large volume of good code in that specific language already available to train on. For a new language in 2026 and after, where exactly is that large volume of good training data going to come from? Probably not from human beings, and where is the incentive for an LLM handed a vibecoding task to go looking for an exotic new language to do it in?

I find this slightly depressing, because I enjoy contemplating new-language development the way a more physical tinkerer enjoys salivating over shiny new tools. Human beings are still going to write new languages occasionally, because that's huge fun (if you have a brain bent anywhere like the way mine is) and still a way to climb some status ladders. But with the barrier to mass adoption getting so much higher, I have to think the level of research and engineering activity put into this is going to drop a lot.

There is one not-unhappy but rather weird way I could be wrong about this. Historically, once the development of compilers got to a certain point it became clear that designing machine instruction sets to be easily reasoned about by humans was a big mistake. We had to figure out how to design machine instruction sets that were easy for the compilers to reason about. Thus, RISC.

It could be that's the future of language design, too. But I have no idea what a new language design optimized for LLM code generation would look like. And I don't think anybody else does, either. Interesting times, indeed.

7
0
12
1.9K
José Valim@josevalim·
My favorite feature so far is the canvas with multiple frames side by side. Presenting a canvas and having multiple agents have always been on the roadmap for Tidewave, but as distinct features. I really like the way they put them together. PS: a first version of the tweet said "it was for non-devs" but I don't think that accurately describes Replit Agents. I'd say apps like @get_mocha better represent non-technical audiences.
0
2
10
3.1K
José Valim@josevalim·
We started tidewave.ai to bring the agentic experience from apps like @Lovable into your favorite web framework, *locally*. And Replit Agent 4 is a great example of how good they've gotten. It neatly packages many of the ideas we've seen around lately. Kudos to them!
Amjad Masad@amasad

Software isn’t merely technical work anymore. It’s creative. Introducing Replit Agent 4. The first AI built for creative collaboration between humans and agents. Design on an infinite canvas, work with your team, run parallel agents, and ship working apps, sites, slides & more.

3
2
66
9.6K
José Valim@josevalim·
The general answer is: every time you see "stupid" stuff, write it to the AGENTS.md or similar, so it happens less frequently. But I agree it is unlikely you get the experience out of the box. This is a good article from @_lopopolo that shows how much work is necessary on the tooling side to get to smooth sailing: openai.com/index/harness-…
0
1
3
99
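As a sketch of the AGENTS.md habit described above, an entry might look like the following. The specific rules are hypothetical examples invented for illustration, not taken from any real project:

```markdown
# AGENTS.md

## Corrections from past sessions
<!-- Every time the agent does something "stupid", record the fix here. -->
- Do not edit generated files under priv/static; change the source assets instead.
- Always run `mix format` and `mix test` before declaring a task done.
- Prefer pattern matching in function heads over nested `if`/`case`.
```

The point is the feedback loop: each recorded correction makes the same mistake less likely in future sessions.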
Tomasz Kowal@snajper47·
@elixirtap @wtsnz It was Sonnet 4.6 1M. I switched later to Opus 4.6 1M and it still does similarly stupid stuff. Just less often. That is why I am surprised when people talk about leaving it unattended or even running a fleet.
2
0
1
48
Will Townsend@wtsnz·
I continue to be insanely shocked with Ash / Elixir / Codex (GPT-5.4 + 5.4 Pro). I've used nearly 3 billion tokens in the last 3 days and completed months of work. Coding is 100% solved in my workflow. Nothing feels too hard anymore. Massive features take a bit of time to shape into something that looks good, then 20–30 minutes to execute.

The most important thing for longevity and flexibility is keeping an eye on the system's general “health” to reduce slop—which, again, is all automated and easy thanks to my existing engineering-systems experience. Right now the project is in a really good place; it's just executing features with better results than ever. #noslop

I haven't been this excited since I first started programming. The new feeling to get used to is the speed at which the code changes—though I'm sure once it shapes up, the churn will drop as a side effect of a well-discovered domain model. The ideal place to be. I'm finally “playing” with software the way I've been describing for nearly two years!!!! I am a shape rotator.
Will Townsend tweet media
8
3
52
5K
José Valim@josevalim·
I love running "hail mary" prompts like this. A few weeks ago I prompted Opus to find code loading optimizations in the Erlang/OTP code base. It came up with 6-7 options, 3 of which I could automatically discard, and I asked it to build experiments for the remaining ones. Out of those, 1 was clearly successful, which I then wrapped up and now Erlang/OTP 29 will boot 10% faster for everyone. /autoresearch from @karpathy seems to package this experience into a tighter loop and, if it can find something meaningful, it stands to benefit everyone, especially on OSS. Can't wait to try it and maybe "hail mary" a few other optimizations.
tobi lutke@tobi

OK, well. I ran /autoresearch on the liquid codebase. 53% faster combined parse+render time, 61% fewer object allocations. This is probably somewhat overfit, but there are absolutely amazing ideas in this.

5
19
325
35.4K
José Valim retweeted
Hugo Baraúna@hugobarauna·
Using Tidewave to improve the onboarding in Tidewave
Hugo Baraúna tweet media
2
1
34
3.4K