are there people out there who just want to refactor every day?
just wake up and find the worst code and just chip away at it and clean it up
wake up the next day, do it again, infinitely improving things with zero external impact?
My (unproven) hypothesis would be that you could write the current in-flight tasks to disk and read them back on startup to kick them back off
Things you lose with this would be scheduled / future / cron, but that’s not my goal
I don’t need all the connections & polling for simple job retries. The database is always the bottleneck in the prod Elixir systems I’ve worked on. I think a stupid-simple retry would serve most use cases
Oban is great and when the needs dictate the move to it, I’d move to it. It just seems there’s room for a smaller, simpler starting point
@JohnElmLabs I’d say that for very simple retries an arrangement of Tasks would do it. But then, you’ll end up stumbling on things like “restart survival”, logging, error capturing, etc., etc. — and then you’ll find yourself writing your own Oban.
We call it “reinvent the wheel.”
I’ve always felt like for simple job / function retry there was a lighter dep you could bring in than Oban
I think I can finally vibe code it now
All OTP primitives and the file system… here goes nothin’
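A minimal sketch of the idea above, assuming DETS (OTP's disk-backed term storage) as the "file system" piece: jobs are persisted before they run, retried on failure, and re-kicked on boot so they survive restarts. Module, table, and file names here are hypothetical; there's no scheduling or cron, just persist, run, retry.

```elixir
defmodule SimpleRetry do
  use GenServer

  @table :simple_retry_jobs
  @max_attempts 5

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  # Jobs must be re-creatable across restarts, so take {module, fun, args}
  def enqueue(id, mfa), do: GenServer.cast(__MODULE__, {:enqueue, id, mfa})

  @impl true
  def init(_opts) do
    {:ok, _} = :dets.open_file(@table, file: ~c"simple_retry.dets")
    # On boot, kick off anything that was in flight when we went down
    :dets.foldl(fn {id, mfa, attempts}, acc -> run(id, mfa, attempts); acc end, :ok, @table)
    {:ok, %{}}
  end

  @impl true
  def handle_cast({:enqueue, id, mfa}, state) do
    :dets.insert(@table, {id, mfa, 0})
    run(id, mfa, 0)
    {:noreply, state}
  end

  @impl true
  def handle_info({:done, id}, state) do
    :dets.delete(@table, id)
    {:noreply, state}
  end

  def handle_info({:failed, id, mfa, attempts}, state) when attempts < @max_attempts do
    :dets.insert(@table, {id, mfa, attempts})
    # Naive linear backoff; a real system would want jitter
    Process.send_after(self(), {:retry, id, mfa, attempts}, attempts * 1_000)
    {:noreply, state}
  end

  def handle_info({:failed, id, _mfa, _attempts}, state) do
    # Out of attempts: give up (a real system would dead-letter this)
    :dets.delete(@table, id)
    {:noreply, state}
  end

  def handle_info({:retry, id, mfa, attempts}, state) do
    run(id, mfa, attempts)
    {:noreply, state}
  end

  defp run(id, {m, f, a} = mfa, attempts) do
    server = self()

    Task.start(fn ->
      try do
        apply(m, f, a)
        send(server, {:done, id})
      rescue
        _ -> send(server, {:failed, id, mfa, attempts + 1})
      end
    end)
  end
end
```

This is exactly the "restart survival" piece from the reply above; logging, error capture, and concurrency limits are the next things you'd stumble into.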
I feel like there is a layer missing right now in the development flow with AI.
We’re at the point where I want agents to authenticate and make purchase decisions on my behalf (with my approval). Why can’t I just give them a credit card and say: “find me an API that does X, ask for permission, and give me a key”
It’s easy to develop against these APIs, but there’s still friction in researching, setting up billing, generating credentials, etc.
Almost 5 months since launch, Tidewave has crossed 100k ARR! 🎊
The best part is that we can invest all profits back into the product, since we are not reselling/subsidizing tokens.
Thanks to everyone who is using it and for all of the feedback!
AI is amazing at writing code. Amazing.
It's awful, and I mean awful, at organizing code
It makes sense when you know AI is just a _really good_ next word predictor
I have been agentic coding in @ziglang for over two months and today switched over to @elixirlang to build a basic Phoenix app.
I can say definitively that with the same setup the LLM is *way* dumber writing Elixir than Zig. I suspect the following:
1. I'm writing a Phoenix app and not using Tidewave, so insight into the server environment is more difficult
2. my Zig work didn't require any outside dependencies, so the entire API surface is just the Zig stdlib
3. Zig's type system
I have some ideas how to improve this, going to experiment.
@mmmykolas Yes, a checkbox + label can absolutely be styled to work with just the form
In practice, I find few devs I've worked with know you can style labels like that ;)
I've read your post.
You can still simplify this. You don't need javascript.
<.form for={@form} phx-change="validate" phx-submit="save">
<.input
type="checkbox"
field={@form[:has_referral_code]}
class="peer sr-only"
/>
<.input
field={@form[:referral_code]}
label="Referral Code"
class="hidden peer-checked:block mt-3"
  />
</.form>
The Top 3 LiveView Form Mistakes (and how to fix them)
Prevent poor UX, brittle systems, and impossible states by using these 3 strategies.
johnelmlabs.com/posts/top-3-li… #MyElixirStatus
@bcardarella I think you'd benefit from github.com/chrismccord/web -- Converts html to markdown & can execute JS, which is good for stuff like Apple's docs.
Apple docs can also be read by sosumi.ai MCP server
LLM Codegen against well written specs is kind of magical. However, there are some optimizations I've had to do:
1. fetch original spec document (usually in HTML)
2. pandoc convert to markdown
3. ask LLM to analyze and remove anything from the document that is unnecessary and optimize for tokenization
Without losing spec fidelity:
DOM: 760k tokens to 62k tokens
HTML: 3.7M tokens to 750k
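Steps 1–2 above can be sketched in a few lines of Elixir. The URL and filenames are placeholders, and it assumes `pandoc` is installed and on your PATH; step 3 is a separate LLM pass over the resulting markdown.

```elixir
# Hypothetical sketch of the spec-shrinking pipeline above.
Mix.install([{:req, "~> 0.5"}])

# 1. fetch the original spec document (placeholder URL)
html = Req.get!("https://example.com/spec.html").body
File.write!("spec.html", html)

# 2. pandoc converts HTML -> GitHub-flavored Markdown
{_out, 0} = System.cmd("pandoc", ["-f", "html", "-t", "gfm", "spec.html", "-o", "spec.md"])

# 3. happens elsewhere: hand spec.md to the LLM and ask it to strip
# nav, changelogs, and boilerplate without touching normative language.
IO.puts("markdown bytes: #{File.stat!("spec.md").size}")
```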