I built Flux mostly for myself — a super simple, local Kanban board that runs in Docker, with a clean drag-and-drop UI, real-time updates, webhooks, and full MCP integration so my LLM can actually help manage tasks.
It’s deliberately unopinionated and hackable: no forced workflows, just a flexible source of truth for whatever way you (or your agents) want to work.
Thought others building with AI agents or self-hosted setups might find it useful too, so I open-sourced it to see what people do with it. Early days, rough edges, MIT license.
Feedback / ideas / brutal honesty very welcome! 🚀
Repo: github.com/sirsjg/flux
#selfhosted #opensource #Kanban #AIagents #MCP
#Earthquake possibly felt 39 sec ago in #Australia. Felt it? Tell us via:
📱m.emsc.eu/#app
🌐m.emsc.eu
🖥emsc-csem.org
⚠ Automatic crowdsourced detection, not seismically verified yet. More info soon!
I've been using voice-to-text with my AI/productivity tools lately and I can't go back.
Not because I'm lazy, but because I finally realized how absolutely insane it is that we've spent the last 40 years forcing ourselves to translate thoughts into finger movements on tiny plastic squares.
Think about it... when you have an idea, it exists in your head as language. Words, sentences, concepts. But to get it into a computer, you have to break it down into individual letter presses, hunting and pecking across a keyboard, constantly interrupting your flow to fix typos and formatting.
It's like having a conversation through morse code.
I used to think voice interfaces were gimmicky. Siri was frustrating, dictation software was clunky, and talking to your computer felt weird. I honestly didn't get it.
But something shifted in the last 4 months. The AI actually understands context now. It gets what I mean, not just what I said. And I can do things on the go.
Yesterday I used voice with Claude Code to spin up sub-agents that design, code, and market my app. I literally just talked to my computer and told it what I wanted built, and it coordinated an entire team of AI workers to make it happen. No typing, no clicking through interfaces, no managing different tools.
It felt like the future finally arrived.
We're moving from a world where humans had to learn computer language to a world where computers learned human language. And honestly, I'm here for it.
If voice input goes from 0.1% to 10% (not crazy...) of all computer interactions in the next few years, every interface we know becomes obsolete overnight. The entire software industry will have to rebuild around conversation, not clicks.
Wouldn't be surprised to see it happen.
What do you think?
The most unfortunately named function in Windows, I'd say. No bedtime stories...
And that's not even the scary part. It uses recursion!
But it wasn't me, someone else is credited on this function, so my zero-recursion record still stands :-)
This new op-ed by @DarioAmodei opposing the moratorium doesn't make a ton of sense. It criticizes the growing state patchwork, calls for federal transparency regulation, and says state regulation should be focused on transparency, too. Yet he opposes a moratorium? A thread.
Starship experienced a rapid unscheduled disassembly during its ascent burn. Teams will continue to review data from today's flight test to better understand root cause.
With a test like this, success comes from what we learn, and today’s flight will help us improve Starship’s reliability.
Looks like our snipers had eyes on the bad guy — but didn’t shoot first?
Maybe they were unsure if it was a friendly?
Going to be a lot of investigation of how this happened in the coming days.