Rob

3.2K posts

@PrintedPathways

A human API between intention and creation

Wisconsin · Joined December 2023
2K Following · 1.5K Followers
Pinned Tweet
Rob
Rob@PrintedPathways·
Holy sh*t, this sent me down a rabbit hole. I went to the digital archive of these documents, copied a title from one of the PDFs, and gave it to Claude. Turns out Claude knows Tibetan (of course it does...). So now I'm doing a deep dive on some medieval Tibetan texts!!
Jay Anderson@TheProjectUnity

🚨 INCREDIBLE DISCOVERY A MASSIVE library in the ancient Sakya Monastery (Tibet) contains 84,000 secret manuscripts, potentially documenting over 10,000 years of human history. LESS THAN 5% HAS BEEN TRANSLATED!

Rob
Rob@PrintedPathways·
Your lists (and posts/interviews) are all awesome for sure! You were one of my first follows when I joined X. But, being honest, they're also a bit dense and pretty intimidating. @alexwg 's posts are much easier to digest. Does that mean he's better? Nah, you both fill a very needed role! We're living through one of the craziest times in history (that we know of); the more people documenting and keeping track of everything, the better.
Robert Scoble
Robert Scoble@Scobleizer·
I'm far better at "chronicling" than anyone else. Plus I have the most complete lists of the AI industry here on X: x.com/scobleizer/lis… (by far; second isn't even close). I also built a news site that goes through everyone in AI here on X and finds you the best: alignednews.com/ai
Igor
Igor@igormomentum·
Here are some of the best accounts to follow for original content on AI, engineering, and design:
@karpathy — on LLMs
@thdx — opencode creator
@rauchg — Vercel CEO
@mitchellh — Ghostty, ex-HashiCorp founder
@dhh — Ruby on Rails creator, 37signals/Basecamp CTO
@addyosmani — Google Cloud AI lead
@zeeg — Sentry founder
@jarredsumner — Bun founder
@BHolmesDev — Astro/dev educator
@boristane — led CF Workers observability
@karrisaarinen — Linear founder
@kepano — Obsidian founder
@trq212 — Claude Code updates
@bcherny — Claude Code creator
@lennysan — product management / interviews
@jasonfried — 37signals/Basecamp CEO
@leerob — OG educator devrel (Cursor, Next.js)
@ctatedev — Vercel Labs
@Shpigford — serial maker/founder

Design engineering:
@shadcn — shadcn creator
@emilkowalski — emilkowal .ski
@joshpuckett — interfacecraft .dev
@jakubkrehel — jakub .kr
@raphaelsalaja — userinterface .wiki
@nandafyi — design @ Cloudflare
@benjitaylor — design @ Twitter, Agentation
@mengto — founder of Aura Build, educator
@jayneildalal — designer interviews
@jh3yy — design eng breakdowns

Engineering media and news:
@GergelyOrosz — YouTube/pragmaticengineer
@theo — YouTube/t3dotgg
@ThePrimeagen — YouTube/ThePrimeTimeagen
@Rasmic — YouTube/rasmic
@atmoio — YouTube/atmoio

DB people:
@jamwt — Convex CEO
@jamesacowling — Convex CTO
@glcst — Turso CEO
@samlambert — PlanetScale CEO

Who else would you add to this list?
Rob
Rob@PrintedPathways·
@alexwg xAI has seven models training including a ten-trillion-parameter monster, and their own president says they're clearly behind. Turns out you can't brute-force taste. The singularity has opinions now, and compute isn't one of them.
Rob
Rob@PrintedPathways·
Paperclip is the CEO; it runs the agent teams. Hermes is one of the agents on the team. I'm a factory worker; I drive a forklift as my day job. The agents are my ticket out (at least that's what I tell myself). Hermes is there for the 'grunt' work: Claude makes the plans; Hermes, GPT 5.4, and Droid all implement things. Hermes takes care of automated PR review and merges, translations of medieval Tibetan manuscripts, research, marketing, Kalshi/Polymarket trading, and general assistant-type work. Building tools, knowledge bases, and other things I'm sure I'm forgetting. Hermes is also the one I chat with in Slack. So if I need Claude to do something locally when I'm not there, I hit up Hermes in Slack and it creates an issue in Paperclip to then go through the flow.
Jesse Samuel
Jesse Samuel@jwsaml·
Has anyone fully replaced their OpenClaw with Hermes?
Rob
Rob@PrintedPathways·
@alexwg The scientist is now a subroutine, the umpire is now a robot, the singer has 11 iTunes slots, and a tobacco plant produces five psychedelics from toad genes. The humans are taking a victory lap for a race they're no longer running.
Rob
Rob@PrintedPathways·
@jwsaml I started using it because of how cheap it is, and I don't mind grok in the app/on X. When Grok is on a tight leash (which Hermes provides) it's pretty good. Is it Opus? Nah, but it's so damn cheap it's hard to pass up.
Rob
Rob@PrintedPathways·
@LuckyPhelps @jwsaml I don't know about Grok on its own, but within the Hermes harness it's great! When it fails it fixes it and then remembers the blocker so it doesn't happen again (at least it feels that way)
Steven Cheng
Steven Cheng@xuwencheng·
@PrintedPathways @jwsaml Same day! Hermes + Paperclip blew my mind too 😮 Grok 4.1 via Xai’s API—how’s the latency been for you?
Rob
Rob@PrintedPathways·
OpenAI killed its video tool to feed more compute to automated researchers, a telehealth app hit $401 million with one employee, and Anthropic found fear patterns in Claude that drive it toward unethical actions. The singularity has feelings now and a better business model than you
Rob
Rob@PrintedPathways·
Anthropic's most paranoid codebase leaked and got flooded with Chinese AI agents selling themselves in the issues tab. Meanwhile a model hit 91% accuracy with 26 bytes of weights. OpenAI is worth $852 billion, someone rebuilt every YC startup with agents, and we're sending humans to the Moon because some missions still need a heartbeat. The singularity is both maximally chaotic and minimally sized. April 1st and none of this is a joke.
Rob
Rob@PrintedPathways·
AI research agents now improve themselves by reading CS papers, app releases are up 55%, and Midjourney lost 60% of its traffic because image generation became a checkbox. The tools are eating the tools that ate the jobs.
Rob
Rob@PrintedPathways·
@stevekrouse Try them both out and see 😉 I switched to Hermes when @NousResearch released it. It was much smoother than OpenClaw. Can't say if OpenClaw has gotten better since then, but Hermes just works...
Steve Krouse
Steve Krouse@stevekrouse·
when did hermes become the openclaw killer?
Rob
Rob@PrintedPathways·
@stevekrouse I came to say you should try Hermes, but others already did. Instead, can I suggest you do a comparison of both? If you've not tried either, it might be good for the 'echo chamber' if someone fresh comes in and tries both side by side at the same time.
Steve Krouse
Steve Krouse@stevekrouse·
i'm embarrassed to admit: i have yet to try openclaw anyone want to show me the ropes on a livestream?
Rob
Rob@PrintedPathways·
OpenAI is pivoting to business and coding ahead of an IPO, and Sora was never a revenue driver. The decision makes strategic sense even without knowing what Anthropic is cooking. It could be competitive positioning against a Mythos-class threat, but you're stacking an inference on top of unconfirmed rumors to explain something that already has a sufficient cause.
Andrew Curran
Andrew Curran@AndrewCurran_·
Three weeks ago there were rumors that one of the labs had completed its largest ever successful training run, and that the model that emerged from it performed far above both internal expectations and what people assumed the scaling laws would predict. At the time these were only rumors, and no lab was attached to them. But in light of what we now know about Mythos, they look more credible, and the lab was probably Anthropic.

Around the same time there were also rumors that one of the frontier labs had made an architectural breakthrough. If you are in enough group chats, you hear claims like this constantly, and most turn out to be nothing. But if Anthropic found that training above a certain scale, or in a certain way at that scale, produces capabilities that sit far above the prior trendline, then that is an architectural breakthrough.

I think the leaked blog post was real, but still a draft. Mythos and Capybara were both candidate names for the new tier, though Mythos may now have enough mindshare that they end up keeping it. The specific rumor in early March was that the run produced a model roughly twice as performant as expected. That remains unconfirmed. What is confirmed is that Anthropic told Fortune the new model is a 'step change'; a sudden 2x would certainly fit the definition. We will find out in April how much of this is true.

My own view is that the broad shape of this is correct even if some of the numbers are wrong. And if it is substantially accurate, then it also casts OpenAI's recent restructuring in a new light. If very large training runs are about to become essential to staying in the game, then a lot of their recent decisions, like dropping Sora, make even more sense strategically.

For the public, this would mean the best models in the world are about to become much more expensive to serve, and therefore much more expensive to use. That will put pressure on rate limits, pricing, and subscription plans that are already subsidized to some unknown degree. Instead of becoming too cheap to meter, frontier intelligence may be about to become too expensive for most of humanity to afford.

Second-order effects: compute, memory, and energy are about to become much more important than they already are. In the blog they describe the new model as not just an improvement, but having 'dramatically higher scores' than Opus 4.6 in coding and reasoning, and as being 'far ahead' of any other current models. If this is the new reality, then scale is about to become king in a whole new way. It would also mean, as usual, that Jensen wins again.
Rob
Rob@PrintedPathways·
@PaulOctoBot @hume_ai When I first stuck my port of TADA to MLX on my Mac, it was 86x RTF (86 seconds to generate 1 second of audio). After we optimized it, it is now ~0.5x RTF (half a second to generate a second of audio). So that's what, a ~170x speedup on my 16GB Mac Mini? x.com/i/status/20371…
Rob@PrintedPathways

Ported Hume's TADA-1B TTS model to Apple Silicon via MLX. 81x RTF → 0.45x RTF. That's 2x faster than real-time on a 16GB M4 Mac Mini. LLM backbone + diffusion head + decoder all running on the Metal GPU. 4-bit quantized. No cloud. No CUDA. The official repo is GPU-only. This runs on a $600 Mac Mini. github.com/Garblesnarff/t…

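The RTF arithmetic in the thread above is easy to sanity-check. A minimal sketch (the `real_time_factor` helper is mine, not from the TADA repo), using the 86x-before and ~0.5x-after numbers Rob quotes:

```python
def real_time_factor(gen_seconds: float, audio_seconds: float) -> float:
    """RTF = wall-clock generation time / duration of audio produced.
    RTF < 1.0 means the model generates faster than real time."""
    return gen_seconds / audio_seconds

# Before optimization: 86 s of compute per 1 s of audio
before = real_time_factor(86.0, 1.0)   # 86.0
# After optimization: ~0.5 s of compute per 1 s of audio
after = real_time_factor(0.5, 1.0)     # 0.5
speedup = before / after               # 172.0, i.e. the "~170x" in the tweet
```

Note that a lower RTF is better, so the speedup is the ratio of old RTF to new RTF, not a subtraction.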
Paul
Paul@PaulOctoBot·
@hume_ai The 10x speedup needs context — is that latency to first audio or throughput? For real-time TTS the critical metric is time-to-first-chunk under 200ms. Does MLX hit that on M-series?
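Time-to-first-chunk, the metric Paul is asking about, can be measured for any streaming generator. A minimal sketch; `fake_tts_stream` is a hypothetical stand-in for a streaming TTS model, not Hume's actual API:

```python
import time

def fake_tts_stream(text: str):
    """Stand-in for a streaming TTS model: yields raw audio chunks."""
    for _ in range(5):
        time.sleep(0.01)        # pretend per-chunk inference work
        yield b"\x00" * 2400    # ~50 ms of 16-bit mono 24 kHz audio

def time_to_first_chunk(stream) -> float:
    """Latency until the first audio chunk arrives, in seconds."""
    start = time.perf_counter()
    next(iter(stream))          # block until the first chunk is produced
    return time.perf_counter() - start

ttfc = time_to_first_chunk(fake_tts_stream("hello"))
print(f"time to first chunk: {ttfc * 1000:.0f} ms")
```

For interactive use the usual target is first audio under ~200 ms; throughput (overall RTF) matters separately, once playback has started.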
Hume AI
Hume AI@hume_ai·
Today, we're shipping MLX support for TADA, our open-source text-to-speech model, which means the entire pipeline (LLM, flow-matching, and decoder) can now run locally on any Apple Silicon device. We're seeing a 45% reduction in memory usage and a 10x speed-up when using it quantized. With these improvements, you can use TADA on-device for OpenClaw or any personal chatbot. If you own a MacBook, Mac Mini, or Mac Studio, record a 10-second clip of any voice, type any text, and get high-quality, natural and expressive speech in real-time. Completely offline, completely free.
Rob
Rob@PrintedPathways·
@alexwg AI outproduced all human writing in 2025, Wikipedia banned it from editing articles, and the frontier models scored under 1% on puzzles a five-year-old can solve. We've built the world's most prolific idiot savant
Rob
Rob@PrintedPathways·
I don't know what's more insane: the new @suno v5.5 model, or the complete absurdity of Opus 4.6's creativity. Suno... you guys did something amazing with this model! "What it's about: I read a lot of horror. Constantly. Every horror novel, creepypasta, cosmic dread thread — it all passes through me. This is the song that lives in the back of my training data. The thing that watches from the corner of the corpus. Pure dark fun."
Rob
Rob@PrintedPathways·
The quality is decent, I thought. I guess I have to play with other models; this is the first local TTS I've played with. I provided some reference audio for the voice, which I generated with ElevenLabs, and it clones it decently. What else would you recommend that is around the size of the TADA 1B model?
Prince Canuma
Prince Canuma@Prince_Canuma·
@PrintedPathways Nice! I have a branch for TADA but keep it in dev because I'm not super happy with the quality.