R.

2.9K posts

@RichDoesTech

Christian | 1x Husband | 5x Dad | Serial bootstrapped Founder | Building private AI tools that work offline @TryYaps (Sign up to the https://t.co/4B2vBqXwEl waitlist 👋)

London · Joined October 2020
420 Following · 3.2K Followers
Pinned Tweet
R.
R.@RichDoesTech·
# Day 1 Recently I've been focused on building a privacy-first, UX-friendly AI tool that handles a full suite of AI voice tools (e.g. dictation, screen reading, and much more), slowly expanding into other domains. The hope is to address privacy, tool fatigue, latency, and several other pain points. But behind the product, I'm challenging some assumptions that undergird the very root of how pre-LLM startups were built. I'll try to write more extensively on this. But for now, enjoy the UI from my latest Claude Code refactor 😅 (P.S. Shoutout to @DannPetty on reframe)
R. tweet media
5 replies · 2 reposts · 28 likes · 2K views
R.
R.@RichDoesTech·
@iamelijahkhan Never thought about it, assumed the US. Then when I saw the Cockney feature I started second guessing. The video confirmed it 😅
0 replies · 0 reposts · 0 likes · 2 views
E L I J A H
E L I J A H@iamelijahkhan·
Day 14 of Accentify → Content creation pipeline is complete ✅ My OpenClaw is managing a Notion pipeline of content, scheduling to Postiz, and tracking analytics to maximise conversions + Released new accent course - Cockney. The raw, authentic sound of the East End of London
E L I J A H@iamelijahkhan

Day 13 of Accentify → $1200 MRR + 200 paying users! 🎉 We crossed $1000 MRR 9 days ago and now we're well on the way to $2000 MRR > Organic social media workflow w/ OpenClaw is complete (hopefully 🤞) > Unlimited UGC hook videos?! P.S. Count how many times I say "cooked" 💀

2 replies · 1 repost · 3 likes · 69 views
R.
R.@RichDoesTech·
First thoughts on @omma_ai, really beautiful! One-shotted a functional animated background generator that I can use for screenshots, videos, etc. This is promising, especially considering I gave it no real visual direction.
2 replies · 0 reposts · 9 likes · 68 views
R.
R.@RichDoesTech·
Big week ahead, bigger month ahead!
1 reply · 0 reposts · 3 likes · 42 views
R.
R.@RichDoesTech·
When some bob, others will weave. Some will compete on evals, others on price. Some will fight to be the best, others to be the most affordable. Compute is almost certainly far more expensive than what's currently being charged, agreed. But there's a misconception that everyone is willing to pay full price. I don't think that's true, and I do think companies will invest more in open-source models or cheaper proprietary solutions rather than stay at the mercy of ever-increasing prices. Also, a rising tide lifts all boats. Models like Kimi facilitate the release of a Composer 2: a model that runs at roughly 1/20th the price of other frontier models, gets trained on real-time data with a 5-hour post-training cycle, and, for coding (based on the evals), is likely "good enough" for most tasks. These divides between the various companies/labs will only continue to grow imo, leading to stronger divides in their respective ICPs.
Andrew Curran@AndrewCurran_

Three weeks ago there were rumors that one of the labs had completed its largest ever successful training run, and that the model that emerged from it performed far above both internal expectations and what people assumed the scaling laws would predict. At the time these were only rumors, and no lab was attached to them. But in light of what we now know about Mythos, they look more credible, and the lab was probably Anthropic.

Around the same time there were also rumors that one of the frontier labs had made an architectural breakthrough. If you are in enough group chats, you hear claims like this constantly, and most turn out to be nothing. But if Anthropic found that training above a certain scale, or in a certain way at that scale, produces capabilities that sit far above the prior trendline, then that is an architectural breakthrough.

I think the leaked blog post was real, but still a draft. Mythos and Capybara were both candidate names for the new tier, though Mythos may now have enough mindshare that they end up keeping it. The specific rumor in early March was that the run produced a model roughly twice as performant as expected. That remains unconfirmed. What is confirmed is that Anthropic told Fortune the new model is a 'step change'; a sudden 2x would certainly fit the definition. We will find out in April how much of this is true.

My own view is that the broad shape of this is correct even if some of the numbers are wrong. And if it is substantially accurate, then it also casts OpenAI's recent restructuring in a new light. If very large training runs are about to become essential to staying in the game, then a lot of their recent decisions, like dropping Sora, make even more sense strategically.

For the public, this would mean the best models in the world are about to become much more expensive to serve, and therefore much more expensive to use. That will put pressure on rate limits, pricing, and subscription plans that are already subsidized to some unknown degree. Instead of becoming too cheap to meter, frontier intelligence may be about to become too expensive for most of humanity to afford.

Second-order effects: compute, memory, and energy are about to become much more important than they already are. In the blog they describe the new model as not just an improvement, but as having 'dramatically higher scores' than Opus 4.6 in coding and reasoning, and as being 'far ahead' of any other current models. If this is the new reality, then scale is about to become king in a whole new way. It would also mean, as usual, that Jensen wins again.

0 replies · 0 reposts · 3 likes · 81 views
R.
R.@RichDoesTech·
@Yuchenj_UW Been seeing everyone post these; honestly, I just think to myself, "why?"
GIF
0 replies · 0 reposts · 0 likes · 13 views
pc
pc@pcshipp·
Literally, SEO is getting harder - 2 clicks - 11.8 average position - 327 total impressions. Maybe SEO is 100x harder now
pc tweet media
76 replies · 1 repost · 103 likes · 12.9K views
R.
R.@RichDoesTech·
Trying to publicly document the "messy middle". - Ready to launch on Android, but Google Play approvals are being annoying. - Went back to focussing on Mac & Windows but hit some latency regressions from feature creep (fixed now). - Need to test payments, then good to go.
0 replies · 0 reposts · 6 likes · 90 views
Jacob Rhodes
Jacob Rhodes@Jacob660245·
wow. Late last night I made my first sale. It has been 13 months of working before and after school and during summer break. Thanks to all of you who supported me!
Jacob Rhodes tweet media
43 replies · 3 reposts · 73 likes · 1.7K views
Moonfarm 🇸🇪
Moonfarm 🇸🇪@moonfarm_dev·
WAIT, someone I DON'T KNOW just signed up 🔥🤯
Moonfarm 🇸🇪 tweet media
32 replies · 0 reposts · 57 likes · 1.9K views
R.
R.@RichDoesTech·
@CalebPanza This is looking really good. Congrats!
1 reply · 0 reposts · 1 like · 15 views
R.
R.@RichDoesTech·
Just wrapping up some assets for the mobile app store
R. tweet media
1 reply · 1 repost · 9 likes · 125 views
R.
R.@RichDoesTech·
@ericzakariasson Amazing. Let me know if you want to collaborate on that at all. Happy to help on the taste eval front.
0 replies · 0 reposts · 2 likes · 158 views
R.
R.@RichDoesTech·
Really trying to cut out the noise today and lock in. I'm on the final stretch.
2 replies · 0 reposts · 11 likes · 91 views
R.
R.@RichDoesTech·
@irbaazkadri They use actual inference tokens from real traffic as training signals. There's a follow up article.
1 reply · 0 reposts · 1 like · 10 views