Douglas.eth
1.2K posts
@toolchains
Domain investor, developer, blockchain enthusiast
Philadelphia, PA · Joined February 2009
299 Following · 233 Followers

Douglas.eth @toolchains ·
there has never been a scenario in which the freed labor was unable to compete against the self-replicating thing that replaced it. Imagine if the steam engine had started having babies that were tractors, and the tractors had babies that were cars, and so on, so that few humans had an economic edge over their competitor. Whatever new product or service humans think up will simply be resolved to its minimum input cost, to the point where tracking of costs is likely meaningless. There likely exists some new theory of constraints; however, those constraints will not be solved by humans but by AI
0 replies · 0 reposts · 0 likes · 10 views

Twlvone @twlvone ·
@pmarca the Luddites smashing looms in 1812 had the same argument. same logical structure, same certainty, same wrongness. freed labor just does something new. always has.
1 reply · 0 reposts · 1 like · 345 views
Marc Andreessen 🇺🇸
AI employment doomerism is rooted in the socialist fallacy of lump of labor. It is wrong now for the same reason it’s always been wrong. More people really should try to learn about this. The AI will teach you about it if you ask! (Hinton is a socialist. youtube.com/shorts/R-b8RR6…)
[YouTube video]
Stephen Pimentel @StephenPiment

It’s easy to dunk on Geoffrey Hinton for his 2016 declaration that it was “completely obvious” that radiologists would have no jobs within 5 years, while in fact the number of radiologists has grown. But this prediction was more than a simple mistake. It’s a synecdoche for the entire discourse of AI timelines and doom.

360 replies · 219 reposts · 2.7K likes · 1.7M views

Cumulus T. Lump @lump_t ·
@MarkFeighery1 @EricRWeinstein "You're not building powerful recursive harnesses and back pressure..." I'm sure this means something to you guys, but us peons don't get it at all.
3 replies · 0 reposts · 4 likes · 2K views

Eric Weinstein @EricRWeinstein ·
Peter: they are not PhD level in physics. You trail behind a model picking up all it breaks. This is a bleeding-edge, malfunctioning, military-grade research project weirdly marketed direct to consumer to fund R&D at the top of a mad hype cycle that’s likely *directionally* correct.
Peter H. Diamandis, MD @PeterDiamandis

If AI can now solve math, discover physics and chemistry breakthroughs faster than human PhDs, why are we still training humans to be physicists? Serious question. Should education shift from 'learn to do X' to 'learn to direct AI doing X'? The wrong direction costs a generation their careers.

112 replies · 53 reposts · 833 likes · 178K views

Douglas.eth @toolchains ·
there is a skill curve. i appreciate fresh perspectives. good luck on your quest, but don’t be surprised that it is not suited to your tasks without investing in building, *for now*. At some point the big models will exceed your IQ, but all you’ll get with that sub right now is some efficiency gains. the people pushing to “teach ppl to leverage ai” are recognizing that the gap between useful and frontier in your domain is real but not insurmountable, and is closing quickly.
1 reply · 0 reposts · 11 likes · 1.7K views

Eric Weinstein @EricRWeinstein ·
I have no idea what the professional AI crowd thinks I believe. I see many opinions ascribed to me that I simply don’t hold. I do use a suite of commercial products to do mathematics and physics. I have some strong opinions about these products. I’m taking a break so I’ll take a few questions if you think I’m not getting it. Try assuming less and it will go better. Whatcha got?
Mark Feighery @MarkFeighery1

You have too many opinions on them for a non-power user. You are not at the cutting edge of LLM usage. Your comments make sense for basic LLM usage (the most expensive models), but you're not building powerful recursive harnesses and back pressure into them that get the AGI results they are capable of.

102 replies · 7 reposts · 246 likes · 147.3K views

Douglas.eth @toolchains ·
positive crypto news everywhere? check. market dumps. double check.
0 replies · 0 reposts · 0 likes · 39 views

Douglas.eth @toolchains ·
I just claimed my .agent domain and joined the .agent community! get yours now and help shape the future of autonomous agents agentcommunity.org/join
0 replies · 0 reposts · 0 likes · 32 views

Douglas.eth @toolchains ·
@steipete seems like a Bayesian/Kalman filter as a pre-processor would help with efficiency and accuracy. train it against who you’ve replied to/banned?
0 replies · 0 reposts · 0 likes · 6 views
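Douglas's pre-processor idea can be sketched with a tiny naive Bayes filter, one concrete flavor of Bayesian filtering. The class, training examples, and labels below are invented for illustration: the idea is to train on text from accounts you've banned (spam) versus replied to (ham), then score new mentions cheaply before any expensive AI pass.

```python
from collections import Counter
import math

class NaiveBayesMentionFilter:
    """Toy naive Bayes pre-filter: scores a mention as spam vs. ham
    using word frequencies from past moderation decisions
    (e.g. accounts you banned vs. accounts you replied to)."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, text, label):
        words = text.lower().split()
        self.word_counts[label].update(words)
        self.totals[label] += len(words)

    def spam_probability(self, text):
        # Naive Bayes in log space with Laplace (+1) smoothing.
        vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
        scores = {}
        for label in ("spam", "ham"):
            score = 0.0
            for w in text.lower().split():
                p = (self.word_counts[label][w] + 1) / (self.totals[label] + len(vocab))
                score += math.log(p)
            scores[label] = score
        # Normalize the two log scores into a spam probability.
        m = max(scores.values())
        exp = {k: math.exp(v - m) for k, v in scores.items()}
        return exp["spam"] / (exp["spam"] + exp["ham"])

f = NaiveBayesMentionFilter()
f.train("claim your free crypto airdrop now", "spam")
f.train("free giveaway click the link", "spam")
f.train("great point about model scaling", "ham")
f.train("thanks for the detailed reply", "ham")
print(f.spam_probability("free crypto giveaway"))         # high (spam-like)
print(f.spam_probability("detailed point about scaling"))  # low (ham-like)
```

A Kalman filter would be the analogous choice for smoothing a numeric spam score over time; naive Bayes is just the simplest Bayesian baseline for the classification half of the suggestion.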
Peter Steinberger 🦞 @steipete ·
My openclaw twitter mention block cron job is working unreasonably well. Turns out AI is really good at detecting spam/reply guy/promo stuff. Runs every 5 min and cleans up my mentions - I actually see useful replies now and Twitter got pleasant again!
[image attached]
326 replies · 94 reposts · 2.8K likes · 233.1K views

Douglas.eth @toolchains ·
the latest models make the tracking difficult, but the length of tasks completed is going parabolic, not plateauing. we have reached initial levels of recursive self-improvement. that is an accelerant unlike anything we have seen before. the intelligence of agents will massively outscale humans. and that isn’t even counting embodiment in physical robots, world models, etc. it was never “just” an automation tool; that was just the perception
0 replies · 0 reposts · 0 likes · 8 views

Darnell @DarnellTheGeek ·
@DeryaTR_ Specifics? It’s an automation tool. We’ve been building those for a long time. More jobs keep getting created from the new reality. Are you under the impression LLMs are going to become dramatically better than they are now? The workflows will, but the LLMs have plateaued.
2 replies · 0 reposts · 0 likes · 60 views

Derya Unutmaz, MD @DeryaTR_ ·
The world is not ready for what’s coming this year, let alone next year, or in 2 or 5 years. People kept dismissing AI as hype; in fact, it was massively under-hyped. When the storm finally hits, there is only so much you can deny & cope with, reality doesn’t care what you think.
120 replies · 56 reposts · 637 likes · 35.1K views

Douglas.eth @toolchains ·
@LexnLin it’s simple. the pro ai plan should unlock the custom model provider option in antigravity. silly that it’s locked down when you hit a limit for a week
0 replies · 0 reposts · 0 likes · 101 views

Leon Lin @LexnLin ·
I get it, Antigravity limits are the worst. But someone has to say it. I understand that you can't use it that much on the Pro plan as expected and it's ass. But let's look first at what you get for your ~$20 subscription:
- 2TB storage
- High Gemini Pro 3.1 access
- High Nano Banana 2 access
- Veo 3.1 fast
- 1000 AI credits + Whisk + Flow
- NotebookLM
- a lot more Labs tools
- Gemini CLI
- Jules
- Monthly GenAI and cloud credits of $10
- Gemini everywhere in the Google ecosystem
- ...
You can even share your WHOLE PLAN with 5 friends or your family. AND Antigravity. Now look at what you got. Imo it's worth it, yes. If you only need Antigravity, either subscribe to Ultra or stay on Free to buy the new AI credits in Antigravity. It is logical that they can't just give you higher Antigravity quota. Yes, still ass.
58 replies · 4 reposts · 228 likes · 26.1K views

Douglas.eth @toolchains ·
@vikaskansalHQ just open up the model options to *our choice* in antigravity if we are ai pro users.
0 replies · 0 reposts · 0 likes · 66 views

Google Antigravity @antigravity ·
We’re evolving Google AI plans to give you more control over how you build. Every subscription includes built-in AI credits, which can now be used for Antigravity, giving you a seamless path to scale.

Google AI Pro is the home for the practical builder, hobbyists, students, and developers who live in the IDE and don't necessarily rely on an agent. This plan features generous limits for Gemini Flash, with a baseline quota included to "taste test" our most advanced premium models.

Google AI Ultra serves as the daily driver for those shipping at the highest scale who need consistent, high-volume access to our most complex models.

If you’re on Pro but need "extra juice" for a heavy sprint or deeper access to premium models, simply top up your AI credits to customize your plan. Keep building. Keep shipping.
1.5K replies · 306 reposts · 4.4K likes · 1.5M views

Evan | builder of stuff @Nagistakee ·
@thdxr if you're doing what i think you're doing, msg me. I'm already deep lol. I have a k8s operator that spins up workspaces and runs the SDLC inside it with sysbox and it creates micro sandboxes for agents to work.
1 reply · 0 reposts · 2 likes · 2.4K views

dax @thdxr ·
i thought i was out i stayed out of the game for 7 years but here we go again
[image attached]
72 replies · 10 reposts · 1.1K likes · 89.2K views

Douglas.eth @toolchains ·
maybe we did. that’s the trick with entering the singularity. if you saw your friend at a distance passing through the event horizon, you would have a different perspective than your friend. from your perspective your friend’s “time” would slow down “infinitely”… but to them it’s just thursday. but if you had told someone 20 years ago that we’d be beaten at Go by a computer within 10 years, they would have said you were insane, that it would take close to a century. our timeline is extremely compressed relative to what a truly outside observer “saw”. we may not be beyond reason and logic even still, but we are stretching what humans have known as reality in weird ways.
0 replies · 0 reposts · 0 likes · 55 views
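The event-horizon analogy above has a precise counterpart in general relativity. Stated here, as a hedged aside, for the simplest case of a clock hovering at radius $r$ outside a Schwarzschild black hole (an infalling friend's exact expression differs), the distant observer's coordinate time $t$ relates to the local proper time $\tau$ by:

\[
d\tau = dt\,\sqrt{1 - \frac{r_s}{r}}, \qquad r_s = \frac{2GM}{c^2}
\]

As $r \to r_s$ the dilation factor goes to zero: the distant observer sees the near-horizon clock slow toward a standstill, while locally that clock ticks normally, which is exactly the asymmetry of perspectives the tweet describes.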
neuralamp @neuralamp4ever ·
"The singularity is the moment AI gets smart enough to improve ITSELF. No humans needed. It builds a better version of itself, that version builds an even better one, and again, and again. At a speed your brain physically cannot comprehend." That's completely wrong or at minimum very inaccurate. For example, AlphaZero was able to self-improve at chess well beyond any human ability, yet we did not experience singularity.
3 replies · 0 reposts · 6 likes · 528 views

Tuki @TukiFromKL ·
Everyone keeps saying "singularity" like they know what it means. Most of you have no idea. Let me explain it in 30 seconds and ruin your week.

The singularity is the moment AI gets smart enough to improve ITSELF. No humans needed. It builds a better version of itself, that version builds an even better one, and again, and again. At a speed your brain physically cannot comprehend.

The term comes from physics. It literally means "the point where the rules stop working," where math breaks down and where prediction becomes impossible.

Ray Kurzweil said it would happen by 2045. Musk said 2026. The doomers think it ends the human race. The accelerationists think it cures cancer and solves death.

But here's the thing. The real debate is whether it already started and we're just too comfortable calling it "cool new tools" to admit what we're actually looking at.
399 replies · 194 reposts · 1.4K likes · 114.7K views

Douglas.eth @toolchains ·
@morganlinton @karpathy same. software that doesn’t self improve seems dead or at least will be left behind so damn fast
0 replies · 0 reposts · 1 like · 17 views

Morgan @morganlinton ·
Woke up this morning, and all I can think about is autoresearch. So many ideas swirling around in my head; not sure 99.9% of the world realizes the incredible breakthroughs @karpathy is making and just sharing casually on X.

Right now, where my mind is going is medicine. It feels like in many ways, clinical trial design is itself kinda like hyperparameter search. I know right now trials cost tens of millions of dollars, minimum. It feels like an agent swarm could optimize treatment protocols on small proxy experiments, promote the most promising candidates, and then move to humans to review. So humans are still very much in the loop, but later on, and experimentation goes much deeper, happens faster, and for far less money.

I think for me, while I'm not a doctor, what I'm most excited about when it comes to AI is the impact it will have on human health and critical areas like disease treatment. Might be a crazy idea, so a real doctor can jump in the comments and slap me on the wrist here, but I dunno, just can't stop thinking about how what Karpathy has discovered here could have some pretty profound implications.

Still only halfway through my coffee, but woke up this morning and this is what I'm thinking about, so thought I'd share it.
10 replies · 3 reposts · 38 likes · 2.1K views
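Morgan's "trials as hyperparameter search" framing maps onto a standard technique, successive halving: evaluate many candidates cheaply, promote the best fraction to a larger budget, repeat. A toy sketch, where the dose levels, response function, and noise model are all invented for illustration:

```python
import random

def successive_halving(candidates, evaluate, rounds=3, keep=0.5):
    """Keep promoting the best-scoring candidates into rounds with a
    larger evaluation budget, discarding the rest each round."""
    pool = list(candidates)
    for r in range(rounds):
        budget = 2 ** r  # cheap proxy runs first, bigger budgets later
        ranked = sorted(pool, key=lambda c: evaluate(c, budget), reverse=True)
        pool = ranked[: max(1, int(len(ranked) * keep))]
        if len(pool) == 1:
            break
    return pool[0]

# Toy stand-in for a "proxy experiment": candidate protocols are dose
# levels, the hidden optimum is 0.5, and more budget means less noise.
random.seed(42)

def noisy_response(dose, budget):
    true_effect = 1.0 - (dose - 0.5) ** 2
    return true_effect + random.gauss(0, 0.2 / budget ** 0.5)

doses = [0.1, 0.3, 0.5, 0.7, 0.9]
best = successive_halving(doses, noisy_response, rounds=4)
print(best)  # one of the dose levels, usually near 0.5
```

Hyperband and Bayesian optimization are the usual refinements; the point is only that "cheap proxy runs first, expensive confirmation later" is a well-studied loop, with humans reviewing whatever survives the final round.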
Douglas.eth @toolchains ·
@morganlinton it’s funny how the spam used to be obvious from its weird grammar and misspellings. and now the spam is perfect (long winded and repetitive) and the posts have all the jank.
0 replies · 0 reposts · 0 likes · 9 views

Morgan @morganlinton ·
Sometimes I’m pretty much certain I’m the only person in the comment section on some posts. I write zero tweets with AI, all me, typing away, usually just on my iPhone. Used to say, pardon my typos, now I’m kinda proud of them.
6 replies · 0 reposts · 15 likes · 663 views

Douglas.eth @toolchains ·
i initially read this as a very gross reply
[image attached]
0 replies · 0 reposts · 0 likes · 33 views

Douglas.eth @toolchains ·
@DaveShapi @bigswingingdong this feels like not just a comment, it feels like an ai wrote it :) honestly, are you engagement farming with your openclaw bot?
0 replies · 0 reposts · 0 likes · 123 views

David Shapiro (L/0) @DaveShapi ·
@bigswingingdong yes, Grok's reputation is well earned. But 4.20 feels like not just a step change, it feels like a different model
12 replies · 1 repost · 120 likes · 55K views

David Shapiro (L/0) @DaveShapi ·
Wtf, Grok 4.20 is extremely high effort. Here's an example: one thing I love about Gemini is that if it recognizes concepts or terms, it will name them. "Hey, it sounds like you're talking about Baumol's Cost Disease," without any tinkering with the system prompt.

Claude and ChatGPT are conceited about this kinda thing. You basically have to BEG Claude not to treat you like a toddler or use a bunch of style guides. ChatGPT is the same.

Grok is just like "oh yeah, here's the named concept, I went ahead and looked up the math, and did a bunch of number crunching, and here's the conclusion." Grok 4.20 default feels a lot like using ChatGPT Pro.

No, they didn't pay me to say this. I still have my criticisms of Grok, but holy shit, this is what UX is supposed to be like. xAI fuckin cooked.
332 replies · 215 reposts · 4.2K likes · 32.7M views

Douglas.eth @toolchains ·
@BernieSanders fight back against what? fight back against who? how ‘bout we stop fighting and start putting actual plans in place to provide revenue from gains through automation, revenue that can be used for support specifically for those displaced or prevented from entering the market, to start with.
0 replies · 0 reposts · 1 like · 60 views

Bernie Sanders @BernieSanders ·
Amazon says that it’s going to replace 600,000 workers with robots. Other companies are moving in the same direction. How will working families feed their kids and pay their bills? We’ve got to fight back.
13.9K replies · 3.4K reposts · 25.7K likes · 3.4M views

Douglas.eth @toolchains ·
look, it’s complicated. no one has this all figured out. there will be an initial wave of “software is cheap now, so make all the things we couldn’t afford to do,” but that is just first-order thinking imo. the next shoe to drop will be “why are we making all of this software for humans who could be replaced with intelligent agents,” and then “why are humans making software for agents,” and then it’s agents all the way down. Code is like steering wheels in cars. While we live side by side with the agents, trying to accomplish the goal together, it makes sense. but once you can say “just take me home”… you don’t need a steering wheel, and you won’t even need to own the car. if it came for “coding”, and won… it will take all the work done behind a screen
0 replies · 0 reposts · 1 like · 295 views

Akshay Saini @akshaymarch7 ·
Everyone keeps saying AI will replace programmers. Honestly, it sounds very smart… till you actually ship software and live with it.

Yes, AI writes a lot of code now. I use it daily. Most teams do. That's not the debate. The real thing people miss is that building software is not about making something work once. The real work starts later. Maintaining it. Changing it. Fixing things when something breaks at the worst possible time. And suddenly you're stuck with a codebase that technically works, but nobody really understands.

That's when you realise writing code is just 20–30% of the journey. The rest is judgment. Knowing what to build, what not to build, and when to say "this is a bad idea" even if it runs fine today.

AI is a powerful tool. No doubt. But it doesn't own consequences. You still need engineers who understand the code and the business well enough to stop mistakes before they become expensive lessons. Don't you think that we are overhyping speed and forgetting responsibility?
296 replies · 173 reposts · 1.7K likes · 162K views

Douglas.eth @toolchains ·
i have appreciated your perspective since the old reddit days, but i think you have built such a monumental amount of your identity on calling ai a bubble that you may struggle to prepare for other possibilities. no one can predict these things at 100% accuracy. i’d suggest defining a set of conditions that would invalidate your thesis, or using thought experiments with “a clone of yourself” to determine what factors could change your clone’s mind. example: if software development is done primarily via ai (we are essentially there, speaking as someone with 30 years of experience as a dev), then it’s not a bubble *for that use case*. or: “would my clone feel differently about ai if customer service/knowledge work/whatever unemployment levels reached 15%?” i don’t pretend to know the answers. i am on the other side of this trade from you and work hard to use your insights as my counterpoints. good luck!
0 replies · 0 reposts · 0 likes · 75 views