Duane C
@DuaneC6
1.5K posts
Joined November 2013
14 Following · 96 Followers
Duane C@DuaneC6·
@aspiringpeasant I think the problem here is the question. Both answers are wrong. Or, at least, stupid to some level.
Duane C@DuaneC6·
@ArtsyMarx1st Most screen use happened before companies spent billions of dollars to develop technologically advanced, algorithmic ways to keep you more addicted to it. It fucks with your basic cognition, now.
Duane C@DuaneC6·
@DavidPiperVO @TelvinGriffin Most of the billionaires who support Trump dropped out of college to dominate an industry, backed by CIA intel or CIA financial backing, and you think Kamala Harris is more establishment than he is? You think Zuck, Bezos, Thiel, and Karp aren't establishment? Wild stuff.
Duane C@DuaneC6·
@TelvinGriffin I mean, if she had won a primary, nobody would have complaints about her having run to begin with. Because she would have won, easily. She'll do it this time, too.
Duane C@DuaneC6·
@NoLimitGains The taxpayers will bail them out again. Don't forget we're currently dumping the SPR today :)
NoLimit@NoLimitGains·
How are they going to fix this?
NoLimit tweet media
Duane C@DuaneC6·
@digijordan Alternatively, the force required for a larger crater is exponential. Plus, craters cover each other with debris infinitely. See here - much larger craters that are mostly filled in by debris from other blasts. Any bigger than that, and they just blow up the moon.
Duane C tweet media
Jordan Crowder@digijordan·
Zoom in and see… No matter how wide the craters on the moon are…they all go down roughly the same depth… This would only happen if there’s a solid and extremely strong layer right under the surface. Like a metal shell…or somethin
Jordan Crowder tweet media
Duane C@DuaneC6·
@chelleyourself Opinion and idea laundering for the Russian government. Cointelpro at industrial bot-boosted scale.
Duane C@DuaneC6·
@Bitcoin_Teddy First step is to rein in the CIA, but the last guy who tried to do that after the Bay of Pigs caught a bad case of 'His head exploded', so we don't know step 2.
Bitcoin Teddy@Bitcoin_Teddy·
Joe Rogan & Google AI Researcher Ray Kurzweil Get Into an Awkward Exchange Over Protecting Data from Intelligence Agencies KURZWEIL: "We have the ability to keep total privacy in a device...We know how to build perfect privacy." ROGAN: "How do we do it?" KURZWEIL: Long pause...
Duane C@DuaneC6·
@Tech_girlll Nobody knows how Frankenstein works, but if people were able to reanimate the dead, they wouldn't care *how*. They'd just make more zombies. That's what this guy did. That's what MOST people are doing with AI.
Mari@Tech_girlll·
Why is no one talking about this?
Mari tweet media
Duane C@DuaneC6·
@CryptoCyberia Nope, too late. I already gave it all my credentials, birth certificate, and my Amazon login. My Google Home keeps calling me a loser, I can't get it to stop.
Lain on the Blockchain@CryptoCyberia·
Ghuyz, I promise you, when I say something, like OpenClaw in this case, is fucking retarded, I am saying these things for very very good reasons kek. You just should trust me before the victory laps, but not trusting me after the victory laps is nutso.
Duane C@DuaneC6·
@jimstewartson Any time I see the word SoftBank in a post I remember that they own a chips company and they're also in like, more than 100bn worth of debt to OpenAI. Their dreams died when 30% of the world's Helium production detonated in the desert two months ago. They need to get a hobby.
Jim Stewartson, Decelerationist 🇨🇦🇺🇦🇺🇸
This is a deeply desperate move designed to generate demand that DOES NOT EXIST. They’re going to throw money at anyone with a pulse and claim it’s growth. What a joke.
Rohan Paul@rohanpaul_ai

Bloomberg: OpenAI launches a $10B joint venture called "The Deployment Company" to help businesses use its AI. The new company has raised more than $4B from 19 investors, including TPG, Brookfield, Advent, Bain, SoftBank, and Dragoneer. The basic bet is that AI adoption is no longer mainly a model-quality problem, because many companies already want AI but lack the teams, workflows, data access, security rules, and operating discipline to install it safely inside real business processes. Private equity firms are useful here because they control or advise large webs of companies, and the report says OpenAI's partners can reach more than 2,000 portfolio companies and clients. That turns enterprise AI selling from one-company-at-a-time pitching into a routed distribution system, where OpenAI can package software, consulting, deployment playbooks, and sector-specific use cases across finance, healthcare, coding, operations, and support. The deeper technical point is that LLMs do not create value just by answering prompts, because they need to be connected to company data, permissions, tools, evaluation systems, and human review loops before they can affect revenue or cost. Anthropic is also building a similar PE-backed route for Claude, which suggests the next AI race may be less about demos and more about who can industrialize deployment fastest.
bloomberg.com/news/articles/2026-05-04/openai-finalizes-10-billion-joint-venture-with-pe-firms-to-deploy-ai

Duane C@DuaneC6·
@CryptoCyberia There's error messages on all 7 monitors now, so production is really ramping up.
Duane C@DuaneC6·
@White_Janissary Don't forget the fact that bots boosted both sides so that real discussion was impossible. Quiz the average person who was lightly involved or skimmed during Gamergate and they cite impossible/wrong info, like sexual assault, massive bribes, etc., from disinfo networks at the time.
Duane C@DuaneC6·
@NoLimitGains The "Japan is Done Supporting The US Dollar" shuffle.
NoLimit@NoLimitGains·
What do you call this pattern?
NoLimit tweet media
Duane C@DuaneC6·
@AndyXAndersen @Dr_Gingerballs For everybody else, there's posting on Twitter. AI is a tool - and it should have brackets that contain a best use-case scenario. Most AI corps/bros are just... applying AI to everything. (See: "I hooked OpenClaw up to my wallet and now I am extremely poor.")
AndyXAndersen@AndyXAndersen·
@Dr_Gingerballs That post is actually insightful, while yours is empty character assassination. To sum it up, AI is a tool. It has uses where it pays for itself very well. It can be misused, and people can waste a lot of time with it as well. The future is good. For folks who have insight.
Dr_Gingerballs@Dr_Gingerballs·
This guy is a motivational speaker selling a chatbot wrapper. We have to be close to peak stupid.
Dr_Gingerballs tweet media
Daniel Jeffries@Dan_Jeffries1

My post on the Infinite Stack is going nova, so let me address the obvious questions all at once rather than in replies to save time:

QUESTION: "But AI will be better AND cheaper at everything."

Maybe. Maybe not. My personal API bill for a coding agent was $9K last month, and that is just for coding. AI is not cheap. It was heavily subsidized with subscriptions. The smarter we make it, the more expensive it gets. It takes people, power, datacenters, and engineers to build and run all this intelligence, and none of it is cheap.

But even if you grant the strongest version of this claim, it doesn't lead where you think it leads. Comparative advantage: if AI is 1,000x better at drug discovery and 2x better at comforting a grieving widow, the efficient allocation is obvious. AI does the drug discovery. The human does the comforting. Every hour the AI spends on low-advantage tasks is an hour it's not spending where its advantage is greatest. The math pushes AI toward its highest-value work and pushes humans toward everything else, and "everything else" is a large and constantly growing category.

Compute costs money. Energy costs money. Deployment takes time. Diffusion of innovation follows a well-known curve. Physics doesn't hand out free lunches, not even to neural networks. As long as AI faces any real-world constraints, and it always will, comparative advantage holds.

QUESTION: "But why would anyone hire a human when a machine can do it?"

Because we want humans for a lot of things. Craft beer is a $29 billion market in the U.S. despite mass-produced beer being cheaper and more consistent. Live concert revenue is growing faster than streaming, $23.6 billion and climbing at 8.8%, because when the recorded version became free, the human live experience became more premium, not less. Etsy thrives because people pay more for human-made.

A machine can weave a rug faster and cheaper, but I may still want a bespoke, small-production-line rug that feels personal to me. The robot will clean the grill and scrub the toilet at the restaurant. We want the human face to greet us at the door. The chef's passion is on the plate, not because the machine can't cook, but because the human origin is part of the very product itself. We want our stories told by people because we relate to people. AI may assist with writing or proofreading or fleshing out the story, but we likely don't want stories that are pure algorithm (though stories are algorithms to writers, just advanced ones, but I digress).

Authenticity and provenance are not going away. They're becoming luxury goods. Sometimes we want a human doing the job for many reasons. Sometimes it will be a human and machine team, or sometimes just a machine, but it does not mean machines just do it all because it's cheaper/faster. Sometimes the very product or service itself is the opposite of cheaper or faster, and that is the essence of what is being sold.

RESPONSE: "But the transition is painful!"

Yes. It can be. Sometimes, for sure. I won't bullshit anyone here. When a new technology wave hits, real people can lose real livelihoods. We have no more whale hunters to dig the white gunk out of their heads to make candles. That was an entire, real industry. You may say, well, now it is safer, cheaper, and better that we don't have to slaughter the leviathan to light our house, and you'd be right, but people made a living doing that, and that is gone as a new tech wave hits. Nobody is denying that.

What I am denying is that all jobs just go poof overnight, that it's some kind of snap-your-fingers extinction event. Lamp lighters are gone too. Our cities are safer and smell better because we're not burning gas constantly, but those jobs went away. They did not go away overnight, though. The transition to electric was slow, iterative, and took decades. The phasing out of that old profession was gradual. It always is.

Is AI faster than the electric light transition? Probably. That's an argument for better safety nets and smarter policy. It's not an argument that civilization is ending. The macro story of progress is no comfort to someone living through the micro story of displacement. I take that seriously, and any optimist/futurist who doesn't is a fraud. The transition is the hard part. It always is. But the transition is not the destination, and confusing the two is how you get bad policy.

But the apocalypse story is a Trojan horse for control freaks, authoritarians, and national socialists. They want you afraid so they can take more of your rights and take more control. Fear is the enemy, and fear is running rampant right now. The only thing we have to fear is fear itself.

RESPONSE: "What about superintelligence? Won't it just replace us entirely?"

No, and here's why. Humans are already a distributed intelligence. We (Sapiens) are smarter in the aggregate than at the individual level. Neanderthals were faster, stronger, had bigger brains, tougher bones, and more muscle. They were the ultimate survivalists. Picture the ultimate survivalist and you would picture a Neanderthal. They were also more isolationist. Seems like the better survival pattern, right? But evolution already settled that debate. A scaling, collaborating intelligence beats a stronger, more isolated one at solving complex problems. That's us. That's why we're here and they're not.

So it stands to reason that superintelligence is likely to be more collaborative, not less. More aligned, more willing to work with us and expand the capabilities of humans and other AIs, because that's how you solve more problems higher up the stack. Collaboration is better. Isolationism and seeing the Universe as a zero-sum game is not intelligent. It's stupid. And the stack is infinite. A superintelligence that wipes out its collaborators is an evolutionary dead end. It's not superintelligent. It's super idiotic. A superintelligence that amplifies collaborative networks is the one that wins. Evolution already ran this experiment. We're the result.

RESPONSE: "So there are no real problems?"

Of course there are. They're just not the ones the doomers scream about. AI-powered surveillance scaled to see every piece of personal information, that's real. Authoritarian governments weaponizing pattern recognition to control their citizens, that's real. Autonomous weapons that delegate lethal decisions to algorithms that optimize without judgment, that's real. These are practical, institutional, policy-level problems that deserve serious attention and serious solutions, and they are getting zero attention while we focus on KYCing the entire population to make sure no kids talk to chatbots. Our politicians are not solving real problems. They are solving imaginary ones and creating more problems for us if they succeed.

What superintelligence will not be is some paperclip-maximizing monster launching nukes, or humanity going extinct because someone trained a model on too many FLOPs. The real threats are mundane. They are boring. They are the same problems we always have. The real threats are fixable. But only if we're focused on the right problems instead of chasing sci-fi phantoms.

RESPONSE: "Why does this matter?"

Because the starting point determines everything downstream. If you begin from the premise that work is finite, that there's a fixed amount of stuff to be done and machines are eating through it, then every conclusion you draw from that premise will be wrong. Dead wrong. Any policy you make will be wrong. You'll predict mass unemployment. You'll demand UBI as the only response. You'll want to ban or throttle AI. You'll see a zero-sum world where every machine gain is a human loss. And you'll build policy around scarcity when the reality is abundance.

The error compounds and radiates outward, corrupting every inference, every prediction, every policy recommendation. It's like building a skyscraper on a foundation that's three degrees off true. At the ground floor, you can barely tell. By the fiftieth floor, it's about to collapse. Get the foundation right and everything else follows.

The problems are infinite. The work is infinite. The stack never stops growing. And every time you make one layer cheaper, the layer above it grows in complexity. Complexity breeds more complexity and new challenges and more varied jobs to solve those challenges. Those challenges are, in some instances, better solved by a machine, in others by a human, in others a human + machine combo. But in no way does it ever mean machines do everything and we are done.

The doomers aren't stupid. Many of them are very bright. But they are deeply misguided, and their solutions are worse than the problems they are supposedly fixing. They're starting from the wrong premise. And from a wrong premise, you can reason perfectly and still end up perfectly wrong.
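The comparative-advantage claim in the post above is simple arithmetic, and it can be checked directly. A minimal Python sketch, using the post's illustrative 1,000x / 2x numbers (the variable names and the human baseline of 1 unit/hour are assumptions of mine, not from the post):

```python
# Comparative advantage with the post's numbers: AI is 1000x better at drug
# discovery and 2x better at comforting, relative to a human baseline of 1.
human = {"discovery": 1.0, "comforting": 1.0}
ai = {"discovery": 1000.0, "comforting": 2.0}

# Opportunity cost of an hour of comforting, measured in forgone discovery.
human_cost = human["discovery"] / human["comforting"]  # 1 unit of discovery
ai_cost = ai["discovery"] / ai["comforting"]           # 500 units of discovery

# The agent with the LOWER opportunity cost should take the task: the human
# does the comforting even though the AI is absolutely better at both jobs.
assert human_cost < ai_cost
print(f"human gives up {human_cost}, AI gives up {ai_cost} per comforting hour")
```

The point the arithmetic makes is the one the post makes: absolute superiority at everything does not change which allocation is efficient, only the opportunity-cost ratio does.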

Duane C@DuaneC6·
@Dr_Gingerballs "My costs seem to go up if I use it more." Has bro never owned a car? It's also insane to say that if it gets smarter it'll get more expensive lol. If it handles data better it'll use less data and power. Every current AI steward WANTS world-spanning infrastructure, so fuck em.
Duane C@DuaneC6·
@chaly644 @reset_by_peer I would, but they're all too busy trying to get the world to invest 10 trillion dollars into the Plinko machine to make it viable to complete actual tasks. Or, dedicating their remaining lifespan to warn the world away from the technology entirely. Pretty hard split.
Duane C@DuaneC6·
@CryptoCyberia They really said "This will help you automate small tasks" and tech bros started hardwiring it to literally every digital financial tool they had. "I plugged a toaster straight into a nuclear reactor, why did it blow up in my face?"
Duane C@DuaneC6·
@jimstewartson Don't forget scraping every ounce of property left over from the dead and poor. And a shout out to Vance for holding stake in AcreTrader.
Jim Stewartson, Decelerationist 🇨🇦🇺🇦🇺🇸
David Sacks is another apartheid-bred malignant narcissist conman who was installed by Musk and Thiel to convince Trump to inflate the worst financial bubble in all of recorded history so they could skim billions off the top.
The Atlantic@TheAtlantic

David Sacks helped bridge the MAGA-tech divide—but his efforts are exposing Donald Trump to accusations that the president is selling out his populist base on behalf of the country’s richest men, George Packer reports: theatlantic.com/magazine/2026/…

Duane C@DuaneC6·
@jimstewartson "You're gonna be the most hated man in America," said the most hated man in America. "I'll make sure of it." I wish I could be that delusional, if only for a moment.