Adam Stuckert

3.4K posts

@awstuckert

Vancouver, BC, Canada · Joined July 2008
548 Following · 165 Followers
Adam Stuckert retweeted
Claude
Claude@claudeai·
Introducing Claude Opus 4.7, our most capable Opus model yet. It handles long-running tasks with more rigor, follows instructions more precisely, and verifies its own outputs before reporting back. You can hand off your hardest work with less supervision.
[image attached]
4.6K · 10.2K · 79.9K · 12.3M
Adam Stuckert retweeted
Dimitri Dadiomov
Dimitri Dadiomov@dadiomov·
“The secret to doing great work is always to be a little underemployed. You waste years by not being able to waste hours.” This Amos Tversky quote is 1000x more true today. As technology accelerates, reserving time and energy for indulging your curiosity is ever more important. Really feeling that these days.
49 · 232 · 2.8K · 187.8K
Adam Stuckert retweeted
JFresh
JFresh@JFreshHockey·
SURVEY: NHL AWARDS 2025-26 What does your NHL Awards ballot look like for the Hart, Norris, Vezina, Selke, Calder, and Jack Adams? ⬇️⬇️⬇️ hockeystats.com/awards
14 · 10 · 120 · 59.8K
Adam Stuckert retweeted
Ray Dalio
Ray Dalio@RayDalio·
I have been asked by several people what I meant when I said “we are in a world war” in my most recent note. To be clear, I didn’t mean to convey that I expect a shooting war between the U.S. and China (or any of the great powers) anytime soon. What I meant is that we are in the phase of the Big Cycle when major powers are in military wars and that the various wars happening now are interrelated, hence we are in a “world war,” with the sides lined up as I described and with the implications for each of the main players and the whole unfolding in relatively classic interrelated ways that I describe as a progression of the Big Cycle.

For example, it is now widely believed that if the U.S. fails to open the Strait of Hormuz to free shipping and to protect its Gulf allies from attacks, countries all around the world (most importantly in Asia) will conclude that the U.S. might not be the strong ally and countervailing force to China that they thought it would be, which will lead some to tilt economically and geopolitically more toward China in a number of ways - e.g. to buy less U.S. debt (which is what happened to the British in the Suez Crisis, bringing about the ultimate end of their empire) - and it could lead others to build up their military capabilities.

As I complete my nearly three-week trip in Asia, I can convey that what I am saying is based on a lot more than conjecture. The reason I do not expect a U.S.-China military war soon, though I do expect a lot of brinksmanship, is that both nations realize that such a war would be devastating and that it would be impossible for either to fully win over the other, even as neither will want to give much. Also, each country believes in its own economic and political systems and that the outcomes of those systems will determine their relative powers. And both nations have critically important domestic issues to deal with.
Some people in leadership positions, especially in China, believe that the relative health, wealth, and power levels between countries are not as important as their own absolute health, wealth, and power levels, and that helping each other build these rather than tear them down is what matters most. For example, they believe that the world will be a dangerous place if the U.S. and China don't have AI cooperation and controls, and they are concerned that AI can be weaponized. Most countries know that most wars in history were won by one of the sides secretly developing new technologically advanced weapons and springing them on their opponents. So, I believe that both sides think that their wars will be non-military wars that will yield evolutionary changes in relative powers.

As for how the Chinese will fight, it will probably look more like the type of war described in the “Art of War” (which I suggest you read if you haven't), and as for how the new international world order will evolve, to the extent that it is influenced by the Chinese, it will evolve to be more like the tribute system (which I suggest you understand if you don’t) than the existing world order. At the same time, I expect that there will continue to be trade, capital, technology, cyber, and geopolitical influence wars between these great powers and that both will continue to have justifiable fears of being cut off from essential goods, services, and capital, which will necessarily greatly reduce imbalances and interdependencies as well as efficiencies in the production and trade of goods, services, and capital. I also believe we will increasingly see these two powerful nations pressure each other because there is no other way to resolve disputes now that the rules-based multilateral world order has been replaced by a power-based, self-serving world order.
Said differently, I expect that China will be very strong in its defense without being very aggressive in its offense. That is not just for tactical reasons; it is also because China has strong cultural inclinations to be that way. I hope this is helpful in clarifying my thinking, and as always, I'd be happy to answer any other questions or hear your thoughts. Ray
189 · 609 · 3.3K · 584.1K
Adam Stuckert retweeted
toki
toki@tokifyi·
hi @steipete, welcome to vancouver 🇨🇦 i ran the first openclaw meetup here and host irl events for builders. if you’re down, we could do a pop-up builder event today/tomorrow for folks who couldn’t afford the tedx ticket. dms open
Peter Steinberger 🦞@steipete

This release makes me unreasonably happy since I wasn't involved at all - @vincent_koc and the maintainer team did a great job. I'm back soon to work on OpenClaw, today/tomorrow I'm prepping for @TEDTalks in Vancouver. 🇨🇦

11 · 6 · 91 · 13.4K
Adam Stuckert retweeted
dany
dany@danywander·
me and @claudeai
221 · 4.6K · 34.7K · 1.6M
Adam Stuckert retweeted
Aaron Levie
Aaron Levie@levie·
The more enterprises I talk to about AI agent transformation, the clearer it becomes that there is going to be a new type of role in most enterprises going forward. The job is to be the agent deployer and manager on teams. Here’s the rough JD:

This person will need to figure out what the highest-leverage workflows on a team are (either existing or new ones) where agents can actually drive significantly more value for the team and company. In general, it’s going to be in areas where, if you threw compute (in the form of agents) at a task, you could either execute it 100X faster or do it 100X more times than before. Examples would be processing orders of magnitude more leads to hand them off to reps with extra customer signal, automating a contract review and intake process, streamlining a client onboarding process to remove as many steps as possible, setting up knowledge bases that the whole company taps into, and so on.

This person’s job is to figure out what the future-state workflow needs to look like to drive this new form of automation, and how to connect up the various existing or new systems so that this can be fulfilled. The gnarly part of the work is mapping structured and unstructured data flows, figuring out the ideal workflow, getting the agent the context it needs to do the work properly, figuring out where the human interfaces with the agent and at what steps, managing evals and reviews after any major model or data change, and running the agents on an ongoing basis while tracking KPIs.

This person must be good at mapping the process and understanding where the value could be unlocked, be relatively technical, and have full autonomy to connect up business systems and drive automation. This means they’re comfortable with skills, MCP, CLIs, and so on, and the company believes it’s safe for them to do so. But they must also be great operationally and at business.
It may be an existing person repositioned, or a totally net-new person in the company. There will likely need to be one or more of these people on every team, so it’s not a centralized role per se. It may roll up into IT or an AI team, or live in the function and just have checkpoints with a central function. This would also be a fantastic job for next-gen hires who are leaning into AI and are technical. And for anyone concerned about engineers in the future, this will be an obvious area for those skills as well.
267 · 385 · 3.7K · 995.1K
Adam Stuckert retweeted
HoneyBadger Charging
HoneyBadger Charging@BadgerCharge·
Gas prices across BC are sitting above $2 per litre and are expected to keep rising. But what if driving electric were the equivalent of paying just 30 to 40 cents per litre? That’s the reality when you compare fuel to electricity. Even public charging often comes in below the cost of gas, and that’s before factoring in lower maintenance, fewer moving parts, and available incentives. When you look at the total cost of ownership, EVs are not just an environmental choice; they are becoming a financial one. For condo and multi-unit buildings, access to charging is what unlocks these savings for residents. Contact us to learn more or get started: sales@badgercharging.ca 236.480.0827
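The 30-to-40-cents claim can be sanity-checked with a quick gas-equivalent calculation. A minimal sketch follows; the consumption and rate figures are illustrative assumptions, not HoneyBadger's numbers:

```python
def gas_equivalent_per_litre(ev_kwh_per_100km: float,
                             electricity_per_kwh: float,
                             gas_litres_per_100km: float) -> float:
    """Express EV energy cost as an equivalent gasoline price per litre.

    Cost to drive 100 km electric, divided by the litres a comparable
    gas car would burn over the same 100 km.
    """
    ev_cost_per_100km = ev_kwh_per_100km * electricity_per_kwh
    return ev_cost_per_100km / gas_litres_per_100km

# Assumed figures: ~18 kWh/100 km for an EV, ~$0.14/kWh residential rate,
# ~8 L/100 km for a comparable gas car.
equiv = gas_equivalent_per_litre(18.0, 0.14, 8.0)
print(f"${equiv:.2f}/L equivalent")  # ≈ $0.32/L, inside the quoted 30-40 cent range
```

Under these assumptions the electricity cost per "litre" lands around 32 cents, consistent with the tweet's range; higher public-charging rates push the figure toward the top of it.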
1 · 1 · 1 · 718
Adam Stuckert retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
Judging by my tl there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of use. I think a lot of people tried the free tier of ChatGPT somewhere last year and allowed it to inform their views on AI a little too much. This group's reaction is to laugh at various quirks of the models, hallucinations, etc. Yes, I also saw the viral videos of OpenAI's Advanced Voice mode fumbling simple queries like "should I drive or walk to the carwash". The thing is that these free and old/deprecated models don't reflect the capability of the latest round of state-of-the-art agentic models this year, especially OpenAI Codex and Claude Code.

But that brings me to the second issue. Even if people paid $200/month to use the state-of-the-art models, a lot of the capabilities are relatively "peaky" in highly technical areas. Typical queries around search, writing, advice, etc. are *not* the domain that has made the most noticeable and dramatic strides in capability. Partly, this is due to the technical details of reinforcement learning and its use of verifiable rewards. But partly, it's also because these use cases are not sufficiently prioritized by the companies in their hillclimbing, because they don't lead to as much $$$ value. The goldmines are elsewhere, and the focus goes along with them.

So that brings me to the second group of people, who *both* 1) pay for and use the state-of-the-art frontier agentic models (OpenAI Codex / Claude Code) and 2) do so professionally in technical domains like programming, math, and research. This group of people is subject to the highest amount of "AI Psychosis", because the recent improvements in these domains as of this year have been nothing short of staggering. When you hand a computer terminal to one of these models, you can now watch them melt programming problems that you'd normally expect to take days/weeks of work.
It's this second group of people that assigns a much greater gravity to the capabilities, their slope, and various cyber-related repercussions. TLDR: the people in these two groups are speaking past each other. It really is simultaneously the case that OpenAI's free and, I think, slightly orphaned (?) "Advanced Voice Mode" will fumble the dumbest questions in your Instagram reels and, *at the same time*, OpenAI's highest-tier and paid Codex model will go off for 1 hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems. This part really works and has made dramatic strides because of 2 properties: 1) these domains offer explicit reward functions that are verifiable, meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed, yes or no, in contrast to writing, which is much harder to explicitly judge), but also 2) they are a lot more valuable in b2b settings, meaning that the biggest fraction of the team is focused on improving them. So here we are.
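Karpathy's first property, an explicit verifiable reward, can be sketched concretely. Below is a toy binary reward function in the unit-test style he describes; the `solve` entry point and the test format are illustrative assumptions, not any lab's actual RL harness:

```python
def verifiable_reward(candidate_src: str, tests) -> float:
    """Return 1.0 if the candidate program passes every test, else 0.0.

    This is the 'unit tests passed: yes or no' style of reward that makes
    code easy to train with reinforcement learning, in contrast to prose,
    where there is no crisp pass/fail signal.
    """
    namespace = {}
    try:
        exec(candidate_src, namespace)      # load the candidate solution
        solve = namespace["solve"]          # assumed entry point name
        for args, expected in tests:
            if solve(*args) != expected:
                return 0.0                  # any failing test zeroes the reward
        return 1.0
    except Exception:
        return 0.0                          # crashes and syntax errors also score zero

# A correct and a buggy candidate for "add two numbers":
tests = [((1, 2), 3), ((-4, 4), 0)]
print(verifiable_reward("def solve(a, b): return a + b", tests))  # 1.0
print(verifiable_reward("def solve(a, b): return a - b", tests))  # 0.0
```

The all-or-nothing signal is what makes the reward "verifiable": no human judgment is needed to score a rollout, which is exactly the property writing and advice queries lack.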
staysaasy@staysaasy

The degree to which you are awed by AI is perfectly correlated with how much you use AI to code.

1.1K · 2.4K · 20K · 4.1M
Adam Stuckert retweeted
kitze 🛠️ tinkerer.club
they checked my phone and didn’t let me in because i had openclaw in my contacts smh
[image attached]
60 · 30 · 976 · 43.2K
Adam Stuckert retweeted
Route 2 FI
Route 2 FI@Route2FI·
I guess the coins I like the most going into the new cycle are these:

1. $HYPE (perps, L1)
2. $TAO (AI, L1)
3. $NEAR (AI, L1, privacy)
4. $LIT (perps, the best bet on perps after HYPE)
5. $PUMP (memecoins, speculation)
6. $ZEC (L1, privacy)
7. $MON (new L1)
8. $MEGA (new L2)

New coins good, old coins bad. HYPE, LIT, PUMP, MON, and MEGA have never been through a bull market. Well, you could argue HYPE launched at the tail of the bull, but not a full cycle. TAO and NEAR are clear tokens in the AI narrative. ZEC is the "VC-privacy coin". But tokens are not stocks; they have no value. Yes, and no. I think this is one of the hardest "dilemmas" of the new cycle. Betting on tokens in 2023 felt like a no-brainer. We all had hopes that our coins would make a comeback at some point. Now, in 2026, with an infinite number of tokens, it's harder than ever to pick something. There is a huge difference between a good product and a good token, and since most tokens are governance tokens, do we really need them? Maybe not, but they remain the main vehicle for speculation. What about BTC, SOL, ETH? BTC should always be part of a core portfolio, maybe SOL and ETH also, but I think the ones above will outperform them. Anyway, my gut feeling says that there will be something else that takes the spotlight, and that "the new thing" will outperform all 8 I listed. These are my thoughts today. Next week or next month I could already have changed my mind, so NFA and do your own research.
129 · 74 · 806 · 168.7K
Adam Stuckert retweeted
DeFi Ned
DeFi Ned@defi_ned·
Howdy, friends! Trading terminals force you into their layouts like a one-size-fits-all sweater. @definedfi’s private beta (Redefined) lets you customize everything you like. Think Bloomberg Terminal for crypto. Spot. Perps. Predictions. One wallet. Here are 5 quick videos:
10 · 14 · 51 · 9.4K
Adam Stuckert retweeted
Marc Andreessen 🇺🇸
Magical OpenClaw experiences that use frontier models cost $300-1,000/day today, heading to $10,000/day and more. The future shape of the entire technology industry will be how to drive that to $20/month.
625 · 513 · 7.7K · 1.7M
Kris
Kris@krisco655·
I thought I was just moving my @openclaw setup from @AnthropicAI to @OpenAI. What I was actually doing was rebuilding the operating system around it. Since making the switch to GPT-5.4 I've had to:

- migrate the cron jobs
- fix a broken Windows gateway setup with PM2 and old scheduled tasks fighting each other
- clean up memory/docs that were still telling me outdated stuff
- audit the current iOS app I'm building and the giant research workspace behind it before moving everything to the Mac Mini
- recover old project decisions from Claude Web exports
- figure out which Codex workflow is real and which one is just nice in theory
1 · 0 · 1 · 39
Adam Stuckert retweeted
Michael Fisher
Michael Fisher@Captain2Phones·
I promise that some day I will get over my "pocket laptop" obsession – but today is not that day. NEW on the MrMobile YouTube channel: journey back 2 decades with me, for a look at a very special "High-Tech Computer" from the company that would come to be beloved for them. The HTC Universal from 2005 stars on the latest episode of "When Phones Were Fun." Join the live chat at 6p ET!
[4 images attached]
26 · 100 · 1.5K · 42.6K
Adam Stuckert retweeted
Milk Road AI
Milk Road AI@MilkRoadAI·
This is WILD (save this). The Wall Street Journal just got access to the private financial documents of OpenAI and Anthropic right before they go public. What's inside will change how you see the entire AI boom.

OpenAI is projected to burn through $665 billion in cash before it ever turns a real profit between now and 2030. At its worst point, OpenAI will lose roughly $170 billion in a single year, more than the GDP of most countries. The most valuable private company in American history, currently valued at $852 billion, is structurally designed to lose money at a scale the world has never seen. Model training costs alone are headed toward $440 billion by the end of the decade. Every time they build a smarter AI, the bill gets bigger, and the cost curve is not slowing down. Anthropic is doing the same thing at a smaller scale; its training costs will exceed $100 billion before 2030. It already pushed its break-even date back once, and there is no guarantee it does not move again.

Here is the part that matters most: OpenAI's gross margins fell from 40% to 33% in one year. Normal software companies run at 75 to 80 percent margins, but AI infrastructure runs closer to a utility or a railroad. The economics are completely different from what Wall Street is used to pricing.

But the revenue is real and explosive. OpenAI hit $13.1 billion in 2025, ahead of its own forecast. It projects $62 billion by 2027, while Anthropic projects $55 billion next year alone. The growth numbers are genuinely historic. But costs are growing faster than revenue right now, and both companies are about to ask everyday investors to fund the gap. OpenAI is targeting an IPO before the end of 2026, while Anthropic is racing to beat them to it. The entire bet comes down to one assumption: that by 2030, inference costs collapse, margins flip, and the money machine finally turns on. Who is likely to win this race?
[4 images attached]
Milk Road AI@MilkRoadAI

This is WILD! OpenAI's own CFO thinks the company isn't ready to go public, but Sam Altman doesn't want to hear it. He committed $600 billion in spending over five years, told investors the company will burn over $200 billion before making a single dollar of profit, and privately set a goal to IPO before the end of this year. His CFO, Sarah Friar, started telling colleagues the numbers don't add up and the organization isn't ready for public markets. Revenue growth is already slowing, and the cost to run OpenAI's own AI models quadrupled in a single year, crushing margins from 40% down to 33%. Altman's response? He stopped inviting her to meetings. The people in the room noticed immediately, describing her absence as "notable and awkward". He also restructured who she reports to; the CFO of one of the most valuable companies in history no longer has a direct line to the CEO. Now look at the $122 billion funding round that everyone celebrated last week. The money mostly comes from Amazon and Nvidia, the same companies OpenAI pays for chips and cloud computing every single month. They are investing in the company they're already billing. A large portion of Amazon's commitment doesn't even arrive unless OpenAI completes an IPO or achieves AGI. And the Stargate project, the $500 billion data center empire announced at the White House, has no staff, no built data centers, and has been stalled for over a year. OpenAI tried to own its own data centers and build its own infrastructure, but banks said no. Lenders refused to finance a company burning billions annually with no proven path to profit. So they went back to renting from the same suppliers who are now also their investors.

16 · 42 · 174 · 41K
Adam Stuckert retweeted
JFresh
JFresh@JFreshHockey·
everyone will have their own criteria for what generational means. it's not anything close to an exact science. to me, it's a very restrictive group that leaves out a lot of legends, and it's not close to an insult not to be on that list yet at age 19
[image attached]
233 · 25 · 1.1K · 315K