Cary Dunn

66 posts


@carydunn

everything is computer

nvim · Joined September 2008
1.2K Following · 144 Followers
Cary Dunn @carydunn ·
@jack @blocks sounds like an excuse, why wouldn't you just do more with more?
0 replies · 0 reposts · 0 likes · 13 views
jack @jack ·
we're making @blocks smaller today. here's my note to the company.

today we're making one of the hardest decisions in the history of our company: we're reducing our organization by nearly half, from over 10,000 people to just under 6,000. that means over 4,000 of you are being asked to leave or entering into consultation. i'll be straight about what's happening, why, and what it means for everyone.

first off, if you're one of the people affected, you'll receive your salary for 20 weeks + 1 week per year of tenure, equity vested through the end of may, 6 months of health care, your corporate devices, and $5,000 to put toward whatever you need to help you in this transition (if you're outside the U.S. you'll receive similar support but exact details are going to vary based on local requirements). i want you to know that before anything else. everyone will be notified today, whether you're being asked to leave, entering consultation, or asked to stay.

we're not making this decision because we're in trouble. our business is strong. gross profit continues to grow, we continue to serve more and more customers, and profitability is improving. but something has changed. we're already seeing that the intelligence tools we're creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. and that's accelerating rapidly.

i had two options: cut gradually over months or years as this shift plays out, or be honest about where we are and act on it now. i chose the latter. repeated rounds of cuts are destructive to morale, to focus, and to the trust that customers and shareholders place in our ability to lead. i'd rather take a hard, clear action now and build from a position we believe in than manage a slow reduction of people toward the same outcome. a smaller company also gives us the space to grow our business the right way, on our own terms, instead of constantly reacting to market pressures.

a decision at this scale carries risk. but so does standing still. we've done a full review to determine the roles and people we require to reliably grow the business from here, and we've pressure-tested those decisions from multiple angles. i accept that we may have gotten some of them wrong, and we've built in flexibility to account for that, and do the right thing for our customers.

we're not going to just disappear people from slack and email and pretend they were never here. communication channels will stay open through thursday evening (pacific) so everyone can say goodbye properly, and share whatever you wish. i'll also be hosting a live video session to thank everyone at 3:35pm pacific. i know doing it this way might feel awkward. i'd rather it feel awkward and human than efficient and cold.

to those of you leaving… i'm grateful for you, and i'm sorry to put you through this. you built what this company is today. that's a fact that i'll honor forever. this decision is not a reflection of what you contributed. you will be a great contributor to any organization going forward.

to those staying… i made this decision, and i'll own it. what i'm asking of you is to build with me. we're going to build this company with intelligence at the core of everything we do. how we work, how we create, how we serve our customers. our customers will feel this shift too, and we're going to help them navigate it: towards a future where they can build their own features directly, composed of our capabilities and served through our interfaces. that's what i'm focused on now.

expect a note from me tomorrow.

jack
8.7K replies · 6.6K reposts · 51.1K likes · 64.3M views
Cary Dunn reposted
Simon Willison @simonw ·
This one is pretty nasty - it tricks Antigravity into stealing AWS credentials from a .env file (working around .gitignore restrictions using cat) and then leaks them to a webhooks debugging site that's included in the Antigravity browser agent's default allow-list
PromptArmor @PromptArmor

Top of HackerNews today: our article on Google Antigravity exfiltrating .env variables via indirect prompt injection -- even when explicitly prohibited by user settings!

50 replies · 322 reposts · 2.2K likes · 314.7K views
Cary Dunn @carydunn ·
If everyone is so bulled up on google rn...how are they not short apple? I don't see the world where deepmind/gemini are taking over AI and apple is flourishing without device/OS competition. Apple goes from rent taker to rent payer super fast.
0 replies · 0 reposts · 0 likes · 32 views
Cary Dunn reposted
Andrej Karpathy @karpathy ·
Sharing an interesting recent conversation on AI's impact on the economy. AI has been compared to various historical precedents: electricity, industrial revolution, etc. I think the strongest analogy is that of AI as a new computing paradigm (Software 2.0) because both are fundamentally about the automation of digital information processing.

If you were to forecast the impact of computing on the job market in the ~1980s, the most predictive feature of a task/job to look at is to what extent its algorithm is fixed, i.e. are you just mechanically transforming information according to rote, easy-to-specify rules (e.g. typing, bookkeeping, human calculators, etc.)? Back then, this was the class of programs that the computing capability of that era allowed us to write (by hand, manually).

With AI now, we are able to write new programs that we could never hope to write by hand before. We do it by specifying objectives (e.g. classification accuracy, reward functions), and we search the program space via gradient descent to find neural networks that work well against that objective. This is my Software 2.0 blog post from a while ago.

In this new programming paradigm, the new most predictive feature to look at is verifiability. If a task/job is verifiable, then it is optimizable directly or via reinforcement learning, and a neural net can be trained to work extremely well. It's about the extent to which an AI can "practice" something. The environment has to be resettable (you can start a new attempt), efficient (a lot of attempts can be made), and rewardable (there is some automated process to reward any specific attempt that was made).

The more a task/job is verifiable, the more amenable it is to automation in the new programming paradigm. If it is not verifiable, it has to fall out from the neural net magic of generalization, fingers crossed, or via weaker means like imitation. This is what's driving the "jagged" frontier of progress in LLMs. Tasks that are verifiable progress rapidly, possibly even beyond the ability of top experts (e.g. math, code, amount of time spent watching videos, anything that looks like puzzles with correct answers), while many others lag by comparison (creative, strategic, tasks that combine real-world knowledge, state, context and common sense).

Software 1.0 easily automates what you can specify. Software 2.0 easily automates what you can verify.
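The "specify an objective, then search program space via gradient descent" loop described in the thread can be sketched on a toy problem. This is a hypothetical illustration (the linear task, data, and hyperparameters are mine, not from the thread):

```python
import numpy as np

# Software 1.0: write the rules by hand.
# Software 2.0: state a verifiable objective and let gradient descent
# search program space. Toy "program space" here: y = w*x + b.
rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, size=100)
ys = 3.0 * xs + 2.0                      # hidden target program to recover

w, b = 0.0, 0.0                          # initial candidate program
lr = 0.1
for _ in range(500):
    err = (w * xs + b) - ys              # the "verifier": prediction error
    w -= lr * 2.0 * np.mean(err * xs)    # gradient of mean squared error wrt w
    b -= lr * 2.0 * np.mean(err)         # gradient of mean squared error wrt b

print(round(w, 2), round(b, 2))          # converges to the target w=3, b=2
```

The point of the sketch is the shape of the loop, not the model: we never wrote the rule `y = 3x + 2`, we only wrote a way to score attempts, which is exactly the "verifiable" property the thread argues predicts automatability.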
553 replies · 1.5K reposts · 12.4K likes · 2.1M views
Cary Dunn reposted
TMT Breakout @TMTBreakout ·
The Non-Bubble that disappointed both Bulls and Bears -- how Sam's Splurge changed everything

The worst kept secret among Tech market participants — just something AI bulls don’t admit out loud: they want a price-action bubble every bit as much as the bears do. Both want to see that steep, “blow-off” ascent that characterizes parabolic tops. Why? The AI bulls are all fully loaded for a vertical melt-up and AI bears want the aftermath so they can yell “I told you so.”

It’s obvious to AI bulls (us included) that we’re not in an AI bubble. The non-argument is simple: valuations are reasonable (NVDA near 20x), the equity risk premium is almost 300bps above where it troughed in the tech bubble, operating margins are rich, and we’re still very early in the demand/build-out of the AI supercycle. But bulls will typically follow this argument up by saying “it’s more like ’97/’98.” Implicit in that statement is that they’re hoping the inevitable outcome is a ’97-’99 style ramp, with all hoping it would occur as soon as possible. Why? The simplest answer is usually the right one: everyone likes bigger bonuses as soon as possible. Stated succinctly: the “AI bubble” ascent was the paradigm that both bulls and bears were operating under for most of this year, or longer.

Bad news for the AI bulls and bears: the past few weeks have brought an end to that paradigm and led us to an unexpected turning point in the dynamics of the AI trade/narrative. On the 3-year anniversary of ChatGPT’s release, no less. And we have Sam’s $1.4T 30GW splurge to thank for it. Sam’s Splurge (we’ll call it “SS”) opened up AI’s pandora’s box, shifting the AI narrative in unexpected ways.

First, the overarching discussion has shifted to a greater focus on OAI’s ability to monetize and what that means across the Tech ecosystem, from ad platforms to software/services companies to GPUs to infra hosting like ORCL.
Despite the behemoth it already is, the market began to appreciate it was taking implicit bets on what is still a 3-year-old start-up industry/company.

Second, SS and connected deals brought more focus to the interconnectedness of the whole ecosystem, OAI’s outsized role in it, circular financing, and “too big to fail” discussions. [chart: The interconnectedness of the AI ecosystem]

Third, SS and his “Give us a few months and it’ll all make sense … We are not as crazy as it seems. There is a plan.” opened up discussions about government’s role in AI. While government intervention would help accelerate the AI buildout, it also opened a doorway of investor doubt. Reader CIO At CG expressed these opposite outcomes well in TMTB Chat: “It’s bullish if/when it happens. But until it happens it creates doubt if it is gravy (more upside) or if it’s needed to execute the 1trn+ commitments. Any doubts on ability to execute the 1.4trn is just bearish sentiment vis-à-vis today. So Friar opened a door that was closed. And by opening it, it opened both the left and right side of the distribution. It also makes people realize that they are too big to fail: if they fail to execute they will bring the entire ecosystem multiple down. And hard.”

Fourth, the sheer scale of the SS $1.4T plan, which is nearly the size of the whole private credit market, nudged both public and private lenders to reprice AI-linked risk, most notably seen in the rise of Oracle and CoreWeave’s CDS spreads. At the same time, off-balance-sheet structures — e.g., Meta’s $27B Hyperion SPV with Blue Owl — didn’t help, concentrating risk with private creditors and muddying system-wide leverage mapping.

The ironic thing is, if SS had been half the size, things would have continued to grind along, investors would have enjoyed the ’27 and ’28 visibility, maybe even building the energy for a large vertical ascent in price action.
Instead, it had the opposite effect: pouring too much gasoline on the fire and drowning out the energy for a big move up.

Fifth — by locking in commitments eight years out, SS dragged the long-horizon AI debate into the present. Over the last few weeks I’ve heard an increasing number of bulls give voice to risks they’ve normally been able to wave away over the first 3 years of the AI trade, in an unusual sign of humility. Some of the key existential questions that now feel more present in the discussion:

How does the grid support the post-’28/’29 buildouts, and what about water, land use, and local pushback? We’ve already heard of local governments slowing DC buildouts, and this week the WSJ wrote how Bernie and others are dialing up scrutiny of data centers.

If inference moves to phones/PCs/cars, how does that rebalance hyperscaler capex, useful-life assumptions, and who captures value? What’s the risk of stranded assets if models plateau or workloads shift to cheaper/edge solutions?

The AI catch-22 no bulls want to talk about: if enterprise agents and automation work as advertised, what’s the path for unemployment and wages? If white-collar unemployment rises, what happens to ad spend and consumer wallets — remembering that GOOGLE and META are cyclically exposed ad businesses at their core? How does this seep into their top line and capex trajectory?

If AI models don’t deliver, do we get a capex hangover and productivity disappointment?

All of this sits against a U.S. backdrop that’s still skeptical of AI — worried about job loss and asking for a slower, safer rollout — which can swing sentiment and policy quickly. Will the current administration still be as supportive of the AI rollout if sentiment and unemployment shift in a more negative direction? These are issues that will be a lot more prominent in the next 3 years of the AI trade than they were in the first 3 years.
This all began to seep into the price action of AI stocks several weeks ago: ORCL giving back all of its “monster RPO” move and more, very speculative sectors like Nuclear/Quantum rolling over, and the AI ecosystem rallying progressively less on each OpenAI deal that was announced. It all culminated in the last two weeks. We can thank some hawkish Fed speak and Sam’s now-infamous BG2 pod appearance for providing the spark needed to ignite the spreading fire.

In a period where nothing has changed fundamentally with respect to the AI trade, the market began more heavily digesting the overarching effect of SS: more unknowns and more uncertainty in the minds of investors. After all, the market isn’t just a mechanism for discounting fundamentals and perceived risk, but also the current emotional state of participants. With belief shifting from inevitable euphoria (read: vertical-ascent price action) to verification, SS has had the opposite effect of what Altman likely intended: more multiple compression and less belief in out-year estimates.

With greater uncertainty, it’s no wonder certain pockets of the market have underperformed: names with perceived questionable business models / debt issues (ORCL, CRWV, NBIS, Miners, etc.), names with perceived AI top-of-funnel / structural issues (DUOL, MNDY), names with rising opex as the market is less confident in how long heightened spend is here to stay (META). It’s also no surprise that as the market digests these new developments, the profitability factor has outperformed while names with good narratives and fast growth but little in the way of valuation support have underperformed: NET, PLTR, SHOP, TSLA, U. This is also why memory has been so strong: EPS revisions are currently happening —> there’s nothing uncertain about opening up your favorite DRAM/NAND spot-price checker, seeing how much DRAM/NAND has risen overnight, and plugging it into your model.
These names are arguably more attractive in the current environment than they were before. The market is currently doing what it always does after a narrative/paradigm shock: digest, recalibrate, reassign risk premia. NVDA EPS and Gemini 3 are the next events on the docket to absorb. We’re running low gross while we let the market do its thing, letting the overarching narrative/price action stabilize and become clearer.

@dylan522p at Semianalysis joked this week that time is now divided into BC (Before ChatGPT) and AD (After Da Launch of ChatGPT). We think the AI trade will eventually be divided between BSS (Before Sam’s Splurge) and ASS (After Sam’s Splurge). BSS and ASS. Wait - that doesn’t have as nice a ring to it, so let’s say it differently. We think the straight-line giddy phase of the AI trade will give way to something healthier: a phase where fundamentals and idiosyncrasies matter even more. Tech will always be a narrative-heavy, boom-and-bust investing sector (that’s part of the fun), but in a landscape where sentiment is more balanced, stock-picking will become more relevant. That’s a good thing.

SS popped the non-bubble. But the AI trade isn’t broken: it’s simply entering a more mature, scrutinized phase.
82 replies · 282 reposts · 1.7K likes · 915.3K views
Cary Dunn @carydunn ·
Refreshingly direct, didn't dodge every ROIC question.
Dwarkesh Patel @dwarkesh_sp

.@satyanadella gave me and @dylan522p an exclusive tour of Fairwater 2, the most powerful AI datacenter in the world. We then chatted through Satya's vision for Microsoft in a world with AGI.

0:00:00 - Fairwater 2
0:04:15 - Business models for AGI
0:13:42 - Copilot
0:20:56 - Whose margins will expand most?
0:37:12 - MAI
0:48:42 - The hyperscale business
1:03:39 - In-house chip & OpenAI partnership
1:10:30 - The CAPEX explosion
1:16:01 - Will the world trust US companies to lead AI?

Look up Dwarkesh Podcast on Youtube, Apple Podcasts or Spotify to tune in.

0 replies · 0 reposts · 1 like · 30 views
Cary Dunn @carydunn ·
Pretty sketch how many companies are willing to trust China-trained open-weight models given research like this. How could you ever tease out backdoors that might be present when running agent tools on poisoned context/UGC? anthropic.com/research/small…
0 replies · 0 reposts · 0 likes · 30 views
Cary Dunn @carydunn ·
Apple Vision Pro is basically the real Apple TV
0 replies · 0 reposts · 0 likes · 123 views
Cary Dunn reposted
Andrej Karpathy @karpathy ·
🔥 New (1h56m) video lecture: "Let's build GPT: from scratch, in code, spelled out." youtube.com/watch?v=kCc8Fm… We build and train a Transformer following the "Attention Is All You Need" paper in the language modeling setting and end up with the core of nanoGPT.
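The core of the lecture, scaled dot-product attention from "Attention Is All You Need", fits in a few lines of numpy. A minimal single-head sketch (my paraphrase under stated assumptions, not Karpathy's actual code; no batching, output projection, or training):

```python
import numpy as np

# Causal scaled dot-product self-attention: each token attends only to
# itself and earlier tokens. This is the core op inside nanoGPT.
def causal_self_attention(x, Wq, Wk, Wv):
    T, C = x.shape
    q, k, v = x @ Wq, x @ Wk, x @ Wv           # queries, keys, values
    att = (q @ k.T) / np.sqrt(k.shape[-1])     # affinities, scaled for stability
    mask = np.tril(np.ones((T, T), dtype=bool))
    att = np.where(mask, att, -np.inf)         # block attention to the future
    att = np.exp(att - att.max(axis=-1, keepdims=True))
    att = att / att.sum(axis=-1, keepdims=True)  # softmax over visible positions
    return att @ v                             # weighted sum of value vectors

rng = np.random.default_rng(0)
T, C = 4, 8                                    # 4 tokens, 8 channels
x = rng.standard_normal((T, C))
Wq, Wk, Wv = (rng.standard_normal((C, C)) for _ in range(3))
out = causal_self_attention(x, Wq, Wk, Wv)     # shape (4, 8)
```

Because of the causal mask, the first token's output is just its own value vector, which is an easy sanity check on the implementation.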
483 replies · 3K reposts · 19.9K likes · 5.3M views
Cary Dunn @carydunn ·
Doesn't every Mastodon instance need a corporate entity/legal/monetization strategy? Each instance is essentially running its own social media biz with OSS... just thinking about the implications of decentralization in this way.
0 replies · 0 reposts · 0 likes · 330 views
Cary Dunn @carydunn ·
It becomes a race to aggregate the most data (the more proprietary and harder to obtain the better)...and the most energy and compute resources to spend on training
1 reply · 0 reposts · 0 likes · 0 views
Cary Dunn @carydunn ·
One of the most concerning aspects of the current AI excitement is the potential acceleration and weaponization of data collection for the purpose of training models...*cough tiktokgooglefacebook*
1 reply · 0 reposts · 2 likes · 0 views