Hoboken Squat Cobbler
@canyoudugit8
15.8K posts

#49ers, Father of one, obsessed with 9ers football. #FTTB
Sacramento, CA · Joined October 2012
3.7K Following · 1.6K Followers
Hoboken Squat Cobbler @canyoudugit8
@elonmusk More Democrats live in cities and don’t have driver’s licenses, so you want to make it harder for them to vote by pretending there is voter fraud. If your ideas are better, you shouldn’t be afraid of voters supporting them.
Hoboken Squat Cobbler @canyoudugit8
@bubbleboi I’m going to go to the dollar store right now and buy 40 helium-filled balloons as a hedge
Rohan Paul @rohanpaul_ai
Chamath on how AI agents are making the "10x engineer" distinction disappear because the most efficient "code paths" are now obvious to everyone. Just as AI solved chess and removed the mystery of the best move, AI is doing the same for coding, making the process reductive and removing technical differentiation.

"I'm going to say something controversial: I don't think developers anymore have good judgment. Developers get to the answer, or they don't get to the answer, and that's what agents have done. The 10x engineer used to have better judgment than the 1x engineer, but by making everybody a 10x engineer, you're taking judgment away. You're taking code paths that are now obvious and making them available to everybody. It's effectively like what happened in chess: an AI created a solver so everybody understood the most efficient path in every single spot to do the most EV-positive (expected value positive) thing. Coding is very similar in that way; you can reduce it and view it very reductively, so there is no differentiation in code."

From @theallinpod YT channel (link in comment)
Hoboken Squat Cobbler @canyoudugit8
@OmerCheeema All stock prices are low as Wall Street isn’t going to run up prices in the lead-up to taxes. Execution risk for AMD is wildly lower than execution risk for Nvidia. No one has ever run HBM at 13 Gbps per pin at scale. Should be a thrill ride.
Omer Cheema @OmerCheeema
AMD's stock price is low as the market is factoring in market and execution risks. The AMD-Samsung memory deal reduces the execution risk significantly. Very good for AMD.
Mike @MikeLongTerm
BREAKING: $AMD CTO on Agentic AI (Full) 🆕 03/18/26

0:00 - 3:00: Introduction to Agentic AI in Chip Design
Mark Papermaster introduces the season and topic: the application of Agentic AI in engineering, specifically for chip design at @AMD. Alex Starr explains the shift from manual pre-silicon verification to using cutting-edge AI techniques to increase value and speed (2:11). They discuss how this AI revolution has moved from simple two-way interactions to agentic workflows capable of chaining together meaningful outputs (4:30).

3:00 - 6:00: Defining Agentic Workflows and Quality
Alex defines meaningful output as the result of an iterative process where agents produce content, critique it, and respin it to achieve a much higher quality than a single model could produce alone (5:27). They emphasize that this applies to various industries, not just semiconductors, due to the high adaptability of these AI techniques (5:50).

6:00 - 9:00: Complexity and the Role of AI
Chip design involves writing specialized code (Verilog), verifying it, and physical design (turning code into physical gates) (6:45). With chips now featuring over 200 billion transistors, the complexity is too high for humans to manage alone (7:36). AI acts as a virtual engineer to handle debugging, root cause analysis, and automatic code fixes (8:08), as well as physical design tasks like timing closure (8:36).

9:00 - 12:00: Ensuring Perfection and Collaboration
Despite the speed, the chips must be right first time (9:05). AMD collaborates closely with EDA partners like Cadence, Synopsys, and Mentor (10:29) to integrate AI components into their frameworks while using AMD-specific data to enhance the tools (10:56). Agentic flows help break down organizational silos by allowing agents to communicate across domains like verification, thermal, and packaging (12:20).

12:00 - 15:00: Holistic System-Level Design
They discuss moving from chip-level optimization to system-level scale (13:42). AI allows for co-optimization between software workloads and hardware architecture (14:14), enabling whole clusters of systems to work correctly upon release (14:26). This holistic approach tackles complexity that no human could previously fathom (14:32).

15:00 - 18:00: Debugging and Regression Success
Alex shares a major success story: using agentic flows to root cause, debug, and fix hardware design code in CPU cores without human intervention (15:52). They run millions of simulations nightly, creating thousands of issues (16:32). Agents now triage and fix these issues, allowing for higher quality designs within tighter timelines (16:50).

18:00 - 21:00: Building AI Infrastructure and Culture
Creating these workflows is not trivial; it requires hard engineering to build the necessary infrastructure to string AI tools together (18:55). The rapid change creates cultural disruption where some teams move faster than others (19:50). AMD is focusing on increasing the AI IQ of its 20,000 engineers to shift from being computation-limited to idea-limited (20:56).

21:00 - 24:00: The New Moat: Speed
Alex explains that while they face challenges, the ROCm software stack is thriving, and techniques from the software space are heavily influencing hardware design (22:00). Mark highlights that speed is the new moat (22:43). Failing to adopt these agentic flows will likely cause companies to get left behind (23:10).

24:00 - 27:49: The Future of Autonomous Design
Looking forward, Alex predicts autonomous workflows are coming soon (24:13), which will accelerate time to market and allow for higher quality products. AMD is driving 100% training of its workforce to ensure adoption (24:46).

Source & Credit: youtube.com/watch?v=fj1iRi…
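As a rough illustration (not from the podcast), the produce-critique-respin loop described above might look like the following minimal Python sketch; the agent functions are stubs with no real model calls, and the names and scoring scheme are invented for illustration, not AMD's actual tooling.

```python
# A minimal sketch of the produce -> critique -> respin loop described above.
# The "agents" are stubs (no real model calls); names and scoring are invented.
from dataclasses import dataclass

@dataclass
class Critique:
    score: float     # 0.0 (unusable) .. 1.0 (ship it)
    feedback: str

def produce(task: str, prior_draft: str = "", feedback: str = "") -> str:
    """Producer agent: drafts or revises an artifact, e.g. a Verilog patch."""
    if not prior_draft:
        return f"draft for '{task}'"
    return prior_draft + f" [revised: {feedback}]"

def critique(draft: str) -> Critique:
    """Critic agent: scores the draft and says what to improve."""
    score = min(1.0, 0.4 + 0.25 * draft.count("[revised"))  # toy heuristic
    return Critique(score, "tighten timing on the hot path")

def respin(task: str, max_rounds: int = 5, threshold: float = 0.8) -> str:
    """Iterate produce -> critique until the critic is satisfied or rounds run out."""
    draft = produce(task)
    for _ in range(max_rounds):
        review = critique(draft)
        if review.score >= threshold:
            break
        draft = produce(task, draft, review.feedback)
    return draft

print(respin("fix a nightly regression in the CPU core"))
```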
Mike @MikeLongTerm

$AMD on Track for $77-$94B Rev | $TSM Supply 🧵 Part 1

Context: We can't talk about @AMD's $77-$94B revenue without talking about TSMC's aggressive supply ramp-up recently. Analysts' current consensus for AMD's FY2026 revenue hovers around $40-50 billion on average, based on aggregates from sources. This represents roughly 34% growth from the estimated 2025 base of about $34-35B; analysts do not believe 2 GW of revenue is going to happen, and none have come out to readjust their projections at the moment. This thread will be in 2 parts. Feel free to subscribe if you want to read the complete thread. It is also important to know that the AMD MI500 series will be on TSMC 2nm as well, so Dr. Su is securing allocation now for 2027-2030 as TSMC expands capacity, with contracts signed!

The lowest end of the FY2026 projection:
- AI GPUs: $40-50B
- EPYC data center: $15-20B
- Client segment: $12-13B
- Gaming: $6B
- Embedded: $4-5B
- Total revenue: $77-94B
- Non-GAAP net income: $19.3B-23.5B
- Non-GAAP EPS: $12-14.70

Taiwan Semiconductor Manufacturing Company $TSM is aggressively expanding its fabrication facilities (fabs) worldwide to meet surging demand for advanced semiconductor nodes, particularly 2nm and 3nm processes driven by AI, high-performance computing (HPC), and mobile applications. TSMC's capital expenditures are projected to reach $52-56 billion for 2026, up as much as 40% from 2025, with further increases anticipated to $65-70 billion in 2027. This investment prioritizes capacity ramps for 3nm (currently at ~95-100% utilization in Taiwan) and 2nm (volume production started Q4 2025). While Taiwan remains the core for cutting-edge nodes, international sites in the US, Japan, and Germany are accelerating to diversify supply chains and support high demand from customers like $AMD.

1. Recent US-Taiwan Trade Deal and Tariff Reductions
In January 2026, the US and Taiwan signed a landmark trade agreement aimed at bolstering US semiconductor manufacturing and supply chain resilience. This deal, negotiated through the American Institute in Taiwan and the Taipei Economic and Cultural Representative Office, includes reciprocal tariff adjustments and massive investment commitments from Taiwanese firms like TSMC.
- Tariffs: The US agreed to cap reciprocal tariffs on Taiwanese goods at no more than 15% (down from the previous 20% baseline for many categories). This applies broadly to Taiwanese exports but with targeted benefits for semiconductors and related equipment. For instance, chips imported to support US fab buildouts (for TSMC's Arizona operations) qualify for preferential treatment, including potential duty-free imports or offsets under a new tariff program.
- Investment commitments: In exchange, Taiwanese semiconductor and tech firms pledged at least $250 billion in direct US investments over the coming years, focused on semiconductors, energy, and AI. TSMC's previously announced $100-165 billion Arizona gigafab cluster is included in this total, with additional funds earmarked for expansions (potentially adding 5-6 more fabs beyond the initial six). This would increase capacity for $AMD to extend to 25-30 GW by 2030; roughly 18-20 GW has been secured so far.

2. Implications for TSMC's US Progress
The deal reduces operational costs for TSMC's Arizona fabs by easing tariffs on imported equipment, materials, and intermediate products needed for construction and ramp-up. It also complements the $6.6 billion in CHIPS Act subsidies, helping offset higher US labor and regulatory costs (which have kept margins ~10-15% below Taiwan levels initially).

As of March 2026, Fab 1 (4nm) is at full utilization with yields surpassing Taiwan's, Fab 2 (3nm) is on track for H2 2027 production (pulled forward 6 months), and Fab 3 (2nm) construction is advancing rapidly. "New fab" here refers to significant phases or dedicated facilities coming online or ramping in 2026-2028. Incremental capacity for AMD is estimated as a share of the added wafers (rough range: 10-25% for AMD on constrained nodes, based on its AI/HPC demand growth and public statements; actuals vary by contract and could be higher if AMD secures more via US production).

End of Part 1. Not financial advice!
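A quick sanity check of the thread's segment math, using only the figures quoted above; the ~1.6B share count is not stated in the thread, it is simply what its own net income and EPS figures imply.

```python
# Sanity check of the FY2026 projection above (segment figures from the thread).
segments_b = {
    "AI GPUs": (40, 50),
    "EPYC data center": (15, 20),
    "Client": (12, 13),
    "Gaming": (6, 6),
    "Embedded": (4, 5),
}
low = sum(lo for lo, _ in segments_b.values())    # 77
high = sum(hi for _, hi in segments_b.values())   # 94
print(f"Total revenue: ${low}B-${high}B")

# Implied share count: net income / EPS (both quoted in the thread).
# ~1.6B shares in both cases; derived here, not an official figure.
print(19.3 / 12.0, 23.5 / 14.7)   # ~1.61, ~1.60 (billions of shares)
```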

3X Long Labubu @labubu_trader
I'm accumulating AMD leap calls but it's still a small position. I think the market underestimates AMD's CPU advantage in the AI agent world and its catch-up potential in model inference. ROCm is a joke now compared to CUDA. But with Claude Code's help, I think the gap will close much faster than people can imagine.
3X Long Labubu @labubu_trader

@0xWaroy The Open claw/agent narrative will last a long time in the AI world, and AMD is a very good agent play as well. I don’t think SNOW/MDB/DDOG are good AI agent plays for now. There is no evidence they will benefit the most from the AI agent movement.

Hoboken Squat Cobbler @canyoudugit8
@StockStormX @yianisz 2027: mass shipping Helios and ramping MI500X.
- 1.25 GW to OpenAI
- 1.25 GW to Meta
- ~4 GW across Microsoft, Oracle, Amazon, Humain, G42, xAI, Tata, Naver
At $20B a GW: total revenue $120B, 30% profit margin, $22 EPS, $1T market cap.
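A back-of-the-envelope check of that scenario; the ~1.62B diluted share count is an outside assumption, everything else is taken from the tweet.

```python
# Back-of-the-envelope check of the 2027 scenario above.
SHARES_OUTSTANDING_B = 1.62  # billions of shares (assumed, not from the tweet)

revenue_b = 120.0        # $120B total revenue (from the tweet)
profit_margin = 0.30     # 30% profit margin (from the tweet)

net_income_b = revenue_b * profit_margin          # ~$36B
eps = net_income_b / SHARES_OUTSTANDING_B         # ~$22 per share
implied_pe_at_1t = 1000.0 / net_income_b          # P/E implied by a $1T market cap

print(f"Net income: ${net_income_b:.0f}B")                 # $36B
print(f"EPS: ${eps:.2f}")                                  # ~$22.22
print(f"P/E at $1T market cap: {implied_pe_at_1t:.1f}x")   # ~27.8x
```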
StockStorm @StockStormX
@yianisz Calling $AMD a stealth $1T basically says the real trade is time horizon.
Yiannis Zourmpanos @yianisz
$AMD is a $1T market cap company... the market just doesn’t see it yet.
Hoboken Squat Cobbler @canyoudugit8
@jukan05 Don’t believe that for a second. I do believe AMD sees that Taylor plant and they are licking their chops. I think this rumor is like the Micron / Nvidia rumor.
Jukan @jukan05
Just in: Samsung has reportedly attached a condition to supplying HBM to AMD — namely, that a certain portion of AMD’s advanced AI chips be manufactured at Samsung Foundry. (Chosun Biz)
Hoboken Squat Cobbler @canyoudugit8
@EthanLevins2 Dude wanted war with Iran for 40 years and got “toe tagged” a week into that shit. Alanis Morissette is going to write a song about this.
Ethan Levins 🇺🇸 @EthanLevins2
Netanyahu died on March 8th. This is why Yair took an unusual 7 day break on X, for the Jewish Shiva mourning period. He tweeted last on March 8th, then waited exactly 7 days to retweet a post on March 15th.
Hoboken Squat Cobbler @canyoudugit8
@yianisz Micron is making 16-high HBM4 that will have 48 GB per stack vs 36 GB for a 12-high stack. Samsung has developed the exact same.
Current HBM4 (12-high): Rubin 288 GB, MI455X 432 GB
New HBM4 (16-high): Rubin 384 GB, MI455X 576 GB
Bus width stays the same, bandwidth stays the same.
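The capacity arithmetic implied by those figures, as a tiny sketch; the per-GPU stack counts (8 for Rubin, 12 for MI455X) are not stated above, they are just what 288/36 and 432/36 work out to.

```python
# Capacity = HBM stacks per GPU x GB per stack.
# Stack counts inferred from the quoted totals (288/36 = 8, 432/36 = 12),
# not stated in the tweet.
STACKS = {"Rubin": 8, "MI455X": 12}

def hbm_capacity_gb(gpu: str, gb_per_stack: int) -> int:
    """Total HBM capacity for a GPU given per-stack density."""
    return STACKS[gpu] * gb_per_stack

for gpu in STACKS:
    print(gpu,
          "12-high (36 GB/stack):", hbm_capacity_gb(gpu, 36), "GB,",
          "16-high (48 GB/stack):", hbm_capacity_gb(gpu, 48), "GB")
# Rubin: 288 GB -> 384 GB; MI455X: 432 GB -> 576 GB, matching the figures above.
```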
Yiannis Zourmpanos @yianisz
I think HBM4 is one of the most underappreciated bottlenecks in the entire AI stack right now. Vera Rubin isn’t just a GPU upgrade, it’s a memory escalation:
- 16 HBM4 stacks per GPU
- 576 GB memory (33% more than AMD)
- ~22 TB/s bandwidth

HBM4 is much harder to build. It moves from solder connections to direct copper-to-copper bonding, which is far more precise and difficult to manufacture. And that’s where the real insight is: this isn’t just a memory story, it’s a manufacturing story.

Who wins:
- $HXSCL SK Hynix: dominant supplier
- $SSNLF Samsung: validated + gaining share
- $MU Micron: partially sidelined (only mid-tier Rubin CPX)

But the hidden winner: $BESI / $BESIY, critical equipment for hybrid bonding (no real alternative at scale). The more HBM per GPU (now 16 stacks), the more pressure on advanced packaging tools.

AI = memory + manufacturing constraints driving the next winners.
Aaron @Arronwei3n
@canyoudugit8 x86 is still much more efficient than ARM.
Aaron @Arronwei3n
Love this CPUs part, worth reading.

Jensen: we were never against CPUs, we don’t want to violate Amdahl’s Law. Accelerated computing, in fact, inside our systems, we choose the best CPUs, we buy the most expensive CPUs, and the reason for that is because that CPU, if not the best and not the most performant, holds back millions of dollars of chips.

The Role of CPUs in Accelerated Computing

Well, to this point, one of the big things with agents coming online is, you’ve talked a lot about accelerated computing, I think you’ve trash talked, as it were, maybe the CPUs to the day they’re all gonna be removed, like everything’s gonna be accelerated. Suddenly CPUs are hot again. It turns out they’re pretty useful and important. To the extent you are selling CPUs now, how’s it feel to be a CPU salesman?

JH: There’s no question that Moore’s Law is over. Accelerated computing is not parallel computing. Go back in time: 30 years ago, there were probably 10, 20, 30 parallel computing companies, only one survived, Nvidia, and the reason why is because we had the good wisdom of recognizing the goal wasn’t to get rid of the CPU, the goal was to accelerate the application.

So what I just falsely accused you of was actually true for everybody else.

JH: We were never against CPUs, we don’t want to violate Amdahl’s Law. Accelerated computing, in fact, inside our systems, we choose the best CPUs, we buy the most expensive CPUs, and the reason for that is because that CPU, if not the best and not the most performant, holds back millions of dollars of chips.

When it comes to branch prediction, you worried about wasting CPU time, now you’re worried about wasting GPU time.

JH: That’s right, you just never can have GPUs be squandered, GPU time be idle. And so we always use the best CPUs, to the point where we went and built Grace so that we could have the highest performance single-threaded CPU and move data around a lot faster. And so accelerated computing was never against CPUs; my thesis is still true that Moore’s Law is over, the idea that you would use general purpose computing and just keep adding transistors, that is so dead, and so I think fundamentally we’re not against CPUs.

However, these agents are now able to do tool use, and the tools that they want to use are tools created for humans, and they’re basically two types. There’s the stuff that we run in data centers and most of it is SQL, most of it is database related, and the other type is personal computers. We’re now going to have AIs that are able to learn unstructured tool use; the first type of tool use is structured. CLIs are tool use, APIs, they’re all structured tool use, the commands are very explicit, the arguments are explicit, the way you talk to that application is very specific. However, there’s a whole bunch of applications that were never designed to have CLIs and APIs, and those tools need AIs to learn multi-modality, unstructured, and it has to go and be able to go surf a website and it has to be able to recognize buttons and pull-down menus and just kind of work its way through it like we do. That tool use is going to want to use PCs, and we have both sides: we have incredibly great data processing systems, and as you know, Nvidia’s PCs are the most performant in the world.

So what makes an agent-focused CPU different from other CPUs? So you’re going to have a rack of just Vera CPUs.

JH: Oh, really good, excellent.

So the way that CPUs were designed in the last decade, they were all designed for hyperscale cloud, and the way that hyperscale cloud monetizes CPUs is by the CPU core. So you want to design CPUs that have as many cores as possible that are rentable; the performance of it is kind of secondary. You’re dealing with web latency by and large.

JH: That’s exactly right, exactly. And so the number of CPU instances is what you’re optimizing for. That’s why you see these CPUs with a couple of hundred, 300, 400 cores coming. Well, they’re not performant, and for tool use, where you have this GPU waiting for the tool use—

And you’re going over NVLink.

JH: That’s right, you want the fastest single-threaded computer you can possibly get.

So is it just the speed? Or does the CPU itself need to be increasingly parallel so it doesn’t have misses and things like that? Or so it’s like just all the way down the pipeline is very different?

JH: Yeah, the most important thing is single-threaded performance and the I/O has to be really great. Because it’s now in the data center, the number of single-threaded instances running is going to be quite high and therefore, it’s going to bang on the I/O system, it’s going to bang on the memory controller really hard. Vera’s bandwidth-per-CPU core, bandwidth-per-CPU, is three times higher than any CPU that’s ever been designed, and so it’s designed so that it has lots and lots of I/O bandwidth and lots and lots of memory bandwidth, so that it never throttles the CPU. If the CPU gets throttled, then we’re holding back a whole bunch of GPUs.

Is this Vera rack, is it still, you talked about it being very tightly linked to the GPU rack, but is it still disaggregated so that the GPUs can be serving multiple different Vera cores? Whereas you have a Vera core on a board with-

JH: Yeah.

Okay, got it, that makes sense. How does your Intel partnership and the NVLink thing fit into this, if at all?

JH: Excellent. Some of the world is happy with Arm; some of the world still needs, particularly, you know, enterprise computing, a whole bunch of stacks that people don’t want to move, and so x86 is really important to that.

Has the resiliency of x86 code been surprising to you?

JH: No. Nvidia’s PC is still x86, all of our workstations are x86.

$NVDA
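Amdahl's Law itself is not spelled out in the excerpt, so as a refresher, here is a minimal sketch with illustrative numbers showing why the serial, CPU-bound slice of a workload caps the whole system; the fractions below are made up for illustration, not taken from the interview.

```python
# Amdahl's Law: overall speedup when only a fraction p of the work is accelerated.
#   speedup(p, s) = 1 / ((1 - p) + p / s)
# The parallel fractions below are illustrative, not from the interview.

def amdahl_speedup(parallel_fraction: float, accel_factor: float) -> float:
    """Overall speedup if `parallel_fraction` of the runtime is sped up by `accel_factor`."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / accel_factor)

# Even with an effectively infinite GPU speedup on 95% of the work, the remaining
# 5% of serial (CPU-bound) time caps the whole system at ~20x: a slow CPU really
# does hold back every GPU attached to it.
for p in (0.90, 0.95, 0.99):
    print(f"parallel fraction {p:.0%}: max speedup {amdahl_speedup(p, 1e9):.1f}x")
```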
Stratechery @stratechery

3-17-2026 An Interview with Nvidia CEO Jensen Huang About Accelerated Computing stratechery.com/2026/an-interv…
