Ubaid Dhiyan

432 posts

@UbaidDhiyan

Infrastructure Software M&A. Engineer turned Banker turned Entrepreneur. Dad. Reader.

Joined August 2011
786 Following · 59 Followers

Pinned Tweet
Ubaid Dhiyan@UbaidDhiyan·
My 2024 Outlook on Generative AI and LLMs. As an investor, customer, or potential employee, there are five developing trends you should pay attention to. Check out my latest post for insights on key players and why they matter. linkedin.com/pulse/2024-out… via @LinkedIn
0 replies · 0 reposts · 0 likes · 142 views
Ubaid Dhiyan@UbaidDhiyan·
4/ The timing is especially interesting alongside @Akamai's @AnthropicAI infrastructure deal. @Akamai is pushing further into AI compute while also moving closer to the enterprise user and workflow layer through security.
1 reply · 0 reposts · 0 likes · 40 views
Ubaid Dhiyan@UbaidDhiyan·
1/ @Akamai has been in the news for two very different reasons: a $1.8B, seven-year cloud infrastructure commitment with @AnthropicAI, and reported talks to acquire browser security company @LayerxSecurity for ~$250M.
1 reply · 0 reposts · 0 likes · 41 views
Ubaid Dhiyan@UbaidDhiyan·
@AnthropicAI in talks to acquire @StainlessAPI for ~$300M, reports @theinformation. Developer infrastructure around AI models, including SDKs, documentation and MCP tooling, will become increasingly important. Expect developer tooling M&A to accelerate.
1 reply · 0 reposts · 0 likes · 66 views
Ubaid Dhiyan@UbaidDhiyan·
Yes! This!!
tautologer@tautologer

you keep citing SF problems as reasons why the tech industry would want to move away. but it's important to understand that the overwhelming majority of the tech industry is in Silicon Valley -- the literal valley running down the peninsula -- not San Francisco. SF's combined market cap is ~$800b in public companies plus startups and AI labs (which are driving most of the discourse), compared to almost $20 trillion in market cap in the south bay/peninsula, plus almost all the VC firms. in terms of employee count, Claude estimates there are ~75-95k tech industry employees in SF, and ~350-400k in the rest of the Bay. looking at a map of big tech company headquarters will likely be edifying. most of them are an hour+ outside SF.

and in many ways the suburbs in Silicon Valley are some of the most desirable places to live on Earth. nowhere else in the US can you find a pleasant climate year-round within an hour of two major cities (don't forget about San Jose!) with abundant high-paying jobs, very low crime, world-class schools, etc. small houses on the peninsula cost millions of dollars because _that's how much people value living there_. a huge portion of tech executives, employees who vested and exited a decade or two ago -- all your prospective angel investors and VCs -- and two-tech-income working families all live there, and they're really happy living there, and it would be just about impossible to convince them to leave.

so if you want a single reason why the tech industry is hard to move, that's it right there. most anyone who is established -- in particular all the decision makers behind all the capital -- has zero reason to move anywhere.

and don't forget about San Jose btw. Santa Clara County has more tech employment than SF and San Mateo combined. you never hear about San Jose's problems on twitter, and you know why? it doesn't really have any! a city of a million people with a median household income of $148,000 -- composed almost entirely of families living in nice little detached homes, working stable jobs. one of the beating hearts of the tech industry -- albeit completely culturally irrelevant.

so okay. you cited housing and crime as the reasons why people would want to move. crime is only a problem in SF -- we'll get to that later. housing is a problem everywhere in the Bay -- people complain about it a LOT. it's true! but also, housing is a problem everywhere. the Bay is worse than average, but not by a factor of 2x -- pretty much every major metro area in the country is hardly building housing. the housing problem in the Bay is so bad _precisely because people want to live here so much_, and they can afford to do so. and that's the thing -- tech industry employees _can afford to live here_. we complain, because paying the extraordinarily high prices sucks, but we can do it! tech employees are famously paid a lot of money! it's an annoyance for the tech industry -- it's only a real problem for everyone else. people make a lot of noise about it on twitter, and for good reason, but it's not driving the tech industry away from the region -- we're the ones setting the prices!

and then crime. yeah, SF has a crime/social disorder problem. again, this affects most of the tech industry not at all, because the overwhelming majority of the industry neither lives nor works here. and even within the city, most of the crime, grime, and disorder is concentrated in a few neighborhoods. as long as you don't linger in certain areas after sunset, the city feels very safe. yeah, social pathologies are pretty prominent, and this sucks, and people rightly make an issue about it. but this is not enough to actually make living here unpleasant. and in particular, if you can afford to rent outside SoMa, you hardly even see this except maybe if you commute down Market Street or go out in the Mission.

so while people rightly make a lot of noise about SF's governance failures here, it's not a compelling reason for the tech industry to leave the city, let alone the region. (also, it's only been really bad since covid, and it's been getting steadily better. the tech industry was instrumental in getting Lurie elected because of these problems, and he's making progress! yeah, the governance of the city sucks, but it's at least somewhat tractable.)

and sure, what about the startup scene? its center of gravity has shifted in large part to the city, away from Silicon Valley. why doesn't the next generation of founders leave, or found somewhere else? well, this might happen. but then you lose out on the talent and capital and networks and mentorship expertise and so on that is firmly rooted in the idyllic Valley. is that worth it? some people think so! we'll see what happens. but certainly this hasn't happened in the current generation. part of the reason you're hearing so much noise about housing currently is because ~all the AI startups are here, printing beaucoup bucks and driving up the prices.

again: you hear the most about SF's problems on twitter and on the news. partially this is because the squeaky wheel gets the grease, partially because the startup scene has shifted its center of gravity here, and ultimately because it's San Francisco. it's an iconic American city, it has a rich and storied history, it's a useful synecdoche for the Bay at large (especially for people with an ideological axe to grind), and it is a perennial driver of culture.

but don't mistake SF's problems, or the startup scene's grievances, for the broader Bay or the tech industry. the tech industry is in the Bay -- and has been for the last 70 years -- because it's one of the best places to live on earth, both because of the natural features of the region and because of the history and governance (no non-competes! very little crime outside SF!) and culture here.

and ultimately, SF has all this chaos and disorder and startups and culture because this is a city that draws and shelters freaks and weirdos and outsiders and pioneers, and as much as we'll always complain about it, we love it here, and we wouldn't have it any other way.

0 replies · 0 reposts · 1 like · 50 views
Ubaid Dhiyan@UbaidDhiyan·
@OpenAI's MRC announcement extends the AI infrastructure discussion from chips and interconnects into the protocol layer. As training clusters scale, the network fabric becomes part of the performance, reliability and economics of frontier AI. More here: udadvisory.co/intel/2026/arc…
0 replies · 0 reposts · 0 likes · 12 views
Ubaid Dhiyan@UbaidDhiyan·
The SAP modernization wave is becoming a useful lens for enterprise AI. @tesseralabsai's $60M Series A points to AI moving beyond developer productivity into the services-heavy work of enterprise transformation. More here: udadvisory.co/intel/2026/eve…
0 replies · 0 reposts · 0 likes · 35 views
Ubaid Dhiyan@UbaidDhiyan·
AI coding is usually discussed in the context of modern developer environments. @nova_ai is using AI to reduce the cost, complexity and risk of changing legacy systems where business logic and implementation debt have accumulated over decades. More here: udadvisory.co/intel/2026/mor…
0 replies · 0 reposts · 0 likes · 23 views
Ubaid Dhiyan@UbaidDhiyan·
@adityaag I'd argue a bigger chunk of Series A/B companies are not even at the $10-20mm ARR level. There is a third path besides the ones you point to: run an efficient M&A process that returns (multiples of) capital to stakeholders, preserves tech, and soft-lands the team.
0 replies · 0 reposts · 0 likes · 441 views
Aditya Agarwal@adityaag·
The hardest spot in venture is being a Series A/B company that is not growing. You are at 10-20M ARR, 20-50 employees, but are growing sub-25%. This setup is ngmi (not going to make it). You are not going to optimize and iterate your way out of that. My provocative take here is that instead of trying to iterate here... you should return to Minus One. Figure out the core assets you have and what you can build that might be a bigger shot on goal. This will be very, very hard. Frankly, I am not sure that many founders have the courage and fortitude to pull it off. But it is worth trying. Because the other path just leads to a slow decline and death. And that is much more painful.
23 replies · 8 reposts · 221 likes · 36.8K views
Ubaid Dhiyan reposted
Gavin Baker@GavinSBaker·
Much of Dwarkesh's argument hinges on this statement, which *was* accurate but will be increasingly inaccurate on a go-forward basis imo:

“American labs port across accelerators constantly. Anthropic's models are run on GPUs, they're run on Trainium, they're run on TPUs. There are so many things you can do, from distilling to a model that's well fit for your chips.”

As system-level architectures diverge (torus vs. switched scale-up topologies, memory hierarchies, networking primitives), true portability is eroding. The MI300 and MI325 had roughly the same scale-up domain size as Hopper, while Blackwell's scale-up domain is 9x larger than the MI355's, etc. Many frontier models are now being explicitly co-designed for inference on specific hardware like GB300 racks. Codex on Cerebras is another example. Those models run less efficiently on other systems, and the performance differentials will only widen. A model that runs well on Google's torus topology will run less efficiently on Nvidia's switched scale-up topology and vice versa -- the data traffic is fundamentally different as a byproduct of the models being parallelized across the different topologies.

Google's internal teams -- and increasingly the Anthropic teams, as they become the most important customer of almost every cloud -- have the luxury of operating across the stack (models, chips, networking), but that is not the case for the rest of the market and other prospective users. Anthropic is the exception, not the rule. To wit, Anthropic and Google allegedly have a mutual understanding where Anthropic can hire the TPU engineers they need every year to ensure that they can continue to get the most out of the TPU. Given the overwhelming importance of cost per token to the economics of the labs, models will be run where they run best. Most extremely large MoE models will run best on GB300s given the importance of having a switched scale-up network like NVLink for MoE inference.

When training was the dominant cost for labs and power was broadly available, labs were optimizing to minimize capex dollars. Model portability was a way to create leverage over suppliers. I think that drove a lot of the focus on portability. Today, inference costs as measured by tokens per watt per dollar are everything. Inference is way more important than training costs (inference is effectively now part of training via RL). Labs are therefore now optimizing for inference. This means increasing co-design and higher go-forward switching costs for individual models between systems. I do think this explains why Anthropic and Nvidia came together: Anthropic needed Blackwells and Rubins to inference at least *some* of their models economically. And Mythos might just end up being released coincident with the availability of Rubins for inference.

TLDR: as labs shift their focus from training to inference, the costs of portability and the upside of co-design to maximize tokens per watt per dollar both rise. Portability is likely to begin decreasing as a result.

I think what I might have respectfully added to Jensen's answer is that systems evolve under local selective pressures. The evolutionary pressure in America is a shortage of watts, so it makes sense for Nvidia to optimize, as an American company, for power efficiency and tokens per watt, and stay on copper as long as possible. China has a surfeit of watts. Chinese AI systems are already taking advantage of this with the Huawei CloudMatrix 384 and Atlas SuperPoD having an optical scale-up domain that is much larger than anything offered by Nvidia today, at the cost of *much* higher power consumption and much lower tokens per watt. The networking primitives for this Huawei system are very different from those for Nvidia's systems, and a model that runs well on Nvidia will not run well on that system and vice versa.

This means that if a Chinese ecosystem gets momentum, Chinese models might stop running well on American hardware. And when Chinese models run best on American hardware, America is in a better position, as this gives America a degree of leverage and control over Chinese AI that it risks losing to an all-Chinese alternative ecosystem. This architectural fork makes porting and distillation less effective and strengthens the pro-American national security case for selling China deprecated GPUs imo.

Also I will attest that I did not wake up a loser this morning.
79 replies · 225 reposts · 2.2K likes · 725.2K views
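Editor's aside: the "tokens per watt per dollar" metric at the center of the thread above can be made concrete with a tiny sketch. This is an illustrative toy only; the function name and every number below are invented for the example and do not come from the thread or from any real system specs.

```python
# Illustrative sketch of an inference-economics comparison using the
# "tokens per watt per dollar" framing from the thread. All figures
# are hypothetical placeholders, not real benchmark data.

def tokens_per_watt_per_dollar(tokens_per_sec: float,
                               watts: float,
                               dollars_per_hour: float) -> float:
    """Throughput normalized by power draw and hourly rental cost."""
    return tokens_per_sec / (watts * dollars_per_hour)

# Two hypothetical systems serving the same model (made-up numbers):
system_a = tokens_per_watt_per_dollar(12_000, 10_200, 90.0)  # e.g. a switched scale-up rack
system_b = tokens_per_watt_per_dollar(9_000, 14_000, 70.0)   # e.g. a larger optical domain

print(f"system A: {system_a:.6f}")
print(f"system B: {system_b:.6f}")
# Co-designing a model for one topology shifts its throughput number
# further in that system's favor; that widening gap is the rising
# switching cost the thread describes.
```

The point of the toy is only that the metric couples throughput, power, and price: a co-design win on any one axis on one system, with no corresponding gain elsewhere, moves the whole comparison.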
Kath Korevec@simpsoka·
Can’t wait to join the team at @openai building codex. Would love to hear what you love about it or want changed. We’re moving fast. DMs open.
279 replies · 23 reposts · 1.4K likes · 295.1K views
Ubaid Dhiyan@UbaidDhiyan·
@pitdesi Of all the funny ARR numbers out there, this one has to be the funniest
1 reply · 0 reposts · 9 likes · 4.6K views
Sheel Mohnot@pitdesi·
cool, but I don't understand it. There are 100s of GLP factories that prescribe GLPs after a few Qs. Medvi's flow is particularly bad. I assume margins would have been competed away. Hims did $2.35B of revenue with 2k employees. This 2-person co in the space is doing $1.8B?
Sar Haribhakti@sarthakgh

.@eringriffith: "His start-up, Medvi, a telehealth provider of GLP-1 weight-loss drugs, got 300 customers in its first month. In its second month, it gained 1,000 more. In 2025, Medvi’s first full year in business, the company generated $401 million in sales. Mr. Gallagher then hired his only employee, his younger brother, Elliot. This year, they are on track to do $1.8 billion in sales." nytimes.com/2026/04/02/tec…

90 replies · 38 reposts · 941 likes · 334.9K views
Ubaid Dhiyan reposted
Gergely Orosz@GergelyOrosz·
This is either brilliant or scary: Anthropic accidentally leaked the TS source code of Claude Code (which is closed source). Repos sharing the source are taken down with DMCA. BUT this repo rewrote the code using Python, and so it violates no copyright & cannot be taken down!
443 replies · 1.2K reposts · 12.9K likes · 2.2M views