Chris

428 posts

@hellocloh

Product guy in health tech. Ex-founder. Starting to build again.

New York, NY · Joined October 2009
184 Following · 43 Followers
Chris
Chris@hellocloh·
Obviously supply volume is a big factor, but the 11% utilization seems plausible compared to their demand. Grok just isn't that popular compared to Gemini, ChatGPT/Codex, or Claude Code. And since most of the utilization comes from inference, GPU utilization is a rough proxy for overall demand (obviously taking GPU supply into account somehow)
0
0
0
45
The Information
The Information@theinformation·
xAI’s GPU fleet is running at about 11% utilization, exposing how hard it is for AI labs to fully use expensive Nvidia hardware. Read more in our AI Agenda newsletter: thein.fo/4cHRjWI
57
51
531
1.2M
Chris
Chris@hellocloh·
@HankYeomans @JustAnotherPM That’s an interesting take. Wouldn’t you say that an eval being ambiguous means that it’s a bad eval and should be fixed?
0
0
0
4
Hank Yeomans
Hank Yeomans@HankYeomans·
@JustAnotherPM Mmmmmm I would say writing prototypes. Evals are still too ambiguous and open to too much interpretation. "Writing evals" implies someone let it rip, came back, eval'd, and let it rip again to iterate. Too long.
1
0
0
21
JustAnotherPM | Sid
JustAnotherPM | Sid@JustAnotherPM·
Most AI PMs are still writing PRDs. The ones shipping are writing evals.
2
3
10
658
Chris
Chris@hellocloh·
@mcuban Challenging AI output is the key. It's a force multiplier when you apply it to a foundation that's your own thinking (hence why adding so much context upfront in a prompt is valuable). But if you give it freedom, its confidence can be convincing even when it's wrong
0
0
0
536
Mark Cuban
Mark Cuban@mcuban·
I’m coming to the conclusion that the biggest challenge for Enterprise AI, and AI in general, as of now, is that it’s still impossible to make sure that everyone gets the same answer to the same question, every time. Which is a great response to the doomers. AI doesn’t know the consequences of its output. Judgement and the ability to challenge AI output is becoming increasingly necessary, and valuable. Which makes domain knowledge more valuable by the second. Am I wrong?
1.8K
397
5.7K
1.3M
Ivan Burazin
Ivan Burazin@ivanburazin·
I'm just gonna say it: GChat > Slack. I cannot believe no one is using this
45
2
111
37.8K
Chris
Chris@hellocloh·
@DBredvick @GarrettLord Good old-fashioned deterministic code - some people say it's making a comeback; others will realize it never left :)
0
0
1
9
Drew Bredvick
Drew Bredvick@DBredvick·
@GarrettLord Lots of ways to fix this:
- human in the loop
- smaller defined tasks
- good old fashioned deterministic code
Everyone is just asking too much of the LLM. Good software fixes this.
2
0
0
138
Garrett Lord
Garrett Lord@GarrettLord·
the reason enterprise agents aren't working isn't that the models are bad. it's that errors compound. one task at 90% accuracy is fine. three tasks chained together and you're at 73%. five and you're below 60%. that's not a model problem. that's an architecture problem. the companies that figure out how to break long-horizon work into evaluable steps and catch failures before they compound are the ones that actually automate real processes. everyone else is stuck automating small tasks and calling it AI transformation.
4
2
21
2.3K
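[The compounding arithmetic in the tweet above can be sketched in a few lines of Python, assuming the chained tasks are independent so their success probabilities multiply; `chained_accuracy` is an illustrative name, not anything from the thread.]

```python
# End-to-end success rate of a pipeline of independent steps,
# each succeeding with probability p: accuracy decays geometrically.

def chained_accuracy(p: float, n_tasks: int) -> float:
    """Probability that all n_tasks steps at per-step accuracy p succeed."""
    return p ** n_tasks

for n in (1, 3, 5):
    print(f"{n} task(s) at 90%: {chained_accuracy(0.9, n):.0%}")
# 1 task stays at 90%, 3 tasks land at ~73%, 5 tasks fall to ~59%,
# matching the "below 60%" figure in the tweet.
```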
Chris
Chris@hellocloh·
@pmitu @grok compare the current state of the financial markets given AI optimism vs dot com bubble
1
0
0
30
Paul Mit
Paul Mit@pmitu·
Aren’t we currently in a “dot-com” bubble of AI? Or not yet?
70
1
69
5.5K
Chris
Chris@hellocloh·
@dweekly Interesting… and what caused you to arrive at 20%? 80/20 rule?
0
0
0
5
David E. Weekly
David E. Weekly@dweekly·
If you aren't coding at all using AI, you're maybe doing it wrong. If you're accepting >90% of the changes AI is suggesting, you're DEFINITELY doing it wrong. The sweet spot is about a 20% rejection rate - you're paying attention and actively guiding/critiquing.
9
1
13
1.3K
Chris
Chris@hellocloh·
@winstonweinberg @saranormous Congrats. What are you targeting for "hours spent a month"? And any reason why you track it per month instead of weekly? Guessing something related to the cyclical nature of use?
0
0
0
1.1K
Winston Weinberg
Winston Weinberg@winstonweinberg·
We had an incredible April at Harvey.
- Net new ARR is up 6x YoY
- We’re about to break 50% DAU/MAU
- Our average user now spends 12 hours a month using Harvey
Job's not finished.
Winston Weinberg tweet media
21
32
405
112.9K
Chris
Chris@hellocloh·
@Jaytel So verbose, and it kind of lost the "predict what I'm thinking" feel
0
0
1
387
Jaytel
Jaytel@Jaytel·
4.7 is completely unusable
375
136
4.7K
766.7K
Chris
Chris@hellocloh·
Makes sense, and your perspective raises an interesting thought. With “vibe-coding” today, individual teams are empowered to release as much (and as quickly) as possible. Over time, this results in isolated releases and a lack of overall product vision. Coordinating features and releases in an org was difficult before AI, and now AI makes it 10x more difficult. I guess the underlying assumption with AI today is that “more is better.” But sometimes less is more. Maybe in the future, companies will need to spend cycles refactoring their products into a holistic experience: kill the features (experiments) that failed, and fold the successful ones into a “well thought out” product (as you mentioned). But we also know that features rarely get killed - only launched :)
0
0
0
57
David Fowler
David Fowler@davidfowl·
@hellocloh Individually features are usually fine, but combined it’ll feel scattered or not well thought out.
1
0
3
666
David Fowler
David Fowler@davidfowl·
We can do orders of magnitude more with agents, but it turns out that building bug-free, reliable software still takes a huge amount of effort. Now that we can do more, we have to put even more effort into making the software reliable. You can see these companies crank out more features, but rarely are they high quality. You can feel the jank after the honeymoon phase of the first 5 minutes of use.
23
38
333
20.6K
Chris
Chris@hellocloh·
And then when you ask a domain expert in the process for advice and knowledge on how to automate it, you realize that half of the domain experts go by feel and may not realize all the nuances themselves. Years of repetition turned it into muscle memory - they don't even realize that something they do is absolutely critical
2
3
99
5.6K
Justin Skycak
Justin Skycak@justinskycak·
Never underestimate how much time and effort you can waste by trying to automate a process you do not understand manually.
162
3.1K
22.4K
479.7K
Scott
Scott@scott___ttocs·
@justinskycak You just described ~every startup building an agent
1
0
5
569
Peter H. Diamandis, MD
Peter H. Diamandis, MD@PeterDiamandis·
73% of AI experts are optimistic about AI's impact. Only 23% of the general public feels the same. The people who understand it best are the most excited. The people who fear it most don't know enough yet. Source: Stanford 2026 AI Index
129
66
588
19.8K
Chris
Chris@hellocloh·
Best quote I’ve heard to describe AI: it’s alien technology with no instruction manual. Really resonates… not the alien-tech part, but the “no instruction manual” part. Shout out to Karpathy
0
0
2
31
Chris retweeted
Holger Zschaepitz
Holger Zschaepitz@Schuldensuehner·
Morgan Stanley has again raised its capex forecasts for the five hyperscalers Amazon, Alphabet, Meta, Microsoft, and Oracle. It now expects them to spend about $805bn this year, up from a previous estimate of $765bn. For next year, the forecast has been lifted from $951bn to $1.1 trillion. To put that into perspective, their 2026 spending alone would be roughly equal to what all non-tech companies in the S&P 500 spent combined in 2025. The expected ~$800bn for 2026 is nearly double 2025 levels and about three times what was spent in 2024.
Holger Zschaepitz tweet media
102
378
1.5K
762.8K
Chris
Chris@hellocloh·
One thing that surprised me from the interview with Fils was when he mentioned that it took him the first set to “get used to Sinner’s pace.” I wonder if that will be when we start to see more competition - as more people get used to his pace immediately and can mount more of an offense in the first set.
0
0
0
778
sashi
sashi@puresinnema·
jannik sinner has mathematically solved tennis. just like how the 3-point/analytics revolution in the nba has made it lose its mystery, jannik has done the same in tennis. absolutely 0 holes in his game, not one single achilles heel on the tennis court. you have to pray for divine intervention from the sun to even be worth his time on court. an absolute machine
107
241
5.4K
659.3K
Chris
Chris@hellocloh·
@irabukht What industries or products were they? Care to share? :)
0
0
1
1.5K
Ira Bodnar
Ira Bodnar@irabukht·
this week I met several non-tech founders who Claude-coded $1M ARR businesses
39
8
310
39.8K
Chris
Chris@hellocloh·
@mattpocockuk I take their doc and comment the shit out of it. Comment every single sentence. Probably more comments than words, but helps prove a point
0
0
0
781
Matt Pocock
Matt Pocock@mattpocockuk·
What do you do if someone on your team is using AI negligently? I.e. not reviewing, not caring, leaning into the slop. This, of course, was a problem pre-AI. But the "code is cheap" mind virus is making it worse IMO.
171
39
1.2K
94.9K