shae wang

36 posts

@shaeyuwang

head of core data @ripple · built XRPL data on @dune · building production-grade web3 x AI infra · writing about web3, AI, and neuroplasticity on substack

San Francisco, CA · Joined March 2014
230 Following · 805 Followers
Pinned Tweet
shae wang@shaeyuwang·
Last week, a Head of AI Transformation at a large enterprise told me: "We're holding off on buying observability/performance eval tooling. Models are improving so fast. Eventually they'll catch their own errors." That mindset is incredibly dangerous. Here's why 👇
shae wang@shaeyuwang·
@GaryMarcus AI is not deterministic, (b) will never happen, build your own gates properly and actually study engineering if you want to vibe-code
Gary Marcus@GaryMarcus·
This is totally wrong. Blaming the user is missing the point that (a) coding agents have been overhyped and (b) can’t reliably obey the rules given to them in system prompts and other guardrails.
John A De Goes@jdegoes

Sorry, @lifeof_jer, but this is YOUR failure: 1. Your failure to demonstrate extreme ownership for AI generated code; instead, you abdicated your responsibility and blamed the AI. 2. Your failure to have an adequate and predictive mental model for how LLMs work. 1/2

shae wang@shaeyuwang·
@triathenum @jdegoes @lifeof_jer It's entirely on the vibers to understand the ceiling and risks here; disclaimers and cautionary tales are everywhere if they aren't too lazy to check and invest in learning the basics
Matt Matheus@triathenum·
@jdegoes @lifeof_jer I find the divide between the nontechnical vibe-coders and professional engineers interesting. Engineers are calling out that the user failed at basic infrastructure while the vibers are saying the footgun should have had more safeties.
John A De Goes@jdegoes·
Sorry, @lifeof_jer, but this is YOUR failure: 1. Your failure to demonstrate extreme ownership for AI generated code; instead, you abdicated your responsibility and blamed the AI. 2. Your failure to have an adequate and predictive mental model for how LLMs work. 1/2
JER@lifeof_jer

x.com/i/article/2048…

shae wang@shaeyuwang·
Super excited for Alex to join Ripple as our new Chief Risk Officer! 🫡
shae wang retweeted
Om Patel@om_patel5·
I taught Claude to talk like a caveman to use 75% less tokens.

normal claude: ~180 tokens for a web search task
caveman claude: ~45 tokens for the same task

"I executed the web search tool" = 8 tokens
caveman version: "Tool work" = 2 tokens

every single grunt swap saves 6-10 tokens. across a FULL task that's 50-100 tokens saved

why does it work? caveman claude doesn't explain itself. it does its task first. gives the result. then stops.

no "I'd be happy to help you with that."
no "Let me search the web for you"
no more unnecessary filler words

"result. done. me stop."

50-75% burn reduction with usage limits getting tighter every week

this might be the most practical hack out there right now
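The arithmetic in the retweet above can be sanity-checked with a toy estimate. This is a hedged sketch, not Claude's real tokenizer: it counts one token per whitespace-separated word, which understates subword splitting but preserves the relative savings of terse phrasing.

```python
# Toy check of the token-savings arithmetic above. Real Claude tokenization
# uses subwords and would give higher absolute counts; only the relative
# comparison between verbose and terse phrasing is the point here.

def approx_tokens(text: str) -> int:
    """Crude token estimate: one token per whitespace-separated word."""
    return len(text.split())

verbose = ("I'd be happy to help with that. Let me search the web for you. "
           "I executed the web search tool and here is the result.")
terse = "Tool work. Result. Done."

saved = approx_tokens(verbose) - approx_tokens(terse)
pct = 100 * saved / approx_tokens(verbose)
print(f"verbose ~{approx_tokens(verbose)} words, terse ~{approx_tokens(terse)} words, ~{pct:.0f}% saved")
```

Even this crude proxy lands in the same 50-85% reduction range the tweet claims for narration-free responses.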
shae wang@shaeyuwang·
I've been using Obsidian for a week. Lesson for beginners: if you're just starting with Obsidian, build first, organize later.

There's no right or wrong structure, and you'll find your favorite one by spending time with it yourself
Andrej Karpathy@karpathy

Wow, this tweet went very viral! I wanted to share a possibly slightly improved version of the tweet in an "idea file". The idea of the idea file is that in this era of LLM agents, there is less of a point/need of sharing the specific code/app, you just share the idea, then the other person's agent customizes & builds it for your specific needs. So here's the idea in a gist format: gist.github.com/karpathy/442a6… You can give this to your agent and it can build you your own LLM wiki and guide you on how to use it etc. It's intentionally kept a little bit abstract/vague because there are so many directions to take this in. And ofc, people can adjust the idea or contribute their own in the Discussion which is cool.

shae wang@shaeyuwang·
Tighter scope -> lower variance -> better calibration -> better performance on these specialized tasks

Specialized agents are also inevitable economically, because cheaper compute + easier to finetune/debug

The orchestration layer does not need to be an LLM to work extremely well
Santiago@svpino

I really like the idea of having multiple specialized agents instead of a "general purpose" agent that tries to do it all. A few days ago, I read (sorry, I don't remember where) a study claiming that specialized agents, even when they are all using the same model, beat general agents by a mile. These guys are doing precisely that with an army of hyper-specific agents. And of course, they are following the ---Claw theme.
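The claim above that the orchestration layer need not be an LLM can be sketched as plain deterministic routing. Everything here is an illustrative assumption: the agent names, keyword rules, and tasks stand in for real specialized models.

```python
# Deterministic, non-LLM orchestration over specialized agents. The point is
# that routing can be cheap, debuggable, testable code rather than a model call.

def sql_agent(task):
    return f"[sql-agent] {task}"

def docs_agent(task):
    return f"[docs-agent] {task}"

def general_agent(task):
    return f"[general-agent] {task}"

# Routing table: first keyword match wins; no LLM needed to pick a specialist.
ROUTES = [
    (("sql", "query", "table"), sql_agent),
    (("summarize", "doc", "readme"), docs_agent),
]

def route(task):
    lowered = task.lower()
    for keywords, agent in ROUTES:
        if any(k in lowered for k in keywords):
            return agent(task)
    return general_agent(task)  # fallback for unmatched tasks

print(route("Write a SQL query over the ledger table"))
print(route("Summarize this README"))
print(route("Plan my week"))
```

A routing table like this is trivially cheap to run and to unit-test, which is the economic argument the tweet makes for specialization.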

shae wang@shaeyuwang·
the single internal AI strategy companies should have is let people build, instead of "unifying AI access/infrastructure"

regulating too early kills innovation and revenue opportunities
shae wang@shaeyuwang·
5. Causal effectiveness layer Observability measures behavior. We need to also measure outcome. A/B tests run on single lines of marketing copy detect meaningful impact all the time. That's why the experimentation segment is worth $3B. So what about agents?
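The point about borrowing marketing-style experimentation for agents can be made concrete with a standard two-proportion z-test. The conversion counts below are invented for illustration; only the statistics are standard.

```python
# A two-proportion z-test, the same statistics behind marketing-copy A/B
# tests, applied to two hypothetical agent variants on an outcome metric.

from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for the difference in rates."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF tail
    return z, p_value

# Hypothetical outcome metric: agent variant B resolves 460/1000 tickets
# versus variant A's 400/1000.
z, p = two_proportion_z(400, 1000, 460, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With a thousand trials per arm, a 6-point lift is comfortably significant, which is why the same machinery that prices marketing copy could price agent changes.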
shae wang@shaeyuwang·
4. Runtime lawfulness Evaluation ≠ enforcement. Behind a 100% eval score on benchmarks, it's still just probabilities. We need circuit breakers, policy engines, and continuous runtime governance and steering.
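A minimal sketch of the runtime enforcement described above, assuming a hypothetical agent and policy check (neither is a real product's API): a circuit breaker that blocks policy-violating outputs and trips after repeated failures.

```python
# Circuit breaker in front of a hypothetical agent: outputs failing a policy
# check are blocked, and repeated failures trip the breaker so no further
# calls reach the agent at all. All names here are illustrative.

class CircuitOpen(Exception):
    """Raised once the breaker has tripped."""

class CircuitBreaker:
    def __init__(self, max_failures=2):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, agent, policy_ok, task):
        if self.failures >= self.max_failures:
            raise CircuitOpen("breaker tripped: agent quarantined")
        output = agent(task)
        if not policy_ok(output):
            self.failures += 1
            return "[blocked by policy]"
        self.failures = 0  # a clean output resets the failure count
        return output

# Hypothetical agent that leaks digits; the policy forbids any digit in output.
leaky_agent = lambda task: f"answer for {task}: account 12345"
no_digits = lambda text: not any(c.isdigit() for c in text)

breaker = CircuitBreaker(max_failures=2)
first = breaker.call(leaky_agent, no_digits, "q1")   # blocked, failure 1
second = breaker.call(leaky_agent, no_digits, "q2")  # blocked, failure 2
try:
    breaker.call(leaky_agent, no_digits, "q3")       # breaker is now open
except CircuitOpen as e:
    tripped = str(e)
print(first, second, tripped)
```

Note the enforcement happens at runtime on every output, independent of any benchmark eval score, which is the distinction the tweet draws.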
shae wang@shaeyuwang·
AI will only accelerate whatever condition you’re already in. That’s why grit and discipline have never been more important in history. The pretty good will become extremely good, the mediocre won’t even realize they’re still mediocre.
shae wang@shaeyuwang·
as a first-time manager, i’m learning that reducing scope to focus on 2-3 max value initiatives >> being in every room

let your only testimony be one that people can’t forget
shae wang retweeted
Discover Crypto@DiscoverCrypto·
I’m really sorry to @FTX_Official users. I cannot sugarcoat the situation: your money is stuck and you won’t be getting it back soon. NO ONE is going to acquire FTX. There are too many lawsuits and too many crimes performed on their books. Acquiring that liability won’t happen.
shae wang retweeted
Safiya Walker@safiya_xyz·
1/6 SV irony🤦🏾‍♀️😆Picture this. You’re on CH, in Big Ideas After Party w/ @eriktorenberg @PatrickJBlum (rec) where the convo is on edu, credentials & corps. Q asked 'Do corps/schools heavily marketing diversity efforts actually do a disservice to those they’re trying to attract?