anand iyer
@ai
17.6K posts

Managing partner Canonical · Venture Partner Lightspeed · Father · Husband · Few-shotting tech, open source

San Francisco, CA · Joined February 2008
636 Following · 49.7K Followers
anand iyer @ai ·
@LuisAlejandro You also can’t control what they eat. But you can control what’s in the fridge.
1 · 0 · 264 · 22.6K
Luis Alejandro @LuisAlejandro ·
@ai This is non-actionable advice. You can't control who your kid's friends are, what they do or how they behave.
15 · 0 · 63 · 58.2K
anand iyer @ai ·
After 13 years, I can distill all parenting advice to one variable: your kid’s closest friends.
84 · 161 · 9.2K · 1.5M
anand iyer @ai ·
Sadly, I disagree. 1. AI builders are too consumed to care about anything else. Unless crypto primitives solve a specific need for them, it's practically invisible. 2. Polymaths like Elon, Sama, and Jensen opine on everything, not specific to crypto. 3. The best founders today are building in AI. Crypto had that gravity in '20-'21. Right now it doesn't.
1 · 1 · 4 · 475
Haseeb >|< @hosseeb ·
After a few weeks in SF, one thing stands out: AI people are more bullish on crypto than crypto people are on themselves. There's this narrative forming in crypto that AI people think crypto is a joke. It's just not true. I keep hearing this over and over from AI people who remain bullish on crypto. Hell, Sama, Jensen, Elon, Zuck, the biggest names in AI have all been publicly bullish on crypto and its convergence with AI. Crypto's problem right now isn't that outsiders don't believe. It's that insiders are playing scared.
167 · 115 · 1.2K · 148.6K
G @GS_VCactivist ·
@ai I don't agree. Two of my best friends are very different from me. Growing up, our lives looked similar, but they went in completely different trajectories.
2 · 0 · 105 · 132.1K
anand iyer retweeted
matt palmer @mattyp ·
gradient descent was named after a sunset in san francisco
17 · 33 · 1.1K · 50K
anand iyer @ai ·
Computer-adaptive tests like the SAT, GRE, and GMAT have used this methodology for decades, adjusting question difficulty based on answers. My daughter's own experience with Khan Academy & Duolingo is proof that this sort of RL-based adaptive learning works. This is a subtle way to introduce AI into the classroom (without making the chatbot front & center) and to help students really improve.
Ethan Mollick @emollick

The research team (including @hamsabastani who is on X) found that letting students just use AI resulted in them using it to accidentally shortcut learning. But both that study and a separate RCT found that AIs prompted to act as a tutor improved learning papers.ssrn.com/sol3/papers.cf…

0 · 0 · 5 · 2.5K
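The adaptive mechanism these tests share can be sketched as a simple staircase rule. This is a toy illustration with made-up difficulty levels 1-10, not the item-response-theory models real computer-adaptive tests use:

```python
def run_adaptive_test(responses, start=5, lo=1, hi=10):
    """Staircase difficulty adjustment: serve a harder question after
    a correct answer, an easier one after a miss.

    responses: list of bools, one per question answered.
    Returns the difficulty level served at each step.
    """
    difficulty = start
    trace = []
    for correct in responses:
        trace.append(difficulty)
        difficulty = min(hi, difficulty + 1) if correct else max(lo, difficulty - 1)
    return trace

# A student who streaks until the difficulty outpaces them:
print(run_adaptive_test([True, True, True, False, False, True]))
# → [5, 6, 7, 8, 7, 6]: the test oscillates around their ability level
```

The staircase settles into the difficulty band where the student answers correctly about half the time, which is exactly the signal an adaptive tutor needs to target practice.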
anand iyer @ai ·
In elementary, best time to introduce them to the tools and to hack. Build core values & foundational thinking during these formative years. In middle & upper, best to teach systems/design thinking and to challenge assumptions. Build on the basics, introduce moral/intellectual complexities.
0 · 0 · 3 · 429
Jack Altman @jaltma ·
I hope schools are teaching kids to just sit down with codex / claude code and make stuff.
308 · 48 · 838 · 184.5K
anand iyer @ai ·
TimesFM is impressive, but it only reads numbers. Most real-world forecasting failures are narrative failures, not data failures. Models need to learn how to "read the room". Synthefy's Migas 1.5 is the only TS model that can incorporate relevant exogenous data for high-accuracy forecasting. x.com/synthefyinc/st…
0 · 1 · 4 · 348
Daily Dose of Data Science @DailyDoseOfDS_ ·
Google open-sourced a time series foundation model. It works with any data without training. Unlike traditional models, no dataset-specific training is needed. TimesFM forecasts out of the box. Trained on 100B real-world time-points across traffic, weather & demand forecasting.
22 · 309 · 2.1K · 204.5K
anand iyer retweeted
Jon Rothstein @JonRothstein ·
Dear Braden Smith, Fletcher Loyer, and Trey Kaufman-Renn, Thank you for an incredible four years of college basketball. What you stand for and represent will never be taken for granted. Walk proud. Today. Tomorrow. And Forever. Sincerely, America
288 · 1K · 8.3K · 402K
anand iyer @ai ·
Chris has provided the clearest breakdown of robot world models that I've seen. The simplest way to understand them: an LLM asks "what word comes next?" A world model asks "what happens next in the physical world if I do this?"

3 types are competing right now:
1. Action-conditioned (V-JEPA 2, Dreamer v4): predict what happens given a robot's action. The purest approach, but predictions collapse within seconds.
2. Video world models (NVIDIA DreamGen, 1x): generate a video of the task first, then reverse-engineer the motor commands from the frames. No action labels needed to train.
3. Joint world-action models (DreamZero, Fast WAM): predict both video and action simultaneously. Currently winning on benchmarks.

The best models don't even render video at test time. They just need to have learned what the world looks like in order to act in it.

This matters because today's AI is essentially like a very well-read intern. Great at desk work, useless at unloading a truck. You can simulate billions of chess games, but you can't bake a billion cakes. World models trained on internet-scale video are how robotics closes that gap.

Transformers made AI book-smart. World models give it physical intuition. Great read👇🏾
Chris Paxton @chris_j_paxton

x.com/i/article/2037…

5 · 12 · 69 · 13.1K
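The "predictions collapse within seconds" failure of action-conditioned models comes from feeding the model's own predictions back into itself. A toy sketch with invented linear dynamics (nothing to do with the actual V-JEPA or Dreamer architectures) shows how a 1% modeling error compounds over an open-loop rollout:

```python
import numpy as np

def rollout(step_fn, state, actions):
    """Roll a dynamics model forward, feeding each prediction back in
    as the next input state (open-loop rollout)."""
    states = [state]
    for a in actions:
        state = step_fn(state, a)
        states.append(state)
    return np.array(states)

# Ground-truth dynamics vs. a "learned" model whose decay rate is 1% off.
true_step = lambda s, a: 0.9 * s + a
model_step = lambda s, a: 0.9 * 1.01 * s + a  # slightly mis-learned

actions = np.ones(50)
err = np.abs(rollout(model_step, 1.0, actions) - rollout(true_step, 1.0, actions))
print(err[1], err[-1])  # one-step error ~0.009, 50-step error ~0.95
```

One-step accuracy looks excellent; it is the self-composition over a long horizon that destroys the prediction, which is why long rollouts are the hard part.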
anand iyer @ai ·
“My kids will never be smarter than a computer.” -@sama in response to @deedydas’s Q about how he thinks about the influence of AI as a parent.
2 · 1 · 11 · 2.1K
anand iyer @ai ·
China built >40 state-backed exoskeleton data factories. Workers folding cloth, opening doors, stacking blocks, repeating each motion hundreds of times so a robot can learn what a hand already knows. No text corpus, no simulation gets you there. One of the only ways to give a machine physical intelligence right now is to pass it through a human body first. China is treating this as shared infrastructure worth building at national scale. Whereas here in the US, we are each collecting the same data inside walled gardens. Great read:
Divyansh Kaushik @dkaushik96

Harmonic drives. Servo motors. Rare earth magnets. Strain-wave gears. Exoskeletons worn by workers in Chinese factories repeating the same motion hundreds of times a day so a robot somewhere can learn what a hand already knows. Forty state-funded sites. Local governments providing space rent-free. New essay on why the AI competition is expanding. Link in reply.

3 · 18 · 101 · 11.7K
anand iyer @ai ·
In 2020, right before the lockdown, @akashraju4 cold DM’d me asking for fundraising advice for his pre-seed. Today, Glimpse raised a $35M Series A led by a16z. Relentless execution by this team as they have navigated their way to PMF. Huge congrats team! Proud to have been a day 1 backer. Boiler up!!
Akash Raju @akashraju4

I’m extremely excited to announce that @try_glimpse has raised a $35M Series A led by @a16z with continued participation from @8vc & @ycombinator bringing our total raised to $52M. At Glimpse, we’re building the AI-native infrastructure for CPG & retail brands. We started off automating the deductions workflow - recovering brands millions of dollars back into their P&L & saving dozens of hours every week. With this initial focus, we’ve also built the CPG data layer giving us the opportunity to continue expanding to more manual workflows that can be automated. We’re giving CPG brands real operating leverage in the world of AI. To our 200+ customers including PLTFRM, Evermark, IQ Bar, Alice Mushrooms, and more, thank you for your faith in us. We have more fuel now to keep supporting & scaling with you all. We’re hiring - come join us! We’re just getting started.

3 · 2 · 20 · 3.8K
anand iyer @ai ·
Almost everyone I know in tech is hacking on something on the side.
11 · 1 · 36 · 7K
anand iyer @ai ·
Robotics sensor logs, self-driving car telemetry, hospital vitals - all time series, all dwarfing the text and video data the AI industry has spent years optimizing for. And the reason transformer models (Claude, ChatGPT etc.) can't forecast this well: they turn continuous numbers into discrete tokens, and that tokenization likely destroys the precision the problem needs. Google, Amazon, Datadog have all built proprietary models to compensate, but those models only saw historical numbers, never the earnings report or policy change that caused them. @synthefyinc's Migas 1.5 is the first open-weights foundation model that combines text and time series, incorporating such exogenous information into forecasting natively. Early numbers: 75%+ win rate across 86 real-world datasets. 14.2% lower MAE. Weights on @huggingface. Or download & use their new skill directly in Claude.
Synthefy @synthefyinc

Today, we’re releasing Migas 1.5: the first foundation model to fuse text and time series. Until now, forecasting models have only looked at historical numbers. Migas 1.5 changes that by letting users incorporate real-world context directly into the forecast. This enables teams to forecast with essential context like earnings reports, policy changes, market events, supply shocks, and more. This directly results in more accurate forecasting and enables complex scenario analysis, especially when historical data is sparse. Highlights: - Highest Elo rating against leading foundation models on 86 real-world datasets - 75%+ win rate against all baselines (even Migas 1.0!) - Up to 14.2% lower MAE in short-context forecasting - Fully open source - Premade Claude skill to get you started in seconds We’re excited to open-source Migas 1.5 and eager to see what the community builds with it. Links in comments.

2 · 5 · 16 · 3.4K
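The tokenization-precision claim above can be made concrete with a rough sketch. It uses uniform value bins as stand-in "tokens" (real LLM tokenizers split digit strings instead, but the information loss is analogous): any structure smaller than one bin width vanishes in the round trip.

```python
import numpy as np

def tokenize_detokenize(series, vocab_size=256):
    """Round-trip a continuous series through a discrete 'vocabulary'
    of uniform bins, then decode each token back to its bin center."""
    lo, hi = series.min(), series.max()
    width = (hi - lo) / vocab_size
    tokens = np.clip(((series - lo) / width).astype(int), 0, vocab_size - 1)
    return lo + (tokens + 0.5) * width

t = np.linspace(0, 10, 1000)
series = np.sin(t) + 0.001 * np.sin(50 * t)  # big trend + fine detail
recon = tokenize_detokenize(series)
err = np.max(np.abs(series - recon))
# Bin width here is ~0.008, so the 0.001-amplitude fine component sits
# entirely below the quantization floor and cannot be recovered.
print(err)
```

The dominant sine wave survives quantization, but the small high-frequency component, the kind of fine-grained signal precise forecasting depends on, is irrecoverably rounded away.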