Peter

22.4K posts

@peter

serial operator + investor in fintech, B2B SaaS, AI; @communitascap prev: @m12vc, @getvgs, @jobyinc; exited to @jpmorgan; head of @Visa Ventures; stats nerd

San Francisco · Joined July 2006
3.4K Following · 18.8K Followers
Pinned Tweet
Peter@peter·
If the grass is greener on the other side, try watering your own lawn.
Peter@peter·
My inner payments nerd is so proud today with @getVGS helping launch the new card spec for Machine Payments Protocol (MPP) with @Visa and @tempo. Wild to see the payments enablement platform I helped build become the infra for agentic & machine-to-machine commerce. 🤖💳
Peter reposted
Peter@peter·
At the pace AI is moving, two of the most important skills to hone are: 1. Systems thinking/process design. 2. Proficiency with AI apps/models (incl. "vibe coding"). If you can apply the above to specialized workflows, immense value can be created.
Peter@peter·
We're now at the point with AI where one of the world’s top AI researchers built a tool that does AI research for him while he sleeps, and it improved upon his own expert judgment on his own project. Buckle up.
Andrej Karpathy@karpathy

Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was an already fairly well-tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This has been the bread and butter of what I do daily for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of results of experiments and used that to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually previously, and they stack up and actually improved nanochat. Among the bigger things, e.g.:

- It noticed an oversight that my parameterless QKnorm didn't have a scalar multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has a more efficient proxy metric, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
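The workflow described above (propose a change, measure validation loss, keep it only if it helps, repeat) can be sketched as a greedy search loop. This is a toy illustration, not Karpathy's actual tooling: the names `autoresearch`, `toy_loss`, and `toy_propose`, and the quadratic "validation loss" over two hyperparameters, are all invented for the example.

```python
import random

def autoresearch(evaluate, propose, config, rounds=100, seed=0):
    """Greedy tuning loop: propose a change, evaluate it,
    and keep it only if it lowers the metric (additive improvements stack)."""
    rng = random.Random(seed)
    best_loss = evaluate(config)
    accepted = []
    for _ in range(rounds):
        candidate = propose(config, rng)   # e.g. tweak betas, weight decay, init scale
        loss = evaluate(candidate)
        if loss < best_loss:               # improvement found: adopt and continue from it
            config, best_loss = candidate, loss
            accepted.append(candidate)
    return config, best_loss, accepted

# Toy stand-in for "validation loss": a quadratic with optimum at lr=0.3, wd=0.1.
def toy_loss(cfg):
    return (cfg["lr"] - 0.3) ** 2 + (cfg["wd"] - 0.1) ** 2

def toy_propose(cfg, rng):
    # Perturb one hyperparameter at a time by a small random step.
    key = rng.choice(["lr", "wd"])
    new = dict(cfg)
    new[key] += rng.uniform(-0.05, 0.05)
    return new

cfg, loss, log = autoresearch(toy_loss, toy_propose,
                              {"lr": 0.0, "wd": 0.0}, rounds=700)
```

In the real setting, `evaluate` would be a full (or proxy) training run and `propose` an LLM agent reading the experiment history; the greedy accept-if-better skeleton is the same.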

Peter@peter·
It's all fun & games until your AI provider loses your account & subscription info and cannot find any record of you despite being a paying customer for over 1.5 years. Might be time to look into locally run AI models...
Peter@peter·
@alfongj That's a fair & valid point. Crypto wallets solve part of this today, but card acceptance is a different animal given the underwriting requirements. My personal hypothesis for many years now is that tradfi & new rails must converge over time. Form will follow function.
Alfonso@alfongj·
@peter I think the interesting part is allowing agent-developed apps to earn for their owner. Or, put a different way: if I vibe-code an app, I want to make money off it. I think many vibe coders and their apps will not pass standard merchant onboarding.
Alfonso@alfongj·
Brian is right and wrong here... Agents can't open a bank account, but that is not a limiting factor for making transactions: they can make transactions with their owner's credit card. What agents can't do is accept cards easily. And I don't think this will change anytime soon.
Brian Armstrong@brian_armstrong

Very soon there are going to be more AI agents than humans making transactions. They can’t open a bank account, but they can own a crypto wallet. Think about it.

Peter reposted
Neal Mintz@NealMintz1·
More than half of U.S. adults don’t have an estate plan. Even among those who do, few have trusts. Probate is slow, public, and messy. Trusts help avoid it and give families more control over how assets pass to heirs. Estate planning will shift from documents → execution. The real opportunity is operational, not just paperwork. Axiom Trust is pairing regulated fiduciary accountability with AI-powered workflows to modernize trust administration. Congrats to @davidmeister_ and team. Proud to have been their first backers at the pre-seed.
David Meister@davidmeister_

Today, Axiom Trust Company is launching publicly with $11.8M in funding led by @lightspeedvp, with participation from @WischoffVC, Runa Capital, SNR, @immad, Primetime Partners, and @thefintechfund. Trust administration is the last undigitized layer of financial infrastructure. The industry still operates on PDFs, email chains, and institutional memory. $2.5 trillion in wealth flows through trusts and estates every year — and that number is only growing. Axiom Trust is the infrastructure layer for the Great Wealth Transfer. Full story in today's @axios article from @ShenLucinda. Link below. @arfrank @jmover @seisler1 Katherine Zhang @NWischoff @dropalltables @mfanfant Abby Levy @peter @anothercohen @NikMilanovic @victoriatr @aaronlarue @hoomanradfar @Sikes_ @dhatkoff @TheRideshareGuy @nickabouzeid @jenny_colgate @dbkahn @NealMintz1 @ai_luce @robbiefigs

Peter@peter·
The greatest wealth transfer in history is underway, but old models of trust admin don't scale. Axiom Trust Co. is building the modern software + AI stack to run trust ops with the control, reporting & rigor families expect. Congrats to @davidmeister_ & team on the launch.
David Meister@davidmeister_

[Quoted tweet by @davidmeister_: the Axiom Trust launch announcement, quoted in full above.]

David Meister@davidmeister_·
[The Axiom Trust launch announcement, quoted in full above.]
Peter@peter·
One of these things is not like the others. What an editorial choice. 😬
[image attached]
Peter@peter·
The recent releases from @perplexity_ai are pretty awesome. First Model Counsel and now Perplexity Computer w/ 19 models orchestrated in a single interface. IMO Perplexity has the perfect blend of insight + concision. And I love the footnotes to source material for documentation & validation. If I were an enterprise buyer, I'd consider Perplexity as a generalized in-house AI layer in lieu of OAI or Anthropic. Why choose 1 model when you can have 19 orchestrated for you? All it needs now is an IDE and better keyboard shortcuts. ;-)
[image attached]