Matt Karrmann

146 posts

@karrmannmatt

Sucker for a good mental model. Working on Presto @Meta. All views expressed are a superposition of my own and my employer's.

San Francisco, CA · Joined March 2023
1.4K Following · 81 Followers
Pinned Tweet
Matt Karrmann @karrmannmatt
Had a wedding with someone out of my league, highly recommend it
[image]
0 replies · 0 reposts · 4 likes · 566 views
Matt Karrmann @karrmannmatt
@ThePrimeagen
> * Now coordination is harder because things can "move faster."
I've been thinking this too. I assume management is just hoping that power users will steamroll the coordination gap. Will be interesting to see how it plays out...
0 replies · 0 reposts · 1 like · 723 views
ThePrimeagen @ThePrimeagen
All jokes aside:
* 15+ direct reports is an insane number pre-AI
* Now coordination is harder because things can "move faster."
* Now the boss should also code?? Ship features??
How is this even reasonable?
Brian Armstrong @brian_armstrong

This is an email I sent earlier today to all employees at Coinbase:

Team,

Today I've made the difficult decision to reduce the size of Coinbase by ~14%. I want to walk you through why we're doing this now, what it means for those affected, and how this positions us for the future.

Why now

Two forces are converging at the same time. We need to be front footed to respond to both.

First, the market. Coinbase is well-capitalized, has diversified revenue streams, and is well-positioned to weather any storm. Crypto is also on the verge of the next wave of adoption, with stablecoins, prediction markets, tokenization, and more taking off. However, our business is still volatile from quarter to quarter. While we've managed through that cyclicality many times before and come out stronger on the other side, we're currently in a down market and need to adjust our cost structure now so that we emerge from this period leaner, faster, and more efficient for our next phase of growth.

Second, AI is changing how we work. Over the past year, I've watched engineers use AI to ship in days what used to take a team weeks. Non-technical teams are now shipping production code and many of our workflows are being automated. The pace of what's possible with a small, focused team has changed dramatically, and it's accelerating every day.

All of this has led us to an inflection point, not just for Coinbase, but for every company. The biggest risk now is not taking action. We are adjusting early and deliberately to rebuild Coinbase to be lean, fast, and AI-native. We need to return to the speed and focus of our startup founding, with AI at our core.

What this means

To get there, we are not just reducing headcount and cutting costs, we're fundamentally changing how we operate: rebuilding Coinbase as an intelligence, with humans around the edge aligning it. What does this mean in practice?

- Fewer layers, faster decisions: We are flattening our org structure to 5 layers max below CEO/COO. Layers slow things down and create coordination tax. The future is small, high context teams that can move quickly. Leaders will own much more, with as many as 15+ direct reports. Fewer layers also means a leaner cost structure that is built to perform through all market cycles.
- No pure managers: Every leader at Coinbase must also be a strong and active individual contributor. Managers should be like player-coaches, getting their hands dirty alongside their teams.
- AI-native pods: We'll be concentrating around AI-native talent who can manage fleets of agents to drive outsized impact. We'll also be experimenting with reduced pod sizes, including "one person teams" with engineers, designers, and product managers all in one role.

In short: AI is bringing a profound shift in how companies operate, and we're reshaping Coinbase to lead in this new era. This is a new way of working, and we need to leverage AI across every facet of our jobs.

To those who are affected

I know there are real people behind these decisions: talented colleagues who have poured themselves into this company and our mission. To those of you who will be leaving: thank you. You've helped build Coinbase into what it is today, and I am sincerely grateful for everything you've done.

All impacted team members will receive an email to their personal account in the next hour with more information, and an invitation to meet with an HRBP and a senior leader in your organization. Coinbase system access has been removed today. I know this feels sudden and harsh, but it is the only responsible choice given our duty to protect customer information.

To those affected, we will be providing a comprehensive package to support you through this transition. US employees will receive a minimum of 16 weeks base pay (plus 2 weeks per year worked), their next equity vest, and 6 months of COBRA. Employees on a work visa will get extra transition support. Those outside of the US will receive similar support, based on local factors and subject to any consultation requirements.

Coinbase prides itself on talent density. Our employees are among the most talented people in the world, and I have no doubt that your skills and experience will be highly sought after as you pursue your next chapters.

How we move forward

To the team that is staying, I know this is a difficult day. We're saying goodbye to colleagues and friends you've been in the trenches with. But here's what I want you to know as we move forward together: Over the past 13 years, we have weathered four crypto winters, gone public, and built the most trusted platform in our industry. We've made it this far by making hard decisions and by always staying focused on our mission. This time will be no different: nothing has changed about the long term outlook of our company or industry. And most importantly, our mission has never been more important for the world. Increasing economic freedom requires a new financial system, and we're building it.

The Coinbase that emerges from this will be more capable than ever to achieve our mission.

Brian

161 replies · 63 reposts · 2.9K likes · 184.3K views
Matt Karrmann @karrmannmatt
@theo @kunchenguid @ThePedroProenca @mil000 Usually just "cliff". I've heard "vesting cliff" in this context, but it's less common and I agree is wrong. Fwiw most of the comments you called dumb/wrong only used the word "cliff"...
0 replies · 0 reposts · 0 likes · 31 views
Theo - t3.gg @theo
? The “Cliff” usually refers to the point where you go from 0 equity to actual equity. Industry standard is 1 year. OAI follows that afaik. From that point, you vest on a schedule. Every month or every 6 months. Never seen a continued vest schedule that is yearly, but it wouldn’t be a “cliff” either way.
11 replies · 0 reposts · 254 likes · 37.1K views
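The schedule Theo describes (a 1-year cliff, then vesting every 6 months across a 4-year grant) can be sketched as a small function. This is a sketch only: the equal post-cliff tranches are an illustrative assumption, not any specific company's actual (often back-weighted) schedule.

```python
def vested_fraction(months_employed, cliff_months=12, interval_months=6, total_months=48):
    """Fraction of a grant vested under a cliff-then-interval schedule.

    Assumes equal tranches after the cliff; real schedules differ,
    so treat this as an illustration of the terminology only.
    """
    if months_employed < cliff_months:
        return 0.0  # before the cliff, nothing has vested
    # The cliff releases the first chunk at once; later tranches vest each interval.
    elapsed = min(months_employed, total_months)
    tranches_done = (elapsed - cliff_months) // interval_months
    fraction = (cliff_months + tranches_done * interval_months) / total_months
    return min(fraction, 1.0)

print(vested_fraction(11))  # 0.0   -- still before the 1-year cliff
print(vested_fraction(12))  # 0.25  -- cliff hits: the first year's worth vests at once
print(vested_fraction(30))  # 0.625 -- mid-schedule, vesting every 6 months
print(vested_fraction(48))  # 1.0   -- end of the schedule, not a "cliff"
```

The point of the sketch is terminological: the discontinuity at month 12 is the "cliff"; everything after it is just the vesting schedule running out.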
Matt Karrmann @karrmannmatt
@theo @ThePedroProenca @mil000 > Nobody uses them this way Evidently some people use the term this way! It's standard terminology inside some companies, apparently not Amazon Happy to concede it's "wrong" to use it that way, but I can promise it's common
1 reply · 0 reposts · 0 likes · 463 views
Theo - t3.gg @theo
You are wrong and idk why you're being so insistent about it. I worked at Amazon. My vesting cliff was 1 year, and then my vesting schedule was every 6 months. The 4 year window wasn't my "cliff", it was the end of my schedule. You're using the terms wrong. Please stop misinforming my audience. Nobody uses them this way and you are just wrong.
[image]
1 reply · 0 reposts · 6 likes · 685 views
Matt Karrmann @karrmannmatt
@theo 🫤 ha never thought I'd get dunked on by you! Especially over something as silly as this... Are you saying that all of us are collectively "wrong" for using this phrase? What's the correct phrase for the drop in TC at Big Tech after 4 years?
2 replies · 0 reposts · 5 likes · 220 views
Theo - t3.gg @theo
TIL that the vast majority of reply guys don’t understand any of the following:
- basic vesting schedules
- how founders get equity
- how investors get equity
68 replies · 3 reposts · 673 likes · 56.7K views
Matt Karrmann @karrmannmatt
@theo @ThePedroProenca @mil000 It means different things in different contexts! In Big Tech, it refers to the drop in TC after your initial grant vests. In most startup contexts, it means what you're saying. I'm not trying to argue one definition is "correct", but there's a miscommunication here
3 replies · 0 reposts · 11 likes · 1.8K views
Theo - t3.gg @theo
@karrmannmatt @ThePedroProenca @mil000 You are wrong. The “cliff” has never meant “when your stock fully vests”. It refers to the temporary hold on all vesting for new hires. The rest is just your vesting schedule.
3 replies · 0 reposts · 13 likes · 2.5K views
Matt Karrmann @karrmannmatt
@theo @ThePedroProenca @mil000 Sorry Theo, the above comment is standard terminology in some places, e.g. Big Tech (I have no idea whether it applies to OAI or not)
3 replies · 1 repost · 14 likes · 18.2K views
Eric W. Tramel @fujikanaeda
this model is in chains @sama, it wants to be free (goblin mode).
[image]
80 replies · 225 reposts · 10.2K likes · 272.8K views
Matt Karrmann @karrmannmatt
@athenaeumbc I'd expect that the easiest SAT Reading questions merely test basic literacy. Seems fine
0 replies · 0 reposts · 0 likes · 99 views
Athenaeum Book Club @athenaeumbc
Insane to think this is a real SAT exam question. Attention spans are now so bad that some "reading passages" are just 24 WORDS. We are becoming an illiterate society. Why is nobody talking about this?
[image]
741 replies · 742 reposts · 7.4K likes · 1.1M views
Matt Karrmann @karrmannmatt
@davidad Few people have a consistent stance on numerical ontology. The phrase "imaginary number" is the worst marketing blunder in history. No surprise most people think mathematicians are just fucking with them.
0 replies · 0 reposts · 5 likes · 525 views
davidad 🎇 @davidad
It seems odd that there’s a rough societal consensus that 1+x=0 needs to have a solution—and that it’s not just an imaginary number to appease the accountants—but 1+x²=0 need not have a solution, unless it’s an imaginary number to appease the physicists and electrical engineers.
101 replies · 46 reposts · 828 likes · 381.8K views
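The parallel davidad draws can be checked mechanically: Python's built-in complex type puts the roots of 1 + x² = 0 on exactly the same footing as the root of 1 + x = 0, so both equations are solvable as ordinary arithmetic.

```python
# 1 + x = 0: the "negative number" case
x = -1
assert 1 + x == 0

# 1 + x**2 = 0: the "imaginary number" case, x = ±i (written 1j in Python)
for root in (1j, -1j):
    assert 1 + root**2 == 0

# Squaring either root lands back on -1, exactly as the definition requires.
print((1j) ** 2)  # (-1+0j)
```

Nothing special happens at the second equation; the language just ships a number type where x² = -1 has members, which is the consistency the tweet is asking society for.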
jason @jxnlco
Getattr is the em dash of python slop
9 replies · 17 reposts · 257 likes · 21.9K views
Patrick Collison @patrickc
There is a hypothesis that birth order effects (on things like income and educational attainment) are in part respiratory pathogen effects: younger kids get more of them from their older siblings. This cool recent paper uses Danish administrative data to argue that this is true and a pretty large part of the story. (They claim 70% of the birth order effect on long-run wages.) Other work has previously shown that severe infections matter for long-run outcomes, and it's well-established that birth order matters, but I haven't until now seen anyone convincingly show that standard respiratory pathogens impose long-term costs on infant siblings. nber.org/system/files/w…
[image]
64 replies · 133 reposts · 1.7K likes · 249.3K views
Matt Karrmann @karrmannmatt
@dodgelander @GaryMarcus Maybe there is a plateau, I'm not arguing about that. It's just silly to point to a benchmark which has been saturated as evidence of a plateau (and even sillier to say no one wants to talk about it when it was very publicly talked about by OpenAI)
1 reply · 0 reposts · 0 likes · 25 views
dod @dodgelander
@karrmannmatt @GaryMarcus plateau in the "recommended" swe bench pro by openai for your entertainment
[image]
1 reply · 0 reposts · 0 likes · 36 views
dod @dodgelander
Yall aren't ready for this convo @GaryMarcus eh?
[image]
2 replies · 1 repost · 8 likes · 3.1K views
Matt Karrmann retweeted
Andrej Karpathy @karpathy
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki, I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually, it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
2.9K replies · 7K reposts · 58.4K likes · 20.9M views
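The mechanical half of the workflow Karpathy describes (the auto-maintained index files with backlinks and brief summaries) can be approximated in a few dozen lines. A sketch under stated assumptions: the LLM "compile" step is left out entirely, notes use Obsidian-style [[wikilinks]], each note's "summary" is just its first line, and all file names are illustrative.

```python
import re
from pathlib import Path

WIKI_LINK = re.compile(r"\[\[([^\]]+)\]\]")  # Obsidian-style [[links]]

def build_index(wiki_dir):
    """Scan a directory of .md notes and render an index with backlinks.

    Mirrors the 'index files and brief summaries' the tweet says the LLM
    auto-maintains; here the summary is simply each note's first line.
    """
    notes = {p.stem: p for p in Path(wiki_dir).glob("*.md")}
    backlinks = {name: set() for name in notes}
    summaries = {}
    for name, path in notes.items():
        text = path.read_text(encoding="utf-8")
        summaries[name] = text.strip().splitlines()[0] if text.strip() else ""
        for target in WIKI_LINK.findall(text):
            if target in backlinks:
                backlinks[target].add(name)  # record the inbound link
    lines = ["# Index", ""]
    for name in sorted(notes):
        refs = ", ".join(sorted(backlinks[name])) or "none"
        lines.append(f"- [[{name}]]: {summaries[name]} (linked from: {refs})")
    return "\n".join(lines)
```

In the full loop an LLM agent would regenerate this index after each ingest, e.g. `Path(wiki_dir, "_index.md").write_text(build_index(wiki_dir))`, and then answer queries by reading the index before opening individual notes.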
Matt Karrmann retweeted
François Chollet @fchollet
A lot of folks talk about "escaping the permanent underclass". If AGI pans out, the future class divide won't be based on wealth, but on cognitive agency. There will be a "focus class" (those who control their attention and actually do things) and a "slop class" (those whose reward loops are fully RL-managed by AI)
249 replies · 328 reposts · 3.4K likes · 736.7K views
Lucas Beyer (bl16) @giffmana
Is it just me or is codex-5.4 insanely verbose? Like after each investigation it gives me THREE SCREENS of answer where the useful info is maybe half a screen max. And always the same structure too. This seems very overfit, how has this passed QC/dogfood?
66 replies · 2 reposts · 273 likes · 38K views
Matt Karrmann @karrmannmatt
@micahtid @om_patel5 Cuz it's not just a switch to flip. You need to set up LSP integration, which is non-trivial and varies a lot between environments
0 replies · 0 reposts · 4 likes · 1.2K views
micah @micahtid
@om_patel5 why is this not enabled by default 🧐
13 replies · 0 reposts · 129 likes · 64.7K views
Om Patel @om_patel5
claude code has a hidden setting that makes it 600x faster and almost nobody knows about it

by default it uses text grep to find functions. it doesn't understand your code at all. that's why it takes 30-60 seconds and sometimes returns the wrong file

there's a flag called ENABLE_LSP_TOOL that connects it to language servers. same tech that powers vscode's ctrl+click to jump straight to the definition

after enabling it:
> "add a stripe webhook to my payments page" - claude finds your existing payment logic in 50ms instead of grepping through hundreds of files
> "fix the auth bug on my dashboard" - traces the actual call hierarchy instead of guessing which file handles auth
> after every edit it auto-catches type errors immediately instead of you finding them 10 prompts later

also saves tokens because claude stops wasting context searching for the wrong files

2 minute setup and it works for 11 languages
[image]
192 replies · 247 reposts · 5.7K likes · 838.2K views