illiquid
@lefttailguy

routers eat the world; consilient software observer; crafting an alchemist's paradise.

Palo Alto · Joined September 2020
591 Following · 4.6K Followers · 7.9K posts

Pinned Tweet
illiquid @lefttailguy
People know that enterprise software has never been that fancy, but that enterprise distribution is hard. The first point is becoming clearer as software gets ever easier to build, but the distribution point means that the "incumbents can use Cursor too" argument should have legs. In other words, having the distribution apparatus built out should give incumbents a massive advantage. But the market is obviously not buying that. Two potential explanations:

1) It's actually becoming less difficult to distribute software and to understand what to build. The biggest story here is @OpenAI as a workflow intent aggregator that can route that intent to the apps/tools best suited to solve the problem in question. If the primary interface becomes an AI system that routes requests to various specialized tools, then distribution increasingly means being the tool that ChatGPT (or whoever wins that aggregation race) chooses to route to. This advantages the orchestration layer/aggregators, of course (see this interesting piece in the Diff for a fun theory there: thediff.co/archive/router…), but it also means that startups should have a relative advantage in optimizing everything for this new paradigm/definition of distribution. This is somewhat analogous (with caveats) to how Google partially shifted distribution from being contingent on salespeople and brand ads to SEO/AdWords, but even more extreme.

Or 2) if you don't buy that, PE firms or these newish AI transformation firms (like Brain Co, GC's Percepta, SaxeCap, Palantir/Scale, Anthropic, etc.) will leverage their reputational capital, AI-native DNA, and/or ownership stakes to leapfrog both AI-native startups and incumbents by building and distributing/deploying tools themselves (the Bain anecdote).

Understanding how distribution dynamics/difficulty are trending feels like it should be very important if you want to have a sophisticated view on how software economics will eventually look, but it's pretty understudied imo, especially relative to the question of how much easier it is becoming to build product.

1) taken to its logical conclusion means that @OpenAI (or whoever wins) matches workflow intent to digital services/solutions as effectively as search/social match consumer intent to consumer goods, and that OpenAI/AI more broadly makes the production of digital services as automated/fossil-fueled as the production of physical goods. If you are building/selling software, this means you have two new/more powerful rent seekers than in the past: the aggregators/orchestration layer and the chip/infra layer. This sounds a lot like e-commerce market structure/economics to me. E-commerce companies also pay large rents to aggregators (Meta, Amazon, Google, etc.) and to the infrastructure layer (manufacturers). In e-commerce, moats exist in the form of brand, economies of scale (in certain instances), etc., but moats and margins in e-commerce are obviously weaker than moats and margins in enterprise software.

Folks like @cpaik have argued that AI will make the economics of software look more like the economics of media, which is sort of the maximalist take on vibecoding completely eliminating software moats. But the reality IMO (at least for the foreseeable future) will be somewhere closer to e-commerce. Software won't become completely free like a lot of media has, but just selling software will become less lucrative. Some solutions will commoditize significantly, like certain consumer goods did when mass manufacturing and search/direct response ads went live.

Some will retain/command more pricing power as a function of true account/data gravity, regulation/compliance, end-to-end workflow complexity, etc. (this is probably why you see vertical solutions outperforming). In any case, I find it harder and harder to argue that the long-run equilibrium isn't relatively bleak unless you're an aggregator, a vertically integrated operating company (the AI-native opco/AI transformation approach), or NVIDIA. Would love thoughts @matt_slotnick, @sebkrier, @ChairliftCap, @MangotreeA, @huntermmonk, @yrechtman, @BucknSF
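To make the routing paradigm concrete, here is a minimal sketch of intent-based distribution, where the aggregator simply picks the registered tool with the best track record for an intent. All tool names and scores are hypothetical illustrations, not anything OpenAI has published:

```python
# Toy sketch of intent-based distribution: an aggregator routes a user's
# workflow intent to whichever registered tool scores best for that intent.
# All tool names and scores here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Tool:
    name: str
    # Historical outcome quality per intent category, 0.0-1.0 (hypothetical).
    outcome_scores: dict = field(default_factory=dict)

class IntentRouter:
    def __init__(self):
        self.tools: list[Tool] = []

    def register(self, tool: Tool):
        self.tools.append(tool)

    def route(self, intent_category: str) -> Tool:
        # Distribution = winning this argmax, not winning a sales cycle.
        return max(self.tools,
                   key=lambda t: t.outcome_scores.get(intent_category, 0.0))

router = IntentRouter()
router.register(Tool("legal_drafting_app", {"draft_contract": 0.92}))
router.register(Tool("generic_docs_app", {"draft_contract": 0.71}))
print(router.route("draft_contract").name)  # -> legal_drafting_app
```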
🎭 @deepfates
The new thing in San Francisco is no longer chief of staff or MTS. It's wizards. Everybody's got to have a wizard. If your company doesn't have a wizard and a 10,000 year cosmic plan you're ngmi. At some top startups each C-Suite exec has a wizard of their own
illiquid @lefttailguy
@rkundy If you’re counting, that’s the wrong boss
RISHI @rkundy
question. you go out for dinner/drinks with your boss. how many drinks are you getting?
Nikunj Kothari @nikunj
Man, /goal is just AGI if given the right tools.. Like what do you mean you went through the entire database of 2k+ line items, fixed all the product images, the frontend bugs caused by different images, the descriptions, used a browser harness to get real-time info from the web, used web search for fact checking, wrote scripts for all the work you did for the future.. and ran for 2 hours while I met founders for coffee. I'm just shook 😅
MetaCritic Capital @MetacriticCap
What are some of the best blogposts of all time?
grant @grantbelden
@lefttailguy lol at thinking the AmEx acq was premeditated by Long Lake
illiquid @lefttailguy
Solving the science of asset selection in a future (or indeed the present) where every company is a "Context Acquisition Company" is the real frontier. I love that everyone is getting around to the idea that the secrets (scarce context) currently illegible to/hidden from computers (human or machine) are everything. Now the next leap for people to make is that the science of sourcing, selecting, and monopolizing that context (really THE ASSETS that produce it) is everything.

If AI progress is a function of compute and data (most algorithmic progress is really just data progress; h/t @BerenMillidge, @_kevinlu, @mentalgeorge, @GarrettLord, etc.), then every company is going to have a context desk just like they will (or already do) have a compute desk. The difference is, CONTEXT IS NOT FUNGIBLE. Most context (both that exists right now and that will be created in the future) will be completely commodity beta. Winning will be about getting to and instrumenting the right asset (context production factory) first. And yes, there are right and wrong answers.

To do this kind of asset selection well requires an extremely scarce meta-capability: the ability to coordinate the right kind of access and the right kind of capital at the right time. These assets (and the secrets within them) are structurally difficult to access, evaluate, and instrument. They are not floating around in banked processes, waiting to be frictionlessly purchased on listed exchanges, or willingly coming through Mercor or Handshake's expert portal. (Yes, a context production asset can be (very often is) a single person or collection of people.)

When @WillManidis talks about a Deal Guy Yuga, what he means is that there are people who have deeply internalized the fact that at the limit, in a world of infinite intelligence, access to/monopoly on the right permissioned data streams is all that matters. Getting yourself to a position (meta-access, meta-capital) where you have the ROFR on those permissioned data streams means being a generational Deal Guy. This is a very different and specific kind of "Deal Guy" though. Knowing which asset(s) are going to give you the right context to create, compound, and commercialize the best vertical world model now and into the future is the new form of security analysis. But the triple-exceptional combination of domain expertise, meta-access, and technical ability that's required to execute this new security analysis effectively is scarcer than the talent at quant firms, YC, and, dare I say, the labs combined.

Palantir understood this, and it's why they focused on getting root access (or something close) to the "highest-status" institutions, and the data streams they produce, first. If you have the talent that can get access to and create value within those institutions, everything else should be a foregone conclusion.

If you want examples of the teams that (I believe) actually understand this new science of asset selection and long-term value capture in a world of infinite intelligence, study Long Lake and @formationbio. They know and have known that it's all about being able to get the right asset (context), in the right market, with the right team (machine and human) first. These two companies are very far ahead on the scientific frontier of context acquisition. GC backed Long Lake last year. Do you think it's a coincidence that Long Lake chose to work with General Catalyst? My bet is that Long Lake knew they wanted to acquire Amex GBT before they partnered with GC, and that they partnered with GC because Ken Chenault (the ex-CEO of Amex) is General Catalyst's Chairman. That gave them the right access at the right time to a very valuable context asset (Amex Global Business Travel).

A superhuman vertical-specific Elon operating every company means market-leading monopolies in every single slice of the unstructured economy. The thing is, you have to build this superhuman Elon while flying the plane. You can't build this superhuman Elon without the very specific context that operating specific assets in the real world gives you. In fact, there's only one stream of context that was able to produce human Elon! Knowing a priori which context stream is likely to do the same is extremely difficult, but probably possible. I'll let you intuit why Amex GBT is both most likely to be the market-leading monopoly if it were operated by the superhuman Elon of business travel and why it's also the most likely to produce the context to build that superhuman Elon.

The labs of course are very large acquirers of context at present, and I think they will continue to play and improve their capabilities here. Through their deployment companies, they have already chosen the PE funds that they deem to be the best Context Acquisition Funds. Through their in-house deployment focus on Life Sciences, they have chosen the vertical they see as containing the most valuable context-producing assets. They will acquire seemingly unrelated companies and will acquihire very interesting people just to get tokens; they will create a Context Acquisition Fund of Funds. But it's not a foregone conclusion that they become the best-performing context acquisition companies. Or that they even view it this way. And that presents an opportunity for anyone who does.
yoni rechtman @yrechtman

The Context Acquisition Company (CAC). We are a holding company acquiring services firms for their tokens in order to build domain specific agents/ models to deploy into our platform businesses and beyond. There's $1T hidden in the computers. We're gonna get it out.

Patrick OShaughnessy @patrick_oshag
Who would you love us to profile in @colossusmag ? Our sweet spot has been: incredible people who’ve not been covered or interviewed often. Who would you suggest we try for?
illiquid @lefttailguy
@yrechtman context and compute as the universal inputs, deployed with care by the taste cannons at Slow Ventures
illiquid @lefttailguy
@anjali_shriva Well I guess the definition of a good driver vs average driver in this case is the # of adverse events (collisions, traffic violations, etc.) per mile driven? Or how does one define it?
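As a concrete illustration of that proposed metric (all numbers made up):

```python
# Hypothetical illustration of the proposed driver-quality metric:
# adverse events (collisions, traffic violations, etc.) per mile driven.
def adverse_event_rate(adverse_events: int, miles_driven: float) -> float:
    if miles_driven <= 0:
        raise ValueError("miles_driven must be positive")
    return adverse_events / miles_driven

# Made-up numbers: 2 events over 50,000 miles vs 6 over 60,000 miles.
good_driver = adverse_event_rate(2, 50_000)   # 0.00004 events/mile
avg_driver  = adverse_event_rate(6, 60_000)   # 0.00010 events/mile
print(good_driver < avg_driver)  # True: fewer adverse events per mile
```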
illiquid @lefttailguy
This is a really good framing of the most important question. It merits a thoughtful response, which I will get back to you with. I think the crux here is that the world model of the best driver is not that different from the world model of an average driver (in terms of outcomes in that task). The same cannot be said about most valuable areas of cognitive work. And in those areas, it's often really hard to know whether someone is "driving well" or not until it's far too late. What we basically need is alpha capture for every field of cognitive work: understanding when to double/triple down on a person's decisions/judgment and when to hedge it (because we've seen they haven't performed too well in similar contexts in the past).
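A minimal sketch of what such alpha capture could look like mechanically: a decayed running score of a person's call quality per context. The names, numbers, and decay scheme here are all hypothetical:

```python
# Minimal sketch (hypothetical) of "alpha capture" for cognitive work:
# track how well a person's past calls worked out in each context, then
# decide how much weight to put on their next call in that context.

from collections import defaultdict

class AlphaCapture:
    def __init__(self, decay: float = 0.9):
        self.decay = decay                   # how fast old evidence fades
        self.score = defaultdict(float)      # (person, context) -> decayed sum
        self.weight = defaultdict(float)     # normalizer for the decayed sum

    def record(self, person: str, context: str, outcome: float):
        """outcome in [-1, 1]: how well the call worked out."""
        key = (person, context)
        self.score[key] = self.decay * self.score[key] + outcome
        self.weight[key] = self.decay * self.weight[key] + 1.0

    def conviction(self, person: str, context: str) -> float:
        """Decayed average outcome; >0 suggests doubling down, <0 hedging."""
        key = (person, context)
        return self.score[key] / self.weight[key] if self.weight[key] else 0.0

ac = AlphaCapture()
ac.record("analyst_a", "biotech_deals", 0.8)
ac.record("analyst_a", "biotech_deals", 0.6)
ac.record("analyst_a", "consumer_deals", -0.5)
print(ac.conviction("analyst_a", "biotech_deals"))   # positive: lean in
print(ac.conviction("analyst_a", "consumer_deals"))  # negative: hedge
```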
illiquid @lefttailguy
Yes, if you mean that the kind of context you monopolize matters: the economic value from owning the rights to particular books, songs, etc. varies dramatically. But if you're saying it's impossible to monetize context privately, you're 100% wrong. Try recreating Meta's ad engine even with a real-time dump of their entire BigQuery instance.
illiquid @lefttailguy
@herbiebradley That model works fine as long as you have some good way of pricing the drug out-license!
Herbie Bradley @herbiebradley
@lefttailguy To be fair these AI labs are incredibly researcher brained at the leadership level. The "we'll sell drugs from gigabrain models to cure cancer" approach just is a much better culture fit, and fits the mission arguably just as well.
illiquid @lefttailguy
I agree they are completely uninstrumented (and not at all focused) on it at present. The crazy thing is, the only way to make sure AGI is beneficial "for all humanity" is to have the best model/apparatus for allocating AGI. That model needs to have the best understanding of what *is* most beneficial to humanity at any given time and a way to deliver those solutions with machine intelligence as the core factor of production. So if they want to achieve their mission, they will have to in some sense solve capital allocation. They won't be able to earn the profits necessary to stay on the frontier if they don't have a better understanding than everyone else of how value-generating intelligence is in any given economic context. There's a massive value-creation delta for the same task depending on the context.
Herbie Bradley @herbiebradley
@lefttailguy agreed—though still unconvinced the labs will make it work here, since operating companies and M&A is not in their DNA at all, and if they outsource it to partner PE funds then the PE funds' incentive is to max margins by not sacrificing their portcos' optionality or private ctx
illiquid @lefttailguy
You might find this interesting. The only enterprise software apps that (almost certainly) have 100% DAU/MAU are Teams and Outlook. I would think chatbots would compete directly with person-to-person queries as more of the questions you would have asked a teammate become answerable by computers. But something tells me person-to-person queries are actually up (context is that which is scarce). Good modern comps for Excel are of course Claude Code / Codex. The funny thing is, the more people use Harvey, Codex, etc., the higher the DAU/MAU of Excel, PPT, Word, etc. are likely to drift, because the output of a lawyer is still passed around in Word docs and the output of a banker is still passed around in Excel workbooks. So creating more output really means creating more instances of those file types. If anyone has concrete examples of workflows in these tools (for business users, not engineers) that don't touch these file types or applications at some point during the assembly line, I'd love to hear 1) the workflow and 2) what is being used instead.
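For reference, a minimal sketch of how the DAU/MAU stickiness ratio falls out of usage logs (synthetic data, not any real telemetry):

```python
# Minimal sketch of the DAU/MAU "stickiness" ratio from synthetic usage logs.
# Each log entry: (user_id, day). Real telemetry would be far richer.
from datetime import date

logs = [("alice", date(2025, 1, d)) for d in range(1, 32)]  # daily user
logs += [("bob", date(2025, 1, 3)), ("bob", date(2025, 1, 17))]  # occasional

monthly_active = {user for user, _ in logs}
daily_active_by_day: dict[date, set[str]] = {}
for user, day in logs:
    daily_active_by_day.setdefault(day, set()).add(user)

avg_dau = sum(len(u) for u in daily_active_by_day.values()) / len(daily_active_by_day)
mau = len(monthly_active)
print(f"DAU/MAU = {avg_dau / mau:.2f}")  # 1.0 = every monthly user shows up daily
```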
illiquid @lefttailguy
@stokebuilder you may enjoy this utopian (or dystopian) vision piece framing the labs (OpenAI in this case) as a Hayekian Aggregator: "Taken to its logical conclusion, this mechanism allows OpenAI to be the intelligent switchboard that not only discovers/understands/captures workflow intent (scarce context), but also routes it to the best end-to-end solution (scarce context). This intelligent switchboard turns aggregated workflow intent into a dynamic economic inefficiency index that is programmatically extensible/legible to the growing universe of first-party and third-party, increasingly AI-powered applications purpose-built to solve specific problems end-to-end. Crucially, this system will be maximally conducive to self-improvement via RL: instead of optimizing routing around lagging indicators (who has the largest marketing budget, best Gartner standing, subjective reputation among implementation consultants, executives, etc.), it optimizes around which tool was most effective at turning workflow intent/problems into solutions. Outcome signals drive better routing, which attracts better tools, which improves outcomes in a compounding loop." Now, it's up to you whether you think the incentives allow OpenAI et al. to become the API the entire economy runs through. My sense (and likely Soren's sense) is that the physics of economic behavior will not allow it. I'm writing another piece that argues that having the world model a true Hayekian Aggregator would have is necessary for the labs to achieve their stated goal of "making AGI beneficial for all humanity," but it's unlikely most other actors will trust that world model or pay it rent. thediff.co/archive/router…
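One way to read that compounding loop mechanically is as an epsilon-greedy bandit over tools, where outcome signals update routing weights. This is a toy sketch under that assumption, not the Diff piece's model or any real system; tools and outcome numbers are hypothetical:

```python
# Toy sketch of "outcome signals drive better routing" as an epsilon-greedy
# bandit over tools. All tools and outcome distributions are hypothetical.
import random

class OutcomeRouter:
    def __init__(self, tools: list[str], epsilon: float = 0.1):
        self.epsilon = epsilon
        self.avg_outcome = {t: 0.0 for t in tools}  # running mean outcome
        self.n = {t: 0 for t in tools}

    def route(self) -> str:
        # Mostly exploit the best-performing tool, occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(list(self.avg_outcome))
        return max(self.avg_outcome, key=self.avg_outcome.get)

    def record_outcome(self, tool: str, outcome: float):
        # Incremental mean update: better outcomes -> more future routing.
        self.n[tool] += 1
        self.avg_outcome[tool] += (outcome - self.avg_outcome[tool]) / self.n[tool]

router = OutcomeRouter(["tool_a", "tool_b"])
for _ in range(1000):
    tool = router.route()
    # Hypothetical ground truth: tool_a resolves intents better than tool_b.
    outcome = random.gauss(0.9 if tool == "tool_a" else 0.6, 0.1)
    router.record_outcome(tool, outcome)
print(max(router.avg_outcome, key=router.avg_outcome.get))  # almost surely tool_a
```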
Soren Larson @hypersoren
Depends on how you cast it. I'd say a Hayekian Aggregator needs access to both:
- feedback loops
- commercial venues to monetize these loops

Probably opportunity to refine the above. In coding, the labs have this. But they don't in Abridge or OpenEvidence's markets. @lefttailguy might have ideas on if/how to formalize.
Soren Larson @hypersoren
More Hayekian arguments coming out: here, the claim is that only companies see valuable high-entropy context streams that cannot be instrumented any other way but by being that company. The black-pill version of this is that all of this is also true for a roll-up, a Hayek-friendly generalizer.
Charlie O'Neill @oneill_c

x.com/i/article/2054…
