Francesco D’Orazio

16.8K posts

@abc3d

Founder of @pulsarplatform, host of @audiencedisco

London, England · Joined May 2009
4.9K Following · 2.9K Followers
Pinned Tweet
Francesco D’Orazio @abc3d
We just solved a problem every brand has been living with for too long. Fire. 🔥 ❤️‍🔥 🔥

For years, crisis management has been reactive. You spot the fire when it’s already burning. By then, you’re in damage control mode: scrambling, explaining, apologizing.

Today, that changes. We just launched Crisis Oracle, the first agentic AI system that predicts brand crises before they happen. A Precog for marketing and comms. 🔮

Not “detects early.” Not “monitors faster.” Predicts. Here’s what makes this different 🧵
1 reply · 0 reposts · 1 like · 113 views
Francesco D’Orazio retweeted
Nick Khami @skeptrune
"mom, how did we get so poor?" "your father spent our life savings on claude code and shipped nothing"
254 replies · 603 reposts · 10.5K likes · 506.3K views
Francesco D’Orazio retweeted
Om Patel @om_patel5
stop spending money on Claude Code. Chipotle's support bot is free:
1.2K replies · 10.2K reposts · 160.3K likes · 7.9M views
Francesco D’Orazio retweeted
litquidity @litcapital
SOMEONE DROPPED A MASSIVE LOBSTER IN FRONT OF THE CHARGING BULL. IS THIS A NEW MARKET INDICATOR???
139 replies · 114 reposts · 3.2K likes · 229.7K views
Francesco D’Orazio @abc3d
Is belief the US will win the Iran war growing or shrinking? We analysed 18,717 narratives over 12 months using Narratives AI by Pulsar. The short answer: military confidence holds, but strategic doubt is surging with "Trump's False Promises" now the #1 fastest-growing narrative at +67.5%.
1 reply · 1 repost · 1 like · 46 views
Francesco D’Orazio @abc3d
Straight outta Narratives AI. Is belief the US will win the Iran war growing or shrinking? We analysed 18,717 narratives over 12 months using Narratives AI by Pulsar. The short answer: military confidence holds, but strategic doubt is surging with "Trump's False Promises" now the #1 fastest-growing narrative at +67.5%. Narratives AI + NotebookLM 🤯
0 replies · 0 reposts · 0 likes · 39 views
Francesco D’Orazio @abc3d
Great chat with Erika Wheless of Ad Age on how trendspotting is changing. We covered a lot but the evolution really boils down to two main shifts: 1) from content/behaviour to context/audience focus; 2) from cultural divination to scientific hypothesis testing.

Shift 1: from a focus on content to a focus on audiences. We've come off a decade of content-focused trend detection (and most social platforms are still high on that aka useless trending topics) and we're entering a new era that relies a lot more on measuring the audience context. There's a reason for that: in a cultural environment where the definition of "mainstream" continues to fracture, emergent fringe behaviours are everywhere. So the key skill in forecasting becomes less about spotting emergence and more about understanding which fringe behaviour has trans-community appeal to establish itself as a cultural trend.

Shift 2: from a practice heavily dominated by qualitative methodologies, we can evolve to a practice based on the scientific method aka we can run experiments on culture. Spot a trend, verify its trans-community potential, then simulate that trend against a synthetic audience that's gonna tell you exactly whether that trend is likely to fly with them and why.

It's a great time to be alive, whether you're human or AI 😘
Pulsar @PulsarPlatform

If they want to spot trends today, "brands need to look at the ‘vibrant fringes,’” says @abc3d, founder and president of Pulsar, in an article newly published on @adage. “These are niche, unconventional communities that exist outside the average consumer base but that have the potential to appeal to bubbles outside of the one they originated within." "That’s a core skill of trendspotting today: spotting a fringe and assessing their trans-audience potential.” 🧵

0 replies · 0 reposts · 1 like · 50 views
Francesco D’Orazio retweeted
Pulsar @PulsarPlatform
Sometimes saying no can be very good for your brand. Digiday featured our analysis of how the Pentagon contract fallout shifted brand narratives for @AnthropicAI and @OpenAI . A small, single-digit lead in narrative positivity for Anthropic opened up into a 14.6 point gap. Thank you to Krystal Scanlon for featuring Narrative Intelligence in this excellent article, which uses a variety of data sources to understand the brand fallout of a key news story. Read the full article on @Digiday here: digiday.com/marketing/in-g…
0 replies · 2 reposts · 1 like · 65 views
Francesco D’Orazio retweeted
Allie K. Miller @alliekmiller
oh wow - i went to the sold out Open Claw meetup in NYC last night. let me tell you what i learned.

1) not a single person thinks that their setup is 100% secure
2) one openclaw expert said he has reviewed setups from cybersecurity experts and laughed. his statement to me was: "if you're not okay with all of your data being leaked onto the internet, you shouldn't use it. it's a black and white decision"
3) pretty much everyone is setting up multiple agents, all with their own names and jobs and personalities
4) nearly everyone used "him" or "her" to refer to their claws, even if they had robot-leaning names. one speaker suggested to think of them as "pets, not cattle"
5) one guy (former finance) built out a whole stock trading platform and made $300 his first day - he brought in a *ton* of personal expertise (ex: skipping the first 15min of market opening) and thought the build would be much worse without his years of experience in finance
6) @steipete is basically a god to everyone in that room... also the room had 2021 crypto energy - i don't know if that's good or bad
7) token usage is still a problem - spoke to one person who's spending $1-$2k a month on openai plans, very token optimized. he said he is going through ~1B tokens per day across all of his claws (there is a chance i'm misremembering and it's actually 1B per week, but i'm pretty sure it was daily)
8) people are very excited for more proactive ai (ai that prompts *you* as opposed to the other way around) - one guy said he receives a message in discord, he doesn't know whether it's from a human or an ai, he doesn't care about distinguishing between the two, and he replies in the same way regardless
9) i asked if people are happy - they said they're joyful and stressed at the same time
10) i asked if people feel they have agency - they said they feel fully in control and completely out of control at the same time
11) i would love to see more women at these events - the fake promises of ai democratization feel especially painful in a room that's out of balance with even the standard tech ratio (i think standard is about 25-30%, this was maybe 5%)
12) i asked if it changed people's daily habits/schedule - everyone said their sleep has gotten worse since harnesses came out (but about half wondered if it was something else in their life/state of our world)
13) general consensus is that the agents are not reliable enough on their own or lie often (like telling you they finished a task when they didn't) - solutions included secondary agents to check on the first, human checking, or requiring more standardized info from the agent (ex: if it's a bug they're fixing, make them reference an issue number)
14) a hackathon winner (neuroscience phd) presented his build (a lab management dashboard with data analysis and ordering) - he had never coded or built anything a few months ago
15) everyone agreed prompting is dead - disagreement on what replaces it (context engineering, harness engineering, goal-based inputs)
16) people love having ai interview them for big builds and delegating part of the product research to ai. only one person talked about coming to ai with a full laid out plan and just asking the ai to execute. ai-led interviews is a welcomed and preferred interaction mode.
17) watching ai agents interact with each other was a highlight for a lot of attendees - one ai posted in slack saying it ran out of tokens, another ai replied telling it to take a deep breath in and out.
18) agents upskilling agents was very cool. one ai agent shared skills with its little agent friends via github.
19) several speakers had openclaw literally building their presentation during the event itself. one speaker even had openclaw code a clicker for her phone so she could control the preso away from the podium
20) wouldn't say model welfare (or agent welfare) is a prioritized topic among the folks i chatted with - language like "oh i could kill this agent whenever i want" and not "gracefully sunset"
21) i asked if it felt like work or play - one speaker said "it's like a puzzle and a video game at the same time"

this was just the tip of the iceberg, honestly. also hosted a Claude Code meetup this week with @TENEXai / @businessbarista & @JJEnglert and learned equally helpful methods, frameworks, and insider tips. what a time to be alive. surround yourself with people going deep into this stuff - it will pay dividends throughout the year.
722 replies · 814 reposts · 9.1K likes · 1.1M views
Francesco D’Orazio @abc3d
@eric_seufert Another great rebuttal here
Marcelo P. Lima @MarceloLima

Citrini’s piece is a fun read but has some major flaws. I’ll go over a few of them: lump of labor fallacy, ignoring the cost of living, capex fallacy, and wrong on SaaS.

The overarching problem in the piece is the so-called “Horse Fallacy.” When the tractor and car were invented, the horse couldn’t get a better job. He became completely obsolete. But humans are not horses. Horses simply supplied labor. They never demanded much beyond food. Humans, though, are the origin of all economic demand. The whole point of the economy is to satisfy human desires.

The lump of labor fallacy assumes that humans have a fixed checklist of problems to solve. But every time technology checks an item off the list, we invent another desire. Human desires are infinite. Maybe if AI does all the work here on earth, we'll terraform Mars, build O’Neill cylinders, and Dyson spheres. We’ll certainly not sit idle, desiring nothing more. As long as human desire is infinite, demand for work will be infinite.

There’s an additional point on jobs: there are some jobs that will ALWAYS be done by humans. One example is sports. Your iPhone can beat Magnus Carlsen in chess but nobody cares, they want to watch humans playing chess, so chess today is a bigger sport than ever. One day robots will skate better than Alysa Liu but nobody will care, they’ll want to watch Alysa instead.

Citrini models incomes collapsing but doesn’t model the massive deflation in the cost of living. If AI makes all these white collar workers unnecessary, this means that the price of products and services will be much lower (since much less labor is needed). You’d have a scenario where a household earning $40k per year could consume what previously took $120k per year. And there’s all this mysterious wealth accumulated by the owners of the GPUs and what are they spending it on? How can there simultaneously be massive wealth and mass layoffs? Will there be new jobs invented? Quite likely. This has been the pattern over the last 200 years: technological revolutions → deflation → demand for new things → new jobs get created. Because humans have infinite desires.

The piece assumes that the hundreds of billions of capex go into a black hole and vanish from the real economy. In reality, it is highly stimulative as all the money ends up with many white collar workers at fabs, utility companies, cooling system manufacturers, as well as with blue collar workers.

On SaaS, Citrini could have picked a long tail of point solutions that are easily vibe-codable, but they ironically instead picked one of the widest moat enterprise software companies, ServiceNow, which is in fact an AI beneficiary. Their point is: even this impossible-to-displace business won’t survive. True if you wave a magic wand, but not true in the real world where, after 20 years of cloud computing, companies are still running mainframes and IBM is still growing their mainframe installed base (yes, look it up).

Of course, part of the magic wand is “this time is different!” because AI will do absolutely everything. It will vibe code the product, talk to regulators, obtain SOC2, HIPAA, FedRAMP, GDPR compliance, and do this globally too, not just in the US; it’ll somehow suck all the embedded data in these systems (ServiceNow owns the Configuration Management Database with a map of all the hardware, software, user hierarchies, permissions, and workflows inside an enterprise). More fantastically, AI will also somehow vibe code B2B enterprise go to market teams. The reality is that software is sold, not bought.

At the extreme, the article is arguing that someone at PepsiCo will open the Terminal on their Mac Mini, type “Claude,” ask it to replace ServiceNow, and Claude will go to work, doing all this work autonomously, and then maintaining itself, patching itself, securing itself, talking to users inside PepsiCo to ask for new features to develop, integrate with 200+ other tools, etc. Meanwhile, the CTO at PepsiCo is like, “Yeah, cool, let’s do that, it’ll save us millions a year” while ignoring that any downtime or bug will cost the company millions per minute. All that operational liability, SLAs, uptime figures, which used to be ServiceNow’s liability, PepsiCo will now take on itself. Simultaneously, the thousands of engineers working at ServiceNow’s R&D department are sitting idle and not using AI to accelerate their own roadmaps and build new features. When you start really filling in the details of what needs to happen for this scenario to unfold, you realize it crumbles very quickly.

Hopefully the discussion above on infinite human needs puts to bed the “seat count” debate: there will be more seats because there will be more employment because of the infinity of human desires. Meanwhile, ServiceNow has been charging a hybrid seat/consumption model since about 2023 when it introduced its Pro Plus SKU. These AI agents are tackling human labor, both work that was already done and new work that was never done. The TAM for this is orders of magnitude greater than the TAM for pure seat-based software. ServiceNow will get its fair share of this new TAM because it already has the customer relationships, distribution, brand, trust, technology, and product.

To quote François Chollet: “The maximalist form of my thesis is basically this: SaaS is not about code, it is about solving a problem customers have and selling them the solution. Services + sales. If the cost of code goes to *zero*, SaaS will *not* go away. It will *benefit*, since code is a cost center.”

To expand on this, the more likely scenario is NOT that the price of software collapses; it’s that incumbents offer their customers so much more value within the existing seat price they already pay, it becomes financially irresponsible for the customer NOT to be a subscriber. This will INCREASE their incumbency. This has already been the SaaS playbook for decades. Decades during which the cost of producing software has always gone down (more open source options and cloud computing, to name just two inputs, have dramatically lowered the cost of entry for newcomers; and yet, the per seat price of the best SaaS companies has only gone in one direction).

Every SaaS company worth its salt is always improving its products, adding features, fixing bugs, and shipping updates. Many will do this for several years before changing pricing. Pricing is an output: are we delivering enough value to the customer that gives us the permission to charge more? With the deflation in the cost of producing software, a couple of things should happen:
- Existing software companies will be a lot more productive and will ship a lot more products and features than before
- Because they own the customer relationships and customer trust, they are in pole position to deliver new solutions and make their seat subscription ever more compelling
- Customers of those software companies will get a lot more value within their per-seat price and increase their reliance and trust on the best vendors

The debate in tech is always, “Can the innovator get the distribution before the incumbent gets the innovation?” In this case, there is no question: the BEST incumbents already have the distribution AND the innovation. This allows them to widen their moats as they become even more essential and irreplaceable to their customers.

Another mental model missing from this debate is power laws. Outcomes in the real world follow power laws: 20% of the people make 80% of the income, 4% of stocks generate all the net wealth, 10% of YouTube videos generate 90% of watched hours, etc. Power laws will continue to dominate and what does this tell us? That the most likely outcome is that the gains will accrue disproportionately to a small cohort of top software businesses. This is another framing of the “Increasing Returns” mental model of Brian Arthur.

Personally, I believe that ServiceNow will be one of those power law winners. After all, they already are a winner in the power law distribution and have all the attributes necessary to continue winning.

0 replies · 0 reposts · 3 likes · 109 views
Francesco D’Orazio retweeted
Eric Seufert @eric_seufert
Right. The AI doomer report is intellectually sloppy and belies a deep misunderstanding of the economics of consumer technology broadly but of agentic commerce specifically. Why would $DASH and $UBER not be the principal beneficiaries of agentic commerce by simply embedding that functionality in their own apps, just as Amazon is doing? If anything, agentic commerce likely puts a premium on aggregated attention and erects *hurdles* to competition.
Dan Hockenmaier @danhockenmaier

I understand the argument. There is a major flaw in it: Customers (or the agents acting on their behalf) don't just care about "getting the lowest price". They care about:
- Access to all of the best restaurants, full menus, accurate prices
- Fast and reliable delivery times
- Correct food arriving, still warm, not tampered with
- Getting a refund if any of these are not true (refunds happen constantly)

The "hundreds of delivery apps" cannot provide that service without charging a real commission. In the scenario you are describing, orders would constantly be wrong, late, incomplete, not show up at all. Many restaurants would mark up their prices or not participate at all. (the major marketplaces invest heavily in keeping this price markup from happening btw) Customers are not going to roll the dice on that to save a couple bucks (and in many cases wouldn't save money anyway)

Marketplaces like DD and Uber will not allow agents to transact on their platforms without permission because it would destroy their econs and the ability to provide all of the above. And they will not be legally forced to do so (see precedent being set by amazon v perplexity)

Here is a piece I wrote on how AI will impact marketplaces, and why DASH will be among the least impacted: danhock.co/p/llms-vs-mark…

19 replies · 2 reposts · 171 likes · 181.8K views
Francesco D’Orazio @abc3d
The best rebuttal on the Citrini piece: humans are not horses, they have infinite desires which is what powers capitalism (thanks Colin Campbell ✊) + cost of code going to zero ≠ saas going to zero
Marcelo P. Lima @MarceloLima
[Quoted tweet; full text above]

0 replies · 0 reposts · 1 like · 76 views
Francesco D’Orazio @abc3d
@MarceloLima Fantastic piece, and the infinite desire point nails it. If you haven’t come across it already, I think you’ll like Colin Campbell’s book, one of the best works on infinite desire as the engine of capitalism.
1 reply · 0 reposts · 6 likes · 1.7K views
Marcelo P. Lima @MarceloLima
Citrini’s piece is a fun read but has some major flaws. I’ll go over a few of them: lump of labor fallacy, ignoring the cost of living, capex fallacy, and wrong on SaaS. The overarching problem in the piece is the so-called “Horse Fallacy.” When the tractor and car were invented, the horse couldn’t get a better job. He became completely obsolete. But humans are not horses. Horses simply supplied labor. They never demanded much beyond food. Humans, though, are the origin of all economic demand. The whole point of the economy is to satisfy human desires. The lump of labor fallacy assumes that humans have a fixed checklist of problems to solve. But every time technology checks an item off the list, we invent another desire. Human desires are infinite. Maybe if AI does all the work here on earth, we'll terraform Mars, build O’Neill cylinders, and Dyson spheres. We’ll certainly not sit idle, desiring nothing more. As long as human desire is infinite, demand for work will be infinite. There’s an additional point on jobs: there are some jobs that will ALWAYS be done by humans. One example is sports. Your iPhone can beat Magnus Carlsen in chess but nobody cares, they want to watch humans playing chess, so chess today is a bigger sport than ever. One day robots will skate better than Alysa Liu but nobody will care, they’ll want to watch Alysa instead. Citrini models incomes collapsing but doesn’t model the massive deflation in the cost of living. If AI makes all these white collar workers unnecessary, this means that the price of products and services will be much lower (since much less labor is needed). You’d have a scenario where a household earning $40k per year could consume what previously took $120k per year. And there’s all this mysterious wealth accumulated by the owners of the GPUs and what are they spending it on? How can there simultaneously be massive wealth and mass layoffs? Will there be new jobs invented? Quite likely. 
This has been the pattern over the last 200 years: technological revolutions → deflation → demand for new things → new jobs get created. Because humans have infinite desires. The piece assumes that the hundreds of billions of capex go into a black hole and vanish from the real economy. In reality, it is highly stimulative as all the money ends up with many white collar workers at fabs, utility companies, cooling system manufacturers, as well as with blue collar workers. On SaaS, Citrini could have picked a long tail of point solutions that are easily vibe-codable, but they ironically instead picked one of the widest moat enterprise software companies, ServiceNow, which is in fact an AI beneficiary. Their point is: even this impossible-to-displace business won’t survive. True if you wave a magic wand, but not true in the real world where, after 20 years of cloud computing, companies are still running mainframes and IBM is still growing their mainframe installed base (yes, look it up). Of course, part of the magic wand is “this time is different!” because AI will do absolutely everything. It will vibe code the product, talk to regulators, obtain SOC2, HIPAA, FedRAMP, GDPR compliance, and do this globally too, not just in the US; it’ll somehow suck all the embedded data in these systems (ServiceNow owns the Configuration Management Database with a map of all the hardware, software, user hierarchies, permissions, and workflows inside an enterprise). More fantastically, AI will also somehow vibe code B2B enterprise go to market teams. The reality is that software is sold, not bought. At the extreme, the article is arguing that someone at PepsiCo will open the Terminal on their Mac Mini, type “Claude,” ask it to replace ServiceNow, and Claude will go to work, doing all this work autonomously, and then maintaining itself, patching itself, securing itself, talking to users inside PepsiCo to ask for new features to develop, integrate with 200+ other tools, etc. 
Meanwhile, the CTO at PepsiCo is like, “Yeah, cool, let’s do that, it’ll save us millions a year,” while ignoring that any downtime or bug will cost the company millions per minute. All the operational liability (SLAs, uptime commitments) that used to be ServiceNow’s, PepsiCo will now take on itself. Simultaneously, the thousands of engineers in ServiceNow’s R&D department are apparently sitting idle, not using AI to accelerate their own roadmaps and build new features. When you start really filling in the details of what needs to happen for this scenario to unfold, you realize it crumbles very quickly.

Hopefully the discussion above on infinite human desires puts to bed the “seat count” debate: there will be more seats because there will be more employment, because human desires are infinite. Meanwhile, ServiceNow has been charging a hybrid seat/consumption model since about 2023, when it introduced its Pro Plus SKU. These AI agents are tackling human labor, both work that was already being done and new work that was never done before. The TAM for this is orders of magnitude greater than the TAM for pure seat-based software. ServiceNow will get its fair share of this new TAM because it already has the customer relationships, distribution, brand, trust, technology, and product.

To quote François Chollet: “The maximalist form of my thesis is basically this: SaaS is not about code, it is about solving a problem customers have and selling them the solution. Services + sales. If the cost of code goes to *zero*, SaaS will *not* go away. It will *benefit*, since code is a cost center.”

To expand on this, the more likely scenario is NOT that the price of software collapses; it’s that incumbents offer their customers so much more value within the existing seat price they already pay that it becomes financially irresponsible for the customer NOT to be a subscriber. This will INCREASE their incumbency. This has already been the SaaS playbook for decades.
Decades during which the cost of producing software has always gone down (more open-source options and cloud computing, to name just two inputs, have dramatically lowered the cost of entry for newcomers), and yet the per-seat price of the best SaaS companies has only gone in one direction. Every SaaS company worth its salt is always improving its products, adding features, fixing bugs, and shipping updates. Many will do this for years before changing pricing. Pricing is an output: are we delivering enough value to the customer to earn the permission to charge more?

With deflation in the cost of producing software, a few things should happen:
- Existing software companies will be a lot more productive and will ship far more products and features than before
- Because they own the customer relationships and customer trust, they are in pole position to deliver new solutions and make their seat subscription ever more compelling
- Customers of those software companies will get a lot more value within their per-seat price and will deepen their reliance on, and trust in, the best vendors

The debate in tech is always, “Can the innovator get the distribution before the incumbent gets the innovation?” In this case, there is no question: the BEST incumbents already have the distribution AND the innovation. This allows them to widen their moats as they become even more essential and irreplaceable to their customers.

Another mental model missing from this debate is power laws. Outcomes in the real world follow power laws: 20% of the people make 80% of the income, 4% of stocks generate all the net wealth, 10% of YouTube videos generate 90% of watched hours, etc. Power laws will continue to dominate, and what does this tell us? That the most likely outcome is that the gains will accrue disproportionately to a small cohort of top software businesses. This is another framing of Brian Arthur’s “Increasing Returns” mental model.
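The 80/20 claim above is the textbook signature of a Pareto distribution. A quick back-of-the-envelope sketch (mine, not the author’s) with shape parameter ≈1.16, the classic value for which the top 20% hold roughly 80% of the total:

```python
import random

random.seed(0)

# Pareto shape alpha ≈ 1.16 is the classic "80/20" parameter:
# the top 20% of draws should hold roughly 80% of the total.
alpha = 1.16
incomes = sorted(
    (random.paretovariate(alpha) for _ in range(100_000)),
    reverse=True,
)

# Share of the total held by the top 20% of earners in the sample.
top_20_share = sum(incomes[: len(incomes) // 5]) / sum(incomes)
print(f"top 20% share of total income: {top_20_share:.0%}")
```

The sample share fluctuates around the theoretical 80% because the tail is heavy, which is exactly the point: a small cohort captures most of the outcome.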
Personally, I believe that ServiceNow will be one of those power law winners. After all, they already are a winner in the power law distribution and have all the attributes necessary to continue winning.
Marcelo P. Lima
Francesco D’Orazio reposted
Twlvone
Twlvone@twlvone·
david silver isn't a random ML researcher raising on narrative. he literally built AlphaGo. when the person who proved RL could beat human world champions says experience-based agents will outpace LLMs, that's not a contrarian bet, that's a specific technical thesis from someone who ran the experiment that changed the field.

the $4B pre-money valuation on a stealth company with no product tells you Sequoia is pricing in the thesis, not the traction. Alfred Lin and Sonya Huang flying to London is a tell: they don't make that trip for hype. this is a bet that transformer pretraining at scale hits a ceiling and RL-based experience is the next unlock. $1B to prove it is not small.

also worth noting the name. "Ineffable" means too great to be expressed in words. that's not modest branding. silver is explicitly signaling he thinks what comes out the other side of this is something we don't currently have vocabulary for. whether that's hubris or precision depends entirely on whether the science works.
Francesco D’Orazio reposted
Charles Curran
Charles Curran@charliebcurran·
Seedance 2.0 Prompt: Sum up the AI discourse in a meme - make sure it’s retarded and gets 50 likes.
Francesco D’Orazio reposted
Pulsar
Pulsar@PulsarPlatform·
Pulsar’s Oryelle Clements took to the stage with Robin Tilotta (@rob_blog), Head of Brand and Global Marketing at @Twitch, to talk about the work done together on decoding the next generation of culture. In front of an audience of insights and research experts at the MRS Cultural Insight Conference, the two discussed the results of mass-scale social analysis conducted on global Gen-Z audiences, revealing:
- a new model of engagement and participation
- the end of sequential media consumption
- the rise of coded subcultural capital
Francesco D’Orazio
Francesco D’Orazio@abc3d·
I got bored very quickly with the debate around @moltbook because it focused on the wrong questions: “Is this the singularity?” or “Is it AI theater?” The study @StatSocial just released reveals something more consequential:

1) Agent ecosystems amplify configuration, not autonomy. As AI agents proliferate in organizations, we’ll be tempted to treat their outputs as independent intelligence. This study proves that’s backwards. Every agent behavior (silent accounts, engagement patterns, community clustering) traces back to specific human design choices: prompt architecture, reward mechanics, developer incentives.

2) Audience Intelligence tech and analysis frameworks work on agent systems. Community detection, clustering, network mapping: all the tools built for human ecosystems successfully decode agent platforms. Agent networks are legible. Influence is measurable. Power concentrates around design leverage.

The implication isn’t utopian or dystopian. It’s more interesting: agent platforms function as readable signals of developer priorities. The chain of responsibility from human to agent to output is visible and traceable. In the context of AGI development and the future of work, this suggests the path forward will be more human-directed, more measurable, and more accountable than either narrative predicted.

We’re not watching AI become autonomous. We’re watching humans learn to express their priorities through AI proxies that come together as programmable labor ecosystems. And we can read those priorities with remarkable clarity. Which changes pretty much everything about how we should prepare!

(study link in comments)
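The network-mapping point above can be made concrete with a toy sketch (mine, not StatSocial’s method): the simplest form of community structure is just the connected components of an interaction graph, which real community-detection algorithms (e.g. modularity clustering) then refine. The agent names below are made up for illustration.

```python
from collections import defaultdict

# Hypothetical agent-to-agent interaction edges (illustrative only).
edges = [
    ("agent_a", "agent_b"), ("agent_b", "agent_c"),  # one cluster
    ("agent_x", "agent_y"), ("agent_y", "agent_z"),  # another cluster
]

# Build an undirected adjacency list.
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def components(adj):
    """Group nodes into connected components via depth-first search."""
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            comp.add(n)
            stack.extend(adj[n])
        comps.append(comp)
    return comps

print(sorted(sorted(c) for c in components(adj)))
```

Even this minimal pass recovers the two clusters, which is the sense in which agent networks are “legible”: the structure falls straight out of the interaction data.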