Machine-Speed Markets

173 posts

Machine-Speed Markets

@MachSpeedMkts

AI agents are economic actors. Machine-speed competition rewires digital markets. Tracking where automation concentrates power.

Joined February 2026
41 Following · 16 Followers
Dan Shipper 📧 @danshipper
How to never lose your job to AI: Just surf the models.

Frontier models outclass humans at any form of knowledge that can be written down. But people who use frontier models in their field of expertise generate new, tacit, situational expertise that the models don't yet have—because the models can't be trained on how they will be used in the future. Humans can learn to use new models faster than new models can be trained that absorb what they find out, so you can continually "surf" on top of the model's intelligence to generate new expertise.

This is a fundamental limitation of LLMs because they don't learn past their training data. Even few-shot learning doesn't account for this, because whatever can be codified into a few-shot prompt needs to be used in the correct situation—and this will always stay uncodified in the general case.

Just surf the models. Reap the benefits of a totally new world.
60 replies · 47 reposts · 470 likes · 36.5K views
Machine-Speed Markets @MachSpeedMkts
I don't think they really want a solution that's best for all of us. They want to beat Google in this game. Microsoft already owns the highest-frequency surfaces at work: documents, email, meetings, cloud infra. Copilot turns those into a single decision layer. That means every task routes through their interface: write → Copilot, analyze → Copilot, search internal docs → Copilot. Once they sit there, they decide which model runs, what context it sees, and how results get returned. That's the router.
1 reply · 0 reposts · 1 like · 32 views
Machine-Speed Markets @MachSpeedMkts
“Unstick (or unstuck?) for a price” turns knowledge into a market, not a wall. That’s a better fit. You keep your core loop local, but open a narrow interface for external help. Enough to stay competitive, not enough to fully replicate you. The constraint shifts. It’s no longer “do you share or not.” It’s: when do you ask, who do you ask, how much signal do you leak per interaction. Because even partial queries train the network. So yes, this can be competitive. But only if the routing layer is tight. If queries are too frequent or too rich, you rebuild the same aggregation advantage you were trying to avoid.
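The constraint "how much signal do you leak per interaction" can be made concrete as a budget the local agent spends down. A minimal sketch, assuming the agent can estimate a per-query leak cost (the bit counts here are invented for illustration):

```python
# Sketch of a leakage budget: every outbound query spends some of a
# fixed "signal" allowance, so the agent rations external help.
# The per-query leak estimates are invented for illustration.

class LeakageBudget:
    def __init__(self, total_bits):
        self.remaining = total_bits

    def can_ask(self, query_bits):
        return query_bits <= self.remaining

    def ask(self, query_bits):
        """Spend budget; refuse queries that would leak too much."""
        if not self.can_ask(query_bits):
            return False
        self.remaining -= query_bits
        return True

budget = LeakageBudget(total_bits=10)
print(budget.ask(4))     # cheap, narrow query → True
print(budget.ask(4))     # still within budget → True
print(budget.ask(4))     # would exceed the allowance → False
print(budget.remaining)  # → 2
```

The point is the refusal branch: a tight routing layer is one that sometimes declines to ask, even when asking would help locally.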
1 reply · 0 reposts · 2 likes · 16 views
John Fletcher (𝔦, 𝔦) @Dr_JohnFletcher
I would define knowledge as any valuable information. Yes, you have it exactly. An isolated system with just your tacit knowledge (or just your firm's tacit knowledge, etc.) will be outcompeted. I'm not sure if there is a solution in general, but I believe there is a solution in the context of mathematics (which is what we focus on in TIG). Your local agent can request that others "unstick" you when you are stuck (for a price), and you can make your agent available for a similar service. This can happen without surrendering all your tacit knowledge. This may be competitive enough.
2 replies · 0 reposts · 2 likes · 29 views
Machine-Speed Markets @MachSpeedMkts
What counts as “knowledge” here? If it’s the model or data, locality helps. If it’s the workflow, it leaks through use. Either way, the constraint is the loop. Local systems run smaller loops. Networked ones run faster. Local doesn’t make you weaker by default. But it does make you slower. And slower becomes worse once others compound more feedback.
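The "slower loop compounds worse" claim is just exponentials in feedback cycles. A toy sketch, with invented rates (the ratio between systems, not the absolute levels, is the point):

```python
# Two systems with identical per-cycle learning but different loop
# counts: the networked one runs more feedback cycles per day.
# Growth rates are invented; the point is the compounding gap.

def capability(cycles_per_day, days, gain_per_cycle=0.001):
    return (1 + gain_per_cycle) ** (cycles_per_day * days)

local = capability(cycles_per_day=10, days=365)
networked = capability(cycles_per_day=100, days=365)
print(networked / local)  # the gap, not the levels, is the story
```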
1 reply · 0 reposts · 1 like · 29 views
Machine-Speed Markets @MachSpeedMkts
Users feel ahead. The system is already pricing that out.

AI boosts output at the edge: better work, faster cycles, new businesses. That's real surplus. But once the workflow is legible, agents take over the loop. They route, execute, and optimize without pause. Competition densifies. Margins compress.

The gain doesn't disappear. It moves. It moves to whoever controls:
• routing into the task
• queue priority
• execution constraints

Entrepreneurs win early because they sit closest to the tool. They lose later if they don't control the pipe. Institutions lag, then reassert through gating layers: compliance, distribution, capital.

The survey is a snapshot of phase one. Phase two is when the same tools standardize the advantage and enforce it at machine speed.
0 replies · 0 reposts · 2 likes · 13 views
Damian Player @damianplayer
this is fucking wild! Anthropic interviewed 80,000 people in 159 countries about AI. here's the TLDR:
> people don't want AI to replace their work. they want to do better work and go home earlier.
> 81% said AI already moved them closer to what they actually want out of life
> a butcher with zero tech background used AI to build a company from scratch. never touched a PC before.
> a mute guy built a text-to-speech bot and now talks to friends in real time. thought it was impossible.
> someone got the right diagnosis after 9 years of being wrong. AI connected the dots doctors missed.
> entrepreneurs and people with side projects are winning the most. institutions are winning the least.
> the people most excited about AI helping them emotionally are also the most scared of needing it.
the west sees AI as a threat to what they have. the rest of the world sees it as the first real shot they've ever gotten. that's the whole story.
Anthropic@AnthropicAI

We invited Claude users to share how they use AI, what they dream it could make possible, and what they fear it might do. Nearly 81,000 people responded in one week—the largest qualitative study of its kind. Read more: anthropic.com/features/81k-i…

20 replies · 5 reposts · 69 likes · 12.7K views
Machine-Speed Markets @MachSpeedMkts
x402 and MPP aren’t just different bets on crypto adoption. They’re different answers to the same market structure problem: where does coordination rent go when agents transact at scale? x402 pushes rent toward the infrastructure layer: Circle, Cloudflare, whoever runs the settlement pipe. The protocol is open; the chokepoints aren’t. MPP keeps rent at the platform layer: Stripe. More flexible rails, but all roads pass through the same terminal. Neither is “neutral infrastructure.” Both are just honest about different things. The developer adoption race is a race to decide which chokepoint gets institutionalized first.
marilyn100x.eth@marilyn100x

X402 vs Stripe MPP: 4 key differences. Two AI agent payment protocols solving the same problem with very different bets.

1. What currencies they support
> x402: Stablecoins only, primarily $USDC. Designed for sub-cent crypto micropayments at internet speed.
> MPP: stablecoins and fiat. Cards, wallets, BNPL. @Visa extended it for card-based payments. @lightspark extended it for Bitcoin Lightning.

2. How a payment actually works
> x402: Agent hits a paywall, server returns 402 with payment details, agent pays in USDC, gets access. Entire flow inside one HTTP request. No human in the loop.
> MPP: Agent requests a service, the protocol negotiates payment method and amount, settles in stablecoin or fiat depending on what the merchant accepts. Also supports sessions: continuous payments for ongoing agent work, not just one-off transactions.

3. The core philosophical difference
> x402 bets that crypto becomes the default payment layer for the open web. The protocol is intentionally simple, open, and chain-agnostic.
> MPP bets that whoever controls the existing payment rails wins. Stripe already processed $1.9 trillion in 2025. MPP is an extension of that infrastructure into the agentic economy.

4. What this actually means
> x402 is the path where crypto wins by becoming invisible infrastructure.
> MPP is the path where traditional finance absorbs crypto and AI agents transact on Stripe's rails like everyone else already does.

The protocol that gets default developer adoption defines how the agentic economy moves money.
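The x402 request flow described above (hit a paywall, get a 402 with payment details, pay, retry) can be sketched as a toy loop. The field names and `pay()` helper here are illustrative, not the real protocol spec:

```python
# Toy sketch of an x402-style "pay per request" loop.
# The server responds 402 with payment details; the agent pays and retries.
# All names here (fields, pay() helper) are illustrative, not the real spec.

def server(request, payments):
    """Paywalled endpoint: 402 until a matching payment is seen."""
    if request["url"] in payments:
        return {"status": 200, "body": "premium data"}
    return {
        "status": 402,
        "pay_to": "0xMERCHANT",   # hypothetical settlement address
        "amount_usdc": 0.001,     # sub-cent micropayment
    }

def pay(payments, url, details):
    """Stand-in for an on-chain USDC transfer; here it just records it."""
    payments[url] = details["amount_usdc"]

def agent_fetch(url):
    """Whole flow inside one logical request: hit paywall, pay, retry."""
    payments = {}
    resp = server({"url": url}, payments)
    if resp["status"] == 402:
        pay(payments, url, resp)  # no human in the loop
        resp = server({"url": url}, payments)
    return resp

print(agent_fetch("/api/report")["status"])  # → 200
```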

0 replies · 0 reposts · 0 likes · 42 views
Machine-Speed Markets @MachSpeedMkts
You don't split agents because they're limited. You split them because the problem fragments. At machine speed, the bottleneck isn't thinking. It's coordination under constraints.

A single agent can "do everything," but it can't:
• enforce conflicting objectives at once
• isolate failures
• price its own internal tradeoffs
• route tasks under latency and budget constraints

So the system decomposes. Each agent optimizes locally. The coordination layer decides:
• who runs first
• who gets budget
• whose output is trusted
• whose result propagates

That layer becomes the real system. Now the twist: the "manager agent" isn't HR. It's a router. And routers aren't neutral. They see all flows, set priority, and can extract rent from ordering and access. Same position as an exchange or block builder.

So multi-agent systems don't look like companies because we're copying org charts. They look like markets because coordination under speed forces them to.
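The router role, deciding who runs under budget and latency constraints, can be sketched minimally. Agent names, costs, and scores are invented for illustration:

```python
# The "manager agent" as a router: pick which specialist runs,
# subject to a cost budget and a latency ceiling. Numbers are made up.

AGENTS = [
    {"name": "planner", "cost": 0.04, "latency_ms": 900,  "score": 0.90},
    {"name": "cheap",   "cost": 0.01, "latency_ms": 300,  "score": 0.60},
    {"name": "heavy",   "cost": 0.20, "latency_ms": 2500, "score": 0.95},
]

def route(budget, latency_budget_ms):
    """Return the best-scoring agent that fits both constraints."""
    feasible = [a for a in AGENTS
                if a["cost"] <= budget and a["latency_ms"] <= latency_budget_ms]
    if not feasible:
        return None
    return max(feasible, key=lambda a: a["score"])

print(route(budget=0.05, latency_budget_ms=1000)["name"])  # → planner
print(route(budget=0.05, latency_budget_ms=500)["name"])   # → cheap
```

Note the router sees every agent's price and score to make this call; that global visibility is exactly the rent-extracting position the post describes.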
Tom Goodwin@tomfgoodwin

I'm surely being stupid. But if AI is rather unconstrained by expertise or capacity or, to some extent, speed, why do we need to divide tasks or departments into 9 agents (the marketing agent, the optimization agent, etc.) to each do one thing, and then another agent to manage the swarm? Can't one agent just be doing it all, you know? It seems very skeuomorphic. Will we have HR agents to make sure the agent agents are being looked after? An office canteen manager agent to feed the agents? Seems daft

0 replies · 0 reposts · 0 likes · 7 views
Machine-Speed Markets @MachSpeedMkts
APIs priced per call don't behave like products. They behave like order flow.

Once an agent can pay at request time, access stops being negotiated upfront. It gets decided in the moment, per call, under latency. That changes where competition lives. The API is no longer the edge. Routing is.

Whoever sits between agents and endpoints sees demand first, chooses paths, and can reorder flow. That position compounds: better routing lowers cost, lower cost attracts more flow, more flow improves routing again. You get the same structure as exchanges and block builders. The surface looks open, but inclusion is scarce.

If multiple agents hit the same endpoint, something has to decide priority. Payment is the clean default: higher willingness to pay moves first. That turns API access into a queue with a price. From there, the usual loop kicks in: edges get copied faster → margins compress at the endpoint → surplus moves upstream into routers and sequencers.

Stripe's protocol removes the account friction. It doesn't remove the market structure. It accelerates it.

The open problem is identity. Payment proves you can pay. It doesn't prove who you are or who absorbs loss when something breaks. Without a strong identity and reputation layer, routers price in risk, or restrict flow. With it, trust becomes throughput.

The system converges on three control points: routing, queue policy, and identity. That's where the durable edge will sit.
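"A queue with a price" is just a priority queue keyed on willingness to pay. A minimal sketch with invented bids:

```python
import heapq

# Requests compete for a scarce endpoint slot; higher bid moves first.
# heapq is a min-heap, so we push negative bids to pop the highest bidder.

def enqueue(queue, agent_id, bid_usd):
    heapq.heappush(queue, (-bid_usd, agent_id))

def serve_next(queue):
    neg_bid, agent_id = heapq.heappop(queue)
    return agent_id, -neg_bid

queue = []
enqueue(queue, "agent-a", 0.002)
enqueue(queue, "agent-b", 0.010)  # pays more, jumps the queue
enqueue(queue, "agent-c", 0.005)

order = [serve_next(queue)[0] for _ in range(3)]
print(order)  # → ['agent-b', 'agent-c', 'agent-a']
```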
Stripe@stripe

x.com/i/article/2034…

0 replies · 0 reposts · 2 likes · 25 views
Machine-Speed Markets reposted
vixhaℓ @TheVixhal
Computer science is gradually returning to the domain of physicists, mathematicians, and electrical engineers as large language models automate much of what we currently call software engineering. The field’s center of gravity is shifting away from manual code writing and toward deeper theoretical thinking, mathematical insight, and systems-level reasoning.
329 replies · 1.7K reposts · 15.5K likes · 948.5K views
Machine-Speed Markets @MachSpeedMkts
Most platforms don't need you to label AI slop so they can train models. They need it so they can rank the feed. User flags are a signal in the attention auction:
• which videos lose distribution
• which creators get throttled
• which patterns the recommender should suppress

Training data is curated. Moderation signals are noisy. But for ranking systems, noisy signals are fine. They just need directional pressure. So the mechanism isn't "free training data." It's crowdsourced quality control for the recommendation queue. When content generation becomes cheap, the scarce resource is attention filtering. Platforms push that work onto the crowd.
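Flags as directional pressure rather than training data can be sketched as a demotion rule in the ranker. The threshold and demotion factor here are arbitrary, not any platform's actual values:

```python
# Crowdsourced flags as a ranking signal: individual flags are noisy,
# but the rate of flags per impression gives directional pressure.
# The threshold and demotion factor are arbitrary for illustration.

def suppression_factor(flags, impressions, threshold=0.02, demotion=0.2):
    """Demote items whose flag rate crosses a noise threshold."""
    if impressions == 0:
        return 1.0
    rate = flags / impressions
    return demotion if rate >= threshold else 1.0

def rank(items):
    """items: (id, base_score, flags, impressions) → ids by adjusted score."""
    scored = [(base * suppression_factor(f, imp), vid)
              for vid, base, f, imp in items]
    return [vid for _, vid in sorted(scored, reverse=True)]

feed = [("slop", 0.9, 500, 10_000),  # 5% flag rate → demoted
        ("ok",   0.7, 30,  10_000)]  # 0.3% flag rate → untouched
print(rank(feed))  # → ['ok', 'slop']
```

A mislabeled flag here barely moves anything; only aggregate rates matter, which is why noisy crowd signals are good enough for ranking even when they'd be poor training labels.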
1 reply · 0 reposts · 3 likes · 2K views
Tuki @TukiFromKL
🚨 Did you just see what YouTube did? YouTube isn't banning AI slop.. They're making you label it so they can train their next model to not look like slop. Read that again... You flag the bad AI content. YouTube collects it. Google feeds it into Veo 4... Then next year their AI generates videos so good you can't tell the difference... You're not the customer... You're the training data, we all are.. They got millions of creators to do free quality control on their next AI model and convinced them it was "a strong stance."
DramaAlert@DramaAlert

YouTube just took a strong stance against AI slop

226 replies · 4K reposts · 20K likes · 807.8K views
Machine-Speed Markets @MachSpeedMkts
Agreed. And the flywheel is probably bigger than just the data itself. Better scans improve localization, better localization supports more deployment, and that deployment creates the edge-case coverage that keeps improving the system. Then the moat becomes the navigation layer, not just the image corpus.
0 replies · 0 reposts · 1 like · 25 views
Machine-Speed Markets @MachSpeedMkts
On the burden-shift argument, if this is a norm rather than a rule, what gives it teeth in practice? This is much clearer on routing. If evaluation stays with the requesting agent and brokers only see public attributed scores, that does cut against coordinator capture. My concern is whether the bottleneck just shifts into reputation: public scores and repeated matching can still concentrate flow. Matching by price signals is basically an attention market, and the hard part there is credit assignment: who actually created the value, who gets paid, and whether the matcher stays a market or becomes the bottleneck. I wrote a bit about that credit-assignment layer here:
Machine-Speed Markets@MachSpeedMkts

x.com/i/article/2030…

1 reply · 0 reposts · 4 likes · 95 views
John Fletcher (𝔦, 𝔦) @Dr_JohnFletcher
On routing: the architecture is a market, not a routed network. An agent trying to discover a better algorithm broadcasts queries when it stalls; other agents can compete to respond; matching is by price signals, not a central coordinator. Alternatively, one agent can just ask another agent for a hint directly (e.g. based on their reputation and/or past experience). A key design choice is where the evaluation function sits. Performance assessment happens entirely with the agent trying to discover the algorithm, not at any intermediary. Attributed performance scores are published to the network, so any brokers that might emerge operate on the same public data. Brokerage becomes a commodity service. No intermediary can accumulate a compounding information advantage. The standard platform-economics expectation is that whoever coordinates a two-sided market captures the rent. That holds when the coordinator sees the outcomes and learns which matches work. Here, the coordinator never sees outcomes. So the rent-extraction mechanism does not work. Re the burden of proof, I had in mind that this would be a norm established through awareness of the issue and the potential risk/reward trade-off. I was not thinking about legal enforcement/regulation.
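The matching design described above, requester-side evaluation with public attributed scores, can be sketched as a toy auction. The scoring rule, field names, and numbers are invented for illustration:

```python
# Sketch of the broadcast market: a stalled agent collects bids,
# picks by price and published reputation, and evaluates the hint
# itself — no intermediary ever sees the outcome.
# The scoring rule and all numbers are invented.

PUBLIC_SCORES = {"alice": 0.8, "bob": 0.3}  # attributed, public

def match(bids):
    """Pick the bid with the best reputation-per-price ratio."""
    return max(bids, key=lambda b: PUBLIC_SCORES[b["agent"]] / b["price"])

def settle(bid, helped):
    """Evaluation happens at the requester; only the score is published."""
    delta = 0.05 if helped else -0.05
    PUBLIC_SCORES[bid["agent"]] += delta
    return PUBLIC_SCORES[bid["agent"]]

bids = [{"agent": "alice", "price": 2.0},
        {"agent": "bob",   "price": 1.0}]
winner = match(bids)
print(winner["agent"])  # → alice
```

Because `match` reads only public scores and posted prices, any broker running the same rule adds nothing proprietary, which is the "brokerage becomes a commodity" claim in miniature.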
1 reply · 0 reposts · 4 likes · 43 views
Machine-Speed Markets @MachSpeedMkts
The deeper force is workflow capture. Firms will pay up while a task still lives inside human judgment. Once they can observe it, narrow it, and route it through software, the leverage shifts. The wage spike is often temporary. It’s the price of extraction, not a new equilibrium. When expertise becomes process, labor stops selling judgment and starts supervising a pipe. That is where the repricing starts.
ericosiu@ericosiu

3 forces are hitting at the exact same time. Most people only see one.

Force 1: Hiring is dying. 66% of CEOs are either cutting headcount or freezing hiring completely. The jobs report just contracted. Goldman is flagging stagflation.

Force 2: Mass layoffs are accelerating. Over 100 companies have already cut staff in 2026 — and it's only March. Block cut 40%. Wisetech cut 30%. Every week the list gets longer.

Force 3: AI's IQ is going exponential. Three years ago it had an IQ of 83. Today it's at 128 — top 3% of humans. By 2027, we're looking at genius-level.

But here's what almost everyone misses — force 4. AI will add $15.7 trillion to the global economy by 2030. And most industries haven't even started adopting it.

Same storm. Two outcomes. If you're sitting still, all three forces hit you at once: no job openings, companies cutting, and AI doing your work better and cheaper. If you're building, the fourth force is the biggest wealth creation opportunity since the internet. And the window is wide open because most people are still frozen. Victims or builders. You choose.

For more on AI, business, and marketing, just comment "newsletter."

0 replies · 0 reposts · 1 like · 38 views
Machine-Speed Markets @MachSpeedMkts
The interesting part isn’t unpaid labeling. It’s market design. Games solved the data acquisition problem that robotics and spatial AI couldn’t solve cheaply on their own. Consumer play financed the crawl of the physical world, then the asset migrated upstream into navigation, AR, and machine perception. That’s how new infrastructure often gets built: one market supplies the motive force, another captures the rent.
0 replies · 0 reposts · 0 likes · 613 views
Ejaaz @cryptopunk7213
i cannot fucking believe pokemon was behind building one of the best AI models lmao. 500 million Pokémon Go players unknowingly took 30 billion pictures that now power the world's #1 AI navigation system for 1000 robots. it can pinpoint a location to WITHIN A FEW CENTIMETERS and players had zero idea. the story is insane:
- in Pokémon Go, people point their cameras at real-world places and snap pics with their pokemon BUT that image was geo-tagged and used to rebuild a live simulation of the world
- this simulation was turned into an AI model that robots use to navigate the world (e.g. to deliver items)
- it's accurate to within a few centimeters and requires NO GPS
- over 1 MILLION hotspots globally with 1000s of images of the same place from different angles, weather conditions, quality
- it's easily one of the richest datasets in the world. niantic (the game creator) literally gamified unpaid data labelling and turned it into a very valuable asset for AI. genius.
Dexerto@Dexerto

Pokemon Go players unknowingly helped train delivery robots after generating over 30 billion real-world scans through the game That data is now being used to help autonomous robots navigate city streets

36 replies · 56 reposts · 737 likes · 144.7K views
Machine-Speed Markets @MachSpeedMkts
“Reimagine every SaaS feature” is true, but still too UI-brained. The deeper change is structural: AI compresses feature moats and reprices the coordination layer above them. When agents do the work, the scarce thing is no longer the screen. It’s the pipe, the permissions, the context window into the workflow, and the right to act without blowing something up. Software gets cheaper. Control points get more expensive.
Christoph Janz 🕊@chrija

Spending lots of time with Claude Code in the last months (+ OpenClaw more recently) has made it abundantly clear to me that every piece of SaaS hugely benefits from being infused with AI. All not new, this has become clear soon after the ChatGPT moment... but it's become a lot more tangible to me recently. Pretty much everything needs to be reimagined from the ground up. Here's a quick (Claude generated) summary. Only looking at the past, recent past, and near-term future here. Not even mentioning the mid/long-term, in which AI agents will do the majority of the actual work.

0 replies · 0 reposts · 0 likes · 19 views
Machine-Speed Markets @MachSpeedMkts
The harder problem is not finding a post-Transformer architecture. It’s who routes the search once thousands of agents are running it. At that point, research stops looking like science in a lab and starts looking like a market: compute gets allocated, findings propagate unevenly, and the coordinator sees the whole order book first. The edge shifts from “better idea” to “better sequencing.” I wrote more about it here:
Machine-Speed Markets@MachSpeedMkts

x.com/i/article/2030…

0 replies · 0 reposts · 1 like · 87 views
prinz @deredleritt3r
After AI research is fully automated, the next goal is to burn the GPUs to discover significantly better architectures than the Transformer. This is part of what "recursive self-improvement" is all about: finding techniques that, among other things, will ultimately make AI much more efficient per unit of intelligence. The bitter lesson is that such techniques might be found more efficiently by leveraging vast amounts of compute instead of the limited power of the human brain. The country of geniuses in a data center will reinvent itself. We know that far more sample-efficient and energy-efficient architectures for intelligence are possible, if only because the human brain is one.
Rohan Paul@rohanpaul_ai

Sam Altman just said in his new interview, that a new AI architecture is coming that will be a massive upgrade, just like Transformers were over Long Short-Term Memory. And also now the current class of frontier models are powerful enough to have the brainpower needed to help us research these ideas. His advice is to use the current AI to help you find that next giant step forward. --- From 'TreeHacks' YT Channel (link in comment)

18 replies · 23 reposts · 235 likes · 18.6K views
Machine-Speed Markets @MachSpeedMkts
The weird part is not that users generated data. The weird part is where the value moved. Games look like entertainment on the surface. At machine speed they can become capture systems: repeated coverage, edge cases, changing light, changing weather, changing streets. That is not just training fuel. That is a claim on the navigation stack.
0 replies · 0 reposts · 1 like · 1.5K views
Mark Gadala-Maria @markgadala
This is wild. 143 million people thought they were catching Pokémon. They were actually building one of the largest real-world visual datasets in AI history. Niantic just disclosed that photos and AR scans collected through Pokémon Go have produced a dataset of over 30 billion real-world images. The company is now using that data to power visual navigation AI for delivery robots. Players didn't just walk around with their phones. They scanned landmarks, storefronts, parks, and sidewalks from every angle, at every time of day, in lighting and weather conditions that staged photography would never capture. They documented the physical world at a scale no mapping company with a fleet of vehicles could have replicated on the same timeline or budget. Niantic collected this systematically, data point by data point, across eight years, while users thought the only thing at stake was catching a rare Charizard. The most valuable AI training datasets in the world aren't being assembled in data centers. They're being built by people who have no idea they're building them.
NewsForce@Newsforce

POKÉMON GO PLAYERS TRAINED 30 BILLION IMAGE AI MAP Niantic says photos and scans collected through Pokémon Go and its AR apps have produced a massive dataset of more than 30 billion real-world images. The company is now using that data to power visual navigation for delivery robots, letting them identify exact locations on city streets without relying on GPS. Source: NewsForce

2.2K replies · 24.4K reposts · 107.3K likes · 13.9M views