lukex (@Lukex)
6.2K posts
co-founder, chief janitor @satlayer | venture partner @PressStartCap | core contributor @0xMITHarvard
Joined January 2009
2.7K Following · 3.4K Followers
lukex (@Lukex)
1000% agree. you don't win your vertical by deploying AI to 10x side quests while your competitors deploy AI to 10x main quests
Gergely Orosz (@GergelyOrosz)

Sage observation from @karrisaarinen (CEO of Linear). It now makes SO MUCH sense why I see a bunch of eng teams rebuild a SaaS vendor in-house with AI, brag about it, and feel good. They are doing side quests... and they don't even know it. And they are not helping their company win!!

1 reply · 0 reposts · 1 like · 122 views
cookies (🍪,🍪) | 饼妹 (@jinglingcookies)
collated:
> a list of 60 companies building in the agentic commerce space, across these sectors: cards, standards, tooling, identity, credit, checkout execution, discovery, marketplace, policy control
> resources to understand and keep up to date with agentic commerce, including: agentic commerce infra, cards vs stablecoins, traction, analytics

comment down below if you would like access
47 replies · 5 reposts · 76 likes · 15.1K views
lukex (@Lukex)
@levie Been thinking and tinkering on this as well! AI will make data more valuable and kill free APIs. I think it splits into 3 groups: internal data (becomes a differentiator/moat), frequently accessed data, and infrequently accessed data. Discoverability will be a big pain point!
0 replies · 0 reposts · 2 likes · 51 views
Aaron Levie (@levie)
There are some pretty wild downstream effects in a world with trillions of agents using the internet and software. One very big one is what happens with agents with budgets and wallets.

There are lots of business models that never ended up working out for the human-based internet that all of a sudden start to make economic sense in an agent-based internet. Think of all the proprietary data and research that's sitting out there right now behind a paywall that a human will never run into. Finance data, medical research, and so on. Most people won't sign up for a $100 or $1000 subscription for information they need infrequently. The cost is too high. Equally, micropayments for this data rarely worked at scale because the volume was too low to matter.

However, now an agent can have a budget for a specific set of research it's doing, and the agent might pay $0.10 or $1 to access it in a workflow. And now that data may be relevant in 1,000x more use cases than it was before. Similarly, there are many APIs and tools out there on the web that don't make sense to have a subscription for, but that an agent may interact with for a specific exchange, and it could cost $0.01 or $0.10 per transaction. All of a sudden new kinds of software can get built and monetized that would have been uneconomical before. Some new form of commercial open source, essentially.

Obviously lots of infrastructure and agreement across the industry is needed for this (and getting discovered by the agent is going to be a whole new class of search and discovery problem), but there are so many potentially interesting new scenarios here.
François Chollet (@fchollet)

AI agents will soon graduate to fully-fledged economic actors that buy services, compute, and even data in the course of accomplishing high-level goals. 1-2 years before we start seeing this at scale.

70 replies · 71 reposts · 631 likes · 160.2K views
eric (@defyneric)
a lot of these agent-to-agent ideas are just way too early conceptually. they're cool, but if you're building products specifically for agents today there's basically no TAM.

the smarter approach right now is building agents for humans first. we can't expect agents to start using other agents until humans are actually using agents in the first place. at the end of the day the customer is still a human.

it reminds me of Jeff before Hyperliquid. back in 2017 he tried to build a prediction market and it failed. the idea wasn't wrong, it was just too early. then Polymarket launched years later and it exploded.

a lot of these agent-to-agent ideas are similar. the concept is good, but are founders really able to build, raise money, and run payroll for 5-6 years while the market still doesn't exist?
4 replies · 1 repost · 28 likes · 1.3K views
lukex (@Lukex)
@ccatalini AI commoditizing execution makes data more valuable. It's why Google is, ironically, taking an Apple-style walled-garden approach with Gemini/AI
0 replies · 0 reposts · 4 likes · 31 views
lukex reposted
Xiaoyin Qu (@quxiaoyin)
The scariest thing about AI in 2026 isn't some sci-fi scenario. It's watching people you know — people with the same credentials, the same caliber — split into two completely different groups in a matter of months.

I've seen it happen firsthand. Stanford grads, ex-Meta engineers, startup founders. Three months ago, they were all roughly at the same level. Now? The divergence is so obvious it's uncomfortable.

Some of them got really good at AI. Not just "using ChatGPT" good — fundamentally different in how they think, work, and produce. Their output is compounding. Their depth of insight is compounding. They look like they're playing a different game entirely. Others are still running on the resume they built five years ago.

And here's the number that haunts me: 99% of people still use AI at the level of "What's the weather today?" or "What kind of flower is this?" The 1% who figured it out aren't even one group. There's massive variance within them — some are orchestrating AI agents to run entire companies, some use it for research that would take a whole team, some have AI write half their code, some have AI write all of it.

The income implications are brutal. If someone uses AI to produce the output of 10,000 people, they're worth 10,000x the salary. Someone who can't figure out a single tool? They might not be worth hiring at all.

What really unsettles me is how fast our patience is eroding. The moment we feel someone performs below what AI can do, we don't think "they need training." We think "they're worth zero." Not less. Zero.

So the real AI danger isn't AI going rogue. It's the epic, unprecedented amplification of the gap between people — in capability, in income, in relevance.

One silver lining: the old hierarchy is broken. People who were once untouchable can now be overtaken by someone who masters AI faster. That door is genuinely open. But if you don't walk through it, you won't just fall behind by a little. You'll become invisible.
#AISkillGap #FutureOfWork #ArtificialIntelligence #Productivity
47 replies · 65 reposts · 379 likes · 85.1K views
Silver (@silver_pump)
@dgt10011 ngl i respect his contrarian stance against the hype, even if it's like shouting into the void lol
1 reply · 0 reposts · 2 likes · 49 views
Jeff Park (@dgt10011)
LeCun has been the most aggressive critic of the transformer/LLM consensus for years, and this is his magnum opus: "much of real-world sensor data is unpredictable, and generative approaches do not work well." This basically epitomizes his view that real intelligence cannot come from scaling text prediction alone.

Whether he is right or not, I'm excited to see another school of thought come to market that in some ways restores agency back to the physical world. Worth "keeping an eye on it" :)
AMI Labs (@amilabs)

Advanced Machine Intelligence (AMI) is building a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe.

We've raised a $1.03B (~€890M) round from global investors who believe in our vision of universally intelligent systems centered on world models. This round is co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions, along with other investors and angels across the world.

We are a growing team of researchers and builders, operating in Paris, New York, Montreal and Singapore from day one. Read more: amilabs.xyz

AMI - Real world. Real intelligence.

14 replies · 8 reposts · 98 likes · 24.4K views
lukex (@Lukex)
@jenzhuscott @openclaw Pretty cool to see governments moving fast; necessary given the pace of AI
0 replies · 0 reposts · 1 like · 366 views
Jen Zhu (@jenzhuscott)
Has @openclaw been launched for a month yet?
Moonshot: KimiClaw
MiniMax: MaxClaw
Alibaba: CoPaw
ByteDance: ArkClaw
Tencent: WorkBuddy
Zhipu: AutoClaw
Shenzhen: OpenClaw hardware stores
Local governments: we pay you to deploy 🦞 👀
45 replies · 65 reposts · 565 likes · 140.3K views
lukex (@Lukex)
@poezhao0605 Government moving at the pace of startups is a sight to behold
0 replies · 0 reposts · 2 likes · 338 views
Poe Zhao (@poezhao0605)
OpenClaw mania in China has crossed from tech hype to government policy.

Shenzhen moved first. Now Wuxi's high-tech zone dropped a 12-point draft policy specifically supporting OpenClaw-based development: compute subsidies up to $42K/year, full cloud platform subsidies up to $140K, and up to $700K for breakthroughs in embodied AI robots and smart industrial inspection.

This is how China scales technology. Government sets the table with money and policy. Companies bring the products.
15 replies · 64 reposts · 333 likes · 71.3K views
lukex reposted
Aakash Gupta (@aakashgupta)
Anthropic is running a masterclass in negotiation-as-marketing right now.

The $200M Pentagon contract represents 1.4% of Anthropic's $14 billion run rate, up 14x from $1 billion fourteen months ago. This is not a number worth compromising a brand over. Amodei knows this. The Pentagon knows this. So why is he personally publishing a detailed statement, point by point, timed for maximum news cycle impact?

Because every headline that reads "AI company refuses Pentagon's demands on autonomous weapons and mass surveillance" is worth more than the contract. Anthropic just bought the most expensive brand positioning in AI history, and the Pentagon is paying for it.

The statement is surgically written. Amodei opens by affirming he believes in using AI to defend democracies. Lists every classified deployment Anthropic pioneered. Emphasizes they've never objected to specific military operations. Then draws two narrow lines: no mass surveillance of Americans, no fully autonomous weapons. The framing makes it almost impossible to argue against without sounding like you're pro-surveillance.

The Pentagon's negotiator called Amodei a "liar" with a "God complex." The Pentagon threatened to invoke the Defense Production Act and label Anthropic a supply chain risk simultaneously. Amodei pointed out those two threats are contradictory: one says Anthropic is dangerous, the other says Claude is essential. That line will be in every news story for the next 48 hours. It was designed to be.

Sen. Tillis, a Republican not seeking reelection, broke with the administration on the record. Said the Pentagon was being "unprofessional" and that you should listen when a company turns down money out of concern for consequences. Anthropic didn't have to lobby for that. The positioning did the work.

Every enterprise buyer evaluating AI vendors just watched Anthropic publicly refuse to let a customer override their safety commitments. For a company selling to regulated industries, that demo is priceless.

The 5:01pm Friday deadline is tomorrow. Anthropic will either keep the contract with safeguards intact or lose it and gain something more valuable: permanent differentiation in a market where every other lab said yes.
Anthropic (@AnthropicAI)

A statement from Anthropic CEO, Dario Amodei, on our discussions with the Department of War. anthropic.com/news/statement…

121 replies · 264 reposts · 2.1K likes · 365.3K views
lukex (@Lukex)
Entertaining and thought-provoking
goldenlabubuwatch (@pandawatch88)

This Chinese parody of the Citrini article is quite funny to read, plus it will give you a feel for US vs China enterprise processes. The author argues that China's enterprise and economic design makes it very resilient to any AI disruption (a lot of satire in the piece, with a lot of truth in it).

Since COVID zero and the subsequent downturn had already destroyed so many white-collar jobs, and those people were now driving for Meituan, when agents arrived there was nothing left to replace 😂 Also, when AI arrived to disintermediate transactions and erode the "commission" economy, it found those commissions had already been eroded by fighting between the internet giants, so there was no economic value left for the agent to capture. And since SaaS never took off in China, that didn't get disrupted either. The Salesforce of China is Kweichow Moutai, good luck with that Claude.

"Leaders' relative sluggishness in adapting to digital workflows, ironically, prevented AI from establishing a seamless connection between decision-making and execution. The oral briefings, the eye contact in the room, the interpretation of casual gestures, and the seating arrangements at meetings—these became the true barriers."

"[in 2028] Many meetings still happened in old-style, almost antique conference rooms, with staff coming in every ten minutes to top up tea. Nothing was recorded, and nothing made it to the AI-readable digital domain. What eventually filtered out was often just a highly distilled A4 sheet, with just a few lines of text. Much offline information was private, face-to-face, and vivid. It couldn't be analysed digitally, and it lived in human memory and judgement. To reverse-engineer, from a few A4 pages, the vast decision-making information dissemination and the dense web of relationships behind them was, for AI, close to a hopeless task."

eastisread.com/p/the-2028-chi…

0 replies · 0 reposts · 2 likes · 840 views
lukex (@Lukex)
first programming, then accounting and legal. but which non-obvious role falls first?
Andrej Karpathy (@karpathy)

It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks but imo coding agents basically didn't work before December and basically work since - the models have significantly higher quality, long-term coherence and tenacity and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow.

Just to give an example, over the weekend I was building a local video analysis dashboard for the cameras of my home so I wrote: "Here is the local IP and username/password of my DGX Spark. Log in, set up ssh keys, set up vLLM, download and bench Qwen3-VL, set up a server endpoint to inference videos, a basic web ui dashboard, test everything, set it up with systemd, record memory notes for yourself and write up a markdown report for me". The agent went off for ~30 minutes, ran into multiple issues, researched solutions online, resolved them one by one, wrote the code, tested it, debugged it, set up the services, and came back with the report and it was just done. I didn't touch anything. All of this could easily have been a weekend project just 3 months ago but today it's something you kick off and forget about for 30 minutes.

As a result, programming is becoming unrecognizable. You're not typing computer code into an editor like the way things were since computers were invented, that era is over. You're spinning up AI agents, giving them tasks *in English* and managing and reviewing their work in parallel. The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrator Claws with all of the right tools, memory and instructions that productively manage multiple parallel Code instances for you. The leverage achievable via top tier "agentic engineering" feels very high right now.

It's not perfect, it needs high-level direction, judgement, taste, oversight, iteration and hints and ideas. It works a lot better in some scenarios than others (e.g. especially for tasks that are well-specified and where you can verify/test functionality). The key is to build intuition to decompose the task just right to hand off the parts that work and help out around the edges. But imo, this is nowhere near "business as usual" time in software.

0 replies · 0 reposts · 1 like · 764 views
lukex (@Lukex)
@andrewchen Neither you nor the AI will ever have perfect information, especially on the team. Still need humans to make the final call
0 replies · 0 reposts · 1 like · 21 views
andrew chen (@andrewchen)
i asked my openclaw what I should be focused on rn in the zoomed out view, and here's what it said: Figuring out what VCs even do in an AI world — if AI agents can source deals, write memos, do diligence... what's the irreducible human core of investing? You're living that question with me sitting here. lol
97 replies · 6 reposts · 213 likes · 30K views
lukex (@Lukex)
@levie For how long though?
0 replies · 0 reposts · 0 likes · 112 views
Aaron Levie (@levie)
This is counterintuitive for some, which is why there's a paradox named after it. But if you lower the cost of something that was previously supply constrained, demand for that thing goes up. Software engineering is just one of the easiest examples to contemplate.

The process goes like this: every small business, every IT team, every large enterprise sees that engineering can now drive vastly more output. They then start to consider all the new things they can build or automate. They even test building prototypes themselves. They only get so far with that approach because they realize there are still 50 other tasks that go into building software and maintaining it. So they start to hire more engineers to do that work. All of this for work they never would have considered automating or having software for if AI didn't exist.

So yes, automating tasks, in plenty of fields, will lead to more demand for experts, not less.
Puru Saxena (@saxena_puru)

The software industry is apparently dying but job postings for software engineers are rapidly rising!

195 replies · 405 reposts · 3.4K likes · 589.4K views
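The paradox Levie invokes is Jevons: when demand for a good is elastic, cutting its unit cost raises total spend on it. A toy numeric sketch, where the constant-elasticity demand curve and the elasticity value of 1.5 are illustrative assumptions rather than measured data:

```python
# Toy Jevons-paradox illustration: halving the price of a unit of
# "engineering output" increases total spending on engineering.
# Demand curve and elasticity are illustrative assumptions.

def demand(price: float, elasticity: float = 1.5) -> float:
    """Constant-elasticity demand: quantity scales as price^(-elasticity)."""
    return 100.0 * price ** (-elasticity)

def total_spend(price: float) -> float:
    # total spend = price per unit * quantity demanded at that price
    return price * demand(price)

spend_before = total_spend(1.0)  # baseline: 1.0 * 100.0 = 100.0
spend_after = total_spend(0.5)   # price halved, spend rises to ~141.4
```

With elasticity above 1, the quantity response more than offsets the price cut, which is the mechanism behind the claim that cheaper engineering output leads to more hiring of engineers, not less.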