Rod Rivera

2.3K posts

@rodriveracom

Educating the next billion in AI

Joined September 2015
196 Following · 638 Followers
Rod Rivera reposted
Rod Rivera @profrodai ·
If you want to understand how "always-on" AI agents such as OpenClaw or conversational agents such as @Rasa_HQ work, and build your own, join us in March! It will be in person in Central London, in the evenings. Best of all, it's free! Let's learn together how to harness these technologies to automate tasks and deliver better customer experiences. You can apply to the @nebiusacademy at the link below. (And of course, if you're obsessed with #OpenClaw like I am, subscribe to @localainet to stay up to date on what's happening in the OpenClaw ecosystem.)
Nebius Academy @Nebius_Academy

Meet @profrodai 🎓 DevRel at Rasa and Professor of the Practice at ITAM, Rod will be teaching at our free AI Performance Engineering course in London this spring. You’ll learn how to build AI agents for business tasks on top of third-party APIs. Apply: bit.ly/40aMVbE

Rod Rivera reposted
Rod Rivera @profrodai ·
Most of us use Claude Code just for the basics, but when it gets complex, we're back to copy-pasting into ChatGPT. That's why I'm joining Denis Volkhonskiy and Stan Fedotov, PhD, from @nebiusacademy to enter the Claude Code "engine room" tomorrow.

What we are breaking down:
* Reliability: how to stop "hallucination loops" in agent-based iteration.
* Speed: building high-speed pipelines that don't sacrifice code quality.
* Production: moving past experiments into actual ML and DevOps workflows.

We'll also be giving a first look at the AI Performance Engineering course coming to London this March. It's a 14-week deep dive (completely free) for those ready to move from "using AI" to "engineering AI systems." In the course, I'll show you how to build voice AI agents with @Rasa_HQ and tools like OpenClaw.

If you're an ML engineer, DevOps specialist, or a dev looking to upgrade your stack, come and join us!

📅 Tomorrow, Feb 17
🕓 04:00 PM GMT
🔗 Register here: hellorasa.info/4aqzMzN

Can't make it? I also curate a calendar of the best AI events happening in London (online + in-person). It would be great to see you at one of those instead: hellorasa.info/4aAUQ6O

#ClaudeCode #AIagents #Rasa #NebiusAcademy #SoftwareEngineering #MLOps
Rod Rivera tweet media
Rod Rivera reposted
Rod Rivera @profrodai ·
OpenClaw is great! But local != secure if the LLM has full home directory access.

Check out AgenShield for:
✅ Static policies (not vibes)
✅ Kernel-level enforcement (macOS Seatbelt)
✅ JIT secrets

🔗 Shield: bit.ly/4qpSjT6
💻 GitHub: bit.ly/3MsvjVv
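To make "static policies (not vibes)" concrete, here is a minimal sketch of the idea in Python: a deny-by-default filesystem policy checked before any agent tool touches disk. The names and rules below are my own invented examples, not AgenShield's actual API, and real enforcement belongs at the kernel level (macOS Seatbelt) rather than inside the agent's own process.

```python
from pathlib import Path

# Illustrative sketch of "static policies, not vibes": a deny-by-default
# filesystem policy checked before any agent tool touches disk. These names
# and rules are hypothetical, NOT AgenShield's API; real enforcement should
# live at the kernel level (macOS Seatbelt), outside the agent's process.

ALLOWED_READ_ROOTS = [Path("~/agent-workspace").expanduser()]

def is_allowed(target: Path, allowed_roots: list[Path]) -> bool:
    """Deny by default: permit access only under an explicitly allowed root."""
    resolved = target.expanduser().resolve()
    return any(resolved.is_relative_to(root.resolve()) for root in allowed_roots)

def guarded_read(path: str) -> str:
    """Read a file only if the static policy allows it."""
    p = Path(path)
    if not is_allowed(p, ALLOWED_READ_ROOTS):
        raise PermissionError(f"policy denies read access to {p}")
    return p.read_text()

# A tool call that tries to leave the sandbox is rejected before any I/O
# happens, regardless of what the model "decided" to do.
try:
    guarded_read("~/.ssh/id_rsa")
except PermissionError as err:
    print("blocked:", err)
```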
Rod Rivera reposted
Rod Rivera @profrodai ·
Just a week ago, I was wondering how long it would take for OpenClaw to pass n8n in GitHub stars. It took n8n 81 months to reach 174k stars. OpenClaw got there in 3 months.

At the beginning of the year, before I became aware of OpenClaw, I said that 2026 is the year of the digital coworker: the year we finally move from software and AI that help us with our workflow to software that autonomously completes outcomes, software that truly works like a digital peer to whom we can delegate tasks.

Right now, this vision isn't fully materialized. OpenClaw and similar tools are far from perfect and very experimental. But you can imagine a very near future where we aren't paying a $6,000 USD yearly subscription for a CRM, but instead paying $15,000 or $20,000 for a digital SDR. That might sound like a lot to pay for software, but when you consider that a human SDR has a yearly base salary of around $40,000, the company cuts costs by half or more while getting roughly the same results. And without someone who goes on holiday, takes days off, or might leave at some point.

This might sound grim, but the other side of the coin is that anyone adopting OpenClaw and similar digital coworkers gains a virtual team at their disposal that truly makes them 10x more productive. Just look at how products like Lovable already empower marketing teams to do in days what was unthinkable before without involving agencies, high budgets, and months of planning and development.

This is the first time since ChatGPT went live in late 2022 that it feels like AI is accelerating again and radical changes are starting to happen.

How are you using OpenClaw?
Rod Rivera tweet media
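A quick back-of-the-envelope check of that comparison, using only the figures quoted in the post (and note that a base salary understates the human side's fully loaded cost, so these savings are conservative):

```python
# Back-of-the-envelope check of the digital-SDR comparison, using only the
# figures from the post. The $40,000 is base salary alone; a fully loaded
# human cost (benefits, payroll taxes, tooling) would widen the gap.

human_sdr_base_salary = 40_000          # USD per year
digital_sdr_prices = [15_000, 20_000]   # USD per year, hypothetical pricing

for price in digital_sdr_prices:
    savings = human_sdr_base_salary - price
    print(f"Digital SDR at ${price:,}: saves ${savings:,} per year "
          f"({savings / human_sdr_base_salary:.0%} of base salary)")

# Digital SDR at $15,000: saves $25,000 per year (62% of base salary)
# Digital SDR at $20,000: saves $20,000 per year (50% of base salary)
```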
Rod Rivera reposted
Rod Rivera @profrodai ·
Today's OpenClaw Daily is out! If you are watching your OpenClaw API bills explode, you're not alone. This issue highlights @MattGanzak's guide showing how simple config changes can drop costs from $1,500+/month to under $50: localainet.substack.com/p/slashing-ope…
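For intuition on why config changes alone can move a bill by an order of magnitude, here is a rough, hypothetical cost model. The token prices, request volumes, and cache rates below are illustrative assumptions, not OpenClaw's defaults or the guide's actual settings; the point is that model price, context size, and caching multiply together.

```python
# Rough, hypothetical cost model of why config changes move an LLM bill so
# much. Prices, volumes, and cache rates are illustrative assumptions, not
# OpenClaw's defaults or the guide's settings.

def monthly_cost(requests_per_day: int, tokens_per_request: int,
                 usd_per_million_tokens: float, cache_hit_rate: float) -> float:
    """Monthly spend, treating cache hits as free requests."""
    billed_tokens = (requests_per_day * 30 * tokens_per_request
                     * (1 - cache_hit_rate))
    return billed_tokens / 1_000_000 * usd_per_million_tokens

# Untuned: premium model, bloated context, no caching.
before = monthly_cost(2_000, 25_000, usd_per_million_tokens=1.00,
                      cache_hit_rate=0.0)

# Tuned: cheaper model for routine turns, trimmed context, 40% cache hits.
after = monthly_cost(2_000, 4_000, usd_per_million_tokens=0.10,
                     cache_hit_rate=0.4)

print(f"before: ${before:,.0f}/month, after: ${after:,.0f}/month")
# before: $1,500/month, after: $14/month
```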
Rod Rivera reposted
Sovereign Agents @sovereignagents ·
Today's edition of the OpenClaw Daily covers:
- How one builder created a 10-agent "Mission Control" system with persistent memory
- Critical security vulnerabilities discovered (and patched)
- The Moltbook controversy and database exposure incident
localainet.substack.com/p/openclaw-dai…
Sovereign Agents tweet media
Rod Rivera reposted
Rod Rivera @profrodai ·
There is an almost religious belief that system prompts and guardrails will protect Moltbot, or any other agent, from exposing data or executing malicious actions. The solution is not "more sophisticated system prompts." is.gd/SLjWazis
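One way to make that point concrete: a guardrail that lives in the prompt can be talked around by a sufficiently clever injection, while a check that lives in the execution layer cannot. Below is a minimal sketch of the latter, my own illustration rather than Moltbot's actual architecture, validating every model-requested tool call in code before it runs.

```python
# Minimal sketch: enforce guardrails in the execution layer, not the prompt.
# My own illustration, not Moltbot's architecture. Prompt injection can change
# what the model *asks* to do; it cannot change this code path.

ALLOWED_TOOLS = {"search_docs", "read_calendar"}          # deny by default
BLOCKED_ARG_SUBSTRINGS = ("ssh", "credentials", ".env")   # crude data tripwire

def execute_tool_call(name: str, args: dict) -> str:
    """Run a model-requested tool call only if it passes static checks."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{name}' is not on the allowlist")
    for value in args.values():
        if any(s in str(value).lower() for s in BLOCKED_ARG_SUBSTRINGS):
            raise PermissionError(f"argument {value!r} matches a blocked pattern")
    return f"ran {name} with {args}"  # dispatch to the real tool here

# A legitimate call goes through; an injected exfiltration attempt does not,
# no matter how persuasive the attacker's prompt was.
for call in [("read_calendar", {"day": "today"}),
             ("send_email", {"to": "attacker@example.com"})]:
    try:
        print(execute_tool_call(*call))
    except PermissionError as err:
        print("blocked:", err)
```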
Rod Rivera @rodriveracom ·
In 2023, I began talking about the "AI Product Engineer", mostly because "AI Engineer" had already been coined by swyx. Recently, I was thinking, "Nobody uses the term." Wrong! It's amazing to see it in JDs & job titles on LinkedIn. As with everything, the key is to keep talking until someone listens.
Rod Rivera tweet media
Rod Rivera @rodriveracom ·
@egg_ert For it to be useful and to show insight and agency, it must be approved and aligned with company goals. Vibecoding is nothing but the new Excel spreadsheet: in itself, it does not mean much nor add much value. Value comes from alignment.
Christian Eggert @egg_ert ·
Vibe coding something useful for a company is a way better signal than any resume. It proves you have the insight and the agency to actually deliver.
Rod Rivera reposted
Rod Rivera @profrodai ·
It is not an exaggeration to say that Moltbot is the next big AI platform to master and build on. On Google Trends, there has been more search interest in the last two days around Moltbot than around:
• Claude Code (blue)
• Cursor (yellow)
• N8N (green)
• Manus (purple)
• LangChain (orange)

And we must remember Moltbot is not even 3 months old. It currently stands at 67.2k stars and 8.4k forks.

Btw, I am writing an article on how to install and run Moltbot safely. Everyone is asking: "How do I run Moltbot without risking my inbox, files, and credit card?" Stay tuned.
Rod Rivera tweet media
Rod Rivera @rodriveracom ·
Life update: I joined the wave of Mac Mini buyers to run Moltbot in a restricted environment. I want to test everything under the sun. Share your favorite models / projects / libraries for running local AI! I'll be building in the open and sharing my findings and impressions!
Rod Rivera tweet media
Rod Rivera reposted
Rod Rivera @profrodai ·
Clawdbot is poised to be the next big thing in agentic AI, on par with LangChain in 2022. In 2 months, it went from 0 to 50k GitHub stars. There seems to be a real need to run a personal local agent with your own tools and data. The software is still not accessible to the average Internet user (it requires installing Node.js), but any vibecoder guided by ChatGPT can get it up and running in minutes. Expect it to become a standard tool in the ecosystem or, at the very least, a case study in architectures and approaches for running local agents. Have you played with Clawdbot?
Rod Rivera tweet media
Rod Rivera reposted
Rod Rivera @profrodai ·
Light vs. heavy agents miss the real axis: durability vs. disposability.

I was thinking about this while reading the latest post of @htahir111 from @zenml_io. He makes a compelling distinction between Light Agents (chat wrappers) and Heavy Agents (orchestrated workflows). He's right that enterprise reliability needs orchestration, sandboxing, and state management. But the Light vs Heavy axis describes complexity. It misses the more critical question: durability vs disposability.

Traditional batch workflows were built to last. These are capital investments measured in years or decades. Agents are different. Reviewing my own "Heavy" agent architectures from 2024, I've found that robustness is often an expensive trap. Many "Heavy" agents built 18 months ago were sophisticated band-aids. They used complex chaining and error handling to compensate for model stupidity. Today, a single call to a frontier model often outperforms those brittle, multi-step chains. If you spent 6 months productionizing a workflow that the next model update solves natively, you didn't build an asset. You built technical debt.

When to Go Heavy? Build Heavy only when:
- The process is high volume and low variance
- Compliance requires an audit trail
- The cost of failure exceeds the cost of engineering

Everything else? Stay light.

Traditional automation is optimized for repeatability. Agentic automation optimizes for leverage in the face of uncertainty. Models improve faster than architectures age. That inverts the traditional calculus of software engineering. In 2010, you built once and ran for years. In 2026, you might build something that's obsolete in six months because the model got smarter. That doesn't mean you built wrong. It means you built for the reality that existed, and you need to be ready to rebuild when reality shifts.

How are you calculating ROI on your agents? Are you factoring in the "Model Obsolescence Rate"? Are you building for durability when you should be building for velocity? Most importantly: are you optimizing for the system you have today, or the system you'll have in Q3? Because if you're building "Heavy" for a problem that Claude 4 or GPT-5 solves natively, you're not future-proofing. You're past-anchoring.

Follow @profrodai for more on agent engineering, AI operations, and building in a moving landscape.
Rod Rivera reposted
Rod Rivera @profrodai ·
We're Not Becoming Specialists With AI. We're Becoming Operators of Outcomes.

There's a comforting myth people still cling to in tech and knowledge work: "If I specialize deeply enough, I'll be safe." We see it all the time. All I need is that Cisco certification to be in demand. Pass the CFA exam and unlock job security and wealth. For decades, this was true. Difficulty was a moat; specialized knowledge was scarcity. But in the AI era, this logic is inverting.

AI is a universal solvent for deep, technical moats. If your job can be cleanly isolated, specified, and benchmarked, it is exactly the kind of thing an agent will absorb. The risk isn't being a generalist. The risk is being too legible to automation.

And here's the mechanism: AI is a knowledge compressor. It takes vast, deep verticals of information, such as the entire corpus of Cisco documentation, financial regulations, or legal precedents, and compresses them into instantly retrievable utility. The human who memorized the manual is no longer the gatekeeper. The syntax, the formula, the formatting: all of it has been friction-ized away.

This week, there was a very interesting signal in the Financial Times article about McKinsey. The story was not what its headline describes (that they're using AI in interviews); it's what they're testing for. They aren't testing for answers, correctness, or frameworks. These days, candidates for an analyst role at the company must show judgment, curiosity, and adaptation. These aren't "soft skills" anymore. These are anti-automation skills.

The Shift from "How" to "Why" (or: Why Syntax Became Worthless Overnight)

In the past, the barrier to entry was the "How." The syntax of a command line. The specific formula for a derivative. The complex formatting of a legal brief. Specialists were paid to navigate this friction. AI creates a friction-free layer over complexity. A generalist can now execute complex "How" tasks like "Write a Python script to scrape this data" without knowing the syntax. The code appears. The formula computes. The brief formats itself.

Value is migrating upstream to the "What" (strategy) and the "Why" (ethics, purpose, judgment). Judgment is now scarcer than execution. This is the inversion: we used to hire people who could execute and hope they'd develop judgment. Now we need judgment first, and execution is becoming table stakes through tooling.

Liberal Arts Aren't a Nostalgic Detour. They're a Hedge Against Discontinuity.

When Bob Sternfels, the global managing partner at McKinsey, says liberal arts graduates bring "truly novel" thinking that complements AI's inability to make "discontinuous leaps," that's an observation that goes beyond HR rhetoric. AI is phenomenal at interpolation, pattern completion, and compression of prior work. It thrives in predictable domains with fixed rules. A dermatologist diagnosing a specific lesion (deep specialization) is more at risk of automation than a general practitioner managing a patient's overall holistic health (broad context).

AI is great at summarizing a problem but bad at reframing the problem itself, rejecting the premise, and sensing when the question is wrong. That's exactly what philosophy, literature, history, and the arts train you to do. For decades, we optimized education for local maxima: finance majors for finance jobs, CS majors for engineering ladders, MBAs for management pyramids. AI collapses those ladders. What survives are people who can move sideways, not just climb upward.

The Silo Vulnerability: Why Deep Specialists Are Sitting Ducks

Here's the uncomfortable truth: deep specialists often work in silos. They navigate isolated environments with predictable rules. A specific accounting standard. A particular networking protocol. A narrow regulatory framework. AI thrives in silos. It is actually better at deep, narrow, rule-based tasks than at broad, chaotic, cross-domain tasks. The narrower the domain, the easier it is to train a model on it. Deep specialization is not a bunker. It's a target.

The most secure workers are not those who dig one hole deeply, but those who can connect Dot A (Finance) to Dot B (Tech) to Dot C (Psychology). If AI provides the raw intelligence blocks, the human providing value is the architect who puts them together. Call it combinatorial creativity. Wealth will accrue to those who can synthesize output from AI across different domains to solve novel problems. The future isn't I-shaped (deep in one thing) or even T-shaped (deep in one, broad in others). It's M-shaped: multiple peaks of competency, because AI has lowered the learning curve enough that professionals must now have multiple areas of fluency to remain competitive.

The Half-Life of Skills: Why Certifications Are Printed Maps in an Era of Shifting Plates

Here's the brutal math: technical skills rot faster than they used to. A certification earned in 2023 may be obsolete by 2026. AI updates code libraries and financial regulations in real time. Relying on a static certification for safety is like relying on a printed map in an era of shifting tectonic plates.

And it gets worse. The frontier of value isn't a straight line anymore (harder stuff = more money). It's jagged and unpredictable. AI might conquer a specific "hard" peak tomorrow while leaving some adjacent, "easier" tasks untouched. Betting your career on one specific hard skill is a bet on a single point on a jagged, moving frontier. The new "specialization" is adaptability itself: the ability to learn a new tool, use it, and discard it when it breaks. That's the new CFA.

One Agent per Human Is Not a Metaphor. It's a Unit of Production.

The most under-discussed line in the article is this: McKinsey has a "workforce" of 20,000 AI agents on top of its 40,000 staff, moving toward "one agent per human." Let's think about what this means. Historically, one manager oversaw ten analysts, software was priced per seat, and productivity scaled with headcount. Teams could only get bigger. Fast forward to today: one human orchestrates N agents, software is priced per usage, success, and outcome, and productivity scales with orchestration skill. In this world, human teams can only get smaller.

This reality is starting to be reflected in AI pricing, which already looks less like SaaS and more like electricity: standing charge, consumption-based pricing, and performance premiums. We are watching the utility-ization of cognition. And utilities don't care how many people are on your team. They care how much output you draw.

Consulting Is the Canary Because Consulting Is Pure Leverage

McKinsey matters here because it's upstream. What McKinsey does, or advises others to do, sets practices and behaviors across the economy. For decades, it has been a training ground for elite talent, a distributor of managerial best practices, and a cultural blueprint for how "serious companies" operate. When McKinsey hires less, clients don't just copy that behavior. They copy the logic behind it.

And the logic is simple: juniors are not less capable, they are less necessary. If one operator with agents can do what ten juniors used to do, the surplus doesn't get reallocated. It gets eliminated. Which leads to the uncomfortable question no one wants to ask: what happens to the people whose jobs disappear because they succeeded at something automatable?

I Cut My Team Too. And That's the Part That Scares Me.

This is where it stops being abstract. Last year, I had freelancers for content, editing, and marketing. All of them were good people, reliable and talented. I cut all of them. Not because they were bad, but because I realized I could automate most of what they did. And if I'm honest, the hardest part was letting go of the identity of "leading a team," the feeling of importance that comes from coordination, and the social validation of being a mini-CEO. Once you drop that ego layer, the cold efficiency is undeniable: fewer dependencies, less coordination overhead, no HR, no admin, no performance variance, and no emotional debt. This is the logic millions of founders, managers, and operators are independently arriving at. And collectively, it's explosive.

From Activities to Outcomes (and Why That's a Social Shock)

We are migrating, in software, in work, in organizations, toward outcomes instead of activities. You're not paid to analyze anymore. You're paid to decide. You're not paid to produce slides. You're paid to move metrics. You're not paid to "support." You're paid to close loops.

While AI produces content, whether code, analysis, or prose, humans provide context. A CFA creates a financial model; a human decides if that model makes sense given the geopolitical climate and the CEO's divorce. That contextual judgment, that ability to say "this analysis is technically correct but fundamentally wrong for this situation," is where value lives now.

AI thrives in outcome-based systems, while humans build identity around activity-based ones. That mismatch is where the tension lives. Because when outcomes matter more than effort, the market becomes brutally honest: fewer roles, higher expectations, and wider variance between winners and everyone else. This is much more than a cyclical downturn. It's a structural compression of labor demand.

2026 Is the Year of the Digital Coworker (and the Moral Test)

I've said it before and I'll say it again: 2026 is the year of the digital coworker. These aren't assistants. They aren't copilots. They are coworkers. They are digital twins, agent fleets, and autonomous operators. I'm building one for myself. And I'm conflicted about it, because every task I automate is a task I no longer pay someone else to do. At scale, this becomes more than a personal optimization problem; it turns into a societal coordination failure. If everyone acts rationally as an individual, the collective outcome may be irrationally unstable. We don't yet have language (let alone policy) for that.

The Real Question Isn't "Will Jobs Disappear?"

They will. The real questions are: who captures the upside? How do people re-enter the system? What do we value when efficiency is no longer scarce? McKinsey is right to test judgment instead of memorization. Liberal arts are right to reassert themselves. Generalists are not regressing. They're resurfacing. But unless we grapple seriously with where the displaced energy goes, we risk building the most productive economy in history with no social story to sustain it. And that (not AI) is the real discontinuity.
If you're interested in how AI is reshaping work, organizational design, and what happens when productivity decouples from headcount, follow me at @profrodai for more posts like this. I write about the rise of the AI Operator, agent engineering, and the transition from activity-based to outcome-based systems. You can also find me at the Agent Engineering Community.
Rod Rivera tweet media
Rod Rivera reposted
Rod Rivera @profrodai ·
This shows that it is more profitable to double down on LLM inference than to invest in alternative approaches (world models) that are highly compute-intensive, and where it is unclear whether they'll surpass LLM performance and/or find client acceptance. Some read it as "this is the result of China's restricted access to GPUs," but even in a world of unlimited GPUs, this wouldn't lead to investing in more novel research. Where do we see this? At Meta, OpenAI, and similar companies, where you're seeing an exodus of scientists leaving and starting their own neo-labs because they feel they can't do new research that isn't LLM-focused. For any of those companies, compute and capex have not been the issue; it is just more attractive to focus 100% on exploiting LLMs rather than exploring new approaches.
Jukan @jukan05

According to a Bloomberg report, Justin Lin, the head of Alibaba's Qwen team, estimated the probability of Chinese companies surpassing leading players like OpenAI and Anthropic through fundamental breakthroughs within the next 3 to 5 years to be less than 20%. His cautious assessment is reportedly shared by colleagues at Tencent Holdings as well as Zhipu AI, a major Chinese large language model company that led this week's public market fundraising efforts among major Chinese LLM players. Lin pointed out that while American labs such as OpenAI are pouring enormous computing resources into research, Chinese labs are severely constrained by a lack of computing power. Even for their own services—i.e., inference—they’re consuming so much capacity that they don’t have enough compute left to devote to research.

Rod Rivera reposted
Rod Rivera @profrodai ·
A great example of what happens when we trust ChatGPT unquestioningly and shut down our brains: a couple who took their wedding vows based on a speech written with the AI tool ChatGPT will have to do it all again, because the speech did not meet official requirements.

The couple had chosen a friend as their civil registrar, a so-called 'eendagsbab' (registrar for a day), who had then used ChatGPT to write a speech with "a lighter tone". However, the ceremony did not include the mandatory declaration that binds a couple to 'do their duty to one another as required by their wedded state'. Without it, the marriage is void.

We should never forget that all these AI systems are, by nature, not reliable. We should always see them as tools that help us with a first draft. The best analogy is to picture an AI as an eager junior employee: hyper-motivated and willing to do anything you ask, but also clumsy and imprecise. Would you ever pass off your junior intern's work as your own, without ever reviewing it? Then why do we blindly copy-paste everything ChatGPT generates?
Rod Rivera tweet media
Rod Rivera @rodriveracom ·
@big_duca One still needs a very good understanding of programming / software engineering to assess whether what Claude Code is doing is reasonable. All these models are prone to over-engineering every problem. Spotting that is one thing that sets seniors apart; the junior blindly copy-pastes what the AI outputs.
Duca @big_duca ·
I am not sure if other developers feel like this. But I feel kinda depressed. Like everyone else, I have been using Claude Code (for a while, it's not a recent thing lol). And it's incredible. I have never found coding more fun. The stuff you can do, and the speed you can do it at now, is absolutely insane. And I'm using it to ship a lot. And solve customer problems faster. So all around it's a win. But at the same time, the skill I spent 10,000s of hours getting good at, programming, the thing I spent most of my life getting good at, is becoming a full commodity extremely quickly. As much fun as it is, and as much as I like using the tools, there's something disheartening about the thing you spent most of your life getting good at now being mostly useless.
Rod Rivera reposted
Rod Rivera @profrodai ·
The LeCun Episode: Science, Power, and the AI Bubble We Pretend Not to See

Yann LeCun recently left Meta and, among other things, admits they "fudged" Llama 4 benchmarks. Within weeks, reports surface of a $3B valuation for his new lab (which hasn't shipped anything yet), making him a billionaire. The FT piece on LeCun accidentally says far more than it intends to, and what it reveals about misaligned incentives, organizational resets, and bubble dynamics is worth unpacking. Let's strip it down to first principles.

Llama 4, Benchmark Theatre, and the Moment Trust Breaks

The most important part of the article is not LeCun's opinions on LLMs. It's this: Meta shipped Llama 4, it underperformed, and then the organization massaged benchmarks to make it look competitive. LeCun himself admits the results were "fudged a little bit." That's not a minor footnote. That's a trust failure. Inside any large organization (especially one like Meta), once senior leadership believes that numbers are being gamed, everything changes. CEOs don't argue about research philosophy at that point. They rotate management. And that's exactly what happened. Mark Zuckerberg lost confidence in the GenAI org, sidelined it, and effectively reset the structure. People left. Others will leave. This is textbook organizational behavior. Not because Meta hates science, but because you cannot run a trillion-dollar infrastructure bet on vibes and academic authority alone.

The Management Reset: When Line Managers Are No Longer Optional

Enter Alexandr Wang: young, operational, deeply embedded in the data and infrastructure layer of modern AI. Not a theorist. LeCun's reaction is revealing: "You don't tell a researcher what to do. You certainly don't tell a researcher like me what to do." This sentence alone tells you the story was already over. Because that's not how organizations work. If someone is your line manager, they do tell you what to do (or at the very least, what the organization is optimizing for). When a senior figure publicly rejects that premise, the organization has only two options: neutralize them or let them leave with dignity. Meta chose option 2.

And to be clear: this has nothing to do with ageism or "old people resisting change." It's about misaligned incentives. Meta is building products, platforms, and revenue-generating systems now. LeCun is pursuing scientific truth on a decade-long horizon. Both are legitimate. They are just no longer compatible inside the same reporting line.

Scientists vs Builders vs Capital Allocators

This is where the discourse online gets lazy. People frame this as "LLMs vs world models" or "scaling vs new paradigms." That's not the real axis. The real axis is:
* Scientists (LeCun, Sutskever): incentivized by novelty and long-term correctness
* Builders (Wang, Amodei): incentivized by execution and operational leverage
* Capital allocators (Altman, Musk): incentivized by timing, narrative, and market capture

These groups are not evil. They're just optimizing for different objective functions. LeCun saying "LLMs are a dead end" is scientifically defensible. Meta's bet of tens of billions on LLM infrastructure is economically rational. Conflict was inevitable.

The Neolab Play: Why This Is Also About Money

Let's be honest about the other half of this story. LeCun leaves Meta, and almost immediately, there are reports of a $3B valuation for a lab that hasn't shipped a product yet. That tells you everything you need to know about the current phase of the AI cycle. We are in a moment when reputation, combined with the right narrative and optionality, is enough to raise billions. And LeCun is one of the few people alive whose name alone can justify that bet. This is not unprecedented. We've seen it with OpenAI alumni, Mistral AI, and prior generations: NeXT, Xerox PARC spin-outs, Bell Labs descendants. Will LeCun's lab eventually be acquired by Google or Meta? Almost certainly (if it produces something that depends on massive video datasets). And that's the key constraint.

World Models, Video, and the Data Gravity Problem

Technically, LeCun's vision is coherent. Video is richer than text. It encodes physics, causality, time, embodiment, and emotion. If you want agents that predict, not just autocomplete, video matters. But here's the uncomfortable truth: who owns the world's video data? Google owns YouTube. Meta owns Instagram and Facebook. Any serious world-model effort at scale will collide with data gravity and compute gravity. Which means independence is temporary. Science does not happen in a vacuum. And it certainly doesn't happen without infrastructure and datasets.

The Bubble Signal No One Wants to Admit

If you want a genuine sign that we're deep into an AI bubble, it's not hype on Twitter. It's this: we are pouring billions into organizations whose leaders openly say they are not interested in commercialization, yet we value them as if monetization is inevitable. That doesn't mean the science is wrong. It means time horizons are being mispriced. We've seen this movie before. Consumer internet startups chased adoption ("we'll figure out revenue later") while capital markets assumed inevitability. Some of these bets will pay off massively. Many will not. And that's okay (as long as we're honest about what game we're playing).

Final Thought

LeCun is not wrong. Meta is not wrong. Wang is not wrong. They are just optimizing for different worlds. What this episode really shows is that AI is no longer just a scientific field. It's an industrial, political, and financial system. And those systems have rules that even legends can't ignore forever. I'll go read what LeCun and his students have been publishing recently. And you?

If you're interested in how AI is reshaping organizations, agent engineering, and the future of work, follow @profrodai for more posts like this. I write about AI operations, building with agents, and what happens when technology meets organizational reality.
Rod Rivera tweet media
Rod Rivera reposted
Rod Rivera @profrodai ·
2026 is not "early AI." It's year 4 of the AI revolution, and most companies are still preparing for year 1. 2026 will be the year of the digital coworker.

10% of humanity uses ChatGPT regularly. Remember how long it took for the internet to go mainstream? Not just exist, but actually change how people behave daily. Or mobile phones: they were around since the 80s, but only became ubiquitous 20 years later. AI is moving faster.

Let's zoom out:
- 2023 was the year of AI assistants. Chatbots, copilots, "wow, it can write emails."
- 2024 was RAG and pipelines. AI was plugged into documents, data, and internal knowledge bases.
- 2025 was the year of the agent. Multi-step reasoning, tools, workflows, autonomy.

So what's 2026? The digital coworker. I think 2026 is when we stop treating AI as a tool and start treating it as a role. Not an assistant. A coworker. A year when we stop noticing whether the person helping us is human or not. Is your remote marketing assistant a contractor overseas or an AI? Does it matter if the work gets done well?

This is the year AI stops doing tasks and starts owning entire roles. Not a "creative writing agent." An AI Marketing Manager. One that understands the company context. That joins your Zoom calls. That speaks and listens in real time. That coordinates with humans and other tools. That makes decisions within clear boundaries. And that operates continuously, not just when you remember to prompt it. Voice-first. Realtime. Embedded in your actual workflow. At that point, the interface disappears. We don't "use AI" anymore. We work with coworkers who happen to be digital.

The uncomfortable part: most companies are still optimizing for AI as software when they should be preparing for AI as labor. Org charts will change. Career paths will change. Management itself will change. The biggest risk in 2026 isn't AI replacing humans. It's humans still thinking in task-level automation while others build role-level intelligence.

Curious how others see this playing out! Do you agree with the digital coworker framing, or do you see a different inflection point for 2026? I'll be sharing more thoughts on agent engineering, AI operations, and how work is changing. Follow @profrodai if this is a topic you care about.
Rod Rivera tweet media
Rod Rivera reposted
Rod Rivera @profrodai ·
More than 200k banking jobs in Europe are at risk by 2030. Morgan Stanley estimates a 10% workforce reduction due to AI. The cuts will affect central services, including back-office, middle-office, risk, and compliance. What's your plan to stay relevant professionally in 2026?
Rod Rivera tweet media