leomid83
Anthropic just publicly committed to a deadline for transformative AI (AGI): 2028. This matters in the race against China in AI, for good reason:

> If AGI is built in under 24 months, it's essential that adversaries don't get access to it (first). We're seeing a mini version of this play out with the government restriction on Mythos (no public release).

> China itself admits the USA is #1 for now: "China is still sharpening our swords while the other side has suddenly mounted a fully automatic Gatling gun." (a Chinese cybersecurity analyst on Mythos). You might call bullshit on this, but yesterday's security test report from Logan suggests they're right.

> Anthropic dismisses DeepSeek's claim that they've found a workaround to building frontier models with less compute: everything about AI advancement (model intelligence, research, algorithms) sits DOWNSTREAM of compute now that the models are building themselves. The winners are whoever has more compute (GPUs, in this case), and that's the USA. Huawei's entire compute capacity this year is only 4% of Nvidia's.

Good food for thought imo












Forward-deployed engineers, or the equivalent, are about to become one of the most in-demand jobs in tech, and one of the most important functions for AI rollouts.

Deploying agents is a far more technical task than most people realize, often far more involved than deploying software. Software generally works the same way every time, and for the past few decades it has mostly been an updated version of an existing technology or concept, which makes it easier for the enterprise to move its workflows onto a newer system. With agents, you're actually deploying the equivalent of work output within the enterprise. The customer is effectively using you as a professional services provider for a task, which they now expect to get solved nearly end-to-end. This means that as a vendor you need to deeply understand the business process and get the customer from the current state to the end state seamlessly.

Companies need help figuring out which models will work best for their workflows; they often need extensive evals set up; they need change management support for workflows; they need their data set up for the agents; and they need constant tuning of the agentic system for their process.

Massive role in tech now. And another example of the kind of highly technical work that AI is creating.
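The "extensive evals" part of that work can be sketched very simply: a golden set of real enterprise task cases is replayed against the agent and scored, so a model swap or workflow change can be checked before it ships. A minimal sketch, with all names and the toy agent behavior being hypothetical stand-ins:

```python
# Minimal eval-harness sketch for an agent deployment.
# run_agent is a hypothetical stand-in for a deployed agent call
# (in practice, an LLM API behind the enterprise workflow).

def run_agent(task: str) -> str:
    """Toy agent: returns a canned decision per task keyword."""
    canned = {"refund": "approve", "invoice": "route-to-ap"}
    for keyword, decision in canned.items():
        if keyword in task:
            return decision
    return "escalate"  # default when no rule matches

def evaluate(golden_set: list[tuple[str, str]]) -> float:
    """Fraction of golden cases where the agent matches the expected label."""
    hits = sum(run_agent(task) == expected for task, expected in golden_set)
    return hits / len(golden_set)

golden = [
    ("customer refund request under $50", "approve"),
    ("supplier invoice received", "route-to-ap"),
    ("legal threat in inbox", "escalate"),
]
print(evaluate(golden))  # prints 1.0 on this toy set
```

In a real engagement the golden set comes from the customer's historical cases, and the score gates each tuning iteration rather than being a one-off check.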




The most important AI lab in the world isn’t in San Francisco. It’s in Kyiv. 27,000 Shaheds last winter. 80+ models trained on real battlefield data through Brave1. A 95% interception target. Not a prototype. Not a pilot. Live, at scale, against a peer adversary. Silicon Valley talks about AI. Ukraine fights a war with it. Nobody in government understands what AI means for warfare better than @FedorovMykhailo. The next phase starts now.

🇺🇸🇺🇦 100 companies. 80 AI models. Real combat data. Inside the Ukraine–Palantir partnership reshaping AI in modern war.

Palantir @PalantirTech CEO Alex Karp visited Kyiv this week, and Ukraine's Defence Minister @DefenceU Mykhailo Fedorov @FedorovMykhailo disclosed what their partnership has actually delivered. Fedorov met with Karp on May 12, marking the latest milestone in a partnership that began in June 2022, when Karp became one of the first major Western tech CEOs to travel to Kyiv after the full-scale invasion began.

The deliverables Fedorov disclosed — concrete, operational, and consequential:

— Detailed air attack analysis system — built jointly to process Russia's increasingly complex saturation strikes. Every Shahed swarm, every ballistic launch, every drone vector now feeds into a unified analytical framework that informs interception decisions in near-real time.

— AI for intelligence processing — solutions deployed to handle the volume of raw intelligence data Ukraine collects across the front. Human analysts cannot manually process what Ukraine now generates. Palantir's AI does.

— Integration into deep strike planning — technology embedded directly into the targeting cycle for Ukraine's long-range operations against Russian refineries, command nodes, and military infrastructure.

Why this matters: Palantir is one of the top-10 US technology companies, with a market capitalisation of approximately $330 billion. Its software is used by NATO militaries, government agencies, and intelligence services across the alliance. Its core product — turning chaotic real-world data into operational decisions — is the most consequential capability in modern war. Ukraine is the only country in the world currently running Palantir's defence stack against a peer adversary in active high-intensity combat.

That means two things: One, Ukraine is generating the operational data that will refine these systems for every NATO country that operates them in future. Two, Ukrainian forces are gaining a decision-speed advantage that Russia, dependent on its own slower analytical infrastructure, cannot match.

The Brave1 Dataroom

The most significant disclosure may be the joint Brave1 Dataroom — a platform built by Ukraine and Palantir that gives defence tech developers access to real battlefield data for AI model training. The numbers:

— 100+ companies training models on the platform
— 80+ AI models in development for detecting and intercepting aerial targets in complex environments

This is the foundation Ukraine is building for the next generation of air defence — AI systems trained not on simulations or war-game data, but on actual combat conditions, drawn from the most intense aerial threat environment any country has faced since World War II.

In Mykhailo Fedorov's words: "Today, technology, AI, data analysis, and the math of war directly influence the result on the battlefield. Our goal is to strengthen our partnership with Palantir in AI solutions and defence tech projects that give Ukraine a technological advantage."

Ukraine is no longer just deploying Western technology. It is co-developing it — at the cutting edge, under combat conditions, with feedback loops that no peacetime defence programme can replicate.

— Source: Minister of Defence Mykhailo Fedorov, May 12, 2026
#Ukraine #Palantir #AI #DefenceTech #UAF #UkraineWillWin #Brave1 #ModernWarfare





NEW: @JTLonsdale shocks CNBC on AI regulation debate:

@andrewrsorkin: Is there ultimately going to be an FDA for AI models?
Joe: The FDA has killed millions of people...
Andrew: Killed??
Joe: Massive bureaucracy makes it cost 10 or 100X more than it should... there's tons of these new drugs you could be developing to save lives that we're just not able to do... China would love for us to have a massive regulatory bureaucracy for AI and let them get ahead.

Full exchange:

Andrew Ross Sorkin: We're all trying to figure out what this could look like. Is there ultimately going to be an FDA for AI models? Is that a good thing or a bad thing for somebody who's thinking the way you do?

Joe: Listen, the FDA has killed millions of people. Let's be totally clear, right?

Andrew: Killed??

Joe: It's literally led to the deaths of millions of people, Andrew... There are all these new therapies, especially now, by the way, with AI, that we could be developing...

Andrew: It's also hopefully saved some lives...

Joe: I mean, the trade-off is probably 100 to 1. There's a very famous story from 60 years ago where they caught some stuff that was killing people in Europe and saved them here. They've used that as an excuse to build this massive bureaucracy that makes it cost 10 or 100 times more to develop drugs than it should, which means there are tons of these new drugs you could be developing to save lives that we're just not able to do. I would be investing billions more to save lives, but I can't. So the equivalent is terrifying to me. The government is bad at these things. The bureaucrats are bad at these things. Now, there's another argument here, which is that you have things like Mythos and OpenAI's new technology that's really, really good at hacking into everything. And you probably don't want that new technology going to the bad guys right away. So there has to be some sort of trade-off, some sort of framework. We have to be really careful not to make the mistakes the FDA has made.

Andrew: So what would you do? What do you think that should look like?

Joe: There probably should be some national agreement on regulation of new powerful models. It should be as small and as narrow as possible. It should not have the same bureaucracy. You should make sure the government, from the start, has metrics on the speed at which it has to move, and on transparency, because you're going to have cronyism, you're going to have the big guys capture it. You're going to slow it down. Pharma loves the FDA against biotech. It makes it too expensive for us to build our own pharma companies. We have to sell to them. This is what the big guys want. Google and Microsoft and OpenAI and the rest of them, they want to create rules to make it so they can...

Andrew: They've all been calling for it. I mean, you remember Sam Altman, Dario, others early on said, "Regulate us; you need to regulate us. Please, regulate us." The question is: was that a genuine call for action, or do you think that was a "We think Washington's never going to do this, so we'll say it and get some nice PR points"?

Joe: If you are the leader in the space and you have tens of billions, hundreds of billions of dollars, you want there to be really complicated regulation with people you can hire who go in and out of your company, who work there, because you know you're going to be able to control it and influence it. ...And by the way, China has pre-IND (Investigational New Drug process) and IND timelines of about 30 and 60 days right now. We have 200 and 500 days. And so we've completely delayed everything we do. We've handed more than a third of our biotech sector to China in the last six years because we're so slow. We definitely don't want to do that on the AI side. That would be a disaster. China would love for us to build a massive regulatory bureaucracy for AI and let them get ahead. We cannot allow that to happen. @SquawkCNBC

The $111B consulting industry just got a massive wake-up call.

OpenAI and Anthropic launching dedicated service firms on the same day isn't a coincidence; it's an admission. They finally realized that building a smart model is only half the battle. The other half is the "last mile": actually making the tech work inside the messy, complex reality of a giant corporation.

For decades, the "Big Three" consulting firms have charged billions for slide decks and "strategic advice." But you can't solve an engineering problem with a PDF. These new ventures are designed to replace consultants with forward-deployed engineers who actually build. By partnering with the biggest PE firms, they aren't just selling software; they're installing the central nervous system for the next generation of industry.

The era of "talking" about AI is over. The era of building it into the bedrock of the economy has begun. Real work > Slide decks.

What do you think: will these AI service firms actually kill the traditional consulting model, or will the "Big Three" find a way to pivot?

Follow @10xme_biz on X to learn more about AI and sign up at 10xme.biz for a free AI diagnostic and newsletters.


