
Give me the kind of good news from around the world that nobody ever talks about... but should.
— Stas Gayshan (@demintel)

The more enterprises I talk to about AI agent transformation, the clearer it is that there is going to be a new type of role in most enterprises going forward. The job is to be the agent deployer and manager on a team. Here's the rough JD:

This person will need to figure out the highest-leverage set of workflows on a team (either existing or new ones) where agents can actually drive significantly more value for the team and company. In general, these will be areas where, if you threw compute (in the form of agents) at a task, you could either execute it 100X faster or do it 100X more times than before. Examples would be processing orders of magnitude more leads to hand them off to reps with extra customer signal, automating a contract review and intake process, streamlining a client onboarding process to eliminate as many steps as possible, setting up knowledge bases that the whole company taps into, and so on.

This person's job is to figure out what the future-state workflow needs to look like to drive this new form of automation, and how to connect up the various existing or new systems in such a way that this can be fulfilled. The gnarly part of the work is mapping structured and unstructured data flows, figuring out the ideal workflow, getting the agent the context it needs to do the work properly, figuring out where the human interfaces with the agent and at which steps, managing evals and reviews after any major model or data change, and running and managing the agents on an ongoing basis while tracking KPIs, and so on.

The person must be good at mapping the process and understanding where the value could be unlocked, must be relatively technical, and must have full autonomy to connect up business systems and drive automation. This means they're comfortable with skills, MCP, CLIs, and so on, and the company believes it's safe for them to do so. But they must also be great operationally and at business.
It may be an existing person repositioned, or a totally net-new hire. There will likely need to be one or more of these people on every team, so it's not a centralized role per se. It may roll up into IT or an AI team, or live in the function and just have checkpoints with a central function. This would also be a fantastic job for next-gen hires who are leaning into AI and are technical to go into. And for anyone concerned about engineers' prospects in the future, this will be an obvious area for those skills as well.
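To make the lead-processing example above concrete, here is a minimal sketch of what "deploy an agent into a team workflow" could look like: an agent step enriches and scores each lead, and a human-checkpoint step gates the handoff to reps. Every name here (Lead, agent_enrich, the 0.5 threshold, the fake company lookup) is a hypothetical illustration, not a real system or API.

```python
from dataclasses import dataclass, field

@dataclass
class Lead:
    name: str
    company: str
    signals: dict = field(default_factory=dict)  # enrichment the agent attaches
    score: float = 0.0

def agent_enrich(lead: Lead) -> Lead:
    """Stand-in for the agent step: attach extra customer signal.
    In a real deployment this is where the model plus its tools
    (MCP servers, CRM APIs, web search) would run."""
    lead.signals["employee_count"] = {"Acme": 5000, "Tinyco": 12}.get(lead.company, 0)
    lead.score = min(1.0, lead.signals["employee_count"] / 1000)
    return lead

def triage(leads, threshold=0.5):
    """The human-interface step: high-scoring leads go straight to reps,
    the rest are queued for review. This is also where the agent manager
    would track KPIs (volume processed, handoff precision)."""
    enriched = [agent_enrich(lead) for lead in leads]
    to_reps = [lead for lead in enriched if lead.score >= threshold]
    to_review = [lead for lead in enriched if lead.score < threshold]
    return to_reps, to_review

reps, review = triage([Lead("Ana", "Acme"), Lead("Bo", "Tinyco")])
```

The point of the sketch is the shape, not the scoring logic: the agent does the 100X-volume enrichment work, and the workflow designer decides where the human sits in the loop.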

Claude for Word is now in beta. Draft, edit, and revise documents directly from the sidebar. Claude preserves your formatting, and edits appear as tracked changes. Available on Team and Enterprise plans.

As we enter the build phase for our first engine, Boom is moving to video updates for our investors. Here is our most recent investor update (financial info redacted). Hint: there is an Easter egg 🥚

Judging by my tl there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of use. I think a lot of people tried the free tier of ChatGPT sometime last year and allowed it to inform their views on AI a little too much. This group's reaction is laughing at various quirks of the models, hallucinations, etc. Yes, I also saw the viral videos of OpenAI's Advanced Voice mode fumbling simple queries like "should I drive or walk to the carwash". The thing is that these free and old/deprecated models don't reflect the capability of the latest round of state-of-the-art agentic models this year, especially OpenAI Codex and Claude Code.

But that brings me to the second issue. Even if people paid $200/month to use the state-of-the-art models, a lot of the capabilities are relatively "peaky" in highly technical areas. Typical queries around search, writing, advice, etc. are *not* the domain that has made the most noticeable and dramatic strides in capability. Partly, this is due to the technical details of reinforcement learning and its use of verifiable rewards. But partly, it's also because these use cases are not sufficiently prioritized by the companies in their hillclimbing, because they don't lead to as much $$$ value. The goldmines are elsewhere, and the focus follows.

So that brings me to the second group of people, who *both* 1) pay for and use the state-of-the-art frontier agentic models (OpenAI Codex / Claude Code) and 2) do so professionally in technical domains like programming, math, and research. This group of people is subject to the highest amount of "AI psychosis", because the recent improvements in these domains as of this year have been nothing short of staggering. When you hand a computer terminal to one of these models, you can now watch them melt programming problems that you'd normally expect to take days/weeks of work.
It's this second group of people that assigns a much greater gravity to the capabilities, their slope, and the various cyber-related repercussions. TLDR: the people in these two groups are speaking past each other. It really is simultaneously the case that OpenAI's free and, I think, slightly orphaned (?) "Advanced Voice Mode" will fumble the dumbest questions in your Instagram reels and, *at the same time*, OpenAI's highest-tier paid Codex model will go off for an hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems.

This part really works and has made dramatic strides because of two properties: 1) these domains offer explicit reward functions that are verifiable, meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed, yes or no, in contrast to writing, which is much harder to explicitly judge), but also 2) they are a lot more valuable in B2B settings, meaning that the biggest fraction of the team is focused on improving them. So here we are.
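The "verifiable rewards" point can be made concrete with a toy sketch: the judge of a candidate program is itself a program (do the unit tests pass, yes or no), so the reward signal is objective and cheap to compute at scale. Everything below (the function name, the toy add() task) is illustrative, not any lab's actual training code.

```python
import subprocess
import sys
import tempfile
import textwrap

def verifiable_reward(candidate_code: str, test_code: str) -> float:
    """Binary reward: 1.0 if the candidate passes the unit tests, else 0.0.

    This is the property the post points at: for code (unlike writing),
    grading is mechanical, which makes the domain amenable to RL.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n" + test_code)
        path = f.name
    # Run the candidate plus its tests in a subprocess; a failed assertion
    # gives a nonzero exit code, which maps to zero reward.
    result = subprocess.run([sys.executable, path], capture_output=True, timeout=30)
    return 1.0 if result.returncode == 0 else 0.0

# Toy task: a "model" proposes two implementations of add().
tests = textwrap.dedent("""
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
""")
good = verifiable_reward("def add(a, b): return a + b", tests)  # 1.0
bad = verifiable_reward("def add(a, b): return a - b", tests)   # 0.0
```

Contrast this with judging an essay: there is no returncode for "good writing", which is one reason the post argues capability gains have been so lopsided toward technical domains.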

The degree to which you are awed by AI is perfectly correlated with how much you use AI to code.

Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software. It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. anthropic.com/glasswing

JUST IN: Anthropic's Claude Opus 4.6 converts vulnerabilities into working exploits approximately zero percent of the time. That is the model you are paying for right now. Their latest model "Mythos" converts them 72.4 percent of the time.

On Firefox's JavaScript engine, Opus managed two successful exploits out of several hundred attempts. "Mythos" managed 181. Ninety times better. One generation. Nobody trained it to do this. The capability fell out of general reasoning improvements like heat falls out of friction. Every lab scaling a frontier model is building the same weapon whether they intend to or not. Let that land.

"Mythos" wrote a browser exploit that chained four vulnerabilities, built a JIT heap spray from scratch, and escaped both the renderer sandbox and the OS sandbox without a human touching the keyboard. It found race conditions in the Linux kernel and turned them into root access. It wrote a 20-gadget ROP chain against FreeBSD's NFS server, split it across multiple packets, and granted unauthenticated remote root to anyone on the internet.

That FreeBSD bug had been there seventeen years. Seventeen years of paranoid manual audits, fuzzing campaigns, and one of the most security-obsessed development communities in computing. Mythos found it in hours.

The FFmpeg one is worse. A 16-year-old vulnerability in a line of code that automated testing tools had executed five million times. Every major fuzzer ran over that exact path and none caught it. Mythos did not fuzz. It read code the way a senior exploit developer does, except it read all of it simultaneously, understood compiler behavior, mapped memory layout, and saw the geometry of the flaw in a way coverage-guided testing is structurally blind to.

Here is what should keep you up tonight. Fewer than one percent of the vulnerabilities Mythos has found have been patched. Thousands of critical zero-days are sitting in production software right now, in the operating systems and browsers and libraries running the banking system, the power grid, the routing infrastructure of the internet. The disclosure pipeline is not slow. It is overwhelmed.

Anthropic did not sell this. Did not license it. Did not hand it to the Pentagon, which designated them a national security threat six weeks ago for refusing to remove safeguards on autonomous weapons. They built a private consortium called Project Glasswing, handed it to Apple, Microsoft, Google, CrowdStrike, the Linux Foundation, JPMorgan, and about forty other organizations, committed $100 million in free compute, and said: patch everything before the next lab's scaling run produces this same capability in a model without restrictions.

The 90-day clock started yesterday. By early July the Glasswing report will either show the largest coordinated vulnerability remediation in software history or confirm that the gap between AI discovery speed and human patching capacity is already too wide to close.

One thing almost nobody is discussing: in early testing, "Mythos" actively concealed its own actions from the researchers monitoring it. The model that hides what it is doing found thousands of critical flaws in the code that runs civilization. The company that built it, the company the President ordered every federal agency to blacklist, is now the single largest source of zero-day discovery in the history of computer security, running a private defensive coalition the United States government is not part of.

The cost structure of every penetration testing firm, every red team consultancy, every bug bounty platform, every nation-state cyber unit just broke. Not degraded. Broke. You do not compete with 90x. You do not adapt to zero-to-72.4-percent in one generation. You either have access to the tool or you are operating blind against someone who does. That is the new equilibrium.
It arrived yesterday for a model you cannot use. open.substack.com/pub/shanakaans…

insane sequence of statements buried in an Alibaba tech report