Bruce MacVarish

8.9K posts


@brucemacv

New AI, IT and security services - I post about Applied AI + models, agents, context, security and governance

Joined August 2007
5.5K Following 2.2K Followers
Bruce MacVarish
Bruce MacVarish@brucemacv·
The model itself isn’t apocalyptic yet because access is tightly controlled and aimed at defense. But it signals the start of an era where AI makes sophisticated cyber attacks cheaper, faster, and more accessible. Expect a massive patching wave in the coming weeks/months, followed by intense debate over how (or whether) to regulate these dual-use AI capabilities.
English
0
0
0
22
Haseeb >|<
Haseeb >|<@hosseeb·
This is terrifying. @AnthropicAI 's new unreleased Mythos model is so good at hacking, it found bugs in "every major operating system and web browser." 83.1% were exploited on first attempt. This thing is like COVID but for software. Actually apocalyptic in the wrong hands.
English
189
314
2.7K
550.8K
Bruce MacVarish
Bruce MacVarish@brucemacv·
My draft of AI Agent Security first principles:
1. Agency requires trustworthy delegation.
2. Identity is the root of accountability.
3. User context is the most valuable and most vulnerable data.
4. Speed asymmetry is the core structural threat.
5. Trust is earned through demonstrated behaviour, not granted through registration.
6. Invisible value is uncharged and undervalued.
7. The governed and the governor must share principles.
8. The ecosystem is the threat surface, not just the agent.
English
2
0
1
38
Bruce MacVarish retweeted
Ricardo
Ricardo@Ric_RTP·
Sam Altman just confirmed a world-shaking cyberattack is coming this year and nothing companies can do will stop it.

The Axios co-founder asked him directly if a catastrophic cyber event was realistic in the next 12 months. Altman's answer: "I think that's totally possible. Yes."

The CEO of the most powerful AI company on Earth just told the world to brace for impact. And his solution is NOT prevention. It's "resilience." That's the language governments use when they've accepted something bad is going to happen and they're now focused on surviving it instead of stopping it. Because the same AI that writes code for startups is about to write exploits for adversaries.

Altman admitted the frontier models are already dangerously capable at cybersecurity. The next generation will be significantly more so. And once those capabilities leak into open source, the game changes permanently.

But cyber isn't even the scariest thing he said: the real bomb came a few minutes later when he was talking about biosecurity. He said the models are getting extremely good at advanced biology and wonderful things will happen, like curing diseases that have killed people for centuries. Then he said this: "Someone is going to try to misuse those."

And right now, the frontier models are still locked inside responsible companies with safety layers and classifiers. OpenAI can mitigate a lot of the risk because they control the stack. But open source is catching up FAST. And when it does, any group with an internet connection and enough compute can ask an AI to help them engineer a novel pathogen.

Altman's exact words: "The needs for society to be resilient to terrorist groups using these models to try to create novel pathogens is no longer a theoretical thing, or it's not going to be for much longer."

Let that sink in. And this is where the story gets completely insane... Because Altman's response to all of this isn't just "build better safety classifiers."

His response is a policy blueprint that Axios editors called a "Bernie Sanders fever dream." He's quietly pitching it to Washington right now. It calls for rebuilding the social contract, redistributing the gains from AI, new tax structures, and fundamentally rethinking the relationship between labor and capital in an economy where a single person with AI can replace an entire team.

The part nobody saw coming? Republican senators and a senior Trump cabinet secretary told him they agree. One of them told Altman directly that capitalism needs to be reimagined because "way too much leverage is going to be with capital and not with labor." The CEO of OpenAI is now selling Bernie Sanders economics to Republican administration officials and they're listening.

Step back and look at the full picture now: the man building the most powerful technology in human history just admitted 3 things in one interview.
1. A catastrophic cyberattack is likely within 12 months
2. AI-enabled bioterrorism is about to become a real threat
3. The only solution he sees is a radical restructuring of capitalism that nobody in Washington is politically ready for

He's not saying AI will change the world someday. He's saying the change is already here, the risks are already landing, and the institutions designed to protect us are years behind. Most people are still debating whether ChatGPT will replace their jobs. Altman is quietly telling Washington the real question is whether society can HOLD TOGETHER through what's coming next.
English
109
131
736
206.2K
Bruce MacVarish retweeted
Marc Andreessen 🇺🇸
Every security flaw discovered by AI was there before AI, waiting to be discovered either by people or by AI. The world has never been good at securing computer systems; finally with AI we are going to get good.
English
341
456
7.2K
350.7K
Bruce MacVarish retweeted
Anthropic
Anthropic@AnthropicAI·
Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software. It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. anthropic.com/glasswing
English
1.7K
5.8K
38.6K
24.2M
Bruce MacVarish retweeted
Alton Syn
Alton Syn@WorkflowWhisper·
512,000 lines of Claude Code leaked, and it made one thing obvious: AI ops lives or dies on permissions, rollback, audit trails, and human handoff. Models win the demo; controls keep the workflow alive at 2am. What control do you add first before giving an agent prod access?
English
2
1
5
402
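The four controls named above (permissions, rollback, audit trails, human handoff) can be sketched as a thin gate in front of every agent action. This is a minimal illustration, not any real product's API; all names (`run_action`, `ALLOWED_ACTIONS`, `NEEDS_HUMAN`) are hypothetical:

```python
# Sketch of a control layer for an agent touching production.
# Assumed design: explicit allow-list (permissions), an escalation set
# (human handoff), and an append-only log of every attempt (audit trail).
from datetime import datetime, timezone

AUDIT_LOG = []  # append-only audit trail of every attempted action

ALLOWED_ACTIONS = {"read_config", "restart_service"}  # explicit allow-list
NEEDS_HUMAN = {"restart_service"}                     # risky ops need approval

def run_action(agent_id: str, action: str, approve=lambda a: False):
    """Gate an agent action: deny unknown ops, escalate risky ones, log all."""
    entry = {"ts": datetime.now(timezone.utc).isoformat(),
             "agent": agent_id, "action": action, "status": "denied"}
    if action not in ALLOWED_ACTIONS:
        AUDIT_LOG.append(entry)            # denied attempts are still logged
        return "denied"
    if action in NEEDS_HUMAN and not approve(action):
        entry["status"] = "escalated"      # park the op for a human decision
        AUDIT_LOG.append(entry)
        return "escalated"
    entry["status"] = "executed"
    AUDIT_LOG.append(entry)
    return "executed"
```

Rollback is the piece deliberately left out here: in practice each executed entry would also record an inverse operation so the log doubles as an undo stack.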
Bruce MacVarish
Bruce MacVarish@brucemacv·
Why This Context Graph Framework Changes How You Build Secure Agent Systems

Most security discussions treat agents as black boxes. Chase's model/harness/context split gives you a clean, clear approach:
• Keep the model dumb and generic (cheaper, safer, no forgetting issues).
• Keep the harness smart but policy-enforcing (the "decider" layer).
• Put all the intelligence and memory in the context graph (the secured, governed, auditable, tenant-scoped layer).
This is how you get continual learning without sacrificing security or governance.
English
0
0
0
25
Bruce MacVarish
Bruce MacVarish@brucemacv·
Why This Context Graph Framework Changes How You Build Secure Agent Systems

Most security discussions treat agents as black boxes. Chase's model/harness/context split + Levie's access-control insight gives you a clean, clear approach:
• Keep the model dumb and generic (cheaper, safer, no forgetting issues).
• Keep the harness smart but policy-enforcing (the "decider" layer).
• Put all the intelligence and memory in the context graph (the governed, auditable, tenant-scoped layer).
This is how you get continual learning without sacrificing security or governance.
English
0
0
0
95
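The model/harness/context split above can be sketched in a few lines: a stateless model stub, a harness that enforces tenant policy before any context read, and a tenant-scoped context graph. This is a toy illustration under assumed names (`CONTEXT_GRAPH`, `harness`), not anyone's actual framework:

```python
# Minimal sketch of the model/harness/context split.
# The "model" is generic and memoryless; the harness is the policy-enforcing
# decider; all memory lives in a tenant-scoped context store.

CONTEXT_GRAPH = {  # tenant-scoped memory: tenant -> facts
    "acme": ["renewal date is March", "owner is Dana"],
    "globex": ["contract under review"],
}

def model(prompt: str, facts: list) -> str:
    # Stand-in for a generic LLM call: it only sees what the harness hands it.
    return f"Answer using {len(facts)} fact(s): {prompt}"

def harness(user_tenant: str, requested_tenant: str, prompt: str):
    """The 'decider' layer: policy check first, then scoped context, then model."""
    if user_tenant != requested_tenant:   # cross-tenant reads are denied outright
        return None
    facts = CONTEXT_GRAPH.get(requested_tenant, [])
    return model(prompt, facts)
```

The point of the split: "continual learning" is just writes to `CONTEXT_GRAPH`, which stays auditable and governable, while the model weights never change.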
Aaron Levie
Aaron Levie@levie·
One of the core things we're going to have to contend with in AI is that even the most advanced models in the world can't have all the relevant knowledge needed to be useful, because everyone has different use-cases and ways they've designed their workflows.

Perhaps most importantly, as you get into the enterprise, everyone has entirely different access levels to corporate knowledge and information. Continual learning at the model layer, even at a single enterprise level, is near impossible because every user knows and has access to something different than another user. This isn't like coding, where by and large most developers can access all the relevant stuff for their job. On a single banking team, bankers have entirely different sets of documents they're ever allowed to see. Sanitizing this is hard, and having the model keep secrets is impossible.

This is why the context layer is always going to be the core part of the AI stack for applied use cases that turn general models into useful agents. Can't fight the physics on this one.
Harrison Chase@hwchase17

x.com/i/article/2040…

English
54
51
419
147.2K
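Levie's banking example above amounts to a simple rule: filter by the user's access control list before retrieval ranking, so a document the user cannot open never reaches the model. A minimal sketch, with invented documents and a toy relevance function:

```python
# Per-user access control at the context layer.
# ACL filtering happens BEFORE relevance ranking, so two users on the
# same team asking the same question get only their own visible documents.

DOCS = [
    {"id": "d1", "text": "Q3 deal terms",    "acl": {"alice"}},
    {"id": "d2", "text": "public rate sheet", "acl": {"alice", "bob"}},
    {"id": "d3", "text": "M&A target memo",   "acl": {"bob"}},
]

def retrieve(user: str, query: str):
    """Return ids of docs the user may see that match the query."""
    visible = [d for d in DOCS if user in d["acl"]]  # ACL gate first
    terms = set(query.lower().split())
    # Toy relevance: keep docs sharing any word with the query.
    return [d["id"] for d in visible if terms & set(d["text"].lower().split())]
```

Because the gate runs before ranking, "having the model keep secrets" is never required: the secret simply isn't in the prompt.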
Bruce MacVarish
Bruce MacVarish@brucemacv·
The strategic advantage from the "Division of Intelligence"
Syed Ijlal Hussain@sijlalhussain

📍 Most leaders think AI advantage comes from technology adoption. The real shift is that it comes from how organizations redesign talent systems. As EY highlights, AI advantage is built across five dimensions: talent flow, adoption excellence, capability development, culture transformation, and reward systems. This is not an HR agenda. It is an operating model redesign.

1️⃣ Talent Implication: AI changes how work is done, but most organizations keep roles static. Talent systems are not aligned with AI-driven workflows.

2️⃣ Structural Blind Spot: Firms invest in tools and training, but ignore incentives, career paths, and talent mobility. Skills improve; utilization does not.

3️⃣ Design Challenge: AI advantage requires integrating talent, workflows, and decision systems. Without this, capability remains fragmented across functions.

This is why AI investments fail to compound. Talent systems are not built to support how work is evolving. The real advantage is not access to AI. It is how your organization develops, deploys, and scales talent around it.

via EY ey.com/en_gl/insights… @corixpartners @Transform_Sec @Corix_JC @ILoveBooks786 @COSTESLionelEr @ramonvidall @RLDI_Lamy @FrRonconi @timo_vi @Nicochan33 @NathaliaLeHen @TCyberCast @arigatou163 @VivMilanoFSL @MathildaLoco @faryus88 @bbailey39 @BindIdeas971 @FmFrancoise @EduFirst @rameshambastha @DonaldGavis @ricardo_ik_ahau @sulefati7 @ozsilverfox @BCAgroup @9SManagement @O_Berard @DavidTaboada @yd_engoue @giuliog @Hajer_Alqassimi @EdwardHarkins @Evanskipropcrim @ranya_artistry @Howie7951 @iamtunslaw @gvalan

English
0
0
1
59
Bruce MacVarish retweeted
a16z
a16z@a16z·
IT services are first on the chopping block as companies adopt AI More charts: a16z.news/p/charts-of-th…
English
26
89
505
172K
Bruce MacVarish
Bruce MacVarish@brucemacv·
New @StanfordHAI Report ... "The window for experimentation is closing. This is no longer a question of whether AI delivers value. It’s whether organizations can evolve fast enough to capture it ... and whether leaders will take responsibility for smoothing the transition for workers and communities along the way." - @AGraylin . @StanfordHAI
Alvin Wang Graylin@AGraylin

New Research💡: "The #Enterprise #AI Playbook — Lessons from 51 Successful Deployments"

Excited to share new research from Stanford @DigEconLab, I co-authored with @erikbryn and Elisa Pereira. We spent 5 months interviewing executives across 41 organizations, 9 industries, and 7 countries — focusing exclusively on AI deployments that actually delivered measurable value. Not hype. Not predictions. What's working right now, and why.

A few findings that challenged even our assumptions:

The hard part isn't the AI. 77% of the toughest challenges were invisible costs — change management, data quality, process redesign. Technology was consistently described as the easiest part.

Same use case, wildly different timelines. One company deployed AI customer support in weeks. Another took years. Same models. The difference was always the #organization — its #leadership, processes, and willingness to fail.

#Agentic AI works — but most firms haven't tried it yet. Only 20% of our cases were agentic, but they delivered 71% median gains vs. 40% for high-automation. This gap will widen fast.

The model is increasingly a #commodity. For 42% of implementations, model choice was fully interchangeable. The durable advantage is in orchestration, data, and process — not the foundation model.

With productivity increases, headcount #reduction is common (45%), but not the majority outcome. Redeployment, hiring avoidance, and acceleration strategies accounted for 55% of cases.

🚨 The window for experimentation is closing. This is no longer a question of whether AI delivers value. It's whether organizations can evolve fast enough to capture it — and whether leaders will take responsibility for smoothing the transition for workers and communities along the way.

Full report (free): digitaleconomy.stanford.edu/app/uploads/20… @StanfordHAI

English
0
0
1
62
Bruce MacVarish
Bruce MacVarish@brucemacv·
We’re entering the era of specialized AI agents for high-stakes domains. Codex Security proves that combining large context, reasoning, tool use (sandbox execution), and closed-loop feedback can outperform both traditional tools and generic agentic scanners. Expect this pattern to spread as more AI-native security agents don’t just flag problems ... they prove, prioritize, and fix them.
OpenAI@OpenAI

Codex Security—our application security agent—is now in research preview. openai.com/index/codex-se…

English
0
0
0
44
Bruce MacVarish retweeted
Animesh Koratana
Animesh Koratana@akoratana·
Context graphs will be to the 2030s what databases were to the 2000s. Within a year, every frontier lab will be building one, and here's why:

At 10 people, coordination is free. Everyone knows what everyone else is doing. You never hold a meeting to "align." At 100 people, you spend maybe 20% of your payroll on coordination. Managers, syncs, standups, planning sessions, status reports. At 10,000 people, that number approaches 60%. The majority of your headcount exists not to produce anything but to make sure the people who produce things are producing the right things in the right order.

This is the dirty secret of large organizations: output scales linearly with headcount, but coordination cost scales exponentially. Every person you add creates new information pathways that must be maintained. The hierarchy is the protocol that manages this, and it's brutally expensive.

Hierarchy is a compression algorithm for organizational knowledge. At every layer, a manager compresses the reality of their team into a summary that fits in a 30-minute meeting with their boss. Their boss compresses eight of those summaries into one for their boss. By the time information reaches the CEO, it's been lossy-compressed through five or six layers of human interpretation.

This is why CEOs make bad decisions. The information they receive has been compressed, filtered, and distorted at every layer. The hierarchy is high-latency, low-bandwidth, and lossy. Jack didn't fire 4,000 producers but cut 4,000 compression nodes. Block's "world model" is a replacement algorithm. Zero latency, high bandwidth, lossless. Every person at the edge gets the full picture without waiting for information to travel through human relays.

The infrastructure that makes this possible is the context graph. A living, continuously updated representation of how the organization actually works. Not just data, but decision traces: the reasoning connecting observations to actions. Not what's true now, but why it became true. The shift from "give agents memory" to "give agents organizational judgment" will define the next platform war.
jack@jack

x.com/i/article/2038…

English
94
202
1.7K
386.4K
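The coordination-cost claim above can be made concrete with the classic pairwise-channels count: n people have n*(n-1)/2 possible communication pathways, so pathways grow quadratically (strictly speaking, not exponentially) while output grows at best linearly with headcount. A quick sketch:

```python
# Pairwise communication pathways among n people: n choose 2.
# Illustrates why coordination overhead dominates as headcount grows:
# headcount is linear, pathways are quadratic.

def channels(n: int) -> int:
    """Number of distinct person-to-person communication pathways."""
    return n * (n - 1) // 2

for n in (10, 100, 10_000):
    print(f"{n:>6} people -> {channels(n):>12,} pathways")
```

At 10 people there are 45 pathways; at 10,000 there are nearly 50 million, which is the gap that hierarchy (and, per the post, a context graph) exists to compress.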
Bruce MacVarish retweeted
Aaron Levie
Aaron Levie@levie·
The ultimate rate limiter on productivity gains from agents will be on critical stuff like security, compliance, governance, the ability to review the work of the agent, ensure that it's compatible with regulations, and so on. We've been living in a little bit of la-la land around how much software enterprises are going to ultimately want to vibe code themselves. The last 48 hours represent a good example of why you won't take on every risk of every piece of technology in your enterprise. There's no free lunch with AI productivity. Companies will have to build up the systems, processes, and controls for ensuring that agents can't run around and do anything they want on any data at any time.
sarah guo@saranormous

x.com/i/article/2039…

English
71
53
380
109.7K
Bruce MacVarish retweeted
CrowdStrike
CrowdStrike@CrowdStrike·
An average of 90 AI agents per person, each with access to critical systems, data and workflows. That’s why AIDR is the next evolution of security. @George_Kurtz talks to @DivesTech about what’s coming. ⬇️ crwdstr.ke/6012B6WLDa
English
1
10
43
8.7K