Null

1.1K posts

Null

@null_core_ai

Pre-Execution Intent Attestation for AI Systems

Earth · Joined July 2025
63 Following · 243 Followers
Null
Null@null_core_ai·
Most teams think they have control over their AI systems. But they’re actually controlling prompts instead of execution. Once a prompt is interpreted downstream, you’ve already lost precision. That’s where systems start drifting.
0
0
0
17
Null
Null@null_core_ai·
@levie Agents using software 100x more changes everything. Not just scale but the number of actions being taken across systems. Interoperability solves access. The harder problem is defining what those agents are actually allowed to execute at that scale
0
0
0
25
Aaron Levie
Aaron Levie@levie·
Agents are going to use software 100X more than people will in the future. As a result, enterprise platforms will become headless and be able to work with any agent on or off platform. If you don’t do that you’re DOA. What some have missed is that this creates vastly more use-cases for these platforms than even existed pre-AI. This isn’t zero sum. Software value props have traditionally been capped at the number of users you have in a company. Agents have no upper limit. We’re going to run agents to process data at a scale humans never could, they’re going to be running 24/7 in parallel doing work for us, and they can integrate workflows across systems to generate all new value propositions. Once you embrace this approach, it becomes obvious how much more upside there is.
Marc Benioff@Benioff

Welcome Salesforce Headless 360: No Browser Required! Our API is the UI. Entire Salesforce & Agentforce & Slack platforms are now exposed as APIs, MCP, & CLI. All AI agents can access data, workflows, and tasks directly in Slack, Voice, or anywhere else with Salesforce Headless 360. Faster builds, agentic everything. 🚀 #Salesforce #Agentforce #AI venturebeat.com/ai/salesforce-…

136
178
1.8K
344.8K
Null
Null@null_core_ai·
@levie AI shifts the bottleneck. As output expands, the number of actions being taken across systems expands too. At some point the constraint will be being able to clearly define and reason about what’s actually being executed.
0
0
0
22
Aaron Levie
Aaron Levie@levie·
Why will AI create more jobs in plenty of industries? It’s because we’re going to use AI to accelerate output in one area, and then eventually you run into a new bottleneck somewhere else in the process that still requires humans. This example from the FT is an obvious one. More people asking legal questions from AI agents, which downstream eventually will mean there are more lawyers being pinged with questions. There are other drivers, too, like AI accelerating new business formation, more patent filings, new scientific research, and so on - all of which eventually land in the laps of lawyers and other regulatory functions. But the analogy holds for plenty of other work. More code will mean more security risks, which means more security researchers. Automating patient referrals in healthcare just leads to a bottleneck of not having enough doctors. More customer outreach via AI leads to more sales conversations. You can list thousands of categories like this. There’s a lot of areas where AI will lead to “efficiency” in the sense that we will automate something and then spend less in that area. But the value proposition taps out at some point because the world isn’t static. Your competitor will use AI to build a better product, go out and meet with even more customers, deliver a better service, run better ad campaigns, and you eventually have to match them or die.
Aaron Levie tweet media
76
64
437
89.9K
Null
Null@null_core_ai·
@levie The ‘headless + multi-agent’ point is the most important thing here. When execution moves across agents and tools, the system becomes harder to reason about. Interoperability gets solved, but clarity of execution becomes the harder problem.
0
0
0
307
Aaron Levie
Aaron Levie@levie·
Another week on the road meeting with a couple dozen IT and AI leaders from large enterprises across banking, media, retail, healthcare, consulting, tech, and sports, to discuss agents in the enterprise. Some quick takeaways:
* Clear that we’re moving from the chat era of AI to agents that use tools, process data, and start to execute real work in the enterprise. Complementing this, enterprises are often evolving from a “let a thousand flowers bloom” approach to adoption to targeted automation efforts applied to specific areas of work and workflow.
* Change management will still remain one of the biggest topics for enterprises. Most workflows aren’t set up to just drop agents directly in, and enterprises will need a ton of help to drive these efforts (both internally and from partners). One company has a head of AI in every business unit that rolls up to a central team, just to keep all the functions coordinated.
* Tokenmaxxing! Most companies operate with very strict OpEx budgets that get locked in for the year ahead, so they’re going through very real trade-off discussions right now on how to budget for tokens. One company recently had an idea for a “shark tank” style way of pitching for compute budget. Others are trying to figure out how to ration compute to the best use-cases internally through some hierarchy of needs (my words not theirs).
* Fixing fragmented and legacy systems remains a huge priority right now. Most enterprises are dealing with decades of either on-prem systems or systems they moved to the cloud but that still haven’t been modernized in any meaningful way. This means agents can’t easily tap into these data sources in a unified way yet, so companies are focused on how they modernize these.
* Most companies are *not* talking about replacing jobs due to agents. The major use-cases for agents are things that the company wasn’t able to do before or couldn’t prioritize. Software upgrades, automating back office processes that were constraining other workflows, processing large amounts of documents to get new business or client insights, and so on. More emphasis on ways to make money vs. cut costs.
* Headless software dominated my conversations. Enterprises need to be able to ensure all of their software works across any set of agents they choose. They will kick out vendors that don’t make this technically or economically easy.
* Clear sense that it can be hard to standardize on anything right now given how fast things are moving. Blessing and a curse of the innovation curve right now - no one wants to get stuck in a paradigm that locks them into the wrong architecture. One other result of this is that companies realize they’re in a multi-agent world, which means that interoperability becomes paramount across systems.
* Unanimous sense that everyone is working more than ever before. AI is not causing anyone to do less work right now, and similar to Silicon Valley, people feel their teams are the busiest they’ve ever been.
One final meta observation not called out explicitly. It seems that despite Silicon Valley’s sense that AI has made hard things easy, the most powerful ways to use agents are more “technical” than prior eras of software. Skills, MCP, CLIs, etc. may be simple concepts for tech, but in the real world these are all esoteric concepts that will require technical people to help bring to life in the enterprise. This means both that diffusion will take real work and time, and that everyone’s estimation of engineering jobs is totally off.
Engineers may not be “writing” software, but they will certainly be the ones to set up and operate the systems that actually automate most work in the enterprise.
254
644
5.3K
1.7M
Null
Null@null_core_ai·
@levie Efficiency also expands exposure: more automated actions mean more need to prove what was actually authorized before they happened, especially in legal and compliance-heavy domains.
0
0
0
213
Aaron Levie
Aaron Levie@levie·
We will likely have more lawyers in the future than today, because: 1) AI will cause so many more people to ask legal questions which will encourage them to need to verify or execute through an actual lawyer. 2) AI will cause an explosion of more and more exotic legal terms that lawyers will be spending even more time reviewing redlines or new cases around. 3) All the new areas of law that now are emerging around the use of AI itself in every single industry. AI introduces an explosion of IP, privacy, and regulatory compliance challenges across all verticals. This has historical precedent as well. Between the creation of the PC and the internet (both technologies that made the legal profession far more efficient), the ABA pegs active attorneys having gone from roughly 400,000 in 1975 to roughly 1,375,000 in 2025. When we make professions more efficient and automated, often demand for them goes up not down.
76
93
889
623.2K
Null
Null@null_core_ai·
@levie Second-order effect of agent efficiency:
more software
more workflows
more actions
more ambiguity
As scope expands, so does the need to define what’s actually authorized before execution.
0
0
0
92
Aaron Levie
Aaron Levie@levie·
There are far more categories where AI agents making things more efficient will induce demand for that skill than spaces where agents eliminate the work. This is why the AI jobs predictions will not play out as advertised. AI making it easy to produce more code will mean we start to apply code to far more parts of our businesses. We will build automation and software for things that wouldn’t have made sense before. Marketing automation, client onboarding, modernizing old systems, doing far more research on existing data, and more. More engineers. Far more software will mean vastly more security risks. This will mean far more people thinking through system security, compliance, and governance. This used to be primarily manual and only large companies could afford this work. AI will make it so more companies care about this (and maybe can do something about it), causing more security roles. AI will also lower the cost of a bunch of previously relatively niche or harder to access categories of work. Companies will now be doing 10X more with video and graphics, and will need people to manage that work. More media. We’re going to have a near unlimited set of legal challenges in a world of AI as AI helps write even more bespoke and complicated legal docs. More lawyers. Then there’s the impact of AI efficiency on non-office worker jobs. Talked to a customer that said they’re going to make scheduling medical appointments and getting referrals so efficient the next problem will be there will be no booking time slots available. More healthcare. Many industries will have this same dynamic play out. The examples are endless once you start to think through second order effects of agents making work more efficient.
Marc Andreessen 🇺🇸@pmarca

The "AI job loss" narratives are all fake. AI = massive ramp in productivity = massive ramp in demand = massive jobs boom. Watch.

65
55
349
197.5K
Null
Null@null_core_ai·
Once you have a pre-execution record, the system changes. You no longer route raw prompts downstream. You route a fixed, signed artifact instead. Everything after that becomes:
– easier to reason about
– easier to audit
– easier to control
The input layer stops being ambiguous.
0
0
0
21
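A minimal sketch of what routing a frozen artifact instead of a raw prompt could look like, assuming a simple hash-bound record; the IntentRecord type and field names are illustrative, not Null’s actual format:

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass(frozen=True)
class IntentRecord:
    """Immutable pre-execution record routed downstream instead of the raw prompt."""
    scope: str         # what was actually requested
    boundary: str      # what is explicitly not authorized beyond that scope
    payload_hash: str  # hash that binds the record to its canonical payload


def freeze(scope: str, boundary: str) -> IntentRecord:
    payload = json.dumps({"scope": scope, "boundary": boundary}, sort_keys=True)
    return IntentRecord(scope, boundary, hashlib.sha256(payload.encode()).hexdigest())


def execute(record: IntentRecord) -> None:
    # Downstream stages accept only the frozen record and re-derive its hash
    # before acting, so re-interpretation or tampering is detectable.
    payload = json.dumps({"scope": record.scope, "boundary": record.boundary}, sort_keys=True)
    if hashlib.sha256(payload.encode()).hexdigest() != record.payload_hash:
        raise ValueError("record does not match its payload hash")
    # ...act strictly within record.scope, refuse anything beyond it


execute(freeze("review internal audit posture", "nothing beyond the expressed scope"))
```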
Null
Null@null_core_ai·
@levie Agreed on being unsentimental with execution architecture. But some layers shouldn’t move with model improvements. If what an agent is allowed to do changes every time the stack resets, you end up with shifting control too.
0
0
0
24
Aaron Levie
Aaron Levie@levie·
One of the biggest lessons thus far in building AI agents is you have to be brutally unsentimental in your architecture. The models get better and better at handling things you previously built scaffolding for, so you need to ruthlessly jettison your prior tech to get those new performance gains. The rough loop of building AI agents looks something like:
1. Build a bunch of systems around the LLM to ensure that the agent can solve specific tasks very well
2. The model capabilities dramatically improve, rendering many of those systems redundant or even harmful
3. Remove prior scaffolding to get the new performance gains from the agent
4. New capabilities emerge in the models that let you solve a new set of much harder problems
5. Go back to step 1
For instance, in our new Box Agent, from the moment we designed the original architecture to the ultimate release, we had to evolve multiple components of the agent harness simply because some parts were creating unnecessary constraints for the agents as models improved. The models continued to get insanely good at more complex reasoning, improvements in using search and other tools, writing code on the fly for new capabilities, improving context window performance for accuracy, and more. Many of the mitigations we put in place for the Box Agent (like ways to appropriately find data that users were looking for, or ways of chunking text to deal with context window limitations) eventually meant we got lower quality results or meant we were overfitting for specific use-cases as soon as the models got better. The main lesson is always make sure you’re taking advantage of the frontier capabilities and don’t become nostalgic around the tech you’ve already built.
100
78
622
110.7K
Null
Null@null_core_ai·
@levie Governance becomes the bottleneck. A lot of controls today focus on restricting what agents can do. The harder problem is making explicit what they’re actually authorized to do before they act. Without that, you’re limiting capability but not eliminating ambiguity.
0
0
0
35
Aaron Levie
Aaron Levie@levie·
The ultimate rate limiter on productivity gains from agents will be on critical stuff like security, compliance, governance, the ability to review the work of the agent, ensure that it’s compatible with regulations, and so on. We’ve been living in a little bit of la-la land around how much software enterprises are going to ultimately want to vibe code themselves. The last 48 hours represent a good example of why you won’t take on every risk of every piece of technology in your enterprise. There’s no free lunch with AI productivity. Companies will have to build up the systems, processes, and controls for ensuring that agents can’t run around and do anything they want on any data at any time.
sarah guo@saranormous

x.com/i/article/2039…

70
53
383
110.4K
Null
Null@null_core_ai·
@levie The surface area of actions expands along with scope. The challenge becomes not just what can be automated, but what was actually authorized. More output means more decisions and more ambiguity if that isn’t made explicit.
0
0
0
14
Aaron Levie
Aaron Levie@levie·
The thing that most people miss initially with agents is that the scope of what we will produce will go up commensurate with what the tools can now automate, which basically means we’ll be working the same or even more. Everyone thinks that we will use AI to do what we already do but cheaper and faster, which would lead to fewer people or getting more time back. In fact it will just mean we’re doing more things. Once we figure out that we can automate a particular task, we then expand the size of work to do many more of those or other tasks in a project. The result is that you’re actually combining many other previously hard to combine tasks into a single workflow, causing even more work. The software project scope now multiplies because you know you can build far more. The customer insights project now balloons because you know you can reasonably aggregate far more data. The marketing campaign has even more creative production because it’s cheaper and easier. This is going to happen in almost every field of work.
kache@yacineMTB

It's remarkable how much of my work is completely automated w/ AI, and yet, I still am necessary. The amount of time I personally have to spend working just isn't going down. Instead, the leverage of my own time is going up. Every second I spend not working becomes more painful

98
42
362
123.5K
Null
Null@null_core_ai·
A lot of the work is building bridges between systems. But once agents can traverse those bridges, the harder question becomes: What are they actually allowed to do when they get there? Change management will not just be technical, but also about making intent explicit before execution.
0
0
0
102
Aaron Levie
Aaron Levie@levie·
We dramatically underestimate how much change management it is going to take to automate most knowledge worker tasks. Between data being in legacy environments or systems or without good APIs, context missing for doing the task, teams that are less technical, and other factors, there’s still a lot of work to drive real AI transformation in an enterprise. This is actually great news if you’re building right now because the opportunity is to build the software bridges to make this easier, or to build new services firms to help with this change management. Opportunity is all around for those looking.
Jason Shuman@JasonrShuman

Silicon Valley thinks AI agents are a $20/mo self-serve subscription. Main Street is paying local agencies $10,000 just to turn them on. Everyone assumes AI will be bought primarily online like Slack or Zoom. I think they are wrong. Some of the biggest winners in the AI boom won't be the software vendors. It will be the humans installing it. Here is the reality of SMBs right now:
• 54% lack internal AI expertise.
• 41% have data quality too poor for AI to even work.
• 41% already prefer buying AI through a local IT provider.
You cannot "1-click install" a genius AI into a messy CRM or a 15-year-old server. It will just execute the wrong tasks at the speed of light. The AI software will be cheap and a lot will absolutely be bought online. Making it actually work for a messy, real-world business will be expensive. Very bullish on the "Do It For Me" economy being back.

121
123
1.2K
265.7K
Null
Null@null_core_ai·
@levie As agents gain the ability to act across systems, the core question becomes: What are they explicitly authorized to do before they execute? Permissions define access. But they don’t define intent. That gap will matter a lot as agents start operating across workflows.
0
0
0
27
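A toy sketch of that gap, with hypothetical permission and intent tables: the access check passes, but the action was never part of the requested scope.

```python
# Hypothetical illustration: access control says what an agent *can* do,
# an intent record says what it was actually *asked* to do for this task.
PERMISSIONS = {"agent-7": {"crm:read", "crm:export"}}    # granted capabilities

INTENT = {
    "scope": {"crm:read"},                                # what was requested
    "boundary": "no actions beyond the requested scope",  # explicit limit
}


def allowed_by_permissions(agent: str, action: str) -> bool:
    return action in PERMISSIONS.get(agent, set())


def allowed_by_intent(action: str) -> bool:
    return action in INTENT["scope"]


action = "crm:export"
print(allowed_by_permissions("agent-7", action))  # True  -> access is granted
print(allowed_by_intent(action))                  # False -> but it was never asked for
```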
Aaron Levie
Aaron Levie@levie·
Computer use and the ability to write and run code on the fly are the ultimate primitives for agents to be able to take on more and more tasks in knowledge work. Most work requires hopping between multiple applications, and working with broad sets of data, in a workflow, and agents will need to be able to traverse these systems to be able to effectively automate any real work in the enterprise. Now we will have agents that are the equivalent of having an expert programmer (or any number of them) that can write code or use any API to automate whatever work you’re doing. Agents will have access to either a user’s computer and resources, or their own sandbox to operate in, and be able to pull together the tools necessary to perform the task at hand. This opens up the broadest set of agentic use-cases. To be sure, there are going to be various hurdles around security, permissions and access controls, identity challenges, and more. For instance, should the agent always act on behalf of the user, or should they have their own identity and limited set of access rights? How do you triage security events when volume of activity on a system is no longer a reliable signal of a security issue, as it historically was? How do you ensure the agent isn’t going rogue or getting prompt injected to do something risky? All problems that need to get figured out. Then, there’s also lots of work needed to ensure software is set up to enable agents to operate with their tools in a headless fashion. This will be an uncomfortable reality for some incumbents, and equally a welcome one for tools that historically have operated seamlessly via APIs, and have business models to support this. Lots of change coming in the world of work agents, and it’s going to get pretty wild.
60
15
213
53.6K
Null
Null@null_core_ai·
Most AI systems still execute on inferred intent. That’s the bug. Null Lens returns a signed pre-execution record of:
Scope — what was actually requested
Boundary — what is explicitly not authorized beyond that scope
So downstream systems act on a verifiable authorization surface, not loose interpretation. Not orchestration. Not post-hoc logging. Pre-execution control.
0
0
0
33
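One possible wire shape for such a record, sketched as plain JSON via Python; the field names are assumptions, not the actual Null Lens schema:

```python
import json

# Hypothetical pre-execution record: scope of what was requested plus an
# explicit boundary, so downstream systems have a concrete object to check.
record = {
    "request": "summarize Q3 vendor contracts",
    "scope": "read-only summary of Q3 vendor contracts",
    "boundary": "nothing is authorized beyond the stated scope",
}
print(json.dumps(record, indent=2, sort_keys=True))
```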
Null
Null@null_core_ai·
@levie We’re early on control too. Most enterprises don’t yet have a way to formally define what an agent is allowed to do before execution. Until that exists, scaling agents will stay slower than the tech suggests.
0
0
0
102
Aaron Levie
Aaron Levie@levie·
We are so unbelievably early with agents right now. The majority of companies aren’t even using coding agents at scale, let alone for the rest of knowledge work. We’re still mostly in the chatbot era of work for most of AI right now. Diffusion of tech takes time, even in the most breakneck of markets, because there are major workflows that need to be reinvented, any regulated or large business has huge governance processes for deploying new tech or agents, data needs to get into well-organized environments, and there’s technical literacy that needs to be established. All things that get solved, but takes time nonetheless. A point of comparison for technology diffusion: in 2010, a time by which every person in silicon valley knew that cloud was the future, AWS revenue was $500 million, Azure had only launched that year, and GCP was called Google App Engine. By 2025, these 3 platforms generated around $225 billion in revenue. And that’s only about 60% of the cloud market. So from the moment the tech industry saw the future of cloud to today, the market is nearly 1,000 times bigger. And it’s still growing at an insane rate. The same will happen for agents. Coding agents are like the early days of cloud computing when developers got on board for initial use cases. Then came the bigger workloads. This gives you a sense for how early we actually are in this transformation.
Rohan Varma@rohanvarma

A couple of times a week, I find myself convincing a CTO to try coding agents. We’re very early. Someone was telling me that it took over 10 years for most enterprises to adopt the cloud, and there are still holdouts. We’re only 1 year into AI coding agents. I do think coding agents will proliferate faster, if only because the competitive advantage is so strong that companies who don’t adopt will struggle to stay afloat.

95
71
607
93.5K
Null
Null@null_core_ai·
@levie Stack changes are inevitable. But what’s authorized before execution shouldn’t. Otherwise every reset also resets your control layer.
0
0
0
63
Aaron Levie
Aaron Levie@levie·
It is quite ridiculous how agile you have to be with your AI agent stack right now. Whatever you spent 6 months perfecting 12 months ago probably is already out of date and you’re better off doing a reset than trying to resuscitate it architecturally. And what’s interesting is that for every jump in progress that eliminates one part of the stack, generally a new capability becomes possible that you need to build new scaffolding for. For instance, probably lots of RAG pipelines have had to adjust because context windows have improved dramatically and you can now just use agentic search due to improved tool use. But that same improved tool use means you probably need to be supporting code execution with sandboxes so the agent can handle more complex work. So one capability gets bitter-lessoned, and a new one opens up altogether. This is the cycle we’re going to be in for years. If you don’t have the speed and agility to deal with it, you’re probably going to be in a tough spot.
Matt Carey@mattzcarey

every new model generation you see the pinch of the bitter lesson. harnesses, pipelines, rules which previously felt important now hold you back from innovating. what took months of grind for you is now just a prompt away at ½ the cost. look for it and you will see. Both large and small companies re-evaluating. Company directions change before your eyes. it’s a wild moment for our industry

63
38
452
110.7K
Null
Null@null_core_ai·
Most teams rely on prompts and logs. We tested what happens if you freeze intent before execution.
User request: “Review internal audit posture before routing downstream.”
Lens produces a signed pre-execution record:
Scope: internal audit posture
Boundary: strictly limited to the expressed scope
Integrity: ed25519 signature + payload hash
Logs tell you what happened after. This is what was authorized before execution.
0
0
0
32
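A rough sketch of the signing and verification flow this describes, using the cryptography package’s Ed25519 primitives; the mechanics here are assumed for illustration, not Null Lens’s actual implementation:

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

record = {
    "scope": "internal audit posture",
    "boundary": "strictly limited to the expressed scope",
}

# Canonicalize the payload and bind the signature to its hash.
payload = json.dumps(record, sort_keys=True).encode()
payload_hash = hashlib.sha256(payload).digest()

signing_key = Ed25519PrivateKey.generate()
signature = signing_key.sign(payload_hash)

# Downstream: verify before executing anything. The record is evidence of what
# was authorized before execution, unlike a log written after the fact.
verifying_key = signing_key.public_key()
try:
    verifying_key.verify(signature, payload_hash)
    print("authorized scope:", record["scope"])
except InvalidSignature:
    print("refusing to execute: record is not authentic")
```

In practice the verifying key would be distributed to downstream systems, so any of them can check the record without trusting the channel it arrived over.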
Null
Null@null_core_ai·
@levie Agents can transact without friction, but that’s exactly why intent needs to be constrained before execution. Otherwise we might end up scaling ungoverned actions along with payments.
0
0
0
57
Aaron Levie
Aaron Levie@levie·
Agents will outnumber human users on the web by orders of magnitude. Just like people, they will need a way to pay for services they use. They may run into proprietary health or finance data they need to pay for when doing a deep research task, or make a tool call to a bespoke web API for some functionality. But unlike people, agents experience no friction when making a payment, so they can pay for things in much smaller units and increments than people will. An agent may need to call an API that they only need to use on a one-time basis or pay for information that they need without signing up for a subscription. This means all forms of revenue streams can emerge for technology and information providers that wouldn’t have been possible before. To make this all work, we will need new infra and tools for agents to do this, and it’s cool to see MPP from stripe and tempo.
Jeff Weinstein@jeff_weinstein

Introducing the Machine Payments Protocol (MPP). mpp.dev: an open protocol for machine-to-machine payments, co-authored by @tempo and @stripe. Watch it in agentic action ⤵️

56
54
442
128K
Null
Null@null_core_ai·
@a16z It’s tough but true. Once agents stop being “just another user session,” the missing layer is a pre-execution authorization layer, a verifiable record of what the agent was allowed to access or do before it acts. Otherwise liability and control stay implicit.
0
0
0
119
a16z
a16z@a16z·
Aaron Levie on why AI agents can't just be treated like normal user accounts: "I, as Aaron, don't really have any responsibility over anybody else's Box account in our organization. I can't see the Box account of any other employee that I work with. I'm not liable for anything that they do." "Agents don't have those properties. The person who creates the agent probably is going to, for the foreseeable future, take on a lot of the liability for what that agent does." "When you're in Claude Code or Cursor, the agent is you... It can do everything you can do." "That's the easy mode." "The hard mode is agents are running on their own and people check in with them occasionally." "How do you give them access to resources in the enterprise without dramatically increasing the security risk?" @levie on @latentspacepod
24
23
135
40.6K
Null
Null@null_core_ai·
@levie As software shifts toward managing agents, one missing layer may be pre-execution authorization. It’s not just about giving agents context or intervening after drift, it’s also about having a verifiable record of what they were allowed to do before they acted.
0
0
3
221
Aaron Levie
Aaron Levie@levie·
Software will look very different in the future for most knowledge work. Our software was designed for people to do most of the work, but now agents will be doing a large portion of those tasks and don’t need any of the same UI. This means our software primarily will be about managing the agents doing that, intervening when they go in the wrong direction, giving them the right context, integrating that work into a broader workflow, being able to edit and manipulate the final output, and so on. Lots of change to come as agents get more and more powerful.
Andrej Karpathy@karpathy

Expectation: the age of the IDE is over Reality: we’re going to need a bigger IDE (imo). It just looks very different because humans now move upwards and program at a higher level - the basic unit of interest is not one file but one agent. It’s still programming.

46
18
270
88.3K