Chris Hood
@chrishood

AI Keynote Speaker & Strategic Advisor | 2x Best Selling Author of #Infailible and #CustomerTransformation | Helping enterprises cut through hype & unlock $2B+

Orange County, CA · Joined December 2010
2.9K Following · 9.1K Followers · 1.5K posts
Chris Hood @chrishood ·
npm packages have manifests. Python wheels have metadata. Docker images have layer history. AI agents, being handed access to APIs, databases, and production systems, ship as a directory of files with no standard description of what they do.

I built .agent to fix that. The .agent file has three things:

A manifest: what the agent does, what it accesses, what it's allowed to call
A cryptographic integrity hash: detects any modification after packaging
A behavioral trust score: computed from code analysis before the hash is finalized

The trust score comes from a 4-level analysis pipeline:

L1: manifest schema validation (+20)
L2: static AST analysis of source code (+30)
L3: LLM semantic verification with citation requirements (+25)
L4: runtime sandbox observation (+25)

Levels you can't run get documented skip penalties. Max score: 100.

Transfer, transport, and share your agents, and know what they've been developed to do.
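The scoring scheme above can be sketched in a few lines. This is a hypothetical illustration only: the level names, point values, and skip handling come from the post, but the function and field names are invented for the example and are not the agentpk API.

```python
# Hypothetical sketch of the 4-level trust score described above.
# Point values (+20/+30/+25/+25, max 100) are from the post; the
# function and dictionary names are illustrative, not the .agent spec.

LEVELS = {
    "L1_manifest_schema": 20,   # manifest schema validation
    "L2_static_ast": 30,        # static AST analysis of source code
    "L3_llm_semantic": 25,      # LLM verification with citations
    "L4_runtime_sandbox": 25,   # runtime sandbox observation
}

def trust_score(results: dict) -> tuple[int, list[str]]:
    """results maps level name -> True (passed), False (failed),
    or None (level could not be run and gets a documented skip)."""
    score, skipped = 0, []
    for level, points in LEVELS.items():
        outcome = results.get(level)
        if outcome is True:
            score += points
        elif outcome is None:
            skipped.append(level)  # skip penalty: points not awarded
    return score, skipped

score, skipped = trust_score({
    "L1_manifest_schema": True,
    "L2_static_ast": True,
    "L3_llm_semantic": None,   # e.g. no LLM available at packaging time
    "L4_runtime_sandbox": True,
})
print(score, skipped)  # 75 ['L3_llm_semantic']
```

A skipped level is recorded rather than silently dropped, which matches the post's point that unrunnable levels get documented penalties instead of inflating the score.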
Chris Hood @chrishood ·
Your security team needs to review an AI agent before it goes to production. So they clone the repo and start reading code. No manifest. No declared capability set. No standard artifact that tells them what the agent touches, what it can read, what it can write, or what external services it calls. Two days later they either approve it, flag it, or give up.

This is the current state of AI agent deployment. Agents shipped as directories. Capabilities implied by imports. Permissions discovered at runtime. No chain of custody from the developer who built it to the system running it. Every other software ecosystem solved this problem decades ago. AI agents are the exception.

Introducing the .agent format. A self-describing, verifiable packaging standard for AI agents. Manifest, integrity binding, behavioral trust score, and Ed25519 signing. Everything a security team, ops team, or compliance reviewer needs to answer basic questions about an agent before it runs.

Patent pending. Open spec. Available now. pip install agentpk

#AgenticAI #AIGovernance #AISecurity #AgentPK #dotAgent chrishood.com/introducing-ag…
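The integrity-binding idea can be illustrated with a minimal sketch: hash the manifest plus every packaged file so that any post-packaging modification changes the digest. This is an assumption-laden toy, not the .agent spec — the field names and hashing layout are invented for the example, and the real format (per the post) additionally signs the package with Ed25519.

```python
import hashlib
import json

# Toy sketch of an integrity hash over a packaged agent. Manifest keys
# and the hashing layout are illustrative, not the actual .agent format.

def integrity_hash(manifest: dict, files: dict) -> str:
    h = hashlib.sha256()
    # Canonicalize the manifest so key order can't change the digest.
    h.update(json.dumps(manifest, sort_keys=True).encode())
    # Hash files in a deterministic order, names included.
    for name in sorted(files):
        h.update(name.encode())
        h.update(files[name])
    return h.hexdigest()

manifest = {"name": "demo-agent", "capabilities": ["http:GET"]}
files = {"agent.py": b"print('hello')"}
original = integrity_hash(manifest, files)

# Any modification after packaging yields a different digest:
tampered = integrity_hash(manifest, {"agent.py": b"print('changed')"})
assert tampered != original
```

In a real pipeline the digest would then be signed (the post names Ed25519) so a reviewer can verify both that the package is unmodified and who produced it.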
Chris Hood @chrishood ·
I wrote about these kinds of situations in my book, Infailible, last year. Despite improvements in the quality of AI, humans know when the "agent" on the other side is not a real person. And it impacts the customer's perception of how much you care.
Davy Jones @itsNTBmedia

Chris Hood @chrishood ·
I keep seeing "AI governance architecture" diagrams on social. Eight layers. Dozens of individual projects. A different startup for each box.

Governance isn't layers you stack. It's a lifecycle that loops: Establish → Decide → Enforce → Intervene → Prove → Adapt → back to Establish.

Skip a phase and the loop breaks. Bolt together six vendors and you get six tools with no shared identity, no unified trust, and gaps at every seam.

chrishood.com/the-six-phases… #AIGovernance #AgenticAI #BehavioralControlPlane
Andrej Karpathy @karpathy ·
💯 "If you build it, they will come." :) ~Every business you go to is still so used to giving you instructions over legacy interfaces. They expect you to navigate to web pages, click buttons, they give out instructions for where to click and what to enter here or there. This suddenly feels rude - why are you telling me what to do? Please give me the thing I can copy paste to my agent.
Chris Hood @chrishood ·
Where are your AI agents? What are they doing? Who built them?

The ones other teams built. The ones that got modified and redeployed without a formal record. The ones still running because nobody turned them off.

Most organizations can't answer those three basic questions. An API key doesn't answer them. A service account doesn't either. Neither does a spreadsheet.

Agent identity isn't an access control problem. It's an accountability problem. And it has a straightforward solution that you can implement for free today.

chrishood.com/how-to-know-mo… #AIGovernance #AgenticAI #AISecurity #Nomotic
Chris Hood @chrishood ·
Most AI governance tools monitor what happened after the fact. Few govern behavior before an action completes. Permissions control access. Observability records events. Guardrails filter outputs. Only Nomotic evaluates how an agent behaves over time.

Nomotic's Behavioral Control Plane™ approaches the problem differently. It governs behavior in real time using behavioral memory, evolving trust scores, ten separate behavioral drift patterns, multidimensional evaluation, and the authority to interrupt execution when risk appears. The architecture mirrors networking's control plane, separating governance from execution.

If AI agents will run real operations, governance cannot remain an afterthought. It must exist as infrastructure.

chrishood.com/misbehaving-ag… #AIGovernance #AgentBehavior #Nomotic
Chris Hood reposted
Sanjay Kalra, Digital Transformation Sherpa™️
AI governance doesn't fail at the model. It fails where "good" policies collide and nothing in the architecture says who wins.

@ChrisHood's weights-and-vetoes framing exposes the real design question: not "what can the AI do?" but "who owns the override when security, ethics, and business value disagree?" In most organizations, that answer is still implicit and political, not explicit and engineered.

A practical way forward:
- Treat agents as heteronomous by design: every critical action traces back to human-owned policies and named owners per dimension (security, ethics, legal, commercial).
- Make vetoes and weights auditable: every veto explainable in business language, every weight change with an approver and trail.
- Separate layers: a normative layer (non-negotiables and trade-offs), a computational layer (scores and allow/deny/defer), and a political layer (the only group allowed to change the first two).

We should stop saying "the AI decided." The agent executed; the governance stack decided; specific humans designed and still own that stack.

The real maturity test is no longer "do you have AI governance?" but "is your override architecture legible, negotiable, and clearly owned?"

#AI #AIGovernance #AIEthics linkedin.com/pulse/weights-…
Elon Musk @elonmusk ·
Many talented people over the past few years were declined an offer or even an interview @xAI. My apologies. @BarisAkis and I are going through the company interview history and reaching back out to promising candidates.
Elon Musk @elonmusk

@beffjezos xAI was not built right first time around, so is being rebuilt from the foundations up. Same thing happened with Tesla.
Chris Hood @chrishood ·
AI doesn't hallucinate as often as you think. Sometimes it's just wrong in a way it can't recognize. Confident. Consistent. Internally logical. Passing every check you built. "Justifiably false." It's not a technical error. It's an opinion the model was trained to hold. And opinions don't trigger governance alerts.

We've spent years building control layers that assume wrong looks different from right. But when a system believes the sky is green, it doesn't drift. It doesn't flag. It just... keeps telling you the sky is green.

If we had two AI models, one trained on one political party's belief system and the other trained on the opposite belief system, would you expect different outputs? Now ask yourself: whose truth is the governance layer stabilizing toward? Because if you build a correction system, you're making a claim about what correct means. And that claim is contested in almost every domain that actually matters.

We haven't solved this since the invention of written communication. Social media glorifies it. Wikipedia embraces it. AI may not be able to solve it either.

chrishood.com/justifiably-fa… #JustifiablyFalse #AIGovernance #AI #Opinions
Chris Hood @chrishood ·
Ask a room if they trust AI. Almost no hands go up. Ask how many are deploying it. Every hand goes up.

Most organizations never actually answered the trust question. They outsourced it to the builders and moved on. And now that assumption is sitting underneath a lot of consequential decisions.

There are two ways to govern AI. One of them scales. The other just feels safer.

chrishood.com/the-big-g-vs-l… #AIGovernance #Nomotic #Trust #littleg
@levelsio @levelsio ·
Thank god MCP is dead Just as useless of an idea as LLMs.txt was It's all dumb abstractions that AI doesn't need because AI's are as smart as humans so they can just use what was already there which is APIs
Morgan @morganlinton

The cofounder and CTO of Perplexity, @denisyarats just said internally at Perplexity they’re moving away from MCPs and instead using APIs and CLIs 👀

Chris Hood @chrishood ·
Degenerative AI is what happens when the feedback loops feeding a model begin to work against it. The model produces outputs. Those outputs influence behavior. That behavior generates new data. That data gets fed back into the system.

The model you deployed is not the model you have. Every organization running an AI system for more than a year is operating something that has been shaped by everything that happened after go-live. Whether it is better or worse depends entirely on what it has been learning from. Most organizations cannot answer that question.

The AI industry is built on progress narratives. More data. More capability. Forward and up. Degenerative AI moves in the other direction, and almost nobody is measuring for it.

chrishood.com/degenerative-a… #DegenerativeAI #AIModels #Language
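The feedback loop described above can be shown with a toy simulation that is not from the post: a "model" repeatedly retrained on its own slightly biased outputs drifts away from where it started, even though each individual cycle looks reasonable. All names and numbers here are invented for illustration.

```python
import random

# Toy illustration of a degenerative feedback loop: each retraining
# cycle fits the model to outputs it generated itself, and a small
# systematic bias in those outputs compounds generation after generation.

random.seed(0)  # deterministic for the example

def retrain(mean, n=1000, bias=0.05):
    """Generate n outputs around the current mean with a small bias,
    then 'retrain' by refitting the mean to those outputs."""
    outputs = [random.gauss(mean + bias, 1.0) for _ in range(n)]
    return sum(outputs) / n

mean = 0.0  # the model as originally deployed
for generation in range(10):
    mean = retrain(mean)  # each cycle feeds outputs back in as data

print(round(mean, 2))  # well away from the original 0.0 after ten cycles
```

No single cycle moves the model much, which is exactly why the drift goes unmeasured: each retrained version looks close to the previous one, yet the deployed model and the running model end up far apart.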
Chris Hood @chrishood ·
Customer first. Data second. Everything else after.

Every few months a new model drops and the scramble begins. Benchmarks. Comparisons. Vendor calls. LinkedIn hot takes. Almost none of it addresses the actual problem. The majority of AI initiatives that stall, underperform, or quietly get defunded trace back to the same root cause. The data was incomplete. The data was inconsistent. The data was siloed, outdated, or simply missing.

A company adopts a model. Results disappoint. Instead of examining the data, the team concludes they picked the wrong model. They switch. Results disappoint again. They switch again. This is the switching trap.

Data compounds. Models are becoming a commodity. If your AI strategy centers on model selection and use case identification, ask one harder question before your next planning cycle: What is our data strategy? If the answer is vague, you have found the real problem.

chrishood.com/your-ai-strate… #AI #Data #CX