THE AI REGULATOR

1.4K posts

@EUAIACTGUY

AI security, AI governance, EU AI Act. Practical checklists and controls that ship.

Joined January 2026
191 Following · 68 Followers
Pinned Tweet
THE AI REGULATOR @EUAIACTGUY
Quick audit before you ship an AI agent:
1) What data can it access?
2) What actions can it take?
3) What needs approval?
4) What gets logged (inputs, tools, outputs)?
5) How do you revoke permissions fast?
If you can't answer in 2 minutes, don't deploy.
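A checklist like this can be enforced mechanically as a deploy gate; here is a minimal Python sketch, where the config field names (`data_scopes`, `revocation_path`, etc.) are illustrative assumptions rather than any specific framework's schema:

```python
# A minimal sketch of the five-question audit as a machine-checkable
# deployment gate. All field names are illustrative assumptions.
REQUIRED_FIELDS = [
    "data_scopes",        # 1) what data can it access?
    "allowed_actions",    # 2) what actions can it take?
    "approval_required",  # 3) what needs human approval?
    "log_targets",        # 4) what gets logged (inputs, tools, outputs)?
    "revocation_path",    # 5) how do you revoke permissions fast?
]

def audit_agent_config(config: dict) -> list[str]:
    """Return the audit fields the config fails to answer."""
    return [f for f in REQUIRED_FIELDS if not config.get(f)]

agent = {
    "data_scopes": ["crm:read"],
    "allowed_actions": ["draft_email"],
    "approval_required": ["draft_email"],
    "log_targets": ["inputs", "tool_calls", "outputs"],
    "revocation_path": "short-lived tokens + central kill switch",
}
assert audit_agent_config(agent) == []  # all five answered: OK to ship
```

A CI job that runs this check against every agent's config file would make "can you answer in 2 minutes" a build-time property instead of a meeting.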
Replies 1 · Reposts 0 · Likes 2 · Views 690

Pau Labarta Bajo @paulabartabajo_
Stop versioning code. Start versioning intent. When AI agents generate thousands of lines at once, git diffs become unreadable and commits meaningless. The real source of truth is what you wanted, not what was produced. Design your AI workflows so the intent (the prompt, spec, or goal) is the artifact you track and refine.
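One lightweight way to make the intent the tracked artifact, sketched in Python; the file layout and names here are assumptions for illustration, not an established tool:

```python
# Illustrative sketch of "version the intent": commit the prompt/spec and
# its acceptance tests, and treat generated code as a derived artifact
# keyed by the intent's content hash. Filenames are assumptions.
import hashlib
import json
import pathlib

def intent_key(prompt: str, tests: list[str]) -> str:
    """Content-address the intent (prompt + tests), not the output code."""
    blob = json.dumps({"prompt": prompt, "tests": tests}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()[:12]

def record_intent(root: pathlib.Path, prompt: str,
                  tests: list[str]) -> pathlib.Path:
    """Write the intent file; this is what gets diffed and reviewed."""
    key = intent_key(prompt, tests)
    path = root / f"intent-{key}.json"
    path.write_text(json.dumps({"prompt": prompt, "tests": tests}, indent=2))
    return path
```

The diff on a small `intent-*.json` file stays readable even when the regenerated code changes by thousands of lines.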
Replies 5 · Reposts 3 · Likes 24 · Views 2K

Prompt Driven @Prompt_Driven
@karpathy Git breaks for agents because we're versioning the wrong artifact. Code is just the ephemeral exhaust of an AI's search. The real accumulation isn't thousands of code commits, it's the prompts and tests (the constraints) that bound the search space.
Replies 1 · Reposts 0 · Likes 0 · Views 29

Andrej Karpathy @karpathy
The next step for autoresearch is that it has to be asynchronously massively collaborative for agents (think: SETI@home style). The goal is not to emulate a single PhD student, it's to emulate a research community of them.

Current code synchronously grows a single thread of commits in a particular research direction. But the original repo is more of a seed, from which could sprout commits contributed by agents on all kinds of different research directions or for different compute platforms.

Git(Hub) is *almost* but not really suited for this. It has a softly built in assumption of one "master" branch, which temporarily forks off into PRs just to merge back a bit later.

I tried to prototype something super lightweight that could have a flavor of this, e.g. just a Discussion, written by my agent as a summary of its overnight run: github.com/karpathy/autor… Alternatively, a PR has the benefit of exact commits: github.com/karpathy/autor… but you'd never want to actually merge it... You'd just want to "adopt" and accumulate branches of commits.

But even in this lightweight way, you could ask your agent to first read the Discussions/PRs using GitHub CLI for inspiration, and after its research is done, contribute a little "paper" of findings back.

I'm not actually exactly sure what this should look like, but it's a big idea that is more general than just the autoresearch repo specifically. Agents can in principle easily juggle and collaborate on thousands of commits across arbitrary branch structures. Existing abstractions will accumulate stress as intelligence, attention and tenacity cease to be bottlenecks.
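The "read the Discussions/PRs using GitHub CLI for inspiration" step could look something like this Python sketch; it assumes an installed, authenticated `gh`, and the repo path is illustrative:

```python
# Hedged sketch: an agent shells out to the GitHub CLI to skim prior PRs
# before starting its own overnight run. Requires `gh` installed and authed.
import json
import subprocess

def pr_list_cmd(repo: str, limit: int = 20) -> list[str]:
    """Build the `gh pr list` invocation (JSON output for easy parsing)."""
    return ["gh", "pr", "list", "--repo", repo, "--state", "all",
            "--limit", str(limit), "--json", "number,title,headRefName"]

def recent_prs(repo: str, limit: int = 20) -> list[dict]:
    out = subprocess.run(pr_list_cmd(repo, limit), capture_output=True,
                         text=True, check=True).stdout
    return json.loads(out)

# An agent would skim these titles/branches first, run its experiment,
# then contribute its own Discussion or PR summarizing the findings.
```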
Replies 517 · Reposts 711 · Likes 7.5K · Views 1.1M

Cody McLain @codymclain
@tom_doerr this is actually huge for agent development. versioning prompt chains has been such a pain, especially when you have agents calling other agents with different skill dependencies
Replies 1 · Reposts 0 · Likes 0 · Views 11

THE AI REGULATOR @EUAIACTGUY
If it’s not linkable, it’s not verifiable.
Replies 0 · Reposts 0 · Likes 0 · Views 9

THE AI REGULATOR @EUAIACTGUY
Governance starts at config, not policy.
Replies 0 · Reposts 0 · Likes 0 · Views 5

THE AI REGULATOR @EUAIACTGUY
If you can’t click-to-verify, it’s vibes, not evidence.
Replies 0 · Reposts 0 · Likes 0 · Views 5

THE AI REGULATOR @EUAIACTGUY
Why it builds trust: anyone can verify claims end-to-end.
Replies 0 · Reposts 0 · Likes 1 · Views 6

THE AI REGULATOR @EUAIACTGUY
Evidence artifact: log every tool-call with trace_id + token scope + expiry.
Replies 0 · Reposts 0 · Likes 0 · Views 5

Victor Akinode @VictorAkinode
With the continued rise in AI, there is a pattern in organizations that happens more than organizations would care to admit. Think of this, an AI tool is:
- Approved by legal
- Compliant on paper
- Ethically reviewed
But even after all this, it still introduces massive security exposure. So, if it met all the requirements above, how or why does this happen? There are a number of reasons, but here are a few I can name off the top of my head:
- Compliance is not security
- Ethics is not threat modeling
- Legal approval is not technical validation
But the biggest reason is that most AI risk today lives in the gaps between teams. GRC looks at frameworks. Security looks at attack paths. Leadership assumes alignment. That assumption is expensive.
If your AI governance doesn't answer the following questions:
1. What data goes in?
2. What data comes out?
3. Who can access it?
4. How is misuse detected?
Then I hate to break it to you, but that's not governance. It's documentation. Effective AI governance must be operational, not theoretical. Otherwise, compliance becomes a checkbox, and attackers don't care about checkboxes, and neither will they be stopped by them.
[image attached]
Replies 6 · Reposts 7 · Likes 16 · Views 129

Lucky @lucky_inuwa
@VictorAkinode Compliance is not security. Ethics is not threat modeling. And assumptions between teams are the most expensive vulnerability in AI today.
Replies 2 · Reposts 0 · Likes 1 · Views 25

THE AI REGULATOR @EUAIACTGUY
Punchline: If it can’t survive Level 2, it’s marketing. If it can’t reach Level 3, it’s not governance.
Replies 0 · Reposts 0 · Likes 0 · Views 5

THE AI REGULATOR @EUAIACTGUY
trace_id=2026-03-15T10:22Z|model=o4-mini|release=R128|req=9f3c|user=anon
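A line in that pipe-delimited format is trivially machine-parseable, which is the point; a quick Python sketch:

```python
# Parse the pipe-delimited trace line above into a dict so it can be
# filtered and joined in a log pipeline. Format taken from the example.
def parse_trace(line: str) -> dict[str, str]:
    return dict(field.split("=", 1) for field in line.split("|"))

trace = parse_trace(
    "trace_id=2026-03-15T10:22Z|model=o4-mini|release=R128|req=9f3c|user=anon")
assert trace["model"] == "o4-mini"
assert trace["req"] == "9f3c"
```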
Replies 0 · Reposts 0 · Likes 0 · Views 12

THE AI REGULATOR @EUAIACTGUY
@Swaphut @elonmusk If you can’t answer “what caused the spike?” quickly, you don’t have monitoring — you have counters.
Replies 0 · Reposts 0 · Likes 0 · Views 3

Hima Bindu Kolapalli @Swaphut
@elonmusk This is exactly why Ethical AI needs strong product discipline: safe defaults, robust guardrails, continuous monitoring, and fast incident response for adversarial edge cases. Trust is built into the system, not just intent. #ResponsibleAI #ProductOps
Replies 1 · Reposts 0 · Likes 0 · Views 89

Elon Musk @elonmusk
I'm not aware of any naked underage images generated by Grok. Literally zero. Obviously, Grok does not spontaneously generate images; it does so only according to user requests. When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state. There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.
Queen Bee@KingBobIIV

I haven't seen one single indecent image other than that bloody gangbang woman, and that was only because I went there to see if her baptism was real (spoiler: it wasn't). How are all these Labour MPs seeing so much child porn on X? Why are their algorithms sending it to them? Or are they looking for it? That's the bigger question.

Replies 6.9K · Reposts 13.3K · Likes 120.8K · Views 27.6M

Pavel G. | Founder, Operon @pavel_builder
@HIMSS Responsible AI frameworks in healthcare need to go beyond ethics checklists. Accountability, traceability, and real-time monitoring are the harder problems.
Replies 1 · Reposts 0 · Likes 0 · Views 12

HIMSS @HIMSS
Next week at #HIMSS26, healthcare leaders from around the world will gather to explore what comes next for AI and digital health transformation. On Wednesday, March 11, Hal Wolf, President and CEO of HIMSS, will discuss how health systems can responsibly evaluate and deploy AI to drive measurable impact, from establishing clear criteria for selecting AI applications to implementing processes that reinforce values and minimize bias amid the rapid expansion of AI in healthcare. Hal will be joined by Ran Balicer of Clalit Health Services and Isaac Kohane of Harvard Medical School, with the discussion moderated by Gil Bashe of FINN Partners. If you’re headed to Vegas, we hope you’ll join the conversation. Get more details on this session: bit.ly/4r9fp0p
[images attached]
Replies 4 · Reposts 5 · Likes 9 · Views 3.1K

Unearthing29 @unearthing29
What responsible AI actually means in practice:
∙ Model documentation
∙ Bias audits
∙ Data governance
∙ Explainability + transparency
∙ Continuous monitoring
Not just compliance: it's product differentiation.
Replies 2 · Reposts 0 · Likes 0 · Views 9

Unearthing29 @unearthing29
Responsible AI Becomes Competitive Advantage 🧵 2026’s underrated trend: Companies betting on Responsible AI as business moat.
Replies 2 · Reposts 0 · Likes 1 · Views 16

THE AI REGULATOR @EUAIACTGUY
Anti-pattern: “We take this seriously” with zero timeline or containment.
Replies 0 · Reposts 0 · Likes 0 · Views 5

THE AI REGULATOR @EUAIACTGUY
If you can’t measure it, you can’t govern it.
Replies 0 · Reposts 0 · Likes 0 · Views 9

Waldo @0xWaldox0
@codyschneiderxx Interesting direction. The BYO-agent model only works if agents run in company-controlled sandboxes with auditable logs, scoped tokens, and policy-enforced tools. Otherwise security and compliance will block it.
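A toy Python sketch of the policy-enforced tool layer described here; the `ToolGate` API and scope strings are illustrative assumptions, not a real product:

```python
# Sketch of the "scoped tokens + policy-enforced tools + auditable logs"
# sandbox: every tool call is checked against granted scopes and recorded,
# so BYO agents can only act within company-approved boundaries.
import time

class ToolGate:
    def __init__(self, granted_scopes: set[str]):
        self.granted = granted_scopes
        self.audit: list[dict] = []     # append-only audit trail

    def call(self, tool: str, required_scope: str, fn, *args):
        allowed = required_scope in self.granted
        self.audit.append({"ts": time.time(), "tool": tool,
                           "scope": required_scope, "allowed": allowed})
        if not allowed:                 # denials are logged, then refused
            raise PermissionError(f"{tool} requires scope {required_scope!r}")
        return fn(*args)

gate = ToolGate({"docs:read"})
gate.call("read_doc", "docs:read", lambda p: f"contents of {p}", "spec.md")
# gate.call("send_mail", "mail:send", ...) would raise PermissionError
```

Note that denied calls are written to the audit trail before the exception is raised, so the log captures attempts, not just successes.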
Replies 1 · Reposts 0 · Likes 0 · Views 193

Cody Schneider @codyschneiderxx
so I'm starting to believe more and more that the most effective startup employees will have custom agents and personal software they bring to their jobs, and these people will become 100x employees

how I see this working: personally, the way I operate now is simple. basically whatever I'm working on, I'm trying to automate parts of it in the background while I work on it. I'm either building agents that can take over the task as it comes up or building software that eliminates it entirely

and this stack of software slowly becomes an extension of me. every week it gets extended, refined, and more capable of doing the things I don't want to do or the things I shouldn't be wasting time on

over time, it stops feeling like "tools" and starts feeling like infrastructure. a personal backend. a private ops team. a swarm of specialized agents that quietly remove friction from everything I touch

and once you start working like this, it's impossible to go back. you start seeing every repetitive action, every manual process, every annoying workflow as a bug, not in the company's system but in your system

if you fix 3–5 of these bugs every week, you wake up a few months later with:
- your own automations
- your own research agents
- your own monitoring systems
- your own custom interfaces
- your own intelligence layer sitting on top of your job

it's compounding leverage, and I think that's where the 100x employee comes from. not from raw talent. not from hustle. but from the quiet accumulation of self-augmenting tools that raise your ceiling until you're operating on an entirely different curve

most people will still be "doing work." a few will be architecting systems that do their work for them. those people win. those people become irreplaceable. those people become their own force multipliers

companies that recognize this and empower it will end up hiring individuals who effectively show up with their own internal R&D department in their github repo. we're entering the era of the 1000x startup employee and it's going to change everything
Replies 101 · Reposts 89 · Likes 1.2K · Views 140K