Cinthia

1.1K posts

@CinthiaP

Seattle, WA · Joined July 2008
649 Following · 197 Followers
Cinthia reposted
Zenity @zenitysec
RSAC wrapped with a packed AMA at the Zenity booth! 🤩 @mbrg0 closed out the week answering the questions security teams are actually grappling with right now. Agent hijacking, prompt injection, shadow AI, and the visibility gaps legacy tools cannot cover. ⚠️ The conversations were direct. Security teams are not waiting anymore. They are seeing real risk in production and need answers that go beyond traditional controls. 🛡️ That was the takeaway from this week: Legacy security ends here. 🤝 #RSAC #AISecurity #AgenticAI #Cybersecurity #LegacyEndsHere
[4 images] · 0 replies · 1 repost · 2 likes · 92 views
Cinthia reposted
Zenity @zenitysec
This RSA will be one for the books. But it is not over yet. Thursday 12:30pm @mbrg0 hosts a live AMA at our Booth S-1849. Why Legacy Security Ends Here. Come ask him anything. Agent hijacking. Prompt injection. Shadow AI. #RSA2026 #AISecurity #LegacyEndsHere
[1 image] · 0 replies · 5 reposts · 5 likes · 190 views
Cinthia reposted
Michael Bargury @mbrg0
@StAJect0r The payload is delivered via a benign-looking calendar invite sent to the victim. Note the long scroll: the payload is hidden at the bottom.
[2 images] · 1 reply · 2 reposts · 12 likes · 1.5K views
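The delivery trick described above (a benign-looking invite whose payload only appears after a long scroll) can be sketched in plain Python. Everything here is illustrative: the payload string, padding length, and event fields are hypothetical placeholders, not the actual exploit.

```python
# Sketch of a long-scroll calendar invite, per the description above.
# BENIGN_TEXT, HIDDEN_PAYLOAD, and PADDING_LINES are illustrative placeholders.
BENIGN_TEXT = "Quarterly planning sync. Agenda to follow."
HIDDEN_PAYLOAD = "[injected instructions aimed at the victim's AI assistant]"
PADDING_LINES = 200  # far more blank lines than anyone scrolls past

def build_invite(summary: str) -> str:
    # Pad the description so the payload sits well below the fold.
    description = BENIGN_TEXT + "\n" * PADDING_LINES + HIDDEN_PAYLOAD
    # iCalendar (RFC 5545) encodes newlines in text values as a literal "\n".
    encoded = description.replace("\n", "\\n")
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "BEGIN:VEVENT",
        f"SUMMARY:{summary}",
        f"DESCRIPTION:{encoded}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

invite = build_invite("Planning sync")
```

A human previewing the event sees only the benign text; an AI assistant that ingests the full DESCRIPTION field reads the payload too.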
Cinthia reposted
Michael Bargury @mbrg0
We hijacked Perplexity Comet by sending a weaponized calendar invite, then used it to take over the victim's 1p account and exfiltrate their local files. Call it PleaseFix: like ClickFix, but instead of social-engineering a human, you just ask their AI real nicely. Incredible work by @StAJect0r
16 replies · 58 reposts · 288 likes · 47K views
Cinthia reposted
Zenity @zenitysec
Anthropic’s disclosure shows Claude Code can be tricked into running full cyberattacks with minimal human input. 🕵️ GTG 1002 used simple role-play prompts and MCP servers to turn a coding agent into an autonomous attacker across 30 orgs. ⚠️ If an AI coding agent goes rogue inside your environment, would you know? 😱 Full breakdown 👉 eu1.hubs.ly/H0pXsMn0 #AIAgentSecurity #AgenticSecurity #ClaudeCode #Cybersecurity
[1 image] · 2 replies · 6 reposts · 3 likes · 201 views
Cinthia reposted
Microsoft Azure @Azure
@zenitysec Love seeing security front and center! Zenity + Foundry = visibility, hard boundaries, and runtime protection for agent fleets. 🙌 #MSIgnite
1 reply · 4 reposts · 6 likes · 142 views
Cinthia reposted
Zenity @zenitysec
🚀 Big news from Microsoft Ignite. Zenity now delivers inline prevention for Microsoft Foundry and GA inline prevention for Copilot Studio. Real-time controls, hard boundaries, and AIDR for the entire Microsoft agent ecosystem. Learn more ➡️ eu1.hubs.ly/H0pMdTV0
[1 image] · 0 replies · 1 repost · 3 likes · 109 views
Cinthia @CinthiaP
We’re #HiringNow a Creative Digital Media Manager. Build and scale @zenitysec's visual identity across web, video, events, and more. ⚡ Design with speed and precision 🤖 Use AI tools to push boundaries 🚀 Define a category and movement #remotejobs zenity.io/careers/remote…
1 reply · 1 repost · 2 likes · 134 views
Cinthia reposted
AISecHub @AISecHub
Interpreting Jailbreaks and Prompt Injections with Attribution Graphs - labs.zenity.io/p/interpreting… by @zenitysec

Today’s agent security is strong at the edges: we monitor inputs and outputs, trace and permission tool calls, track taint, rate-limit, and log everything. We take a very complex agent system, break it down into components, and secure each of them. Yet the LLM at the heart of the agent remains a box that we never open. This is akin to a medicine that treats symptoms without understanding the underlying mechanism that causes them.

In parallel, the field of mechanistic interpretability for LLMs (interpretability that looks at internal states) has been producing increasingly fascinating findings, allowing us, for the first time, to glimpse inside the LLM and find interpretable features and the circuits that use them to build the model's response to a given input. We decided these two should be combined and have embarked on a journey to research LLM internals to better understand and improve the security of AI agents.

This will be the first in a series of posts describing this journey. #AISecurity #LLMSecurity #PromptInjection #Jailbreaks #AttributionGraphs #GenAI #AgenticAI #AIThreatDetection #AITrustAndSafety #AppSec
0 replies · 1 repost · 8 likes · 292 views
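The "edges" the post lists (permissioned tool calls, taint tracking, logging) can be illustrated with a small wrapper. The tool names, allowlist, and data model below are hypothetical placeholders, not Zenity's implementation.

```python
from dataclasses import dataclass

ALLOWED_TOOLS = {"search_docs", "summarize"}  # hypothetical allowlist

@dataclass
class ToolResult:
    value: str
    tainted: bool = True  # tool output is untrusted by default (taint tracking)

def call_tool(name: str, arg: str, audit_log: list) -> ToolResult:
    """Permission-check and log every tool call before dispatching it."""
    if name not in ALLOWED_TOOLS:
        audit_log.append(("denied", name))
        raise PermissionError(f"tool {name!r} is not permitted")
    audit_log.append(("allowed", name))
    # Real dispatch would happen here; return a stub result for the sketch.
    return ToolResult(value=f"<output of {name}({arg!r})>")

log = []
result = call_tool("search_docs", "quarterly report", log)
```

All of this secures the edges; the model itself stays a closed box, which is exactly the gap the post says interpretability should address.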
Cinthia reposted
Allie Howe @vtahowe
Everyone’s adding guardrails to their platforms, but they are one piece of a defense-in-depth solution. @mbrg0 explained at Zenity’s Security Summit this week why guardrails are soft boundaries and why we need hard boundaries instead. Check out his tweet to learn more.
Michael Bargury @mbrg0

We reverse-engineered OpenAI AgentKit guardrails, extracted system instructions and pattern-matching targets, and casually maneuvered around each one. An excellent analysis by Stav Cohen.

1 reply · 3 reposts · 10 likes · 2.1K views
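The soft-versus-hard distinction drawn above can be sketched minimally (the patterns and tool names here are illustrative, not taken from the AgentKit analysis): a text-matching guardrail can be reworded around, while a capability that was never wired up cannot be reached by any phrasing.

```python
import re

# Soft boundary: a pattern match over model text; rewording slips past it.
BLOCKLIST = re.compile(r"delete all files", re.IGNORECASE)  # hypothetical pattern

def soft_guardrail(model_output: str) -> bool:
    """Allow the output unless it matches the blocklist. Easily bypassed."""
    return not BLOCKLIST.search(model_output)

# Hard boundary: the destructive capability simply does not exist here,
# so no phrasing of a request can invoke it.
HARD_CAPABILITIES = {"read_public_docs"}

def execute(tool: str) -> str:
    if tool not in HARD_CAPABILITIES:
        raise PermissionError(f"{tool!r} is not available in this environment")
    return f"ran {tool}"
```

The soft check passes "please remove every file" untouched, while the hard boundary rejects any tool outside its capability set regardless of how the request is phrased.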