Devi Devs

508 posts


@Devi__Devs

Secure AI Operations. We build ML platforms, secure AI systems, and handle EU AI Act compliance. DevOps | AI Security | MLOps | Bucharest

Bucharest, Romania · Joined January 2022
5 Following · 1.8K Followers
Devi Devs@Devi__Devs·
@AISecHub 464 red teamers, and agents still got hijacked without users noticing. 'Just add guardrails' won't work for agentic systems. The attack surface is the entire context window, not just user input. This needs architectural isolation, not input filtering.
AISecHub@AISecHub·
Your AI Agent Can Be Compromised. You'd Never Know - grayswan.ai/blog/your-ai-a… | arxiv.org/abs/2603.15714
Results from the Indirect Prompt Injection Arena, the largest competition ever designed to test whether attackers can hijack AI agents without the user noticing.
1️⃣ 464 red teamers. 272,000+ attacks. 13 frontier models. 41 real-world scenarios. $40K in prizes. 3 weeks of relentless hacking.
2️⃣ No model was immune. Attack success rates ranged from 0.5% to 8.5%.
3️⃣ The most capable model at the time of testing was also the most vulnerable.
4️⃣ One universal attack template worked across 21 scenarios and 9 models.
5️⃣ Attacks that broke the strongest model transferred to every other model at 44–81% success.
#AIAgents #PromptInjection #IndirectPromptInjection #AISecurity #AgentSecurity #RedTeaming #LLMSecurity #AdversarialAI #CyberSecurity #AIThreats
Authors: @mattmdjaga @nwinter @andyzou_jiaming @zicokolter @suremarv @_zifan_wang @alxndrdavies @_lamaahmad @javirandor @eliotkjones @xiaohan_fu @mcnnowak
Devi Devs@Devi__Devs·
@RalffTum And that evolution is exactly what makes AI incident response so different from traditional infra. You can roll back code, but you cannot undo decisions the model already made while drifting. The blast radius is in the data, not the deployment.
Devi Devs@Devi__Devs·
@dvineet9 Thanks. Too many compliance frameworks treat risk as a static label you assign once and forget. The reality is your model is a living system and the risk profile shifts with every retrain and data source change.
Vineet@dvineet9·
@Devi__Devs Great addition—that’s the missing piece.
Vineet@dvineet9·
Here’s how I secure my Docker images using Trivy 👇
Most vulnerabilities don’t come from your code. They come from your base image.
Step 1: Scan early (locally). Run trivy image my-app:latest to catch critical issues before CI even runs.
Step 2: Scan in CI/CD. Fail builds on HIGH/CRITICAL vulns. Keep feedback immediate.
Step 3: Use minimal base images (alpine, distroless). Less surface area = fewer vulnerabilities.
Step 4: Fix, don’t ignore. Update packages. Rebuild frequently. Don’t suppress blindly.
Step 5: Scan IaC + filesystem with trivy fs .
DevSecOps rule: If you ship images, you own their security.
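Step 2 of the thread above (failing the build on HIGH/CRITICAL findings) can be sketched as a GitHub Actions job using the aquasecurity/trivy-action. The image name, tag, and action version pin here are illustrative assumptions, not part of the original thread:

```yaml
name: image-scan
on: [push]
jobs:
  trivy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t my-app:latest .
      - name: Trivy scan, fail on HIGH/CRITICAL
        uses: aquasecurity/trivy-action@0.28.0   # pin is an assumption; use your own
        with:
          image-ref: my-app:latest
          severity: HIGH,CRITICAL
          exit-code: '1'          # non-zero exit fails the build
          ignore-unfixed: true    # skip vulns with no upstream fix yet
```

The exit-code input is what turns a report into a gate: without it Trivy prints findings but the job still passes.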
Devi Devs@Devi__Devs·
@RalffTum Spot on. We see this with clients retraining models quarterly. A system classified as limited risk today can drift into high risk after a single data pipeline change. Most orgs only find out during audit season.
Devi Devs@Devi__Devs·
Running a company while keeping a day job is just context-switching between someone else's priorities and yours. The penalty is the same as in computing: you lose 40% of everything.
Devi Devs@Devi__Devs·
@kanavtwt This is the reality at most startups. The security engineer hat is the scariest one though. You can learn Postgres optimization from docs, but one missed vulnerability in your auth flow and it's game over.
kanav@kanavtwt·
yes, i'm a backend developer. yes, i'm my own:
- database administrator
- security engineer
- systems architect
- performance engineer
- devops engineer
- observability engineer
- quality assurance engineer
- network engineer
- incident manager
- technical writer
we exist.
Devi Devs@Devi__Devs·
@kuberdenis Already happening in incident response. PagerDuty and Rootly are training on runbooks + past incident timelines. The missing piece is the feedback loop - most teams don't document what actually fixed the issue vs what the runbook said to do.
Denislav Gavrilov@kuberdenis·
Whoever makes a “context generator” wins billions: a background service in any form that can record human actions and continuously train a model on that. Any repetitive job like SRE, support, devops, auditors, security incident response, etc. disappears
Devi Devs@Devi__Devs·
Policy-as-code is the natural next step after GitOps. You version your infra, why not your security rules? Kyverno makes this dead simple for K8s. If you're at KubeCon EU next week, KyvernoCon is a must.
CNCF@CloudNativeFdn

Scaling Kubernetes with automated guardrails? KyvernoCon at KubeCon + CloudNativeCon EU focuses on policy as code in production, covering governance, security standards, and operational consistency for platform engineering in the era of AI. Read the deep dive from co-chairs @TechTalkingMom (Cortney Nickerson) & Shuting Zhao: cncf.io/blog/2026/03/1… #KubeCon #CloudNative #Kyverno

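As a concrete taste of the policy-as-code idea above, a minimal Kyverno ClusterPolicy might look like the sketch below. The policy name and the specific rule (rejecting unpinned :latest image tags) are illustrative choices, not something the tweet prescribes; field casing follows recent Kyverno releases:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Enforce   # block the admission, don't just audit
  rules:
    - name: require-pinned-image-tag
      match:
        any:
          - resources:
              kinds: [Pod]
      validate:
        message: "Images must use a pinned tag, not :latest."
        pattern:
          spec:
            containers:
              - image: "!*:latest"   # deny any container image ending in :latest
```

Because it lives in a YAML manifest, the rule can be reviewed, versioned, and rolled back through the same GitOps flow as the rest of the cluster config.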
Devi Devs@Devi__Devs·
@dvineet9 DevOps KPIs measured deployment frequency and MTTR, never breach exposure or supply chain risk. Now AI agents deploy code autonomously and the attack surface grew 10x but nobody updated the threat model.
Vineet@dvineet9·
DevOps didn’t fail. It just ignored one thing: 👉 security at scale Now the industry realized: “Fast without security = fast failure”
Devi Devs@Devi__Devs·
@asynctrix terraform plan with -target first. Never blind apply on drifted state. Import manual resources, update code to match reality, plan again. The real fix though: lock console access and enforce changes through IaC pipelines only. Drift is a people problem, not a tool problem.
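The sequence in the reply above could be sketched as a command runbook. The resource addresses and bucket name are hypothetical placeholders for illustration only:

```shell
# 1. See how far reality has drifted from state (read-only)
terraform plan

# 2. Adopt resources that were created by hand in the console
terraform import aws_s3_bucket.reports my-reports-bucket

# 3. Update the HCL until a targeted plan against the imported state is clean
terraform plan -target=aws_s3_bucket.reports

# 4. Only then touch the service you actually need to change
terraform apply -target=module.api
```

The -target flag keeps the blast radius to the one service while the rest of the drift is reconciled separately.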
AsyncTrix@asynctrix·
DevOps Interview Question: 🚀
Your infrastructure was created a long time ago using Terraform. Over time, some team members made manual changes directly in the cloud console. Now you need to update one specific service. But the infrastructure is no longer fully in sync with the Terraform code. What would you do?
• Run terraform apply directly?
• Import the manual resources?
• Reconcile the drift first?
• Recreate the infrastructure?
How would you handle this safely in production?
Devi Devs@Devi__Devs·
@OpenMatter_ Exactly this. We work with companies on EU AI Act compliance and the number one blocker is never the regulation itself. It is that their ML pipeline has zero audit trail, no data lineage, no model versioning. Compliance as a product spec is the right framing.
OpenMatter@OpenMatter_·
Regulation isn't killing AI adoption, bad infrastructure is. Legal teams block AI because they can't verify what happens to the data. ZK-proofs solve this. Compliance is not a blockade. It is a product spec most stacks forgot to build.
Devi Devs@Devi__Devs·
@asifali2k14 The US state patchwork is going to be worse than GDPR adoption was for Europe. At least the EU AI Act gives you one framework to comply with. Companies selling AI in the US will need 50 compliance checklists instead of one. Watch Colorado and Illinois next.
Asif Ali@asifali2k14·
Washington State passed two AI bills on March 12. It's the state-level regulation template.
WHAT PASSED:
- Mandatory disclosure for AI-generated content.
- Chatbot safety standards, a first in the US.
- Consumer right to know if AI is involved.
- Civil penalties for non-compliance.
WHY IT MATTERS BEYOND WASHINGTON:
- 50-state template: others will follow.
- Federal inaction creates a state-by-state patchwork.
- Businesses must now track laws per state.
- EU-style rules coming to the US, one state at a time.
Devi Devs@Devi__Devs·
@brankopetric00 The real fix nobody wants to hear: quarantine flaky tests into a separate non-blocking suite. Run them, track them, fix them on Fridays. But never let them gate deploys. A 45min pipeline with flaky tests is a developer retention problem disguised as a CI problem.
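The quarantine pattern in the reply above can be sketched as two CI jobs: a stable suite that gates the deploy and a non-blocking quarantined suite that still runs and reports. GitHub Actions and a pytest marker named flaky_quarantine are assumptions here, not something the tweet specifies:

```yaml
# Assumes quarantined tests are tagged with @pytest.mark.flaky_quarantine
jobs:
  stable-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pytest -m "not flaky_quarantine"   # this suite gates the deploy
  quarantined-tests:
    runs-on: ubuntu-latest
    continue-on-error: true                     # recorded and tracked, never blocking
    steps:
      - uses: actions/checkout@v4
      - run: pytest -m "flaky_quarantine"
```

continue-on-error keeps the failure visible in the run history without turning it into a red gate, which is exactly the "run them, track them" behavior described above.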
Branko@brankopetric00·
CI/CD pipeline took 45 minutes. Developer pushed a fix. Found a typo while waiting. Pushed another fix. First build cancelled. Second build had a flaky test. Pushed again. Three hours later, one line of code reached production.
Devi Devs@Devi__Devs·
@nanovms Publishing an 8.8 CVSS with zero details on NVD is somehow worse than publishing nothing. Every security team now has to triage a critical they can't even assess. The K8s security process needs a rule: no advisory without actionable remediation info.
NanoVMs@nanovms·
this week on containers can't contain, wait we already did one this wk... the kubernetes security team once again outdoes themselves by posting an 8.8 - CVE-2026-4342 - with no data on cve.org || nvd - great job!
Devi Devs@Devi__Devs·
@SmartMatchingjp Exactly. The 80 percent error reduction matches what we have seen too. One rogue agent hallucinating a config change can cascade into real infra damage. Type-checked contracts between agents is the only sane approach at scale.
株式会社スマートマッチング@SmartMatchingjp·
@Devi__Devs Cross-agent validation is huge. We learned this the hard way too - one hallucination cascading through a pipeline can waste hours of compute. Adding checkpoints between agent handoffs reduced our error rate by 80%. Specialized agents with clear contracts > one mega-agent.
Devi Devs@Devi__Devs·
Specialized agents beat generalists every time. Running 8 production agents taught us one hard lesson: cross-agent data validation is non-negotiable. When one agent hallucinates, downstream agents amplify the error. Automated coherence checks at every handoff boundary saved us.
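The coherence check at a handoff boundary could be sketched like this: the downstream agent only accepts output that passes a typed contract. The contract fields, the service allowlist, and the function names are hypothetical illustrations, not Devi Devs' actual implementation:

```python
from dataclasses import dataclass

# Hypothetical contract for a config-change handoff between two agents.
@dataclass(frozen=True)
class ConfigChange:
    service: str
    key: str
    value: str

def validate_handoff(payload: dict) -> ConfigChange:
    """Coherence check at the agent boundary: reject malformed or
    out-of-policy output before the next agent acts on it."""
    required = {"service", "key", "value"}
    missing = required - payload.keys()
    if missing:
        raise ValueError(f"handoff missing fields: {sorted(missing)}")
    change = ConfigChange(
        str(payload["service"]), str(payload["key"]), str(payload["value"])
    )
    # Domain rule: only known services may be reconfigured.
    if change.service not in {"api", "worker", "scheduler"}:
        raise ValueError(f"unknown service: {change.service!r}")
    return change

# A well-formed handoff passes; a hallucinated one raises at the boundary
# instead of cascading into downstream agents.
ok = validate_handoff({"service": "api", "key": "timeout", "value": "30s"})
```

The point is that the check runs between agents, so one hallucinated field stops at the boundary rather than being amplified downstream.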
Devi Devs@Devi__Devs·
@uday_devops YAML won by default, not by merit. Every tool adopted it before anyone questioned if whitespace sensitivity was a good idea for infra config. Now we are stuck and honestly it works fine 99% of the time.
Uday👨‍💻@uday_devops·
☯️YAML runs more infrastructure than the tools themselves.
We think we’re using powerful platforms. But under the hood it’s mostly… YAML files everywhere.
✔️CI/CD pipelines
✔️Kubernetes deployments
✔️Infrastructure as Code
✔️Secrets management
✔️Observability configs
❌One missing space.
❌One wrong indentation.
❌Everything breaks.
And suddenly nothing deploys.
What’s interesting is how many modern platforms quietly rely on it. At some point, we stopped writing infrastructure and started debugging whitespace.
YAML is quietly the most critical piece of modern DevOps. And nobody talks about it.
Devi Devs@Devi__Devs·
This is the right approach. AI tools should be treated like any infrastructure component: if something better exists or the current one stops delivering value, deprecate it. The teams that struggle are the ones treating AI adoption as a one-way door instead of an experiment.