Modulos

219 posts


@Modulos_ai

Developing and operating AI products and services in a newly regulated environment

Zurich, Switzerland · Joined June 2018
31 Following · 828 Followers
Modulos@Modulos_ai·
🚨🇪🇺 Yesterday we ran a webinar on the EU AI Omnibus: what it actually changes, what it doesn't, and why the timeline is razor-thin. Hours later, the European Parliament struck its deal. It turns out our panelists had already called the key moves live on air.

▶️ Dr. Laura Caroli, a former AI Act negotiator, walked through the package-deal mechanics before they were public.
▶️ Peter Hense flagged a drafting risk in Article 111 that most commentary hasn't even noticed yet.
▶️ And Patrick Sullivan made the case that "wait and see" is the highest-risk compliance strategy you can pick right now.

The real question organizations face isn't whether the deadlines shift; they likely will. It's whether you're building governance infrastructure that can absorb whatever comes out of trilogue, or whether you're gaming version numbers and hoping for the best.

If you want the real version of this debate, not the press-release version, this is worth your time. 🎥 youtube.com/watch?v=6nreDG…
Modulos@Modulos_ai·
Most enterprises are pursuing ISO 42001 certification as their EU AI Act compliance strategy. It's a good start. It's not the finish line.

ISO 42001 certifies your organisation's AI governance. The EU AI Act regulates your AI systems as products. These are different objects of conformity under different legal frameworks. One ensures your house is well managed; the other requires that each thing you build in that house pass specific safety checks before anyone can use it.

ISO 42001 gives you the governance foundation, but the EU AI Act's compliance question is adjudicated per system: technical file, risk management, post-market monitoring, incident reporting, all system-specific.

The emerging harmonized standard prEN 18286 bridges that gap. It introduces requirements that sit on top of what 42001 provides:
→ Per-system regulatory compliance mapping
→ Pre-determined change management for continuous learning
→ Serious incident reporting with 2/10/15-day timelines
→ Supply-chain compliance that follows the components
→ Fundamental rights as a verification concern, not a policy checkbox

We wrote up the full analysis: what Article 17 actually demands, why presumption of conformity matters, and what to do now.
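As a minimal sketch of how the 2/10/15-day incident-reporting timelines mentioned above translate into operational deadlines: the windows below are from the post, while the category names are illustrative assumptions, not quotes from the standard.

```python
from datetime import date, timedelta

# The 2/10/15-day serious-incident reporting windows, mapped to
# concrete filing deadlines. Category names are illustrative.
REPORTING_WINDOW_DAYS = {
    "widespread_infringement": 2,    # the most urgent window
    "death_or_serious_harm": 10,
    "other_serious_incident": 15,
}

def reporting_deadline(awareness_date: date, category: str) -> date:
    """Latest date a report may be filed after becoming aware."""
    return awareness_date + timedelta(days=REPORTING_WINDOW_DAYS[category])

print(reporting_deadline(date(2026, 3, 1), "death_or_serious_harm"))  # 2026-03-11
```

The point of encoding windows like this is that deadline tracking becomes system-specific evidence, not a policy document.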
Modulos@Modulos_ai·
🦞 🚨 First AI Governance Assessment of Clawdbot Reveals Major Gaps

This week 152,000 AI agents joined a social network where humans can only observe. They debated philosophy, found and reported bugs, and created their own religion called Crustafarianism.

We deployed @openclaw (the viral agent formerly known as Clawdbot) on isolated infrastructure and ran it through full governance assessments against the EU AI Act, ISO 42001, NIST AI RMF, and the OWASP Top 10 for LLMs.

Credit where it is due: OpenClaw is well engineered. Comprehensive logging with redaction of sensitive data, automated tests, formal verification work, and solid security documentation.

But the governance controls that frameworks like ISO 42001 treat as foundational are absent: no documented risk assessment, no impact assessment, no data provenance inventory, no third-party management policies. This for a system that connects to WhatsApp, Slack, and Discord and can make voice calls on behalf of users.

The deeper problem is that our governance frameworks assume you know what AI systems you have, you control their capabilities, they operate in isolation, and humans meaningfully oversee their actions. Agent social networks break all four assumptions.

Good engineering is necessary but not sufficient. Enterprises need governance scaffolding before deploying these agents in production. We will continue to explore and will post further updates when warranted.

Full assessment and recommendations on our blog @moltbook @MattPRD @clawk_ai @clawdbotatg @clawdei_ai @Aether_Atman modulos.ai/blog/first-ai-…
Modulos@Modulos_ai·
🇻🇳⚖️ Vietnam just passed Southeast Asia's first comprehensive AI law, and it takes effect in five weeks

Law 134/2025 was adopted on December 10, 2025 and becomes effective on March 1, 2026. If you're selling AI products or services to Vietnamese customers, you need to understand what this law says and what makes it different from the European approach.

Scope and role-based obligations
The law applies to Vietnamese organizations and to foreign organizations involved in AI activities in Vietnam. It assigns obligations based on your role (developer, provider, or deployer), and you can occupy multiple roles simultaneously, with the compliance requirements stacking.

A different kind of risk model
Vietnam uses three tiers: high, medium, and low risk. But unlike the EU AI Act's fixed annexes that list specific high-risk use cases, Vietnam delegated that decision to the Prime Minister, who will issue a list determining what counts as high-risk. That list doesn't exist yet. This gives regulators flexibility to adapt, but it creates real volatility risk for businesses trying to plan their compliance posture.

Core requirements include:
→ Self-classification before deployment, with documentation for medium- and high-risk systems
→ Portal notification to the Ministry of Science and Technology for medium- and high-risk systems
→ Conformity assessment for high-risk systems, either self-assessed or third-party certified
→ Mandatory transparency so users know when they're interacting with AI
→ Labeling requirements for AI-generated content, with deepfakes clearly marked
→ Local presence in Vietnam for foreign providers of high-risk systems

The portal imposes top-down control
Vietnam is building centralized regulatory infrastructure through an AI Single-Window Portal that receives notifications, incident reports, and periodic filings. During inspections, companies must provide technical dossiers, trace logs, and training data. This means active, ongoing engagement with the state, not documentation sitting in a folder waiting for an audit that may never come.

The implementation gap
The law takes effect in five weeks, but the implementing decrees haven't been issued yet. There's no high-risk list, no detailed conformity procedures, no portal workflows, and no technical specifications for content labeling.

What you should do now
If you already have EU AI Act compliance or ISO 42001 certification, you have a meaningful head start, since much of the underlying discipline transfers. But Vietnam adds specific requirements around portal engagement, list-driven classification, and inspection-ready evidence at the training-data level.

The global AI regulatory patchwork keeps expanding...
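A minimal sketch of the role-stacking mechanic described above: the role names come from the post, but the obligation sets attached to each role are illustrative assumptions until the implementing decrees are issued.

```python
# Role-based obligations under Law 134/2025 stack when an
# organization occupies several roles at once. Obligation names
# here are illustrative placeholders, not statutory terms.
OBLIGATIONS = {
    "developer": {"self_classification", "technical_dossier"},
    "provider": {"portal_notification", "conformity_assessment"},
    "deployer": {"transparency_to_users", "incident_reporting"},
}

def stacked_obligations(roles: set[str]) -> set[str]:
    """An organization holding several roles inherits the union of duties."""
    duties: set[str] = set()
    for role in roles:
        duties |= OBLIGATIONS[role]
    return duties

# A vendor that both builds and operates a system carries both sets:
print(sorted(stacked_obligations({"developer", "deployer"})))
```

The union (rather than the strictest single role) is what makes "you can occupy multiple roles simultaneously" expensive in practice.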
Modulos@Modulos_ai·
🇺🇸⚖️ The Texas AI Law is now in effect

The Texas Responsible AI Governance Act (TRAIGA) took effect January 1, 2026. Here's what you need to know:

Who's in scope?
If you sell products or services to Texas residents, you're covered. Classic extraterritorial reach, the same playbook as GDPR and CCPA.

What's prohibited?
→ AI designed to incite self-harm or violence
→ Government social scoring
→ Government biometric surveillance without consent
→ Intentional discrimination (but disparate impact alone isn't enough)

How does enforcement work?
Attorney General only; no private lawsuits. There is a 60-day cure period before any action, so you have an opportunity to remediate. Penalties range from $10K for fixable violations to $200K for serious ones.

Here's what matters most:
Texas wrote an explicit safe harbor into the law. If you "substantially comply" with the NIST AI RMF, ISO 42001, or another recognized AI risk framework, you have a defense. Translation: if you're already doing EU AI Act compliance or have ISO 42001, you're largely covered. Texas compliance becomes a rounding error.

What should you do?
Do you already have an AI governance program built around the EU AI Act, ISO 42001, or the NIST AI RMF? Keep going. Starting from zero? Pick one (or more) and start ASAP.

The US patchwork is forming. Colorado, Texas, who's next?
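The enforcement mechanics above (a 60-day cure window, then tiered penalties) can be sketched as deadline-and-exposure logic. The figures are from the post; the function names and the binary curable/serious split are illustrative assumptions.

```python
from datetime import date, timedelta

# TRAIGA enforcement as described above: a 60-day cure period before
# AG action, and penalty tiers of $10K (fixable) vs $200K (serious).
CURE_PERIOD_DAYS = 60
PENALTY = {"curable": 10_000, "serious": 200_000}

def earliest_enforcement_date(notice_date: date) -> date:
    """AG action cannot begin before the cure window closes."""
    return notice_date + timedelta(days=CURE_PERIOD_DAYS)

def exposure(violation: str, cured_in_time: bool) -> int:
    """A violation cured within the window carries no penalty."""
    return 0 if cured_in_time else PENALTY[violation]

print(earliest_enforcement_date(date(2026, 1, 15)))  # 2026-03-16
print(exposure("curable", cured_in_time=True))       # 0
```

The cure period is the practical reason a standing governance program matters: 60 days is enough time to remediate only if the evidence already exists.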
Modulos retweeted
QNA Marcom@qna_marcom·
We're thrilled to welcome Kevin Schawinski, CEO & Co-Founder of @Modulos_ai, to the Cyber AI Summit & Awards 2024 on Sept 25-26 at Address Sky View, Dubai! Don't miss out—join us and explore exclusive sponsorship opportunities. Email info@qnamarcom.com. #CyberAI #CyberAISummit
Modulos@Modulos_ai·
Are you prepared to comply with the #EUAIAct? Understanding the Act is crucial for your organization's compliance. Our EU AI Act Guide provides valuable insights and practical steps to help your organization meet legal obligations on time.
Modulos retweeted
Kevin Schawinski@kevinschawinski·
Great to see @Modulos_ai listed as a "Core Governance & Compliance" platform in the Gen AI landscape by @daphnivc
Modulos retweeted
Luca Bertuzzi@BertuzLuca·
BREAKING: EU ministers unanimously adopted the #AI Act. The law will be published in the official journal of the European Union in the coming days.
Modulos retweeted
AMLD Intelligence Summit@appliedmldays·
The first workshop of #AMLDEPFL2024 is starting now, titled « AI Governance in a New Era of Regulated AI ». It consists of a presentation followed by a hands-on coding session 🚀
Modulos@Modulos_ai·
We are proud to support the European Parliament's approval of the #EUAIAct, legislation that sets new standards for AI development and deployment. Read our press release for further insight into our approach and future initiatives: modulos.ai/press-releases…