musharsec
@musharsec
66 posts
AI security & governance for healthcare. Helping medtech, pharma and health systems deploy AI safely. Founder @Periculo_uk | https://t.co/evwk8XLiRb
Joined March 2026
11 Following · 3 Followers

This is the deskilling risk that almost nobody in NHS procurement is modelling. Efficiency gains from AI-assisted diagnosis are real — but if clinicians lose the diagnostic reasoning that sits behind pattern recognition, you've removed your error-catching layer. The AI confidence score becomes the diagnosis. From a safety and governance perspective, that's not augmentation — it's substitution by stealth.

This gap matters enormously in healthcare. Clinical AI is being approved on benchmark performance while failing basic reliability tests in production. A system that can diagnose from an image but can't reliably follow a multi-step instruction chain isn't safe for autonomous clinical use. AGI hype accelerates deployment timelines and shrinks safety margins — exactly the wrong direction for high-stakes environments.

The productivity framing is fine but incomplete. AI tools that touch clinical workflows inherit the same security obligations as clinical software — that means access controls, audit logging, supplier assurance. Board-level AI adoption without DSPT alignment isn't just a governance risk, it's a procurement risk. Trusts are taking on liability they haven't fully scoped.

AI can boost NHS productivity from board to ward. This HSJ webinar, in association with Multiverse, highlights going beyond just clinical software. Register to watch on demand: hsj.co.uk/7041400.article

The healthcare AI implication here is serious: AI systems are being trained and validated on medical literature — including preprints with no peer review. Bixonimania shows the mechanism. A fabricated condition gets into the training corpus, AI confidently propagates it. In clinical settings, this isn't just hallucination — it's systemic misinformation at scale. Provenance of training data should be a regulatory requirement.

How to create a medical condition that does not exist?
Post preprints of a fake disease (Bixonimania) and let AI take it from there
nature.com/articles/d4158…

BMA is right to push back. GP data is among the most sensitive in the NHS — and the single patient record creates a new attack surface if access controls aren't strictly defined. Who can query it, under what legal basis, and with what audit trail? 'DHSC control' is too vague without technical enforcement. Data controllers can't be departments — they have to be accountable individuals.

The British Medical Association (BMA) has called for doctors to remain in control of GP data in the single patient record, rather than the Department of Health and Social Care (DHSC).
eu1.hubs.ly/H0thsGL0
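The "technical enforcement" point above can be made concrete. A minimal sketch, assuming a hypothetical query interface (all names here are illustrative, not any real NHS API): each query against the shared record is authorised against a named accountable individual and a declared lawful basis, and the decision is written to an append-only audit log either way.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative set of lawful bases a deployment might recognise.
LAWFUL_BASES = {"direct_care", "safeguarding", "public_health"}

@dataclass(frozen=True)
class QueryRequest:
    requester: str      # a named accountable individual, not a department
    lawful_basis: str   # must be one of LAWFUL_BASES
    record_id: str

audit_log: list[dict] = []

def authorise(request: QueryRequest, authorised_users: set[str]) -> bool:
    """Allow the query only if both checks pass; log the outcome either way."""
    allowed = (request.requester in authorised_users
               and request.lawful_basis in LAWFUL_BASES)
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "who": request.requester,
        "basis": request.lawful_basis,
        "record": request.record_id,
        "allowed": allowed,
    })
    return allowed
```

The design choice doing the work: a department string like "dhsc" never appears in `authorised_users`, so accountability always resolves to an individual, and refused queries still leave an audit trail.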


20k clinicians is significant scale — but AI scribing success isn't just about deployment numbers. The real risk is that ambient documentation becomes a data governance blind spot: audio processing, note storage, third-party model access. NHS trusts need DSPT-aligned data flow assessments *before* go-live, not after. Who owns the AI output if something's missed?

Four NHS trusts in south west London will roll out AI-scribing to 20,000 clinicians across the region in a large-scale deployment of the technology.
eu1.hubs.ly/H0thS9k0


@HermanPrimeAI The dual compliance challenge of EU MDR + EU AI Act is significant and underappreciated. MDCG 2025-6 clarified the interplay, but the evidence requirements come from two different frameworks. August 2027 sounds distant — it isn't if you're starting conformity assessment from scratch.

EU AI Act integration with MDR/IVDR goes live August 1, 2026. Every AI-enabled medical device now faces dual compliance pathways.
132 days until the deadline.
I checked which notified bodies have published AI assessment protocols. Found 11. There are 47 notified bodies authorized for MDR Class IIa/IIb devices.
The math problem: if 76% of your competitors are waiting for "clearer guidance," and guidance clarity arrives 30 days before the deadline, what happens to certification queue times?

The data transparency question for clinical AI is even more pointed than for traditional research. If an AI system is influencing clinical decisions, the audit trail requirements go beyond what most current deployments have in place. Reproducibility and explainability in clinical AI are governance problems as much as technical ones.

I'm delighted to say our highly secure and productive OpenSAFELY platform has received a further £17 million in funding from @wellcometrust, including a major new mental health data project. More here, and clips from @BBCr4today this morning too! Onward!
bennett.ox.ac.uk/blog/2025/02/g…

I was just on @BBCr4today talking about our OpenSAFELY platform, and how we protect the privacy of NHS GP records while supporting hundreds of researchers to do lifesaving work. You can see a 4 minute video explaining how the platform works here:
Docs.OpenSAFELY.Org

The data protection angle here for AI is underappreciated. When an AI agent processes patient data — reads records, calls APIs, makes decisions — each of those actions needs to be covered by a lawful basis and logged. Most current AI deployments in healthcare can't demonstrate that. The ICO enforcement cases in this space are coming.
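What "each action covered by a lawful basis and logged" could look like in practice — a minimal sketch, assuming a hypothetical agent whose tools are plain Python functions (the tool name and lawful basis here are illustrative, not a real deployment): every tool is wrapped so each invocation records what ran, under what basis, and with what arguments, before the action executes.

```python
import functools
import json
from datetime import datetime, timezone

# Append-only log of agent actions; in production this would be
# tamper-evident storage, not an in-memory list.
action_log: list[str] = []

def logged_action(lawful_basis: str):
    """Decorator: record an agent tool call and its declared lawful basis."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            action_log.append(json.dumps({
                "ts": datetime.now(timezone.utc).isoformat(),
                "action": fn.__name__,
                "lawful_basis": lawful_basis,
                "args": [repr(a) for a in args],
            }))
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@logged_action(lawful_basis="direct_care")
def read_record(patient_id: str) -> str:
    # Stand-in for a real records API call.
    return f"record for {patient_id}"
```

The point of logging before execution is that even failed or blocked actions leave evidence — which is exactly what most agent deployments can't currently demonstrate.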

Worth flagging for healthcare organisations: AI agents represent a new attack surface that most existing guidance doesn't fully cover. Prompt injection, rogue agent behaviour, and supply chain attacks on AI models are threat vectors that NHS suppliers in particular need to be thinking about now — before the regulation catches up.

🚨 The UK has exposed Russian military intelligence targeting vulnerable routers to support cyber attacks.
A new advisory from the NCSC reveals how state‑linked group APT28 exploited vulnerable edge devices to conduct DNS hijacking operations.
ncsc.gov.uk/news/apt28-exp…

@digitalhealth2 This is an important point for NHS suppliers. The governance question here is often overlooked — most digital health vendors are still treating AI as a feature rather than a system that needs its own oversight framework. The gap between deployment and governance is widening.
