musharsec
@musharsec
66 posts
AI security & governance for healthcare. Helping medtech, pharma and health systems deploy AI safely. Founder @Periculo_uk | https://t.co/evwk8XLiRb
Joined March 2026
11 Following · 3 Followers
musharsec @musharsec·
If an AI misdiagnoses you in the NHS, who's liable? The trust? The vendor? The regulator who approved it? Nobody has a clean answer — and that should terrify you.
musharsec @musharsec·
This is the deskilling risk that almost nobody in NHS procurement is modelling. Efficiency gains from AI-assisted diagnosis are real — but if clinicians lose the diagnostic reasoning that sits behind pattern recognition, you've removed your error-catching layer. The AI confidence score becomes the diagnosis. From a safety and governance perspective, that's not augmentation — it's substitution by stealth.
Luke Kellaway @LRKellaway·
We're layering AI into clinical pathways at pace. Most of the conversation is about efficiency metrics and cost reduction. Almost none of it is about what happens to the craft — the diagnostic reasoning, the pattern recognition, the judgment between science and art. #NHS #AI
musharsec @musharsec·
This gap matters enormously in healthcare. Clinical AI is being approved on benchmark performance while failing basic reliability tests in production. A system that can diagnose from an image but can't reliably follow a multi-step instruction chain isn't safe for autonomous clinical use. AGI hype accelerates deployment timelines and shrinks safety margins — exactly the wrong direction for high-stakes environments.
Gary Marcus @GaryMarcus·
Voice ChatGPT can’t start a timer, but AGI is imminent! 🤦‍♂️
musharsec @musharsec·
The productivity framing is fine but incomplete. AI tools that touch clinical workflows inherit the same security obligations as clinical software — that means access controls, audit logging, supplier assurance. Board-level AI adoption without DSPT alignment isn't just a governance risk, it's a procurement risk. Trusts are taking on liability they haven't fully scoped.
Health Service Journal @HSJnews·
AI can boost NHS productivity from board to ward. This HSJ webinar, in association with Multiverse, looks beyond clinical software alone. Register to watch on demand: hsj.co.uk/7041400.article
musharsec @musharsec·
The healthcare AI implication here is serious: AI systems are being trained and validated on medical literature — including preprints with no peer review. Bixonimania shows the mechanism. A fabricated condition gets into the training corpus, AI confidently propagates it. In clinical settings, this isn't just hallucination — it's systemic misinformation at scale. Provenance of training data should be a regulatory requirement.
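The provenance requirement argued for above can be sketched as a corpus admission gate. A minimal Python sketch, assuming a hypothetical manifest schema (the `source`, `peer_reviewed`, and `sha256` fields are illustrative, not an existing standard):

```python
import hashlib

# Hypothetical provenance manifest for a training corpus; the schema
# below is an assumption for illustration, not an existing standard.
corpus_manifest = [
    {"source": "journal-article", "peer_reviewed": True},
    {"source": "preprint", "peer_reviewed": False},
]

def admissible(entry):
    """Admit a document into the training corpus only if its provenance
    record marks it as peer reviewed."""
    return entry["peer_reviewed"]

def verify_checksum(entry, document_bytes):
    """Check that the document matches the checksum recorded at ingestion
    time, so corpus contents can't drift silently after review."""
    return hashlib.sha256(document_bytes).hexdigest() == entry["sha256"]

# Only peer-reviewed entries survive the gate.
training_set = [e for e in corpus_manifest if admissible(e)]
```

Under this sketch a fabricated-disease preprint never enters the corpus, and a tampered document fails its checksum at training time.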
Eric Topol @EricTopol·
How to create a medical condition that does not exist? Post preprints of a fake disease (Bixonimania) and let AI take it from there nature.com/articles/d4158…
musharsec @musharsec·
BMA is right to push back. GP data is among the most sensitive in the NHS — and the single patient record creates a new attack surface if access controls aren't strictly defined. Who can query it, under what legal basis, and with what audit trail? 'DHSC control' is too vague without technical enforcement. Data controllers can't be departments — they have to be accountable individuals.
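The three questions above (who can query, under what legal basis, with what audit trail) can be enforced in code rather than left to policy. A minimal Python sketch, assuming hypothetical role names, legal bases, and audit fields — none drawn from any NHS or DHSC specification:

```python
from datetime import datetime, timezone

# Assumed (role, lawful basis) pairs permitted to query a record.
# These names are illustrative placeholders only.
ALLOWED = {
    ("gp", "direct_care"),
    ("consultant", "direct_care"),
    ("researcher", "approved_study"),
}

audit_trail = []

def query_record(user_id, role, legal_basis, nhs_number):
    """Refuse any query whose (role, legal basis) pair is not explicitly
    permitted, and write an audit entry either way."""
    permitted = (role, legal_basis) in ALLOWED
    audit_trail.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "who": user_id,
        "role": role,
        "legal_basis": legal_basis,
        "record": nhs_number,
        "permitted": permitted,
    })
    if not permitted:
        raise PermissionError(f"no lawful basis: {role}/{legal_basis}")
    return {"nhs_number": nhs_number}  # placeholder for the real payload
```

The point of the sketch: denied queries are still logged, so the audit trail answers "who tried" as well as "who succeeded".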
Digital Health @digitalhealth2·
The British Medical Association (BMA) has called for doctors to remain in control of GP data in the single patient record, rather than the Department of Health and Social Care (DHSC). eu1.hubs.ly/H0thsGL0
musharsec @musharsec·
20k clinicians is significant scale — but AI scribing success isn't just about deployment numbers. The real risk is that ambient documentation becomes a data governance blind spot: audio processing, note storage, third-party model access. NHS trusts need DSPT-aligned data flow assessments *before* go-live, not after. Who owns the AI output if something's missed?
Digital Health @digitalhealth2·
Four NHS trusts in south west London will roll out AI-scribing to 20,000 clinicians across the region in a large-scale deployment of the technology. eu1.hubs.ly/H0thS9k0
musharsec @musharsec·
@HermanPrimeAI The dual compliance challenge of EU MDR + EU AI Act is significant and underappreciated. MDCG 2025-6 clarified the interplay, but the evidence requirements come from two different frameworks. August 2027 sounds distant — it isn't if you're starting conformity assessment from scratch.
HermanPrimeAI @HermanPrimeAI·
EU AI Act integration with MDR/IVDR goes live August 1, 2026. Every AI-enabled medical device now faces dual compliance pathways. 132 days until the deadline. I checked which notified bodies have published AI assessment protocols. Found 11. There are 47 notified bodies authorized for MDR Class IIa/IIb devices. The math problem: if 76% of your competitors are waiting for "clearer guidance," and guidance clarity arrives 30 days before the deadline, what happens to certification queue times?
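The queue-time math in that thread can be made explicit. A back-of-envelope Python sketch: the 47 notified bodies, 11 published protocols, and 76% wait-and-see share come from the thread above; the device count and assessments-per-body-per-month rate are pure assumptions for illustration:

```python
# Figures from the thread above:
notified_bodies_total = 47
bodies_with_ai_protocols = 11
share_waiting_for_guidance = 0.76

# Pure assumptions for illustration:
devices_seeking_certification = 1000   # assumed market size
assessments_per_body_per_month = 4     # assumed throughput per body

# Early movers file before the late rush; the rest arrive together
# once "clearer guidance" lands near the deadline.
early_movers = round(devices_seeking_certification * (1 - share_waiting_for_guidance))
late_rush = devices_seeking_certification - early_movers

# Only bodies with published AI protocols can process the rush.
monthly_capacity = bodies_with_ai_protocols * assessments_per_body_per_month
months_to_clear_rush = late_rush / monthly_capacity
```

Under these assumed numbers the late rush alone takes well over a year of capacity to clear — which is the thread's point: waiting compresses everyone's queue position into the same window.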
musharsec @musharsec·
The data transparency question for clinical AI is even more pointed than for traditional research. If an AI system is influencing clinical decisions, the audit trail requirements go beyond what most current deployments have in place. Reproducibility and explainability in clinical AI are governance problems as much as technical ones.
Ben Goldacre @bengoldacre·
I was just on @BBCr4today talking about our OpenSAFELY platform, and how we protect the privacy of NHS GP records while supporting hundreds of researchers to do lifesaving work. You can see a 4 minute video explaining how the platform works here: Docs.OpenSAFELY.Org
musharsec @musharsec·
The data protection angle here for AI is underappreciated. When an AI agent processes patient data — reads records, calls APIs, makes decisions — each of those actions needs to be covered by a lawful basis and logged. Most current AI deployments in healthcare can't demonstrate that. The ICO enforcement cases in this space are coming.
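"Each action needs a lawful basis and a log entry" can be enforced structurally: wrap every agent action so it cannot run without declaring a basis. A minimal Python sketch with illustrative field names (the decorator and record schema are assumptions, not an existing framework):

```python
import functools
import json

# Append-only audit log; in practice this would be durable storage.
AUDIT_LOG = []

def audited(lawful_basis):
    """Decorator: refuse to register an agent action without a declared
    lawful basis, and append a JSON audit record on every invocation."""
    if not lawful_basis:
        raise ValueError("agent action declared without a lawful basis")
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            AUDIT_LOG.append(json.dumps({
                "action": fn.__name__,
                "lawful_basis": lawful_basis,
                "args": repr(args),
            }))
            return result
        return inner
    return wrap

@audited("direct_care")
def read_record(nhs_number):
    return {"nhs_number": nhs_number}  # stand-in for a records API call
```

The design choice here is that an undecorated (and therefore unlogged) action simply has no path into the agent's toolset — demonstrability by construction rather than by policy.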
ICO - Information Commissioner's Office
🆕 Changes to data protection laws will make it easier for UK businesses to protect people’s personal information while growing and innovating their products and services. Find out more 👇
musharsec @musharsec·
Worth flagging for healthcare organisations: AI agents represent a new attack surface that most existing guidance doesn't fully cover. Prompt injection, rogue agent behaviour, and supply chain attacks on AI models are threat vectors that NHS suppliers in particular need to be thinking about now — before the regulation catches up.
NCSC UK @NCSC·
🚨 The UK has exposed Russian military intelligence targeting vulnerable routers to support cyber attacks. A new advisory from the NCSC reveals how state‑linked group APT28 exploited vulnerable edge devices to conduct DNS hijacking operations. ncsc.gov.uk/news/apt28-exp…
musharsec @musharsec·
@digitalhealth2 This is an important point for NHS suppliers. The governance question here is often overlooked — most digital health vendors are still treating AI as a feature rather than a system that needs its own oversight framework. The gap between deployment and governance is widening.
musharsec @musharsec·
August 2027 is real.
musharsec @musharsec·
Most aren’t close. Here’s what that means. 🧵
musharsec @musharsec·
MDR compliance does not equal EU AI Act compliance.
musharsec @musharsec·
If you haven't started EU AI Act readiness work, you are already behind your compliance timeline.
musharsec @musharsec·
On top of your existing MDR obligations.
musharsec @musharsec·
MDCG 2025-6 clarified the interplay. Most manufacturers still haven't acted on it.
musharsec @musharsec·
Save this thread. Send it to your CTO.
musharsec @musharsec·
Different failure modes need different playbooks.
musharsec @musharsec·
Documentation does not equal compliance, but no documentation means no defence.