LLMs and Radiology: The Hidden Cybersecurity Threats Hospitals Must Address in 2025
- Tech Brief
- May 21
- 4 min read

In recent weeks, a wave of concern has swept through the medical and AI communities. A special report from the Radiological Society of North America (RSNA) has shed light on a growing and largely underestimated threat: the cybersecurity vulnerabilities of large language models (LLMs) in radiology.
As artificial intelligence becomes increasingly embedded in clinical workflows—from medical transcription to radiology report generation—experts are now warning that these tools may be opening new doors for cyberattacks. The implications? Massive risks to patient data, diagnostic integrity, and hospital systems. What was once hailed as a productivity revolution may be quietly evolving into a digital Trojan horse.
🔍 Who, What, When, Where, and Why
The issue was formally raised in May 2025 by experts in AI and radiology at the RSNA. The report, widely cited by media outlets including Healthcare in Europe and AuntMinnie, highlights the rapid integration of LLMs such as GPT-based systems in radiological operations and research—particularly in high-income healthcare systems.
These models are used for summarizing radiology reports, assisting diagnosis, automating documentation, and even generating patient communication drafts. However, as their role expands, so do the risks.
LLMs are susceptible to a range of cybersecurity threats, including data poisoning, inference attacks, prompt injection, and adversarial manipulation—all of which can have devastating consequences in a healthcare setting.
🧠 What’s Driving the Risk?
Several underlying causes are contributing to the vulnerability of radiology’s AI systems:
1. Blind Trust in AI:
Clinicians and IT departments often adopt LLMs assuming they are secure by design. However, most LLMs are not trained with adversarial robustness in mind. They were built for efficiency, not necessarily for environments with high privacy or reliability standards like hospitals.
2. Lack of Standardized AI Cybersecurity Protocols:
Unlike traditional health IT systems that follow rigorous HIPAA or GDPR compliance, AI models often operate in a grey area where cybersecurity regulations have not yet caught up. This leaves significant room for manipulation or data leakage.
3. Increased Attack Surface:
The integration of LLMs across multiple endpoints—clinical dashboards, PACS systems, report generators, and patient portals—creates multiple entry points for malicious actors to exploit.
4. Training on Sensitive Data:
Many AI models are either fine-tuned on sensitive radiology data or use real-time inputs that may include protected health information (PHI). This raises the risk of inference attacks, where a hacker could reverse-engineer personal data from model outputs.
🔄 Short-Term and Long-Term Implications
🔹 Short-Term Consequences:
Diagnostic Errors: Malicious manipulation of input data can lead to flawed radiological summaries or false findings.
Breach of Confidentiality: Prompt injection or prompt leaking could expose patient data embedded in AI systems.
Operational Disruption: Denial-of-service attacks targeting AI-powered tools could stall radiology departments during critical operations.
🔹 Long-Term Consequences:
Erosion of Trust: If AI-generated reports become a source of litigation or patient harm, hospitals may scale back adoption, halting innovation.
Regulatory Scrutiny: Governments may impose stricter oversight on AI integration in clinical settings, delaying innovation pipelines.
Insurance Repercussions: Liability questions surrounding AI errors could reshape malpractice insurance frameworks and hospital risk assessments.
👥 Multiple Perspectives
👨⚕️ Radiologists:
While many embrace LLMs as time-saving tools, some are now voicing concerns about overreliance. “We can’t treat AI like a calculator. It’s a black box that can be compromised,” said one expert quoted in RSNA’s report.
🧑💻 Cybersecurity Analysts:
They’ve been warning about LLM vulnerabilities for months, especially after publicized exploits in ChatGPT and other generative systems. Healthcare, with its high-value data, is a prime target.
🏛️ Policy Makers:
In the EU, lawmakers are already debating how AI-based medical tools fit within the AI Act. Meanwhile, in the U.S., the FDA has been slow to regulate generative AI in healthcare compared to diagnostic imaging devices.
🕰️ Historical Context: Echoes from Earlier Tech Disruptions
This is not the first time healthcare technology has faced a crisis of confidence. When electronic health records (EHRs) were first introduced, they promised efficiency but also introduced new privacy challenges and legal liabilities.
Similarly, as AI becomes the new operating system for medicine, we are witnessing a repeat of the cycle: hype → over-adoption → emerging risks → public scrutiny → regulation.
🧾 Key Takeaways and What Comes Next
LLMs in radiology are not inherently safe and can become cybersecurity liabilities without proper controls.
Hospitals need AI-specific security frameworks, including adversarial testing, secure prompt design, and audit trails.
Governments and health agencies must step in with regulations tailored to generative AI in clinical settings.
Transparency and explainability must become core to LLM deployment in medicine—black-box tools are no longer acceptable in life-critical environments.
🔮 What’s on the Horizon?
Expect to see:
AI “firewalls” specifically for LLMs in healthcare
Standardized testing for AI safety in radiology, akin to drug trials
A wave of “AI cybersecurity startups” targeting hospitals and clinics
Public pressure for patients to opt out of AI-generated diagnostics
This unfolding story is a powerful reminder: in medicine, trust is everything. And any tool—no matter how intelligent—that risks compromising it must be handled with the utmost care.
Commentaires