Europe is rapidly integrating artificial intelligence (AI) into its healthcare systems, but a new report from the World Health Organization (WHO) warns that patient protections are lagging dangerously behind. While AI offers the potential for improved care and reduced strain on medical professionals, its widespread deployment is proceeding without adequate safeguards, raising serious concerns about equity, accuracy, and accountability.
Uneven Adoption and Funding Across Europe
The WHO analysis, covering 50 countries in Europe and Central Asia, reveals a fragmented approach to AI in healthcare. Half of the nations surveyed are already using AI chatbots for patient interactions, while 32 are deploying AI-powered diagnostic tools, particularly in areas like medical imaging. Applications range from early disease detection (Spain) to workforce training (Finland) and data analysis (Estonia).
However, only 14 countries have dedicated funding for their AI healthcare initiatives, and just four – Andorra, Finland, Slovakia, and Sweden – have comprehensive national strategies in place. This disparity exposes a critical gap: widespread enthusiasm for deploying AI without a clear roadmap for integrating it responsibly.
The Risks: Bias, Errors, and Accountability
The WHO report highlights the inherent risks of AI in healthcare. These tools rely on massive datasets, which can be flawed, biased, or incomplete. Consequently, AI-driven decisions may perpetuate existing health disparities or even lead to medical errors, such as missed diagnoses or inappropriate treatments.
A key question remains unanswered: who is responsible when an AI system makes a mistake? The lack of clear accountability could erode public trust and deter healthcare workers from adopting these technologies.
WHO Recommendations: A Call for Caution and Clarity
To mitigate these risks, the WHO urges European countries to prioritize public health goals, invest in AI literacy for healthcare professionals, and establish robust ethical and legal guidelines. Transparency is crucial: patients deserve to know when and how AI is being used in their care.
“AI is on the verge of revolutionising health care, but its promise will only be realised if people and patients remain at the centre of every decision,” warns Dr Hans Kluge, head of the WHO’s Europe office.
The WHO also stresses the need for rigorous testing to ensure AI systems are safe, fair, and effective in real-world settings before they are deployed in patient care.
The current lack of standardized oversight may already be fuelling hesitancy among healthcare workers, according to Dr David Novillo Ortiz of the WHO. Without proactive measures, AI's potential to improve healthcare may be overshadowed by its risks.
The report serves as a stark reminder that technological advancement must be paired with responsible governance to ensure equitable and safe healthcare for all.
