Artificial Intelligence (AI)
CPME Rapporteur: Prof. Dr Christian LOVIS (CH)
CPME Secretariat: Ms Sara RODA
CPME monitors developments in, and the implementation of, the Artificial Intelligence Act (AIA), having recently issued a policy on the deployment of AI in healthcare.
CPME notes that the uptake of AI in healthcare is currently low due to several factors, including the complex environment of the sector, the wide range of products available on the market, the majority of which are not certified by a third party, and a lack of confidence in using AI systems based on data from unknown sources or unclear data collection processes. AI products should be seamlessly integrated into the healthcare information system and reflect the real needs of healthcare professionals, patients and their carers. Doctors should be free to decide whether to use an AI system, bearing in mind the best interests of the patient, and to retain the right to disagree with an AI system, without repercussions.
CPME supports applying a strict liability regime to AI systems (so that the victim does not need to prove fault) and mandatory insurance for high-risk AI systems, which should include “tail” cover.
CPME welcomes the AIA’s risk-based approach, the development of the EU database for high-risk AI and the proposed risk management system. The list of stand-alone high-risk AI systems in Annex III should still include the use of AI for assessing medical treatments and for health research. Certain systems cannot be deployed without clear validation, as misuse can lead to discrimination and harm. The use of external AI auditors should be considered an ‘appropriate practice’, and the information given to users should be clear and understandable for non-IT specialists. An independent authority or third party should have access to the algorithm in case of complaints or questions, taking due account of commercial secrecy. Human oversight should be of high quality and appropriately resourced. Medical obligations need to be supervised by medical regulators to guarantee the quality of healthcare. The CE marking should only be given to AI systems that comply with EU law, including the General Data Protection Regulation. The possibility of exercising data subject rights must be built into the AI system from the very beginning.