WHO issues AI ethics and governance guidance for large multi-modal models
The World Health Organization (WHO) has released new guidance on the ethics and governance of large multi-modal models (LMMs).
LMMs are a fast-growing type of generative AI technology with applications across healthcare.
The guidance covers both the risks and benefits of LMMs in areas such as diagnosis, clinical treatment, administrative tasks, documentation, medical education, simulated patient encounters, and scientific research.
The guidance also outlines broader health system risks, including accessibility and affordability of top-performing LMMs, and potential “automation bias” in healthcare professionals and patients, leading to overlooked errors or improper delegation of decisions to LMMs.
“Governments from all countries must cooperatively lead efforts to effectively regulate the development and use of AI technologies, such as LMMs,” said Dr Alain Labrique, WHO Director for Digital Health and Innovation in the Science Division.
The body has also underscored the need to engage a range of healthcare stakeholders, including regulators, at all stages of the technology’s development.
“Generative AI technologies have the potential to improve health care but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks,” said Dr Jeremy Farrar, WHO Chief Scientist. “We need transparent information and policies to manage the design, development, and use of LMMs to achieve better health outcomes and overcome persisting health inequities.”
The body has further recommended that stakeholders invest in public infrastructure, such as computing power and public data sets, accessible to developers in the public, private, and not-for-profit sectors, with access conditioned on users adhering to ethical principles and values.
Additionally, WHO recommends using laws, policies and regulations to ensure that LMMs and applications used in health care and medicine, irrespective of the risk or benefit associated with the AI technology, meet ethical obligations and human rights standards that affect, for example, a person’s dignity, autonomy or privacy.