The impact of artificial intelligence on healthcare

Artificial intelligence (AI) has extraordinary potential to enhance healthcare: it can improve medical diagnosis and treatment, and assist surgeons at every stage of an operation, from preparation to completion.

With machine learning and deep learning, algorithms can be trained to recognise certain pathologies, such as melanomas in the case of skin cancer; given a clean, well-documented dataset, AI can also analyse medical images more broadly to detect disease.
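
To make this concrete, the sketch below shows one common way such a classifier might be trained: fine-tuning a pretrained network on labelled lesion images. The lesions/train directory, its benign/malignant class layout and the training settings are illustrative assumptions, not a clinically validated pipeline.

```python
# Minimal sketch: fine-tuning a pretrained CNN to label skin-lesion images
# as benign vs. malignant. The dataset path and class layout are
# hypothetical; a real clinical model needs validated, documented data.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing to match the pretrained backbone
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumes lesions/train/benign/*.jpg and lesions/train/malignant/*.jpg
train_set = datasets.ImageFolder("lesions/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace head: two classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

In practice, such a model would also need held-out evaluation against clinician-labelled cases before any deployment.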

By taking on this kind of analysis, AI also helps to optimise the allocation of human and technical resources.

Moreover, AI's ability to process data at scale makes it possible to improve patient prognosis and treatment selection, by adapting therapy to the characteristics of the disease and the specificities of the individual.

Dr Harvey Castro, a physician and healthcare consultant, points to the recent integration of Microsoft's Azure OpenAI Service with Epic's electronic health record (EHR) software as proof that generative AI is also making important inroads in the healthcare space.

“One use case could be patient triage, where the AI is literally like a medical resident, where the doctor speaks and it is taking all the information down and using its grasp of algorithms to start triaging those patients,” he says. “If you have 100 patients in the waiting room, that’s a lot of information coming in – you’ll be able to start prioritising even though you haven’t seen the patient.”
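
To illustrate the kind of prioritisation Castro describes, here is a toy sketch in which a stand-in scoring function orders waiting-room patients by urgency. The keyword-based acuity score and the patient notes are hypothetical placeholders for the model he has in mind.

```python
# Illustrative sketch only: ranking waiting-room patients by an
# AI-assigned acuity score, most urgent first.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class TriageEntry:
    priority: int                  # lower value = more urgent (ESI-style)
    name: str = field(compare=False)
    note: str = field(compare=False)

def acuity_from_note(note: str) -> int:
    """Toy stand-in for a model mapping a dictated note to an
    Emergency Severity Index-like level (1 = most urgent, 5 = least)."""
    red_flags = ("chest pain", "shortness of breath", "unresponsive")
    return 1 if any(flag in note.lower() for flag in red_flags) else 4

queue: list[TriageEntry] = []
for name, note in [("Patient A", "chest pain radiating to left arm"),
                   ("Patient B", "mild sore throat for two days")]:
    heapq.heappush(queue, TriageEntry(acuity_from_note(note), name, note))

while queue:
    entry = heapq.heappop(queue)   # most urgent patient comes out first
    print(entry.priority, entry.name)
```

The priority queue is the simple part; the substance of Castro's use case lies in replacing the keyword rule with a model that reliably derives acuity from dictated clinical notes.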

Castro adds that it is important for any application of AI to be meaningful and to improve clinical care, as opposed to being deployed as a “shiny new tool” that helps neither the clinician nor the patient.

He sees a future where large language models (LLMs) – neural networks trained on vast quantities of unlabelled text – are specifically created for use in healthcare.

“One of the problems with ChatGPT is that it wasn’t designed for healthcare,” says Castro. “To be in healthcare, it’s going to need to be the correct LLM that is consistent, has fewer issues with hallucination, and is based on data from a database that can be referenced and has clarity.”

The term ‘hallucination’ refers to an AI system producing output that is plausible-sounding but nonsensical, fabricated or factually wrong.
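
One widely used way to ground a model in “a database that can be referenced”, as Castro puts it, is retrieval-augmented generation, where answers are built only from retrieved, citable sources. The sketch below illustrates the idea; the guideline snippets and the generate() stub are hypothetical stand-ins for a real knowledge base and a real LLM call.

```python
# Sketch of retrieval-augmented generation (RAG): the answer is grounded
# in a referenceable document store, so sources can be cited, which helps
# reduce hallucination. Corpus content and generate() are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = {  # hypothetical clinical reference snippets with citable IDs
    "guideline-041": "First-line therapy for condition X is drug Y ...",
    "guideline-112": "Drug Y is contraindicated with medication Z ...",
}

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(corpus.values())

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return IDs of the k reference documents most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    ranked = sorted(zip(corpus, scores), key=lambda pair: pair[1], reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    return "LLM answer grounded in:\n" + prompt

def answer(question: str) -> str:
    context = "\n".join(f"[{d}] {corpus[d]}" for d in retrieve(question))
    prompt = (f"Answer using only the sources below, citing their IDs.\n"
              f"{context}\nQ: {question}")
    return generate(prompt)

print(answer("What is first-line therapy for condition X?"))
```

Because each answer carries the IDs of the documents it drew on, a clinician can check the model's claims against the underlying reference, which is the “clarity” Castro is asking for.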

From his perspective, the future of healthcare will be marked by LLMs evolving towards more predictive analytics, capable of drawing on an individual’s genetic makeup, medical history and biomarkers.

The importance of regulation

Eric Le Quellenec, a partner at Simmons & Simmons specialising in AI and healthcare, explains that regulation can ensure AI is used in a way that respects fundamental rights and freedoms.

The proposed EU AI Act, which is expected to be adopted in 2023 and to become applicable in 2025, sets out Europe’s first legal framework for the technology. The European Commission presented a draft proposal in April 2021, and it is still under discussion.

However, the regulation of AI also falls under other European legislation.

“Firstly, any use of an AI system involving the processing of personal data is subject to the General Data Protection Regulation (GDPR),” he says.

Because health data is considered sensitive data and is typically used on a large scale, the regulation requires data protection impact assessments (DPIAs) to be carried out.

“It’s a risk-mitigation approach, and by doing so it’s easy to go beyond data protection and onboard ethics,” adds Le Quellenec, noting that the French data protection supervisory authority (CNIL) has made a self-assessment fact sheet available, as has the Information Commissioner’s Office (ICO) in the UK.

He adds that the UNESCO Recommendation on the Ethics of Artificial Intelligence, published in November 2021, is also worth noting. 

“At this point, all these are just ‘soft laws’, but good enough to enable stakeholders to have reliable data used for AI processing and avoid many risks, such as ethnic, sociological and economic bias,” he continues.

From Le Quellenec’s perspective, the proposed EU AI Act, once adopted, should follow a risk-based approach, differentiating between uses of AI that create an unacceptable risk, a high risk, and low or minimal risk, and establishing a list of prohibited AI practices considered unacceptable.

“AI used for healthcare is considered high-risk,” Le Quellenec explains. “Before being placed on the European market, high-risk AI systems will have to be regulated by obtaining a CE marking.”

He believes high-risk AI systems should be designed and developed in such a way as to ensure that their operation is sufficiently transparent for users to interpret the system’s output and use it appropriately.

“All that should also give the public trust and foster the use of AI-related products,” Le Quellenec notes. “Plus, human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used.”

This will ensure the results provided by AI systems and algorithms are used only as assistance, and do not lead to a loss of autonomy on the part of practitioners or an impairment of medical practice.

Castro and Le Quellenec will both speak about AI at the HIMSS European Health Conference and Exhibition in Lisbon on 7-9 June 2023.
