
A code of ethics is needed before AI takes on more of the doctor’s role, report warns


Clear ethical standards and guidance are needed for the use of artificial intelligence (AI) in healthcare settings, otherwise there is a risk of undermining trust between doctors and their patients, a Council of Europe report has warned.

There are several potential ways in which increased use of AI in healthcare could impact patients’ human rights and the doctor-patient relationship, the report concludes, including inequalities in access to healthcare.

Other issues with AI that need to be considered are transparency for both healthcare professionals and patients, the risk of social bias in AI systems, dilution of the patient’s account of their own health, and the risks of automation bias, deskilling and misplaced responsibility.

The report’s author, Dr Brent Mittelstadt, research director at the Oxford Internet Institute, said he hoped it would inspire people to think about how AI could disrupt the basic processes involved in healthcare.

He fears AI is being used as a way to cut budgets or costs rather than to improve patient care.

“If you’re considering bringing a new technology into the clinical space, you need to think about how it will be used.”

“Too often it is seen only as an exercise in economy or efficiency, not as one that could radically transform healthcare itself,” he said.

The report comes as a study found that AI has the potential to relieve pressures on the NHS and its workforce, but that “frontline healthcare staff will need tailored and specialist support” before using it with confidence.

In the study, Health Education England and NHS AI Lab also said there was a risk that AI could exacerbate cognitive biases, and that clinicians might accept an AI recommendation uncritically due to time or other pressures.

Council of Europe report says use of AI remains ‘unproven’ and could undermine ‘healing relationship’

“The doctor-patient relationship is the foundation of ‘good’ medical practice, and yet it seems set to turn into a doctor-patient-AI relationship.

“The challenge facing AI providers, regulators and policy makers is to establish robust standards for this new clinical relationship, to ensure that the interests of patients and the moral integrity of medicine as a profession are not fundamentally damaged by the introduction of AI,” the report concludes.

Dr Mittelstadt also noted that it is not the patient’s vulnerability that is changed by the introduction of AI, but the means of delivering care, how it can be delivered and by whom, which “can be disruptive in many ways”.

In addition to the already widely recognised biases in AI systems arising from the data on which they are trained, there are also issues related to professional standards when AI is used, according to the report.

It adds: “If AI is used to greatly augment or replace human clinical expertise, its impact on the care relationship is more difficult to predict.”