During a session at the 2022 European Hematology Association (EHA) Congress, speakers discussed how artificial intelligence (AI) can help advance the principles of ethical medicine, but also how new technologies are used to undermine the integrity of scientific research.
The session was part of the YoungEHA track at EHA2022, which is designed for early-career scientists and clinicians and aims to go beyond scientific data to provide a forum for discussion on the evolving field of hematology.
The first speaker, Elisabeth Bik, PhD, explained how her career shifted from the world of microbiology to consulting on suspected scientific misconduct – in other words, she is now a detective on the hunt for errors or potential fraud in research.
Scientific papers are the building blocks of research, Bik noted, and “if those papers contain errors, it would be like building a wall of bricks where one of the bricks is not very stable…The wall of science will collapse.”
Scientific fraud is defined as plagiarism, falsification, or fabrication, but does not include honest mistakes, Bik explained. Reasons for cheating can include pressure to publish, feelings of having to meet high expectations after tasting research success, or even a “power game” in which a professor threatens to revoke a postdoctoral researcher’s visa if an experiment does not succeed.
Inappropriate image duplication in research papers, which is Bik’s area of expertise, can fall into one of 3 categories: simple duplication, repositioning, and modification. Simple duplication may reflect an honest error rather than intentional misconduct, but it is still inappropriate and needs to be corrected, she noted.
The problem, however, is that these cases are difficult to spot. Showing a slide with 8 image panels, Bik asked the audience if they could identify a duplication. This reporter was proud to have spotted 1 set of identical images until Bik revealed that there were actually 2 sets of duplicates.
Beyond simple duplication, Bik also showed examples of hematology-related images such as Western blots and bone marrow flow cytometry being repositioned, flipped, and altered to mask their identity, indicating a more intentional deception on the part of the authors.
To compound the problem, journals often did not take action to retract or correct an article when Bik brought her concerns to their attention. She advised the audience to look at research figures with a critical eye, especially when acting as a reviewer: if the data “look too good,” the reviewer has a responsibility to raise the issue privately with the editor.
“If you see something, say something, because it happens,” Bik warned.
One insidious use of AI to perpetuate scientific fraud is the presence of artificially generated Western blot images in articles published by “paper mills.” Bik has identified over 600 articles using these fake images, which are produced by generative adversarial networks. Unfortunately, because each generated image is unique, they cannot be caught by duplication-detection software.
The cost to society of scientific misconduct goes beyond allowing readers to unknowingly base their research on articles that contain errors or fraud, Bik explained. The presence of fraud undermines the integrity of science and can be misused by those with a political agenda to claim that all science is flawed.
“We have to believe in science and we have to do better to improve science,” Bik concluded.
The next speaker, Amin Turki, MD, PhD, from the Universitätsklinikum Essen in Germany, discussed how AI can represent both an opportunity and a challenge for ethics in medicine, especially in hematology.
The use of AI in hematology has grown exponentially in recent years, Turki explained, as prediction tasks have evolved from risk prediction to bone marrow diagnostics, a task that is time consuming and difficult because of the bone marrow’s abundance of cells and structures, but one with the potential to transform clinical practice.
Machine learning has identified new phenotypes to predict outcomes in chronic graft versus host disease (GVHD), and Turki and colleagues are working on research to predict mortality after allogeneic hematopoietic cell transplantation, as well as in patients with acute GVHD.
Yet the use of AI in medicine is not without ethical questions, and the number of articles indexed on PubMed on AI and ethics has increased exponentially in recent years. Turki examined AI through the prism of the moral principles of medicine grounded in the United Nations’ Universal Declaration of Human Rights, issued in 1948:
- Autonomy: AI can support patient autonomy through the use of digital agents such as wearable devices.
- Beneficence: AI can improve health by overcoming the limits of human cognition through better risk prediction or individualized treatment.
- Non-maleficence: Toxicity can be reduced through AI-defined dosing and treatment algorithms.
- Justice: Researchers hope that AI can be used to reduce the impact of health disparities, but if deployed incorrectly it can increase or perpetuate those disparities (for example, if health interventions are made accessible only in the richest countries).
Stakeholders for ethical AI in medicine include developers, deployers, users and regulators, and each has unique responsibilities, Turki said. He suggested several ways to overcome ethical challenges, including integrating ethics into the AI development lifecycle, prioritizing human-centered AI, and ensuring fair representation.
The ongoing digital transformation holds promise for transforming hematology care, Turki concluded, but it also forces us to “never forget that the human condition is the foundation of our understanding.”