The selective deployment of AI in healthcare: An ethical algorithm for algorithms

Full Description

Bibliographic Details
Authors: Vandersluis, Robert (Author); Savulescu, Julian (Author)
Format: Electronic Article
Language: English
Published: Wiley-Blackwell, 2024
In: Bioethics
Year: 2024, Volume: 38, Issue: 5, Pages: 391-400
Further subjects: Bias; Artificial Intelligence; Melanoma; Exclusion; Machine Learning; Algorithm
Online Access: Full text (free of charge)
Physical Description
Summary: Machine-learning algorithms have the potential to revolutionise diagnostic and prognostic tasks in health care, yet algorithmic performance levels can be materially worse for subgroups that have been underrepresented in algorithmic training data. Given this epistemic deficit, the inclusion of underrepresented groups in algorithmic processes can result in harm. Yet delaying the deployment of algorithmic systems until more equitable results can be achieved would avoidably and foreseeably lead to a significant number of unnecessary deaths in well-represented populations. Faced with this dilemma between equity and utility, we draw on two case studies involving breast cancer and melanoma to argue for the selective deployment of diagnostic and prognostic tools for some well-represented groups, even if this results in the temporary exclusion of underrepresented patients from algorithmic approaches. We argue that this approach is justifiable when the inclusion of underrepresented patients would cause them to be harmed. While the context of historic injustice poses a considerable challenge for the ethical acceptability of selective algorithmic deployment strategies, we argue that, at least for the case studies addressed in this article, the issue of historic injustice is better addressed through nonalgorithmic measures, including being transparent with patients about the nature of the current epistemic deficits, providing additional services to algorithmically excluded populations, and through urgent commitments to gather additional algorithmic training data from excluded populations, paving the way for universal algorithmic deployment that is accurate for all patient groups. These commitments should be supported by regulation and, where necessary, government funding to ensure that any delays for excluded groups are kept to a minimum. We offer an ethical algorithm for algorithms—showing when to ethically delay, expedite, or selectively deploy algorithmic systems in healthcare settings.
ISSN: 1467-8519
Contained in: Bioethics
Persistent identifier: DOI: 10.1111/bioe.13281