Clinical applications of machine learning algorithms: beyond the black box [Analysis]
David S Watson, Jenny Krutzinna, Ian N Bruce, Christopher EM Griffiths, Iain B McInnes, Michael R Barnes, Luciano Floridi
British Medical Journal, 12 March 2019; 364
Abstract
Machine learning algorithms are an application of artificial intelligence designed to automatically detect patterns in data without being explicitly programmed. They promise to change the way we detect and treat disease and will likely have a major impact on clinical decision making. The long-term success of these powerful new methods hinges on the ability of both patients and doctors to understand and explain their predictions, especially in complicated cases with major healthcare consequences. This will promote greater trust in computational techniques and ensure informed consent to algorithmically designed treatment plans.
Unfortunately, many popular machine learning algorithms are essentially black boxes—oracular inference engines that render verdicts without any accompanying justification. This problem has become especially pressing with the passage of the European Union’s General Data Protection Regulation (GDPR), which some scholars argue provides citizens with a “right to explanation.” On this reading, any institution engaged in algorithmic decision making is legally required to justify those decisions, on request, to any person whose data it holds—a challenge that most are ill equipped to meet. We urge clinicians to work with patients, data scientists, and policy makers to ensure the successful clinical implementation of machine learning (fig 1). We outline important goals and limitations that we hope will inform future research.
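The gap between a black-box verdict and an accompanying justification can be made concrete with a toy sketch. Here a fixed scoring function stands in for an opaque model, and a simple perturbation-based attribution asks how much the predicted risk changes when each input is replaced by a reference value. All feature names, weights, and values are invented for illustration; real clinical models and explanation methods (such as SHAP or LIME) are far more sophisticated, but the underlying question—"which inputs drove this prediction?"—is the same.

```python
import math

# Hypothetical "black box": a fixed logistic scorer over three
# clinical features (names and weights are illustrative only).
WEIGHTS = {"age": 0.04, "blood_pressure": 0.02, "biomarker": 1.5}
BIAS = -5.0

def risk_score(patient):
    """Return a probability-like risk in (0, 1)."""
    z = BIAS + sum(WEIGHTS[f] * patient[f] for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def explain(patient, baseline):
    """Perturbation-based attribution: how much does the score change
    when each feature is replaced by a reference ('baseline') value?"""
    full = risk_score(patient)
    contributions = {}
    for f in patient:
        perturbed = dict(patient)
        perturbed[f] = baseline[f]
        contributions[f] = full - risk_score(perturbed)
    return full, contributions

# Illustrative patient and population-reference values.
patient = {"age": 70, "blood_pressure": 150, "biomarker": 2.1}
baseline = {"age": 50, "blood_pressure": 120, "biomarker": 0.5}
score, contrib = explain(patient, baseline)

print(f"predicted risk = {score:.2f}")
for f, c in sorted(contrib.items(), key=lambda kv: -abs(kv[1])):
    print(f"{f:>15}: {c:+.3f}")
```

The printed ranking is the kind of minimal justification the authors call for: it turns "the model says high risk" into "the model says high risk, chiefly because of the biomarker value"—something a clinician can scrutinise and a patient can be told.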