AI and machine learning have yet to make a real impact on healthcare

Friday, December 9, 2016
News
In an article on The Conversation, Salvado explains why the AI and machine learning revolution, predicted to be part of the next industrial revolution, still has some challenges to overcome before it can become mainstream in healthcare. AI, machine learning and deep learning could save business and industry billions of dollars over the next decade.

Tech giants such as Google, Facebook, Apple, IBM and others are applying artificial intelligence to all sorts of data, while machine learning methods are being used in areas such as translating language almost in real time and even identifying images of cats on the internet. So, Salvado asks, why haven't we seen artificial intelligence used to the same extent in healthcare?

As an example, Salvado notes that radiologists still rely on visual inspection of magnetic resonance imaging (MRI) or X-ray scans, although IBM, Philips, Siemens and others are working on this issue (Philips recently launched three platforms to help radiologists reach a quicker, first-time-right diagnosis). Doctors also have no access to AI to guide and support their diagnoses.

Deep learning pushes the limits

Machine learning technologies have been around for decades. A relatively recent technique called deep learning keeps pushing the limit of what computers can do. Deep learning networks organise neuron-like units into hierarchical layers that can recognise patterns in data. The learning stage requires very large data sets of cases along with the corresponding answers.

Millions of records and billions of computations are needed to update the network parameters, often on a supercomputer running for days or weeks. Herein lies a problem for healthcare, Salvado says: data sets are not yet big enough, and the correct answers to be learned are often ambiguous or even unknown. What is needed are bigger and better data sets.
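
To make the idea of hierarchical layers and parameter updates concrete, here is a minimal, purely illustrative Python sketch (not taken from Salvado's article): a tiny two-layer network trained by gradient descent on synthetic data standing in for "cases along with the corresponding answers". The data, network size and learning rate are invented assumptions; real medical models would need far more layers, data and compute.

```python
# Illustrative sketch only: a tiny two-layer network trained on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "cases with corresponding answers": 200 examples, 10 features,
# a binary label (e.g. disease present / absent). Purely invented.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

# Two layers of "neuron-like units": input -> hidden -> output.
W1 = rng.normal(scale=0.1, size=(10, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1));  b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    # Forward pass: each layer transforms the previous layer's output.
    h = np.tanh(X @ W1 + b1)      # hidden layer
    p = sigmoid(h @ W2 + b2)      # predicted probability
    # Backward pass: update the parameters to reduce the prediction error.
    grad_out = (p - y) / len(X)
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * (1 - h ** 2)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```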

Complexity of human body, genetics

The functions of the human body, its anatomy and its variability are also very complex. Add to this that diseases are often triggered or modulated by genetic background, which is unique to each individual and therefore hard to train on. A third challenge, specific to medicine, is the difficulty of measuring any biological process precisely and accurately, which introduces unwanted variation.

Other challenges include the presence of multiple diseases (co-morbidity) in a patient, which can often confound predictions. Lifestyle and environmental factors also play important roles but are seldom available. The end result: medical data sets need to be extremely large to be useful.

Large research initiatives

Several increasingly large research initiatives are addressing this problem by gathering bigger data sets. Examples are Biobank in the United Kingdom (which aims to scan 100,000 participants), the Alzheimer’s Disease Neuroimaging Initiative (ADNI) in the United States and the Australian Imaging, Biomarkers and Lifestyle Study of Ageing (AIBL), which has tracked more than a thousand subjects over a decade.

A government initiative is the American Cancer Moonshot program, which proposes a national cancer data ecosystem in which researchers, clinicians and patients can contribute data, with the aim to “facilitate efficient data analysis”. Similarly, the Australian Genomics Health Alliance aims at pooling and sharing genomic information.

Eventually, the electronic medical record systems being deployed across the world should provide extensive, high-quality data sets. Beyond the expected gain in efficiency, the potential to mine population-wide clinical data using machine learning is tremendous. Some companies, such as Google, are eagerly trying to access those data.

What needs to be learned

Another challenge is that what a machine needs to learn is not always obvious. Complex medical decisions are often made by a team of specialists reaching consensus rather than certainty, Salvado writes.

Radiologists might disagree slightly when interpreting a scan in which blurring and only very subtle features can be observed. Sometimes the true answer cannot be obtained at all. For example, a measurement of the size of a structure from a brain MRI cannot be validated, even at autopsy, since tissues change in composition and size after death.

So while a machine can learn that a photo contains a cat because users have labelled thousands of pictures with certainty on social media platforms, measuring the size of a brain structure from an MRI is a much more difficult task: no one knows the true answer, and at best a consensus of several experts can be assembled, at great cost.
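
As a purely hypothetical illustration of how such a consensus might be turned into a training label, the short snippet below averages several invented expert measurements and records their disagreement; the numbers and the simple averaging choice are assumptions, not something described in the article.

```python
# Hypothetical example: building a consensus training label from several
# expert measurements of the same brain structure (all values invented).
import statistics

# Volume of the same structure (in mm^3) as measured by four radiologists.
expert_measurements = [5120.0, 5244.0, 4998.0, 5190.0]

consensus = statistics.mean(expert_measurements)       # training target
disagreement = statistics.stdev(expert_measurements)   # label uncertainty

print(f"consensus label: {consensus:.0f} mm^3 "
      f"(+/- {disagreement:.0f} mm^3 across experts)")
```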

Several technologies are emerging to address this issue. Complex mathematical models that incorporate probabilities, such as Bayesian approaches, can learn under uncertainty. Unsupervised methods can recognise patterns in data without needing to know what the actual answers are, although the results can be challenging to interpret. Another approach is transfer learning, whereby a machine learns from large, different, but relevant data sets for which the training answers are known.
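
Below is a minimal sketch of the transfer-learning idea in Python. The "pretrained" weights, data sizes and learning rate are invented assumptions; the point is only the mechanics of freezing a layer learned on a large related data set and fitting a small new output layer on scarce, expert-labelled medical data.

```python
# Illustrative sketch of transfer learning: reuse a frozen hidden layer
# and train only a small output layer on a tiny medical data set.
import numpy as np

rng = np.random.default_rng(1)

# Pretend these weights come from a model trained on a large, related,
# fully labelled data set (e.g. a big public imaging cohort).
W1_pretrained = rng.normal(scale=0.3, size=(10, 16))
b1_pretrained = np.zeros(16)

def frozen_features(X):
    """Hidden representation from the pretrained (frozen) layer."""
    return np.tanh(X @ W1_pretrained + b1_pretrained)

# Small medical data set with expert (consensus) labels: only 30 cases.
X_med = rng.normal(size=(30, 10))
y_med = (X_med[:, 0] > 0).astype(float)

# Fit only the final layer (logistic regression) on the frozen features.
H = frozen_features(X_med)
w = np.zeros(16)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(H @ w + b)))
    grad = p - y_med
    w -= 0.1 * H.T @ grad / len(H)
    b -= 0.1 * grad.mean()

print("fine-tuned output layer on", len(X_med), "medical cases")
```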

Most challenging issue

But probably the most challenging issue is understanding causation. Analysing retrospective data is prone to learning spurious correlations and missing the underlying causes of diseases or the effects of treatments.

Traditionally, randomised clinical trials provide evidence on the superiority of different options, but they do not yet benefit from the potential of artificial intelligence. New designs, such as platform clinical trials, might address this in the future and could pave the way for machine learning technologies to learn evidence rather than just association.

So large medical data sets are being assembled. New technologies to overcome the lack of certainty are being developed. Novel ways to establish causation are emerging. Impressive progress is being made, but many challenges remain.