It will not imminently put medical experts out of work
FOUR years ago a woman in her early 30s was hit by a car in London. She needed emergency surgery to reduce the pressure on her brain. Her surgeon, Chris Mansi, remembers the operation going well. But she died, and Mr Mansi wanted to know why. He discovered that the problem had been a four-hour delay in getting her from the accident and emergency unit of the hospital where she was first brought, to the operating theatre in his own hospital. That, in turn, was the result of a delay in identifying, from medical scans of her head, that she had a large blood clot in her brain and was in need of immediate treatment. It is to try to avoid repetitions of this sort of delay that Mr Mansi has helped set up a firm called Viz.ai. The firm’s purpose is to use machine learning, a form of artificial intelligence (AI), to tell those patients who need urgent attention from those who may safely wait, by analysing scans of their brains made on admission.
That idea is one among myriad projects now under way that aim to use machine learning to transform how doctors deal with patients. Though diverse in detail, these projects share a common goal: to get the right patient to the right doctor at the right time.
In Viz.ai’s case that is now happening. In February the firm received approval from regulators in the United States to sell its software for the detection, from brain scans, of strokes caused by a blockage in a large blood vessel. The technology is being introduced into hospitals in America’s “stroke belt”—the south-eastern part, in which strokes are unusually common. Erlanger Health System, in Tennessee, will turn on its Viz.ai system next week.
The potential benefits are great. As Tom Devlin, a stroke neurologist at Erlanger, observes, “We know we lose 2m brain cells every minute the clot is there.” Yet the two therapies that can transform outcomes—clot-busting drugs and an operation called a thrombectomy—are rarely used because, by the time a stroke is diagnosed and a surgical team assembled, too much of a patient’s brain has died. Viz.ai’s technology should improve outcomes by identifying urgent cases, alerting on-call specialists and sending them the scans directly.
The AIs have it
Another area ripe for AI’s assistance is oncology. In February 2017 Andre Esteva of Stanford University and his colleagues used a set of almost 130,000 images to train some artificial-intelligence software to classify skin lesions. So trained, and tested against the opinions of 21 qualified dermatologists, the software could identify both the most common type of skin cancer (keratinocyte carcinoma), and the deadliest type (malignant melanoma), as successfully as the professionals. That was impressive. But now, as described last month in a paper in the Annals of Oncology, there is an AI skin-cancer-detection system that can do better than most dermatologists. Holger Haenssle of the University of Heidelberg, in Germany, pitted an AI system against 58 dermatologists. The humans were able to identify 86.6% of skin cancers. The computer found 95%. It also misdiagnosed fewer benign moles as malignancies.
There has been progress in the detection of breast cancer, too. Last month Kheiron Medical Technologies, a firm in London, learned that a study it had commissioned had concluded that its software exceeded the officially required performance standard for radiologists screening for the disease. The firm says it will submit the study for publication once it has received European approval to use the AI, which it expects to happen soon.
This development looks important. Breast screening has saved many lives, but it leaves much to be desired. Overdiagnosis and overtreatment are common. Conversely, tumours are sometimes missed. In many countries such problems have led to scans being checked routinely by a second radiologist, which improves accuracy but adds to workloads. At a minimum Kheiron’s system looks useful for a second opinion. As it improves, it may be able to grade women according to their risks of breast cancer and decide the best time for their next mammogram.
Efforts to use AI to improve diagnosis are under way in other parts of medicine, too. In eye disease, DeepMind, a London-based subsidiary of Alphabet, Google’s parent company, has an AI that screens retinal scans for conditions such as glaucoma, diabetic retinopathy and age-related macular degeneration. The firm is also working on mammography.
Heart disease is yet another field of interest. Researchers at Oxford University have been developing AIs intended to interpret echocardiograms, the ultrasonic scans of the heart. Cardiologists looking at these scans are searching for signs of heart disease, but can miss them 20% of the time, meaning patients may be sent home only to go on to have a heart attack. The AI, by contrast, can detect changes invisible to the human eye, improving the accuracy of diagnosis. Ultromics, a firm in Oxford, is trying to commercialise the technology, which could be rolled out in Britain later this year.
There are also efforts to detect cardiac arrhythmias, particularly atrial fibrillation, which increase the risk of heart failure and strokes. Researchers at Stanford University, led by Andrew Ng, have shown that AI software can identify arrhythmias from an electrocardiogram (ECG) better than an expert. The group has joined forces with a firm that makes portable ECG devices and is helping Apple with a study looking at whether arrhythmias can be detected in the heart-rate data picked up by its smart watches. Meanwhile, in Paris, a firm called Cardiologs is also trying to design an AI intended to read ECGs.
Eric Topol, a cardiologist and digital-medicine researcher at the Scripps Research Institute, in San Diego, says that doctors and algorithms are comparable in accuracy in some areas, but computers have the advantage of speed. This combination of traits, he reckons, will lead to higher accuracy and productivity in health care.
Artificial intelligence might also make medicine more specific, by being able to draw distinctions that elude human observers. It may be able to grade cancers or instances of cardiac disease according to their risks—thus, for example, distinguishing those prostate cancers that will kill quickly, and therefore need treatment, from those that will not, and can probably be left untreated.
What medical AI will not do—at least not for a long time—is make human experts redundant in the fields it invades. Machine-learning systems work on a narrow range of tasks and will need close supervision for years to come. They are “black boxes”, in that doctors do not know exactly how they reach their decisions. And they are inclined to become biased if insufficient attention is paid to the data they learn from. They will, though, take much of the drudgery and error out of diagnosis. And they will also help make sure that patients, whether being screened for cancer or rushed from the scene of a car accident, are treated in time to be saved.
Source: The Economist