Google's Algorithms Are Already Outperforming Pathologists

But the tech still has a few drawbacks.

One of the more difficult things a doctor can do is diagnose cancer. That's not just because of the life-changing effects such a finding can have, but because distinguishing an abnormal but benign bunch of cells from one that's potentially deadly is surprisingly subjective. Trained physicians can and often do disagree. In one study of breast biopsies, for example, diagnostic agreement was as low as 48 percent; individual physicians agreed with the consensus view a little more than 75 percent of the time. Both of those numbers are frighteningly low, and researchers are looking to computers to improve them.

Why do human doctors so often disagree? The problem isn't that they don't know what they're looking for: they generally have a set of cues, steps they go through to produce a diagnosis.

But they can disagree about what they're seeing, and how it fits the set of cues. Not only can they disagree with each other; a classic 1968 study found that, when given a copy of a stomach ulcer case they'd already diagnosed, physicians disagreed with themselves, rendering different diagnoses. Nearly five decades ago, researchers were drawing attention to what that study called a "generally terrifying" level of disagreement.

Researchers back then found that a simple algorithm was more consistent. That's not surprising: An algorithm is just a set of rules to be followed. Human beings, in all our subjectivity, tend to apply these rules inconsistently.

Computers, on the other hand, do not. That basic insight—that diagnosis means recognizing cues correctly and interpreting them consistently, and computers may outpace humans at both—has led today's researchers to combine machine learning and big data in an effort to automate cancer diagnosis.
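
To see why consistency comes so naturally to a computer, here is a toy rule-based classifier in Python. The cues, thresholds, and scoring are invented purely for illustration and are not real clinical criteria; the point is only that a fixed set of rules, applied by a machine, returns the same answer for the same inputs every time.

```python
# Toy rule-based "diagnostic" function. The cues and thresholds are invented
# for illustration only -- they are not real clinical criteria.
def classify_lesion(size_mm: float, irregular_border: bool, cell_density: float) -> str:
    """Apply a fixed set of rules to a handful of cues and return a label."""
    score = 0
    if size_mm > 10:        # rule 1: larger lesions count toward suspicion
        score += 1
    if irregular_border:    # rule 2: irregular borders count toward suspicion
        score += 1
    if cell_density > 0.7:  # rule 3: dense cell clusters count toward suspicion
        score += 1
    # Unlike a human rater, the rules are applied identically every time.
    return "suspicious" if score >= 2 else "likely benign"

print(classify_lesion(12.0, True, 0.5))  # -> suspicious
print(classify_lesion(12.0, True, 0.5))  # -> suspicious, identical on repeat
```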

What does automating diagnosis mean in practice? Fundamentally, it means feeding a large dataset of high-definition biopsy images into a program designed to analyze them. In spirit, that's what researchers were doing with their simple algorithms in the late 1960s. The difference now is that machine learning and big data let researchers build algorithms that improve themselves, training them on information from large numbers of patients.
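
As a rough sketch of what that training step looks like in code, the snippet below fits a simple classifier with scikit-learn. The synthetic feature vectors stand in for biopsy images, and logistic regression stands in for the much larger models used in real pathology systems; none of it reflects Google's actual pipeline.

```python
# Minimal supervised-learning sketch: random synthetic features stand in for
# biopsy image data; logistic regression stands in for a large neural network.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each "slide" has been reduced to 64 numeric features, with a binary
# label: 1 = tumor present, 0 = benign. Real systems start from gigapixel
# images that are broken into many smaller patches.
n_slides, n_features = 2000, 64
X = rng.normal(size=(n_slides, n_features))
true_weights = rng.normal(size=n_features)
y = (X @ true_weights + rng.normal(scale=0.5, size=n_slides) > 0).astype(int)

# Hold out a test set so the model is judged on slides it has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # "learning" from labeled examples
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```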

Think of it as an automated doctor that's constantly revising its diagnostic criteria based on what it sees in tens of thousands of biopsies. This ideal always-learning, never-sleeping pathologist can complement its human counterpart, helping to standardize diagnoses.

That's the thinking behind a recent Google project focused on analyzing breast biopsies. It uses data from the Camelyon16 project, which challenged participants to create and refine cancer-detection algorithms. (It's different, though, from the approach used by IBM's Watson supercomputer, which has ingested millions of cancer research papers and can offer a diagnosis based on a patient's medical profile. It's already being used—and trained—in Asia.)

According to the Google research blog, the company has already shown it can design a model that matches or exceeds the performance of a pathologist given unlimited time to examine the images. In this case, the pathologist spent 30 hours examining 130 slides, meaning an automated assistant could be a valuable time-saver in addition to helping standardize diagnoses.

Right now there are some limitations: the algorithms are narrowly focused on the cancers they've been trained to recognize, for example. They're specialists who can't stray outside their field, and they lack the breadth of knowledge a trained doctor would have. But they're already good at what they do, and they're likely only to get better.