Cervical cancer is highly preventable—but only if you live in a place where there’s access to the human papillomavirus (HPV) vaccine and routine gynecological screening.
Recent advances in vaccines that protect against HPV infection, the primary cause of the cancer, along with improved cervical screening tests, promise a future in which the disease is far less prevalent.
But we’re not there yet.
Currently, more than half a million women are diagnosed with cervical cancer each year, with 90% of the deaths occurring in low- and middle-income countries. In these countries, cervical cancer is often a leading cause of cancer deaths among women. If detected at the precancerous stage, the abnormal cells can be treated and cancer can be prevented.
The challenge is that access to vaccines and to screening is very limited in many parts of the world. In addition, the HPV DNA tests that provide the best screening are expensive, so many health care providers instead rely on a visual inspection of the cervix, which is highly subjective. This inspection, which takes place after an application of weak acetic acid, is known as VIA, or visual inspection with acetic acid. There must be a better way, one that is more accurate yet still inexpensive.
What if there were an imaging tool that was accessible, accurate, affordable, and easy to use—and that could reduce subjectivity and help make better diagnosis possible? Would it help reduce the workload of clinics if women without HPV infection were screened out more reliably?
A dozen years ago, these questions might have seemed out of reach, but today research happening at NIH, through a collaboration between NLM and the National Cancer Institute (NCI), is finding answers.
“After more than 12 years of work using biomedical informatics, data science, and health information technology, we have achieved a point where we can project that a handheld device like a smartphone will be able to screen for subtle changes that predict risk of cervical cancer that can be easily treated,” said Sameer Antani, acting branch chief for the Communications Engineering Branch and Computer Science Branch at NLM’s Lister Hill National Center for Biomedical Communications. “This is a pivotal time for our research.”
NCI led a seven-year, longitudinal cervical cancer screening study in Costa Rica that tracked over 9,000 participants and obtained more than 60,000 photographic film images that showed the progression of disease in infected women. Following this, NLM digitized the images, and then NLM and NCI researchers collaborated with Global Good, which is funded by Bill Gates, to analyze the images to develop an artificial intelligence (AI)-based detection algorithm for cervical cancer screening and tools that could help the cancer research community.
“The strength of the expertise of cancer epidemiologists and researchers working with the information science experts of NLM has allowed us to be on the verge of discovery,” said Antani. “This digital learning study would not have been possible without the framework of the intramural program at NIH.”
The effort expands
A year ago, NLM and NCI signed a formal agreement with Global Good and Unitaid, a foundation associated with the World Health Organization, to coordinate efforts to combat cervical cancer.
The potential impact of this partnership is huge.
“The eventual goal is not only to screen but to be able to recommend clinical management,” said Antani. “This is the most challenging part. This agreement is helping to enable field studies to gather data that could be used for training automatic decision-making computer apps to come up with predictions or guidance in a wide range of settings.”
The NLM team
Leading NLM’s efforts are Antani, Rodney Long, and Zhiyun (Jaylene) Xue. Each of them brings a different perspective.
Antani, a senior medical imaging scientist, contributes his expertise in AI, computer science, and engineering. Long is a mathematician with research interests in telecommunications, systems biology, image processing, and scientific/biomedical databases. Xue is an expert in colposcopic image analysis and brings her knowledge of medical image processing and AI. In addition, other staff members, including Peng Guo, a postdoctoral fellow, work with the team.
“It’s an ideal combination of skills, with experience for every step of automated image-based medical decision making,” said Antani.
Over the years, the NLM team has contributed to original research in image processing techniques and software development to support cancer research. As part of their research in image processing, they created novel apps that help localize and characterize the appearance of lesions on cervical images. One of these apps is the Boundary Marking Tool (BMT).
The BMT helps gynecologists mark suspicious regions on pictures of the cervix. “These markings from the BMT helped us understand how cervical cancer develops and also figure out how to recognize it before it invades,” said Mark Schiffman, a senior principal investigator at NCI, who is one of the leaders of NCI’s cervical screening research.
But how would the BMT work in the field? How reliable could it be if human experts were inconsistent in recognizing the severity of the disease?
“NCI had 20 experts in the field using the BMT and manually marking regions in the images that were suspect for predicting cancer,” said Long. “The variability in their opinions was enormous.”
The data gathered with the BMT have also been used for training and certifying aspiring students in the field through the Teaching Tool, a collaboration between NLM and the ASCCP (formerly, the American Society for Colposcopy and Cervical Pathology).
The recognition that doctors have great difficulty judging the severity of cervical changes, which is the basis for clinical management, has led to many efforts over the years to find a reliable aid for detecting early signs of disease.
Outcomes of these early research and development efforts, together with an improved understanding of cervical cancer stages and lesions, drove home the need for more research, particularly research that takes advantage of the recent explosive growth of a kind of AI called deep learning.
Researchers at Global Good explored the use of deep learning on the cervical cancer images from the Costa Rica study. The results were astonishingly good.
“In disbelief, NCI asked us to validate these early findings,” explained Antani. “Jaylene, Rodney, and I redid the experiments under different conditions and confirmed the findings. These were published in a landmark article in the Journal of the National Cancer Institute.”
The article emphasizes the role of deep learning. Antani describes deep learning as “a smart brute-force approach to learning patterns.” He explained, “It looks at every relevant pattern of image pixels for a class of images and learns meaningful correlations while also learning to suppress those that could fool the human eye. This deep learning-based approach is a focus of the efforts in the agreement.”
“Epidemiologists are particularly excited,” said Long. “One of the people who’s been in this field for years says that even if this performs [only] marginally better than what they use now, they would be really glad. They’re eager for any improvement.”
“I’m thinking about how we at NIH tend to think in terms of disease and population impact. All this work has a tremendous impact on the technical side, too,” Antani added.
Hope for the future
Because of the agreement, researchers are now exploring deep learning algorithms for cervical screening in nearly 100,000 women at 20 sites; the biggest ones are in Brazil, India, Nigeria, Zambia, and parts of the United States.
The hope is that one day a nurse in a low- or middle-income country, or in a rural area of the United States, could use her smartphone to run an AI app that not only screens for disease but also recommends a simple treatment, saving her patient's life through early detection and efficient treatment.
“I’m glad that artificial intelligence can contribute to this important research field,” said Xue. “Because I am a woman, I was especially happy to get involved in this project that will affect many women, especially those in low-resource regions where there are few other options.”
By Kathryn McKay, editor of NLM in Focus
AI, ML, and DL Simply Defined
Artificial intelligence (AI) is a computer performing tasks commonly associated with human intelligence. Humans code or program a computer to act, reason, and learn. An algorithm or model is the code that tells the computer how to act, reason, and learn.
Machine learning (ML) is a type of AI that is not explicitly programmed to perform a specific task but, rather, can learn iteratively to make predictions or decisions. The more data an ML model is exposed to, the better it performs over time.
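To make the idea of learning without explicit programming concrete, here is a hedged, illustrative sketch (not NLM's or NCI's actual software): a perceptron, one of the simplest ML models. It is never told the rule for logical AND; it learns the rule by repeatedly passing over labeled examples and correcting its own mistakes.

```python
# Illustrative only: a perceptron learning the logical-AND rule from examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

w = [0.0, 0.0]  # weights, adjusted with each mistake
b = 0.0         # bias term

def predict(x):
    # Fire (output 1) when the weighted sum of inputs crosses the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Repeat passes over the data until every example is classified correctly.
for _ in range(100):
    errors = 0
    for x, y in data:
        diff = y - predict(x)      # 0 when correct, +1 or -1 when wrong
        if diff:
            w[0] += diff * x[0]    # nudge the weights toward the right answer
            w[1] += diff * x[1]
            b += diff
            errors += 1
    if errors == 0:
        break

print([predict(x) for x, _ in data])  # matches the AND targets: [0, 0, 0, 1]
```

The point of the sketch is the loop: no line of code states the AND rule, yet after a few corrective passes the model's predictions match all four targets, and more data generally means better corrections.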
Deep learning (DL) is a subset of ML that uses artificial neural networks, modeled after how the human brain processes information, to learn from huge amounts of data. A well-designed and well-trained DL model can perform classification tasks and make predictions with high accuracy, sometimes exceeding human expert-level performance.
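What the extra layers of a neural network buy can be shown with a small, self-contained sketch (illustrative only, and far simpler than the image-analysis networks described in the article): a network with one hidden layer, trained by backpropagation on XOR, a pattern no single-layer model can represent. The starting weights below are arbitrary fixed values chosen so the run is reproducible.

```python
import math

def sigmoid(z):
    # Smooth "neuron activation" squashing any input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

# XOR training data: output 1 only when exactly one input is 1.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
Y = [0, 1, 1, 0]

# Fixed, deliberately asymmetric starting weights (reproducible run).
W1 = [[0.5, -0.4], [0.3, 0.6]]   # input -> hidden weights (2 hidden units)
b1 = [0.1, -0.2]
W2 = [0.7, -0.5]                 # hidden -> output weights
b2 = 0.05
lr = 0.5                         # learning rate

def forward(x):
    # Each layer transforms the previous layer's output.
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    o = sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
    return h, o

def mean_squared_error():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(X, Y)) / len(X)

loss_before = mean_squared_error()
for _ in range(20000):
    for x, y in zip(X, Y):
        h, o = forward(x)
        d_o = 2 * (o - y) * o * (1 - o)            # error signal at the output
        for j in range(2):
            d_h = d_o * W2[j] * h[j] * (1 - h[j])  # error pushed back to hidden unit j
            W2[j] -= lr * d_o * h[j]
            W1[j][0] -= lr * d_h * x[0]
            W1[j][1] -= lr * d_h * x[1]
            b1[j] -= lr * d_h
        b2 -= lr * d_o
loss_after = mean_squared_error()
```

After training, `loss_after` should be well below `loss_before`: the hidden layer has learned an intermediate representation of the inputs that makes the XOR pattern separable, which is the same layered-representation idea that, at vastly larger scale, lets DL models pick out subtle patterns in cervical images.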
Source: National Cancer Institute