These days it also means greater understanding of complex biomedical issues, thanks to Open-i, the National Library of Medicine’s novel system for searching and retrieving images and relevant bottom-line statements from the medical literature in PubMed Central.
“Open-i (pronounced ‘open eye’) is an innovative system for images accompanied by meaningful text, and it supports thousands and thousands of searches of vast multimedia collections,” explains George Thoma, PhD, Chief of the Communications Engineering Branch in NLM’s Lister Hill National Center for Biomedical Communications. “It is part of a huge, accelerating revolution in medicine.”
The Communications Engineering Branch began to develop Open-i in 2010, led by staff scientists Dina Demner-Fushman, MD, PhD, and Sameer Antani, PhD, experts in natural language processing and complex image retrieval, respectively. The challenge was to seamlessly blend imaging and natural language processing techniques.
“Open-i goes to the heart of the issue: identifying the most relevant information on a disease, say tuberculosis, for families seeking such information,” Dr. Antani says. “The goal is to help explain a topic by connecting its visual and textual components. It turns around the typical text-based search process by leading with the image, blurring the boundary between the visual and textual, and making things relevant.”

The Open-i team, left to right: (standing) Daekeun You, Zhiyun Xue, Md Rahman, Matthew Simpson, Michael Kushnir, and Suchet Chachra; (seated) Michael Chung, Sameer Antani, George Thoma, Dina Demner-Fushman, and Glenn Ford.
Launched in 2012, Open-i indexes and retrieves medical texts and images, and their captions, from more than 737,000 articles in PubMed Central. Developers focused Open-i on PubMed Central because its content is freely and publicly available, so there are no copyright issues. There are more than 2.23 million images in the Open-i database. Each enriched citation is linked to PubMed Central, PubMed, and MedlinePlus, as well as to the article itself at the publisher’s Web site. The site also hosts a separate collection of 8,000 X-rays, including clinical readings. Users may search by text words as well as by query images. Open-i averages some 690,000 hits a day from 41,000 visitors, including bots (programs that run automated tasks over the Internet).
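For readers who want to try a search programmatically rather than through the website, here is a minimal sketch of issuing a text query to an Open-i style search service and printing result titles. The endpoint path, the parameter names (query, m, n), and the response fields used below are assumptions made for illustration, not a documented interface; consult the Open-i site for the actual API, if one is offered.

```python
# Minimal sketch of a programmatic text search against an Open-i style
# service. The endpoint and parameter names ("query", "m", "n") are
# assumptions for illustration; check the Open-i documentation for the
# real interface.
import json
import urllib.parse
import urllib.request

BASE_URL = "https://openi.nlm.nih.gov/api/search"  # assumed endpoint

def search_openi(term, start=1, end=10):
    """Fetch one page of results for a text query."""
    params = urllib.parse.urlencode({"query": term, "m": start, "n": end})
    with urllib.request.urlopen(f"{BASE_URL}?{params}") as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    results = search_openi("tuberculosis")
    # The response field names ("list", "title") are also assumptions.
    for item in results.get("list", []):
        print(item.get("title", "<untitled>"))
```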
The system’s capacity to process images is possible only through distributed computing. The Communications Engineering Branch team maintains Open-i with three servers working together, each performing 16 separate tasks at lightning speed.
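The article does not describe how those tasks are organized internally. Purely as an illustration of the general idea of splitting image-processing work across a fixed number of parallel workers on one machine, here is a short sketch; none of the names or numbers below are drawn from the Open-i codebase.

```python
# Illustrative only: fan image-processing jobs out across a pool of worker
# processes, in the spirit of "16 separate tasks" per server. This is not
# the Open-i implementation.
from concurrent.futures import ProcessPoolExecutor

TASKS_PER_SERVER = 16  # assumed degree of parallelism on one machine

def index_image(image_path):
    """Stand-in for per-image work such as feature extraction or caption matching."""
    return image_path, len(image_path)  # placeholder result

def index_collection(image_paths):
    """Run the per-image work across a pool of worker processes."""
    with ProcessPoolExecutor(max_workers=TASKS_PER_SERVER) as pool:
        return dict(pool.map(index_image, image_paths))

if __name__ == "__main__":
    demo_paths = [f"figure_{i}.png" for i in range(100)]
    print(len(index_collection(demo_paths)), "images processed")
```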
Dr. Demner-Fushman credits an earlier NLM scientific innovation, the Unified Medical Language System (“a ‘little magic’ which the whole world now uses!”), as being key to Open-i’s ability to extract relevant conclusions from an article. Begun in 1986, the UMLS is a set of resources and software that combines health and biomedical vocabularies and standards, enabling computer systems to exchange information for tasks such as search engine retrieval and data mining.
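To give a flavor of what combining vocabularies buys a search engine, the toy sketch below maps synonymous surface terms to a single concept identifier, so that a query phrased one way can match text phrased another. The dictionary and identifiers are invented for illustration and are not actual UMLS content or the UMLS software.

```python
# Toy vocabulary normalization: map synonymous terms to one concept ID so
# that search and data mining treat them as the same thing. The terms and
# identifiers below are invented for illustration, not real UMLS content.
SYNONYM_TO_CONCEPT = {
    "heart attack": "C:0001",
    "myocardial infarction": "C:0001",
    "tuberculosis": "C:0002",
    "tb": "C:0002",
}

def normalize(term):
    """Return the concept ID for a term, or None if the term is unknown."""
    return SYNONYM_TO_CONCEPT.get(term.lower().strip())

def concepts_in(text):
    """Very naive concept spotting: substring-match every known term."""
    lowered = text.lower()
    return {cid for term, cid in SYNONYM_TO_CONCEPT.items() if term in lowered}

if __name__ == "__main__":
    print(normalize("Myocardial Infarction"))                       # C:0001
    print(concepts_in("Patient with a history of heart attack and TB."))
```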
Before such computerized search and retrieval, a person would actually have to go to a library, search through the index file, select a card, read the description of the information contained in the book or paper to judge whether it would be useful, take the paper out, read it, and then, and only then, determine whether it had been helpful.
“Now, Open-i users get immediate, relevant visual and textual information,” says Dr. Demner-Fushman. “The text explains the image, which visualizes the text.”
Throughout, Dr. Thoma and his team continually strive to refine and improve Open-i for everyone’s benefit. According to Dr. Demner-Fushman, they are now concentrating on making the system “responsive” to the design and style of every phone, tablet, laptop, and browser in common use, thereby assuring ever-speedier and easier searches for users.
Acknowledging there is always room for improvement, Dr. Demner-Fushman says they have also begun focusing on how to better recognize and categorize “regions of interest” within images using standardized terms. The ultimate goal, for example, would be to understand how a radiologist “sees” an MRI image of your heart attack and present her with the most relevant, useful information possible.
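As a rough sketch of what categorizing a “region of interest” with standardized terms could look like as data, the snippet below pairs a bounding box inside an image with a coded label. The field names, the example code, and the label are hypothetical, not Open-i’s actual representation.

```python
# Hypothetical shape of a "region of interest" annotation: a box within an
# image plus a label drawn from a controlled vocabulary. The field names
# and the example code are invented for illustration.
from dataclasses import dataclass

@dataclass
class RegionOfInterest:
    image_id: str       # identifier of the source image
    x: int              # top-left corner of the box, in pixels
    y: int
    width: int
    height: int
    concept_id: str     # code from a controlled vocabulary (invented here)
    concept_label: str  # human-readable form of that term

roi = RegionOfInterest(
    image_id="example-xray-0001",
    x=120, y=80, width=64, height=48,
    concept_id="C:0003",
    concept_label="opacity, left upper lobe",
)
print(roi)
```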
It’s all about patterns—of language and pixels—and standardizing and labeling them for deeper and deeper understanding.
As anyone can see.
by Christopher Klose, NLM in Focus contributor