We live in a world of artificial intelligence (AI). Today, it exists in the smartphones in our pockets and the smart speakers in our homes, to which we can ask, “Siri, what’s the weather like today?” or command “Google, play my favorite song,” or “Alexa, order my cart items from Amazon!”
But isn’t AI supposed to make machines behave like humans? Yes, indeed. Do we have that technology yet? No, we are still developing it. The human brain, and our intelligence, is a complicated thing to replicate, and it separates mankind from most other animals on Earth. There are several aspects to AI, only one of which is computer vision. We are already incorporating it in, and continue to develop it further for, several applications.
Frost & Sullivan research teams have for years been tracking developments in the computer vision domain and their potential to transform the health care, automotive, and communications industries. Several notable health care applications are discussed below.
The Basics of Computer Vision
Simply put, computer vision is the ability to “see” and recognize objects. If you see an image of a car, you can recognize it as a car, and distinguish its color, make, model, and possibly even the year. Infants, as they grow, build this ability to recognize people and objects. How do we replicate this for computers? Teaching a computer to have “vision” means helping it understand what an object is and how to deal or interact with it. Frost & Sullivan defines computer vision as “the science that provides computers with the ability to perceive and process images in much the way humans do. Computer vision enables machines to process and extract useful information from an image or a sequence of images. Computer vision technologies focus on developing algorithms that achieve visual understanding. Computer vision draws upon knowledge from computer science, cognitive science, biology, physiology, mathematics, and electrical engineering.”
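The learn-from-examples idea described above can be illustrated with a deliberately tiny sketch. The images, labels, and the nearest-neighbor comparison below are all illustrative assumptions, not how any production system works; real computer vision relies on deep neural networks trained on vast labeled data sets.

```python
# Toy illustration of recognizing an object by comparing it to labeled
# examples. Images here are tiny grayscale grids (lists of pixel rows)
# with made-up values.

def pixel_distance(img_a, img_b):
    """Sum of absolute pixel differences between two same-sized images."""
    return sum(
        abs(a - b)
        for row_a, row_b in zip(img_a, img_b)
        for a, b in zip(row_a, row_b)
    )

def classify(image, labeled_examples):
    """Return the label of the stored example most similar to `image`."""
    best_label, _ = min(
        labeled_examples,
        key=lambda pair: pixel_distance(image, pair[1]),
    )
    return best_label

# "Training" data: a bright patch labeled "car", a dark patch labeled "not car".
examples = [
    ("car", [[200, 200], [200, 200]]),
    ("not car", [[10, 10], [10, 10]]),
]

print(classify([[190, 205], [198, 210]], examples))  # prints "car"
```

The point of the sketch is only the principle: the computer is never told what a car looks like; it is shown labeled examples and generalizes from them.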
A computer that can “see” adds value to what we do today, especially because it can augment our capabilities and help us do a better job. This is applicable whenever a visual perspective is important—for example, in a self-driving car.
Advances in AI and related areas such as deep learning, sensors and graphical processing units (GPUs); the increase in open source platforms; and the availability of public data to train computers are all driving the adoption of computer vision. Still, we are not as advanced with our technology as we would like to be: understanding emotions, for example, remains a challenge. To enable human-level intelligence, computers have to be trained using vast amounts of data sets and examples, but obtaining data at that magnitude is a challenge. Most important of all are the privacy concerns, ethical issues and regulatory hurdles that need to be cleared.
Computer-Aided Health Care Applications
The National Institutes of Health estimates that 12 million Americans are affected by diagnostic errors every year. The error rates are 25% for false negatives and close to 2% for false positives. Radiologists, because of time constraints, often focus on the most critical metrics and miss finer details. Moreover, early diagnosis of certain health conditions, such as fatty liver disease and type 2 diabetes, can prevent serious related conditions including cardiovascular disease and bone fractures. The health care industry, therefore, requires scalable solutions that help with quick and accurate diagnoses.
The solution? Computer vision that identifies suspect areas on an X-ray (by pre-marking them, for example) so the radiologist does not miss them, known as computer-aided detection (CADe), or that even diagnoses a condition and directly informs the radiologist, known as computer-aided diagnosis (CADx). Naturally, CADe is the more prevalent solution, but several CADx solutions are emerging.
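The pre-marking step at the heart of CADe can be reduced to a caricature: scan an image, flag locations that look abnormal, and hand those locations to a human for review. The grid, intensity values, and fixed threshold below are illustrative assumptions only; real CADe systems apply trained models to full-resolution medical images.

```python
# Toy CADe-style sketch: flag "suspect" pixels in a small scan so a
# human reviewer knows where to look first. All values are made up.

def flag_suspect_regions(scan, threshold):
    """Return (row, col) coordinates whose intensity exceeds the threshold."""
    return [
        (r, c)
        for r, row in enumerate(scan)
        for c, value in enumerate(row)
        if value > threshold
    ]

scan = [
    [12, 15, 14],
    [13, 95, 16],  # one unusually bright pixel
    [11, 14, 13],
]

print(flag_suspect_regions(scan, threshold=80))  # prints [(1, 1)]
```

Note that this mirrors the CADe/CADx split described above: the function only points at locations (detection); deciding what the anomaly means (diagnosis) is left to the radiologist, or, in a CADx system, to a second model.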
CADe examples include Zebra Medical Vision for diagnosing emphysema, detecting breast cancer, and assessing cardiovascular disease risks; and MaxQ-AI (formerly MedyMatch) for diagnosing stroke or internal bleeds after trauma to the brain.
CADx, a recent development, saw two U.S. Food & Drug Administration (FDA) approvals in 2018: Imagen Technologies’ automated wrist fracture diagnosis AI software, and IDx LLC’s IDx-DR system for autonomously diagnosing diabetic retinopathy. The implications for the latter are tremendous: any minimally trained technician can screen populations for the complication, even in rural areas, as long as there is an internet connection. Diagnosed patients can be referred to eye specialists, preventing debilitating effects that could result in blindness. For perspective, the number of diabetic retinopathy cases globally is expected to rise to 191 million by 2030, per the International Diabetes Federation.
Ongoing Frost & Sullivan analysis of the medical imaging AI space has found more than 90 active companies. That number will continue to climb as more start-ups emerge from stealth mode. Frost & Sullivan notes that Chinese companies including 12sigma Technologies, Wanliyun Medical Information Technology and Yitu Healthcare are making a mark in the space, but Huiying Medical Technology is perhaps the most notable: it has seen stellar adoption of its technology, and has collaborated with more than 700 hospitals in China, 200 of which are considered top-tier. Alibaba Group and Tencent are also pushing into the medical imaging AI space with their AlibabaCloud ET Medical Brain and Miying AI Medical Innovation System (AIMIS) platforms, respectively. Easier access to medical data through collaborations with hospitals, coupled with government policies supportive of AI adoption, is a boon for the technology in China. The United States, on the other hand, is a hub for start-ups and also has larger medical imaging companies entering the AI space either with native applications or through partnerships.
In the same analysis, neurology and oncology were found to be the top areas of focus for medical imaging AI companies. The availability of large data sets for cancer and the brain allows for training AI algorithms. Respiratory care, cardiac care, orthopedics, and ophthalmology are also gaining traction.
The Road Ahead
Computer vision technology still needs significant improvements, although in medical imaging we are already seeing results. Several hurdles, including regulatory requirements and access to data to “train” computer algorithms, must be cleared, and as a society we need to make conscious decisions about the use of AI as it pertains to our health and wellness. But in all likelihood, we are not far from the day when the following scenario plays out:
Bob was found unconscious at his home, and was taken to the nearest emergency department. MRI scans of his brain showed suspect areas, and the hospital’s AI system flagged them as urgent so the radiologist would evaluate them immediately. Not wasting a minute, the radiologist studied the scans and found the AI system’s diagnosis of stroke to be accurate; he immediately clicked a button, which resulted in a neurologist being pinged on his smartphone with the information and the scans. The neurologist, who was in the hospital, rushed to the emergency department and instructed the team of nurses to prepare for the next set of procedures. Bob is now out of danger, thanks to the AI system and incredible care coordination, which together helped shave several minutes off the treatment process—the difference between life and death.
Copyright © 2018 Frost & Sullivan