State-of-the-art automated digital pathology solutions provide image creation, management and analysis systems comprising an ultra-fast pathology slide scanner, an image management system and a case viewer. Advanced software tools to manage the scanning, storage, presentation, research analysis and sharing of information complement this solution.
Optimization of digital pathology workflow requires communication between several systems. In this sense, the use of open standards for both digital slide storage and scanner management can accelerate the acceptance of digital pathology.
The DICOM standard, used for storing and exchanging medical images, now also supports digital slides. Other relevant standards are Health Level 7 (HL7) and terminology standards for encoding findings, such as Systematized Nomenclature of Medicine – Clinical Terms (SNOMED CT) and the International Classification of Diseases (ICD-10). These standards, among others, provide the data foundation for the analytics process, and therefore for workflow improvements derived from a deep understanding of the real opportunities to improve it. Consideration of these aspects, along with appropriate validation of the use of digital slides for routine pathology, can pave the way for pathology departments to go “fully digital.”
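To make the interoperability point concrete, consider how HL7 v2 carries findings between systems: a message is a series of carriage-return-separated segments (MSH, PID, OBX, etc.), each a pipe-delimited list of fields. The sketch below parses a hypothetical pathology result message with nothing but string splitting; the sample message, its LOINC code and field contents are illustrative assumptions, and a real integration would use a dedicated HL7 library rather than hand-rolled parsing.

```python
# Minimal sketch of parsing an HL7 v2 message. The sample ORU (result)
# message below is hypothetical; real integrations use an HL7 library.

SAMPLE_HL7 = (
    "MSH|^~\\&|LIS|PathologyLab|EMR|Hospital|20230101120000||ORU^R01|MSG0001|P|2.5\r"
    "PID|1||123456||Doe^Jane\r"
    "OBX|1|TX|22637-3^Pathology report^LN||Specimen shows no evidence of malignancy\r"
)

def parse_hl7(message: str) -> dict:
    """Split an HL7 v2 message into {segment_id: [list of field lists]}."""
    segments = {}
    for line in filter(None, message.split("\r")):  # segments end in \r
        fields = line.split("|")                    # fields are pipe-delimited
        segments.setdefault(fields[0], []).append(fields[1:])
    return segments

msg = parse_hl7(SAMPLE_HL7)
print(msg["MSH"][0][7])  # message type field: ORU^R01
print(msg["OBX"][0][4])  # the observation value (the finding itself)
```

The same finding could then be stored against the DICOM study and coded with SNOMED CT or ICD-10, which is exactly the "data ground" the analytics process depends on.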
Analytics has become a top priority for many organizations looking to wring business benefits out of their data (structured, unstructured and semi-structured). Moreover, big data is still relatively new to many organizations, and its significance for business processes and outcomes is changing every day.
Most people consider business intelligence capabilities an important starting point, with a focus on structured data analysis and reporting. And since several healthcare institutions already have systems in place such as PACS, RIS, LIMS and others, they are able to extract the right data to perform descriptive analysis and then make decisions and take action based on it.
With the rise of whole slide scanner technology, large numbers of tissue slides are being scanned, represented and archived digitally. While digital pathology has substantial implications for telepathology, second opinions and education, there are also huge research opportunities in image computing.
The increased use of slide scanning in pathology labs has sparked interest in the development and use of automatic image analysis algorithms. The intended goal of these algorithms is to help pathologists with tasks that are notorious for their observer variability and/or are tedious and time consuming. Example applications include quantification of immunohistochemical staining, nuclear morphometry, mitotic figure counting and detection of metastases. Some algorithms for quantification of immunohistochemical staining have already been approved by the U.S. Food and Drug Administration (FDA).
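At its core, quantification of immunohistochemical staining reduces to classifying pixels as stained or unstained and reporting a positivity fraction. The toy sketch below does this on a hand-written grayscale patch with a fixed intensity threshold; the patch values and threshold are illustrative assumptions, and real pipelines operate on RGB whole-slide tiles and separate the stains with color deconvolution before scoring.

```python
# Toy sketch of immunohistochemical (IHC) positivity scoring: classify
# pixels of a small grayscale patch as "stained" when their intensity
# falls below a threshold (darker = more stain), then report the
# positive fraction. Patch and threshold are made-up illustrations.

def positivity_fraction(patch, threshold=100):
    """Fraction of pixels darker than `threshold` (0 = black, 255 = white)."""
    pixels = [value for row in patch for value in row]
    positive = sum(1 for value in pixels if value < threshold)
    return positive / len(pixels)

# 4x4 patch: four strongly stained (dark) pixels out of sixteen.
patch = [
    [250, 240, 60, 245],
    [230, 50, 245, 250],
    [240, 250, 70, 235],
    [80, 245, 250, 240],
]
print(positivity_fraction(patch))  # 4/16 = 0.25
```

A reproducible number like this 25% positivity score is precisely the kind of output that reduces the observer variability mentioned above.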
It is well known that there is fundamental prognostic data embedded in pathology images. The ability to mine “sub-visual” image features from digital pathology slide images, features that may not be visually discernible by a pathologist, offers the opportunity for better quantitative modeling of disease appearance, and hence, possibly improved prediction of disease aggressiveness and patient outcome.
However, the compelling opportunities in precision medicine offered by big digital pathology data come with their own set of computational challenges. Image analysis and computer assisted detection and diagnosis tools previously developed in the context of radiographic images are woefully inadequate to deal with the data density in high-resolution digitized whole slide images.
Additionally, there has been recent substantial interest in combining and fusing radiologic imaging and proteomics- and genomics-based measurements with features extracted from digital pathology images for better prognostic prediction of disease aggressiveness and patient outcome.
Again, there is a paucity of powerful tools for combining disease specific features that manifest across multiple different length scales.
Essentially, deep learning applications relying on retrospective analytics or performing comparative analytics are already making their way to the market. For predictive and prescriptive use cases, however, it is still unclear how regulations will affect both the results and the solutions themselves. In some cases, the assumption is that such software is treated like any other IT solution: the software provider merely offers support, and final responsibility rests solely with the subject matter expert, in this case a medical doctor.
Research use cases have already begun. Zebra Medical Vision, for one, is already partnering with dozens of facilities on various research projects. Regarding next-generation computer-aided detection, recent examples are FDA approvals for RadLogics (2012) and HealthMyne (2016).
Population health analytics use cases are coming up, as well. Zebra Medical Vision feels confident it will go live in the Dell Cloud in 2016. In addition, a number of other companies are awaiting FDA decision or preparing to file. Clinical decision support use cases should be ramping up over the next three to five years. Diagnostic decision support use cases are likely five or more years away due to the intrinsic complexity and the FDA considerations.
Zebra Medical Vision, an Israeli startup that uses machine learning to teach computers to read and diagnose imaging data, last May raised $12 million in its latest funding round, including a re-investment from Salesforce.com Inc co-founder Marc Benioff.
Zebra has been building a database of millions of files such as CT scans and MRIs of real patients over the past three years, offering enough data so that machines can learn to accurately detect illnesses including breast cancer and health problems with bones, the liver and lungs, said President and co-founder Eyal Gura. Company developers are writing specific algorithms for each health issue and three have been cleared by the U.S. Food and Drug Administration, according to Gura.
The goal in applying quantitative image analysis methods to large numbers of images is to automatically detect abnormalities, segment them and identify phenotypes in the images that can be used for automatic disease classification and for "precision medicine." According to a number of publications, these deep learning methods have been applied in the following ways:
automated detection of masses on mammography;
automated classification of breast lesions;
automated classification of the type of a patient's cancer;
automated classification of the type and severity of eye disease in diabetic retinopathy.
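All of the tasks above share the same final step: a learned model maps features extracted from an image to class probabilities. As a minimal, purely illustrative sketch of that step, the code below runs a single linear layer followed by a softmax over two hypothetical classes; the feature vector, weights and class names are made-up stand-ins for what a trained deep network would actually learn from annotated data.

```python
import math

# Illustrative classification step: a linear layer maps a feature
# vector to one score per class, and softmax turns scores into
# probabilities. Weights here are hand-picked, not learned.

def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]  # shift for stability
    total = sum(exps)
    return [e / total for e in exps]

def classify(features, weights, biases):
    """One score per class (dot product + bias), then softmax."""
    scores = [
        sum(w * x for w, x in zip(class_weights, features)) + b
        for class_weights, b in zip(weights, biases)
    ]
    return softmax(scores)

# Hypothetical 3-feature input and 2 classes ("benign", "malignant").
weights = [[0.2, -0.5, 0.1], [-0.3, 0.8, 0.4]]
biases = [0.1, -0.1]
probs = classify([1.0, 2.0, 0.5], weights, biases)
print(probs)  # two probabilities summing to 1.0
```

A real system would stack many convolutional layers before this step and fit the weights on thousands of labeled images, but the input-to-probability contract is the same.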
IBM’s Automated Radiologist can read images and medical records. The software is code-named Avicenna and can identify anatomical features and abnormalities in medical images such as CT scans, and it also draws on text and other data in a patient’s medical record to suggest possible diagnoses and treatments. Avicenna is intended to be used by cardiologists and radiologists to speed up their work and reduce errors. It is currently being tested and tuned using anonymized medical images and records.
This development is in line with the rise of machine learning and image analysis from technology companies like IBM, Microsoft, Google, Apple, SAP and HP, along with several startups like SemanticMD, which provides a SaaS-based platform that enables the rapid training of medical image analysis applications and classifiers. Most of these companies were outside the healthcare vertical market just a few years ago, or did not even exist.
Other countries are investing in this segment, as well. In Australia, radiology service provider Capitol Health Limited has already moved in this direction by partnering with Enlitic for an “end-to-end transformation of medical diagnostics using deep learning for radiologists and healthcare providers.”
Therefore, if deep learning proves it can help achieve some of these high-level goals and starts making radiologists more productive, diagnoses more accurate, decisions more sound and costs more manageable, then – and only then – would it become a no-brainer: deep learning will revolutionize the field of medical imaging, even if this has to begin outside of the U.S.
The concept of connecting and monitoring medical imaging equipment via remote servers over the Internet is not new. This remote connectivity has allowed vendors to gain tremendous efficiencies in their maintenance and support functions by moving from an after-the-fact, break-and-fix service model to a proactive, preventative service model.
In 2015, the Big Three medical imaging vendors (Siemens Healthcare, GE Healthcare, Philips Healthcare) took major steps toward the next generation of the Internet of Medical Imaging Things.
According to a June 2015 study from McKinsey & Company, the economic effect from cloud-connected health technology could range from $170 billion to a whopping $1.1 trillion a year over the next 10 years. The lion’s share of that impact, the study says, will come from using IoM innovations to more effectively manage and treat chronic illness.
The key strategy starts by networking thousands of devices on the new Health Cloud, with the intent of connecting millions of imaging machines worldwide. In addition, with some strategic partners, the plan is to accelerate the development of new analytics solutions on Health Cloud, which these companies might customize for individual customers.
GE, for example, is already acting on the vision that computer-intensive image processing can effectively be shifted from local processors to the cloud. This can be a huge game-changer, both for imaging equipment and workstation designs. A good example is the cardiac MR application, ViosWorks from Arterys, one of the ISVs now in the Health Cloud ecosystem. This advanced application performs the data heavy-lifting in the cloud while applying deep-learning algorithms; traditionally, this would be performed on the MR workstation or the advanced visualization workstation.
Pixeon, a 2015 Frost & Sullivan Latin American Growth, Innovation & Leadership PACS Award recipient, is a leading Brazilian healthcare IT innovation company aggressively expanding into the Latin American market. The company was the only one from Latin America selected in 2016 to participate in the Mayo Clinic Global Medical Business Immersion to co-develop solutions. Hapvida, a healthcare provider in Brazil, optimizes the country’s biggest reporting center with Pixeon’s cloud solution.
The move toward IoM is gaining momentum. In fact, its adoption is inevitable. For starters, it serves the financial interest of key players in healthcare – the insurance industry, big pharma, hospitals and doctors. Secondly, it’s ideally suited to help meet the demands of today’s increasingly health-conscious and empowered patients.
While applications of deep learning analysis to medical images have only recently begun to be explored, the potential of such systems is immense due to the unique characteristics of medical images that make them ideal for deep learning.
For example, all medical images are regulated for the highest quality and are also manually annotated with a radiologist’s report, which serves as a ground truth. These qualities are ideal, and indeed necessary, since such systems require exceptionally high sensitivity and specificity given their importance in disease diagnosis and treatment planning.
However, patient privacy laws and policies make access to such medical images difficult, raising one of the most important questions as the field of medical image machine learning moves forward: exactly how large does our training data set have to be to solve a specific classification or detection problem with high accuracy?
Nevertheless, the extensive use of digital pathology, the adoption of standards, the expansion of cloud computing and the Internet of Medical Imaging are all generating vast amounts of data: images, text, audio, structured data, HL7 and DICOM messages, and more.
Given the potential of these data, healthcare institutions have a huge opportunity to optimize their workflows to a disruptive degree, based on new insights and a new data-driven decision-making process and mindset. This is not magic; it can be achieved through the implementation of data-driven programs, starting with well-established BI capabilities (skills and mindset first, then technology). And this should be part of a learning curve leading toward excellence in machine learning and image analysis capabilities.