Artificial Intelligence in Digital Pathology in Gastrointestinal Cancers

The implementation of digital pathology (DP) will revolutionize current practice by providing pathologists with additional tools and algorithms to improve workflow. Furthermore, DP will open up opportunities for the development of artificial intelligence (AI)-based tools for more precise and reproducible diagnosis through computational pathology. One of the key features of AI is its capability to generate perceptions and recognize patterns beyond the human senses. Thus, the incorporation of AI into DP can reveal additional morphological features and information. At the current rate of AI development and adoption of DP, interest in computational pathology is expected to rise in tandem. There have already been promising developments related to AI-based solutions in prostate cancer detection; however, in the gastrointestinal (GI) tract, development of more sophisticated algorithms is required to facilitate histological assessment of GI specimens for early and accurate diagnosis.

  • digital pathology
  • computational pathology
  • whole-slide imaging
  • artificial intelligence

1. Introduction

Gastrointestinal (GI) cancer is a collective term encompassing various cancers related to the GI system, which comprises the organs from the oral cavity down to the anal canal. Malignancy can develop in any part of the GI tract, as it is constantly and directly exposed to carcinogens in the environment through the ingestion of food. Given the numerous hotspots for malignant transformation and the constant exposure to carcinogens, GI cancer accounts for 26% of global cancer incidence and 35% of global cancer-related mortality [1]. The recent surge in incidence and mortality is linked to the increasing prevalence of modifiable risk factors, such as sedentary lifestyle, obesity, unhealthy diet and other metabolic abnormalities [2]. Recent analysis revealed a significant increase in GI cancer incidence in young adults aged 25–49 years, alerting the public to emerging medical burdens [3]. The five major GI cancers causing considerable global burdens are oesophageal squamous cell carcinoma, gastric adenocarcinoma, colorectal cancer (CRC), hepatocellular carcinoma and pancreatic cancer. In order to identify and classify GI cancers, histopathological analysis of endoscopic biopsies or resected tumour specimens remains the gold standard for disease diagnosis. Unfortunately, early diagnosis of GI-related cancers is often missed due to a lack of specific symptoms. Medical attention is often sought only when non-specific symptoms become unbearable [4].
In addition to the existing shortage of medical professionals, there are various reasons for the growing demand for histopathological diagnosis, including increased morbidity within the aging population, the rising incidence of malignancies among young adults and the expansion of cancer screening programs, all of which increase the number of annual laboratory cases [5][6][7][8]. Furthermore, the complexity of cases, as well as arduous criteria for case reporting, constitute burdens to histopathologists and may prolong the turnaround time (TAT) associated with generating reports in anatomical pathology (AP) laboratories [9]. Standardization among AP laboratories is crucial for pathologists to make precise diagnoses. Medical laboratory technologists are not only responsible for preparing tissue samples for diagnosis but also for ensuring satisfactory laboratory performance through quality assurance programs while operating in compliance with laboratory accreditation [10][11]. With limited manpower and restricted laboratory budgets, the adoption of more advanced technology is imperative to address the overwhelming workload in an AP laboratory.

2. Current Histopathology Practices and Opportunities in Digital Pathology

In modern clinical practice, the advancement of technology enables the development of highly efficient, high-throughput and high-resolution (up to 0.23–0.25 μm/pixel) whole-slide imaging (WSI) devices that can digitize traditional glass slides into whole-slide images (WSIs) that can be viewed on a display monitor for remote diagnosis of cases [12][13]. This digital process is termed “digital pathology” and has additional benefits compared to traditional pathology, including the ease of quickly and securely sharing pathology WSIs worldwide with other pathologists for second opinions, as well as providing a better view of the tissue with additional annotation and measurement tools while allowing for rapid access to archived cases without loss of quality [14]. Previous literature supports the safe transition to digital pathology, showing high concordance between diagnoses made using digital pathology with WSIs and those made using traditional slides with light microscopy [15][16][17][18][19]. The quality of WSIs is a major factor that affects concordance rates and diagnostic accuracy. In order for high-quality WSIs to be generated, specific tasks must be performed by medical laboratory technologists. The tissues of interest extracted from esophagogastroduodenoscopy or surgery must be preserved in fixatives, such as 10% neutral buffered formalin, to prevent autolysis of biological tissue while maintaining tissue structure and integrity. The tissue is then embedded, sectioned, stained and mounted onto a glass slide, ensuring the quality of the tissue and glass slide. This includes checking for artefacts on the tissue, including possible scores, tears, floating contamination and thick or thin sections, and ensuring that the glass slide is intact and free from dust before and after digitization [20][21].
Considering the forthcoming transition from traditional pathology to digital pathology, there is a demand for the development of new tools to facilitate the reporting process for pathologists. In recent years, powerful WSI analysis software tools that are user-friendly yet packed with clinically relevant tools have been developed. The majority of such software applications are open-source and freely available; these include ImageJ 1.53s (Madison, United States), QuPath 0.3.0 (Belfast, United Kingdom), SlideRunner 2.2.0 (Erlangen, Germany) and Cytomine (Liège, Belgium) [22][23][24]. Such software applications are capable of handling large WSIs and metadata generated from different hardware brands and contain interactive drawing tools for annotation. They also include features that can perform cellular detection and feature extraction. Furthermore, to complement software development, the cost of hardware required for high-performance computation, including high-speed network infrastructure and data storage, has become increasingly affordable. This has led to increased adoption of digital pathology in major hospitals worldwide. However, more investment is required to expand the roles of digital pathology in most hospitals [25]. With an increasing number of digital pathology centres, large and high-quality WSI databases will gradually emerge [17][26][27][28]. This growth increases the feasibility of obtaining large datasets and designing algorithms for analysis of WSIs using computer software through a broad range of methods for the study of diseases. This concept is termed computational pathology [29]. The combination of computational pathology with digital pathology opens new opportunities for case diagnosis. Algorithms developed for the early detection of cancer are important to improve patients’ chance of survival [30].
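Because a WSI can span several gigapixels, computational pathology algorithms rarely process the image whole; a common first step is to split it into fixed-size tiles for downstream analysis. The following minimal sketch illustrates this tiling step with NumPy (the 256-pixel tile size and the toy array are illustrative assumptions, not values from the text):

```python
import numpy as np

def tile_image(image: np.ndarray, tile: int = 256) -> np.ndarray:
    """Split an RGB image array into non-overlapping tile x tile patches.

    Edge regions smaller than the tile size are discarded, a common
    simplification when tiling gigapixel whole-slide images.
    """
    h, w, _ = image.shape
    patches = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patches.append(image[y:y + tile, x:x + tile])
    return np.stack(patches)

# A toy 512 x 768 "slide" yields a 2 x 3 grid of 256-pixel tiles.
slide = np.zeros((512, 768, 3), dtype=np.uint8)
patches = tile_image(slide)
```

In practice, a tissue-detection step usually filters out background tiles before they are passed to a model, since most of a slide is empty glass.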
Automated screening of a large number of specimens may provide improved accessibility and make diagnosis and treatment more affordable [31]. Furthermore, the escalating number of cancer specimens with increasingly complex classification and the urgent need for shorter TAT have resulted in an upsurge in workload in diagnostic pathology services [32]. With these challenges in mind, more research has been focused on the use of AI to perform diagnostic tasks in histopathological examination. Some models have been proposed to use computational methods to triage patients and present urgent cases to pathologists with prioritization. Despite the excellent results and potential application of AI in histopathological diagnosis prediction [33][34][35], there are still significant challenges to overcome prior to the implementation of these AI-based tools and algorithms. In the following section, we will explore the critical areas that hinder the current development and deployment of AI-based tools in clinical practice.

3. Current Challenges of Algorithm Development in Computational Pathology

AI has become a popular tool in the field of medical image processing and analysis [36]. AI is capable of extracting meaningful information from images that cannot be obtained with the naked eye. Machine learning is a subset of AI that utilizes algorithms developed to process these data and perform tasks without explicit programming. Deep learning is a subset of machine learning that uses highly complex structures of algorithms inspired by the neural networks of the human brain [37]. In recent years, AI has become increasingly popular in histological imaging analysis, with clinical applications that include primary tumour detection, grading and subtyping [38][39][40]. Several well-known research challenge competitions, such as CAMELYON16 and CAMELYON17, encourage the development of novel and automated solutions that are clinically applicable to improve diagnostic accuracy and efficiency [41][42]. This is achieved through various combinations of machine learning and deep learning techniques, which will be discussed further. Large annotated datasets are also indispensable for the development of successful deep learning algorithms. Through the adoption of digital pathology for diagnosis, it is possible for pathologists to annotate regions of interest during the case reporting process, which may indirectly facilitate the generation of such valuable datasets for future algorithm development [43]. Furthermore, the integration of these AI tools is becoming inevitable due to the aging population worldwide and the shortage of pathologists [44]. The training of new pathologists requires a long period of time in order to ensure competency [45]. Thus, there is an urgent need to develop clinically applicable AI-based tools to relieve the high workload of pathologists, producing more precise and reproducible diagnoses while reducing the TAT of cases.
However, there remain obstacles and challenges that hinder the development process, which will be discussed below.

3.1. Colour Normalization

For routine histopathological diagnosis, haematoxylin and eosin (H&E) staining is the most preferred method for visualizing cellular morphology and tissue structure. Haematoxylin stains cell nuclei purplish-blue, whereas eosin stains the extracellular matrix and cytoplasm pink. The patterns of coloration play a central role in the differentiation between cells and structures. Colour and staining variations can be affected by, but are not limited to, the thickness of the specimen, non-standardized staining protocols and slide scanner variations [46]. It is important to perform colour normalization to ensure consistency across WSI databases, as it can affect the robustness of deep learning models. Common methods for colour normalization include histogram matching [47][48], colour transfer [49][50][51] and spectral matching [52][53][54]. However, these methods heavily rely on the expertise of pathologists, which makes WSI colour normalization difficult for general researchers with limited knowledge of the colour profile of histological staining. Moreover, the process of manually assessing and adjusting each WSI is impractical. Several algorithms capable of performing colour normalization by deep learning using a template slide image for reference have been proposed, such as StainGAN [55] and SA-GAN [56]. These models have shown promising results with respect to ensuring consistent representation of colour and texture [57][58][59]. Nevertheless, future work is needed to explore the clinical performance of AI-based tools developed using such colour normalization systems.
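Of the methods above, colour transfer can be illustrated compactly: the colour statistics of an input tile are matched to those of a reference template. The sketch below is a deliberately simplified version that matches per-channel mean and standard deviation directly in RGB rather than in a perceptual colour space, as the classical colour-transfer method does; the array shapes and values are hypothetical.

```python
import numpy as np

def colour_transfer(image: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Match each channel's mean and standard deviation to a reference tile.

    Classical colour transfer operates in a perceptual colour space; this
    simplified illustration applies the same statistics matching in RGB.
    """
    img = image.astype(np.float64)
    ref = reference.astype(np.float64)
    out = np.empty_like(img)
    for ch in range(img.shape[-1]):
        mu_i, sd_i = img[..., ch].mean(), img[..., ch].std()
        mu_r, sd_r = ref[..., ch].mean(), ref[..., ch].std()
        scale = sd_r / sd_i if sd_i > 0 else 1.0
        # Recentre and rescale this channel to the reference statistics.
        out[..., ch] = (img[..., ch] - mu_i) * scale + mu_r
    return np.clip(out, 0, 255).astype(np.uint8)

# A tile matched to a uniform reference takes on the reference colour exactly.
img = np.arange(48, dtype=np.uint8).reshape(4, 4, 3)
ref = np.full((4, 4, 3), 100, dtype=np.uint8)
normalized = colour_transfer(img, ref)
```

Deep-learning normalizers such as those cited above learn this mapping instead of computing it from fixed statistics, which makes them more robust to stain variation across laboratories.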

3.2. Pathologist Interpretation for Model Training

For histopathologists, understanding normal histology is essential for investigating abnormal cells and tissue. Final interpretation comprises several processes, including visualization, spatial awareness, perception and empirical experience. When developing new algorithms for the classification of diseases, the wide variation in interpretation cannot be easily represented on a simple categorical scale; forcing it into one oversimplifies the classification task and leads to a loss of detail. Physiologically, cancers are characterized by increased cellular proliferation, invasive growth, evasion of apoptosis and alterations of the genome and gene expression [60]. With H&E staining, histopathologists can identify changes in cellular characteristics through several broad features of malignant cells, including anaplasia, loss of polarity, hyperchromatism, nuclear pleomorphism and irregularity, abnormal distribution of chromatin, prominent nucleoli and atypical mitotic figures. Diagnostic reports are often used as the ground truth to label datasets. However, pathologists may use descriptive terminology to explain pathology concepts that are difficult to interpret and categorize [61]. Hence, it is important for researchers to obtain diagnostic reports with standardized reporting, including the specimen type, site, histologic type, grading and staging, in accordance with the American Joint Committee on Cancer, which represents different categories of histopathological knowledge under fuzzy ontology [62], and to streamline the process of algorithm development to generate more accurate disease classifications [63].

3.3. Model Transparency and Interpretability for Deployment of AI-Based Tools in Clinical Practice

The incorporation of digital pathology, let alone computational pathology, into the clinical workflow requires solid validation of WSI analysis. However, various studies have reported high discordance between diagnoses made using WSI and those made with light microscopy in GI malignancies [16][64][65][66]. More evidence is required to demonstrate that diagnoses made through digital pathology perform as well as conventional light microscopy diagnoses to ensure reliable and safe application in digital pathology practice [64]. According to practical recommendations by the Royal College of Pathologists regarding the implementation of digital pathology [67], validation and verification processes should be performed on the image analysis system to demonstrate its clinical utility before integration into clinical workflows. Thus, it is also of considerable importance for AI-based tools to be transparent and interpretable in order to address possible moral and fairness biases by providing evidence for a specific decision [68]. However, there is a dilemma when designing new algorithms, as most algorithm development focuses on using sophisticated deep learning and ensemble methods—so-called “black-box” models—to tackle multidimensional problems. On the other hand, much simpler methods, such as linear regression or decision-tree-based algorithms, may not be sophisticated and powerful enough to achieve the desired outcome [69][70][71][72]. Nonetheless, the demand for more explainable AI is on the rise [73]. Developers of new AI-based tools targeting applications in digital pathology should continuously assess the explainability of their models to meet the expectations of various stakeholders. They should also consider the ethical concerns and possible regulatory barriers imposed by governments and professional bodies regulating practice [74][75].

4. Clinical Insight for Selected AI Applications in Early Diagnosis and Monitoring of the Progression of GI Cancer

The application of AI in GI cancer is an important and rapidly growing area of research. Early diagnosis and monitoring of GI cancer can minimize cancer incidence, lower mortality rates and reduce the cost of care. Effective clinical management is a prerequisite to achieve these goals, and the incorporation of AI in healthcare systems is indispensable. Many AI-based software applications and algorithms have been developed to assist in the detection of GI premalignant lesions. In addition, several investigations have shown the impact of advanced AI algorithms for tumour subtyping by providing valuable information that may affect subsequent treatment planning and prediction of patient outcomes. Finally, promising results have been demonstrated by various intelligent WSI analytical neural networks for the screening of GI cancer, ranging from AI monitoring to providing rapid analysis and diagnosis of GI cancer, optimizing the workflow in an AP laboratory.

4.1. Detection of Early GI Premalignant Lesions

Early detection of precancerous lesions is crucial for identifying certain GI cancers, especially gastric cancer (GC) and CRC [76], which are associated with high incidence and mortality. As chronic H. pylori infection is a major risk factor for GC, evaluation of H. pylori is critical in gastric biopsies. Identification of these organisms requires examination of the slide under high magnification (400×). Although this bacterium can be easily identified in H&E-stained samples if present in large quantities, identifying cases with scanty numbers of H. pylori can be extremely time-consuming. Although ancillary staining (e.g., Warthin–Starry staining, IHC) is possible, these methods require extra cost and time for diagnosis. Deep learning assistance [77][78] can further speed up GC diagnosis and provide alternative solutions in low-resource settings, where ancillary stains are not readily available. Klein et al. [78] designed VGG-based algorithms under an active learning scheme. By utilizing algorithms that proactively learn from standard GC histology images with H. pylori and self-generate annotations of regions containing the organism, the amount of pixelwise labelling required from pathologists can be significantly reduced, eliminating the need for large annotated image datasets established by pathologists; only a small amount of manual annotation is required to develop AI applications for the detection of H. pylori. Unlike GC, differentiating high-risk colorectal polyps from low-risk polyps (e.g., innocuous hyperplastic polyps) is an important task in CRC screening. As identification of high-risk polyps is based on characterization of specific polyp types (e.g., sessile serrated polyps) [79] and considerable interobserver variability exists among pathologists [80][81][82], accurate diagnosis of high-risk polyps is needed for effective and early detection of CRC.
As the US Preventive Services Task Force [83] recommends regular CRC screening for adults aged 50–75 years, AI applications [84][85][86] for classification of high-risk colorectal polyps could demonstrate their clinical utility in prioritising slides with a higher likelihood of malignancy. Wei et al. [86] demonstrated that their AI algorithms for colorectal polyp detection can further highlight regions of high-risk polyps on WSIs to provide effective, consistent and accurate diagnoses, demonstrating that AI systems, such as computer-aided diagnosis, can be developed to assist pathologists in the interpretation of CRC WSIs. However, as polyp annotations were provided by several pathologists for AI model training, similar errors were found in both pathologists and AI applications with respect to classifying high-risk and low-risk polyps. This suggests that future AI systems for colorectal polyp detection may require additional development with extra manual annotations by different experienced GI pathologists to reduce classification errors. Currently, diagnostic procedures still rely heavily on pattern recognition by pathologists to identify regions of interest. Future AI algorithms will guide pathologists by highlighting areas of interest, such as H. pylori and high-risk polyps, for confirmation. Given the promising performance of AI algorithms in accurate delineation of premalignant lesion regions, pathologists might adopt AI-based tools in clinical workflows with increased confidence. Eventually, this will reduce the time spent per case and allow for early detection and diagnosis of patients with GI premalignant lesions.
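The slide-prioritisation idea described above reduces to a very small sketch: given per-slide probabilities from some high-risk-polyp classifier, the review queue is sorted so the most suspicious slides surface first. The case IDs and scores below are hypothetical, not values from the cited studies.

```python
def triage(cases: dict) -> list:
    """Order case IDs so slides with the highest predicted probability of
    harbouring a high-risk lesion are presented to the pathologist first."""
    return sorted(cases, key=cases.get, reverse=True)

# Hypothetical model outputs: probability that each slide contains a
# high-risk polyp.
queue = triage({"case-01": 0.12, "case-02": 0.91, "case-03": 0.47})
```

Replacing the traditional first-in-first-out review order with such a score-ordered queue is what allows likely-malignant cases to reach a pathologist sooner without changing total workload.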

4.2. Tumour Subtyping Characterization and Estimation of Patient Outcome

In histopathology, the identification and classification of tumour subtypes is important for personalized medicine [87]. As certain gene mutations and molecular alterations are associated with specific morphological changes [88], deep-learning-based image analysis has the potential to uncover molecular tumour subtypes and build robust classifiers to enhance treatment response in cancer patients. Microsatellite instability (MSI) refers to DNA mismatch repair deficiency resulting in the accumulation of mutations within short repetitive sequences of DNA (microsatellites). MSI testing is critical for the treatment of GI cancer [89], as CRCs with high MSI are associated with poor response to conventional chemotherapy but improved response to immunotherapy [90][91], leading to improved median overall survival [92]. However, due to the high cost of PCR-based MSI testing [93], many studies have focused on developing deep learning methods based on H&E imaging to provide fast and accurate MSI detection for GC [94][95] and CRC [95][96][97] in countries with limited resources, especially in Latin America and the Caribbean [98]. Due to the lack of exact 3D quantitative MSI information, training deep learning models without accurate information may limit prognostic performance in GI cancer patients with MSI. Despite imperfect specificity, the increasing amount of MSI-related AI research indicates that AI will rapidly be applied to screen for MSI in GI cancers to lower laboratory operating costs. Assessment of survival outcomes in GI cancer is another ongoing area of AI research. Many studies have explored whether deep learning applications can be used as prognostic tools for GC [99] and CRC [100][101][102] based on H&E-stained WSIs.
Using deep learning, suspected GI tumour tissue (e.g., tissue microarrays [99][100][102] and stroma [101]) can be assessed quantitatively using H&E-stained WSIs to assist pathologists in identifying patients at high risk of mortality for immediate advanced treatment. Furthermore, some pioneering studies [103][104] have attempted to explore consensus molecular subtype (CMS) classification of CRC and patient survival outcomes using deep learning. With the capacity to identify tumour subtypes and perform risk assessment of cancer patients, AI applications can provide a cost-effective and time-saving solution for clinical decision making. Guided by AI with high reproducibility and objectivity, more clinically relevant information can be generated from WSIs to allow for improved clinical management and to enhance the survival of GI cancer patients. Performing supplementary laboratory tests, including PCR and IHC tests, remains the gold standard for determining the molecular tumour subtype profile of GI cancer. Advancements in AI algorithms allow for application in clinical settings with limited resources to detect specific biomarkers, such as MSI, using commonly available H&E-stained images, facilitating early personalized treatment. Additionally, AI applications will allow for improved allocation of medical resources by shortening the time required to identify GI cancer patients at high risk of mortality for earlier treatment.

4.3. Screening GI Cancer in Daily Clinical Operation

As clinical follow-up is essential to reduce cancer mortality [31], an effective pathology service must start by providing early detection and accurate cancer diagnosis. The shortage of pathologists continues to worsen globally, resulting in delayed cancer diagnosis and treatment, especially during the recent COVID-19 pandemic [105][106][107]. The pressure to provide fast and accurate diagnoses with an overloaded histopathology workforce has driven the transformation of practice from conventional light microscopy to digital pathology with AI analytical applications. Studies have focused on improving clinical workflow by diagnosing various GI cancers, including oesophageal cancer [108], GC [109][110], etc. Pertinently, Gehrung et al. [108] established an H&E-based deep learning system to identify and triage oesophageal cases with Barrett’s oesophagus (BO) for early detection of oesophageal adenocarcinoma. The AI application resulted in a workload reduction of more than 55% for BO identification after triage and further demonstrated that the developed AI system can improve operational efficiency and support clinical decision making in a realistic primary care setting. However, the deep learning system developed by Gehrung et al. [108] did not tackle the problem that successful training of algorithms requires large annotated WSI datasets established by pathologists [43]. Campanella et al. [111] demonstrated a weakly supervised system, trained on diagnostic data that can be automatically retrieved from pathology reports, to triage prostate, skin and breast cancer cases. A workload reduction of more than 65% was observed [111]. As weakly supervised algorithms have proven their value in cancer screening, efforts should be made to develop AI applications to triage and identify GI cancers in daily operation without further overloading the histopathology workforce.
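A core ingredient of weakly supervised systems such as that of Campanella et al. is the aggregation of tile-level predictions into a single slide-level score, commonly by max-pooling, so that only the slide-level label retrieved from the pathology report is needed for training rather than pixel-level annotations. A minimal sketch of this aggregation step (the tile probabilities are hypothetical model outputs):

```python
import numpy as np

def slide_score(tile_probs: np.ndarray) -> float:
    """Max-pooling aggregation from multiple-instance learning: a slide is
    suspicious if any single tile looks malignant, so the slide score is the
    maximum tile probability."""
    return float(tile_probs.max())

# Hypothetical tile-level probabilities from a tile classifier.
benign = np.array([0.02, 0.10, 0.05])
suspicious = np.array([0.03, 0.96, 0.11])
```

Thresholding this slide score is what enables triage: slides scoring above the threshold are flagged for priority review, while low-scoring slides can be deferred, which is where the reported workload reductions come from.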
At present, histological cases are reviewed in chronological order in traditional AP laboratories. Clinical needs and the current acceptability of AI have not been well studied and discussed. One way to demonstrate the clinical usefulness of AI is to measure the potential workload reduction associated with automatic review of benign/disease-free cases. With the utilization of AI for case screening, cases suspicious for GI cancer can be flagged for rapid review, and cases with no potential indication of GI cancer can be reviewed semiautomatically (Figure 3B,D). By measuring the change in pathologist workload in prospective trials [112], the clinical usefulness of AI can be assessed, enabling earlier and improved treatment planning. Furthermore, using slide-level diagnoses in a weakly supervised setting is also recommended for AI algorithm development. This could alleviate the overwhelming workload in AP laboratories by freeing pathologists from generating pixel-level annotations for future algorithm development in clinical settings.

References

  1. Arnold, M.; Abnet, C.C.; Neale, R.E.; Vignat, J.; Giovannucci, E.L.; McGlynn, K.A.; Bray, F. Global Burden of 5 Major Types of Gastrointestinal Cancer. Gastroenterology 2020, 159, 335–349.e15.
  2. Islami, F.; Goding Sauer, A.; Miller, K.D.; Siegel, R.L.; Fedewa, S.A.; Jacobs, E.J.; McCullough, M.L.; Patel, A.V.; Ma, J.; Soerjomataram, I.; et al. Proportion and number of cancer cases and deaths attributable to potentially modifiable risk factors in the United States. CA Cancer J. Clin. 2018, 68, 31–54.
  3. Sung, H.; Siegel, R.L.; Rosenberg, P.S.; Jemal, A. Emerging cancer trends among young adults in the USA: Analysis of a population-based cancer registry. Lancet Public Health 2019, 4, e137–e147.
  4. Yan, H.; Sellick, K. Symptoms, psychological distress, social support, and quality of life of Chinese patients newly diagnosed with gastrointestinal cancer. Cancer Nurs. 2004, 27, 389–399.
  5. Kamel, H.M. Trends and challenges in pathology practice: Choices and necessities. Sultan Qaboos Univ. Med. J. 2011, 11, 38.
  6. Rao, G.G.; Crook, M.; Tillyer, M. Pathology tests: Is the time for demand management ripe at last? J. Clin. Pathol. 2003, 56, 243–248.
  7. Hassell, L.A.; Parwani, A.V.; Weiss, L.; Jones, M.A.; Ye, J. Challenges and opportunities in the adoption of College of American Pathologists checklists in electronic format: Perspectives and experience of Reporting Pathology Protocols Project (RPP2) participant laboratories. Arch. Pathol. Lab. Med. 2010, 134, 1152–1159.
  8. Hewitt, S.M.; Lewis, F.A.; Cao, Y.; Conrad, R.C.; Cronin, M.; Danenberg, K.D.; Goralski, T.J.; Langmore, J.P.; Raja, R.G.; Williams, P.M. Tissue handling and specimen preparation in surgical pathology: Issues concerning the recovery of nucleic acids from formalin-fixed, paraffin-embedded tissue. Arch. Pathol. Lab. Med. 2008, 132, 1929–1935.
  9. Ribé, A.; Ribalta, T.; Lledó, R.; Torras, G.; Asenjo, M.A.; Cardesa, A. Evaluation of turnaround times as a component of quality assurance in surgical pathology. Int. J. Qual. Health Care 1998, 10, 241–245.
  10. Tamil, S.M.; Srinivas, A. Evaluation of quality management systems implementation in medical diagnostic laboratories benchmarked for accreditation. J. Med. Lab. Diagn. 2015, 6, 27–35.
  11. Peter, T.F.; Rotz, P.D.; Blair, D.H.; Khine, A.-A.; Freeman, R.R.; Murtagh, M.M. Impact of laboratory accreditation on patient care and the health system. Am. J. Clin. Pathol. 2010, 134, 550–555.
  12. Kim, D.; Pantanowitz, L.; Schuffler, P.; Yarlagadda, D.V.K.; Ardon, O.; Reuter, V.E.; Hameed, M.; Klimstra, D.S.; Hanna, M.G. (Re) Defining the High-Power Field for Digital Pathology. J. Pathol. Inform. 2020, 11, 33.
  13. Dun, X.-p.; Parkinson, D.B. Visualizing peripheral nerve regeneration by whole mount staining. PLoS ONE 2015, 10, e0119168.
  14. Heffner, S.; Colgan, O.; Doolan, C. Digital Pathology. Available online: https://www.leicabiosystems.com/en-br/knowledge-pathway/digital-pathology/ (accessed on 30 April 2022).
  15. Borowsky, A.D.; Glassy, E.F.; Wallace, W.D.; Kallichanda, N.S.; Behling, C.A.; Miller, D.V.; Oswal, H.N.; Feddersen, R.M.; Bakhtar, O.R.; Mendoza, A.E.; et al. Digital Whole Slide Imaging Compared with Light Microscopy for Primary Diagnosis in Surgical Pathology. Arch. Pathol. Lab. Med. 2020, 144, 1245–1253.
  16. Snead, D.R.; Tsang, Y.W.; Meskiri, A.; Kimani, P.K.; Crossman, R.; Rajpoot, N.M.; Blessing, E.; Chen, K.; Gopalakrishnan, K.; Matthews, P.; et al. Validation of digital pathology imaging for primary histopathological diagnosis. Histopathology 2016, 68, 1063–1072.
  17. Hanna, M.G.; Reuter, V.E.; Ardon, O.; Kim, D.; Sirintrapun, S.J.; Schüffler, P.J.; Busam, K.J.; Sauter, J.L.; Brogi, E.; Tan, L.K.; et al. Validation of a digital pathology system including remote review during the COVID-19 pandemic. Mod. Pathol. 2020, 33, 2115–2127.
  18. Cheng, C.L.; Azhar, R.; Sng, S.H.; Chua, Y.Q.; Hwang, J.S.; Chin, J.P.; Seah, W.K.; Loke, J.C.; Ang, R.H.; Tan, P.H. Enabling digital pathology in the diagnostic setting: Navigating through the implementation journey in an academic medical centre. J. Clin. Pathol. 2016, 69, 784–792.
  19. Aloqaily, A.; Polonia, A.; Campelos, S.; Alrefae, N.; Vale, J.; Caramelo, A.; Eloy, C. Digital Versus Optical Diagnosis of Follicular Patterned Thyroid Lesions. Head Neck Pathol. 2021, 15, 537–543.
  20. Salvi, M.; Acharya, U.R.; Molinari, F.; Meiburger, K.M. The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis. Comput. Biol. Med. 2021, 128, 104129.
  21. Taqi, S.A.; Sami, S.A.; Sami, L.B.; Zaki, S.A. A review of artifacts in histopathology. J. Oral Maxillofac. Pathol. 2018, 22, 279.
  22. Schneider, C.A.; Rasband, W.S.; Eliceiri, K.W. NIH Image to ImageJ: 25 years of image analysis. Nat. Methods 2012, 9, 671–675.
  23. Bankhead, P.; Loughrey, M.B.; Fernández, J.A.; Dombrowski, Y.; McArt, D.G.; Dunne, P.D.; McQuaid, S.; Gray, R.T.; Murray, L.J.; Coleman, H.G.; et al. QuPath: Open source software for digital pathology image analysis. Sci. Rep. 2017, 7, 16878.
  24. Aubreville, M.; Bertram, C.; Klopfleisch, R.; Maier, A. SlideRunner. In Bildverarbeitung für die Medizin 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 309–314.
  25. The Royal College of Pathologists. Digital Pathology. Available online: https://www.rcpath.org/profession/digital-pathology.html (accessed on 20 March 2022).
  26. Tizhoosh, H.R.; Pantanowitz, L. Artificial Intelligence and Digital Pathology: Challenges and Opportunities. J. Pathol. Inform. 2018, 9, 38.
  27. Williams, B.; Hanby, A.; Millican-Slater, R.; Verghese, E.; Nijhawan, A.; Wilson, I.; Besusparis, J.; Clark, D.; Snead, D.; Rakha, E.; et al. Digital pathology for primary diagnosis of screen-detected breast lesions—Experimental data, validation and experience from four centres. Histopathology 2020, 76, 968–975.
  28. Eloy, C.; Vale, J.; Curado, M.; Polonia, A.; Campelos, S.; Caramelo, A.; Sousa, R.; Sobrinho-Simoes, M. Digital Pathology Workflow Implementation at IPATIMUP. Diagnostics 2021, 11, 2111.
  29. Abels, E.; Pantanowitz, L.; Aeffner, F.; Zarella, M.D.; van der Laak, J.; Bui, M.M.; Vemuri, V.N.; Parwani, A.V.; Gibbs, J.; Agosto-Arroyo, E.; et al. Computational pathology definitions, best practices, and recommendations for regulatory guidance: A white paper from the Digital Pathology Association. J. Pathol. 2019, 249, 286–294.
  30. Hawkes, N. Cancer survival data emphasise importance of early diagnosis. BMJ 2019, 364, l408.
  31. Schiffman, J.D.; Fisher, P.G.; Gibbs, P. Early detection of cancer: Past, present, and future. Am. Soc. Clin. Oncol. Educ. Book 2015, 35, 57–65.
  32. Williams, B.J.; Bottoms, D.; Treanor, D. Future-proofing pathology: The case for clinical adoption of digital pathology. J. Clin. Pathol. 2017, 70, 1010–1018.
  33. Maung, R. Pathologists’ workload and patient safety. Diagn. Histopathol. 2016, 22, 283–287.
  34. Esteva, A.; Robicquet, A.; Ramsundar, B.; Kuleshov, V.; DePristo, M.; Chou, K.; Cui, C.; Corrado, G.; Thrun, S.; Dean, J. A guide to deep learning in healthcare. Nat. Med. 2019, 25, 24–29.
  35. Miotto, R.; Wang, F.; Wang, S.; Jiang, X.; Dudley, J.T. Deep learning for healthcare: Review, opportunities and challenges. Brief. Bioinform. 2018, 19, 1236–1246.
  36. Jiang, F.; Jiang, Y.; Zhi, H.; Dong, Y.; Li, H.; Ma, S.; Wang, Y.; Dong, Q.; Shen, H.; Wang, Y. Artificial intelligence in healthcare: Past, present and future. Stroke Vasc. Neurol. 2017, 2, 230–243.
  37. Holzinger, A.; Langs, G.; Denk, H.; Zatloukal, K.; Muller, H. Causability and explainability of artificial intelligence in medicine. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2019, 9, e1312.
  38. Whitney, J.; Corredor, G.; Janowczyk, A.; Ganesan, S.; Doyle, S.; Tomaszewski, J.; Feldman, M.; Gilmore, H.; Madabhushi, A. Quantitative nuclear histomorphometry predicts oncotype DX risk categories for early stage ER+ breast cancer. BMC Cancer 2018, 18, 610.
  39. Hinata, M.; Ushiku, T. Detecting immunotherapy-sensitive subtype in gastric cancer using histologic image-based deep learning. Sci. Rep. 2021, 11, 22636.
  40. Rathore, S.; Iftikhar, M.A.; Chaddad, A.; Niazi, T.; Karasic, T.; Bilello, M. Segmentation and Grade Prediction of Colon Cancer Digital Pathology Images Across Multiple Institutions. Cancers 2019, 11, 1700.
  41. Ehteshami Bejnordi, B.; Veta, M.; Johannes van Diest, P.; van Ginneken, B.; Karssemeijer, N.; Litjens, G.; van der Laak, J.; Hermsen, M.; Manson, Q.F.; Balkenhol, M.; et al. Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women with Breast Cancer. JAMA 2017, 318, 2199–2210.
  42. Bandi, P.; Geessink, O.; Manson, Q.; Van Dijk, M.; Balkenhol, M.; Hermsen, M.; Ehteshami Bejnordi, B.; Lee, B.; Paeng, K.; Zhong, A.; et al. From Detection of Individual Metastases to Classification of Lymph Node Status at the Patient Level: The CAMELYON17 Challenge. IEEE Trans. Med. Imaging 2019, 38, 550–560.
  43. Van der Laak, J.; Litjens, G.; Ciompi, F. Deep learning in histopathology: The path to the clinic. Nat. Med. 2021, 27, 775–784.
  44. The Royal College of Pathologists of Australasia. Ageing Pathologists. Available online: https://www.rcpa.edu.au/getattachment/95c190e1-bdbe-4ab1-83e1-0a218c69ad82/Ageing-Pathologists.aspx (accessed on 1 May 2022).
  45. The Royal College of Pathologists of Australasia. Becoming a Pathologist. Available online: https://www.rcpa.edu.au/Pathology-Careers/Becoming-a-Pathologist (accessed on 30 April 2022).
  46. Yagi, Y. Color standardization and optimization in whole slide imaging. Diagn. Pathol. 2011, 6 (Suppl. S1), S15.
  47. Kothari, S.; Phan, J.H.; Moffitt, R.A.; Stokes, T.H.; Hassberger, S.E.; Chaudry, Q.; Young, A.N.; Wang, M.D. Automatic batch-invariant color segmentation of histological cancer images. In Proceedings of the 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Chicago, IL, USA, 30 March–2 April 2011; pp. 657–660.
  48. Tabesh, A.; Teverovskiy, M.; Pang, H.-Y.; Kumar, V.P.; Verbel, D.; Kotsianti, A.; Saidi, O. Multifeature prostate cancer diagnosis and Gleason grading of histological images. IEEE Trans. Med. Imaging 2007, 26, 1366–1378.
  49. Reinhard, E.; Adhikhmin, M.; Gooch, B.; Shirley, P. Color transfer between images. IEEE Comput. Graph. Appl. 2001, 21, 34–41.
  50. Abe, T.; Murakami, Y.; Yamaguchi, M.; Ohyama, N.; Yagi, Y. Color correction of pathological images based on dye amount quantification. Opt. Rev. 2005, 12, 293–300.
  51. Magee, D.; Treanor, D.; Crellin, D.; Shires, M.; Smith, K.; Mohee, K.; Quirke, P. Colour normalisation in digital histopathology images. In Proceedings of the Optical Tissue Image analysis in Microscopy, Histopathology and Endoscopy (MICCAI Workshop), London, UK, 24 September 2009; Daniel Elson: London, UK, 2009; Volume 100, pp. 100–111.
  52. Macenko, M.; Niethammer, M.; Marron, J.S.; Borland, D.; Woosley, J.T.; Guan, X.; Schmitt, C.; Thomas, N.E. A method for normalizing histology slides for quantitative analysis. In Proceedings of the 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Boston, MA, USA, 28 June–1 July 2009; pp. 1107–1110.
  53. Tani, S.; Fukunaga, Y.; Shimizu, S.; Fukunishi, M.; Ishii, K.; Tamiya, K. Color standardization method and system for whole slide imaging based on spectral sensing. Anal. Cell. Pathol. 2012, 35, 107–115.
  54. Niethammer, M.; Borland, D.; Marron, J.S.; Woosley, J.; Thomas, N.E. Appearance normalization of histology slides. In MLMI 2010: Machine Learning in Medical Imaging; Wang, F., Yan, P., Suzuki, K., Shen, D., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6357, pp. 58–66.
  55. Shaban, M.; Baur, C.; Navab, N.; Albarqouni, S. Staingan: Stain Style Transfer for Digital Histological Images. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 953–956.
  56. Kausar, T.; Kausar, A.; Ashraf, M.A.; Siddique, M.F.; Wang, M.; Sajid, M.; Siddique, M.Z.; Haq, A.U.; Riaz, I. SA-GAN: Stain Acclimation Generative Adversarial Network for Histopathology Image Analysis. Appl. Sci. 2022, 12, 288.
  57. Cong, C.; Liu, S.; Di Ieva, A.; Pagnucco, M.; Berkovsky, S.; Song, Y. Texture Enhanced Generative Adversarial Network for Stain Normalisation in Histopathology Images. In Proceedings of the 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), Nice, France, 13–16 April 2021; pp. 1949–1952.
  58. Patil, A.; Talha, M.; Bhatia, A.; Kurian, N.C.; Mangale, S.; Patel, S.; Sethi, A. Fast, Self Supervised, Fully Convolutional Color Normalization of H&E Stained Images. In Proceedings of the 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), Nice, France, 13–16 April 2021; pp. 1563–1567.
  59. Bug, D.; Schneider, S.; Grote, A.; Oswald, E.; Feuerhake, F.; Schüler, J.; Merhof, D. Context-Based Normalization of Histological Stains Using Deep Convolutional Features. In Proceedings of the Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Cham, Switzerland, 9 September 2017; pp. 135–142.
  60. Baba, A.I.; Câtoi, C. Comparative Oncology; The Publishing House of the Romanian Academy: Bucharest, Romania, 2007.
  61. Elmore, J.G.; Longton, G.M.; Carney, P.A.; Geller, B.M.; Onega, T.; Tosteson, A.N.; Nelson, H.D.; Pepe, M.S.; Allison, K.H.; Schnitt, S.J.; et al. Diagnostic concordance among pathologists interpreting breast biopsy specimens. JAMA 2015, 313, 1122–1132.
  62. Ali, F.; Khan, P.; Riaz, K.; Kwak, D.; Abuhmed, T.; Park, D.; Kwak, K.S. A fuzzy ontology and SVM-based Web content classification system. IEEE Access 2017, 5, 25781–25797.
  63. Amin, M.B.; Greene, F.L.; Edge, S.B.; Compton, C.C.; Gershenwald, J.E.; Brookland, R.K.; Meyer, L.; Gress, D.M.; Byrd, D.R.; Winchester, D.P. The Eighth Edition AJCC Cancer Staging Manual: Continuing to build a bridge from a population-based to a more “personalized” approach to cancer staging. CA Cancer J. Clin. 2017, 67, 93–99.
  64. Azam, A.S.; Miligy, I.M.; Kimani, P.K.U.; Maqbool, H.; Hewitt, K.; Rajpoot, N.M.; Snead, D.R.J. Diagnostic concordance and discordance in digital pathology: A systematic review and meta-analysis. J. Clin. Pathol. 2021, 74, 448.
  65. Buck, T.P.; Dilorio, R.; Havrilla, L.; O’Neill, D.G. Validation of a whole slide imaging system for primary diagnosis in surgical pathology: A community hospital experience. J. Pathol. Inform. 2014, 5, 43.
  66. Tabata, K.; Mori, I.; Sasaki, T.; Itoh, T.; Shiraishi, T.; Yoshimi, N.; Maeda, I.; Harada, O.; Taniyama, K.; Taniyama, D. Whole-slide imaging at primary pathological diagnosis: Validation of whole-slide imaging-based primary pathological diagnosis at twelve Japanese academic institutes. Pathol. Int. 2017, 67, 547–554.
  67. Cross, S.; Furness, P.; Igali, L.; Snead, D.; Treanor, D. Best Practice Recommendations for Implementing Digital Pathology January 2018; The Royal College of Pathologists: London, UK, 2018; Available online: https://www.rcpath.org/uploads/assets/f465d1b3-797b-4297-b7fedc00b4d77e51/Best-practice-recommendations-for-implementing-digital-pathology.pdf (accessed on 16 June 2022).
  68. Linardatos, P.; Papastefanopoulos, V.; Kotsiantis, S. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy 2020, 23, 18.
  69. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
  70. Ertosun, M.G.; Rubin, D.L. Automated Grading of Gliomas using Deep Learning in Digital Pathology Images: A modular approach with ensemble of convolutional neural networks. AMIA Annu. Symp. Proc. 2015, 2015, 1899–1908.
  71. Barker, J.; Hoogi, A.; Depeursinge, A.; Rubin, D.L. Automated classification of brain tumor type in whole-slide digital pathology images using local representative tiles. Med. Image Anal. 2016, 30, 60–71.
  72. Langer, L.; Binenbaum, Y.; Gugel, L.; Amit, M.; Gil, Z.; Dekel, S. Computer-aided diagnostics in digital pathology: Automated evaluation of early-phase pancreatic cancer in mice. Int. J. Comput. Assist. Radiol. Surg. 2015, 10, 1043–1054.
  73. Amann, J.; Blasimme, A.; Vayena, E.; Frey, D.; Madai, V.I. Explainability for artificial intelligence in healthcare: A multidisciplinary perspective. BMC Med. Inform. Decis. Mak. 2020, 20, 310.
  74. Goldfarb, A.; Teodoridis, F. Why Is AI Adoption in Health Care Lagging? Available online: https://www.brookings.edu/research/why-is-ai-adoption-in-health-care-lagging/ (accessed on 30 April 2022).
  75. McKay, F.; Williams, B.J.; Prestwich, G.; Bansal, D.; Hallowell, N.; Treanor, D. The ethical challenges of artificial intelligence-driven digital pathology. J. Pathol. Clin. Res. 2022, 8, 209–216.
  76. Bray, F.; Ferlay, J.; Soerjomataram, I.; Siegel, R.L.; Torre, L.A.; Jemal, A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2018, 68, 394–424.
  77. Zhou, S.; Marklund, H.; Bláha, O.; Desai, M.; Martin, B.A.; Bingham, D.B.; Berry, G.J.; Gomulia, E.; Ng, A.; Shen, J. Deep learning assistance for the histopathologic diagnosis of Helicobacter pylori. Intell. Med. 2020, 1–2, 100004.
  78. Klein, S.; Gildenblat, J.; Ihle, M.A.; Merkelbach-Bruse, S.; Noh, K.-W.; Peifer, M.; Quaas, A.M.; Büttner, R. Deep learning for sensitive detection of Helicobacter pylori in gastric biopsies. BMC Gastroenterol. 2020, 20, 1–11.
  79. Biscotti, C.V.; Dawson, A.E.; Dziura, B.; Galup, L.; Darragh, T.; Rahemtulla, A.; Wills-Frank, L. Assisted primary screening using the automated ThinPrep Imaging System. Am. J. Clin. Pathol. 2005, 123, 281–287.
  80. Vu, H.T.; Lopez, R.; Bennett, A.; Burke, C.A. Individuals with sessile serrated polyps express an aggressive colorectal phenotype. Dis. Colon Rectum 2011, 54, 1216–1223.
  81. Osmond, A.; Li-Chang, H.; Kirsch, R.; Divaris, D.; Falck, V.; Liu, D.F.; Marginean, C.; Newell, K.; Parfitt, J.; Rudrick, B.; et al. Interobserver variability in assessing dysplasia and architecture in colorectal adenomas: A multicentre Canadian study. J. Clin. Pathol. 2014, 67, 781–786.
  82. Foss, F.A.; Milkins, S.; McGregor, A.H. Inter-observer variability in the histological assessment of colorectal polyps detected through the NHS Bowel Cancer Screening Programme. Histopathology 2012, 61, 47–52.
  83. Davidson, K.W.; Barry, M.J.; Mangione, C.M.; Cabana, M.; Caughey, A.B.; Davis, E.M.; Donahue, K.E.; Doubeni, C.A.; Krist, A.H.; Kubik, M.J.J. Screening for colorectal cancer: US Preventive Services Task Force recommendation statement. JAMA 2016, 315, 2564–2575.
  84. Song, Z.; Yu, C.; Zou, S.; Wang, W.; Huang, Y.; Ding, X.; Liu, J.; Shao, L.; Yuan, J.; Gou, X.; et al. Automatic deep learning-based colorectal adenoma detection system and its similarities with pathologists. BMJ Open 2020, 10, e036423.
  85. Korbar, B.; Olofson, A.M.; Miraflor, A.P.; Nicka, K.M.; Suriawinata, M.A.; Torresani, L.; Suriawinata, A.A.; Hassanpour, S. Deep Learning for Classification of Colorectal Polyps on Whole-slide Images. J. Pathol. Inform. 2017, 8, 30.
  86. Wei, J.; Suriawinata, A.A.; Vaickus, L.J.; Ren, B.; Liu, X.; Lisovsky, M.; Tomita, N.; Abdollahi, B.; Kim, A.S.; Snover, D.C.; et al. Evaluation of a Deep Neural Network for Automated Classification of Colorectal Polyps on Histopathologic Slides. JAMA Netw. Open 2020, 3, e203398.
  87. Zhao, L.; Lee, V.; Ng, M.K.; Yan, H.; Bijlsma, M.F. Molecular subtyping of cancer: Current status and moving toward clinical applications. Briefings Bioinform. 2018, 20, 572–584.
  88. De Smedt, L.; Lemahieu, J.; Palmans, S.; Govaere, O.; Tousseyn, T.; Van Cutsem, E.; Prenen, H.; Tejpar, S.; Spaepen, M.; Matthijs, G.; et al. Microsatellite instable vs stable colon carcinomas: Analysis of tumour heterogeneity, inflammation and angiogenesis. Br. J. Cancer 2015, 113, 500–509.
  89. Baretti, M.; Le, D.T. DNA mismatch repair in cancer. Pharmacol. Ther. 2018, 189, 45–62.
  90. Li, K.; Luo, H.; Huang, L.; Luo, H.; Zhu, X. Microsatellite instability: A review of what the oncologist should know. Cancer Cell Int. 2020, 20, 1–13.
  91. André, T.; Shiu, K.-K.; Kim, T.W.; Jensen, B.V.; Jensen, L.H.; Punt, C.; Smith, D.; Garcia-Carbonero, R.; Benavides, M.; Gibbs, P.; et al. Pembrolizumab in Microsatellite-Instability–High Advanced Colorectal Cancer. N. Engl. J. Med. 2020, 383, 2207–2218.
  92. Fan, J.; Qian, J.; Zhao, Y.J.N. The loss of PTEN expression and microsatellite stability (MSS) were predictors of unfavorable prognosis in gastric cancer (GC). Neoplasma 2020, 67, 1359–1366.
  93. Snowsill, T.; Coelho, H.; Huxley, N.; Jones-Hughes, T.; Briscoe, S.; Frayling, I.M.; Hyde, C. Molecular testing for Lynch syndrome in people with colorectal cancer: Systematic reviews and economic evaluation. Health Technol. Assess. 2017, 21, 1–238.
  94. Su, F.; Li, J.; Zhao, X.; Wang, B.; Hu, Y.; Sun, Y.; Ji, J. Interpretable tumor differentiation grade and microsatellite instability recognition in gastric cancer using deep learning. Lab. Investig. 2022, 102, 641–649.
  95. Kather, J.N.; Pearson, A.T.; Halama, N.; Jäger, D.; Krause, J.; Loosen, S.H.; Marx, A.; Boor, P.; Tacke, F.; Neumann, U.P.; et al. Deep learning can predict microsatellite instability directly from histology in gastrointestinal cancer. Nat. Med. 2019, 25, 1054–1056.
  96. Echle, A.; Grabsch, H.I.; Quirke, P.; van den Brandt, P.A.; West, N.P.; Hutchins, G.G.A.; Heij, L.R.; Tan, X.; Richman, S.D.; Krause, J.; et al. Clinical-Grade Detection of Microsatellite Instability in Colorectal Tumors by Deep Learning. Gastroenterology 2020, 159, 1406–1416.e11.
  97. Cao, R.; Yang, F.; Ma, S.-C.; Liu, L.; Zhao, Y.; Li, Y.; Wu, D.-H.; Wang, T.; Lu, W.-J.; Cai, W.-J.; et al. Development and interpretation of a pathomics-based model for the prediction of microsatellite instability in Colorectal Cancer. Theranostics 2020, 10, 11080–11091.
  98. Goss, P.E.; Lee, B.L.; Badovinac-Crnjevic, T.; Strasser-Weippl, K.; Chavarri-Guerra, Y.; St Louis, J.; Villarreal-Garza, C.; Unger-Saldaña, K.; Ferreyra, M.; Debiasi, M.; et al. Planning cancer control in Latin America and the Caribbean. Lancet Oncol. 2013, 14, 391–436.
  99. Meier, A.; Nekolla, K.; Hewitt, L.C.; Earle, S.; Yoshikawa, T.; Oshima, T.; Miyagi, Y.; Huss, R.; Schmidt, G.; Grabsch, H.I. Hypothesis-free deep survival learning applied to the tumour microenvironment in gastric cancer. J. Pathol. Clin. Res. 2020, 6, 273–282.
  100. Jiang, D.; Liao, J.; Duan, H.; Wu, Q.; Owen, G.; Shu, C.; Chen, L.; He, Y.; Wu, Z.; He, D.; et al. A machine learning-based prognostic predictor for stage III colon cancer. Sci. Rep. 2020, 10, 10333.
  101. Kather, J.N.; Krisam, J.; Charoentong, P.; Luedde, T.; Herpel, E.; Weis, C.-A.; Gaiser, T.; Marx, A.; Valous, N.A.; Ferber, D.; et al. Predicting survival from colorectal cancer histology slides using deep learning: A retrospective multicenter study. PLoS Med. 2019, 16, e1002730.
  102. Bychkov, D.; Linder, N.; Turkki, R.; Nordling, S.; Kovanen, P.E.; Verrill, C.; Walliander, M.; Lundin, M.; Haglund, C.; Lundin, J. Deep learning based tissue analysis predicts outcome in colorectal cancer. Sci. Rep. 2018, 8, 1–11.
  103. Sirinukunwattana, K.; Domingo, E.; Richman, S.D.; Redmond, K.L.; Blake, A.; Verrill, C.; Leedham, S.J.; Chatzipli, A.; Hardy, C.W.; Whalley, C.M.; et al. Image-based consensus molecular subtype (imCMS) classification of colorectal cancer using deep learning. Gut 2020, 70, 544–554.
  104. Popovici, V.; Budinská, E.; Dušek, L.; Kozubek, M.; Bosman, F.T. Image-based surrogate biomarkers for molecular subtypes of colorectal cancer. Bioinformatics 2017, 33, 2002–2009.
  105. Banatvala, J.J.L. COVID-19 testing delays and pathology services in the UK. Lancet 2020, 395, 1831.
  106. Al-Shamsi, H.O.; Abu-Gheida, I.; Rana, S.K.; Nijhawan, N.; Abdulsamad, A.S.; Alrawi, S.; Abuhaleeqa, M.; Almansoori, T.M.; Alkasab, T.; Aleassa, E.M.; et al. Challenges for cancer patients returning home during SARS-COV-19 pandemic after medical tourism—A consensus report by the emirates oncology task force. BMC Cancer 2020, 20, 1–10.
  107. Balasubramani, B.; Newsom, K.J.; Martinez, K.A.; Starostik, P.; Clare-Salzler, M.; Chamala, S. Pathology informatics and robotics strategies for improving efficiency of COVID-19 pooled testing. Acad. Pathol. 2021, 8, 23742895211020485.
  108. Gehrung, M.; Crispin-Ortuzar, M.; Berman, A.G.; O’Donovan, M.; Fitzgerald, R.C.; Markowetz, F. Triage-driven diagnosis of Barrett’s esophagus for early detection of esophageal adenocarcinoma using deep learning. Nat. Med. 2021, 27, 833–841.
  109. Song, Z.; Zou, S.; Zhou, W.; Huang, Y.; Shao, L.; Yuan, J.; Gou, X.; Jin, W.; Wang, Z.; Chen, X.; et al. Clinically applicable histopathological diagnosis system for gastric cancer detection using deep learning. Nat. Commun. 2020, 11, 1–9.
  110. Huang, B.; Tian, S.; Zhan, N.; Ma, J.; Huang, Z.; Zhang, C.; Zhang, H.; Ming, F.; Liao, F.; Ji, M.; et al. Accurate diagnosis and prognosis prediction of gastric cancer using deep learning on digital pathological images: A retrospective multicentre study. EBioMedicine 2021, 73, 103631.
  111. Campanella, G.; Hanna, M.G.; Geneslaw, L.; Miraflor, A.; Silva, V.W.K.; Busam, K.J.; Brogi, E.; Reuter, V.E.; Klimstra, D.S.; Fuchs, T.J. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat. Med. 2019, 25, 1301–1309.
  112. Ianni, J.D.; Soans, R.E.; Sankarapandian, S.; Chamarthi, R.V.; Ayyagari, D.; Olsen, T.G.; Bonham, M.J.; Stavish, C.C.; Motaparthi, K.; Cockerell, C.J.; et al. Tailored for Real-World: A Whole Slide Image Classification System Validated on Uncurated Multi-Site Data Emulating the Prospective Pathology Workload. Sci. Rep. 2020, 10, 1–12.