Relevance of Digitalization in Spine Surgery: History

Machine learning is a subset of artificial intelligence and refers to computer techniques that allow complex tasks to be solved in a reproducible and standardized way. Healthcare systems worldwide generate vast amounts of data from many different sources. Although highly complex for a human being to process, it is essential to identify the patterns and minor variations in genomic, radiological, laboratory, or clinical data that reliably differentiate phenotypes or allow high predictive accuracy in health-related tasks. Convolutional neural networks (CNNs) are increasingly applied to image data for various tasks. Their use for non-imaging data becomes feasible through modern machine learning techniques that convert non-imaging data into images before inputting them into the CNN model. Considering that healthcare providers do not rely on a single data modality for their decisions, this approach opens the door for multi-input/mixed-data models, which use a combination of patient information, such as genomic, radiological, and clinical data, to train a hybrid deep learning model. This reflects the main characteristic of artificial intelligence: simulating natural human behavior.

  • machine learning
  • database
  • spine surgery

1. Introduction

Low back pain is one of the most frequently observed clinical conditions, and degenerative spine disease seems to be a leading driver of low back pain [1]. The global prevalence of low back pain increased from 377.5 million in 1990 to 577.0 million in 2017 [2]. The years lived with disability increased globally from 42.5 million in 1990 to 64.9 million in 2017, an increase of 52.7%. Degenerative spinal disease is a common and impairing condition resulting in high socio-economic costs. Direct medical expenses for low back pain doubled to 102 billion USD between 1997 and 2005, and the number of lumbar fusion procedures has quadrupled over the past 20 years, resulting in significantly increased healthcare costs [2].
Interestingly, the increase in performed surgeries is not directly proportional to improved patient outcomes. Impaired quality of life, persistent pain, and functional problems are reported in up to 40% of patients undergoing low back pain surgery and 20–24% of those undergoing revision surgeries [3][4]. The indications for surgery are not entirely based on guidelines but rather on discussions between the surgeon and patient, as well as the expertise and skills of the surgeon. Furthermore, there are no clear guidelines on surgical techniques for treating degenerative spinal diseases; it therefore remains unclear whether one treatment approach performs better than another in particular cases. Overall, there is a considerable lack of data-driven decision-making for low back pain patients, which is particularly concerning given the global burden associated with low back pain.
Medical healthcare is driven by an enormous increase in the amount of data generated through various diagnostic tools and nodes within healthcare systems. Patient data are the foundation healthcare providers use to find the best-fitting diagnosis and prognosis for each patient. Decisions are based on patterns across these datasets that guide towards the “right” diagnosis, and healthcare providers also utilize these datasets to justify a specific treatment approach. Therefore, the correct interpretation of these datasets is crucial and directly impacts patient outcomes and the operations of healthcare systems.
Furthermore, improvements in treatment guidelines are mainly based on research performed on such datasets. Researchers using these data might not be aware of the patterns hidden in their collected datasets. The process of finding patterns in large datasets, which falls under the category of big-data research, is called data mining [5]. However, clinical researchers often have neither profound knowledge of (bio)statistics themselves nor access to biostatisticians to apply the best available tools to their datasets and extract all relevant pieces of information. Therefore, it is highly relevant that such datasets are anonymized and made public so that data scientists can use them and possibly uncover these patterns using modern data-mining technologies.
The term “digital health” stands for the digitalization of healthcare data that were previously captured only in an unproductive, paper-based way. New healthcare applications have become increasingly relevant and available, ranging from mobile health applications, consumer technology, and telehealth for monitoring and guiding patients to precision medicine, which utilizes patient-specific data in artificial intelligence and bioinformatics models for individualized treatment approaches.
Machine learning combines biostatistics, mathematics, and computer science into one problem-solving pathway. One advantage is its efficiency and effectiveness, as the underlying programming code can be modified to enhance the accuracy of the paths that solve a specific task. In this way, it can be more controllable, cost-efficient, and less error-prone than its “human template.” Although the number of publications and citations on artificial intelligence in healthcare is overwhelming, the technique is still early in its maturation. Industry strongly supports this progress because of the great potential to improve medical research and clinical care, particularly as healthcare providers increasingly establish electronic health records in their institutions.
Predictive analysis with classical statistical techniques, such as regression models, applied to these datasets has been the gold standard to date. One may ask about the advantages of advanced machine learning techniques over simple regression analysis using widely available statistical software. It is hard to draw a distinct line between where basic statistical methods end and machine learning begins, and it is often debated whether statistical techniques should themselves be considered machine learning, as in both cases computers use mathematical models to test a specific hypothesis. The primary differentiation might be the purpose of the application. Statistical methods, such as regression models, aim to find associations between independent variables (e.g., age, sex, body mass index) and dependent variables (e.g., patient-related outcome measures). In contrast, machine learning models also use statistical methods but aim to learn from training datasets so that they make more accurate predictions on the validation dataset and can be reliably used on other independent datasets for predictive analysis. Hence, machine learning could be described as focusing on predictive results, whereas simple statistical models analyze significant relationships. A further differentiation might be interpretability: the more complex a machine learning technique gets, the more accurate it can become, at the cost of interpretability. For example, lasso regression is a machine learning technique that uses regression analysis for feature selection and prediction. It has the advantage that it is not necessary to identify the relevant independent variables first, as is required in linear regression models. Its application is quite simple, and its interpretability is high.
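To make this concrete, the shrinkage behavior of the lasso can be sketched with a minimal cyclic coordinate-descent implementation; the penalty value, variable count, and synthetic data below are illustrative assumptions, not taken from any study.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent with soft-thresholding.
    Assumes the columns of X are standardized."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            z = X[:, j] @ X[:, j] / n
            # soft-thresholding drives weak coefficients exactly to zero
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return beta

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 6))
X = (X - X.mean(axis=0)) / X.std(axis=0)
# only the first two of six candidate predictors are truly relevant
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)
beta = lasso_cd(X, y, lam=0.3)  # irrelevant coefficients shrink to zero
```

The four irrelevant coefficients end up at (or extremely near) zero, which is the automatic feature selection referred to above; an ordinary least-squares fit would instead assign them small nonzero values.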
In contrast, deep learning, a subgroup of machine learning discussed later, can become very complex and very accurate, but at the cost of interpretability. The general principles of machine learning discussed in this entry might help to differentiate between the most utilized approaches.
One significant barrier to machine learning applications is that reliable learning processes are very data-hungry: machine learning is highly dependent on the premise that a large dataset is available. As computers cannot process visual and textual information the way human brains do, the algorithm needs to know what it is predicting or classifying in order to make decisions. When classification tasks need to be solved or specific regions need to be predicted, annotations are necessary. Data annotations make the input data understandable for computers. The task of the data scientist is to reliably label data such as text, audio, images, and video so they can be recognized by machine learning models and used to solve prediction and classification tasks. However, this process can be highly time-consuming, which represents a major obstacle to the implementation of machine learning algorithms, and inaccurate labeling will ultimately lead to inaccurate problem-solving. In “Deep Learning: A Critical Appraisal” [6], Marcus described ten concerns associated with deep learning research, listing data hungriness as the top factor and noting that “in problems where data are limited, deep learning often is not an ideal solution” [6]. Data hungriness was also considered an unsolved problem in artificial intelligence (AI) research in Martin Ford’s book “Architects of Intelligence: The Truth About AI From the People Building It” [7]; most of the researchers interviewed in the book encourage the development of more data-efficient algorithms. Pillars relevant for the implementation and interpretation of machine learning algorithms were described by Cutillo et al. based on discussions at a National Institutes of Health (NIH) healthcare workshop in 2019 [8]: trustworthiness, explainability, usability, transparency, and fairness.
An increase in data efficiency cannot be achieved only by increasing the number of input samples but also by improving the machine learning architecture itself. One way to do this is to consider that different data types might contribute differently to the problem-solving task and that the connections between data types might also be relevant. The data dependency of machine learning algorithms and hybrid models capable of processing different data types is, unfortunately, a research field that has not yet received the necessary attention. In particular, the translation of such hybrid algorithms to a clinical environment with real-world applications has not yet been reviewed.
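As a toy illustration of such a multi-input architecture, the sketch below fuses an image branch (standing in for a CNN feature extractor) with a tabular branch for clinical variables by concatenating their features; all dimensions, weights, and input values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Image branch: stand-in for a CNN feature extractor (8x8 image -> 16 features)
W_img = rng.normal(0.0, 0.1, size=(64, 16))
# Tabular branch: clinical variables, e.g., age, sex, BMI (3 -> 8 features)
W_tab = rng.normal(0.0, 0.1, size=(3, 8))
# Fusion head: concatenated features -> a single outcome score
W_head = rng.normal(0.0, 0.1, size=(16 + 8, 1))

def hybrid_forward(image, tabular):
    """Forward pass of a toy multi-input (mixed-data) model."""
    img_feat = relu(image.reshape(-1) @ W_img)    # features from the image
    tab_feat = relu(tabular @ W_tab)              # features from clinical data
    fused = np.concatenate([img_feat, tab_feat])  # fusion by concatenation
    return float(fused @ W_head)

score = hybrid_forward(rng.normal(size=(8, 8)), np.array([65.0, 1.0, 27.4]))
```

In a real hybrid deep learning model, the image branch would be a trained CNN and the fused representation would feed further trainable layers; the point here is only the late-fusion wiring of two data modalities into one prediction.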

2. The Need for Structured Decision Making in Spine Surgery

A step towards precision, data-driven spine surgery can be achieved by meeting the significant requirement of developing informative outcome assessments. These include regular outcome assessments of patients, preferably utilizing digital app-based assessment forms, and the implementation of these outcomes as dependent variables in future risk assessment tools. Notably, the improvement of patient-related outcome measures (PROMs) should be the primary goal of decision-making. The clinical value of such outcome measures exceeds that of surrogate markers, such as laboratory values, and of classical clinical variables, such as revision surgery, readmission, or absence of surgical infection. Previous research has shown that patient-related outcome measures do not necessarily correlate with the factors a surgeon might consider relevant; for example, researchers showed that patient-related outcome measures correlated more with the length of hospital stay than with postoperative complication rates [9]. Therefore, patient-related outcome measures should be an integral part of every predictive tool in spine surgery. Commonly utilized PROMs in spine surgery include the Oswestry Disability Index, the Core Outcome Measure Index (COMI), the EQ-5D, the SF-36, the Numeric Rating Scale of pain, and the Visual Analogue Scale of pain, among others [10]. Notably, Breakwell et al. reported in their publication “Should we all go to the PROM? The first two years of the British Spine Registry” that a significant proportion of PROM forms were entered by the patients themselves [11]. Hence, an app-based tool transferring the results from the PROMs to a central database could be more time-efficient for spine surgeons. An additional benefit would be that outcomes could be compared across all contributing institutes, and necessary quality-control steps could be performed at an early phase. This could also be very cost-efficient for healthcare institutes.
The integration of these patient-related outcome measures as dependent variables in clinical decision support tools would allow outcomes to be predicted during prospective follow-up based on a set of textual independent variables, such as surgical technique and preoperative markers, as well as other data modalities, such as imaging. This approach could reliably analyze large volumes of data based on previous data input, suggest next steps for treatment, flag potential problems, and enhance care team efficiency. Furthermore, a PROMs-based approach allows surgeons to discuss the likely outcome with patients and therefore improves communication. In contrast, a communication style in which the surgeon advises against surgery based on their subjective experience might damage the surgeon–patient relationship; data-driven support tools could help surgeons communicate with patients on a more objective basis.
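A minimal sketch of such a decision support model is an ordinary least-squares fit on synthetic data; the chosen variables, coefficients, and the use of the Oswestry Disability Index as the target are illustrative assumptions, not a validated clinical model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Hypothetical independent variables per patient:
# age, BMI, surgical technique (0 = open, 1 = minimally invasive), intercept
X = np.column_stack([
    rng.uniform(40, 80, n),
    rng.uniform(20, 35, n),
    rng.integers(0, 2, n).astype(float),
    np.ones(n),
])
# Synthetic 12-month Oswestry Disability Index as the PROM target
y = 0.3 * X[:, 0] + 0.5 * X[:, 1] - 5.0 * X[:, 2] + 5.0 + rng.normal(0, 2, n)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # fit the predictive model

def predict_odi(age, bmi, technique):
    """Predicted PROM for a new patient (toy model, not clinical advice)."""
    return float(np.array([age, bmi, technique, 1.0]) @ coef)
```

In practice, such a tool would combine many more preoperative variables and nonlinear models, and imaging would enter through a hybrid architecture; the sketch shows only how a PROM serves as the dependent variable of a predictive model.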

3. Database Repositories for Machine Learning Applications in Spine Surgery

Databases are repositories of data for future research. They are dedicated to housing data related to scientific research on a platform that can be access-restricted or publicly available. One often-used approach is to limit access to the contributors of the database. In this way, the database integrates a simple reward system: contributing data allows contributors to use the gathered data. Databases can collect and store heterogeneous sets of patient data and large datasets that fall under the category of big data. Usually, data in online medical databases are stored anonymously, ensuring that the data cannot be linked to a patient’s personal information. In such cases, radiological images can be stored together with genetic and clinical data, all sharing a unique identification number that links the different data types of the case. These databases cover a wide range of data, including those related to cancer research, disease burden, nutrition and health, and genetics and the environment. Researchers can apply for access to data, depending on the scope of the database and the required application procedures, to perform relevant medical research.
For machine learning purposes, these data can also be labeled/annotated before being uploaded, allowing for utilization by data scientists. Although the impactful machine learning models published to date typically rely on well-annotated datasets, the annotation process requires the necessary infrastructure, expertise, and resources, as it can be very time-consuming depending on the number of data points. Considering the complexity of data annotation, crowdsourcing platforms are currently emerging. In this crowdsourcing model, the data are annotated by multiple crowd workers. One advantage is that each labeling can be checked against the consensus label using statistical parameters such as the inter-annotator agreement. Furthermore, this approach could lead to a more generalizable annotation style within the dataset, so that the model might better predict future datasets coming from other workgroups. Crowdsourcing has been applied, for example, to database curation, the identification of medical terms in patient-authored texts, and the diagnosis of diseases from medical images [12][13][14]. Such platforms could also be adopted by institutes in spine surgery. Recent studies have shown that the accuracy achieved by crowd workers is broadly similar to that of individual annotators for a given task, while crowdsourcing is more resource-efficient and reliable [15][16][17]. The workflow of machine learning applications in spine surgery is shown in Figure 1.
Figure 1. Workflow of machine learning applications in spine surgery.
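As an illustration of the consensus check mentioned above, the inter-annotator agreement between two crowd workers can be computed as Cohen’s kappa, and a consensus label derived by majority vote; the labels below are hypothetical examples, not real annotations.

```python
from collections import Counter

def majority_label(votes):
    """Consensus label from several crowd workers (simple majority vote)."""
    return Counter(votes).most_common(1)[0][0]

def cohens_kappa(a, b):
    """Chance-corrected agreement between two annotators' label lists."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    labels = set(a) | set(b)
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical labels ("s" = stenosis, "n" = normal) for ten MRI slices
ann1 = ["s", "s", "n", "n", "s", "n", "s", "s", "n", "n"]
ann2 = ["s", "s", "n", "s", "s", "n", "s", "n", "n", "n"]
ann3 = ["s", "n", "n", "s", "s", "n", "s", "s", "n", "n"]

kappa = cohens_kappa(ann1, ann2)  # 0.6 for this example
consensus = [majority_label(v) for v in zip(ann1, ann2, ann3)]
```

A low kappa flags slices where the task instructions or the images themselves are ambiguous, which is exactly the quality-control signal a crowdsourced annotation pipeline needs.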
Several databases are available that house an impressive number of global biomedical data. Notably, these repositories are regularly updated and extended using new image sets and data types provided by multiple institutions. Thus, they are often used in machine learning research studies, which is essential for progress in the field and exemplary for upcoming databases. For example, the GDC data portal [18] provides RNA-sequencing, whole-genome and whole-exome sequencing, targeted sequencing, genotype, tissue and diagnostic slides, and ATAC-seq data. These data types could also be used as input in hybrid machine learning models, along with imaging and clinical data, to solve prediction tasks related to spinal oncology. Researchers can obtain access to these platforms, but only for subsets of the whole dataset; the general principle is that only data that will actually be used can be extracted. However, none of the mentioned databases contain data labels or annotations. Considering that the amount of data can be vast depending on the research question, this can be a significant limitation for machine learning purposes. Several public databases are accessible to anyone who wants to train and test machine learning models. One example is the Kaggle dataset collection, which contains several algorithms and datasets in spine surgery [19][20]. These datasets are often used in competitions and for training novel machine learning methods to determine whether they outperform existing models. This allows for a peer-review-like process, as the algorithms are publicly available and commented on by other data scientists, who validate the algorithm on the provided dataset and on external datasets.
However, since journal peer-reviewers may not have the resources to retest provided datasets with the algorithm code, often uploaded to GitHub repositories [21], such open peer-review processes help meet crucial research goals, including validity, objectivity, and reliability. Furthermore, datasets and code provided by workgroups may no longer be available after some time, which represents a significant flaw in the assessment and development process of machine learning algorithms for healthcare applications. Publications are not the only relevant output of research; research data should also be considered, particularly because more accurate analysis pathways might not have been developed at the time the study was conducted. This paradigm led to the emergence of data journals, such as Scientific Data from Nature [22] or GigaScience from Oxford Academic [23], in which the data remain available for future analysis and validity assessments.
Notably, in surgical fields, such databases are still scarce. One of the largest and most intuitive databases in orthopedic surgery is the Osteoarthritis Initiative (OAI) database [24] from the National Institutes of Health, which includes ten-year multi-center observational data on knee osteoarthritis. It includes DICOM images, clinical data, and laboratory data and is one of the few and most extensive repositories in orthopedic surgery capable of integrating multimodal data. Unfortunately, to the best of our knowledge, the only database that includes multimodal data in spine surgery is the Austrian Spinal Cord Injury Study [25], which contains longitudinal data on spinal cord injury cases in Austria, including clinical data with patient-related outcome measures and imaging data. Other databases in spine surgery, which mainly include tabular clinical data, are the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) database, the National Inpatient Sample (NIS) database, the Medicare and Private Insurance databases, the American Spine Registry, and the British Spine Registry [11][26][27]. SORG (Skeletal Oncology Research Group) has introduced the most recognized and cited predictive machine learning models, which can be accessed for free on their website [28]. These models have already been externally validated several times and include mortality prediction algorithms in spinal oncology, predictions of PROMs and postoperative opioid use after spine surgery, and discharge disposition for lumbar spinal surgery. Validation and external validation studies are both accessible on the website.
Another emerging field aiming to address the data-handling problem in machine learning is privacy-first federated learning [29]. Federated learning [30][31] aims to train machine learning algorithms collaboratively without the need to transfer medical datasets. This approach addresses the data governance and privacy policies that often limit the use of medical data depending on the country where the research is conducted. Federated learning has been extensively applied in mobile and edge-device applications and is increasingly applied in healthcare environments [32][33]. It enables the collaborative assessment and development of models, with peer-review-like techniques, without transferring medical data out of the institutions where they were obtained. Instead, machine learning training and testing take place at the institutional level, and only the model architecture information and parameters are transferred between collaborators. Recent studies have shown that machine learning models trained by federated learning can achieve accuracies similar to those of models implemented using central databases and can even be superior to models trained at a single institution [34][35]. Successful implementation of federated learning approaches could thus hold significant potential for enabling resource-oriented precision healthcare at a large scale, with external validation to overcome selection bias in model parameters and to promote the optimal processing of patients’ data while respecting the necessary governance and privacy policies of the participants [33]. Nevertheless, this approach still requires essential infrastructure and quality management processes to ensure that the applications perform well and do not impair healthcare processes or violate patient privacy rules.
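The parameter aggregation at the heart of this scheme can be sketched as federated averaging (FedAvg), in which a coordinating server combines locally trained parameters weighted by each site’s dataset size; the hospital parameter vectors and case counts below are illustrative assumptions.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Federated averaging: combine locally trained model parameters,
    weighted by local dataset size, without moving any patient data."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_params, client_sizes))

# Locally trained coefficients of the same model at three hospitals
hospital_a = np.array([1.0, 2.0])   # trained on 100 local cases
hospital_b = np.array([3.0, 0.0])   # trained on 300 local cases
hospital_c = np.array([2.0, 2.0])   # trained on 100 local cases

global_params = fedavg([hospital_a, hospital_b, hospital_c], [100, 300, 100])
# only these aggregated parameters ever leave a hospital, never the raw data
```

In a full FedAvg round, each site would take local gradient steps starting from the current global parameters before the server re-aggregates; the sketch shows only the privacy-relevant aggregation step.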
Despite its advantages, federated learning still has some disadvantages. For example, as described above, the integration of medical datasets into public databases could enable more extensive research, whereas federated investigations remain limited to the collaborators. Furthermore, successful model training still depends on factors such as data labeling, data quality, bias, and standardization [36]. These issues would be better addressed if databases were accessible to more researchers and to crowdsourcing workers dealing with data annotation; this holds for both federated and non-federated learning techniques. Appropriate protocols would be required, focusing on well-designed studies, standardized data extraction, standardized labeling and annotation, accuracy assessments and quality management, and regularly updated techniques to assess bias or failures. With these in place, federated learning would be a feasible approach to overcoming data transfer limitations between institutions.

This entry is adapted from the peer-reviewed paper 10.3390/jpm12040509

References

  1. Saravi, B.; Li, Z.; Lang, C.N.; Schmid, B.; Lang, F.K.; Grad, S.; Alini, M.; Richards, R.G.; Schmal, H.; Südkamp, N.; et al. The Tissue Renin-Angiotensin System and Its Role in the Pathogenesis of Major Human Diseases: Quo Vadis? Cells 2021, 10, 650.
  2. Wu, A.; March, L.; Zheng, X.; Huang, J.; Wang, X.; Zhao, J.; Blyth, F.M.; Smith, E.; Buchbinder, R.; Hoy, D. Global Low Back Pain Prevalence and Years Lived with Disability from 1990 to 2017: Estimates from the Global Burden of Disease Study 2017. Ann. Transl. Med. 2020, 8, 299.
  3. Archer, K.R.; Coronado, R.A.; Haug, C.M.; Vanston, S.W.; Devin, C.J.; Fonnesbeck, C.J.; Aaronson, O.S.; Cheng, J.S.; Skolasky, R.L.; Riley, L.H.; et al. A Comparative Effectiveness Trial of Postoperative Management for Lumbar Spine Surgery: Changing Behavior through Physical Therapy (CBPT) Study Protocol. BMC Musculoskelet. Disord. 2014, 15, 325.
  4. Martin, B.I.; Mirza, S.K.; Comstock, B.A.; Gray, D.T.; Kreuter, W.; Deyo, R.A. Reoperation Rates Following Lumbar Spine Surgery and the Influence of Spinal Fusion Procedures. Spine 2007, 32, 382–387.
  5. Mallappallil, M.; Sabu, J.; Gruessner, A.; Salifu, M. A Review of Big Data and Medical Research. SAGE Open Med. 2020, 8, 2050312120934839.
  6. Marcus, G. Deep Learning: A Critical Appraisal. arXiv 2018, arXiv:1801.00631.
  7. Ford, M. Architects of Intelligence: The Truth about AI from the People Building It; Packt Publishing: Birmingham, UK, 2018; ISBN 978-1-78913-126-0.
  8. Cutillo, C.M.; Sharma, K.R.; Foschini, L.; Kundu, S.; Mackintosh, M.; Mandl, K.D. Machine Intelligence in Healthcare—Perspectives on Trustworthiness, Explainability, Usability, and Transparency. NPJ Digit. Med. 2020, 3, 47.
  9. Saravi, B.; Lang, G.; Ülkümen, S.; Südkamp, N.; Hassel, F. Case-Matched Radiological and Clinical Outcome Evaluation of Interlaminar versus Microsurgical Decompression of Lumbar Spinal Stenosis. Meeting Abstract. German Congress of Orthopaedics and Traumatology (DKOU 2021). 2021. Available online: https://doi.org/10.3205/21DKOU024 (accessed on 11 March 2022).
  10. Finkelstein, J.A.; Schwartz, C.E. Patient-Reported Outcomes in Spine Surgery: Past, Current, and Future Directions. J. Neurosurg. Spine 2019, 31, 155–164.
  11. Breakwell, L.M.; Cole, A.A.; Birch, N.; Heywood, C. Should We All Go to the PROM? The First Two Years of the British Spine Registry. Bone Jt. J. 2015, 97, 871–874.
  12. MacLean, D.L.; Heer, J. Identifying Medical Terms in Patient-Authored Text: A Crowdsourcing-Based Approach. J. Am. Med. Inform. Assoc. 2013, 20, 1120–1127.
  13. Warby, S.C.; Wendt, S.L.; Welinder, P.; Munk, E.G.S.; Carrillo, O.; Sorensen, H.B.D.; Jennum, P.; Peppard, P.E.; Perona, P.; Mignot, E. Sleep-Spindle Detection: Crowdsourcing and Evaluating Performance of Experts, Non-Experts and Automated Methods. Nat. Methods 2014, 11, 385–392.
  14. Mavandadi, S.; Dimitrov, S.; Feng, S.; Yu, F.; Sikora, U.; Yaglidere, O.; Padmanabhan, S.; Nielsen, K.; Ozcan, A. Distributed Medical Image Analysis and Diagnosis through Crowd-Sourced Games: A Malaria Case Study. PLoS ONE 2012, 7, e37245.
  15. Crump, M.J.C.; McDonnell, J.V.; Gureckis, T.M. Evaluating Amazon’s Mechanical Turk as a Tool for Experimental Behavioral Research. PLoS ONE 2013, 8, e57410.
  16. Bartneck, C.; Duenser, A.; Moltchanova, E.; Zawieska, K. Comparing the Similarity of Responses Received from Studies in Amazon’s Mechanical Turk to Studies Conducted Online and with Direct Recruitment. PLoS ONE 2015, 10, e0121595.
  17. Wang, C.; Han, L.; Stein, G.; Day, S.; Bien-Gund, C.; Mathews, A.; Ong, J.J.; Zhao, P.-Z.; Wei, S.-F.; Walker, J.; et al. Crowdsourcing in Health and Medical Research: A Systematic Review. Infect. Dis. Poverty 2020, 9, 8.
  18. GDC Data Portal. Available online: https://portal.gdc.cancer.gov (accessed on 11 March 2022).
  19. Phan, N.N.; Chattopadhyay, A.; Lu, T.-P.; Tsai, M.-H. Leveraging Well-Annotated Databases for Deep Learning in Biomedical Research. Transl. Cancer Res. TCR 2020, 9, 7682–7684.
  20. Kaggle. Available online: https://www.kaggle.com/ (accessed on 11 March 2022).
  21. GitHub. Available online: https://github.com (accessed on 11 March 2022).
  22. Nature Scientific Data. Available online: https://www.nature.com/sdata (accessed on 11 March 2022).
  23. Oxford Academic GigaScience. Available online: https://academic.oup.com/gigascience (accessed on 11 March 2022).
  24. Osteoarthritis Initiative (OAI) Database. Available online: https://nda.nih.gov/oai (accessed on 11 March 2022).
  25. Austrian Spinal Cord Injury Study. Available online: https://www.ascis.at (accessed on 11 March 2022).
  26. Sebastian, A.S. Database Research in Spine Surgery. Clin. Spine Surg. Spine Publ. 2016, 29, 427–429.
  27. Asher, A.L.; Knightly, J.; Mummaneni, P.V.; Alvi, M.A.; McGirt, M.J.; Yolcu, Y.U.; Chan, A.K.; Glassman, S.D.; Foley, K.T.; Slotkin, J.R.; et al. Quality Outcomes Database Spine Care Project 2012–2020: Milestones Achieved in a Collaborative North American Outcomes Registry to Advance Value-Based Spine Care and Evolution to the American Spine Registry. Neurosurg. Focus 2020, 48, E2.
  28. SORG. Available online: https://sorg.mgh.harvard.edu (accessed on 11 March 2022).
  29. Sadilek, A.; Liu, L.; Nguyen, D.; Kamruzzaman, M.; Serghiou, S.; Rader, B.; Ingerman, A.; Mellem, S.; Kairouz, P.; Nsoesie, E.O.; et al. Privacy-First Health Research with Federated Learning. NPJ Digit. Med. 2021, 4, 132.
  30. Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated Machine Learning: Concept and Applications. ACM Trans. Intell. Syst. Technol. 2019, 10, 1–19.
  31. Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Mag. 2020, 37, 50–60.
  32. Kairouz, P.; McMahan, H.B.; Avent, B.; Bellet, A.; Bennis, M.; Nitin Bhagoji, A.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R.; et al. Advances and Open Problems in Federated Learning. FNT Mach. Learn. 2021, 14, 1–210.
  33. Rieke, N.; Hancox, J.; Li, W.; Milletarì, F.; Roth, H.R.; Albarqouni, S.; Bakas, S.; Galtier, M.N.; Landman, B.A.; Maier-Hein, K.; et al. The Future of Digital Health with Federated Learning. NPJ Digit. Med. 2020, 3, 119.
  34. Li, W.; Milletarì, F.; Xu, D.; Rieke, N.; Hancox, J.; Zhu, W.; Baust, M.; Cheng, Y.; Ourselin, S.; Cardoso, M.J.; et al. Privacy-Preserving Federated Brain Tumour Segmentation. In International Workshop on Machine Learning in Medical Imaging; Suk, H.-I., Liu, M., Yan, P., Lian, C., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 133–141.
  35. Sheller, M.J.; Reina, G.A.; Edwards, B.; Martin, J.; Bakas, S. Multi-Institutional Deep Learning Modeling Without Sharing Patient Data: A Feasibility Study on Brain Tumor Segmentation. In International MICCAI Brainlesion Workshop; Springer: Cham, Switzerland, 2018; pp. 92–104.
  36. Wang, F.; Casalino, L.P.; Khullar, D. Deep Learning in Medicine—Promise, Progress, and Challenges. JAMA Intern. Med. 2019, 179, 293.