Artificial-Intelligence-Based Clinical Decision Support Systems in Primary Care

Primary care stands as a cornerstone of healthcare, serving as the first point of contact and managing the largest share of patients in the United States and worldwide. AI can mimic human reasoning and behavior and handle the increasing volume of medical data within healthcare systems. Machine learning (ML) is the most common AI technique used.

  • artificial intelligence
  • machine learning
  • clinical decision support systems

1. Introduction

Primary care stands as a cornerstone of healthcare, serving as the first point of contact and managing the largest share of patients in the United States [1] and worldwide. It offers patient-centered, comprehensive, longitudinal, and coordinated care across settings [2]. Managing a large, heterogeneous population is a challenging task for Primary Care Physicians (PCPs), especially when many patients have concurrent chronic diseases and polypharmacy [3]. Keeping complete health records and clinical knowledge up to date is therefore essential.
In 2007, the US government encouraged the introduction of Clinical Decision Support Systems (CDSSs) into Electronic Health Records (EHR), and by 2017, 40.2% of US hospitals had advanced CDSS capabilities [4]. CDSSs aid physicians in diagnosis, disease management, prescription, and drug control, often through alarm systems [5,6]. They have been especially effective in increasing adherence to clinical guidelines, applying prevention and public health strategies, and improving patient safety [3,7]. Furthermore, with CDSSs’ integration into EHRs, the incidence of pharmacological adverse events has decreased, and both recommendations and alerts have become personalized [7,8].
According to a meta-analysis, CDSSs improved the average percentage of patients receiving the desired care element by 5.8% [9]. Even with CDSSs supporting PCPs in making up-to-date clinical decisions [10], their impact on morbidity and mortality in Primary Healthcare (PHC) has not been conclusively demonstrated [11]. Moreover, PCPs may face difficulties co-managing patients with specialties due to discrepancies in the recommendations given by the specialists and their CDSSs or outdated EHRs [11].
Although the concept of Artificial Intelligence (AI) was first introduced seven decades ago, its rapid evolution began around 2010 with the enhancement of graphics processing units [12,13,14,15]. AI can mimic human reasoning and behavior [12,13,15,16] and handle the increasing volume of medical data within healthcare systems [17]. Machine learning (ML) is the most common AI technique used, and it can be categorized into three types: supervised, unsupervised, and reinforcement algorithms [12,18]. Massive training datasets are used as input to train ML algorithms to make accurate predictions, allowing computers to learn without explicit programming [12,15,16,19].
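To make the supervised category concrete, here is a minimal sketch: a nearest-centroid classifier "learns" a decision rule from labeled examples instead of being explicitly programmed with one. The feature names and values are invented purely for illustration; this is not a clinical model.

```python
# Toy illustration of supervised learning: the classifier derives its
# decision rule from labeled examples instead of hand-written logic.
# Feature values below are invented; this is not a clinical model.

def train_nearest_centroid(samples, labels):
    """Compute the mean feature vector (centroid) of each class."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        sums.setdefault(y, [0.0] * len(x))
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + v for s, v in zip(sums[y], x)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid (squared Euclidean)."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda y: dist(centroids[y], x))

# Hypothetical scaled features: [systolic blood pressure, fasting glucose]
X = [[0.9, 0.8], [1.0, 0.9], [0.2, 0.1], [0.3, 0.2]]
y = ["high-risk", "high-risk", "low-risk", "low-risk"]

model = train_nearest_centroid(X, y)
print(predict(model, [0.95, 0.85]))  # -> high-risk
```

In an unsupervised setting the labels `y` would be absent and the algorithm would have to discover the groups itself; in reinforcement learning the algorithm would instead learn from rewards for actions taken.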

2. Artificial-Intelligence-Based Clinical Decision Support Systems in Primary Care

Clinical Decision Support Systems aid physicians in tasks ranging from administrative automation and documentation to clinical management and patient safety [5]. They become more advantageous when integrated with EHRs, as patients’ individual clinical profiles can be matched to the system’s knowledge base, allowing customized recommendations and specific sets of administrative actions [8]. Nevertheless, clinician satisfaction remains low due to several factors, such as excessive time consumption, workflow interruption, suboptimal EHR integration, irrelevant recommendations, and poor user-friendliness [20,21]. A systematic review and meta-analysis by Meunier et al. found that many PCPs either perceived no need for CDSS assistance or disagreed with its recommendations [11]. Additionally, CDSSs disrupt physician workflow and increase cognitive load, resulting in physicians spending more time completing tasks and less time with patients [4]. Another significant concern is alert fatigue, which leads physicians to disregard up to 96% of the alerts offered by the CDSS, sometimes to the detriment of the patient’s well-being [3,5,9]. As the prevalence of chronic conditions continues to rise, the demand for healthcare services and documentation also increases, resulting in a higher volume of data usage. This incites a vicious cycle in which EHRs and CDSSs overload physicians and physicians enter incomplete, non-uniform data, leading to physician burnout and poor patient management [22,23]. In a study interviewing 1792 physicians (30% PCPs) about health information technology (HIT)-related burnout, 69.8% reported HIT-related stress, and 25.9% presented at least one symptom of burnout. Family medicine was the specialty with the highest prevalence of burnout symptoms and the third highest prevalence of HIT-related stress [24]. The overall burnout that primary care physicians face represents one of the most significant challenges in PHC.
Medication prescription errors are frequently reported among family physicians in the United States and other countries [25]. On top of that, approximately 5% of adult patients in the US experience diagnostic errors in the outpatient setting every year, with 33% leading to permanent severe injury or immediate or inevitable death [4]. In an attempt to diminish prescription errors, Herter et al. [26] implemented a system that considered patients’ characteristics to increase the proportion of successful UTI treatments and avoid overmedication and the risk of resistance. It increased the treatment success rate by 8% and improved adherence to treatment guidelines. While not yet implemented in PHC, one study in Israel reported the use of a CDSS powered by ML that identifies and intercepts potential medication prescription errors based on the analysis of historical EHRs and the patient’s current clinical environment and temporal circumstances. This AI-CDSS reduced prescription errors without causing alert fatigue [27]. The big data in EHRs may be a valuable resource for AI-CDSSs. By incorporating AI into CDSSs, they become more capable of clinical reasoning, as they can handle more information and approach it more holistically. With ML, AI algorithms can identify patterns, trends, and correlations in EHRs that may not be apparent to physicians [15,16,19,28]. Likewise, they can learn from historical patient data to make predictions and recommendations for current patients [29,30]. In Wang et al.’s study, the AI-CDSS deployed in China was helpful for supporting physicians’ diagnoses and avoiding biases in cases of disagreement. Additionally, it provided cases similar to the current patient and relevant literature in real time. Physicians perceived this as a tool for training their knowledge, facilitating information research, and preventing adverse events [31]. In Yao et al. 
[32], the prediction capabilities of their AI-CDSS increased the diagnosis of low ejection fraction within 90 days of the intervention, achieving statistical significance. The intervention proved even more effective in outpatient clinics. With deep learning (DL), AI arms CDSSs with the ability to offer personalized treatment recommendations based on a patient’s unique medical history, genetics, and treatment responses [15,16,17,19,26,29,31,33,34,35]. Similarly, it can report abnormal tests or clinical results in real time and suggest alternative treatment options [29,31,32,36]. This immediacy can reduce the time needed to reach optimal treatment and increase the quality time physicians spend with their patients [14,17]. One example is the AI-CDSS used in Seol et al. [30], the Asthma-Guidance and Prediction System (A-GPS). Even though it did not demonstrate a significant difference in its core objectives compared to the control, it reduced the time to follow-up care after asthma exacerbations and decreased healthcare costs. Additionally, it showed the potential to reduce clinicians’ burden by significantly reducing the median time for EHR review by 7.8 min. When optimally developed, AI-CDSSs may be powerful tools in team-based care models, such as most PHC settings. They can assist physicians in delivering integrated services by organizing and ensuring that the entire patient-management process, from preventive care and coordination to full diagnostic workup, is effectively performed [13,37]. In addition, they can automate the process of note writing, extracting relevant clinical information from previous encounters and assembling it into appropriate places in the note [13,14,17]. This allows physicians to focus on human interaction, which is the hallmark of primary care. With their AI-CDSS, physicians in Cruz et al. 
improved their adherence to clinical pathways in 8 of the 18 recommendations related to common diseases in PHC; 3 were statistically significant [36]. Moreover, in Romero-Brufau et al., physicians perceived that the use of their AI-CDSS helped increase patients’ preparedness to manage diabetes and helped coordinate care [33]. A key finding is the scarcity of research on AI-CDSSs in PHC in real clinical settings; in addition, both the outcomes obtained and the objectives of the studies were heterogeneous. Some focused on assessing the effectiveness of the systems [26,30,32,36], while others focused on physicians’ attitudes toward them [31,33]. The effectiveness of the systems varied, with some proving more effective than their comparison group [26,32], some proving only somewhat useful [30,33,36], and others not being useful at all [31]. CDSSs and EHRs represent a burden for many physicians, leading to negative prejudices and biases toward them [4]. Additionally, there may be resistance and skepticism toward AI due to the increased workload that EHRs create [17]. Furthermore, there is mistrust of AI and concern that AI may replace physicians [18,38]. Because of the latter, early research focuses on comparing and understanding physicians’ attitudes toward AI-CDSSs, as in Romero-Brufau et al. [33] and Wang et al. [31]. In the former, the researchers found that physicians were less excited about AI and were more likely to feel that AI did not understand their jobs, even after becoming familiar with it. Clinicians gave the system a median helpfulness score of 11 on a 1–100 scale, where the low end indicated that the system was not helpful. Only 14% of the physicians would recommend the AI-CDSS to another clinic, and only 10% thought that the AI-CDSS should continue to be integrated into their clinic within the EHR.
Thirty-four percent believed the system had the potential to be helpful. This could be because the physicians perceived the interventions recommended by the system as inadequate, insufficiently personalized for each patient, or simply not useful [33]. In the same way, Wang et al. [31] reported that physicians felt the AI-CDSS “Brilliant Doctor” was not optimized for their local context, limiting or eliminating its use. Physicians reported that the confidence scores of the diagnosis recommendations were too low, alerts were not useful, resource limitations were not considered, and it would take too long to complete what the system asked in order to obtain recommendations. These negative perceptions were not shared in N.P. Cruz et al. [36] and Seol et al. [30], where physicians were satisfied with the AI-CDSS. Even with AI proving actual improvement in several health fields, its general implementation faces some challenges. There are four major ethical challenges: informed consent for the use of personal data, safety and transparency, algorithmic fairness and biases, and data privacy [34,39]. First, most common AI systems lack explainability, a problem known as the AI “black box”: there is no way to be sure which elements lead the AI algorithm to its conclusion. This lack of explainability is also a major legal concern and a reason why physicians distrust AI [40]. There is no consensus on the extent to which patients should be informed about the AI that will be used, the biases it could have, or the risks it could pose. Moreover, how can patients be adequately informed when the reasoning behind each recommendation cannot be fully interpreted? Secondly, for AI algorithms to function appropriately, they must be initially trained on an extensive dataset. For optimal training, at least ten times as many samples as parameters in the network are needed.
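As a back-of-the-envelope illustration of that rule of thumb, the snippet below counts the parameters of a small fully connected network and applies the ten-samples-per-parameter heuristic. The layer sizes are hypothetical, chosen only to show the arithmetic.

```python
# Rough parameter count for a small fully connected network, and the
# "ten samples per parameter" rule of thumb from the text.
# Layer sizes are hypothetical.
layers = [100, 64, 32, 1]  # input features -> two hidden layers -> output

params = sum(n_in * n_out + n_out          # weights + biases per layer
             for n_in, n_out in zip(layers, layers[1:]))

print(params)       # 8577 parameters
print(10 * params)  # ~85,770 training samples suggested
```

Even this tiny network would call for tens of thousands of labeled samples, which few individual primary care practices can supply.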
This is often unfeasible in PHC because of data and dataset scarcity, as most practices do not have access to such volumes [19,41]. On top of that, most healthcare organizations lack the data infrastructure required to collect the data needed to adequately train an algorithm tailored to the local population and practice patterns and to guarantee the absence of bias [6,15,22,42]. To work around this problem, some ML models are trained on synthetic data, and others use datasets derived only from specific populations, leading to selection bias [13,14,17,34,39,43]. The lack of real clinical backgrounds and racial diversity leads to inaccurate recommendations, false diagnoses, ineffective treatments, the perpetuation of disparities, and even fatalities [2]. Another phenomenon derived from data misalignment is dataset shift, in which systems underperform due to small differences between the data used for training and the actual population in which the algorithm is deployed [44,45,46]. This raises questions about accountability [16,34]. Who would be blamed in the case of an adverse event? Although forums and committees are currently trying to settle this issue, it remains unclear, which leaves AI developers free of responsibility, physicians uncomfortable using the technology, and patients deprived of its potential benefits. AI may have the capacity to grant equitable care to all types of populations, regardless of socioeconomic background. However, the cost of implementing these technologies is high, and most developing countries do not have EHRs, or the ones they have are obsolete, hindering the implementation of efficient CDSSs [4,11]. This may partly explain why the success of CDSSs in high-income countries has not translated to low-resource settings [6]. This is reflected in the reviewed studies, five of six of which tested AI-CDSSs in high-income countries.
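Dataset shift can be simulated in a few lines: a simple threshold model is tuned on one synthetic "biomarker" distribution and then applied to a cohort whose distribution has drifted. All numbers below are synthetic and purely illustrative.

```python
import random

random.seed(0)  # deterministic synthetic data

def make_cohort(mean_pos, mean_neg, n=1000):
    """Synthetic 1-D biomarker: positives ~ N(mean_pos, 1), negatives ~ N(mean_neg, 1)."""
    data = [(random.gauss(mean_pos, 1.0), 1) for _ in range(n)]
    data += [(random.gauss(mean_neg, 1.0), 0) for _ in range(n)]
    return data

def accuracy(threshold, cohort):
    return sum((x > threshold) == bool(label) for x, label in cohort) / len(cohort)

# "Train": pick the cutoff that maximizes accuracy on the training cohort.
train = make_cohort(mean_pos=2.0, mean_neg=0.0)
threshold = max((t / 10 for t in range(-20, 41)), key=lambda t: accuracy(t, train))

# Deployment population: same labels, but the biomarker scale has drifted.
shifted = make_cohort(mean_pos=3.0, mean_neg=1.0)

print(accuracy(threshold, train))    # high on the cohort it was tuned on
print(accuracy(threshold, shifted))  # noticeably lower after the shift
```

The model itself is unchanged; only the population statistics moved, yet its performance degrades, which is exactly the failure mode described above.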
Additionally, the AI used in the “Brilliant Doctor” CDSS was not state-of-the-art nor optimally integrated into the local EHR, making it difficult to work with [31]. Finally, the mistrust physicians and patients have toward AI is another critical challenge for its implementation [18]. In a study analyzing physicians’ perceptions of AI, physicians felt it would make their jobs less satisfying, and almost all feared being replaced. They also believed AI would be unable to automate clinical reasoning because AI is too rigid, whereas clinical reasoning is fundamentally the opposite. There were several other concerns, such as the fear of unquestioningly following AI’s recommendations and the idea that AI would take control of their jobs [38]. In another study, the main reason for patients’ resistance to AI was the belief that AI is too inflexible to consider their individual characteristics or circumstances [17]. There is also concern that increasing interaction, mainly with the AI-CDSS, would change the dynamics of the patient–provider relationship and diminish the quality of the clinical encounter [14,47]. Recently, a vast effort has been put into the creation and implementation of explainable AI (XAI) models. These are described as “white-box” or “glass-box” models, which produce explainable results; however, they do not always achieve state-of-the-art performance due to the simplicity of their algorithms [48,49]. To overcome this, there has been increasing interest in developing XAI techniques that make current models interpretable. Interpretability techniques, such as Local Interpretable Model-Agnostic Explanations (LIME), Shapley Additive Explanations (SHAP), and Anchors, can be applied to any “black-box” model to make its output more transparent [48].
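To give a flavor of how such model-agnostic attributions work, the sketch below computes exact Shapley values for a made-up three-feature "risk score" by querying it only as a black box. The feature names, weights, and interaction term are invented for illustration; real tools such as the SHAP library approximate this computation for larger feature sets.

```python
import math
from itertools import combinations

FEATURES = ["age>65", "diabetic", "smoker"]

def risk_model(present):
    """Stand-in black box: we only query it, never inspect it.
    Weights and the interaction term are invented for illustration."""
    weights = {"age>65": 0.30, "diabetic": 0.25, "smoker": 0.20}
    score = 0.10 + sum(weights[f] for f in present)
    if "diabetic" in present and "smoker" in present:
        score += 0.10  # hidden interaction term
    return score

def shapley_attribution(model, features):
    """Exact Shapley values by enumerating all feature subsets.
    (Libraries like SHAP approximate this when features are many.)"""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                total += weight * (model(set(subset) | {f}) - model(set(subset)))
        phi[f] = total
    return phi

attributions = shapley_attribution(risk_model, FEATURES)
for f, v in sorted(attributions.items(), key=lambda kv: -kv[1]):
    print(f"{f}: {v:+.3f}")
```

Note how the attributions split the hidden interaction between "diabetic" and "smoker" evenly, and how they sum exactly to the difference between the full-feature score and the baseline, which is what makes the black box's output auditable.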
In healthcare, where the transparency of advice and therapeutic decisions is fundamental, approaches to explaining the decisions of ML algorithms focus on visualizing the elements that contributed to each decision, such as heatmaps, which highlight the data that contributed most to decision making [49]. Although XAI is not yet a well-established field and few pipelines have been developed, the huge volume of studies on interpretability methods showcases the benefits that these models will bring to current AI utilization [48,49]. Making AI models more transparent will not eradicate mistrust by itself, as issues such as accountability and personal beliefs remain unaddressed. AI implementation should be a collaborative effort between AI users, developers, legislators, the public, and disinterested parties to ensure fairness [50]. More emphasis on conducting qualitative research testing the performance of AI systems would help physicians be sure their use is backed by sound research and not merely by expert opinion. AI education is paramount for a thorough understanding of AI models and, with it, greater trust in using them. With this in mind, some medical schools are upgrading their curricula to include augmented medicine and improve digital health literacy [16]. Furthermore, some guidelines suggest that trust can be achieved through transparency, education, reliability, and accountability [50]. The needs of both physicians and patients must be considered. According to Shortliffe and Sepulveda, there are six characteristics that an AI-CDSS must have to be accepted and integrated [51]:
  • There should be transparency in the logic of the recommendation.
  • It should be time-efficient and able to blend into the workflow.
  • It should be intuitive and easy to learn.
  • It should understand the individual characteristics of the setting in which it is implemented.
  • It should be made clear that it is designed to inform and assist, not to replace.
  • It should have rigorous, peer-reviewed scientific evidence.
To address some validation concerns and ensure transparent reporting, Vasey et al. proposed the DECIDE-AI reporting guideline, which focuses on the evaluation stage of AI-CDSSs [44]. Additionally, there should be a specific contracting instrument to ensure that data sharing involves both the necessary protections and fair retribution to healthcare organizations and their patients [22]. Co-development between developers and physicians is fundamental to achieving adequate satisfaction levels and clear limitations for all parties [43]. Moreover, physicians need to stop thinking of AI as a replacement and instead start thinking of it as a complement. In PHC, AI and AI-CDSSs could become pivotal points for improvement, particularly since reportedly half of the care provided can be safely performed by non-physicians and nurses [52]. Also, 77% of the time spent on preventive care and 47% of the time spent on chronic care could be delegated to non-physicians [53]. With optimized AI-CDSSs, the focus of healthcare time could shift from quantity to quality.

References

  1. Centers for Disease Control and Prevention. Ambulatory Care Use and Physician Office Visits. 2023. Available online: https://www.cdc.gov/nchs/fastats/physician-visits.htm#print (accessed on 16 October 2023).
  2. Stipelman, C.H.; Kukhareva, P.V.; Trepman, E.; Nguyen, Q.T.; Valdez, L.; Kenost, C.; Hightower, M.; Kawamoto, K. Electronic Health Record-Integrated Clinical Decision Support for Clinicians Serving Populations Facing Health Care Disparities: Literature Review. Yearb. Med. Inform. 2022, 31, 184–198.
  3. Cricelli, I.; Marconi, E.; Lapi, F. Clinical Decision Support System (CDSS) in primary care: From pragmatic use to the best approach to assess their benefit/risk profile in clinical practice. Curr. Med. Res. Opin. 2022, 38, 827–829.
  4. Harada, T.; Miyagami, T.; Kunitomo, K.; Shimizu, T. Clinical Decision Support Systems for Diagnosis in Primary Care: A Scoping Review. Int. J. Environ. Res. Public Health 2021, 18, 8435.
  5. Sutton, R.T.; Pincock, D.; Baumgart, D.C.; Sadowski, D.C.; Fedorak, R.N.; Kroeker, K.I. An overview of clinical decision support systems: Benefits, risks, and strategies for success. NPJ Digit. Med. 2020, 3, 17.
  6. Kiyasseh, D.; Zhu, T.; Clifton, D. The Promise of Clinical Decision Support Systems Targetting Low-Resource Settings. IEEE Rev. Biomed. Eng. 2022, 15, 354–371.
  7. Litvin, C.B.; Ornstein, S.M.; Wessell, A.M.; Nemeth, L.S.; Nietert, P.J. Adoption of a clinical decision support system to promote judicious use of antibiotics for acute respiratory infections in primary care. Int. J. Med. Inform. 2012, 81, 521–526.
  8. Pinar Manzanet, J.M.; Fico, G.; Merino-Barbancho, B.; Hernández, L.; Vera-Muñoz, C.; Seara, G.; Torrego, M.; Gonzalez, H.; Wastesson, J.; Fastbom, J.; et al. Feasibility study of a clinical decision support system for polymedicated patients in primary care. Healthc. Technol. Lett. 2023, 10, 62–72.
  9. Kwan, J.L.; Lo, L.; Ferguson, J.; Goldberg, H.; Diaz-Martinez, J.P.; Tomlinson, G.; Grimshaw, J.M.; Shojania, K.G. Computerised clinical decision support systems and absolute improvements in care: Meta-analysis of controlled clinical trials. BMJ 2020, 370, m3216.
  10. Trinkley, K.E.; Blakeslee, W.W.; Matlock, D.D.; Kao, D.P.; Van Matre, A.G.; Harrison, R.; Larson, C.L.; Kostman, N.; Nelson, J.A.; Lin, C.T.; et al. Clinician preferences for computerised clinical decision support for medications in primary care: A focus group study. BMJ Health Care Inform. 2019, 26, e000015.
  11. Meunier, P.Y.; Raynaud, C.; Guimaraes, E.; Gueyffier, F.; Letrilliart, L. Barriers and Facilitators to the Use of Clinical Decision Support Systems in Primary Care: A Mixed-Methods Systematic Review. Ann. Fam. Med. 2023, 21, 57–69.
  12. Jheng, Y.C.; Kao, C.L.; Yarmishyn, A.A.; Chou, Y.B.; Hsu, C.C.; Lin, T.C.; Hu, H.K.; Ho, T.K.; Chen, P.Y.; Kao, Z.K.; et al. The era of artificial intelligence-based individualized telemedicine is coming. J. Chin. Med. Assoc. 2020, 83, 981–983.
  13. Liaw, W.; Kakadiaris, I.A. Artificial Intelligence and Family Medicine: Better Together. Fam. Med. 2020, 52, 8–10.
  14. Liyanage, H.; Liaw, S.T.; Jonnagaddala, J.; Schreiber, R.; Kuziemsky, C.; Terry, A.L.; de Lusignan, S. Artificial Intelligence in Primary Health Care: Perceptions, Issues, and Challenges. Yearb. Med. Inform. 2019, 28, 41–46.
  15. Habehh, H.; Gohel, S. Machine Learning in Healthcare. Curr. Genomics 2021, 22, 291–300.
  16. Grech, V.; Cuschieri, S.; Eldawlatly, A.A. Artificial intelligence in medicine and research—The good, the bad, and the ugly. Saudi J. Anaesth. 2023, 17, 401–406.
  17. Thiessen, U.; Louis, E.; St. Louis, C. Artificial Intelligence in Primary Care. Fam. Dr. J. New York State Acad. Fam. Physicians 2021, 9, 11–14.
  18. Bitkina, O.V.; Park, J.; Kim, H.K. Application of artificial intelligence in medical technologies: A systematic review of main trends. Digit. Health 2023, 9, 20552076231189331.
  19. Miotto, R.; Wang, F.; Wang, S.; Jiang, X.; Dudley, J.T. Deep learning for healthcare: Review, opportunities and challenges. Brief Bioinform. 2018, 19, 1236–1246.
  20. Kilsdonk, E.; Peute, L.W.; Jaspers, M.W. Factors influencing implementation success of guideline-based clinical decision support systems: A systematic review and gaps analysis. Int. J. Med. Inform. 2017, 98, 56–64.
  21. Moxey, A.; Robertson, J.; Newby, D.; Hains, I.; Williamson, M.; Pearson, S.A. Computerized clinical decision support for prescribing: Provision does not guarantee uptake. J. Am. Med. Inform. Assoc. 2010, 17, 25–33.
  22. Panch, T.; Mattie, H.; Celi, L.A. The “inconvenient truth” about AI in healthcare. NPJ Digit. Med. 2019, 2, 77.
  23. Linzer, M.; Bitton, A.; Tu, S.P.; Plews-Ogan, M.; Horowitz, K.R.; Schwartz, M.D.; Association of Chiefs and Leaders in General Internal Medicine (ACLGIM) Writing Group; Poplau, S.; Paranjape, A.; et al. The End of the 15-20 Minute Primary Care Visit. J. Gen. Intern. Med. 2015, 30, 1584–1586.
  24. Gardner, R.L.; Cooper, E.; Haskell, J.; Harris, D.A.; Poplau, S.; Kroth, P.J.; Linzer, M. Physician stress and burnout: The impact of health information technology. J. Am. Med. Inform. Assoc. 2019, 26, 106–114.
  25. Jing, X.; Himawan, L.; Law, T. Availability and usage of clinical decision support systems (CDSSs) in office-based primary care settings in the USA. BMJ Health Care Inform. 2019, 26, e100015.
  26. Herter, W.E.; Khuc, J.; Cinà, G.; Knottnerus, B.J.; Numans, M.E.; Wiewel, M.A.; Bonten, T.N.; de Bruin, D.P.; van Esch, T.; Chavannes, N.H.; et al. Impact of a Machine Learning-Based Decision Support System for Urinary Tract Infections: Prospective Observational Study in 36 Primary Care Practices. JMIR Med. Inform. 2022, 10, e27795.
  27. Segal, G.; Segev, A.; Brom, A.; Lifshitz, Y.; Wasserstrum, Y.; Zimlichman, E. Reducing drug prescription errors and adverse drug events by application of a probabilistic, machine-learning based clinical decision support system in an inpatient setting. J. Am. Med. Inform. Assoc. 2019, 26, 1560–1565.
  28. Khakharia, A.; Shah, V.; Jain, S.; Shah, J.; Tiwari, A.; Daphal, P.; Warang, M.; Mehendale, N. Outbreak Prediction of COVID-19 for Dense and Populated Countries Using Machine Learning. Ann. Data Sci. 2020, 8, 1–19.
  29. Susanto, A.P.; Lyell, D.; Widyantoro, B.; Berkovsky, S.; Magrabi, F. Effects of machine learning-based clinical decision support systems on decision-making, care delivery, and patient outcomes: A scoping review. J. Am. Med. Inform. Assoc. 2023, 30, 2050–2063.
  30. Seol, H.Y.; Shrestha, P.; Muth, J.F.; Wi, C.I.; Sohn, S.; Ryu, E.; Park, M.; Ihrke, K.; Moon, S.; King, K.; et al. Artificial intelligence-assisted clinical decision support for childhood asthma management: A randomized clinical trial. PLoS ONE 2021, 16, e0255261.
  31. Wang, D.; Wang, L.; Zhang, Z.; Wang, D.; Zhu, H.; Gao, Y.; Fan, X.; Tian, F. “Brilliant AI doctor” in rural clinics: Challenges in AI-powered clinical decision support system deployment. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–18.
  32. Yao, X.; Rushlow, D.R.; Inselman, J.W.; McCoy, R.G.; Thacher, T.D.; Behnken, E.M.; Bernard, M.E.; Rosas, S.L.; Akfaly, A.; Misra, A.; et al. Artificial intelligence-enabled electrocardiograms for identification of patients with low ejection fraction: A pragmatic, randomized clinical trial. Nat. Med. 2021, 27, 815–819.
  33. Romero-Brufau, S.; Wyatt, K.D.; Boyum, P.; Mickelson, M.; Moore, M.; Cognetta-Rieke, C. A lesson in implementation: A pre-post study of providers’ experience with artificial intelligence-based clinical decision support. Int. J. Med. Inform. 2020, 137, 104072.
  34. Iqbal, J.; Cortes Jaimes, D.C.; Makineni, P.; Subramani, S.; Hemaida, S.; Thugu, T.R.; Butt, A.N.; Sikto, J.T.; Kaur, P.; Lak, M.A.; et al. Reimagining Healthcare: Unleashing the Power of Artificial Intelligence in Medicine. Cureus 2023, 15, e44658.
  35. Zeng, D.; Cao, Z.; Neill, D.B. Artificial intelligence–enabled public health surveillance—From local detection to global epidemic monitoring and control. In Artificial Intelligence in Medicine; Academic Press: Cambridge, MA, USA, 2021; pp. 437–453.
  36. Cruz, N.P.; Canales, L.; Muñoz, J.G.; Pérez, B.; Arnott, I. Improving adherence to clinical pathways through natural language processing on electronic medical records. In MEDINFO 2019: Health and Wellbeing e-Networks for All; IOS Press: Amsterdam, The Netherlands, 2019; pp. 561–565.
  37. Dexter, P.R.; Schleyer, T. Golden Opportunities for Clinical Decision Support in an Era of Team-Based Healthcare. In AMIA Annual Symposium Proceedings; American Medical Informatics Association: Washington, DC, USA, 2022.
  38. Van Cauwenberge, D.; Van Biesen, W.; Decruyenaere, J.; Leune, T.; Sterckx, S. “Many roads lead to Rome and the Artificial Intelligence only shows me one road”: An interview study on physician attitudes regarding the implementation of computerised clinical decision support systems. BMC Med. Ethics. 2022, 23, 50.
  39. Gerke, S.; Minssen, T.; Cohen, G. Ethical and legal challenges of artificial intelligence-driven healthcare. In Artificial Intelligence in Healthcare; Academic Press: Cambridge, MA, USA, 2021; pp. 295–336.
  40. Char, D.S.; Abramoff, M.D.; Feudtner, C. Identifying Ethical Considerations for Machine Learning Healthcare Applications. Am. J. Bioeth. 2020, 20, 7–17.
  41. Peiffer-Smadja, N.; Rawson, T.M.; Ahmad, R.; Buchard, A.; Georgiou, P.; Lescure, F.X.; Birgand, G.; Holmes, A.H. Machine learning for clinical decision support in infectious diseases: A narrative review of current applications. Clin. Microbiol. Infect. 2020, 26, 584–595.
  42. Morales, S.; Engan, K.; Naranjo, V. Artificial intelligence in computational pathology—Challenges and future directions. Digit. Signal Process. 2021, 119, 103196.
  43. Mistry, P. Artificial intelligence in primary care. Br. J. Gen. Pract. 2019, 69, 422–423.
  44. Vasey, B.; Nagendran, M.; Campbell, B.; Clifton, D.A.; Collins, G.S.; Denaxas, S.; Denniston, A.K.; Faes, L.; Geerts, B.; Ibrahim, M.; et al. Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI. Nat. Med. 2022, 28, 924–933.
  45. Subbaswamy, A.; Saria, S. From development to deployment: Dataset shift, causality, and shift-stable models in health AI. Biostatistics 2020, 21, 345–352.
  46. Finlayson, S.; Subbaswamy, A.; Singh, K.; Bowers, J.; Kupke, A.; Zittrain, J.; Kohane, I. The Clinician and Dataset Shift in Artificial Intelligence. N. Engl. J. Med. 2021, 385, 283–286.
  47. Turcian, D.; Stoicu-Tivadar, V. Artificial Intelligence in Primary Care: An Overview. Stud. Health Technol. Inform. 2022, 288, 208–211.
  48. Linardatos, P.; Papastefanopoulos, V.; Kotsiantis, S. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy 2020, 23, 18.
  49. Lötsch, J.; Kringel, D.; Ultsch, A. Explainable Artificial Intelligence (XAI) in Biomedicine: Making AI Decisions Trustworthy for Physicians and Patients. BioMedInformatics 2021, 2, 1–17.
  50. Gille, F.; Jobin, A.; Ienca, M. What we talk about when we talk about trust: Theory of trust for AI in healthcare. Intell.-Based Med. 2020, 1–2, 100001.
  51. Shortliffe, E.H.; Sepulveda, M.J. Clinical Decision Support in the Era of Artificial Intelligence. JAMA 2018, 320, 2199–2200.
  52. Pelak, M.; Pettit, A.R.; Terwiesch, C.; Gutierrez, J.C.; Marcus, S.C. Rethinking primary care visits: How much can be eliminated, delegated or performed outside of the face-to-face visit? J. Eval. Clin. Pract. 2015, 21, 591–596.
  53. Altschuler, J.; Margolius, D.; Bodenheimer, T.; Grumbach, K. Estimating a reasonable patient panel size for primary care physicians with team-based task delegation. Ann. Fam. Med. 2012, 10, 396–400.