Most surgeons are skeptical as to the feasibility of autonomous actions in surgery. Interestingly, many examples of autonomous actions already exist and have been around for years. Since the beginning of this millennium, the field of artificial intelligence (AI) has grown exponentially with the development of machine learning (ML), deep learning (DL), computer vision (CV) and natural language processing (NLP). This entry will highlight the most recent issues regarding how AI will get us to more autonomous actions in surgery by discussing the different degrees of surgical autonomy, recent advances with reinforcement learning and the ethical roadblocks that lie ahead.
Unlike artificial intelligence in medicine, which aims to use autonomously functioning algorithms to better analyze patient data and improve patient outcomes, AI in surgery also involves movement. Unlike strict medicine, surgery is a dynamic art. When we use the term “surgery” we are also referring to endoscopy and interventional techniques and procedures, because the interventional disciplines continue to coalesce into a single field, a trend reflected in the growing number of hybrid operating rooms that house angiography-compatible tables, mobile CT scanners, minimally invasive surgical equipment and endoscopes, all of which can be used in tandem. Because the interventional fields of medicine also rely greatly on the medical management of patients, we believe that AI medicine (AIM) and AI surgery (AIS) could one day be considered two distinct disciplines, with AIM reserved for instances where computer algorithms are used to better diagnose, manage or treat patients without a specific interventional procedure being performed.
AIS could be a term for autonomously acting machines that can perform interventional gestures/actions. In terms of the Gartner Hype Cycle, many surgeons believe that we are languishing in the “Trough of Disillusionment”, and the promise of autonomous surgery seems like a “pipe dream” to most modern-day surgeons. In reality, however, instances of autonomous actions in surgery already exist. Unfortunately, the reluctance of many laparoscopic surgeons to give up on haptics, or the sense of touch, is hindering progress in AIS through the refusal to embrace robotic tele-manipulation technology; in effect, they are refusing to let go, something that will be needed if the dream of AIS is ever to come to pass. The medical community has already shown itself resistant to any automation of medical tasks, even simple computations, and it is safe to say that automation of surgical tasks will meet an even more profound degree of resistance.
Another obstacle to the growth of AIS is the dogmatic belief that for something to have AI, it must have algorithms that enable progressive improvement and learning by an artificially intelligent device. This creates a conundrum: theoretically, machine learning should be infinite, which raises the question of what the ultimate purpose of perpetual learning in surgery is. Should it be carried out in the hope that the surgical action will ultimately become so perfect that it is no longer necessary? How much more perfectly does a robot need to place a titanium clip on the cystic duct?
It could be argued that modern monopolar and bipolar electrosurgical technology is an example of AIS, as it has tissue-sensing capability that adjusts the action of cautery based on the resistance and capacitance of the tissue (TissueFect™, ValleyLab, Medtronic, Dublin, Ireland). In particular, it has a closed-loop control that analyzes 434,000 data points per second. Does this technology need to improve on that level of data analysis to be considered AI? Or what about Ligasure technology, which uses a high-speed advanced algorithm to seal vessels, adjusting the duration of action according to tissue thickness (Ligasure, ValleyLab, Medtronic, Dublin, Ireland)? We certainly do not mean to imply that there is no room for improvement in these technologies, but at what point should something be defined as AI? Shouldn’t any autonomous action be acknowledged and celebrated as an example of AIS?
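The closed-loop behavior described above can be illustrated with a minimal feedback-control sketch. This is purely a toy model under stated assumptions (a proportional controller, a made-up linear tissue response, and illustrative numbers); it is not the vendor's algorithm, only the general idea of measured tissue impedance feeding back to adjust generator output.

```python
# Illustrative closed-loop control sketch (not the vendor's algorithm):
# measured tissue impedance feeds back to adjust generator power toward
# a target, the basic idea behind tissue-sensing electrosurgery.
def control_step(power, impedance, target_impedance, gain=0.01):
    """One feedback iteration: raise power while impedance is below
    target, lower it when above, clamped to a safe range."""
    error = target_impedance - impedance
    power = power + gain * error
    return min(max(power, 0.0), 100.0)   # clamp to 0-100 W (illustrative)

# Simulated run: impedance rises as tissue desiccates (toy model).
power, impedance = 30.0, 50.0
for _ in range(100):
    power = control_step(power, impedance, target_impedance=120.0)
    impedance += 0.02 * power            # toy tissue response
```

A real device iterates a loop like this hundreds of thousands of times per second over far richer sensor data; the sketch only shows the feedback structure.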
2. Machine Learning
Machine learning (ML) is a branch of artificial intelligence comprising algorithms that allow machines to solve problems without explicit computer programming. By analyzing big data, machines can assimilate large amounts of information applicable to risk stratification, diagnosis, treatment decisions and survival prediction. Not only can AI models analyze large amounts of data collected over long periods of time, providing predictions of future events on the basis of the statistical weight of past correlations, they can also continuously improve as new data arrive. Through a process called “incremental learning”, trainable neural networks improve over time, surpassing unchanging scoring systems and standardized software. Moreover, human–machine interaction further improves the performance of ML tools. Indeed, the learning process goes far beyond textbook data, incorporating real-life scenarios, and can improve on experts’ opinions.
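The idea of incremental learning can be sketched in a few lines: rather than being retrained from scratch, a model is updated batch by batch as new cases arrive. The example below is a minimal illustration using a logistic model and synthetic data; the features, labels and learning rate are all illustrative assumptions, not any published tool.

```python
# Minimal sketch of incremental learning: a logistic model updated
# batch by batch with stochastic gradient descent, so it keeps
# improving as new data arrive (all data here are synthetic).
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(4)          # model weights, one per feature
lr = 0.1                 # learning rate

def update(w, X, y, lr):
    """One incremental update from a new batch of cases."""
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probability
    grad = X.T @ (p - y) / len(y)      # logistic-loss gradient
    return w - lr * grad

for _ in range(200):                   # batches arriving over time
    X = rng.normal(size=(50, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    w = update(w, X, y, lr)

# Accuracy on held-out synthetic cases improves as batches accumulate.
X_test = rng.normal(size=(500, 4))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(float)
acc = np.mean(((X_test @ w) > 0) == (y_test > 0.5))
```

The key point is that `update` touches only the newest batch, which is what lets such a model surpass a fixed scoring system over time.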
Most of the studies conducted on ML tools have focused on machine vision, biomarker discovery, and clinical matching for diagnosis, classification and outcome prediction. Several studies have applied ML tools to surgery, in particular to risk assessment, performance evaluation, treatment decision making and outcome prediction. In an effort to better identify high-risk surgical patients from complex data, an ML tool named Pythia was built by Corey et al. to predict postoperative complication risk. Using surgical patient electronic health record (EHR) data, including patient demographics, smoking status, medications, co-morbidities, surgical details, and variables addressing surgical complexity, the authors created a decision support tool for the identification of high-risk patients. Similarly, Bertsimas et al. applied novel ML techniques to design an interactive risk calculator for emergency surgery. Using data from the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) database, the authors designed the POTTER application (Predictive OpTimal Trees in Emergency Surgery Risk) to predict postoperative mortality and morbidity. Unlike standard predictive models, POTTER accounts for nonlinear interactions among variables; the system updates after each answer, interactively dictating the subsequent question. A similar tool, predicting eight specific postoperative complications and death within one year after major surgery, was developed and validated by Bihorac et al. Their MySurgeryRisk tool can accurately predict acute kidney injury, sepsis, venous thromboembolism, intensive care unit admission >48 h, mechanical ventilation >48 h, and wound, neurologic and cardiovascular complications, with AUC values ranging between 0.82 and 0.94, as well as death up to 24 months after surgery, with AUC values ranging between 0.77 and 0.83. Subsequent studies have built ML models predicting postoperative outcomes that proved to perform better than traditional risk calculators.
In an attempt to better define patient outcomes, Hung et al. applied ML to objectively measure surgeon performance in a pilot study. By combining procedure-specific automated performance metrics with ML algorithms, the authors were able to objectively assess surgeons’ performance after robotic surgery. These algorithms still include biases and misinformation, and thus multi-institutional data from several high-volume groups are needed to train the tool and to create a robust model able to correctly predict different clinical outcomes.
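The AUC figures quoted for tools such as MySurgeryRisk have a simple interpretation: the probability that a randomly chosen patient who had the complication was assigned a higher predicted risk than a randomly chosen patient who did not. A short sketch of that rank-based computation (with made-up scores) follows.

```python
# Sketch of how an AUC value is computed: the probability that a
# randomly chosen positive case receives a higher predicted risk
# than a randomly chosen negative case.
import numpy as np

def auc(y_true, scores):
    """Rank-based AUC (equivalent to the Wilcoxon-Mann-Whitney statistic)."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # Count pairs where the positive outranks the negative; ties count half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Illustrative labels (1 = complication) and predicted risks.
y = np.array([0, 0, 1, 1, 0, 1])
s = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])
value = auc(y, s)   # 8 of 9 positive-negative pairs ranked correctly → 8/9
```

An AUC of 0.5 means the model ranks no better than chance, while the 0.82 to 0.94 range reported above means the model orders the vast majority of patient pairs correctly.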
Surgical performance has also been measured to allow surgeons to learn from their experience and refine their skills. Using either surgical video clips or virtual reality scenarios, deep learning models have been trained to estimate performance level and specific surgical skills. In the near future, we may be able to personalize training through ML tools such as these.
Other studies have used ML to predict staging, to support treatment decision making, and to improve case duration estimations, further expanding the applicability of ML. Despite these advantages, ML presents several challenges, such as the need to process a large amount of data before it can be analyzed, and the necessity of repeatedly training the model and refining it according to various clinical scenarios. Ethical considerations should also be taken into account when applying ML to healthcare, including privacy and security issues and the risk of medico-legal implications. These issues will be discussed at more length below.
3. Natural Language Processing
Natural Language Processing (NLP) is the product of the interaction of Artificial Intelligence and linguistics. Distinct from simple text information retrieval (IR), which indexes and searches a large volume of text based on statistical techniques, NLP evolved from basic word-to-word approaches, through a complex process of coding words, sentences, meanings and contexts, to its modern structure. Since the 1950s, NLP and IR have converged into what is known today as NLP, namely, a computer-based algorithm that handles and elaborates natural (human) language, making it suitable for computation.
When applied to healthcare, where available clinical data are kept in Electronic Health Records (EHRs), the need to decode narrative text from this large amount of unstructured data has become urgent because of the complexity of human language and the routine use of metaphors and telegraphic prose. Compared to NLP, manual review of EHRs is time consuming, potentially misleading because of biases, and extremely expensive.
The tremendous potential value of a big data analytical system in healthcare is easily explained: EHRs currently represent the major source of patient information but, for the most part, data regarding symptoms, risk factors for a specific disease, or outcomes after medical or surgical treatment come from unstructured text. The ability to translate this information into a coded algorithm could allow for more precise screening programs and could modify medical and/or surgical strategies. A systematic review by Koleck et al. analyzed the available methods employing NLP to interpret symptoms from the EHRs of inpatients and outpatients, identifying a possible future application of NLP in the normalization of symptoms to controlled vocabularies, in order to prevent different words for the same concept being counted separately. A notable criticism of the available studies has been that reported signs and symptoms are easily mixed as the same variable, making interpretation confusing. In this review, only 11% of studies focused on cancer patients, in contrast with the fact that a major current area of interest for AI (not only NLP) is oncology, where early detection of cancer-specific symptoms could facilitate early diagnosis and potentially enhance screening techniques.
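Symptom normalization to a controlled vocabulary can be illustrated with a deliberately simple lookup: free-text variants are mapped to one canonical concept so that they are counted together. The mapping below is a toy assumption for illustration, not a real clinical terminology (real systems map to vocabularies such as SNOMED CT or the UMLS).

```python
# Toy sketch of symptom normalization to a controlled vocabulary:
# free-text variants map to one canonical concept so that "SOB" and
# "shortness of breath" are not counted as different symptoms.
# The mapping below is illustrative, not a real terminology.
SYNONYMS = {
    "sob": "dyspnea",
    "short of breath": "dyspnea",
    "shortness of breath": "dyspnea",
    "belly pain": "abdominal pain",
    "stomach ache": "abdominal pain",
}

def normalize(term: str) -> str:
    """Map a free-text symptom mention to its canonical concept."""
    key = term.strip().lower()
    return SYNONYMS.get(key, key)

mentions = ["SOB", "shortness of breath", "belly pain", "fever"]
print([normalize(m) for m in mentions])
# → ['dyspnea', 'dyspnea', 'abdominal pain', 'fever']
```

Production NLP pipelines replace the dictionary with learned models and large terminologies, but the goal is the same: collapsing surface variation onto shared concepts before any analysis.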
An obvious and immediate advantage of having reliable, decoded data from clinical notes is the positive impact on the quality of retrospective studies. Moreover, NLP analysis of symptoms and signs in cancer patients may allow for the improved definition of prognostic factors beyond surgical and oncological parameters. The emotional status and quality of life of patients after cancer surgery or other cancer treatment have also been investigated through NLP. Banerjee et al., by creating a specific domain vocabulary of side-effects after prostatectomy, were able to evaluate minor outcomes hidden in clinical free text, resulting in better management, which could be a game-changer in a population with a 5-year survival rate approaching 99%.
When applied to surgery, NLP has been extensively proposed both pre-operatively and for detecting different post-operative complications such as surgical site infection (SSI). Bucher et al. developed and validated an NLP-based model of SSI prediction by analyzing the EHRs of 21,000 patients entered into the ACS-NSQIP, using only text-based documents: history and physical notes, operative, progress and nursing notes, radiology reports and discharge summaries. This predictive model had a sensitivity of 79% and a specificity of 92% on external validation, but its added value was its highly reliable negative predictive value (NPV), a relevant issue for events with a low incidence. Anastomotic leak, deep venous thrombosis, myocardial infarction and pulmonary embolism have also been frequently investigated, and results from a recent meta-analysis demonstrated that the performance of NLP methods in the detection of postoperative complications is similar, if not superior, to that of non-NLP methods, with a sensitivity of 0.92 versus 0.58 (p < 0.001) and comparable specificity. Moreover, NLP models seem to be better than non-NLP models at ruling out specific surgical outcomes, owing to an optimal true-negative identification power. Interestingly, the ability of algorithms to self-correct can increase the utility of their predictions as datasets grow to become more representative of a patient population.
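The metrics cited above, and why a strong negative predictive value matters for rare complications, can be made concrete with a confusion-matrix sketch. The counts below are illustrative assumptions chosen only to reproduce a 79% sensitivity and 92% specificity in a low-incidence population; they are not the study's data.

```python
# Sketch of the metrics cited above, from confusion-matrix counts:
# sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), and the
# negative predictive value NPV = TN/(TN+FN), which is what makes
# a model good at ruling out rare events. Counts are illustrative.
def metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, npv

# Toy low-incidence scenario: 100 true events among 10,000 patients.
sens, spec, npv = metrics(tp=79, fp=792, fn=21, tn=9108)
print(round(sens, 2), round(spec, 2), round(npv, 4))
# → 0.79 0.92 0.9977
```

Even with modest sensitivity, the rarity of the event drives the NPV close to 1, which is why such models excel at ruling complications out rather than in.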
These NLP applications are surely beneficial for patient management, providing a better understanding of peri-operative data, but NLP can be a useful tool for surgeons as well, particularly when applied to surgical education. For example, decoding intra-operative dialogue between residents and faculty, combining NLP and CV, could create a dataset of technique and surgical strategy, in effect a real-life textbook of surgery. In addition, NLP has been used effectively to assess Entrustable Professional Activities (EPAs). EPAs describe specific behaviors associated with different training levels of residents and can potentially enhance the understanding of residents’ training and autonomy in surgical practice.
NLP can be used to validate the datasets that underlie surgical risk predictive models, but the main limit to their widespread use is the non-homogeneity of NLP models and EHR data entry forms across institutions and countries. A future improvement would entail the expansion of registries from local to national and international levels, to build algorithms that can be externally validated on various populations. Surgeons’ low confidence in AI represents another limit to developing a system that is theoretically perfect and promising: it is important for surgeons to understand how AI may impact healthcare and to elaborate strategies of safe interaction in order to implement this nascent technology. Synergy among the fields of AI is also essential to expanding its applications. Lastly, ethical issues and the privacy rules protecting patients’ sensitive data can limit the large-scale applicability of NLP to EHRs. Nonetheless, the enormous potential of NLP remains fascinating, and the multiple potential benefits of its integration into healthcare must be balanced against its risks. Although the technology does not currently exist for NLP to influence autonomous actions in surgery, it must be remembered that communication among team members during surgery is fundamental to its successful performance. Additionally, voice-activated devices already exist and are in use today (ViKY, Endocontrol, Grenoble, France), and it is conceivable that NLP could eventually evolve to benefit the action of voice-controlled devices during a procedure.