AI in SARS-CoV-2 outbreak: History

Artificial Intelligence (AI) and Machine Learning (ML) have expanded their use in different fields of medicine. During the SARS-CoV-2 outbreak, AI and ML were also applied to the evaluation and/or implementation of public health interventions aimed at flattening the epidemiological curve.

  • artificial intelligence
  • machine learning
  • COVID-19
  • public health interventions
  • prediction models
  • epidemic
  • pandemic
  • severe acute respiratory syndrome coronavirus-2


1. Introduction

Over the last five years, the application of Artificial Intelligence (AI) and Machine Learning (ML) has rapidly increased in various areas of medicine [1,2]. In particular, during the SARS-CoV-2 outbreak, AI and ML were shown to be effective in improving the diagnostic and prognostic processes for COVID-19, although with limitations due to potential biases related to the quality of reporting [3]. AI and ML were also applied to public health issues related to COVID-19, including the identification of clinical and social factors associated with the risk of COVID-19 infection and death [4,5,6], the development of spatial risk maps [5], the prediction of epidemic trends and peaks [7,8], and the development of vaccination strategies [9]. Optimizing protection and preventing the spread of COVID-19 requires the implementation of several activities, such as the identification of suspicious events, large-scale screening, tracking, associations with experimental treatments, pneumonia screening, data and knowledge collection and integration using the Internet of Intelligent Things (IIoT), resource distribution, robotics for medical quarantine, forecasting, and modeling and simulation [10,11].
In the current phase of the epidemic, governments are using public health measures such as lockdowns, social distancing, and school closures to contain the spread of the virus. The effectiveness of such strategies is mainly based on theoretical assumptions [12]. Moreover, epidemiological models, such as Susceptible–Exposed–Infectious–Recovered (SEIR) and stochastic transmission models, that have traditionally been used to study and predict dynamics and possible contagion scenarios [13,14,15] have also had limited application to public health interventions during the COVID-19 pandemic.
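As an illustration of the compartmental approach mentioned above, the sketch below integrates a standard SEIR model with SciPy. All parameter values (population size, initial infections, and the transmission, incubation, and recovery rates) are assumptions chosen for illustration, not estimates fitted to COVID-19 data.

```python
# Minimal SEIR sketch (illustrative parameters only, not fitted to COVID-19 data).
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, N):
    S, E, I, R = y
    dS = -beta * S * I / N               # new exposures from S-I contacts
    dE = beta * S * I / N - sigma * E    # exposed become infectious at rate sigma
    dI = sigma * E - gamma * I           # infectious recover at rate gamma
    dR = gamma * I
    return dS, dE, dI, dR

N = 1_000_000                            # population size (assumed)
y0 = (N - 10, 0, 10, 0)                  # start with 10 infectious individuals
t = np.linspace(0, 180, 181)             # days
beta, sigma, gamma = 0.5, 1 / 5.1, 1 / 7  # assumed transmission, incubation, recovery rates

S, E, I, R = odeint(seir, y0, t, args=(beta, sigma, gamma, N)).T
print(f"Epidemic peak: {I.max():.0f} infectious on day {t[I.argmax()]:.0f}")
```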

2. Control the Spread of COVID-19

Based on our findings, quarantine emerged as the most effective intervention to control the spread of COVID-19 [23,26,28]. China implemented a combination of quarantine-based interventions, which also included cordon sanitaire measures and traffic restrictions, from 23 January 2020 to 16 February 2020. Before the implementation, the Rt was above 3.0. After the application of the quarantine, the Rt decreased to below 1.0 on 6 February 2020 and to less than 0.3 on 1 March 2020 [29]. Data from 190 countries worldwide that implemented quarantine measures (from 23 January 2020 to 13 April 2020) showed that these measures were associated with a reduction in Rt when compared with countries that did not adopt them (change in Rt = −11.40%, 95% CI: −13.66% to −9.07%) [30].
AI and ML were also applied to the use of lockdown [26]. The results from eleven European countries that implemented a lockdown between 3 February 2020 and 4 May 2020 showed a reduction in Rt to below 1 and a large effect on reducing transmission [31]. A recent study that ranked the effectiveness of COVID-19 public health interventions implemented in 79 territories worldwide showed that curfews, cancellations of small gatherings, and closures of schools, shops, and restaurants were among the most effective public health policies [32]. All these results were consistent with the outputs of the quarantine- and lockdown-based AI and ML models [23,26,27,28].
AI and ML were also used to simulate the continuous adjustment of lockdown measures according to the spatial risk of disease spread in each area (low, moderate, or high) [21]. This intervention was mainly used by Western European countries. Additionally, India implemented the same approach during lockdown phase 3 (from 4 May 2020 to 17 May 2020). After the application of this measure, the Rt decreased from 2.78 to 1.38. In brief, even though this approach slowed the progression of the COVID-19 epidemic, it was unable to halt, let alone eradicate, the epidemic [33].
Social distancing was the last strategy evaluated with AI and ML, which suggested that social distancing could be effective only in combination with the closure of schools and commercial activities and the limitation of public transportation [25]. Additionally, real-life data showed that the application of social distancing as a single intervention was not very successful, because case resurgence was likely to occur once it was removed and it did not help to reduce excess mortality [34,35].
The models used in our study are quite diverse, and a few considerations about their characteristics are worthwhile. The main models considered are the following: SIR/SEIR compartmental models (Susceptible–Infectious–Recovered / Susceptible–Exposed–Infectious–Recovered), linear regression, TOPSIS, neural networks, and agent-based simulation.
These models come from very different families of methods, ranging from differential equation models (SIR/SEIR) to statistical machine learning models (linear regression and neural networks), geometric models (TOPSIS), and, finally, simulation models (agent-based simulation). A direct comparison is therefore difficult, and the choice of one method over another may depend on several factors, such as the kind of collected data, the availability of analytical tools, and the contextual situation in which the model is actually applied. For instance, the SIR family of models, like any differential equation model in system theory, assumes that the modeled system can be abstracted to some specific aggregate behavior. In particular, standard SIR assumes homogeneous mixing of the infected (I) and susceptible (S) populations, meaning that a person's contacts are randomly distributed among all others in the population. However, in real situations, the mixing in a population is heterogeneous and contacts are usually not random; for example, people of different ages may have very different kinds of relationships.
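To make the homogeneous-mixing assumption concrete, the sketch below shows a simple age-structured SIR variant in which a contact matrix C encodes heterogeneous mixing between two groups; in a homogeneous-mixing SIR, C would collapse to a single average contact rate. The group sizes, contact matrix, per-contact transmission probability, and recovery rate are all assumed values used purely for illustration.

```python
# Sketch of an age-structured SIR model: a contact matrix C replaces the
# homogeneous-mixing assumption of the standard SIR model. All numbers are
# illustrative assumptions, not estimates for any real population.
import numpy as np
from scipy.integrate import odeint

C = np.array([[10.0, 3.0],             # average daily contacts: young-young, young-old
              [ 3.0, 5.0]])            # old-young, old-old
N = np.array([700_000.0, 300_000.0])   # group sizes (assumed)
q, gamma = 0.03, 1 / 7                 # per-contact transmission prob., recovery rate (assumed)

def sir_groups(y, t):
    S, I, R = y[:2], y[2:4], y[4:]
    force = q * C @ (I / N)            # group-specific force of infection
    dS = -force * S
    dI = force * S - gamma * I
    dR = gamma * I
    return np.concatenate([dS, dI, dR])

y0 = np.concatenate([N - [10, 10], [10.0, 10.0], [0.0, 0.0]])
t = np.linspace(0, 180, 181)
S, I, R = np.split(odeint(sir_groups, y0, t).T, 3)
print("Peak infectious (young, old):", I.max(axis=1).round(0))
```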
Machine learning models do not assume such an abstract behavior, since they try to learn specific patterns from data; in other words, they tend to learn the abstract behavior of the system from observations and use what has been learnt to make predictions. However, specific modeling assumptions are also present in this case. Standard linear regression is a model with very high bias, since it assumes a linear relationship between the observed data and the target; the bias can be reduced by extending the model to polynomial regression through the introduction of additional non-linear (quadratic, cubic, etc.) terms. It is well known that this bias reduction increases the variance of the model, leading to the problem of overfitting (the inability of the model to generalize to unobserved data while being very accurate on observed data). Regularization techniques (L1/lasso or L2/ridge regularization) can be adopted to reduce overfitting [36]. Neural networks are more general, since non-linearity can be captured in the activation functions of the artificial neurons (usually sigmoid functions such as the logistic or hyperbolic tangent, as well as the Rectified Linear Unit widely adopted in deep neural network models), and overfitting can be mitigated by both suitable architectural choices and regularization. However, the choice of the right set of hyper-parameters of the network (number of neurons, number of hidden layers, activation functions) and of the learning algorithm (learning rate, momentum, parameter initialization) may have a great impact on the final model's performance and must be made through intensive cross-validation procedures.
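As a minimal sketch of the bias-variance trade-off described above, the example below fits a polynomial regression to a synthetic case-count series with an L2 (ridge) penalty and selects the regularization strength by cross-validation. The data, polynomial degree, and candidate alpha values are assumptions made purely for illustration.

```python
# Sketch: polynomial regression of a case-count curve with L2 (ridge)
# regularization, with the regularization strength chosen by cross-validation.
# The data below are synthetic and purely illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
days = np.arange(60).reshape(-1, 1)                                   # day index
cases = 50 * np.exp(0.08 * days.ravel()) * rng.lognormal(0, 0.1, 60)  # noisy growth

model = make_pipeline(
    PolynomialFeatures(degree=4),   # non-linear (quartic) features reduce bias
    StandardScaler(),
    Ridge(),                        # L2 penalty controls the resulting variance
)
search = GridSearchCV(
    model,
    param_grid={"ridge__alpha": [0.01, 0.1, 1.0, 10.0, 100.0]},
    cv=5,                           # 5-fold cross-validation picks alpha
)
search.fit(days, np.log(cases))     # fit on log-cases to tame the exponential trend
print("Best alpha:", search.best_params_["ridge__alpha"])
print("Predicted log-cases on day 70:", search.predict([[70]])[0])
```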
Geometric models such as TOPSIS more directly address a decision-making process and are quite interesting in a setting like the one discussed in the present paper, i.e., the evaluation of specific countermeasures to contain the spread of COVID-19. In particular, TOPSIS belongs to the class of Multiple Attribute Decision Making (MADM) approaches, in which a course of action is chosen in the presence of multiple, usually conflicting, attributes. An interesting observation is that similar approaches have also been investigated in the Machine Learning community with the use of Probabilistic Graphical Models, such as Decision Networks or Influence Diagrams [37], but with the possibility of learning both the structural relationships among the attributes and their quantification in terms of uncertainty (probability) and utility.
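A minimal TOPSIS sketch is shown below: a decision matrix of hypothetical interventions scored against hypothetical criteria is normalized, weighted, and ranked by relative closeness to the ideal solution. The alternatives, criteria, scores, weights, and benefit/cost directions are all invented for illustration and do not come from the reviewed studies.

```python
# Minimal TOPSIS sketch: rank hypothetical interventions against hypothetical
# criteria. Matrix values, weights, and criterion directions are assumptions.
import numpy as np

alternatives = ["quarantine", "lockdown", "school closure", "social distancing"]
# Columns: Rt reduction (benefit), economic cost (cost), compliance burden (cost)
X = np.array([[0.9, 7.0, 8.0],
              [0.8, 9.0, 7.0],
              [0.4, 5.0, 4.0],
              [0.3, 2.0, 3.0]], dtype=float)
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, False])          # which criteria should be maximized

V = weights * X / np.linalg.norm(X, axis=0)       # normalized, weighted matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)         # distance to the ideal point
d_neg = np.linalg.norm(V - anti, axis=1)          # distance to the anti-ideal point
closeness = d_neg / (d_pos + d_neg)               # relative closeness in [0, 1]

for name, c in sorted(zip(alternatives, closeness), key=lambda p: -p[1]):
    print(f"{name:18s} {c:.3f}")
```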
Finally, agent-based simulation is a completely different alternative, in which no aggregate model of the system is assumed; instead, the results are obtained by observing the interactions among the involved agents. The crucial point is to determine the right set of simulation parameters, such as the number of agents, the rate of contact, the probability of infection upon contact, etc.
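The toy agent-based sketch below illustrates the kind of simulation parameters just mentioned: a fixed number of agents meet a few random partners each day, each contact transmits with a fixed probability, and infectious agents recover after a fixed number of days. All values are assumptions chosen for illustration.

```python
# Sketch of a toy agent-based epidemic simulation. The number of agents,
# contact rate, infection probability, and infectious period are the kind of
# simulation parameters discussed above; all values here are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_agents, contacts_per_day = 5_000, 8
p_infect, infectious_days = 0.04, 7
days = 120

# 0 = susceptible, 1 = infectious, 2 = recovered
state = np.zeros(n_agents, dtype=int)
days_left = np.zeros(n_agents, dtype=int)
seed = rng.choice(n_agents, size=10, replace=False)
state[seed], days_left[seed] = 1, infectious_days

history = []
for day in range(days):
    infectious = np.flatnonzero(state == 1)
    # each infectious agent meets a few random other agents
    for agent in infectious:
        partners = rng.integers(0, n_agents, size=contacts_per_day)
        hit = partners[(state[partners] == 0) & (rng.random(contacts_per_day) < p_infect)]
        state[hit], days_left[hit] = 1, infectious_days
    # progress the disease clock; agents whose clock reaches zero recover
    days_left[infectious] -= 1
    state[infectious[days_left[infectious] == 0]] = 2
    history.append((state == 1).sum())

print("Peak infectious agents:", max(history), "on day", int(np.argmax(history)))
```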
In summary, all the approaches investigated in the different studies have their motivations, as well as their strengths and limitations, and none of them can, in general, be considered better or worse than the others. However, the finding that quarantine is an effective and efficient strategy for containing COVID-19 is an important result, which is strengthened by the convergence of such different models.
 

This entry is adapted from the peer-reviewed paper 10.3390/ijerph18094499
