The coronavirus disease (COVID-19) pandemic and the pressure it puts on healthcare systems come at a time of technological optimism and promise. The digitalization of health data, together with the advent of artificial intelligence (AI) solutions, has the potential to fundamentally change current healthcare practice and to provide precise, predictive medical assessment for individuals in the future [1][2].
The new frontiers of research opened by AI algorithms based on machine learning (ML) and deep learning (DL) make it technologically possible to use aggregated healthcare data to produce models that enable a truly precision approach to medicine [3][4][5][6][7][8]. Such innovation may facilitate diagnosis and improve its accuracy, tailor treatments, and target resources with maximum effectiveness in a timely and dynamic manner [7][8][9][10][11].
However, such innovative technology must be robust enough to avoid biased learning, which can occur when training datasets are too skewed, too small, and/or poorly annotated. Addressing this issue demands a global, cross-disciplinary effort toward international agreements on standardization, anonymization, validation, and data sharing. It also calls for continuous monitoring, starting from appropriate legal and regulatory policies shared across countries and health systems.
In this scenario, the issues discussed in this paper acquire particular importance and urgency. They display the complexity of AI-based healthcare and highlight the need to develop policies and legal strategies that carefully consider the multiple dimensions of the integration process, as well as the need for multidisciplinary efforts to coordinate, validate, and monitor the development and integration of AI tools in healthcare [12]. Challenges such as organizational and technical barriers to health data use, the debate over data ownership and privacy protection, the regulation of data sharing and the cybersecurity surrounding it, and accountability issues will have to be addressed as soon as possible.
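The risk of biased learning from skewed data can be made concrete with a minimal sketch. The toy cohort, biomarker, and prevalence figures below are illustrative assumptions only, not material from the studies cited here: a classifier trained where positive cases are rare learns to almost never predict disease, and its sensitivity collapses on a balanced population.

```python
# Minimal sketch of biased learning from a skewed training set.
# All cohorts, rates, and parameters below are illustrative assumptions,
# not data from any study cited in this paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cohort(n, disease_rate):
    """Simulate a toy cohort: one biomarker, higher on average when diseased."""
    y = (rng.random(n) < disease_rate).astype(int)
    x = rng.normal(loc=np.where(y == 1, 1.0, 0.0), scale=1.0)
    return x.reshape(-1, 1), y

# Training data is heavily skewed: only ~2% positive cases.
x_train, y_train = make_cohort(5000, disease_rate=0.02)
model = LogisticRegression().fit(x_train, y_train)

# On a balanced cohort, the model almost never flags disease:
x_test, y_test = make_cohort(5000, disease_rate=0.5)
pred = model.predict(x_test)
sensitivity = (pred[y_test == 1] == 1).mean()
print(f"Sensitivity on a balanced cohort: {sensitivity:.2f}")  # near zero
```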
Both ML and DL technologies require the availability of large, comprehensive, and verifiable datasets; integration into clinical workflows; and compliance with regulatory frameworks [1][13]. With improved global connectivity via the internet and cloud-based technologies, data access and distribution have become easier, with both beneficial and malicious outcomes [14]. Adequately regulated integration of health and disease data will provide unprecedented opportunities for managing medical information at the interface of patients, physicians, hospitals, policymakers, and regulatory institutions. However, despite the pervasive enthusiasm about the potential of AI-based healthcare, only a few healthcare organizations have the data infrastructure required to collect the sensitive patient data needed to train AI algorithms [15]. Consequently, published AI success stories fit the local populations and/or local practice patterns centered on these organizations and should not be expected to be directly applicable to other cohorts [16] (i.e., an AI algorithm trained on one specific population cannot be expected to retain the same accuracy when applied elsewhere) [17].
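This population-transfer caveat can likewise be sketched with purely hypothetical synthetic cohorts (again, none of this comes from the cited works): a model calibrated to one population's biomarker baseline loses accuracy when that baseline shifts in another population.

```python
# Minimal sketch of population shift: a model trained on cohort A is applied
# to cohort B, whose biomarker baseline differs. Purely synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def cohort(n, healthy_mean, diseased_mean):
    """Toy cohort: one biomarker whose baseline depends on the population."""
    y = rng.integers(0, 2, n)
    x = rng.normal(loc=np.where(y == 1, diseased_mean, healthy_mean), scale=1.0)
    return x.reshape(-1, 1), y

# Train on population A (healthy baseline 0.0, diseased 2.0).
x_a, y_a = cohort(5000, healthy_mean=0.0, diseased_mean=2.0)
model = LogisticRegression().fit(x_a, y_a)

# Population B carries the same disease signal but a shifted baseline
# (1.5 vs 3.5), so the decision threshold learned on A no longer fits.
x_b, y_b = cohort(5000, healthy_mean=1.5, diseased_mean=3.5)
print(f"Accuracy on population A: {model.score(x_a, y_a):.2f}")  # ~0.84
print(f"Accuracy on population B: {model.score(x_b, y_b):.2f}")  # ~0.65
```

In such a situation, recalibration or retraining on local data would be required before the model could be trusted in the second population.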
The scope of the new legislation includes a wider range of products; extends liability in relation to defective products; strengthens the requirements for clinical data and device traceability; increases clinical investigation and risk management requirements to ensure patient safety; reinforces the surveillance and management of medical devices as well as the lifecycle of in vitro diagnostic medical devices; and, finally, improves transparency relating to the use of personal data.
According to the new legislation, software, whether a component of a wider medical device or a standalone product, qualifies as a medical device without any further specification.
In the U.S., the 21st Century Cures Act of 2016 [18] defined a medical device as a tool “intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals, or intended to affect the structure or any function of the body of man or other animals” [19].
In Federal Law No. 323-FZ of 21 November 2011, “On the Fundamentals of Healthcare in the Russian Federation”, a “medical device” was defined as “any tools, equipment, devices, materials and other products used for medical purposes, necessary accessories and software” (Article 38) [20]. Therefore, any AI solution, whether used independently or in combination with other medical devices, must be registered as a medical device, passing clinical testing and acceptance according to Article 36.1 of said Federal Law. The Russian supervisory authority, the Federal Service for Supervision of Healthcare (Roszdravnadzor), requires technical and clinical tests, as well as examination of the safety, quality, and effectiveness of all medical devices, prior to their use and sale.