Trustworthy Artificial Intelligence: Comparison
Please note this is a comparison between Version 1 by Sabina Szymoniak and Version 2 by Rita Xu.

Artificial Intelligence is an indispensable element of the modern world, constantly evolving and contributing to the emergence of new technologies. Artificial Intelligence techniques must inspire users’ trust because they significantly impact virtually every industry and person. For this reason, systems using Artificial Intelligence are subject to many requirements to verify their trustworthiness in various aspects.

  • Trustworthy Artificial Intelligence
  • safety and robustness
  • physical and environmental security

1. Introduction

In everyday life, we encounter various facilities provided by different network systems and architectures. Most such solutions belong to Cyber–Physical Systems (CPSs) or the Internet of Things (IoT). These systems use sensors and various Artificial Intelligence (AI) algorithms to improve our lives [1]. Moreover, they work in different areas of our lives [2][3][4]. For example, we can find such systems in medicine [5], sports [6], or security [7]. Each solution is tailored to a specific area and problem. In medicine, we find systems that use sensors to monitor patients’ health. The implemented AI methods, combined with sensors, can help control the vital functions of chronically ill people and signal the need to deliver medications to the patient when necessary. People with eyesight problems can also be helped by AI [8][9]. In sports, we can find AI methods that enable the analysis of an athlete’s movement and performance. Additionally, these systems can analyze the athlete’s vital functions and react in life-threatening situations [10].
On the other hand, the security aspect affects many different planes of human life. We can consider physical and environmental security here. Physical security primarily refers to protecting people and property against various factors (for example, fire, flood, theft, vandalism, and terrorism). Environmental security, in turn, refers to the protection of specific infrastructure. Both issues are reflected in various standards and frameworks such as NIST, ISO, COBIT, and the GDPR [11]. In particular, by physical security, we mean protecting human life against factors that may contribute to human death. For example, physical security may include preventing car accidents: based on other drivers’ behavior, we can predict a possible collision. Physical security also relates to the safety of pedestrians at crossings and on road shoulders.
In contrast, environmental security concerns data security and the identity of computer network users. Especially characteristic are IoT systems, which consist of many communicating wireless devices (mobile devices and sensors). IoT systems can also use Wireless Sensor Network (WSN) technology, whose task is to monitor and collect data from a defined area. The characteristics of the data sent by users of devices located in IoT systems can vary. Regardless of their characteristics, the data collected by these devices should be appropriately secured against unauthorized access and attacks by rogue users [12][13].
As mentioned, CPSs and IoT systems use Artificial Intelligence algorithms. In short, these algorithms tell the system how to learn to operate on its own. AI algorithms use techniques like Deep Learning, Machine Learning, Cloud Computing, or Spiking Neural Networks (SNNs), depending on the problem to be solved. As users of AI algorithms, we can set some requirements for them. From the physical and environmental security point of view, these requirements concentrate on dimensions like safety, robustness, privacy, and data governance. Therefore, we can set the following requirements: the ability of system users to make informed decisions, the security of users, their privacy and data, system resistance to attacks, traceability, the transparency of system components, and accountability [14][15][16].
Artificial Intelligence algorithms must meet the requirements of being trustworthy. The concept of Trustworthy Artificial Intelligence (TAI) emerged in response to the rapid pace of technological change, and TAI systems have become a priority for the European Union. AI systems must be human-centric and serve humanity and the good of society at large. These systems offer enormous opportunities, but also pose certain risks that can impact society. As a result, trust in technology has become a key goal for developers of AI-based systems [14]. A Trustworthy Artificial Intelligence system must have three characteristics throughout its life cycle. The first is legal compliance, which means that the system must comply with applicable legal provisions at the international or national level. The second is ethics, which requires the system to comply with the ethical principles and values that TAI must ensure. The most important ethical principles include respect for human autonomy, fairness, harm prevention, and explicability. It is also necessary to consider values specific to particular groups of people (e.g., children and people with disabilities). The third is robustness, which relates to both a technical and a social point of view. All these features should work together for a system to be Trustworthy AI [14][15][16]. TAI-equipped systems must be assessed against the requirements and dimensions mentioned earlier that describe their attributes or characteristics.
Moreover, the safety and robustness dimensions of TAI are strictly connected to TAI’s ethical and explainable aspects. They are essential for building trust in AI systems. TAI requires ethical data processing; internal procedures and policies that ensure compliance with data protection laws can facilitate ethical data processing and thus complement existing legal processes. In turn, explainability is crucial to building and maintaining user trust in AI systems. This principle means that processes must be transparent, the capabilities and goals of AI systems must be openly communicated, and decisions must be as explainable as possible to those directly and indirectly influenced by them. Without this information, a decision cannot be adequately challenged. However, it is not always possible to explain why a particular model produced a particular result or decision (and what combination of inputs contributed to it).

2. Trustworthy Methods in Artificial Intelligence

2.1. AI Methods and Their Applications

Artificial Intelligence is a tool that enables machines to learn from experience, adapt to new inputs, and perform human-like tasks. In recent years, AI has reached a level that supports many practical tasks of collecting and analyzing useful information. When characterizing AI, one should start with Deep Learning (DL) [17]. DL is a Machine Learning (ML) technique that teaches computers to do what comes naturally to people: to learn by example. Countless developers use the latest Deep Learning innovations to take their businesses to a new level. There are many areas of AI technology, such as autonomous vehicles, computer vision, and automatic text generation, where the scope and use of Deep Learning are growing. A typical example of AI is neural networks with the ability to recognize objects, such as facial recognition [18]. These networks make it possible to recognize individual faces using biometric mapping. Such use has led to breakthrough advances in surveillance technologies, but has also been met with much criticism for breaching privacy. Offering legal agencies surveillance technology to monitor entire cities through a network of CCTV cameras and accurately assign each citizen a real-time social credit score is not something the public will accept. The situation is different for the use of AI in control and automation. AI can perform the same type of work repeatedly without fatigue. It is an ideal tool in the form of Fuzzy Systems [19], used where relying on crisp real values does not allow for correct control of machines. Automation increases productivity and results in lower overall costs and, in some cases, a safer working environment. One should also mention the appropriate organization of tasks.
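To illustrate how a Fuzzy System maps continuous measurements to control outputs, the following minimal sketch implements a two-rule fuzzy fan controller. All membership functions, rule values, and names here are hypothetical illustrations, not taken from the cited work:

```python
# Minimal fuzzy-control sketch (illustrative; the membership
# functions and rule outputs are assumed values, not from the text).

def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fan_speed(temp_c):
    """Weighted-average defuzzification over two fuzzy rules."""
    warm = tri(temp_c, 15, 25, 35)   # degree to which it is "warm"
    hot = tri(temp_c, 25, 40, 55)    # degree to which it is "hot"
    # Rule outputs: warm -> 40% fan speed, hot -> 90% fan speed.
    num = warm * 40 + hot * 90
    den = warm + hot
    return num / den if den else 0.0

print(round(fan_speed(30), 1))  # prints 60.0
```

Instead of switching abruptly at a crisp threshold, the controller blends both rules in proportion to how strongly each fuzzy set matches the input, which is exactly why fuzzy control suits machines that crisp real values cannot control smoothly.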
First of all, genetic and evolutionary algorithms complement AI; often combined with neural networks, they are well suited to optimization problems. Neural networks also work well for processing various data. With each passing day, the data everyone produces grow exponentially. Rather than entering these data manually, networks allow you to collect and analyze them based on past experience [20]. Data acquisition is the transfer of knowledge from various sources to a data storage medium, where it is often accessed, used, and analyzed by various organizations. Data collection is often preceded by edge processing. AI uses neural networks to analyze large amounts of such data and helps draw logical conclusions. An example of the intensive use of AI algorithms in the form of neural networks (recurrent and others) for speech and text analysis can be found in Chatbot software [21]. This software provides communication when solving customer problems by inputting audio or text data. Earlier bots responded only to specific commands and did not know what the user meant if the user said something unexpected; a bot had only the capabilities implemented for it. The real change came when Chatbots were enhanced with AI algorithms that make it possible to understand the language, not just the commands themselves. Another type of AI is hybrid solutions that form the basis of Quantum Computing. AI helps solve complex quantum physics problems with supercomputer accuracy using Quantum Neural Networks [22]. This could lead to groundbreaking changes in the near future. It is an interdisciplinary field that focuses on building quantum algorithms to improve computational tasks within AI, including sub-domains such as Machine Learning. The entire concept of quantum-assisted AI algorithms remains in the domain of conceptual research. Cloud Computing is another element of AI [23].
With so much data being transferred each day, storing the data in physical form would be a serious problem. ML functions operating in the Cloud Computing environment increase the efficiency of data organization, and in combination with Edge Computing, they significantly reduce the storage space needed for the crucial data.
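The genetic and evolutionary algorithms mentioned above for optimization follow a selection–crossover–mutation loop. A minimal sketch of that loop, with an illustrative bit-string target and assumed parameter values, might look like this:

```python
# Minimal genetic-algorithm sketch (illustrative assumptions:
# the all-ones target, population size, and mutation rate).
import random

random.seed(0)
TARGET_LEN = 20

def fitness(bits):
    return sum(bits)  # count of ones; maximum is TARGET_LEN

def evolve(pop_size=30, generations=100, mut_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == TARGET_LEN:
            break                              # optimum reached
        parents = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, TARGET_LEN)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mut_rate else g
                     for g in child]               # bit-flip mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

The same loop structure applies when the "genome" encodes, for example, neural network weights or hyperparameters, which is how such algorithms are combined with neural networks in practice.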

2.2. Spiking Neural Networks

A Spiking Neural Network (SNN) is a more biologically plausible version of an Artificial Neural Network (ANN). Neurons communicate through synapses using electrical pulses (spikes, action potentials) instead of scalar values [24], as almost all biological neurons do. Spikes are binary events that encode information in their timing and count (the spike train). Hence, continuous time flow is necessary to process data with SNNs. Unlike ANNs, SNNs have no inherent time-step concept. However, the digital simulation of SNNs with general-purpose accelerators requires a time step (as a time quantum for simulation), but this is a term coming from the numerical simulation domain. One of the main goals of SNNs is to tremendously reduce the amount of energy required for training and inference. The human brain requires about 20 W of power to perform extremely complex computations [25]. Such low power consumption would not be possible using CPU or General-Purpose GPU (GPGPU) calculations. Hence, specialized neuromorphic devices are the native platforms on which SNNs execute. There have been many approaches to creating neuromorphic hardware, for example, SpiNNaker, BrainScaleS, IBM TrueNorth, Intel Loihi, and Intel Loihi 2 [26][27]. Nevertheless, as mentioned earlier, SNNs can be simulated using general-purpose accelerators, which are not as energy efficient as specialized neuromorphic hardware, yet allow for inexpensive and flexible research on SNNs. There are a few frameworks and tools for performing such simulations: Intel LAVA [28], Nengo [29], Sandia Fugu [30], snnTorch [31], SpykeTorch [32], NEURON [33], NEST [34], BRIAN [35], CARLsim [36], and others [37].
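The time-step-based simulation described above can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, a basic neuron model used in many SNN simulators. The parameter values below are illustrative assumptions, not taken from any of the cited frameworks:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch: discrete-time
# simulation with a time step dt, producing a binary spike train
# (all parameter values are illustrative).

def lif_spikes(input_current, dt=1.0, tau=10.0, v_th=1.0, v_reset=0.0):
    """Simulate one LIF neuron; return the binary spike train."""
    v = v_reset
    spikes = []
    for i_t in input_current:
        # Leaky integration: v decays toward rest and integrates input.
        v += dt * (-v / tau + i_t)
        if v >= v_th:          # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset        # reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A constant input drives the neuron to fire periodically.
train = lif_spikes([0.3] * 10)
print(train)  # prints [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Note how the information is carried by when and how often the neuron spikes, not by a scalar activation value, and how the time step `dt` only appears because the continuous membrane dynamics are simulated numerically.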