Artificial Intelligence Challenges

Artificial intelligence (AI) has been widely adopted in recent years. Analyses of its challenges have reported numerous problems, including security, safety, fairness, energy consumption, and ethics, to mention a few. The widespread use of AI continually gives rise to new challenges.

  • Explainable AI
  • Explainable Machine Learning
  • Trustworthy AI

1. Problem Identification and Formulation

Problem identification and formulation are essential processes that should be built into AI-based agents, and they play a central role in organizing human-level intelligence (HLI)-based agents designed to operate in a self-organized manner. Implementing these processes is very difficult in practice for several reasons. As was previously mentioned, human knowledge about the environment spans different domains, including known, semi-known, and unknown parts. Some problems cannot be formulated in a well-defined format even by humans, so it is unclear how HLI-based agents can be organized to face them. For example, it was proved in [21] that some types of problems admit no algorithm for solving them. In addition, some problems, such as the goals of human creation, are not clear to humans themselves, and HLI-based agents will therefore be unable to solve them because they follow the human thinking process. The problem becomes even more complicated when the environment of an HLI-based agent is dynamic and unknown. A problem detection and formulation system will therefore be required at the heart of an HLI-based agent. The role of problem identification is discussed in the following paragraph.
According to [20], one method for solving a problem is to map the problem space into a search space. In this method, the first steps of AI-based problem-solving are detecting the problem, which can be single-state, multiple-state, or contingency, and then formulating it in a well-defined manner. In other words, gathering knowledge and insights about the nature of the problem and converting them into a systematic solving procedure has high priority. In existing intelligent agents, formulating problems with respect to security, safety, performance, etc., is considered the responsibility of the designers of AI-based agents. Knowledge-based agents and inference engines, such as the Cyc project [22], can be utilized to address this problem. A minimal sketch of such a formulation is given below.
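To make the idea of a well-defined formulation concrete, the following sketch frames a toy route-finding task as a single-state search problem (initial state, actions, goal test) and solves it with breadth-first search. It is a minimal illustration of the problem-space-to-search-space mapping described above; the class names and the toy graph are illustrative assumptions, not taken from the source.

```python
from collections import deque

class SearchProblem:
    """A well-defined (single-state) problem formulation:
    initial state, goal test, and a transition model."""
    def __init__(self, initial, goal, graph):
        self.initial = initial   # starting state
        self.goal = goal         # desired state
        self.graph = graph       # state -> reachable states

    def actions(self, state):
        return self.graph.get(state, [])

    def is_goal(self, state):
        return state == self.goal

def breadth_first_search(problem):
    """Map the formulated problem onto a search space and explore it."""
    frontier = deque([[problem.initial]])
    visited = {problem.initial}
    while frontier:
        path = frontier.popleft()
        if problem.is_goal(path[-1]):
            return path
        for nxt in problem.actions(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # the problem may have no solution

# Toy route-finding instance: once formulated, solving is mechanical.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(breadth_first_search(SearchProblem("A", "D", graph)))  # ['A', 'B', 'D']
```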

2. Energy Consumption

Some learning algorithms, including deep learning, rely on iterative learning processes [23], which result in high energy consumption. Deep learning is nowadays used to design HLI-based agents because of its high accuracy and its similarity to the human brain's decision-making process. Deep learning models require high computational power, typically supplied by GPUs. In [11], it was shown that these models are costly to train and develop from both financial and energy-consumption perspectives; a rough estimate of this cost is sketched after the list below. During operation, an HLI-based agent may follow a predefined plan for learning multiple models concurrently to support self-awareness, so high computational power is required to support more cognitive abilities. This means that sufficient energy must be supplied to execute the HLI-based agent. To mitigate this problem, four solutions have been proposed in the literature:
  • Investing in new paradigms with low energy consumption for HLI, such as quantum computing [24].
  • Developing modern mathematical frameworks that yield learning models with fewer computations, which leads to lower energy consumption [25].
  • Sharing models to avoid redundant energy consumption; a researcher can share a trained model with other researchers around the world.
  • If there is no way to decrease the computational load of the learning processes in an AI-based system, energy harvesting techniques can be used to recover wasted energy [26,27].
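As a back-of-the-envelope illustration of the training-cost concern mentioned before the list, the sketch below estimates the energy of a training run from device power draw, device count, duration, and a data-center overhead factor (PUE). All figures and the function name are illustrative assumptions, not measurements from the cited works.

```python
def training_energy_kwh(gpu_power_watts, num_gpus, hours, pue=1.5):
    """Rough training-energy estimate: device draw x device count x time,
    scaled by the data center's power usage effectiveness (PUE)."""
    return gpu_power_watts * num_gpus * hours * pue / 1000.0

# Illustrative figures only: a multi-GPU training run lasting two weeks.
kwh = training_energy_kwh(gpu_power_watts=300, num_gpus=8, hours=14 * 24)
print(f"Estimated energy: {kwh:.0f} kWh")  # ~1210 kWh under these assumptions
# Sharing the resulting model (the third solution above) amortizes this cost
# instead of every group repeating the same training run.
```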

3. Data Issues

Some AI-based agents rely on data-driven methods to construct learning models. In these algorithms, data issues cause various problems, some of which are explained below [28,29,30,31,32]:
  • Cost is one of the main data issues; the major sources of cost are gathering, preparing, and cleaning the data [28].
  • The size of the data collected in a wide range of systems, such as the IoT (Internet of Things), is another data-related challenge. This huge amount of data has led to a new concept called big data. Analyzing big data in an online fashion via machine learning algorithms is a very challenging task [28,29,30].
  • Data incompleteness (incomplete data) is another challenging problem for machine learning algorithms; it leads to inappropriate learning and to uncertainty during data analysis. This issue should be handled during the pre-processing phase, and various approaches can mitigate it: for example, filling in missing values with the most frequently observed values, or developing learning algorithms that predict the missing values [32]. A minimal imputation sketch is given after this list.
  • Data heterogeneity, data insufficiency, imbalanced data, untrusted data, biased data, and data uncertainty are other data issues that may cause various difficulties in data-driven machine learning algorithms [28,29].
  • Bias is a human trait that may affect data gathering and labeling, and it is sometimes embedded in historical, cultural, or geographical data. Consequently, bias may lead to biased models that produce inappropriate analyses. Even when the existence of bias is known, avoiding biased models remains a challenging task. For more information, please refer to [28].
Beyond the problems mentioned above, other issues such as imbalanced data, synthetic (fake) data, and noisy data also belong to this category. Data issues become even more critical when designing HLI-based agents: in many of the domains where HLI-based agents could be utilized, datasets that correctly reflect human behavior may simply not exist.
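As a concrete illustration of the imputation approach mentioned in the data-incompleteness item above, the following minimal sketch fills missing entries in one column with the most frequently observed value. The records and the column name are hypothetical.

```python
from collections import Counter

def impute_with_mode(rows, column):
    """Fill missing entries (None) in one column with the most
    frequently observed value, as a simple pre-processing step."""
    observed = [r[column] for r in rows if r[column] is not None]
    if not observed:
        return rows  # nothing to learn the fill value from
    mode = Counter(observed).most_common(1)[0][0]
    for r in rows:
        if r[column] is None:
            r[column] = mode
    return rows

# Illustrative records with one missing 'blood_type' entry.
records = [
    {"id": 1, "blood_type": "A"},
    {"id": 2, "blood_type": None},
    {"id": 3, "blood_type": "A"},
    {"id": 4, "blood_type": "O"},
]
impute_with_mode(records, "blood_type")
print(records[1])  # {'id': 2, 'blood_type': 'A'}
```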

4. Robustness and Reliability

The robustness of an AI-based model refers to the stability of its performance under abnormal changes in the input data. Such changes may be caused by a malicious attacker, environmental noise, or a crash in another component of the AI-based system [8,31,32]. For example, in telesurgery, an HLI-based agent may misidentify a patient's kidney as a bean because of an unknown crash in the machine vision component. Among several models with similar performance, the most robust one should be given priority for deployment. Traditional mechanisms such as replication and multi-version programming might not work in intelligent systems, so this field is still in an early stage. Some works, such as [33], discuss the difference between the accuracy of a learning model and its robustness; a minimal sketch of this distinction is given below. The theory and concepts of robustness and reliability are still in their infancy, and new results are likely to appear. This problem is particularly challenging for HLI-based agents: weak robustness manifests itself as unreliable machine learning models, and an HLI-based agent with this drawback is error-prone in practice.
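As a minimal illustration of the accuracy-versus-robustness distinction, the sketch below compares a placeholder model's accuracy on clean inputs with its average accuracy under small random input perturbations; a large gap signals a brittle model. The model, data, and noise scale are illustrative assumptions only.

```python
import random

def accuracy(model, inputs, labels):
    return sum(model(x) == y for x, y in zip(inputs, labels)) / len(labels)

def robustness_check(model, inputs, labels, noise_scale=0.1, trials=20):
    """Compare clean accuracy with average accuracy under random input
    perturbations: a large gap indicates a non-robust model."""
    clean = accuracy(model, inputs, labels)
    noisy_scores = []
    for _ in range(trials):
        noisy = [[v + random.gauss(0, noise_scale) for v in x] for x in inputs]
        noisy_scores.append(accuracy(model, noisy, labels))
    return clean, sum(noisy_scores) / trials

# Placeholder model: a threshold on the first feature. It is perfectly
# accurate on the clean points but flips near the decision boundary.
model = lambda x: int(x[0] > 0.5)
inputs = [[0.1], [0.49], [0.51], [0.9]]
labels = [0, 0, 1, 1]
print(robustness_check(model, inputs, labels))  # e.g., (1.0, ~0.75)
```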

This entry is adapted from the peer-reviewed paper 10.3390/app12084054
