Artificial Intelligence Challenges: History

Artificial intelligence (AI) has been widely adopted in recent years. Analyses of AI have reported numerous challenges, including security, safety, fairness, energy consumption, and ethics, to mention a few. As the use of AI spreads, new challenges continue to arise.

  • Explainable AI
  • Explainable Machine Learning
  • Trustworthy AI

1. Problem Identification and Formulation

Problem identification and formulation are essential processes that should be implemented in AI-based agents. They play an essential role in organizing human-level intelligence (HLI)-based agents designed to operate in a self-organized manner. Implementing these processes is very difficult in practice for several reasons. It is well known that human knowledge about the environment spans different domains, including known, semi-known, and unknown parts. There is a set of problems that humans cannot formulate in a well-defined format, and therefore it is unclear how HLI-based agents should be organized to face them. For example, in [1] it was proved that some types of problems admit no solving algorithm at all. In addition, some problems, such as the goals of human creation, are not clear even to humans, and therefore HLI-based agents, which follow the human thinking process, will not be able to solve them. This problem becomes more complicated when the environment of an HLI-based agent is dynamic and unknown. A problem detection and formulation system at the heart of an HLI-based agent will therefore be required. The role of problem identification is discussed in the following paragraph.
According to [2], one method for solving a problem is to map the problem space into a search space. In this method, the first steps of AI-based problem-solving are detecting the problem, which can be single-state, multiple-state, or contingency, and then formulating it in a well-defined manner. In other words, gathering knowledge and insights about the nature of the problem and converting them into a systematic solving procedure has high priority. In existing intelligent agents, formulating problems with respect to security, safety, performance, etc., is considered the responsibility of the designers of the AI-based agent. To address this, knowledge-based agents and inference engines such as the Cyc project [3] can provide partial solutions.
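The mapping from a well-defined problem formulation to a search space, as described in [2], can be sketched as follows. This is a minimal illustration, assuming a hypothetical toy state graph: a single-state problem is formulated as an initial state, a successor (transition) model, and a goal test, and is then solved by exploring the resulting search space with breadth-first search.

```python
from collections import deque

# Hypothetical state space: each state maps to its successor states.
# Together with an initial state and a goal test, this is a well-defined
# single-state problem formulation in the sense of [2].
GRAPH = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["E"],
    "E": [],
}

def breadth_first_search(initial, goal):
    """Explore the search space induced by the problem formulation."""
    frontier = deque([[initial]])   # paths waiting to be expanded
    explored = set()                # states already expanded
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:           # goal test
            return path             # sequence of states from initial to goal
        if state in explored:
            continue
        explored.add(state)
        for successor in GRAPH[state]:
            frontier.append(path + [successor])
    return None                     # the goal is unreachable

print(breadth_first_search("A", "E"))  # → ['A', 'C', 'E']
```

The point of the sketch is that once a problem is formulated in this explicit form, a generic search procedure can solve it; the hard part discussed above is producing such a formulation in the first place.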

2. Energy Consumption

Some learning algorithms, including deep learning, rely on iterative learning processes [4], which results in high energy consumption. Nowadays, deep learning is used to design HLI-based agents because of its high accuracy and its similarity to the human brain's decision-making process. Deep learning models require substantial GPU computing power, and in [5] it was shown that these models are costly to train and develop from both financial and energy-consumption perspectives. During operation, an HLI-based agent may use a predefined plan to learn multiple models concurrently in order to support self-awareness, so high computational power is required to support additional cognitive abilities. This means that enough energy must be supplied to run the HLI-based agent. To mitigate this problem, four solutions have been proposed in the literature:
  • Investing in new paradigms with low energy consumption for HLI, such as quantum computing [6].
  • Developing new mathematical frameworks that yield learning models with fewer computations, and hence lower energy consumption [7].
  • Sharing trained models to avoid redundant energy consumption: a researcher can share a model with other researchers around the world instead of each retraining it.
  • If there is no way to reduce the computational load of the learning processes in an AI-based system, energy harvesting techniques can be used to recover wasted energy [8][9].

3. Data Issues

Some AI-based agents rely on data-driven methods to construct learning models. In such algorithms, data issues cause various problems, some of which are explained below [10][11][12][13][14]:
  • Cost is one of the main issues of data. Major sources of cost are gathering, preparing, and cleaning the data [10].
  • The size of collected data in a wide range of systems such as IoT (Internet of Things) is another data-related challenge. This huge amount of data leads to a new concept, called big data. Analyzing big data in an online fashion via machine learning algorithms is a very challenging task [10][11][12].
  • Data incompleteness (or incomplete data) is another challenging problem in machine learning, leading to poorly trained models and uncertainty during data analysis. It should be handled during the pre-processing phase. Various approaches can mitigate this problem, such as filling in missing values with the most frequently observed value, or training learning algorithms to predict the missing values [14].
  • Data heterogeneity, data insufficiency, imbalanced data, untrusted data, biased data, and data uncertainty are other data issues that may cause various difficulties in data-driven machine learning algorithms [10][11].
  • Bias is a human trait that may affect data gathering and labeling, and it is sometimes present in historical, cultural, or geographical data. Consequently, bias may lead to biased models that produce inappropriate analyses. Even when its existence is known, avoiding biased models is a challenging task. For more information, please refer to [10].
Beyond the problems mentioned above, other issues such as synthetic (fake) data and noisy data also belong to this category. Data issues become even more critical when designing HLI-based agents: in many of the domains where HLI-based agents could be utilized, datasets that correctly reflect human behavior may simply not exist.
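The mode-imputation approach to incomplete data mentioned above can be sketched in a few lines. This is a minimal, hypothetical example (the column of sensor labels is invented for illustration): missing entries, represented here as `None`, are replaced with the most frequently observed value during pre-processing.

```python
from collections import Counter

def impute_with_mode(column):
    """Fill missing entries (None) with the most frequently observed value."""
    observed = [v for v in column if v is not None]
    if not observed:
        return list(column)          # nothing observed, nothing to impute
    mode = Counter(observed).most_common(1)[0][0]
    return [mode if v is None else v for v in column]

# Hypothetical categorical readings with missing entries.
readings = ["low", None, "high", "low", None, "low"]
print(impute_with_mode(readings))
# → ['low', 'low', 'high', 'low', 'low', 'low']
```

Mode imputation is only one of the mitigation strategies listed above; as noted, a learned model can instead predict the missing values, which is usually preferable when the missingness is not random.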

4. Robustness and Reliability

The robustness of an AI-based model refers to the stability of the model's performance under abnormal changes in the input data. The cause of such a change may be a malicious attacker, environmental noise, or a crash of another component of the AI-based system [13][14][15]. For example, in telesurgery, an HLI-based agent may detect a patient's kidney as a bean because of an unknown crash in the machine vision component. Among several models with similar performance, the more robust model has higher priority for deployment. Traditional mechanisms such as replication and multi-version programming might not work in intelligent systems, so this field is still in its early stages. Some works, such as [16], discuss the difference between the accuracy of a learning model and its robustness. The theory and concepts of robustness and reliability are in their infancy, and new results continue to appear. This problem is especially challenging for HLI-based agents, because a machine learning model with weak robustness is unreliable, and an HLI built on such a model is error-prone in practice.
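The distinction between accuracy and robustness discussed above can be illustrated with a small sketch. The classifier, the two class centroids, and the test points below are all hypothetical stand-ins for a learned model: the model is perfectly accurate on clean inputs, yet large input perturbations (noise, attacks, or upstream component failures) can still degrade its performance.

```python
import random

random.seed(0)  # reproducible noise for the demonstration

# Hypothetical two-class problem with well-separated centroids.
CENTROIDS = {"kidney": (0.0, 0.0), "bean": (5.0, 5.0)}

def classify(point):
    """Nearest-centroid classifier: a toy stand-in for a learned model."""
    return min(CENTROIDS,
               key=lambda c: (point[0] - CENTROIDS[c][0]) ** 2
                             + (point[1] - CENTROIDS[c][1]) ** 2)

def accuracy(points, labels, noise=0.0):
    """Fraction classified correctly after adding uniform input noise."""
    correct = 0
    for (x, y), label in zip(points, labels):
        noisy = (x + random.uniform(-noise, noise),
                 y + random.uniform(-noise, noise))
        correct += classify(noisy) == label
    return correct / len(points)

# Test points drawn near each centroid, with their true labels.
points = [(0.2, -0.1), (0.1, 0.3), (5.2, 4.9), (4.8, 5.1)]
labels = ["kidney", "kidney", "bean", "bean"]

print(accuracy(points, labels, noise=0.0))   # clean accuracy: 1.0
print(accuracy(points, labels, noise=10.0))  # large noise degrades accuracy
```

Evaluating a model on perturbed inputs in this way is a common first check; as [16] discusses, high clean accuracy does not by itself imply robustness.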

This entry is adapted from the peer-reviewed paper 10.3390/app12084054

References

  1. Linz, P. An Introduction to Formal Languages and Automata; Jones & Bartlett Learning: Burlington, MA, USA, 2006.
  2. Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2010.
  3. Lenat, D.B.; Guha, R.V.; Pittman, K.; Pratt, D.; Shepherd, M. Cyc: Toward programs with common sense. Commun. ACM 1990, 33, 30–49.
  4. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 1998.
  5. Strubell, E.; Ganesh, A.; McCallum, A. Energy and policy considerations for deep learning in NLP. arXiv 2019, arXiv:1906.02243.
  6. Steane, A. Quantum computing. Rep. Prog. Phys. 1998, 61, 117.
  7. Wheeldon, A.; Shafik, R.; Rahman, T.; Lei, J.; Yakovlev, A.; Granmo, O.-C. Learning automata based energy-efficient AI hardware design for IoT applications. Philos. Trans. R. Soc. A 2020, 378, 20190593.
  8. Priya, S.; Inman, D.J. Energy Harvesting Technologies; Springer: Berlin/Heidelberg, Germany, 2009.
  9. Kamalinejad, P.; Mahapatra, C.; Sheng, Z.; Mirabbasi, S.; Leung, V.C.; Guan, Y.L. Wireless energy harvesting for the Internet of Things. IEEE Commun. Mag. 2015, 53, 102–108.
  10. Baig, M.I.; Shuib, L.; Yadegaridehkordi, E. Big Data Tools: Advantages and Disadvantages. J. Soft Comput. Decis. Support Syst. 2019, 6, 14–20.
  11. Sivarajah, U.; Kamal, M.M.; Irani, Z.; Weerakkody, V. Critical analysis of Big Data challenges and analytical methods. J. Bus. Res. 2017, 70, 263–286.
  12. Qiu, J.; Wu, Q.; Ding, G.; Xu, Y.; Feng, S. A survey of machine learning for big data processing. EURASIP J. Adv. Signal Process. 2016, 2016, 67.
  13. Qayyum, A.; Qadir, J.; Bilal, M.; Al-Fuqaha, A. Secure and robust machine learning for healthcare: A survey. IEEE Rev. Biomed. Eng. 2020, 14, 156–180.
  14. Bhagoji, A.N.; Cullina, D.; Sitawarin, C.; Mittal, P. Enhancing robustness of machine learning systems via data transformations. In Proceedings of the 2018 52nd Annual Conference on Information Sciences and Systems (CISS), Princeton, NJ, USA, 21–23 March 2018; pp. 1–5.
  15. Hanif, M.A.; Khalid, F.; Putra, R.V.W.; Rehman, S.; Shafique, M. Robust machine learning systems: Reliability and security for deep neural networks. In Proceedings of the 2018 IEEE 24th International Symposium on On-Line Testing and Robust System Design (IOLTS), Platja d’Aro, Spain, 2–4 July 2018; pp. 257–260.
  16. Rozsa, A.; Günther, M.; Boult, T.E. Are accuracy and robustness correlated? In Proceedings of the 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), Anaheim, CA, USA, 18–20 December 2016; pp. 227–232.