Chatbots for Cyber Security: Comparison
Please note this is a comparison between Version 1 by Amit Arora and Version 2 by Rita Xu.

Groups of cyber criminals/hackers have carried out cyber-attacks using various tactics with the goal of destabilizing web services in a specific context for which they are motivated. Predicting these attacks is a critical task that assists in determining what actions should be taken to mitigate the effects of such attacks and to prevent them in the future.

  • chatbot
  • cyber security
  • artificial intelligence

1. Introduction

Cybersecurity is a multidisciplinary field and can have far-reaching economic, environmental, and social consequences [1][2][3]. Cybersecurity statistics indicate that there are 2200 cyber-attacks per day, with a cyber-attack happening every 39 seconds on average. In the US, a single data breach costs an average of USD 9.44 million, and cybercrime is predicted to cost USD 8 trillion in 2023 [4]. As governments and businesses become more reliant on new communication technologies and social media, the threat of cyber-attacks on such organizations has increased tremendously. To counter such threats, governments and businesses have increased their investments in cybersecurity [5]. Advances in natural language processing (NLP) and machine learning (ML) techniques have led to chatbots (also known as conversational agents) becoming capable of extracting meaningful information regarding cybersecurity threats on social media [6]. The rapid deployment of artificial intelligence (AI), coupled with the digitalization of a globalized economy, has produced a vast amount of textual data on social media. Chatbot applications, along with other technology-enabled solutions, support the sustainable development of global businesses and economies. Governments, businesses, and political parties depend on the sentiments and opinions expressed on social media sites to gauge the mood of the public in real time [7]. Social media is also a vital source of information related to security threats to nations and business organizations. Consequently, it is imperative for intelligence and security communities to delve deeper into cybersecurity to protect national security and economic interests.
Social networks on the internet have enabled people to interact with each other in real time. Microblogging platforms such as Twitter have emerged as the most popular communication tools, since they allow a wide variety of expressions, such as interactive short texts, pictures, and emojis, with relative ease [8][9]. Such platforms act as a public square where users express their feelings, sentiments, ideas, and opinions on wide-ranging topics. Research has shown that analyzing the feelings and sentiments expressed on social networks and platforms is an effective way to forecast a variety of events, such as market trends, election results, and brand image [8][10]. Sentiment analysis can be performed quickly on the large amount of textual data available on social platforms and has been applied in various fields. Recent research has focused on the sentiment analysis of social media text related to COVID-19 and monkeypox [11], as well as business-related entrepreneurship [12]. However, there is a dearth of research on assessing sentiments to detect probable cybersecurity threats.
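To make the sentiment analysis idea concrete, the sketch below scores short posts with a tiny hand-built lexicon. This is a deliberate toy: the word lists and example posts are hypothetical, and real pipelines use far larger lexicons or trained models.

```python
import re

# Illustrative word lists only -- far smaller than any real sentiment lexicon.
POSITIVE = {"good", "great", "secure", "safe", "trust"}
NEGATIVE = {"bad", "breach", "attack", "hack", "threat", "leak"}

def sentiment_score(text: str) -> int:
    """Count positive words minus negative words in a short post."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# A post mentioning a breach and an attack scores negative, one weak signal
# that a cybersecurity threat is being discussed.
print(sentiment_score("massive data breach, another attack on users"))  # -2
```

In practice, such scores would be aggregated over many posts and combined with other features before any threat-related conclusion is drawn.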

2. Chatbots for Cyber Security

A chatbot is an application that uses artificial intelligence (AI) to communicate. Artificial intelligence is the automation of intelligent behavior, which allows machines to simulate anthropomorphic conversations. Chatbots are programmed to use AI concepts and techniques such as natural language processing (NLP), artificial intelligence markup language (AIML), pattern matching, ChatScript, and natural language understanding (NLU) to communicate with users, analyze the conversation, and use the extracted data for marketing, personalized content, targeting specific groups, and so on. Chatbots can be classified by knowledge domain, the service provided, the goals, the input processing and response generation method, the degree of human aid, and the build method. The knowledge domain classification considers the knowledge a chatbot can access, as well as the amount of data it is trained on. Closed-domain chatbots are focused on a certain knowledge subject and may fail to answer other questions, whereas open-domain chatbots can talk about various topics and respond effectively [13]. Conversely, the sentimental proximity of the chatbot to the user, the amount of intimate interaction, and chatbot performance are factors in the classification of chatbots based on the service provided. Interpersonal chatbots are in the communication area and offer services such as restaurant reservations, flight reservations, and FAQs. They gather information and pass it on to the user, but they are not the user’s companions. They are permitted to have a personality, be nice, and recall information about the user; however, they are not required or expected to do so [13]. Adamopoulou et al. [13] state that “Intrapersonal chatbots exist within the personal domain of the user, such as chat apps like Messenger, Slack, and WhatsApp. They are companions to the user and understand the user like a human does. 
Inter-agent chatbots become omnipresent while all chatbots will require some inter-chatbot communication possibilities. The need for protocols for inter-chatbot communication has already emerged. Alexa-Cortana integration is an example of inter-agent communication” (pp. 373–383). Informative chatbots, such as FAQ chatbots, are designed to offer the user information that has been stored in advance or is available from a fixed source. The manner of processing inputs and creating responses is taken into consideration when classifying based on input processing and response generation. The relevant replies are generated using one of three models: rule-based, retrieval-based, or generative. Another classification for chatbots is based on how much human aid is included in their components. Human computation is used in at least one element of a human-aided chatbot. To address the gaps produced by the constraints of completely automated chatbots, crowd workers, freelancers, or full-time employees can incorporate their intelligence into the chatbot logic. The work in [13] (pp. 373–383) examines the classification of chatbots by development platform permissions, where the authors defined ‘development platforms’ as “…open-source, such as RASA, or can be of proprietary code such as development platforms typically offered by large companies such as Google or IBM.” Two of the main categories that chatbots may fall into with respect to their anthropomorphic characteristics are the error-free chatbot and the clarification chatbot. Anthropomorphism is “the attribution of human characteristics or traits to nonhuman agents” [14] (p. 865). Anthropomorphic chatbots are perceived as more palatable by consumers when consumers themselves perceive the chatbots to be humanlike, rather than when firms merely design them to be humanlike [15]. 
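Of the three response-generation models named above, the rule-based model is the simplest to sketch: match the user's utterance against ordered patterns and return the first canned reply. The patterns and replies below are hypothetical examples, not taken from any product.

```python
import re

# Ordered (pattern, reply) rules; the first match wins.
RULES = [
    (re.compile(r"\b(hi|hello)\b", re.I), "Hello! How can I help you?"),
    (re.compile(r"\bpassword\b", re.I), "Never share your password in chat."),
    (re.compile(r"\bbreach\b", re.I), "Please report suspected breaches to the security team."),
]
# When no rule fires, fall back to asking the user to rephrase.
FALLBACK = "Sorry, could you rephrase that?"

def respond(utterance: str) -> str:
    for pattern, reply in RULES:
        if pattern.search(utterance):
            return reply
    return FALLBACK

print(respond("I lost my password"))  # Never share your password in chat.
```

A retrieval-based model would instead rank stored responses by similarity to the utterance, and a generative model would compose the reply token by token.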
An error-free chatbot can be defined as a hypothetically flawless chatbot, while a clarification chatbot has difficulties inferring meaning and therefore asks the user for clarification. Clarification chatbots are seen as more anthropomorphic, since asking for clarification is seen as giving care and attention to the needs of the customer. According to [16], “The error-free chatbot offers no indication that it is anything but human. It correctly interprets all human utterances and responds with relevant and precise humanlike utterances of its own.” On the first parse, the clarification chatbot does not have the intelligence to accurately interpret all human utterances. The chatbot, however, is clever enough to identify the root of the misunderstanding, referred to as a difficulty source, and request an explanation. Since seeking clarification is a normal element of interpersonal communication, clarification chatbots’ anthropomorphic characteristics increase with their ability to recognize a difficulty source and display intersubjective effort. There is no current commercial application of the error-free chatbot; however, clarification chatbots are currently used by companies such as Amazon, Walmart, T-Mobile, Bank of America, and Apple as first-contact customer service representatives. Threats and vulnerabilities are key factors affecting the cyber security of chatbots. Cyber threats can be characterized as methods by which a computer system can be hacked. Spoofing, tampering, repudiation, information disclosure, denial of service, and privilege elevation are examples of chatbot threats. Conversely, vulnerabilities are ways in which a system can be harmed that are not appropriately mitigated. When a system is not effectively maintained, has bad coding, lacks protection, or is subject to human error, it becomes vulnerable and open to attack. 
Self-destructive messages can be used in conjunction with other security measures, such as end-to-end encryption, secure protocols, user identity authentication, and authorization, to reduce vulnerabilities. Another method to ensure the security of chatbots is the use of user behavioral analytics (UBA). A vulnerability is defined as a weakness in a system’s security protocols, internal controls, or implementation that could be exploited or triggered by a threat source. The secure development lifecycle refers to the process of incorporating security components into the software development lifecycle (SDLC). The SDLC, in turn, is a thorough plan that outlines how companies construct applications from conception through decommissioning. According to [17], implementing security development lifecycle (SDL)-related activities in the development lifecycle is one of the most effective ways to mitigate vulnerabilities. Planning and requirements, architecture and design, test planning, coding, and testing the code and outcomes are phases commonly followed by all secure development lifecycle models. This reduces vulnerabilities and openness to attacks. User behavioral analytics (UBA) is a method of analyzing user activity patterns through software applications. It uses advanced algorithms and statistical analysis to spot any unusual behavior that could indicate a security risk. Such analytical software allows for easy identification of other bots being used to infiltrate a secure system through hacking, thereby reducing the risk of a cyber-attack. As previously mentioned, cyber threats can be characterized as methods by which a computer system can be hacked. Spoofing, tampering, repudiation, information disclosure, denial of service, and privilege elevation are examples of threats. To reduce the impacts of these threats, specific approaches need to be taken for each particular threat. 
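As a toy illustration of the statistical side of UBA, the sketch below flags days whose request counts deviate strongly from a user's baseline. The counts, the threshold, and the z-score test itself are all illustrative assumptions; real UBA products use much richer features and models.

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Return indices whose value lies more than `threshold` sample standard
    deviations from the mean of the series (a simple z-score outlier test)."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, x in enumerate(counts)
            if sigma > 0 and abs(x - mu) / sigma > threshold]

# Hypothetical daily request counts for one user: day 6 looks like bot abuse.
daily_requests = [102, 98, 110, 95, 105, 99, 2400]
print(flag_anomalies(daily_requests))  # [6]
```

Note that a single extreme outlier inflates the standard deviation, which is why the threshold here is modest; robust statistics (e.g., median-based scores) are a common refinement.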
Spoofing is performed to gain information and use it to impersonate something or someone else. To mitigate this, correct authentication, such as a strong password, is required to secure sensitive data. Tampering is a threat in which the hacker aims to maliciously modify data. Here, the mitigation strategy is to use digital signatures, audit trails, a network time protocol, and log timestamps. Denial of service is another category of threat in which the attacker intends to deny access to valid users. In this instance, the best strategies to reduce this threat are filtering and throttling [17].
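One common way to realize the throttling mitigation (an illustrative choice, not one prescribed by the source) is a token bucket: requests are admitted only while tokens remain, and tokens refill at a fixed rate, so a flood of requests is cut off quickly while normal traffic passes.

```python
# Token-bucket throttling sketch; capacity and refill rate are illustrative.
class TokenBucket:
    def __init__(self, capacity: int, refill_per_tick: int):
        self.capacity = capacity
        self.refill = refill_per_tick
        self.tokens = capacity

    def tick(self) -> None:
        """Advance one time step, refilling tokens up to capacity."""
        self.tokens = min(self.capacity, self.tokens + self.refill)

    def allow(self) -> bool:
        """Admit a request if a token remains; otherwise throttle it."""
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, refill_per_tick=1)
# A burst of 5 requests in one tick: the first 3 pass, the rest are throttled.
print([bucket.allow() for _ in range(5)])  # [True, True, True, False, False]
```

Filtering would complement this by rejecting requests that match known attack signatures before they ever consume a token.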