Methods to Enhance Internet of Things Network Security: History

The internet of things (IoT) has ushered in a new era of connectivity, transforming various industries by enabling faster sensor and data access. The widespread adoption of the IoT is not without challenges. Devices often grapple with limited battery lifetimes, the need to function in remote locations, and demanding transceiver operations. Among these challenges, security stands out as the most daunting.

  • internet of things
  • feature reduction
  • convolutional neural networks
  • XGBoost

1. Introduction

The internet of things (IoT) has ushered in a new era of connectivity, transforming various industries by enabling faster sensor and data access. This enhanced networking capability has been pivotal in facilitating real-time monitoring, which is indispensable for further process optimization across sectors. Real-time data acquisition has significantly improved healthcare by enabling timely patient monitoring and emergency notification systems [1]. In healthcare, IoT devices range from blood pressure and heart rate monitors to advanced devices capable of monitoring specialized implants. The internet of medical things (IoMT) has even led to the creation of automated systems used to analyze health statistics [2]. The industrial application of the IoT includes asset management, predictive maintenance, and manufacturing process control. Overall, the IoT’s integration into various domains is revolutionizing the way we live and work, providing more efficient, cost-effective, and intelligent solutions [3].
In the manufacturing sector, the IoT has streamlined operations, ensuring efficient production and quality control. The integration of IoT devices has revolutionized the way machines and systems communicate and interact with each other, forming the backbone of smart factories. Through IoT, manufacturers can monitor and control their production processes in real time, with sensors embedded in machines, equipment, and products collecting data on various parameters such as temperature, pressure, vibration, and performance metrics. This data can be transmitted and analyzed instantly, providing valuable insights into operations. IoT also helps manufacturers achieve greater visibility and transparency across the entire supply chain, optimizing inventory management, reducing stockouts, and improving order fulfillment. Technological advancements continue to benefit manufacturers by opening new revenue streams, improving industrial safety, and reducing operational costs, reshaping the way manufacturers operate in the digital revolution [4].
The widespread adoption of the IoT is not without challenges. Devices often grapple with limited battery lifetimes, the need to function in remote locations, and demanding transceiver operations. Among these challenges, security stands out as the most daunting. While a device running out of battery is an observable setback, a data breach, often clandestine, can wreak havoc. IoT devices must communicate with each other and central systems, requiring complex transceiver operations. This can lead to high energy consumption and potential interference with other devices, complicating the network [5]. Security is indeed one of the most critical challenges in the IoT ecosystem. The interconnected nature of these devices means that a breach in one device can potentially compromise an entire network. Issues such as weak authentication, lack of encryption, and insecure interfaces can lead to unauthorized access and data theft [6]. Radio frequency (RF) attacks have become a prevalent attack vector within the IoT ecosystem. Attackers can exploit vulnerabilities in wireless communication protocols to intercept, modify, or disrupt the RF signals between devices. This can lead to data leakage, device malfunction, or even taking control of the devices [7]. Message Queuing Telemetry Transport (MQTT) is one of the standard application layer protocols emerging in the IoT ecosystem [8]. As an emerging technology, it is a popular target for malicious actors seeking new vulnerabilities [9].
To counter these threats, several network security measures, such as blocklists and firewalls, have been implemented [10][11]. However, artificial intelligence (AI) algorithms aim to address security challenges without the constraints of predefined rules or continuous manual intervention [12]. A pivotal factor influencing AI’s performance is the judicious selection of the hyperparameters that steer the algorithm. With the burgeoning complexity of emerging algorithms, the conventional trial-and-error approach to hyperparameter tuning is becoming increasingly untenable. This optimization challenge can be equated to NP-hard problems, which are notoriously difficult to resolve using discrete methods [13]. A potential respite from this optimization conundrum lies in metaheuristic algorithms: while it is not always possible to pinpoint the absolute optimal solution, their iterative nature enhances the probability of identifying a near-optimal one. Often, in practical scenarios, a “good enough” solution is more valuable than an elusive perfect one. Additionally, it is important to explore and address emerging challenges that accompany developments in the field.
The history of intrusion detection systems (IDSs) and intrusion prevention systems (IPSs) can be traced back to an academic paper written in 1986 [14]. The Stanford Research Institute developed the Intrusion Detection Expert System (IDES) using statistical anomaly detection, signatures, and profiles to detect malicious network behaviors. In the early 2000s, IDSs became a security best practice, with few organizations adopting IPSs due to concerns about blocking harmless traffic. The focus was on detecting exploits rather than vulnerabilities. The latter part of 2005 saw the growth of IPS adoption, with vendors creating signatures for vulnerabilities rather than individual exploits [15]. The capacity of IPSs increased, allowing for more network monitoring.
Next-generation intrusion prevention systems (NGIPSs), which include capabilities like application and user control, were developed during this time period, marking a significant turning point. Sandboxing and emulation features were added to fulfill the requirement for defense against zero-day malware. By 2016, most businesses had deployed next-generation firewalls (NGFWs), which contain IDS/IPS functionality. High-fidelity machine learning is the current focus for tackling threat detection and file analysis [16].
The groundbreaking academic publication “An Intrusion-Detection Model” by Dorothy E. Denning, which inspired the creation of IDES, is an example of early research addressing intrusion detection in networks. To identify hostile network behaviors, the Stanford Research Institute used statistical anomaly detection, signatures, and profiles. Significant turning points in the development of IPS technology have since been reached, such as the shift to NGIPSs and NGFWs [17].

2. Convolutional Neural Networks

CNNs are a specialized subclass of artificial neural networks (ANNs) that are particularly well-suited for analyzing visual data. CNNs are designed to automatically and adaptively learn spatial hierarchies of features. This is particularly beneficial for tasks like image recognition, object detection, and even medical image analysis. The concept of residual learning, as introduced by Kaiming He et al., further enhances the capabilities of CNNs by allowing them to benefit from deeper architectures without the risk of overfitting or vanishing gradients [18].
In contrast to traditional ANNs, where each neuron is connected to all neurons in the preceding and following layers, CNNs employ local connectivity, linking each neuron to a localized region of the input space. Yann LeCun’s paper emphasizes that this local connectivity is crucial for the efficient recognition of localized features in images [19]. Furthermore, CNNs share parameters across different regions of the input, which significantly reduces the number of trainable parameters; in traditional ANNs, each weight is unique, leading to a much larger number of parameters and higher computational costs. CNNs are also inherently designed to recognize the same feature regardless of its location in the input space, a form of spatial invariance that traditional ANNs lack. Notably, they often employ deeper architectures, which are made computationally feasible through techniques like residual learning, as discussed in Kaiming He et al.’s paper [18].
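To make the parameter-sharing point concrete, the short Python sketch below compares the trainable parameter counts of a hypothetical fully connected layer and a convolutional layer on a 28 × 28 grayscale input. The layer width, filter count, and kernel size are illustrative assumptions, not values taken from the referenced papers.

```python
# Illustrative comparison of trainable parameter counts for a fully connected
# layer versus a convolutional layer on a 28x28 grayscale input.
# All sizes below are arbitrary and chosen only for illustration.

input_pixels = 28 * 28          # 784 input values
hidden_units = 128              # hypothetical dense layer width

# Fully connected: every input connects to every hidden unit (plus biases).
dense_params = input_pixels * hidden_units + hidden_units   # 100,480

# Convolutional: 32 filters of size 3x3 shared across all spatial positions.
filters, kernel_h, kernel_w, in_channels = 32, 3, 3, 1
conv_params = filters * (kernel_h * kernel_w * in_channels) + filters  # 320

print(f"dense layer parameters: {dense_params:,}")
print(f"conv layer parameters:  {conv_params:,}")
```

The shared 3 × 3 kernels are reused at every spatial position, which is why the convolutional layer needs orders of magnitude fewer weights than the dense layer while still covering the whole input.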
The Inception architecture, introduced by Christian Szegedy et al., is another example of a deep yet computationally efficient network [20]. CNNs are designed to be computationally efficient, particularly when dealing with high-dimensional data. The architecture leverages local connectivity and parameter sharing to reduce computational requirements. The concept of residual learning, as discussed in the paper by Kaiming He et al., allows CNNs to be trained more efficiently, even when the network is very deep.
Notably, several unique architectural elements are associated with CNNs, and these include filters, kernels, and pooling layers. Filters and kernels use learnable weight matrices that are crucial for feature extraction. They slide or convolve across the input image to produce feature maps. Yann LeCun’s paper highlights the effectiveness of gradient-based learning techniques in training these filters [21]. Pooling layers serve to reduce the spatial dimensions of the input, thereby decreasing computational complexity and increasing the network’s tolerance to variations in the input. They are particularly useful in making the network robust to overfitting.
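The following minimal sketch, written with PyTorch (an assumed framework choice; the layer sizes are illustrative only), shows how these elements are typically composed: small learnable kernels convolve over the input to produce feature maps, and max-pooling layers halve the spatial dimensions before a final classifier.

```python
import torch
import torch.nn as nn

# Minimal CNN sketch: convolutional filter banks followed by pooling layers
# and a small classifier head. Layer sizes are illustrative only.
class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learnable 3x3 filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # halves spatial dimensions
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)  # for 28x28 inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# Example: a batch of four single-channel 28x28 inputs.
logits = TinyCNN()(torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```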
CNNs can be effectively combined with other types of neural networks, like recurrent neural networks (RNNs), for sequential data processing tasks such as video analysis and natural language processing. Additionally, CNNs can be integrated with traditional machine learning algorithms, like support vector machines (SVMs), for tasks like classification, thereby creating a hybrid model that leverages the strengths of both methodologies. In summary, CNNs offer a robust, adaptable, and computationally efficient approach to a wide range of machine learning tasks. Their unique architecture, as validated by seminal research papers, makes them highly effective for tasks involving spatial hierarchies and structured grid data.

3. Extreme Gradient Boosting

XGBoost is an optimized distributed gradient boosting approach designed to be highly efficient and flexible. It has gained immense popularity in machine learning competitions and is widely regarded as the “go-to” algorithm for structured data. XGBoost has been optimized for both computational speed and model performance, making it highly desirable for real-world applications [22]. There are several advantages of decision-tree-based techniques [23].
One of the most significant advantages of decision trees is their ease of interpretation. They can be visualized, and the decision-making process can be easily understood, even by non-experts. Decision trees are computationally inexpensive to build, evaluate, and interpret compared to algorithms like support vector machines (SVMs) [24] or ANNs. Unlike other algorithms that require extensive pre-processing, decision trees can handle missing values without imputation, making them more robust. Decision trees can also capture complex non-linear relationships in the data, which linear models may not capture effectively. Further, this approach can be used for both classification and regression tasks, making it very versatile.
Gini impurity is a metric used to quantify the disorder or impurity of a set of items. It is crucial for the “criterion” parameter in the decision tree algorithm. Lower Gini impurity values indicate more “pure” nodes. The Gini impurity is used to decide the optimal feature to split on at each node in the tree.
$$\mathrm{Gini}(t) = 1 - \sum_{i=1}^{C} p_i^{2}$$
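As a brief illustration, the Python function below computes the Gini impurity of a set of class labels directly from the equation above, where p_i is the proportion of samples of class i at the node and C is the number of classes; the example labels are hypothetical.

```python
from collections import Counter

def gini_impurity(labels):
    """Gini(t) = 1 - sum_i p_i^2 over the class proportions at a node."""
    n = len(labels)
    counts = Counter(labels)
    return 1.0 - sum((count / n) ** 2 for count in counts.values())

# A perfectly pure node has impurity 0; an evenly mixed binary node has 0.5.
print(gini_impurity(["attack"] * 10))                  # 0.0
print(gini_impurity(["attack"] * 5 + ["benign"] * 5))  # 0.5
```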
Further advantages of XGBoost stem from its ensemble learning approach [25]. Ensemble methods, particularly boosting algorithms like XGBoost, are less susceptible to the overfitting problem than single estimators due to their ability to iteratively optimize on the errors of previous models. By combining several models, ensemble methods can average out biases and reduce variance, thus minimizing the risk of overfitting. Ensemble methods often achieve higher predictive accuracy than individual models. XGBoost, in particular, has been shown to outperform deep learning models on certain types of data sets, especially when the data are tabular.
The objective function optimized by XGBoost includes both a loss term and a regularization term, making it adaptable to different problems:
$$\mathrm{Obj}(\theta) = \sum_{i=1}^{n} l\left(y_i, \hat{y}_i\right) + \sum_{k=1}^{K} \Omega\left(f_k\right)$$
where l is the loss between the prediction ŷ_i and the target y_i, and Ω(f_k) is the regularization term penalizing the complexity of the k-th of the K trees.
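As an illustrative sketch (using the xgboost and scikit-learn Python packages, with hyperparameter values chosen arbitrarily rather than taken from the source paper), the snippet below shows where the two terms surface in practice: the loss l is selected through the objective parameter, while reg_lambda, reg_alpha, and gamma shape the regularization term Ω.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic binary classification data standing in for network traffic records.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# The loss term l is set via `objective`; the regularization term Omega is
# controlled by reg_lambda (L2), reg_alpha (L1), and gamma (minimum loss
# reduction required for a split). Values here are illustrative only.
model = XGBClassifier(
    objective="binary:logistic",
    n_estimators=200,
    max_depth=4,
    learning_rate=0.1,
    reg_lambda=1.0,
    reg_alpha=0.0,
    gamma=0.1,
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```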

4. Metaheuristic Optimization

Metaheuristic optimization algorithms have gained significant attention in the field of computational intelligence for their ability to solve complex optimization problems that are often NP-hard. Traditional optimization algorithms, such as gradient-based methods, often get stuck in local optima and are not well suited for solving problems with large, complex search spaces. In contrast, metaheuristics offer several advantages [26].
Additionally, addressing the challenges of multi-objective optimization problems has been a focal point for many works, leading to the development of various multi-objective evolutionary algorithms [27]. However, a common hurdle in these algorithms is the delicate balance required between diversity and convergence. This balance critically impacts the quality of solutions derived from the algorithms [28].
Designed to explore the entire solution space, metaheuristics often find a near-optimal solution within reasonable time periods. They are problem-independent, meaning they can be applied to a wide range of optimization problems without requiring problem-specific modifications. Metaheuristics are highly scalable and can handle problems with a large number of variables and constraints. They are less sensitive to the initial conditions and can adapt to changes in the problem environment. Metaheuristics can find near-optimal solutions to NP-hard problems in polynomial time, which is a significant advantage over traditional methods that often fail to find feasible solutions within a reasonable time frame.
Algorithms often draw inspiration from various natural phenomena, social behaviors, and physical processes. Some notable examples include the genetic algorithms (GAs) [29], inspired by the process of natural selection and genetics; particle swarm optimization (PSO) [30], based on the social behavior of birds flocking or fish schooling; the ant colony optimization (ACO) [31] algorithm, which mimics the foraging behavior of ants in finding the shortest path; and the firefly algorithm (FA) [32], which draws inspiration from courting rituals of fireflies. Additional recent examples include the salp swarm optimizer (SSA) [33], the whale optimization algorithm (WOA) [34], and the COLSHADE [35] optimization algorithm.
Metaheuristics are a popular approach among researchers for improving hyperparameter selection. Many examples exist in the literature, with some interesting examples originating from medical applications [36]. Further applications include time-series forecasting [37][38] and computer security [39][40][41]. Hybridization techniques have also shown great promise when applied to metaheuristic algorithms, often producing algorithms that demonstrate performance improvements on given tasks [42].
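To illustrate how a metaheuristic can drive hyperparameter selection, the sketch below applies a deliberately simplified particle swarm optimization loop to two XGBoost hyperparameters, using cross-validated error as the fitness function. The swarm size, iteration budget, and PSO coefficients are arbitrary illustrative choices, not recommended settings from the referenced works.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# Simplified PSO sketch for tuning (max_depth, learning_rate) of XGBoost.
rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

bounds = np.array([[2.0, 10.0],      # max_depth search range
                   [0.01, 0.5]])     # learning_rate search range

def fitness(position):
    """Cross-validated error for one candidate hyperparameter vector."""
    depth, lr = int(round(position[0])), float(position[1])
    model = XGBClassifier(max_depth=depth, learning_rate=lr, n_estimators=50)
    return 1.0 - cross_val_score(model, X, y, cv=3).mean()

n_particles, n_iters, w, c1, c2 = 8, 10, 0.7, 1.5, 1.5
pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    fit = np.array([fitness(p) for p in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmin()].copy()

print("best (max_depth, learning_rate):", int(round(gbest[0])), round(gbest[1], 3))
print("best cross-validated error:", round(pbest_fit.min(), 4))
```

The same loop structure applies to other metaheuristics: only the rule for generating new candidate positions changes, while the fitness evaluation (here, cross-validated error) stays the same.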

This entry is adapted from the peer-reviewed paper 10.3390/app132312687

References

  1. Ghubaish, A.; Salman, T.; Zolanvari, M.; Unal, D.; Al-Ali, A.; Jain, R. Recent Advances in the Internet of Medical Things (IoMT) Systems Security. IEEE Internet Things J. 2021, 8, 8707–8718.
  2. Gupta, D.; Kayode, O.; Bhatt, S.; Gupta, M.; Tosun, A.S. Hierarchical Federated Learning based Anomaly Detection using Digital Twins for Smart Healthcare. In Proceedings of the 2021 IEEE 7th International Conference on Collaboration and Internet Computing (CIC), Atlanta, GA, USA, 13–15 December 2021.
  3. Turcu, C.; Turcu, C. Improving the quality of healthcare through Internet of Things. In Proceedings of the ICT Management for Global Competitiveness and Economic Growth in Emerging Economies (ICTM), Wroclaw, Poland, 21–23 October 2019.
  4. Valtanen, K.; Backman, J.; Yrjölä, S. Blockchain-Powered Value Creation in the 5G and Smart Grid Use Cases. IEEE Access 2019, 7, 25690–25707.
  5. Okuhara, H.; Elnaqib, A.; Dazzi, M.; Palestri, P.; Benatti, S.; Benini, L.; Rossi, D. A Fully-Integrated 5mW, 0.8Gbps Energy-Efficient Chip-to-Chip Data Link for Ultra-Low-Power IoT End-Nodes in 65-nm CMOS. arXiv 2021, arXiv:2109.01961.
  6. Luo, Z.; Wang, W.; Qu, J.; Jiang, T.; Zhang, Q. ShieldScatter: Improving IoT Security with Backscatter Assistance. arXiv 2018, arXiv:1810.07058.
  7. Gupta, P.; Dedeoglu, V.; Najeebullah, K.; Kanhere, S.S.; Jurdak, R. Energy-aware Demand Selection and Allocation for Real-time IoT Data Trading. arXiv 2020, arXiv:2002.02074.
  8. Azzedin, F.; Alhazmi, T. Secure data distribution architecture in IoT using MQTT. Appl. Sci. 2023, 13, 2515.
  9. Hintaw, A.J.; Manickam, S.; Aboalmaaly, M.F.; Karuppayah, S. MQTT vulnerabilities, attack vectors and solutions in the internet of things (IoT). IETE J. Res. 2023, 69, 3368–3397.
  10. Kodys, M.; Lu, Z.; Fok, K.W.; Thing, V.L.L. Intrusion Detection in Internet of Things using Convolutional Neural Networks. In Proceedings of the 2021 18th International Conference on Privacy, Security and Trust (PST), Auckland, New Zealand, 13–15 December 2021; pp. 1–10.
  11. Ayumi, V.; Rere, L.M.R.; Fanany, M.I.; Arymurthy, A.M. Optimization of Convolutional Neural Network using Microcanonical Annealing Algorithm. In Proceedings of the 2016 International Conference on Advanced Computer Science and Information Systems (ICACSIS), Malang, Indonesia, 15–16 October 2016.
  12. Li, J.; Zhao, Z.; Li, R.; Zhang, H. AI-based Two-Stage Intrusion Detection for Software Defined IoT Networks. arXiv 2018, arXiv:1806.02566.
  13. Xiao, X.; Yan, M.; Basodi, S.; Ji, C.; Pan, Y. Efficient Hyperparameter Optimization in Deep Learning Using a Variable Length Genetic Algorithm. arXiv 2020, arXiv:2006.12703.
  14. Denning, D.E. An Intrusion-Detection Model. IEEE Trans. Softw. Eng. 1987, 2, 222–232.
  15. Bace, R.; Mell, P. Intrusion Detection Systems; Technical Report, NIST Special Publication; NIST: Gaithersburg, MD, USA, 2001.
  16. Anderson, J.P. Computer Security Threat Monitoring and Surveillance; Technical Report; James P. Anderson Co.: Fort Washington, PA, USA, 1980.
  17. Rajib, N. Cisco Firepower Threat Defense (FTD): Configuration and Troubleshooting Best Practices for the Next-Generation Firewall (NGFW), Next-Generation Intrusion Prevention System (NGIPS), and Advanced Malware Protection (AMP); Cisco Press: Indianapolis, IN, USA, 2017.
  18. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
  19. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
  20. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015.
  21. LeCun, Y.; Kavukcuoglu, K.; Farabet, C. Convolutional networks and applications in vision. In Proceedings of the 2010 IEEE International Symposium on Circuits and Systems, Paris, France, 30 May–2 June 2010; pp. 253–256.
  22. Bentéjac, C.; Csörgő, A.; Martínez-Muñoz, G. A Comparative Analysis of XGBoost. arXiv 2019, arXiv:1911.01914.
  23. Shwartz-Ziv, R.; Armon, A. Tabular Data: Deep Learning is Not All You Need. arXiv 2021, arXiv:2106.03253.
  24. Noble, W.S. What is a support vector machine? Nat. Biotechnol. 2006, 24, 1565–1567.
  25. Deng, X.; Li, M.; Deng, S.; Wang, L. Hybrid gene selection approach using XGBoost and multi-objective genetic algorithm for cancer classification. arXiv 2021, arXiv:2106.05841.
  26. Liu, D.; Perreault, V.; Hertz, A.; Lodi, A. A machine learning framework for neighbor generation in metaheuristic search. arXiv 2022, arXiv:2212.11451.
  27. Liang, Y.; He, F.; Zeng, X.; Luo, J. An improved loop subdivision to coordinate the smoothness and the number of faces via multi-objective optimization. Integr. Comput. Aided Eng. 2022, 29, 23–41.
  28. Gao, X.; He, F.; Zhang, S.; Luo, J.; Fan, B. A fast nondominated sorting-based MOEA with convergence and diversity adjusted adaptively. J. Supercomput. 2023, 7, 1–38.
  29. Mirjalili, S.; Mirjalili, S. Genetic algorithm. Evol. Algorithms Neural Netw. Theory Appl. 2019, 780, 43–55.
  30. Marini, F.; Walczak, B. Particle swarm optimization (PSO). A tutorial. Chemom. Intell. Lab. Syst. 2015, 149, 153–165.
  31. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39.
  32. Yang, X.S. Firefly algorithm, stochastic test functions and design optimisation. Int. J. Bio Inspired Comput. 2010, 2, 78–84.
  33. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
  34. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  35. Gurrola-Ramos, J.; Hernàndez-Aguirre, A.; Dalmau-Cedeño, O. COLSHADE for real-world single-objective constrained optimization problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8.
  36. Zivkovic, M.; Jovanovic, L.; Ivanovic, M.; Krdzic, A.; Bacanin, N.; Strumberger, I. Feature selection using modified sine cosine algorithm with COVID-19 dataset. In Evolutionary Computing and Mobile Sustainable Networks: Proceedings of ICECMSN 2021; Springer: Berlin/Heidelberg, Germany, 2022; pp. 15–31.
  37. Bacanin, N.; Jovanovic, L.; Zivkovic, M.; Kandasamy, V.; Antonijevic, M.; Deveci, M.; Strumberger, I. Multivariate energy forecasting via metaheuristic tuned long-short term memory and gated recurrent unit neural networks. Inf. Sci. 2023, 642, 119122.
  38. Jovanovic, L.; Jovanovic, D.; Bacanin, N.; Jovancai Stakic, A.; Antonijevic, M.; Magd, H.; Thirumalaisamy, R.; Zivkovic, M. Multi-step crude oil price prediction based on lstm approach tuned by salp swarm algorithm with disputation operator. Sustainability 2022, 14, 14616.
  39. AlHosni, N.; Jovanovic, L.; Antonijevic, M.; Bukumira, M.; Zivkovic, M.; Strumberger, I.; Mani, J.P.; Bacanin, N. The xgboost model for network intrusion detection boosted by enhanced sine cosine algorithm. In Proceedings of the International Conference on Image Processing and Capsule Networks, Bangkok, Thailand, 20–21 May 2022; pp. 213–228.
  40. Zivkovic, M.; Jovanovic, L.; Ivanovic, M.; Bacanin, N.; Strumberger, I.; Joseph, P.M. Xgboost hyperparameters tuning by fitness-dependent optimizer for network intrusion detection. In Proceedings of the Communication and Intelligent Systems (ICCIS 2021), Delhi, India, 18–19 December 2022; pp. 947–962.
  41. Salb, M.; Jovanovic, L.; Zivkovic, M.; Tuba, E.; Elsadai, A.; Bacanin, N. Training logistic regression model by enhanced moth flame optimizer for spam email classification. In Computer Networks and Inventive Communication Technologies; Springer: Berlin/Heidelberg, Germany, 2022; pp. 753–768.
  42. Jovanovic, L.; Jovanovic, G.; Perisic, M.; Alimpic, F.; Stanisic, S.; Bacanin, N.; Zivkovic, M.; Stojic, A. The explainable potential of coupling metaheuristics-optimized-xgboost and shap in revealing vocs’ environmental fate. Atmosphere 2023, 14, 109.