Methods Based on Software-Defined Networks

With the rapid advancement of the Internet of Things (IoT), there has been a global surge in network traffic. Software-Defined Networks (SDNs) provide a holistic network perspective and facilitate software-based traffic analysis, making them better suited to handling dynamic loads than traditional networks. The standard SDN control plane has been designed around either a single controller or multiple distributed controllers; however, a logically centralized single controller faces severe bottleneck issues.

  • SDN
  • flow fluctuation
  • deep temporal reinforcement learning

1. Introduction

The evolution of the Internet of Things [1], mobile edge computing [2], and big data [3] has produced phenomenal growth in network traffic globally. This has driven the deployment of additional equipment, such as sensors, routers, and switches, with correspondingly higher energy consumption. Better infrastructure and traffic control methods alone are insufficient to ease the traffic load. Many network infrastructures are hardware-based, integrating all network management functions into the hardware. This delays the adoption of new ideas, and sometimes the hardware must be re-designed to accommodate new algorithms [3]. Dedicated devices such as traffic shapers, load balancers, and QoS mechanisms are deployed in networks to prevent congestion: they regulate data flow, distribute traffic evenly, and prioritize critical applications, ensuring optimal performance and reliability. Often, however, these network components, such as network controllers, are under-utilized [4][5], with an average utilization of only 30–40%, and this figure drops again by a factor of three or more during off-peak hours [4].
A great deal of effort has been focused on traditional networks, such as re-engineering and dynamic adaptation [6][7]. Still, these networks' lack of centralized control makes implementing device management and protocol updates challenging. The best way to cope with traditional network issues is to equip networks with intelligent functionality.
Software-Defined Networks (SDNs) provide various technical benefits, particularly for network traffic engineering [8], owing to the separation of the data and control planes. Advances in SDNs have embedded intelligent functionality by learning the complexity of network operation, improving network energy efficiency and application Quality of Service (QoS) [9][10]. Despite these benefits, SDNs still have limitations, such as scalability, reliability, and a single point of failure [11][12]. Many existing SDN load-balancing techniques rely on static controller deployment, where controllers are pre-assigned to specific network segments or devices. While this approach provides a straightforward setup, it lacks adaptability to dynamic traffic fluctuations and may lead to an imbalanced load distribution among controllers. Multiple controllers [13][14][15] are required to recover from a single point of failure and to provide better network performance [16]. However, multiple controllers face placement issues [17], leading to an imbalanced network load [18]. Many researchers have considered load-balancing issues in the existing literature. Still, most existing works assume static controller deployment in SDNs, ignoring flow variations and traffic bursts, which ultimately leads to an imbalanced load among controllers and increases network latency, load, and overall cost.
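As a toy illustration of the static-deployment limitation described above, the short Python sketch below (all controller and switch names are hypothetical) sums flow-setup requests per controller under a fixed mapping; once traffic bursts on a subset of switches, one controller overloads while the other stays nearly idle.

```python
# Minimal sketch (hypothetical topology): a fixed switch-to-controller mapping
# cannot rebalance when per-switch flow rates fluctuate.

STATIC_MAPPING = {          # controller -> switches, fixed at deployment time
    "C1": ["S1", "S2"],
    "C2": ["S3", "S4"],
}

def controller_load(flow_rates):
    """Sum flow-setup requests per controller under the fixed mapping."""
    return {
        ctrl: sum(flow_rates.get(sw, 0) for sw in switches)
        for ctrl, switches in STATIC_MAPPING.items()
    }

# Off-peak traffic: the load is roughly balanced.
print(controller_load({"S1": 100, "S2": 120, "S3": 110, "S4": 90}))   # {'C1': 220, 'C2': 200}

# A burst on S1/S2: C1 is overloaded while C2 is almost idle, and the
# static mapping offers no way to shift switches between controllers.
print(controller_load({"S1": 900, "S2": 700, "S3": 60, "S4": 40}))    # {'C1': 1600, 'C2': 100}
```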
Hence, an optimal solution is essential to handle dynamic, large-scale traffic. Such a solution should intelligently map controllers so that the network load can be managed on the fly and traffic flow fluctuations can be handled dynamically. Some advanced SDN load-balancing techniques have been proposed for traffic management, including round-robin load balancing, traffic classification with machine learning, unsupervised learning techniques in SDNs, and congestion control methods based on reinforcement learning. These techniques aim to optimize the load distribution by dynamically assigning network segments or devices to controllers based on real-time traffic analysis. However, challenges may arise in efficiently managing dynamic controller mappings and ensuring optimal load balancing without introducing additional overhead.
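The dynamic-assignment idea can be sketched as a simple greedy re-mapping pass; the sketch below is an illustration only (the `reassign_switches` helper and its inputs are assumptions, not an algorithm from the cited works) and ignores the migration overhead mentioned above.

```python
# Illustrative greedy re-mapping: assign switches (heaviest first) to the
# currently least-loaded controller, based on measured flow-setup rates.

def reassign_switches(flow_rates, controllers):
    load = {c: 0 for c in controllers}
    mapping = {}
    for switch, rate in sorted(flow_rates.items(), key=lambda kv: -kv[1]):
        target = min(load, key=load.get)   # least-loaded controller so far
        mapping[switch] = target
        load[target] += rate
    return mapping

# Under the bursty traffic from the previous sketch, the mapping adapts:
print(reassign_switches({"S1": 900, "S2": 700, "S3": 60, "S4": 40}, ["C1", "C2"]))
# {'S1': 'C1', 'S2': 'C2', 'S3': 'C2', 'S4': 'C2'}  -> controller loads 900 vs 800
```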
Additionally, recent advancements in SDN load balancing involve integrating machine learning techniques, specifically reinforcement learning and deep learning. This integration optimizes controller decisions and enhances load distribution efficiency, benefiting from computing technologies such as the Tensor Processing Unit (TPU) and Graphics Processing Unit (GPU). These approaches use intelligent algorithms to adapt to varying network conditions and traffic patterns, offering the potential for improved load balancing and network performance. However, managing dynamic network traffic with flow fluctuation remains a less explored area.
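To make the reinforcement-learning angle concrete, the sketch below shows a plain tabular Q-learning update with epsilon-greedy selection of a target controller. The state, action, and reward encodings are assumptions chosen for illustration and do not reproduce any specific scheme from the literature.

```python
# Generic tabular Q-learning for choosing a target controller (illustrative only).
import random
from collections import defaultdict

ACTIONS = ["C1", "C2", "C3"]                     # candidate target controllers
q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
alpha, gamma, epsilon = 0.1, 0.9, 0.2            # learning rate, discount, exploration

def choose_controller(state):
    """Epsilon-greedy action selection over candidate controllers."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(q_table[state], key=q_table[state].get)

def update(state, action, reward, next_state):
    """One-step Q-learning update."""
    best_next = max(q_table[next_state].values())
    q_table[state][action] += alpha * (reward + gamma * best_next - q_table[state][action])

# Example step: the reward could be the negative load imbalance after a migration.
s, s_next = ("overloaded", "C1"), ("balanced", "C2")
a = choose_controller(s)
update(s, a, reward=-0.3, next_state=s_next)
```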

2. Centralized Flow Control Methods Based on Software-Defined Networks

In recent years, increased effort has been invested in centralized flow control methods based on SDNs [19][20]. Centralized methods for flow management using a network operating system named NOX are presented in [21][22]. These methods manipulate the switches according to management decisions: if an incoming packet at a switch matches a flow entry, the switch applies the related actions. A round-robin load-balancing method that uses a circular queue to decide where to send each incoming client's request is proposed in [23]; it responds to DNS requests with a list of IP addresses. However, these approaches fail to handle real-time traffic fluctuation. Moreover, centralized re-routing decisions are essential in most mechanisms, but the cost involved in re-routing negatively affects their decision-making efficiency. For instance, the study in [24] is based on load shifting for a data center network when the flow is at its peak, but this shifting method is not suitable for centralized networks. Another study [25] proposed a capacitated K-center approach to identify the minimum number of controllers needed and their positions, yet it is unable to deal with dynamic traffic variation. A comprehensive review of several optimized controller placement algorithms in SDNs is given in [26], which raised many research challenges, such as unbalanced network load and computing the optimal number of controllers needed in the network.
To address these challenges, SDN-based technologies must be applied for network load balancing, traffic forwarding, and better bandwidth utilization [27]. However, current centralized SDNs cannot handle dynamic IoT requirements; as a result, available resources are under-utilized due to the lack of a dynamic rule-placement approach. An efficient approach is thus needed, as billions more devices will be connected in the future, generating data exponentially [28]. Network management is therefore essential to handle the massive collection of information and devices and to process the generated data efficiently. Overall network performance depends on resource utilization [29]; under- or over-utilizing network components degrades performance and minimizes network utility. Suitable technologies are consequently required to control network traffic flows for load balancing and latency minimization, and addressing these challenges is imperative to improve the network's scalability and robustness.
To balance the network load, better architectures must be designed that enhance scalability without overloading the controllers; the main aim is to reduce the load on the central component without reducing load-balancing efficiency. In relation to this issue, the authors in [30] state that several mechanisms can address the challenge of balancing the network load. For instance, one approach changes the default controller of a switch by directing all of that switch's requests to a new controller; alternatively, each switch can be associated with more than one controller, so that it sends some requests to one controller and the rest to another, distributing the flows. Still, this mechanism cannot cope with heavy traffic and flow variation. For a large-scale network, an effective load-balancing algorithm is required to increase the flexibility of the network.
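The round-robin scheme of [23] can be illustrated with a short sketch: a circular queue of candidate addresses is rotated by one position per request. The class name and addresses below are assumptions; the cited work applies the idea to DNS responses rather than to this simplified interface.

```python
# Hedged sketch of round-robin balancing with a circular queue (details assumed).
from collections import deque

class RoundRobinBalancer:
    def __init__(self, addresses):
        self._queue = deque(addresses)       # circular queue of candidate targets

    def next_target(self):
        """Return the next address and rotate the queue by one position."""
        target = self._queue[0]
        self._queue.rotate(-1)
        return target

rr = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print([rr.next_target() for _ in range(5)])
# ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1', '10.0.0.2']
```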
In many instances, Deep Reinforcement Learning (DRL) has yielded impactful results, demonstrating outstanding performance in various applications such as natural language processing and games [31]. Beyond this, many businesses have begun using deep learning to enhance their services and products. A machine learning technique based on DRL is well suited to achieving favorable outcomes: it can explore a vast solution space, adapt to rapid fluctuations in the data flow, and learn from feedback [32][33][34]. The study in [35] introduced a mathematical model to calculate the number of controllers required in the network and their connection with switches; however, this approach is time-consuming, rendering it ineffective for large-scale networks. In another study [36], a cluster controller is proposed to facilitate the movement of switches, enhancing network throughput but at the cost of increased controller response time. A different approach, based on the dynamic migration of switches using swarm optimization, is presented in [37], but it incurs elevated costs and often leaves the network unstable. An alternative strategy, outlined in [30], considers the network load: when any controller becomes overloaded, it randomly selects another controller to shift its load to, but it does not consider the scenario in which that controller may itself become overloaded after the migration. The authors in [38] proposed a mechanism utilizing reinforcement learning to balance the network load, but this method proved ineffective in achieving load balance.
The extensive literature on reinforcement learning for unbalanced network load can be classified into controller optimization and switch migration methods. Controller optimization determines the number of controllers needed and where to place them in the network, while switch migration methods manage the network load by migrating load from one controller to another. In summary, existing research offers good solutions only for static load balancing and targets only one or two issues related to SDN controllers; no solution in the literature simultaneously addresses network performance, load balancing, latency minimization, and dynamic flow fluctuation. In response to this gap, the researchers' novel temporal Deep Q-Network (tDQN) is introduced into a dynamic SDN (dSDN) environment, aiming to significantly improve network Quality of Service (QoS).
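As a rough illustration of what a "temporal" deep Q-network can look like, the PyTorch sketch below feeds a short history of per-controller load vectors through an LSTM before a linear Q-value head that scores each candidate target controller. It is a generic sketch under assumed input shapes, not the tDQN architecture itself.

```python
# Generic temporal Q-network sketch (assumed shapes; not the cited tDQN design).
import torch
import torch.nn as nn

class TemporalQNetwork(nn.Module):
    def __init__(self, n_controllers, hidden=64):
        super().__init__()
        # Input: a sequence of load vectors, one value per controller per time step.
        self.lstm = nn.LSTM(input_size=n_controllers, hidden_size=hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, n_controllers)    # one Q-value per target controller

    def forward(self, load_history):
        # load_history: (batch, time_steps, n_controllers)
        _, (h_n, _) = self.lstm(load_history)
        return self.q_head(h_n[-1])                       # (batch, n_controllers)

net = TemporalQNetwork(n_controllers=4)
q_values = net(torch.rand(1, 10, 4))       # 10 past load snapshots for 4 controllers
action = q_values.argmax(dim=-1)           # controller chosen for the next migration
```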