Multi-Access Edge Computing: Comparison

Multi-access edge computing (MEC), based on hierarchical cloud computing, offers abundant resources to support the next-generation Internet of Things network.

  • mobile edge computing
  • directed acyclic graphs
  • deep reinforcement learning
  • multi-access edge computing
  • soft actor critic

1. Introduction

In recent years, advances in wireless technology combined with the widespread adoption of the Internet of Things (IoT) have paved the way for innovative computation-intensive applications, including augmented reality (AR), mixed reality (MR), virtual reality (VR), online gaming, intelligent transportation, and industrial and home automation. Consequently, the demand for these applications has surged [1][2][3]. By 2020, the number of IoT devices was projected to reach 24 billion. This tremendous increase means that vast numbers of smart devices (SDs) and sensors now generate and process an immense volume of data [4].
To cater to these computationally intensive applications, substantial computing resources and high performance are required. Addressing the escalating need for energy efficiency and managing the swift influx of user requests have emerged as significant challenges [5]. Initially, mobile cloud computing (MCC) was considered a viable solution for processing computationally intensive tasks. However, as the demand for real-time processing increased, the limitations of MCC became apparent [6], resulting in the introduction of mobile edge computing (MEC) as a potential solution to meet this burgeoning demand [7].
Multi-access edge computing [8] effectively deploys computing resources close to SDs and supports collaborative radio resource management (CRRM) and collaborative radio signal processing (CRSP). Conversely, a cloud radio access network (C-RAN) employs centralized signal processing and resource allocation, efficiently catering to user requirements [9]. Collectively, the attributes of these technologies have the potential to fulfill diverse requirements in upcoming artificial intelligence (AI)-based wireless networks [10].
Leveraging MEC to offload tasks is a promising approach to curtailing network latency and conserving energy. Specifically, MEC addresses the computational offloading requirements of IoT devices by processing tasks closer to the edge rather than relying solely on a central cloud [11]. However, the task offloading problem is recognized as non-deterministic polynomial-time hard (NP-hard) [12], making it challenging to solve. Although most research in this area has leaned toward heuristic or convex optimization algorithms, the increasing complexity of MEC, coupled with varying radio channel conditions, makes it difficult to consistently guarantee optimal performance with these conventional methods. Because such optimization problems must be solved frequently, careful planning is imperative for designing and managing future MEC networks.
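To make the offloading trade-off concrete, the following minimal sketch implements the kind of greedy heuristic such approaches build upon: each task runs wherever its estimated completion time is lower. All parameters (CPU frequencies, uplink rate, task sizes) are hypothetical placeholders, not values from the cited studies.

```python
# Hypothetical greedy offloading heuristic: each task is executed wherever
# its estimated completion time is lower. All parameter names and values
# are illustrative, not drawn from any of the cited papers.

def local_time(cycles: float, f_local: float) -> float:
    """Execution time if the task runs on the smart device (SD)."""
    return cycles / f_local

def offload_time(bits: float, cycles: float, rate: float, f_edge: float) -> float:
    """Uplink transmission delay plus execution time on the MEC server."""
    return bits / rate + cycles / f_edge

def decide(tasks, f_local=1e9, f_edge=10e9, rate=20e6):
    """Greedy rule: offload a task only when the edge finishes it sooner."""
    decisions = []
    for bits, cycles in tasks:
        t_loc = local_time(cycles, f_local)
        t_off = offload_time(bits, cycles, rate, f_edge)
        decisions.append("edge" if t_off < t_loc else "local")
    return decisions

if __name__ == "__main__":
    # (input size in bits, required CPU cycles) for each task
    tasks = [(2e6, 5e8), (1e5, 1e9), (8e6, 2e8)]
    print(decide(tasks))  # -> ['edge', 'edge', 'local']
```

Even this toy rule shows why the decision is non-trivial: the best choice flips with a task's compute-to-data ratio and the channel rate, and jointly optimizing many such decisions over shared radio and server resources is what makes the full problem NP-hard.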
In recent years, deep reinforcement learning (DRL), a subset of AI, has gained significant attention owing to its ability to tackle complex challenges across various sectors. As IoT networks become more distributed, the need for decentralized decision-making to enhance throughput and reduce power consumption increases, with DRL serving as a key tool. The emergence of the MEC paradigm has added complexity to multi-user, multi-server environments, bringing data-offloading decision-making to the forefront [13]. This MEC landscape necessitates addressing both user behavioral aspects and server pricing policies. A recent study combined prospect theory and the tragedy of the commons to model user satisfaction and potential server overexploitation, highlighting the intricate nature of the problem. In the context of MEC, while some research has explored DRL for task offloading, the focus has been predominantly on holistic offloading, overlooking the advantages of partial offloading, such as reduced latency and improved quality of service (QoS). Collaborative efforts among MEC servers, especially within a multi-server framework, can significantly enhance overall system performance.
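As a simple illustration of why partial offloading can outperform holistic (all-or-nothing) offloading, the sketch below splits a fraction lam of a task between the device and an MEC server working in parallel, so the completion time is the slower of the two branches. The latency model and all parameters are hypothetical.

```python
# Illustrative partial-offloading model: a fraction `lam` of a task's input
# and cycles is sent to the MEC server while the remainder runs locally,
# in parallel. Completion time is the slower branch. Parameters are
# hypothetical and only show why a partial split can beat all-or-nothing.

def completion_time(lam, bits=4e6, cycles=1e9,
                    f_local=1e9, f_edge=10e9, rate=20e6):
    t_local = (1 - lam) * cycles / f_local           # on-device portion
    t_edge = lam * (bits / rate + cycles / f_edge)   # upload + edge compute
    return max(t_local, t_edge)

# Sweep the split ratio to find the best trade-off.
best_time, best_lam = min((completion_time(l / 100), l / 100) for l in range(101))
print(f"lam={best_lam:.2f} -> {best_time:.3f}s "
      f"(local-only: {completion_time(0):.3f}s, full offload: {completion_time(1):.3f}s)")
```

With these illustrative numbers, the optimal split (lam ≈ 0.77) finishes in roughly 0.23 s, beating both pure local execution (1.0 s) and full offloading (0.3 s), which is the latency advantage of partial offloading noted above.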

2. Multi-Access Edge Computing

Recent research in the field of MEC has aimed to reduce latency and energy consumption through computation offloading and resource allocation techniques. A heuristic offloading algorithm designed to efficiently manage computationally intensive tasks was introduced in [14]. This algorithm achieves high throughput and minimizes latency when transferring tasks from an SD to an MEC server. However, despite its critical role in enhancing overall system performance, the algorithm's offloading decision-making process is overly focused on task priorities. A collaborative method between fog and cloud computing to curtail service delays on IoT devices was explored in [15]. This study focused on strategies for optimizing computation offloading, allocating computing resources, managing wireless bandwidth, and determining transmission power within a combined cloud/fog computing infrastructure. The overarching goal of these optimization strategies was to reduce both latency and energy consumption. Notably, the authors of both [16][17] employed sub-optimal methods, favoring minimal complexity, and highlighted the significance of practical and efficient approaches. The dynamics of energy link selection and transmission scheduling, particularly when processing applications that demand optimal energy within a network linking SDs and MEC servers, were investigated in [18]. Relying on an energy consumption model, the authors formulated an algorithm for energy-efficient link selection and transmission scheduling. An integrated algorithm that facilitates adaptive long-term evolution (LTE)/Wi-Fi link selection and data transmission scheduling was presented in [19] to enhance the energy efficiency of SDs in MCC systems. Upon evaluation, the proposed algorithm outperformed its counterparts in terms of energy efficiency. Furthermore, it demonstrated proficiency in managing battery life, especially when considering the unpredictable nature of wireless channels. While these two studies prioritized energy efficiency and the proposed algorithms performed commendably, neither addressed the adaptability required under varying network conditions.
The challenges of processing vast amounts of data and computational tasks were addressed using deep Q-network (DQN)-based edge intelligence within the MEC framework [20]. The authors of this study focused on the distribution of computational tasks and the allocation of resources between edge devices and cloud servers. Meanwhile, the authors of [21] addressed performance degradation and energy imbalances in SDs with a deep reinforcement learning-based offloading scheduler (DRL-OS). Notably, as the number of wireless devices grows, the expenses associated with DQN-based methodologies also increase. Several studies have leveraged actor-critic-based offloading in MEC environments to optimize service quality by analyzing agent behaviors and policies [22][23]. The authors of [24] delved into the offloading challenges in multi-server and multi-user settings, whereas the authors of [25] integrated the proximal policy optimization (PPO) algorithm for task offloading decisions. Implementing PPO in practical scenarios can be challenging because of its extensive sampling requirements. Wang et al. [26] conducted a study centered on task offloading decisions using the PPO algorithm, and Li et al. [27] addressed offloading issues within a multi-MEC-server, multi-user context.
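As a simplified stand-in for the DQN-based offloading schemes surveyed above, the sketch below uses tabular Q-learning with a binary local/offload action over a toy discretized server-load state. All costs and dynamics are hypothetical, and a real DQN would replace the Q-table with a neural network; the sketch only illustrates the value-update logic these schemes share.

```python
import random
from collections import defaultdict

# Simplified, hypothetical stand-in for DQN-based offloading: tabular
# Q-learning over a discretized MEC server load (0..4) with a binary
# action (0 = compute locally, 1 = offload). A real DQN replaces the
# table with a neural network; costs and dynamics here are illustrative.

ACTIONS = (0, 1)
Q = defaultdict(lambda: [0.0, 0.0])   # Q[state] = [value(local), value(offload)]
alpha, gamma, eps = 0.1, 0.9, 0.1     # learning rate, discount, exploration

def step(load, action):
    """Toy environment: offloading is cheap unless the server is congested."""
    if action == 1:
        cost = 0.2 + 0.3 * load       # upload delay + load-dependent queueing
        load = min(load + 1, 4)       # offloading raises server load
    else:
        cost = 0.6                    # fixed local execution delay
        load = max(load - 1, 0)       # server drains while we compute locally
    return load, -cost                # reward = negative delay

load = 0
for _ in range(5000):
    if random.random() < eps:         # epsilon-greedy exploration
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda act: Q[load][act])
    nxt, r = step(load, a)
    # Standard Q-learning update toward the bootstrapped target
    Q[load][a] += alpha * (r + gamma * max(Q[nxt]) - Q[load][a])
    load = nxt

print({s: ("offload" if q[1] > q[0] else "local") for s, q in sorted(Q.items())})
```

After training, the learned policy offloads when the server is lightly loaded and falls back to local execution as the load (and hence queueing delay) grows, mirroring the load-adaptive behavior the cited schemes aim for.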
Furthermore, several investigations have focused on using the deep deterministic policy gradient (DDPG) algorithm to counteract the offloading issues inherent in the MEC domain. Notably, DDPG outperforms PPO in continuous action spaces and in terms of data efficiency and stability, making it pivotal for reinforcement learning endeavors in the MEC space and offering effective solutions to offloading challenges. However, in certain environments, DDPG's reliance on random exploration may pose hurdles in identifying the optimal policy. By contrast, the soft actor-critic (SAC) algorithm boasts greater stability than deterministic policies and exhibits excellent sampling efficiency. Modern research is now leveraging SAC to address computational offloading challenges. Liu et al. [28] enhanced data efficiency and stability using SAC, where multiple users collaboratively execute task offloading in an MEC setting. Similarly, Sun et al. [29] harnessed SAC within 6G mobile networks, achieving heightened data efficiency and reliability in MEC settings.
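For reference, the stability and exploration behavior attributed to SAC stem from its entropy-regularized objective. In the standard SAC formulation (stated here generically, not as taken from the works above), the policy maximizes

$$J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\left[\, r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right],$$

where $r$ is the reward (e.g., negative latency or energy cost in an offloading setting), $\rho_\pi$ is the state-action distribution induced by the policy $\pi$, $\mathcal{H}$ is the policy entropy, and the temperature $\alpha$ trades reward against exploration. Retaining the entropy term in the objective is what yields the stable, sample-efficient learning noted above.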
The advantages and disadvantages of some existing approaches are listed in Table 1.
Table 1. Comparison of existing approaches.
MCC servers possess significantly greater computing capacity than MEC servers and are well equipped to manage peak user request demands. Task offloading can therefore be achieved effectively through cooperation among MEC servers, while tasks with high computational complexity should be delegated to cloud servers. He et al. [30] explored a multi-layer task offloading framework within the MEC environment, facilitating collaboration between MCC and MEC and enabling task offloading to other SDs. Furthermore, Akhlaqi [31] pointed out that the increasing use of cloud services by devices has highlighted congestion problems in centralized clouds, a situation that has prompted the emergence of MEC to decentralize processing. Chen et al. [32] and Mustafa et al. [33] addressed offloading decisions in MEC systems: the former assessed the reliability of multimedia data from IoT devices using a game-based approach, while the latter introduced a reinforcement learning framework for making real-time computation task decisions in dynamic networks.
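A minimal sketch of such a device/edge/cloud tiering rule is shown below; the thresholds and the load check are hypothetical and serve only to illustrate the placement logic described in these works.

```python
# Hypothetical three-tier placement rule echoing the MCC/MEC collaboration
# discussed above: light tasks stay on the device, moderate ones go to a
# nearby MEC server, and highly complex ones are delegated to the cloud.
# Thresholds are illustrative, not taken from the cited works.

def place_task(cycles: float, edge_load: float) -> str:
    DEVICE_LIMIT = 5e8    # max cycles the SD can handle with acceptable delay
    EDGE_LIMIT = 5e9      # beyond this, edge compute time exceeds cloud RTT
    if cycles <= DEVICE_LIMIT:
        return "device"
    if cycles <= EDGE_LIMIT and edge_load < 0.8:
        return "mec_server"   # nearby server: low latency, limited capacity
    return "cloud"            # abundant capacity, higher round-trip delay

for c in (1e8, 2e9, 8e9):
    print(f"{c:.0e} cycles -> {place_task(c, edge_load=0.4)}")
```

In practice, the cited works replace such fixed thresholds with learned or game-theoretic decision rules, but the underlying tiering (device for light tasks, MEC for latency-sensitive ones, cloud for heavy ones) is the same.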