Multi-Access Edge Computing: Comparison
Please note this is a comparison between Version 2 by Rita Xu and Version 1 by Ducsun Lim.

Multi-access edge computing (MEC), based on hierarchical cloud computing, offers abundant resources to support the next-generation Internet of Things network.

  • mobile edge computing
  • directed acyclic graphs
  • deep reinforcement learning
  • multi-access edge computing
  • soft actor critic

1. Introduction

In recent years, advances in wireless technology combined with the widespread adoption of the Internet of Things (IoT) have paved the way for innovative computation-intensive applications, including augmented reality (AR), mixed reality (MR), virtual reality (VR), online gaming, intelligent transportation, and industrial and home automation. Consequently, the demand for these applications has surged [1][2][3]. By 2020, the number of IoT devices was projected to skyrocket to 24 billion. This tremendous increase means that vast numbers of smart devices (SDs) and sensors now generate and process an immense volume of data [4].
To cater to these computationally intensive applications, substantial computing resources and high performance are required. Addressing the escalating need for energy efficiency and managing the swift influx of user requests have emerged as significant challenges [5]. Initially, mobile cloud computing (MCC) was considered a viable solution for processing computationally intensive tasks. However, as the demand for real-time processing increased, the limitations of MCC became apparent [6], resulting in the introduction of mobile edge computing (MEC) as a potential solution to meet this burgeoning demand [7].
Multi-access edge computing [8] is effective at deploying computing resources close to SDs and supports collaborative radio resource management (CRRM) and collaborative signal processing (CRSP). Conversely, a cloud radio access network (C-RAN) employs centralized signal processing and resource allocation, efficiently catering to user requirements [9]. Collectively, the attributes of these technologies have the potential to fulfill diverse requirements in upcoming artificial intelligence (AI)-based wireless networks [10].
Leveraging MEC to offload tasks is a promising approach to curtail network latency and conserve energy. Specifically, MEC addresses the computational offloading requirements of IoT devices by processing tasks closer to the edge rather than relying solely on a central cloud [11]. However, since the task offloading problem is recognized as a non-deterministic polynomial-time hard (NP-hard) problem [12], addressing it is challenging. Although most research in this area has leaned toward heuristic or convex optimization algorithms, the increasing complexity of MEC coupled with varying radio channel conditions makes it difficult to consistently guarantee optimal performance using these conventional methods. Given that optimization problems often require frequent resolution, meticulous planning is imperative for designing and managing future MEC networks.
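To make the latency/energy trade-off behind offloading concrete, the sketch below evaluates the standard binary-offloading cost model commonly used in the MEC literature (not a model from this text): local execution costs CPU cycles divided by clock speed plus dynamic CMOS energy, while offloading costs uplink transmission time plus edge execution time, with the device spending energy only on transmission. All parameter values are illustrative assumptions.

```python
# Minimal sketch of the standard binary-offloading cost model (illustrative parameters).

def local_cost(cycles, f_local, kappa=1e-27):
    """Latency (s) and energy (J) of executing a task on the device itself."""
    latency = cycles / f_local
    energy = kappa * f_local ** 2 * cycles  # dynamic CMOS energy model
    return latency, energy

def offload_cost(bits, rate, cycles, f_edge, p_tx):
    """Latency and device-side energy when offloading to an MEC server."""
    t_up = bits / rate                   # uplink transmission time
    latency = t_up + cycles / f_edge     # transmission + edge execution
    energy = p_tx * t_up                 # device only spends energy transmitting
    return latency, energy

# Hypothetical task: 1 Mb of input data, 1e9 CPU cycles of work.
t_l, e_l = local_cost(cycles=1e9, f_local=1e9)                       # 1 GHz device CPU
t_o, e_o = offload_cost(bits=1e6, rate=1e7, cycles=1e9,
                        f_edge=1e10, p_tx=0.5)                       # 10 Mb/s uplink, 10 GHz edge CPU
should_offload = t_o < t_l and e_o < e_l
```

With these illustrative numbers, offloading dominates local execution in both latency and energy, which is the regime MEC targets; under a slow channel, `t_up` grows and the decision flips, which is why the problem is non-trivial at scale.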
In recent years, deep reinforcement learning (DRL), a subset of AI, has gained significant attention owing to its ability to tackle complex challenges across various sectors. As IoT networks become more distributed, the need for decentralized decision-making to enhance throughput and reduce power consumption increases, with DRL serving as a key tool. The emergence of the MEC paradigm has added complexity to multi-user, multi-server environments, bringing data-offloading decision-making to the forefront [13]. This MEC landscape necessitates addressing both user behavioral aspects and server pricing policies. A recent study combined prospect theory and the tragedy of the commons to model user satisfaction and potential server overexploitation, highlighting the intricate nature of the problem. In the context of MEC, while some research has explored DRL for task offloading, the focus has been predominantly on holistic offloading, overlooking the advantages of partial offloading, such as reduced latency and improved quality of service (QoS). Collaborative efforts among MEC servers, especially within a multi-server framework, have proven particularly useful in enhancing overall system performance.

2. Multi-Access Edge Computing

Recent research in the field of MEC has aimed to reduce latency and energy consumption through computation offloading and resource allocation techniques. A heuristic offloading algorithm designed to efficiently manage computationally intensive tasks was introduced [14]. This algorithm can achieve high throughput and minimize latency when transferring tasks from an SD to an MEC server. However, despite its critical role in enhancing overall system performance, the decision-making process for offloading under the algorithm is overly focused on task priorities. A collaborative method between fog and cloud computing to curtail service delays on IoT devices was explored [15]. This study focused on strategies for optimizing computing offloading, allocating computing resources, managing wireless bandwidth, and determining transmission power within a combined cloud/fog computing infrastructure. The overarching goal of these optimization strategies was to reduce both latency and energy consumption. Notably, the authors of both [16][17] employed sub-optimal methods, favoring minimal complexity, and they highlighted the significance of practical and efficient approaches. The dynamics of energy link selection and transmission scheduling, particularly when processing applications that demanded optimal energy within a network linking SDs and MEC servers, were investigated [18]. Relying on an energy consumption model, the authors formulated an algorithm for energy-efficient link selection and transmission scheduling. An integrated algorithm that facilitated adaptive long-term evolution (LTE)/Wi-Fi link selection and data transmission scheduling was presented to enhance the energy efficiency of SDs in MCC systems [19]. Upon evaluation, the proposed algorithm outperformed its counterparts in terms of energy efficiency. Furthermore, it demonstrated proficiency in managing battery life, especially when considering the unpredictable nature of wireless channels.
While these two studies prioritized energy efficiency and the proposed algorithms showed commendable performance, they did not address the adaptability required under varying network conditions. The challenges of processing vast amounts of data and computational tasks using deep Q-network (DQN)-based edge intelligence within the MEC framework were addressed in [20]. The authors of this study focused on the distribution of computational tasks and the allocation of resources between edge devices and cloud servers. Meanwhile, the authors of [21] addressed the performance degradation and energy imbalances in SDs with a deep reinforcement learning-based offloading scheduler (DRL-OS). Notably, as the number of wireless devices grows rapidly, the costs associated with DQN-based methodologies also increase. Several studies have leveraged actor-critic-based offloading in MEC environments to optimize service quality by analyzing agent behaviors and policies [22][23]. The authors of [24] delved into the offloading challenges in multi-server and multi-user settings, whereas the authors of [25] integrated the proximal policy optimization (PPO) algorithm for task offloading decisions. Implementing PPO in practical scenarios can be challenging because of its extensive sampling requirements. Wang et al. [26] conducted a study centered on task offloading decisions using the PPO algorithm, and Li et al. [27] addressed offloading issues within multi-MEC-server, multi-user contexts. Furthermore, several investigations have focused on using the deep deterministic policy gradient (DDPG) algorithm to counteract the offloading issues inherent in the MEC domain. Notably, DDPG outperforms PPO in terms of continuous action-space handling, data efficiency, and stability, making it pivotal for reinforcement learning endeavors in the MEC space and offering effective solutions to offloading challenges.
However, within specific environments, the random-search nature of a network may pose hurdles in identifying the optimal policy. By contrast, the soft actor-critic (SAC) algorithm boasts greater stability than deterministic policies and exhibits excellent sampling efficiency. Modern research is now leveraging SAC to address computational offloading challenges. Liu et al. [28] enhanced data efficiency and stability using SAC, where multiple users collaboratively execute task offloading in an MEC setting. Similarly, Sun et al. [29] harnessed SAC within 6G mobile networks, achieving heightened data efficiency and reliability in MEC settings. The advantages and disadvantages of some existing approaches are listed in Table 1.
Table 1. Comparison of existing approaches.
MCC servers possess significantly greater computing capacities than MEC servers and are well equipped to manage peak user request demands. Therefore, task offloading can be effectively achieved through cooperation among MEC servers; nonetheless, tasks with high computational complexity should be delegated to cloud servers. He et al. [30] explored a multi-layer task offloading framework within the MEC environment, facilitating collaboration between MCC and MEC and enabling task offloading to other SDs. Furthermore, Akhlaqi et al. [31] pointed out that the increasing use of cloud services by devices has highlighted congestion problems in centralized clouds, a situation that has prompted the emergence of MEC to decentralize processing. Chen et al. [32] and Mustafa et al. [33] address offloading decisions in MEC systems: the former assesses the reliability of multimedia data from IoT devices using a game-based approach, while the latter introduces a reinforcement learning framework for making real-time computation task decisions in dynamic networks.

References

  1. Hao, W.; Zeng, M.; Sun, G.; Xiao, P. Edge cache-assisted secure low-latency millimeter-wave transmission. IEEE Internet Things J. 2019, 7, 1815–1825.
  2. Nguyen, Q.-H.; Dressler, F. A smartphone perspective on computation offloading—A survey. Comput. Commun. 2020, 159, 133–154.
  3. Min, M.; Xiao, L.; Chen, Y.; Cheng, P.; Wu, D.; Zhuang, W. Learning-based computation offloading for IoT devices with energy harvesting. IEEE Trans. Veh. Technol. 2019, 68, 1930–1941.
  4. Merenda, M.; Porcaro, C.; Iero, D. Edge machine learning for ai-enabled iot devices: A review. Sensors 2020, 20, 2533.
  5. Hamdan, S.; Ayyash, M.; Almajali, S. Edge-computing architectures for internet of things applications: A survey. Sensors 2020, 20, 6441.
  6. Zheng, J.; Cai, Y.; Wu, Y.; Shen, X. Dynamic computation offloading for mobile cloud computing: A stochastic game-theoretic approach. IEEE Trans. Mob. Comput. 2018, 18, 771–786.
  7. Kekki, S.; Featherstone, W.; Fang, Y.; Kuure, P.; Li, A.; Ranjan, A.; Scarpina, S. MEC in 5G networks. ETSI White Pap. 2018, 28, 1–28.
  8. Porambage, P.; Okwuibe, J.; Liyanage, M.; Ylianttila, M.; Taleb, T. Survey on multi-access edge computing for internet of things realization. IEEE Commun. Surv. Tutor. 2018, 20, 2961–2991.
  9. Peng, M.; Zhang, K. Recent advances in fog radio access networks: Performance analysis and radio resource allocation. IEEE Access 2016, 4, 5003–5009.
  10. Zhao, Z.; Bu, S.; Zhao, T.; Yin, Z.; Peng, M.; Ding, Z.; Quek, T.Q. On the design of computation offloading in fog radio access networks. IEEE Trans. Veh. Technol. 2019, 68, 7136–7149.
  11. Samanta, A.; Chang, Z. Adaptive service offloading for revenue maximization in mobile edge computing with delay-constraint. IEEE Internet Things J. 2019, 6, 3864–3872.
  12. Wang, B.; Song, Y.; Cao, J.; Cui, X.; Zhang, L. Improving Task Scheduling with Parallelism Awareness in Heterogeneous Computational Environments. Future Gener. Comput. Syst. 2019, 94, 419–429.
  13. Mitsis, G.; Tsiropoulou, E.E.; Papavassiliou, S. Price and risk awareness for data offloading decision-making in edge computing systems. IEEE Syst. J. 2022, 16, 6546–6557.
  14. Xiang, X.; Lin, C.; Chen, X. Energy-efficient link selection and transmission scheduling in mobile cloud computing. IEEE Wirel. Commun. Lett. 2014, 3, 153–156.
  15. Zhang, W.; Wen, Y.; Guan, K.; Kilper, D.; Luo, H.; Wu, D.O. Energy-optimal mobile cloud computing under stochastic wireless channel. IEEE Trans. Wirel. Commun. 2013, 12, 4569–4581.
  16. Zhang, Y.; Niyato, D.; Wang, P. Offloading in mobile cloudlet systems with intermittent connectivity. IEEE Trans. Mob. Comput. 2015, 14, 2516–2529.
  17. Guo, F.; Zhang, H.; Ji, H.; Li, X.; Leung, V.C.M. An efficient computation offloading management scheme in the densely deployed small cell networks with mobile edge computing. IEEE ACM Trans. Netw. 2018, 26, 2651–2664.
  18. Haarnoja, T.; Zhou, A.; Hartikainen, K.; Tucker, G.; Ha, S.; Tan, J.; Levine, S. Soft actor-critic algorithms and applications. arXiv 2018, arXiv:1812.05905.
  19. Lim, D.; Lee, W.; Kim, W.-T.; Joe, I. DRL-OS: A Deep Reinforcement Learning-Based Offloading Scheduler in Mobile Edge Computing. Sensors 2022, 22, 9212.
  20. Sartoretti, G.; Paivine, W.; Shi, Y.; Wu, Y.; Choset, H. Distributed learning of decentralized control policies for articulated mobile robots. IEEE Trans. Robot. 2019, 35, 1109–1122.
  21. Wang, J.; Hu, J.; Min, G.; Zhan, W.; Ni, Q.; Georgalas, N. Computation offloading in multi-access edge computing using a deep sequential model based on reinforcement learning. IEEE Commun. Mag. 2019, 57, 64–69.
  22. Wang, Z.; Li, M.; Zhao, L.; Zhou, H.; Wang, N. A3C-based Computation Offloading and Service Caching in Cloud-Edge Computing Networks. In Proceedings of the IEEE INFOCOM 2022-IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Virtual, 2–5 May 2022; pp. 1–2.
  23. Li, S.; Hu, S.; Du, Y. Deep Reinforcement Learning and Game Theory for Computation Offloading in Dynamic Edge Computing Markets. IEEE Access 2021, 9, 121456–121466.
  24. Sun, Y.; He, Q. Computational offloading for MEC networks with energy harvesting: A hierarchical multi-agent reinforcement learning approach. Electronics 2023, 12, 1304.
  25. Yong, D.; Liu, R.; Jia, X.; Gu, Y. Joint Optimization of Multi-User Partial Offloading Strategy and Resource Allocation Strategy in D2D-Enabled MEC. Sensors 2023, 23, 2565.
  26. Liu, K.-H.; Hsu, Y.-H.; Lin, W.-N.; Liao, W. Fine-Grained Offloading for Multi-Access Edge Computing with Actor-Critic Federated Learning. In Proceedings of the 2021 IEEE Wireless Communications and Networking Conference (WCNC), Nanjing, China, 3 March–1 April 2021; pp. 1–6.
  27. Sun, C.; Wu, X.; Li, X.; Fan, Q.; Wen, J.; Leung, V.C.M. Cooperative Computation Offloading for Multi-Access Edge Computing in 6G Mobile Networks via Soft Actor Critic. IEEE Trans. Netw. Sci. Eng. 2021.
  28. He, W.; Gao, L.; Luo, J. A Multi-Layer Offloading Framework for Dependency-Aware Tasks in MEC. In Proceedings of the ICC 2021-IEEE International Conference on Communications, Montreal, QC, Canada, 14–18 June 2021; pp. 1–6.
  29. Akhlaqi, M.Y.; Hanapi, Z.B.M. Task offloading paradigm in mobile edge computing-current issues, adopted approaches, and future directions. J. Netw. Comput. Appl. 2023, 212, 103568.
  30. Chen, Y.; Zhao, J.; Zhou, X.; Qi, L.; Xu, X.; Huang, J. A distributed game theoretical approach for credibility-guaranteed multimedia data offloading in MEC. Inf. Sci. 2023, 644, 119306.
  31. Mustafa, E.; Shuja, J.; Bilal, K.; Mustafa, S.; Maqsood, T.; Rehman, F.; Khan, A.U.R. Reinforcement learning for intelligent online computation offloading in wireless powered edge networks. Clust. Comput. 2023, 26, 1053–1062.
  32. Nath, S.; Wu, J.; Yang, J. Delay and energy efficiency tradeoff for information pushing system. IEEE Trans. Green Commun. Netw. 2018, 2, 1027–1040.
  33. Haarnoja, T.; Tang, H.; Abbeel, P.; Levine, S. Reinforcement learning with deep energy-based policies. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 1352–1361.