OptiDJS+

The continuously evolving world of cloud computing presents new challenges in resource allocation as distributed systems struggle with overloaded conditions. In this regard, OptiDJS+ is an enhanced dynamic Johnson sequencing algorithm designed to handle resource scheduling challenges in cloud computing settings. OptiDJS+ builds on the dynamic Johnson sequencing algorithm and extends it to suit the demands of modern cloud infrastructures. It makes use of sophisticated optimization algorithms, heuristic approaches, and adaptive mechanisms to improve resource allocation, workload distribution, and task scheduling. To obtain the best performance, the strategy uses historical data, dynamic resource reconfiguration, and adaptation to changing workloads.

  • OptiDJS+
  • dynamic Johnson sequencing algorithm
  • resource scheduling
  • cloud computing

1. Introduction

Cloud computing has demonstrated its significance in the development of science and technology in the modern world. With cloud technology, customers access resources on a "pay-per-use" basis and can then examine the resources to which they have been granted access. Different scheduling methods and virtual machine (VM) allocation processes are used by various cloud models. It can be challenging for service providers to adjust their resource supply to match demand; consequently, resource management is crucial for cloud service providers. As workloads grow, better resource management is needed to increase system efficiency. The capacity workflow is completed through the coordination of resource allocation, dynamic resource allocation, and strategic planning, depending on the terms of the service level agreement. Cloud computing, with its unmatched scalability, flexibility, and cost-effectiveness for both businesses and individuals, has emerged as a paradigm shift in the field of information technology. In this era of pervasive digitalization, cloud computing services have become crucial to a broad range of applications, from data processing and storage to machine learning, Internet of Things deployments, and more. While the dynamic and elastic nature of cloud systems has many benefits, it also presents substantial scheduling and resource management issues. In particular, the widespread saturation of cloud resources is a critical issue that necessitates creative solutions [1].
In the context of cloud computing, resource scheduling is the problem of allocating computing resources, such as CPU, memory, and storage, to a variety of tasks or workloads. These tasks may originate from different users or programs, each with its own priorities and requirements. In conventional static systems, resource allocation can be controlled by static configurations and well-defined policies. The cloud environment, however, is anything but constant: it is characterized by variable workloads, differing user needs, and erratic resource contention. These elements combine to make optimal resource scheduling a challenging and ever-changing task. One of the crucial elements of resource scheduling is minimizing the makespan, which is the total amount of time needed to complete a set of tasks. In cloud computing, reducing the makespan is essential for achieving high resource utilization, meeting service level agreements (SLAs), and ensuring user satisfaction. Dynamic Johnson sequencing (DJS) has long been acknowledged as a successful solution to the makespan reduction challenge. DJS, which was originally developed for machine scheduling in manufacturing, has been adapted for cloud resource scheduling. It is well known for its simplicity and for producing near-optimal results in terms of completion time.
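To make the makespan objective and the sequencing rule concrete, the sketch below applies classic two-machine Johnson sequencing, the rule that DJS builds on, and computes the resulting makespan. It is an illustration only, not the OptiDJS+ implementation; the task names and processing times are assumed values.

```python
# A minimal sketch of classic two-machine Johnson sequencing (the basis of DJS).
# Task names and processing times are illustrative, not taken from the entry.

def johnson_sequence(jobs):
    """jobs: dict mapping job id -> (time on machine 1, time on machine 2).
    Returns a job order that minimizes the two-machine makespan."""
    front = sorted((j for j, (a, b) in jobs.items() if a <= b),
                   key=lambda j: jobs[j][0])               # short first-stage jobs go first
    back = sorted((j for j, (a, b) in jobs.items() if a > b),
                  key=lambda j: jobs[j][1], reverse=True)   # short second-stage jobs go last
    return front + back

def makespan(order, jobs):
    """Time at which the last job leaves machine 2."""
    m1_free = m2_free = 0
    for j in order:
        a, b = jobs[j]
        m1_free += a                          # machine 1 processes jobs back to back
        m2_free = max(m2_free, m1_free) + b   # machine 2 waits for machine 1 if idle
    return m2_free

tasks = {"t1": (3, 6), "t2": (5, 2), "t3": (4, 4)}
order = johnson_sequence(tasks)
print(order, makespan(order, tasks))          # ['t1', 't3', 't2'] 15
```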
Even though DJS is well known and efficient, the constantly changing world of cloud computing necessitates novel methods for resource scheduling. Heterogeneous resources, virtualization techniques, and the requirement for dynamic responses to shifting workloads are all characteristics of contemporary cloud settings. To address this, new scheduling algorithms that incorporate real-time monitoring, heuristics, and optimization approaches must be created.

2. Related Work

In distributed computing, job scheduling is a primary concern. Scheduling strategies are employed to reduce waiting times, and task scheduling extends the benefits of the cloud by making the best use of its resources. Several scheduling algorithms are used to select a suitable schedule in which tasks are to be completed and to shorten the overall task execution time [2]. Some scheduling algorithms designed for the cloud environment differ from conventional scheduling algorithms, which do not apply to cloud frameworks because the cloud is a distributed environment composed of heterogeneous systems [3]. To assist VM selection for application scheduling, Ref. [4] developed a novel hybrid multi-objective heuristic technique combining NSGA-II and GSA. There is no scheduling method that spans VMs [5][6]. The technique combines the gravitational search algorithm (GSA) and the non-dominated sorting genetic algorithm-II (NSGA-II). NSGA-II can expand the search space through exploration, whereas GSA exploits good solutions to locate the best answer and keeps the algorithm from becoming trapped during optimization [7]. This hybrid algorithm aims to schedule a larger number of jobs with the lowest overall energy usage while achieving the shortest response time and lowest price. There is no scheduling algorithm that works across several VMs [8]. Ref. [9] uses the modified ant colony optimization for load balancing (MACOLB) approach to allocate incoming tasks to virtual machines (VMs). Jobs are allocated to the VMs according to their processing capacity while keeping the VM workloads balanced (i.e., tasks are distributed in decreasing order, starting with the most powerful VM, and so on). MACOLB is used to shorten execution time, enhance system load balancing, and choose the optimal resource allocation for batch processes on the public cloud. The biggest flaw in this technique, however, is considered to be the sharing between VMs. After abandoning this assumption, the join-the-shortest-queue (JSQ) routing method and the Myopic MaxWeight scheduling strategy were developed to tackle the VM scheduling problem and optimize performance [10]. The idea separates VMs into several groups, each of which corresponds to a certain resource pool, such as CPU and storage space [11]. The user-requested VM type is taken into account as incoming requests are distributed among the virtual servers based on JSQ, which routes requests to the virtual servers with the shortest queue lengths [12][13][14]. Theoretical studies show that the policies in [15][16] can be made throughput-optimal by choosing sufficiently large frame lengths. Simulation results show that these policies also achieve useful latency outcomes [17]. To schedule diverse workloads, several heuristic algorithms have been developed and deployed in the public cloud, including an optimal fog–cloud offloading framework for big data optimization across heterogeneous IoT networks [18]. The first-come, first-served algorithm, the max–min algorithm, the min–min algorithm, and the Sufferage heuristic are among the most significant heuristic approaches. Other important developments include opportunistic load balancing, min–min opportunistic load balancing, shortest task first, balance scheduling (BS), greedy scheduling, and sequence scheduling (SS) [19].
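To illustrate one of the heuristics named above, the following is a minimal sketch of the min–min policy, under the common assumption that a task's completion time equals the VM's ready time plus the task length divided by the VM speed. The task lengths and VM speeds are illustrative values, not taken from the cited works.

```python
# A minimal sketch of the min-min heuristic: the task with the smallest best-case
# completion time is scheduled first, and the chosen VM's ready time is updated.
# Task lengths and VM speeds below are assumed, illustrative values.

def min_min(task_lengths, vm_speeds):
    """Assign each task to a VM; returns the assignment and the resulting makespan."""
    ready = [0.0] * len(vm_speeds)          # time at which each VM becomes free
    assignment = {}
    unscheduled = set(task_lengths)
    while unscheduled:
        # For every pending task and VM, compute the completion time; keep the minimum.
        finish, task, vm = min(
            (ready[v] + task_lengths[t] / vm_speeds[v], t, v)
            for t in unscheduled for v in range(len(vm_speeds))
        )
        assignment[task] = vm
        ready[vm] = finish                  # that VM is busy until `finish`
        unscheduled.remove(task)
    return assignment, max(ready)

tasks = {"t1": 40.0, "t2": 10.0, "t3": 25.0, "t4": 5.0}
vms = [1.0, 2.0]                            # relative processing speeds
print(min_min(tasks, vms))
```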
In addition, a scheduling approach that takes constraints into account can be applied. In this situation, the ratio of tasks that are accepted increases, while the ratio of tasks that fail decreases. Modified round-robin (MRR) algorithms are suited for high availability, minimize starvation and delay, and remain fair [20]. By using a more sophisticated RR scheduling approach, resource usage was increased in [21]. The algorithmic approach proposed in [22] offers better turnaround time and response time than existing BATS or improved differential evolution algorithm (IDEA) frameworks. The workloads consist of real scientific workflows from CyberShake and Epigenomics. The public cloud will benefit from efficient resource management. For scheduling algorithms in an offline cloud context, deep reinforcement learning approaches are recommended [23]. Since they deal almost exclusively with CPU- and memory-related characteristics, DeepRM and DeepRM2 are modified to address resource scheduling concerns. The scheduling strategy used in this instance is a combination of shortest job first (SJF), longest job first (LJF), Tetris, and random approaches. This work argues in favor of applying reinforcement learning strategies to similar optimization issues [24]. Guaranteeing quality of service (QoS) in real-time resource allocation is difficult. The current situation is compared against statistical and estimated data, and then, based on similar historical situations, resources are allocated in an optimal or near-optimal way. In this case, the unused spectrum is distributed using novel design methodologies and supervised machine learning techniques [25]. Classification mining techniques are widely applied in a variety of contexts, including text, image, multimedia, traffic, medical, big data, and other application scenarios. The implementation of an axis-parallel binary decision tree classifier (DTC) is described in [26] using a parallelizing structure that significantly reduces the space resources required. Sequence classification was initially discussed in some papers using rules made up of interesting patterns found in a set of labeled sequences and their corresponding class labels. The authors assessed a pattern's interest for a certain class of sequences by combining the coherence and support of the pattern. They outlined two different strategies for creating classifiers and employed the patterns they identified to create a trustworthy classification model. The structures discovered by the program properly represent the patterns and have been found to outperform other machine learning methods on the training sets. For automatic text categorization, a Bayesian classification approach employing class-specific features was proposed in [27]. Table 1 summarizes several scheduling techniques and their main emphasis on particular scheduling characteristics. Each technique is labeled with a "Yes" or "No" to indicate whether it focuses on a particular scheduling issue.
Some scheduling algorithms, such as the genetic algorithm, round-robin, the priority-based job-scheduling algorithm, DENS, and multi-homing, place a strong emphasis on factors such as "Makespan" (reducing the maximum completion time) and "Completion Time." On the other hand, techniques such as simulated annealing and reinforcement learning are geared more toward "Total Processing Time" and "Completion Time." In addition, there are techniques with a combined focus on several scheduling elements, such as round-robin (a different variant), min–min, and max–min. These indications give users insight into the benefits and considerations of each scheduling technique, allowing them to select the one that best suits their needs and goals [28].
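Before turning to Table 1, the four column metrics can be made concrete with a small sketch that computes makespan, total processing time, completion time, and turnaround time for a batch of tasks already assigned to VMs. The runtimes and assignment are illustrative assumptions, and all tasks are assumed to arrive at time zero, so turnaround time equals completion time here.

```python
# A small illustration (not from the surveyed works) of the four Table 1 metrics
# for tasks already mapped to VMs. Each VM runs its tasks back to back.

def schedule_metrics(assignment, runtimes):
    """assignment: task -> VM id; runtimes: task -> execution time."""
    vm_clock = {}                                  # current finish time per VM
    completion = {}
    for task, vm in assignment.items():            # tasks run in insertion order per VM
        vm_clock[vm] = vm_clock.get(vm, 0.0) + runtimes[task]
        completion[task] = vm_clock[vm]
    makespan = max(vm_clock.values())              # time at which the last VM finishes
    total_processing = sum(runtimes.values())      # summed execution time over all tasks
    turnaround = dict(completion)                  # completion - arrival, with arrival = 0
    return makespan, total_processing, completion, turnaround

assignment = {"t1": "vm1", "t2": "vm2", "t3": "vm1"}
runtimes = {"t1": 6.0, "t2": 9.0, "t3": 2.0}
print(schedule_metrics(assignment, runtimes))
```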
Table 1.
Research gap findings in the related literature, based on the parameters used in the proposed method to overcome the gap.
  • In this type of task-based scheduling, which is widely used for repeating tasks, the processing time is updated as each task finishes. With dynamic scheduling, the number of jobs, the location of the machines, and the allocation of resources are not fixed, and the time at which jobs arrive is not known before submission.
Ref. No. | Scheduling Method | Makespan | Total Processing Time | Completion Time | Turnaround Time
1 | Genetic algorithm | Yes | No | Yes | No
2 | Simulated annealing | No | Yes | Yes | Yes
3 | Round-robin | Yes | No | Yes | No
4 | Round-robin | No | Yes | No | Yes
5 | Min–min, Max–min | Yes | No | No | Yes
6 | Meta-heuristic | No | Yes | Yes | No
7 | Reinforcement learning | Yes | No | Yes | No
15 | Ant colony method | No | Yes | No | No
18 | Priority-based job-scheduling algorithm | Yes | Yes | Yes | No
19 | DENS | Yes | No | Yes | No
20 | Multi-homing | Yes | Yes | Yes | No
After an analysis of Table 1, the following points are observed and addressed in the proposed method:
  • Approaches to cloud scheduling include batch, interactive, and real-time approaches. Batch systems allow throughput and turnaround times to be forecast. A live, interactive system that keeps track of deadlines can be used to grade responsiveness and fairness. The third category focuses primarily on the market and on performance.
  • For task and execution mapping, several rules are taken into consideration, since performance-based execution time optimization is the primary goal. In a market system, pricing is the only aspect that counts, and two market-based scheduling techniques, the backtrack algorithm and the genetic algorithm, are built on top of it. Static scheduling allows the use of any conventional scheduling technique, including round-robin, min–min, and FCFS; dynamic scheduling may use any heuristic technique, including genetic algorithms and particle swarm optimization. A minimal sketch of one such static policy, FCFS dispatch, is given after this list.
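As a concrete example of the static category above, the following is a minimal sketch of FCFS dispatch across a fixed pool of VMs; it is illustrative only, not the proposed method, and the task lengths and VM count are assumed values.

```python
# A minimal sketch of static FCFS dispatch: tasks are assigned, in arrival order,
# to whichever VM frees up first. Task lengths and VM count are assumed values.
import heapq

def fcfs_dispatch(task_lengths, num_vms):
    """task_lengths: execution times in arrival order.
    Returns per-task (vm, finish_time) and the overall makespan."""
    free_at = [(0.0, vm) for vm in range(num_vms)]  # min-heap of (free time, vm)
    heapq.heapify(free_at)
    placement = []
    for length in task_lengths:
        free_time, vm = heapq.heappop(free_at)      # earliest-available VM
        finish = free_time + length
        placement.append((vm, finish))
        heapq.heappush(free_at, (finish, vm))       # that VM is now busy until `finish`
    makespan = max(f for _, f in placement)
    return placement, makespan

print(fcfs_dispatch([5.0, 3.0, 8.0, 2.0], num_vms=2))
```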

References

  1. Talukder, M.K.; Buyya, R. Multiobjective differential evolution for scheduling workflow applications on global grids. Concurr. Comput. Pract. Exp. 2009, 21, 1742–1756.
  2. Banerjee, P.; Roy, S. An Investigation of Various Task Allocating Mechanism in Cloud. In Proceedings of the 2021 5th International Conference on Information Systems and Computer Networks (ISCON), Mathura, India, 22–23 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–6.
  3. Banerjee, P.; Tiwari, A.; Kumar, B.; Thakur, K.; Singh, A.; Dehury, M.K. Task Scheduling in cloud using Heuristic Technique. In Proceedings of the 2023 7th International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 11–13 April 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 709–716.
  4. Baranwal, G.; Vidyarthi, D.P. A fair multi-attribute combinatorial double auction model for resource allocation in cloud computing. J. Syst. Softw. 2015, 108, 60–76.
  5. Barrett, E.; Howley, E.; Duggan, J. A learning architecture for scheduling workflow applications in the cloud. In Proceedings of the Ninth IEEE European Conference on Web Services (ECOWS), Lugano, Switzerland, 14–16 September 2011; pp. 83–90.
  6. Cheng, C.; Li, J.; Wang, Y. An energy-saving task scheduling strategy based on vacation queuing theory in cloud computing. Tsinghua Sci. Technol. 2015, 20, 28–39.
  7. Lin, C.-C.; Liu, P.; Wu, J.-J. Energy-efficient Virtual Machine Provision Algorithms for Cloud Systems. In Proceedings of the 2011 Fourth IEEE International Conference on Utility and Cloud Computing, Melbourne, Australia, 5–8 December 2011; pp. 81–88.
  8. Delavar, A.G.; Javanmard, M.; Shabestari, M.B.; Talebi, M.K. RSDC (Reliable scheduling distributed in cloud computing). Int. J. Comput. Sci. Eng. Appl. 2012, 2, 1–16.
  9. Ding, J.; Zhang, Z.; Ma, R.T.; Yang, Y. Auction-based cloud service differentiation with service level objectives. Comput. Netw. 2016, 94, 231–249.
  10. El-Sayed, T.E.; El-Desoky, A.I.; Al-Rahamawy, M.F. Extended maxmin scheduling using Petri net and load balancing. Int. J. Soft Comput. Eng. 2012, 2, 198–203.
  11. Shaikh, F.B.; Haider, S. Security threats in cloud computing. In Proceedings of the 6th International IEEE Conference on Internet Technology and Secured Transaction, Abu Dhabi, United Arab Emirates, 11–14 December 2012; pp. 214–219.
  12. Gan, G.; Huang, T.; Gao, S. Genetic simulated annealing algorithm for task scheduling based on cloud computing environment. In Proceedings of the IEEE International Conference on Intelligent Computing and Integrated Systems (ICISS), Guilin, China, 22–24 October 2010; pp. 60–63.
  13. Ge, J.W.; Yuan, Y.S. Research of cloud computing task scheduling algorithm based on improved genetic algorithm. Appl. Mech. Mater. 2013, 347, 2426–2429.
  14. Khazaei, H.; Misic, J.; Misic, V.B. Performance Analysis of Cloud Computing Centers Using M/G/m/m+r Queuing Systems. IEEE Trans. Parallel Distrib. Syst. 2012, 23, 936–943.
  15. Khazaei, H.; Misic, J.; Misic, V.B. Modelling of Cloud Computing Centers Using M/G/m Queues. In Proceedings of the 31st IEEE International Conference on Distributed Computing Systems Workshops (ICDCSW), Minneapolis, MN, USA, 20–24 June 2011; pp. 87–92.
  16. Liu, H.; Jin, H.; Liao, X.; Hu, L.; Yu, C. Live migration of virtual machine based on full system trace and replay. In Proceedings of the 18th ACM International Symposium on High Performance Distributed Computing, Garching, Germany, 11–13 June 2009; pp. 101–110.
  17. Himthani, P.; Saxena, A.; Manoria, M. Comparative Analysis of VM Scheduling Algorithms in Cloud Environment. Int. J. Comput. Appl. 2015, 120, 1–6.
  18. Gu, J.; Hu, J.; Zhao, T.; Sun, G. A New Resource Scheduling Strategy Based on Genetic Algorithm in Cloud Computing Environment. J. Comput. 2012, 7, 42–52.
  19. Ye, K.; Jiang, X.; Ye, D.; Huang, D. Two Optimization Mechanisms to Improve the Isolation Property of Server Consolidation in Virtualized Multi-core Server. In Proceedings of the 12th IEEE International Conference on Performance Computing and Communications, Melbourne, Australia, 1–3 September 2010; pp. 281–288.
  20. Bebortta, S.; Tripathy, S.S.; Modibbo, U.M.; Ali, I. An optimal fog-cloud offloading framework for big data optimization in heterogeneous IoT networks. Decis. Anal. J. 2023, 8, 100295.
  21. Kalra, M.; Singh, S. A review of metaheuristic scheduling techniques in cloud computing. Egypt. Inform. J. 2015, 16, 275–295.
  22. Kumar, K.; Hans, A.; Sharma, A.; Singh, N. A Review on Scheduling Issues in Cloud Computing. In Proceedings of the International Conference on Advancements in Engineering and Technology (ICAET 2015), Incheon, Republic of Korea, 11–13 December 2015; pp. 4–7.
  23. Kumar, N.; Sankar, S.G.; Kumar, M.N.; Manikanta, P.; Aravind, V.S. Enhanced Real-Time Group Auction System for Efficient Allocation of Cloud Internet Applications. IJITR 2016, 4, 2836–2840.
  24. Kuo, R.J.; Cheng, C. Hybrid meta-heuristic algorithm for job shop scheduling with due date time window and release time. Int. J. Adv. Manuf. Technol. 2013, 67, 59–71.
  25. Lee, C.; Wang, P.; Niyato, D. A real-time group auction system for efficient allocation of cloud internet applications. IEEE Trans. Serv. Comput. 2015, 8, 251–268.
  26. Hines, M.; Gopalan, K. Post-copy based live virtual machine migration using adaptive pre-paging and dynamic self-ballooning. In Proceedings of the 2009 ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments, Washington, DC, USA, 11–13 March 2009; pp. 51–60.
  27. Mangla, N.; Singh, M.; Rana, S.K. Resource Scheduling in Cloud Environment: A Survey. Adv. Sci. Technol. Res. J. 2016, 10, 38–50.
  28. Schmidt, M.; Fallenbeck, N.; Smith, M.; Freisleben, B. Efficient Distribution of Virtual Machines for Cloud Computing. In Proceedings of the 18th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP 2010), Pisa, Italy, 17–19 February 2010; pp. 567–574.