OptiDJS+

The continuously evolving world of cloud computing presents new challenges in resource allocation as distributed systems contend with overloaded conditions. OptiDJS+ is an enhanced dynamic Johnson sequencing algorithm designed to handle resource scheduling challenges in cloud computing environments. Building on the dynamic Johnson sequencing algorithm, OptiDJS+ extends it to meet the demands of modern cloud infrastructures. It employs optimization algorithms, heuristic approaches, and adaptive mechanisms to improve resource allocation, workload distribution, and task scheduling, drawing on historical data, dynamic resource reconfiguration, and adaptation to changing workloads to achieve the best possible performance.

  • OptiDJS+
  • dynamic Johnson sequencing algorithm
  • resource scheduling
  • cloud computing

1. Introduction

Cloud computing has demonstrated its significance in the development of science and technology in the modern world. With cloud technology, customers access resources on a “pay per use” basis and can then work with the resources to which they have been granted access. Different cloud models employ different scheduling methods and virtual machine (VM) allocation processes. It can be challenging for service providers to adjust their resource supply to match demand, so resource management is crucial for cloud service providers. As workloads grow, better resource management is needed to increase system efficiency. The capacity workflow is completed through the coordination of resource allocation, dynamic resource allocation, and strategic planning, subject to the terms of the service level agreement. With its unmatched scalability, flexibility, and cost-effectiveness for both businesses and individuals, cloud computing has emerged as a paradigm shift in the field of information technology. In this era of pervasive digitalization, cloud services have become crucial to a broad range of applications, from data processing and storage to machine learning, Internet of Things deployments, and more. While the dynamic and elastic nature of cloud systems has many benefits, it also presents substantial scheduling and resource management issues; in particular, the widespread saturation of cloud resources is a critical issue that necessitates creative solutions [1].
Resource scheduling in the context of cloud computing is the problem of allocating computing resources, such as CPU, memory, and storage, to a variety of tasks or workloads. These tasks may originate from various users or programs, each with its own priorities and requirements. In conventional static systems, resource allocation might be controlled by static setups and well-defined policies; the cloud environment, by contrast, is anything but constant. Variable workloads, differing user needs, and erratic resource contention are its defining characteristics, and together they make optimal resource scheduling a challenging, ever-changing task. Minimizing the makespan, the total amount of time needed to complete a collection of tasks, is one of the crucial elements of resource scheduling. In cloud computing, reducing the makespan is essential for achieving high resource utilization, meeting service level agreements (SLAs), and ensuring user satisfaction. Dynamic Johnson sequencing (DJS) has long been acknowledged as a successful approach to the makespan reduction challenge. DJS, originally developed for two-stage machine scheduling in manufacturing, has been adapted for cloud resource scheduling; it is renowned for being easy to use and for producing near-optimal results in terms of completion time.
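To make the makespan idea concrete, the sketch below implements the classic two-stage Johnson rule that DJS builds on; it is a minimal textbook illustration, not the OptiDJS+ algorithm itself, and the job names and processing times are hypothetical.
```python
# Minimal sketch of Johnson's rule for two-stage scheduling.
# Each job is (name, time_on_stage1, time_on_stage2); data is hypothetical.

def johnson_sequence(jobs):
    """Order jobs to minimize makespan on a two-machine flow shop."""
    front, back = [], []
    # Jobs whose first-stage time is shorter go early (ascending order);
    # jobs whose second-stage time is shorter go late (descending order).
    for job in sorted(jobs, key=lambda j: min(j[1], j[2])):
        if job[1] <= job[2]:
            front.append(job)
        else:
            back.insert(0, job)
    return front + back

def makespan(sequence):
    """Total time until the last job leaves stage 2."""
    end1 = end2 = 0
    for _, t1, t2 in sequence:
        end1 += t1                   # stage 1 runs jobs back to back
        end2 = max(end2, end1) + t2  # stage 2 waits for stage 1 output
    return end2

jobs = [("A", 3, 6), ("B", 8, 2), ("C", 5, 5), ("D", 2, 7)]
order = johnson_sequence(jobs)
print([name for name, _, _ in order], "makespan =", makespan(order))
```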
Even though DJS is well known and efficient, the constantly changing world of cloud computing necessitates novel methods for resource scheduling. Contemporary cloud settings are characterized by heterogeneous resources, virtualization techniques, and the need for dynamic responses to shifting workloads. Addressing this requires new scheduling algorithms that incorporate real-time monitoring, heuristics, and optimization approaches.

2. Related Work

In distributed computing, job scheduling problems are a principal concern. Scheduling strategies are used to reduce waiting times, and task scheduling extends the use of the cloud to maximize its advantages. Several scheduling algorithms are used to choose a suitable schedule in which the tasks are expected to be completed and to shorten the overall task execution time [2]. Some scheduling algorithms are designed specifically for the cloud environment; they differ from typical scheduling algorithms, which do not apply to cloud frameworks, since the cloud is a distributed environment made up of heterogeneous structures [3]. To help in VM selection for application scheduling, Ref. [4] developed a novel hybrid multi-objective heuristic technique combining NSGA-II and GSA. There is no VM-spanning scheduling method [5][6]. The technique combines the gravitational search algorithm (GSA) and the non-dominated sorting genetic algorithm-II (NSGA-II): NSGA-II can expand the search area via exploration, whereas GSA can use a good solution to locate the best answer and keep the algorithm from being trapped during optimization [7]. This hybrid algorithm aims to schedule a larger number of jobs with the least overall energy usage while obtaining the shortest response time and lowest price. There is no scheduling algorithm that works with several VMs [8]. Ref. [9] uses the modified ant colony optimization for load balancing (MACOLB) approach to allocate incoming tasks to virtual machines (VMs). Jobs are allocated to the VMs according to their processing capacity while keeping the VM workloads balanced (i.e., tasks are distributed in decreasing order, starting with the most powerful VM, and so on). MACOLB is used to shorten the execution time, enhance system load balancing, and choose the optimal resource allocation for batch processes on the public cloud; the biggest flaw in this technique, however, is considered to be the sharing between VMs. To tackle the VM scheduling problem and optimize performance, the join the shortest queue (JSQ) routing method and the Myopic MaxWeight scheduling strategy were developed [10] after abandoning this assumption. The idea separates VMs into several groups, each of which corresponds to a certain resource pool, such as CPU, storage, and space [11]. The user-requested VM type is taken into account as incoming requests are distributed among the virtual servers based on JSQ, which routes connections to the virtual servers with the shortest queue lengths [12][13][14]. Theoretical studies show that the policies in [15][16] can be made throughput-optimal by choosing appropriately large frame lengths, and simulation results show that the policies also achieve useful latency outcomes [17]. To schedule different workloads, several heuristic algorithms have been developed and applied in the public cloud, including an optimal fog-cloud offloading system for massive data optimization across diverse IoT networks [18]. The first come, first served algorithm, the min–max algorithm, the min–min algorithm, and the Sufferage heuristic are among the most significant heuristic approaches. Other important developments include opportunistic load balancing and min–min opportunistic load balancing, shortest task first, balance scheduling (BS), greedy scheduling, and sequence scheduling (SS) [19].
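To make the JSQ idea concrete, the following is a minimal sketch of routing each incoming request to the virtual server with the shortest queue; the VM names and requests are hypothetical illustration data, not code from any of the cited works.
```python
# Minimal sketch of join-the-shortest-queue (JSQ) request routing.
# VM names and request identifiers are hypothetical.

from collections import deque

class JSQRouter:
    def __init__(self, vm_names):
        # One FIFO queue of pending requests per virtual server.
        self.queues = {name: deque() for name in vm_names}

    def route(self, request):
        """Send the request to the VM with the fewest queued requests."""
        target = min(self.queues, key=lambda vm: len(self.queues[vm]))
        self.queues[target].append(request)
        return target

router = JSQRouter(["vm-a", "vm-b", "vm-c"])
for req in ["r1", "r2", "r3", "r4"]:
    print(req, "->", router.route(req))
```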
In addition, a timetable approach that takes constraints into account can be applied; in this situation, the ratio of tasks that are accepted increases, while the ratio of tasks that fail decreases. Modified round-robin (MRR) schedulers are suited for high availability, minimize starvation and delay, and remain fair [20]. By using a more sophisticated RR scheduling approach, resource usage was increased in [21]. The algorithmic approach proposed in [22] offers better turnaround time and response time compared to existing bandwidth-aware task scheduling (BATS) or improved differential evolution algorithm (IDEA) systems; its workloads consist of real scientific activities from CyberShake and epigenomics, and the public cloud benefits from the resulting efficient resource management. For scheduling in an offline cloud context, deep reinforcement learning approaches are recommended [23]. Since they essentially deal only with CPU- and memory-related characteristics, DeepRM and DeepRM2 are modified to address resource scheduling concerns. The scheduling strategy used in this instance is a combination of shortest job first (SJF), longest job first (LJF), tries, and random approaches, and the paper argues in favor of applying reinforcement learning strategies to similar optimization issues [24]. Quality of service (QoS) for real-time resource allocation is difficult to implement: the current situation is compared against statistical and appraised historical data, and resources are then allocated optimally or near-optimally based on the same historical situation. In this case, the unused spectrum is distributed using novel design methodologies and supervised machine learning techniques [25]. Classification mining techniques are widely applied in a variety of contexts, including text, image, multimedia, traffic, medicine, big data, and other application scenarios. The implementation of an axis-parallel binary decision tree classifier (DTC) is described in [26], using a parallelizing structure that significantly reduces the space required. Sequence classification was initially discussed in papers using rules made up of interesting patterns found in a set of labeled episodes and their corresponding class labels. The authors assessed a pattern’s interest in a certain class of sequences by combining the coherence and support of the pattern; they outlined two different strategies for creating classifiers and employed the patterns they identified to create a trustworthy classification model. The structures that the program discovers properly represent the patterns and have been found to outperform other machine learning methods on training sets. For automatic text categorization, a Bayesian classification approach employing class-specific properties was proposed in [27]. Table 1, presented here, summarizes several scheduling techniques and their main emphasis on particular scheduling characteristics; each technique is labeled with a “Yes” or “No” to indicate whether it focuses on a particular scheduling issue.
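As background for the round-robin variants discussed above, the sketch below shows a plain time-quantum round-robin scheduler; this is a generic textbook version, not the MRR scheduler of [20], and the task burst times and quantum are hypothetical.
```python
# Minimal sketch of plain round-robin scheduling with a fixed time quantum.
# Task names, burst times, and the quantum are hypothetical.

from collections import deque

def round_robin(tasks, quantum):
    """Return (task, completion_time) pairs under round-robin scheduling."""
    queue = deque(tasks.items())   # (name, remaining_burst_time)
    clock, completions = 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)              # one quantum, or finish
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # re-queue unfinished task
        else:
            completions.append((name, clock))
    return completions

tasks = {"t1": 5, "t2": 3, "t3": 8}
print(round_robin(tasks, quantum=2))
```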
Some scheduling algorithms, such as the genetic algorithm, round-robin, the priority-based job-scheduling algorithm, DENS, and multi-homing, place a strong emphasis on factors like “Makespan” (reducing the maximum completion time) and “Completion Time.” On the other hand, techniques like simulated annealing and reinforcement learning are geared more toward “Total Processing Time” and “Completion Time.” In addition, there are techniques with a combined focus on various scheduling elements, such as a round-robin variant, min–min, and max–min. These indications give users information about the benefits and considerations of each scheduling technique, allowing them to select the one that best suits their needs and goals [28].
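Since several of the techniques above build on the min–min idea, here is a minimal sketch of the classic min–min heuristic, a generic illustration with hypothetical task/VM execution times rather than code from the cited works: each round, it picks the task whose earliest possible completion time is smallest and assigns it to the corresponding machine.
```python
# Minimal sketch of the classic min-min scheduling heuristic.
# exec_time[task][vm] is the (hypothetical) runtime of a task on a VM.

def min_min(exec_time, n_vms):
    ready = [0.0] * n_vms          # time at which each VM becomes free
    unscheduled = set(exec_time)
    schedule = []
    while unscheduled:
        # For each task, find its minimum completion time over all VMs,
        # then schedule the task with the smallest such minimum.
        task, vm, finish = min(
            ((t, v, ready[v] + exec_time[t][v])
             for t in unscheduled for v in range(n_vms)),
            key=lambda x: x[2],
        )
        ready[vm] = finish
        unscheduled.remove(task)
        schedule.append((task, vm, finish))
    return schedule, max(ready)    # assignments and resulting makespan

exec_time = {"t1": [4, 6], "t2": [3, 2], "t3": [7, 5]}
assignments, total = min_min(exec_time, n_vms=2)
print(assignments, "makespan =", total)
```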
Table 1.
Research gaps identified in the related literature, based on the parameters used in the proposed method to overcome them.
  • For task and execution mapping, several rules are taken into consideration, since performance-based execution time optimization is the primary goal. In a market system, the only aspect that counts is pricing, and two market-based scheduling techniques, the backtracking algorithm and the genetic algorithm, are built on top of it. Static scheduling allows for the use of any conventional scheduling technique, including round-robin, min–min, and FCFS. Dynamic scheduling may use any heuristic technique, including genetic algorithms and particle swarm optimization.
  • In this type of task-based scheduling, which is widely used for repeating tasks, the processing time is updated as each task finishes. Under dynamic scheduling, the number of jobs, the location of the machines, and the allocation of resources are not fixed, and when the jobs will arrive is unknown before submission.
REF. NO | Scheduling Method | Makespan | Total Processing Time | Completion Time | Turnaround Time
1 | Genetic algorithm | Yes | No | Yes | No
2 | Simulated annealing | No | Yes | Yes | Yes
3 | Round-robin | Yes | No | Yes | No
4 | Round-robin | No | Yes | No | Yes
5 | Min–min, max–min | Yes | No | No | Yes
6 | Meta-heuristic | No | Yes | Yes | No
7 | Reinforcement learning | Yes | No | Yes | No
15 | Ant colony method | No | Yes | No | No
18 | Priority-based job-scheduling algorithm | Yes | Yes | Yes | No
19 | DENS | Yes | No | Yes | No
20 | Multi-homing | Yes | Yes | Yes | No
After an analysis of Table 1, the following points are observed and addressed in the proposed method:
  • Approaches for measuring clouds include batch, interactive, and real-time approaches. Batch systems allow throughput and turnaround times to be forecast. To grade responsiveness and fairness, a live, interactive system that keeps track of deadlines might be used. Market and performance are the key topics of the third category.