Reliable Storage of Cloud Data: Comparison

The prime objective of cloud data storage is to provide a service that is not only highly extensible but also reliable and low-cost, while supporting different data storage types. The storage process must satisfy the cloud users’ requirements. Nevertheless, storing massive amounts of data becomes critical, as it affects data quality and integrity, posing various challenges for existing methodologies. An efficient, reliable cloud storage model is proposed using a hybrid heuristic approach to overcome these challenges. The prime intention of the proposed system is to store data effectively in the cloud environment by resolving two classes of constraints: general and specific (structural). The cloud data were initially gathered and used to analyze the storage performance. Since the data were extensive, different datasets and storage devices were considered. Every piece of data was specified by its corresponding features, whereas the devices were characterized by their hardware or software components. Subsequently, the objective function was formulated using the network’s structural and general constraints. The structural constraints were determined by the interactions between the devices and data instances in the cloud, while the general constraints concerned the data allocation rules and device capacity. To satisfy the constraints, the components were optimized using the Hybrid Pelican–Billiards Optimization Algorithm (HP-BOA) to store the cloud data. Finally, the performance was validated, and the results were analyzed and compared against existing approaches. The proposed model thus exhibited the desired results for storing cloud data appropriately.

  • cloud data storage
  • cloud computing
  • resource allocation
  • virtualization

1. Introduction

On a cloud network, cloud data are extensively distributed by cloud storage providers. While sharing the data services, the cloud users can share their required information within the group, which mitigates the data storage complexity. However, users cannot physically control the storage capacity [1]. Moreover, data integrity can be jeopardized by system faults due to software or hardware and by human intervention errors. To combat such problems, reliable cloud storage is a prerequisite when sharing data on a cloud network [2]. A user may be blocked or removed from the group due to misbehavior; hence, revocation is a standard process when auditing cloud data storage. To ensure security, data management requires a private key to verify the legitimacy of the generated file blocks [3]. Through this authentication process, the file blocks are proven to hold the claimed data. When a user is revoked from a group, his/her private key is also removed from the user group. In traditional auditing approaches [4], the authenticators of revoked users are transformed into the authenticators of the non-revoked cloud user group. In such a scenario, the non-revoked users have to fetch all of the revoked user’s file blocks, re-sign them, and update the new authenticators in the cloud network. Because of the high-dimensional representation of cloud data, this process incurs a high cost in terms of computation and communication overhead [5].
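As a rough illustration of the file-block authentication and re-signing workflow described above, the following Python sketch uses HMAC-SHA256 as a stand-in for the scheme’s actual authenticator construction; the function names and keys are hypothetical and are not the method of [3,4,5].

```python
import hmac
import hashlib

def sign_blocks(file_blocks, private_key):
    """Generate an authenticator (HMAC tag) for each file block."""
    return [hmac.new(private_key, block, hashlib.sha256).digest()
            for block in file_blocks]

def verify_block(block, tag, private_key):
    """Check that a stored block still matches its authenticator."""
    expected = hmac.new(private_key, block, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def resign_after_revocation(file_blocks, new_group_key):
    """On user revocation, the non-revoked group re-signs all blocks
    with a fresh key -- the costly step the text refers to."""
    return sign_blocks(file_blocks, new_group_key)

# Illustrative usage (hypothetical data and keys):
blocks = [b"chunk-0", b"chunk-1"]
key_old, key_new = b"revoked-user-key", b"group-key"
tags = sign_blocks(blocks, key_old)
assert verify_block(blocks[0], tags[0], key_old)
tags = resign_after_revocation(blocks, key_new)  # O(#blocks) re-signing work
```

The assert-driven usage shows why revocation is expensive: every block must be re-signed, so the cost grows linearly with the number of file blocks.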
Several auditing methods have been developed to further resolve the existing issues, along with user revocation in the storage of cloud data [6]. The revoked user groups are transformed into non-revoked groups, where the private key is again required for authentication. This adds to the computational complexity problem, as it results in more file blocks [7]. Hence, this has an impact on the cloud environment. In real-world applications, user transformation is critical to achieving better storage performance [8]. Furthermore, performance degrades as the membership of the group changes frequently. Hence, the challenging factor is to design an effective model for real-time data [9]. Depending on the needs of different users, various data files are employed to store the cloud data [10]. To meet the requirements, standard storage products are implemented by service providers to save the data [11]. Thus, it becomes challenging to achieve cost-effective networks and high provider storage capacity.
In cloud data storage management and virtual machine (VM) assignment, providing low-latency, low-cost, high-quality service and scalability is challenging for researchers. Optimization techniques provide high-quality service. Various technologies have been demonstrated to provide high-quality, reliable cloud data storage, and their features and challenges are summarized in Table 1. CMPSO [12] provides highly secure and reliable resource allocation over wireless networks and has a low computational cost. However, it does not provide a fine-tuned strategy for accommodating connectivity, and the power consumption of the entire system is very high. ANC [13] is easy to implement, increases network performance in terms of robustness and fidelity, and offers very high communication throughput. Yet, this strategy is not flexible under the changing channel qualities of wireless networks, and it increases the total response time for users when the workload is high. The Tabu meta-heuristic [14] meets wireless requirements such as heterogeneity, reliability, and low latency, and it also provides high synchronization and updating of data over wireless networks. However, if an unexpected power outage occurs, the valuable data stored in the data center could be lost and unrecoverable; hence, protecting the cloud storage system is costly. OMT [15] provides automatic services when customers require more services over the network channel, so it can easily interface with applications and data sources. However, it has a higher offloading failure probability, which decreases transmission reliability, and it has less scalability in the search space. The EMSA algorithm [16] is highly elastic, low-cost, and trustworthy. Moreover, the information is quickly accessible by users, and it is more reliable, with high, virtually limitless storage capacity. Yet, it does not meet bandwidth requirements, has a low maturity level, and lacks loop-back connectivity and access control. PKI-based signatures [17] provide greater hardware redundancy and automatic storage failover. However, the packet loss ratio is very high, and the signal-to-noise ratio is very high during packet transmission. Ant colony optimization (ACO) [18] achieves excellent performance by balancing the network load, providing increased security and integrity of information over the network channel. However, it may result in considerable network delays, and it has a high overhead and low service quality in terms of cost, security, and latency. The Fibonacci cryptographic technique [19] can handle network traffic and has low computational complexity. However, it has high consumption of network resources, poor node authentication, a high transmission time, and less caching ability. Hence, to resolve these challenges, a new reliable cloud data storage system was developed with optimization for high-quality service.
Table 1.
Features and challenges of reliable data storage using optimization.
Diverse approaches have been deployed to reduce the cost function and increase the system’s reliability [20]. During the storage of cloud data, critical issues arise, such as transmission and communication overhead.

2. Reliable Cloud Data Storage: System Model and Problem Formulation

2.1. System Model

Reliable cloud data storage has become the most effective process for managing data. Cloud storage is the process of managing data remotely and safeguarding them with third-party servers. When data are stored in the cloud, the cloud can give assurance of improved data security. Four distinct kinds of entities are considered in the storage mechanism: “the data owner, the data user, the cloud user, and the third-party server”. The data owner manages the data and stores them in various VMs. The data user has the capacity to choose the machines from which the data are recovered. Meanwhile, the third-party servers are used to check the data integrity frequently.
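The following minimal Python sketch models these entities and their responsibilities under simplified, assumed interfaces (naive least-loaded placement, SHA-256 digests as integrity evidence); all class and method names are illustrative and are not taken from the proposed system.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    vm_id: int
    capacity: int                                 # capacity in blocks (assumed unit)
    stored: dict = field(default_factory=dict)    # block_id -> bytes

class DataOwner:
    """Outsources data across VMs, then may go offline."""
    def store(self, vms, block_id, data):
        target = min(vms, key=lambda v: len(v.stored))  # naive placement rule
        target.stored[block_id] = data
        return target.vm_id

class DataUser:
    """Chooses the machine from which a block is recovered."""
    def retrieve(self, vms, block_id):
        for vm in vms:
            if block_id in vm.stored:
                return vm.stored[block_id]
        raise KeyError(block_id)

class ThirdPartyAuditor:
    """Frequently checks stored-data integrity on the owner's behalf."""
    def audit(self, vm, block_id, expected_digest):
        data = vm.stored.get(block_id, b"")
        return hashlib.sha256(data).hexdigest() == expected_digest

# Illustrative round trip:
vms = [VirtualMachine(0, 100), VirtualMachine(1, 100)]
DataOwner().store(vms, "b1", b"payload")
assert DataUser().retrieve(vms, "b1") == b"payload"
assert ThirdPartyAuditor().audit(vms[0], "b1",
                                 hashlib.sha256(b"payload").hexdigest())
```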
Storage mechanism: Cloud storage comprises many devices such as machines. Here, data storage is essentially the mapping between logical and physical storage. Hence, considering the required components, the storage network may face several constraints while storing the data on the respective servers or VMs. Broadly, the storage process is differentiated into three types, explained as follows:
1. File storage: The files are placed hierarchically in this type. The information is stored in the metadata format of every file. Hence, the files are managed at a higher level of abstraction, which aids in improving performance.
2. Block storage: Here, the data or files are segmented into different chunks and represented with block addresses (see the sketch after this list). This process does not involve a server for authorization.
3. Object storage: The encapsulation is performed with the object and its metadata. Since the data can be of any type, they are distributed over the cloud. This also ensures the scalability and reliability of the system.
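As referenced in the block storage item above, the following sketch illustrates the chunk-and-address idea in Python; the block size and helper names are assumptions for illustration only.

```python
def to_blocks(data: bytes, block_size: int = 4096):
    """Split a byte stream into fixed-size chunks, each identified
    by a block address (its index), as in block storage."""
    return {addr: data[i:i + block_size]
            for addr, i in enumerate(range(0, len(data), block_size))}

def from_blocks(blocks: dict) -> bytes:
    """Reassemble the original stream from addressed blocks."""
    return b"".join(blocks[addr] for addr in sorted(blocks))

payload = b"x" * 10_000
blocks = to_blocks(payload)        # 3 blocks: 4096 + 4096 + 1808 bytes
assert from_blocks(blocks) == payload
```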
The major goals of designing a reliable cloud data storage system are listed below:
  • Data reliability and availability: By storing the data across multiple machines or servers, the data user can obtain the encoded data and decode them back into the original data (a minimal sketch follows this list). When any server fails, the data are served by the other working servers, thereby enhancing the data integrity and reliability of the cloud network.
  • Security: A better system enhances the security level. It also verifies data integrity and confidentiality, which protects the network from corrupted services.
  • Offline data owner: Once the data are outsourced to a server or machine, the data owner does not need to stay online to check the integrity of the stored data, as this check can be delegated to the third-party server.
  • Efficiency: Under this objective, the system achieves efficiency in terms of reduced storage space, resolution of the communication and computation overhead problem, and so on.
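As a minimal illustration of the encode-and-recover idea behind the reliability goal above, the sketch below uses a single XOR parity shard, a simple stand-in for whatever erasure code a real deployment would use; the shard contents and function names are hypothetical.

```python
from functools import reduce

def encode_with_parity(shards):
    """Store n data shards plus one XOR parity shard; any single
    shard (server) loss is then recoverable."""
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shards)
    return shards + [parity]

def recover(stored, lost_index):
    """Rebuild the shard on a failed server by XOR-ing the survivors."""
    survivors = [s for i, s in enumerate(stored) if i != lost_index]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivors)

shards = [b"AAAA", b"BBBB", b"CCCC"]   # equal-length shards, one per server
stored = encode_with_parity(shards)
assert recover(stored, lost_index=1) == b"BBBB"   # server 1 failed
```

A single parity shard tolerates one server fault; production systems typically use stronger codes (e.g., Reed–Solomon) to survive multiple failures.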
Considering the above key points, the proposed reliable cloud data storage system using heuristic development is represented in Figure 1.
Figure 1.
Architectural representation of proposed reliable cloud data storage using HP-BOA.
The primary aim of this novel framework is to store cloud data while satisfying the general and structural constraints. Firstly, it considers the components and VMs for storage purposes. Since the components pose various constraints, a new reliable model is introduced. Each piece of cloud data is specified with individual traits and is characterized by hardware and software components. Consequently, a new objective function is derived for solving both classes of constraints. The interactions between devices and data instances in cloud data storage define the structural constraints, while the allocation rules and device capacity form the general constraints. To alleviate the constraint issues, the novel HP-BOA is proposed. In the last stage, the performance was evaluated with standard metrics, and simulation results were obtained. The extensive results proved that the proposed work appropriately stored the cloud data using the components.
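The entry does not reproduce the exact objective function, so the sketch below shows, under assumed penalty weights, how general (capacity/allocation) and structural (device–data interaction) constraints can be folded into a single fitness value to be minimized; a random search stands in for HP-BOA, whose pelican/billiards update rules are beyond this illustration.

```python
import random

def fitness(assignment, data_sizes, capacities, conflicts,
            w_general=10.0, w_structural=5.0):
    """Penalty-based objective: assignment[i] = device storing data item i.
    General constraints: device capacity / allocation rules.
    Structural constraints: forbidden data-device interactions."""
    # General penalty: sum of capacity overflows across devices.
    load = {d: 0 for d in range(len(capacities))}
    for item, dev in enumerate(assignment):
        load[dev] += data_sizes[item]
    general = sum(max(0, load[d] - capacities[d]) for d in load)

    # Structural penalty: count of (item, device) pairs violating an
    # interaction rule (e.g., incompatible hardware/software components).
    structural = sum(1 for item, dev in enumerate(assignment)
                     if (item, dev) in conflicts)
    return w_general * general + w_structural * structural

# Toy instance: 5 data items, 2 devices; item 3 must not sit on device 0.
sizes, caps = [4, 2, 6, 3, 5], [10, 12]
conflicts = {(3, 0)}
best = min((tuple(random.randrange(2) for _ in sizes) for _ in range(1000)),
           key=lambda a: fitness(a, sizes, caps, conflicts))
print(best, fitness(best, sizes, caps, conflicts))  # random-search stand-in
```

Any population-based optimizer, HP-BOA included, would evaluate candidate assignments with such a fitness function and keep the lowest-penalty solutions.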