Single-Board Computer, Edge Computing and Virtualization Technology: Comparison

The widespread adoption of cloud computing has resulted in centralized datacenter structures; however, smaller-scale distributed infrastructures are required to meet the demands of critical applications for speed, responsiveness, and security. Single-Board Computers (SBCs) present numerous advantages such as low power consumption, low cost, minimal heat emission, and high processing power, making them suitable for applications such as the Internet of Things (IoT), experimentation, and other advanced projects.

  • SBC
  • virtualization
  • edge computing
  • Raspberry Pi

1. Introduction

The rapid adoption of cloud computing by most modern corporations leads to centralized and consolidated datacenter structures. Nevertheless, in both public and private implementations, cloud computing may not always meet the speed, responsiveness, and security requirements of several critical applications. To address these shortcomings, smaller-scale distributed infrastructures can be deployed at the edges of corporate networks, specifically near endpoints that feature intense data transactions. This practice is often referred to as edge computing. The edge-computing model features distributed micro datacenter infrastructures closer to the data generation sites to allow faster networking response, local data storage, and enhanced security. Specifically, by creating decentralized datacenters near the data creation source, edge computing reduces exposure concerns, since data processing takes place on premises by utilizing local resources, thus minimizing the potential attack risks that arise from the continuous transmission of data to remote infrastructures. Furthermore, edge computing facilitates the adoption of traditional security policies and tools that cannot otherwise be implemented in complex cloud-oriented environments [1].
Despite the advantages of edge computing, there are concerns, mostly related to the servicing needs, power consumption and remote administration of the infrastructures to be deployed. Especially in cases of small office branches or shop-in-a-shop scenarios, a dedicated and controlled environment for hosting sensitive hardware equipment is very difficult to allocate. Power consumption and air conditioning needs are also limiting factors. A possible solution that addresses these concerns is the usage of Single-Board Computers (SBCs).
Over the last decade, SBCs have become increasingly relevant due to their low power consumption, low purchasing cost and minimal heat generation. Additionally, the rapid development of power-efficient processors, mostly based on the Aarch64 (ARM64) architecture, makes SBCs ideal for numerous applications such as the Internet of Things (IoT), experimentation, prototyping and robotics. The increased demand for more powerful and scalable SBC platforms drives hardware manufacturers to produce several different boards, either for general-purpose development or optimized for specific tasks (e.g., sensor control, image processing and data analytics) [2]. In the same context, modern SBCs also feature powerful specifications, such as more physical memory (RAM), and are equipped with faster embedded hardware, such as USB3 ports, gigabit Ethernet controllers, Bluetooth radios and Wi-Fi adapters. Indicative examples of such SBCs are the Raspberry Pi (by the Raspberry Pi Foundation), NVIDIA Jetson (by NVIDIA Corporation), Layerscape Design Board (by NXP Semiconductors) and Quartz64 (by Pine64).
Even though SBCs seem to be a viable and appealing option for edge computing, it is essential to take into account a number of important factors in order to implement reliable, expandable and efficient infrastructures. Specifically, one of the most important prerequisites is that these edge infrastructures shall feature enterprise-level functionalities, such as flexible administration, failover clustering capabilities, and disaster recovery tools. Additionally, all hosted services should be hardware-independent and easily migratable among different types of hosts. Based on the above, the underlying technology on which these infrastructures should be based is virtualization.
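As a rough illustration of these prerequisites, the following Python sketch (an assumption-laden example, not part of any cited work) checks whether a Linux-based SBC exposes the basic building blocks typically needed to host virtualized workloads: a 64-bit ARM CPU, the KVM hypervisor device, and the cgroup filesystem used by container runtimes.

```python
# Minimal sketch: verify that a Linux-based SBC can host virtualized workloads.
# Assumptions (not taken from the original text): a 64-bit ARM board running
# Linux, KVM exposed via /dev/kvm, and cgroups mounted for container runtimes.
import os
import platform


def host_is_arm64() -> bool:
    """Return True if the CPU architecture is Aarch64 (ARM64)."""
    return platform.machine() in ("aarch64", "arm64")


def kvm_available() -> bool:
    """Return True if the kernel exposes the KVM hypervisor device."""
    return os.path.exists("/dev/kvm")


def cgroups_available() -> bool:
    """Return True if the cgroup filesystem (needed by container engines) is mounted."""
    return os.path.isdir("/sys/fs/cgroup")


if __name__ == "__main__":
    checks = {
        "ARM64 architecture": host_is_arm64(),
        "KVM hypervisor device": kvm_available(),
        "cgroup filesystem": cgroups_available(),
    }
    for name, ok in checks.items():
        print(f"{name}: {'OK' if ok else 'missing'}")
    if all(checks.values()):
        print("Host looks suitable for VM- or container-based edge workloads.")
```

Such a check could, for instance, gate the automated enrollment of a newly installed board into an edge cluster before any virtual machines or containers are scheduled on it.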

2. Single-Board Computer, Edge Computing, Serverless Computing and Virtualization Technology Implementations

The idea of employing a small, reasonably priced, connected computer in various scientific and educational setups was popularized by the founding of the Raspberry Pi Foundation, a nonprofit organization promoting the educational value of its devices. Single-Board Computer research is mainly focused on their employment in sectors such as science, engineering and education [3][4], the implementation of Software-Defined Radio (SDR) systems [5], as well as their usage for creating clustered computing environments that leverage their cost efficiency compared to traditional computer systems [6]. Other works study their energy efficiency in edge-computing implementations [7] and their ability to integrate sensor technologies for specific IoT applications [8]. It should be noted that Single-Board Computers have both benefits and drawbacks: on the one hand, they shorten vendors' time to market by requiring less development effort, and several providers offer a wide range of sizes, functions and prices; on the other hand, they are not always economically viable for high volumes of computation or data.
As far as edge computing is concerned, the relevant research is mainly focused on the enhancement of cloud-provided services due to the incremental growth of utilization and connected devices, mostly in the field of IoT [9]. Researchers have identified key areas such as network performance, availability, power consumption and security, where edge computing may contribute considerably [10]. International Data Corporation (IDC), in cooperation with VMware, identifies edge computing as the next step in the transformation and evolution of the cloud industry [11]. Investments in edge computing are expected to increase mainly in the fields of customer service, transportation, tourism and logistics [12]. This is further supported by an IDC forecast that predicts edge-computing investments of USD 176 billion by the end of 2022 and total investments of USD 274 billion by the end of 2025; these figures include hardware, software and service procurement costs [13].
Virtualization technology has been employed for more than a decade in most enterprise datacenter implementations. Virtualization offers a variety of benefits, such as significant cost reduction, higher performance and availability, as well as easier maintenance and administrative flexibility [14]. It also facilitates the deployment and migration of applications while ensuring high availability for operational and application areas. Particularly in terms of energy efficiency and the lowering of an organization's CO2 footprint, virtualization is an excellent technique for minimizing the environmental impact of datacenters, while also enhancing flexibility and decreasing maintenance expenses [15]. Compared to traditional virtualization solutions (VMware, KVM), Docker is a high-level container engine technology based on LXC (Linux Containers), a kernel-level virtualization technology that provides lightweight resource and process isolation, and Docker containers are the mainstream solution in the current virtualization field [16].
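As a simple illustration of this container-based approach, the following Python sketch uses the Docker SDK for Python (the `docker` package) to launch a lightweight container on an ARM64 SBC. It assumes a local Docker engine is already installed; the image name, command, and memory limit are illustrative only, not taken from the cited works.

```python
# Minimal sketch: run a lightweight container on an ARM64 SBC with the
# Docker SDK for Python (pip install docker). Assumes a local Docker engine;
# the image, command and memory cap are examples chosen for illustration.
import docker


def run_sample_container() -> str:
    client = docker.from_env()                 # connect to the local Docker engine
    output = client.containers.run(
        image="arm64v8/alpine:latest",         # multi-arch base image (example)
        command=["echo", "hello from the edge"],
        remove=True,                           # clean up once the container exits
        mem_limit="128m",                      # cap memory on a resource-constrained board
    )
    return output.decode().strip()


if __name__ == "__main__":
    print(run_sample_container())
```

The memory cap reflects a practical concern on SBCs: unlike full virtual machines, containers share the host kernel, so per-container resource limits are the main tool for keeping co-located edge services from starving each other.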
With the massive use of edge computing, new possibilities have arisen for IoT and IIoT, together with new problems related to storage and computing power. Efficient resource utilization has become an urgent need, and virtualization technology partially addresses it, although at the cost of duplicate resource configuration and provisioning delays in some instances [17]. To overcome these problems, a new model called serverless computing has recently been introduced [18][19]. Serverless computing can autoscale the offered service to follow customer demand and charge customers fairly only for the service actually consumed, independently of the underlying infrastructure [20]; a minimal sketch of this autoscaling and pay-per-use behavior is given at the end of this section. Other scholars have focused on solving resource allocation problems through the use of optimization methods [21], while distributed intelligence sharing is handled efficiently in [22]; the latter approach can address the overfitting of learning algorithms in edge environments, where data samples can be limited.
Based on the analysis of the related work, it is evident that the technology has progressed to a state where the transition to Single-Board Computers could be feasible for some applications and processes. This research looks at the idea of using Single-Board Computers (SBCs) with virtualization technologies to develop secure and economical edge-computing environments. The goal of this analysis is to investigate the feasibility of such implementations, both now and in the near future, by studying current hardware and software technology advancements and capabilities.
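To make the serverless autoscaling and pay-per-use idea referenced above concrete, the following Python sketch is entirely illustrative: the function names, thresholds and unit prices are assumptions and do not correspond to any cited framework or provider. It shows replicas following the observed request rate, scaling to zero when there is no demand, and a bill that depends only on actual invocations and compute time.

```python
# Illustrative sketch of the serverless scale-to-zero idea: replicas follow the
# observed request rate, and an idle function consumes (and is billed for) nothing.
# All names, thresholds and unit prices below are assumptions for illustration.
import math


def desired_replicas(requests_per_second: float,
                     requests_per_replica: float = 50.0,
                     max_replicas: int = 10) -> int:
    """Return how many function replicas the observed demand calls for."""
    if requests_per_second <= 0:
        return 0  # scale to zero: no demand, no allocated resources, no cost
    return min(max_replicas, math.ceil(requests_per_second / requests_per_replica))


def billed_cost(requests: int, gb_seconds: float,
                price_per_request: float = 2e-7,
                price_per_gb_second: float = 1.7e-5) -> float:
    """Pay-per-use billing: cost depends only on invocations and compute time."""
    return requests * price_per_request + gb_seconds * price_per_gb_second


if __name__ == "__main__":
    for rate in (0, 10, 120, 900):
        print(f"{rate:>4} req/s -> {desired_replicas(rate)} replica(s)")
    print(f"1M requests, 400k GB-s -> ${billed_cost(1_000_000, 400_000):.2f}")
```

The contrast with a statically provisioned virtual machine is the point: the charge in this model tracks the work actually performed rather than the capacity reserved, which is what makes the approach attractive for bursty edge workloads.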