Fog computing, often used interchangeably with edge computing, is an architecture in which fog nodes receive tasks from IoT devices and perform much of the computation, storage, and communication locally, routing processed data over the Internet to the cloud for further processing when necessary. The goal of fog computing is to use local and cloud computation and storage efficiently, handling the massive volumes of data generated by IoT devices at the edge of the network. Because of characteristics such as low latency, mobility support, and heterogeneity, fog computing is widely considered the best platform for IoT applications, sometimes called a fog-based IoT platform. For example, fog computing reduces the amount of data that must be sent to the cloud and keeps latency to a minimum, a key requirement for time-sensitive applications such as IoT-based healthcare services 
. Another example is a fog-based IoT platform for smart buildings, where information about the indoor ambience is collected in real time by IoT devices and sent to the fog layer for aggregation and preprocessing before being passed to the cloud for storage and analysis. Decisions are then sent back to the relevant actuators to adjust the ambience accordingly, or to fire an alarm 
. The essential components of fog-based architectures for IoT systems, and the approaches to implementing them, are surveyed in 
Fog and cloud concepts are very similar to each other; however, there are some differences. Here is a brief comparison of fog computing and cloud computing: (1) Cloud architecture is centralized and consists of large data centers located around the globe, whereas fog architecture is distributed and consists of many small nodes located as close to client devices as possible. (2) Data processing in cloud computing takes place in remote data centers, while fog processing and storage are done at the edge of the network, close to the source of information, which is crucial for real-time control. (3) The cloud is more powerful than the fog in computing capability and storage capacity, but the fog is considered more secure due to its distributed architecture. (4) The cloud performs long-term, deep analysis because of its slower responsiveness, while the fog targets short-term edge analysis thanks to its near-instant responsiveness. (5) A cloud system collapses without an Internet connection, whereas fog computing has a much lower risk of failure because it can fall back on various protocols and standards. Overall, while both cloud and fog computing have their respective advantages, it is important to note that fog computing does not replace cloud computing but complements it. Choosing between the two depends largely on the specific needs and goals of the user or developer.
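The smart-building data flow described earlier (sensors collect readings, the fog layer aggregates and preprocesses them, and the cloud analyzes the summary and sends a command back to the actuators) can be sketched as a minimal pipeline. All names, thresholds, and the synthetic readings below are illustrative assumptions, not details of any cited platform.

```python
# Minimal sketch of the smart-building data flow: IoT sensors -> fog
# aggregation/preprocessing -> cloud decision -> actuator command.
# Class/function names, setpoints, and readings are illustrative assumptions.

from statistics import mean

def fog_preprocess(readings):
    """Aggregate raw sensor readings at the fog layer (here, average them)."""
    return {"avg_temp_c": mean(r["temp_c"] for r in readings),
            "n_samples": len(readings)}

def cloud_decide(summary, setpoint_c=22.0, alarm_c=45.0):
    """Cloud-side analysis of the fog summary: return an actuator command."""
    if summary["avg_temp_c"] >= alarm_c:
        return "fire_alarm"
    return "cool" if summary["avg_temp_c"] > setpoint_c else "heat"

readings = [{"temp_c": 21.5}, {"temp_c": 23.0}, {"temp_c": 24.1}]
summary = fog_preprocess(readings)   # done near the devices, in the fog layer
command = cloud_decide(summary)      # done in the cloud
print(summary, command)
```

Note how the fog layer reduces what crosses the Internet to a small summary, which is exactly the bandwidth-saving role described above.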
2. Various Relevant Works of Fog-Based IoT Platform Performance Modeling and Optimization
The modeling, analysis, and validation of fog computing systems, particularly with IoT applications, have been extensively studied in the literature. A brief summary of the different research aspects is given as follows. In 
, a set of new fall detection algorithms was investigated to facilitate the fall detection process, and a real-time fall detection system employing the fog computing paradigm was designed and deployed to distribute the analytics throughout the network by splitting the detection task among the edge devices. In 
, a conceptual model of fog computing was presented, and its relation to cloud-based computing models for IoT was investigated. In 
, a new fog computing model was presented that inserts a management layer between the fog nodes and the cloud data center to manage and control resources and communication. This layer addresses the heterogeneous nature of fog computing and its complex, challenging connectivity. Simulation results showed that the management layer reduces bandwidth consumption and execution time. In 
, a health monitoring system with Electrocardiogram (ECG) feature extraction as the case study was proposed, exploiting the concept of fog computing at smart gateways that provide advanced techniques and services. ECG signals are analyzed in the smart gateways, extracting features including heart rate, the P wave, and the T wave. The experimental results showed that fog computing helps achieve more than 90% bandwidth efficiency and offers low-latency real-time response at the edge of the network.
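To make the gateway-side ECG processing above concrete, here is a toy R-peak detector and heart-rate estimator of the kind a fog gateway might run on raw samples. The threshold, sampling rate, and synthetic signal are assumptions for illustration, not details of the cited system.

```python
# Toy R-peak detection and heart-rate estimation, as a fog gateway might
# perform on raw ECG samples. Threshold, sampling rate, and the synthetic
# signal below are illustrative assumptions.

FS = 250  # sampling rate in Hz (assumed)

def detect_r_peaks(signal, threshold=0.8):
    """Indices of local maxima above the threshold (crude R-peak detector)."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold
            and signal[i] >= signal[i - 1]
            and signal[i] > signal[i + 1]]

def heart_rate_bpm(peaks, fs=FS):
    """Mean heart rate from R-R intervals, in beats per minute."""
    rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(rr) / len(rr))

# Synthetic ECG: a 1.0 spike every 200 samples (every 0.8 s, i.e., 75 bpm)
ecg = [1.0 if i % 200 == 0 else 0.1 for i in range(1000)]
peaks = detect_r_peaks(ecg)
print(peaks, heart_rate_bpm(peaks))
```

Only the extracted features (a few numbers per beat) need to travel upstream, which is how the cited gateway achieves its bandwidth savings.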
In , a charging mechanism called “FogSpot” was introduced to study the emerging market of provisioning low-latency applications over the fog infrastructure. In FogSpot, cloudlets offer their resources in the form of Virtual Machines (VMs) via markets, and FogSpot associates each cloudlet with a price that aims to maximize the cloudlet’s resource utilization. In 
, a distributed dataflow (DDF) programming model was proposed for an IoT application in the fog. The model was evaluated by implementing a DDF framework based on an open-source flow-based run-time and visual programming tool called Node-RED, showing that the proposed approach eases the development process and is scalable. In 
, a complex event processing (CEP) based fog architecture was proposed for real-time IoT applications that use a publish-subscribe protocol. A testbed was developed to assess the effectiveness and cost of the proposal in terms of latency and resource usage. In 
, an alternative to the hierarchical approach was proposed using self-organizing computing nodes. These nodes organize themselves into a flat model, which leverages the network properties to provide improved performance.
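The publish-subscribe pattern underlying the CEP-based fog architecture mentioned above can be sketched with a minimal in-process broker, where a fog-side rule derives a higher-level event from raw sensor events. The topic names and the threshold rule are illustrative assumptions, not the cited design.

```python
# Minimal in-process publish-subscribe broker with a simple CEP-style rule,
# sketching the pattern used by publish-subscribe fog architectures.
# Topic names and the threshold rule are illustrative assumptions.

from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        for cb in self.subscribers[topic]:
            cb(event)

broker = Broker()
alerts = []

# A CEP rule running on a fog node: derive a higher-level "alert" event
# when a raw temperature event exceeds a threshold.
def temperature_rule(event):
    if event["temp_c"] > 40.0:
        broker.publish("alerts", {"kind": "overheat", "source": event["sensor"]})

broker.subscribe("temperature", temperature_rule)
broker.subscribe("alerts", alerts.append)

broker.publish("temperature", {"sensor": "s1", "temp_c": 22.0})
broker.publish("temperature", {"sensor": "s2", "temp_c": 45.5})
print(alerts)
```

Decoupling publishers from subscribers in this way is what lets such architectures place the event-processing rules on whichever fog node is closest to the data.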
In , a dynamic computation offloading framework was proposed for fog computing to determine how many tasks from IoT devices should run on servers and how many should run locally in a dynamic environment. The proposed algorithm makes offloading decisions according to CPU usage, delay sensitivity, residual energy, task size, and bandwidth, sending time-sensitive tasks to local devices or fog nodes for processing and resource-intensive tasks to the cloud. In 
, a distributed machine learning model was proposed to investigate the benefits of fog computing for industrial IoT. The proposed framework was implemented and tested in a real-world testbed for making quantitative measurements and evaluating the system. In 
, the resource-aware placement of a data analytics platform was studied in a fog computing architecture, seeking to deploy the analytics platform adaptively and thus reduce the network costs and response time for the user.
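The offloading criteria surveyed above (delay sensitivity, task size, residual energy, CPU load) can be sketched as a simple rule-based dispatcher: time-sensitive tasks stay local or go to a fog node, while delay-tolerant, resource-intensive tasks go to the cloud. The thresholds and field names here are assumptions for illustration, not the cited algorithm.

```python
# Rule-based sketch of a computation-offloading decision: time-sensitive
# tasks stay on the device or go to a fog node, resource-intensive but
# delay-tolerant tasks go to the cloud. Thresholds and field names are
# illustrative assumptions.

def offload_target(task, device):
    """Return 'local', 'fog', or 'cloud' for a task given the device state."""
    if task["deadline_ms"] < 50:  # delay-sensitive: cloud round trip too slow
        if (device["cpu_load"] < 0.7
                and device["battery"] > 0.2
                and task["size_mb"] <= device["local_capacity_mb"]):
            return "local"        # device has the headroom to run it itself
        return "fog"              # nearby node keeps latency low
    return "cloud"                # delay-tolerant work uses cloud resources

device = {"cpu_load": 0.4, "battery": 0.9,
          "local_capacity_mb": 5, "bandwidth_mbps": 20}
print(offload_target({"deadline_ms": 20, "size_mb": 1}, device))
print(offload_target({"deadline_ms": 20, "size_mb": 10}, device))
print(offload_target({"deadline_ms": 5000, "size_mb": 500}, device))
```

A real framework would replace these fixed thresholds with the dynamic, measurement-driven decisions the surveyed work describes; the sketch only shows the shape of the decision.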
In , a joint optimization framework was proposed for computing resource allocation in three-tier IoT fog networks involving fog nodes (FNs), data service operators (DSOs), and data service subscribers (DSSs). The proposed framework allocates the limited computing resources of the FNs among all the DSSs to achieve optimal and stable performance in a distributed fashion. In 
, a container migration mechanism for fog computing was presented that supports performance and latency optimization through an autonomic architecture based on the MAPE-K control loop, providing a foundation for the analysis and optimization design of IoT applications. In 
, a market-based framework was proposed for a fog computing network that enables the cloud layer and the fog layer to allocate resources in the form of pricing, payment, and supply–demand relationship. Utility optimization models were investigated to achieve optimal payment and optimal resource allocation via convex optimization techniques, and a gradient-based iterative algorithm was proposed to optimize the utilities.
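The gradient-based iterative utility optimization mentioned above can be illustrated with a toy concave-utility allocation problem: splitting a fixed fog resource budget between two subscribers by projected gradient ascent. The logarithmic utility form, weights, and step size are assumptions chosen so the optimum has a simple closed form; this is not the cited model.

```python
# Toy projected gradient ascent for a concave resource-allocation utility:
# maximize w1*log(x) + w2*log(C - x), i.e., split capacity C between two
# subscribers. Utility form, weights, and step size are assumptions.

def allocate(w1, w2, C, lr=0.01, steps=20000):
    x = C / 2.0                            # start from an even split
    for _ in range(steps):
        grad = w1 / x - w2 / (C - x)       # d/dx of the total utility
        x += lr * grad                     # gradient ascent step
        x = min(max(x, 1e-6), C - 1e-6)    # project back into (0, C)
    return x

# For this toy problem the closed-form optimum is x* = C * w1 / (w1 + w2).
print(allocate(w1=3.0, w2=2.0, C=10.0))
```

Concavity is what guarantees the iteration converges to the unique optimum, which is why such market-based frameworks can use simple gradient updates in a distributed fashion.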
Although fog computing is considered to be the best platform for IoT applications, it has some major issues in the authentication, privacy, and security aspects. Authentication is one of the most pressing concerns, since fog services are offered at a large scale, which complicates the overall structure and trust relationships of the fog. A rogue fog node may pretend to be legitimate and coax an end user into connecting to it; once a user connects, the node can manipulate the signals travelling between the user and the cloud and can easily launch attacks. Since fog computing relies mainly on wireless technology, network privacy has also attracted much attention: with so many fog nodes accessible to every end user, more sensitive information passes from end users to fog nodes. Security concerns likewise arise because many IoT devices are connected to fog nodes. Every device has a different IP address, and an attacker can forge an IP address to gain access to personal information stored in a particular fog node. In 
, a lightweight anonymous authentication and secure communication scheme was proposed for fog computing services, which uses one-way hash functions and bitwise XOR operations for authentication among the cloud, the fog, and a user, with a session key agreed upon by the fog-based participants to encrypt the subsequent communication messages. More references can be found in 
that survey the main security and privacy challenges of fog and edge computing, and the effect of these security issues on the implementation of fog computing.
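The hash-and-XOR style of lightweight authentication described above can be illustrated with a toy key exchange: only one-way hashes and bitwise XOR, no public-key operations, which is what makes such schemes cheap enough for constrained fog participants. This sketch is not the cited protocol and is not secure as written; it only shows the kind of operations involved.

```python
# Toy illustration of a hash-and-XOR session-key exchange in the style of
# lightweight fog authentication schemes: only one-way hashes and bitwise
# XOR, no public-key cryptography. NOT the cited protocol, and not secure
# as written; it only demonstrates the operations such schemes rely on.

import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    """One-way hash over the concatenated parts."""
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

shared_secret = h(b"pre-registered-credential")  # known to user and fog node

# Fog node: pick a session key, mask it with a hash of the secret and a
# fresh nonce, and publish a tag the user can verify.
nonce = secrets.token_bytes(32)
session_key = secrets.token_bytes(32)
masked = xor(session_key, h(shared_secret, nonce))
auth_tag = h(session_key, nonce)

# User: unmask with the same hash of the shared secret, then check the tag.
recovered = xor(masked, h(shared_secret, nonce))
assert recovered == session_key
assert h(recovered, nonce) == auth_tag
print("session key established")
```

An eavesdropper who sees only `masked`, `nonce`, and `auth_tag` cannot recover the session key without the pre-registered secret, which is the core idea such lightweight schemes build on.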
3. A Fog-Based IoT Platform Architecture
Fog computing is a decentralized computing infrastructure in which data, computing, storage, and applications are located somewhere between the data source (the end users) and the cloud. Thus, fog computing brings the advantages and power of the cloud closer to end users. The proposed fog computing architecture (i.e., the fog-based IoT platform) comprises three layers: IoT devices, fog nodes, and the cloud, as shown in Figure 1. The need for smart control and decision-making at each layer depends on the time sensitivity of an IoT application.
Figure 1. The fog computing system architecture.
The lowest layer is the end user layer, which consists of a large number of IoT devices, such as robots, smart security devices, smartphones, wearable devices, smart watches, smart glasses, laptops, and autonomous vehicles. Some of these devices may have computing capability, while others may only collect raw data through intelligent sensing of objects or events and send the collected data to the upper layer for processing and storage.
The middle layer is the fog layer, consisting of a group of fog nodes such as routers, gateways, switches, access points, base stations, and fog servers. Fog nodes are independent devices that compute on, transmit, and temporarily store the data generated by IoT devices, while a fog server can additionally process the data to decide on an action; other fog devices are usually linked to fog servers. Fog nodes can be deployed in a fixed position or on a moving vehicle and are linked to IoT devices to provide intelligent services. Because fog nodes are located closer to the IoT devices than the cloud, real-time analysis and delay-sensitive applications can be performed within the fog layer. Fog nodes are usually involved when an IoT device has no data processing capability, or when the amount of generated data is too large for an IoT device (with computing capability) to process locally. Fog nodes are also connected to the cloud data centers in the upper layer through the Internet, from which they can obtain more powerful computing and storage capabilities for large and complex data processing tasks.
The upper layer is the cloud layer, which includes many servers and storage devices with powerful computing and storage capabilities to provide intelligent application services. This layer can support a wide range of computational analysis and storage of large amounts of data. However, unlike the traditional cloud computing architecture, fog computing does not handle all computing and storage through the cloud. Fog nodes themselves have appropriate computing and storage capabilities. According to the data processing load and quality of service (QoS) requirements, some control strategies can be used to effectively manage and schedule the processing tasks between fog nodes and the cloud, to improve the utilization rate of the overall system resources.
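The control strategies mentioned above, which schedule processing tasks between fog nodes and the cloud according to load and QoS requirements, can be sketched with a simple dispatcher: a task whose deadline rules out the cloud round trip is placed on a fog node with spare capacity, while delay-tolerant work falls through to the cloud. The capacities and latency figures are illustrative assumptions.

```python
# Simple sketch of QoS-aware scheduling between the fog and cloud layers:
# tasks with tight deadlines go to a fog node with spare capacity, and
# delay-tolerant tasks go to the cloud. All numbers are illustrative.

def schedule(task, fog_nodes, cloud_latency_ms=120):
    """Return ('fog', node_index) or ('cloud', None) for a task."""
    if task["max_latency_ms"] < cloud_latency_ms:
        # The cloud round trip would miss the deadline: find a fog node
        # with enough free capacity and reserve it.
        for i, node in enumerate(fog_nodes):
            if node["free_units"] >= task["units"]:
                node["free_units"] -= task["units"]
                return ("fog", i)
        raise RuntimeError("QoS deadline cannot be met: all fog nodes busy")
    return ("cloud", None)  # delay-tolerant work uses the cloud's capacity

nodes = [{"free_units": 2}, {"free_units": 8}]
print(schedule({"max_latency_ms": 30, "units": 4}, nodes))
print(schedule({"max_latency_ms": 5000, "units": 50}, nodes))
```

Tracking the remaining capacity of each fog node is what lets such a strategy raise the utilization of the overall system resources rather than overloading the nearest node.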
The data transmission between the end users and the fog nodes may be wired or wireless. The choice of connectivity mode depends mostly on the IoT application; wired connectivity is generally faster and more secure than wireless. The major wireless technologies for IoT communication protocols include Bluetooth Low Energy (BLE) 
, Zigbee 
, Z-Wave 
, WiFi 
, cellular (GSM, 4G LTE, 5G), NFC 
, and LoRaWAN 
. These IoT communication protocols cater to and fulfill the specific functional requirements of IoT systems. The Internet connection acts as the bridge between the fog layer and the cloud layer and establishes the interaction and communication between them.