1. Ethical Concerns in Cloud Computing
Cloud Computing has enabled promising outcomes for the industrial IoT environment. The cloud offers different deployment models (e.g., public, private, hybrid, multi-cloud, and federated) and service models (Software as a Service, Platform as a Service, and Infrastructure as a Service) [
32]. A cloud Service Level Agreement (SLA) is a legal contract between the cloud tenant and the service provider for delivering the promised services. If the promised services are not met, the vendor may be subject to a penalty (the contract could be voided), or the agreement may be renegotiated as a new SLA. However, despite having an SLA for controlling Quality of Service metrics (e.g., reliability and availability), the biggest ethical and privacy issues arise when services are processed on third-party premises without the end-user’s knowledge or consent. Many terms and associated conditions in the SLA are ambiguous or misleading [
33].
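To make the QoS dimension of an SLA concrete, the sketch below checks a measured availability figure against a promised level and returns a service-credit penalty. It is a minimal illustration: the 99.9% target, the tier boundaries, and the credit percentages are assumptions for this example, not terms from any real provider’s agreement.

```python
def sla_penalty(measured_uptime_pct: float, promised_uptime_pct: float = 99.9) -> float:
    """Return the hypothetical service-credit percentage owed for the billing period."""
    if measured_uptime_pct >= promised_uptime_pct:
        return 0.0                  # SLA met, no penalty
    shortfall = promised_uptime_pct - measured_uptime_pct
    if shortfall < 0.4:             # e.g. 99.5% <= uptime < 99.9%
        return 10.0                 # 10% service credit
    if shortfall < 0.9:             # e.g. 99.0% <= uptime < 99.5%
        return 25.0                 # 25% service credit
    return 100.0                    # severe breach: full credit, possible contract termination


if __name__ == "__main__":
    print(sla_penalty(99.7))        # -> 10.0 (credit owed for missing the target)
```

In practice, such checks are only as trustworthy as the monitoring data behind them, which is precisely where third-party processing without user visibility becomes an ethical concern.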
By 2025, around 85% of the world’s industrial data is expected to be processed in the cloud [
34,
35]. The existing models lack the capacity for such resource demands and will need to rely on federated and brokerage cloud models. However, federated models fall short in terms of standardization, security, governance, risk, and compliance (GRC), trust, access management, incident response, and business continuity. The NIST Cloud Federation Reference Architecture (NIST SP 500-332) [
36] only provides a basic understanding of the roles that different cloud actors (vendors, carriers, brokers, users, auditors, etc.) perform, a description of technical and service levels, and guidance to lower the barriers to adoption [
37].
Since 2021, the IEEE P2302
“Standards for Cloud Federation” working group [
38] has been working on aligning with NIST SP 500-332; however, this is an ongoing project, and presently there is no cloud federation model that provides interoperability and uniform governance.
Considering these limitations, and taking into account the fact that cloud setups cannot provide end-to-end security or a guaranteed service, is it ethical to process sensitive data on such setups just to save computational costs? Further, if an industry does so, the cloud setup must adhere to baseline/minimal security standards and controls to protect the data.
Consider the example of a genomic data center that computes, analyses, and processes DNA structures/patterns and may require huge computing power. At times, such data centers may need to share DNA and other sensitive information for treatment purposes with research centers based in different jurisdictions through cloud models. Any breach, or negligence of cloud security standards or policy, would impact all the people receiving treatments, resulting in a breach of the social contract and of the GDPR as well. In [
40], the authors mention ethical challenges related to the privacy and security of genomic data and raise concerns about whether the existing compliance and security mechanisms suffice to secure the data given the transformative nature of emerging technologies. The ethical implications of Cloud Computing are influenced by several technological factors such as security, privacy, compliance, and performance metrics.
Table 1, provided below, highlights the ethical and privacy considerations in a cloud environment.
Recent cloud data breaches that occurred in 2021 and 2022 [
42] raised awareness of various cloud security issues and vulnerabilities: (i) Accenture’s LockBit ransomware attack happened because of misconfigured cloud servers, leading to a data breach that compromised 40,000 customer accounts and caused financial and reputational damage; (ii) Kaseya’s lack of security implementations for access control, zero trust, remote policies, and multi-factor authentication left its cloud SaaS vulnerable and open to zero-day exploits. A number of managed service providers were affected as an outcome of this ransomware attack, leading to three weeks of operational disruption as well as financial and reputational damage. The companies impacted by the breaches are well known, and the cloud service providers, which claimed to have strong security mechanisms, are well known as well; yet a supply-chain type of cyber attack took place. This supports the legitimacy of the claims made in [
30,
31,
32,
34,
41] regarding privacy and regulatory issues, which remain valid and persist.
With the increasing adoption of Cloud Computing in different sectors, especially the healthcare industry, it is essential that the appropriate regulatory (GDPR) and security standards and controls are kept in place. As the cloud is often mistaken for a separate/exclusive entity, the security methods (zero trust, availability, and compliance) used within an industry’s private and public clouds may differ, making it easier for cyber criminals to breach the environment. At present, none of the standardization organizations have provided or released an interoperable cloud standards platform; therefore, the only way to mitigate cloud-based risk is by developing insights, visibility, and control. Ref. [
41] provides a roadmap for understanding the differences at the SLA levels and bridging the gaps between the industrial operational environment and the cloud. However, the gap analysis must be extended as new and innovative technologies are deployed, in order to mitigate the ethical, social, and privacy implications.
2. Ethical Dilemmas in Autonomous Vehicles
Fully automated vehicles are already in the development stage and will soon be offered on the market. In recent years, questions related to ethical concerns in autonomous technologies have been increasing. Lawmakers are accustomed to driver assistance, automatic braking, blind spot monitoring, and adaptive cruise control, since they regulate traffic safety. The debates have primarily focused on extreme traffic circumstances portrayed as moral dilemmas, which are well documented in the scientific literature, i.e., circumstances where the autonomous vehicle (AV) seems to be required to make challenging ethical choices (e.g., potential hazard situations). Standardization and legislation are needed to help prevent serious conflicts between society and technology. Policies are also needed that can verify and validate the ethical behavior of autonomous systems. Once these principles are put in place, they will help to make the system more transparent, effective, and easy to operate. As autonomous vehicles rely heavily on Artificial Intelligence algorithms, they are susceptible to various ethical dilemmas, as shown in Table 2.
The recent Autonomous Vehicle cyber attacks (e.g., the Yandex taxi hack [
48], Tesla Model Y [
49]) raise similar ethical, privacy, security, and regulatory concerns to those mentioned in [
43,
44,
45,
46,
47]. At present, one of the biggest concerns is related to the AI-based decision-making software used in self-driving vehicles. For example, if the Autonomous Vehicle predicts a collision endangering pedestrians, the AI-based self-learning software (using Big Data and Machine Learning algorithms for predictive analysis in the cloud) quickly reroutes and tries to find an alternate path with fewer casualties [
50,
51]. If this choice is given to the AI software, it may take the path with the least possible casualties (e.g., a single pedestrian), saving the rest of the crowd. Such an approach is called utilitarian ethics. Utilitarian decision making is widely known to be used in warfare situations, where the path with the fewest fatalities/casualties is optimised. Morally and ethically, such a choice would be impossible for humans to make: a loss of life is a loss, and there is no comparison between a single fatality and multiple fatalities. From this perspective, a standardized, compliant, and legal system must be developed before such autonomous vehicles and devices are deployed in real-world scenarios.
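A minimal sketch of the utilitarian selection rule described above is shown here: among hypothetical candidate trajectories, the software simply picks the one with the fewest predicted casualties. The trajectory names and casualty estimates are invented for illustration; a real AV stack would obtain them from its perception and prediction modules, and nothing here reflects any vendor’s actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Trajectory:
    name: str
    predicted_casualties: int   # hypothetical output of a prediction module


def utilitarian_choice(candidates):
    """Return the candidate trajectory with the fewest predicted casualties."""
    return min(candidates, key=lambda t: t.predicted_casualties)


if __name__ == "__main__":
    options = [
        Trajectory("stay in lane", 4),
        Trajectory("swerve left", 1),
        Trajectory("brake only", 2),
    ]
    print(utilitarian_choice(options).name)   # -> "swerve left"
```

The ethical dilemma lies precisely in the fact that this rule is trivial to code but deeply contested as a moral policy.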
A graph in [
43] presents the automotive industry’s awareness of the ethical concerns regarding self-driving vehicles. Reports from 66 companies based in California were evaluated in the research conducted by [
43]. It is interesting to note that the majority of companies focused on: (i) safety and cybersecurity; (ii) sustainability; (iii) human oversight, control, and auditing; (iv) public awareness; (v) privacy; (vi) accountability; (vii) transparency; (viii) ethical design; (ix) legislative frameworks; and (x) the dual use problem and military certification. However, none of them addressed the ethical issues related to fairness, non-discrimination, justice, and hidden costs. This form of negligence is in breach of data protection, privacy, and legislative regulations. Also, none of the companies [
43] invested in responsible research funding for an emerging technology that is susceptible to high-risk impact scenarios. Consider if such an AV became involved in an incident or accident that went unprosecuted due to a lack of fairness of data [
16], judicial regulations, and laws in this domain. This may potentially lead to major unrest and promote crime. Ref. [
52] presents an intriguing question related to robot ethics, that is, whether social robots should have certain rights or not. Although the research provides sets of modalities related to robot rights, it mentions that, at this point in time, robots do not possess the necessary capabilities or properties to be considered full moral and legal beings [
53,
54]. The cyber attacks mentioned in [
48,
49], where hackers took over command and control and exploited software vulnerabilities in Autonomous Vehicles, leading to hours-long traffic jams, demonstrate the escalated level of cyber risk that autonomous devices are susceptible to and the impact such attacks may have. The authors agree with the recommendations of [
53,
54] in terms of autonomous decision making: such devices must only be enabled once the potential risks have been realised, controlled, and mitigated. They should also be bound by standardised regulatory and ethical guidelines.
3. Understanding Ethical Artificial Intelligence (AI)
Artificial Intelligence (AI)-based technologies have achieved incredible results in areas such as machine vision, medical diagnosis, and Autonomous Vehicles. They hold immense potential for improving societal progress, economic expansion, and human welfare and security [
55]. Despite this, industries, societies, and communities face serious hazards because of the low degree of interpretability, data inaccuracies, data protection and privacy concerns, and ethical issues associated with AI-based technologies. One of the biggest challenges in this domain is developing AI that is compliant with moral and ethical requirements. To deal with this, industries must look at both dimensions: AI Ethics and ways to develop Ethical AI. AI Ethics refers to the study of the moral principles, regulations, standards, and laws that apply to AI, following fundamental principles related to transparency, respect for human values, fairness, safety, accountability, and privacy [
55,
56]. These principles are similar to the ones the European GDPR [
56] provides. The EU AI Act, proposed in 2021, aims to develop a legal framework for AI to promote trust and mitigate potential harm that the technology may cause. However, Members of the European Parliament have raised concerns this year regarding the fundamental rights assessment for high-risk uses. As per the AI Act, a detailed plan for risk impact assessment related to various threat scenarios and potential breaches (e.g., compliance, AI cybersecurity) must be provided [
10]. As the AI Act is still a work in progress, it is essential to understand the principles on which AI ethics is based and how Ethical AI could be developed.
3.1. Transparency
AI-based algorithms and techniques must be transparently designed, with a thorough description as well as a valid justification for their development. Transparency plays a crucial role in tracking results and ensuring their accordance with human morals, so that one can unambiguously comprehend, perceive, and recognize the design’s decision-making mechanism. Twitter serves as an eye-opener here: in 2021, the company faced huge criticism over racial and gender bias in its AI algorithms [
57]. Twitter is now making amends to mitigate the damage caused by the algorithm and to implement the six fundamental attributes of AI Ethics. Considering an industrial/Cyber-Physical System (CPS) environment, transparency is essential for both humans and machines.
3.2. Respect for Human Values
AI inventions are obliged to uphold human values and positively affect the progress of individuals and industries, as well as to ensure sensitivity toward cultural diversity and beliefs.
3.3. Fairness
Fostering an inclusive environment free from discrimination against employees based on their gender, colour, caste, or religion is essential (including team members from various cultural backgrounds helps to reduce prejudice and advance inclusivity). In the past, AI algorithms have been criticized for unfairly profiling healthcare data, employees’ resumes, and similar records. Considering this from a GDPR perspective, fair use of data in the European jurisdiction is mandatory. Since the fairness aspect maps across AI fairness and the GDPR’s fair use of data, the two must be aligned.
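As a hedged illustration of how such fairness concerns can be checked in practice, the sketch below computes a demographic parity difference between two groups in a hypothetical shortlisting outcome. The group labels, the data, and the 0.1 tolerance are illustrative assumptions and are not thresholds prescribed by the GDPR or any AI regulation.

```python
def demographic_parity_diff(outcomes, groups, group_a, group_b):
    """Difference in positive-outcome rates between group_a and group_b."""
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / max(1, len(selected))
    return rate(group_a) - rate(group_b)


if __name__ == "__main__":
    outcomes = [1, 1, 1, 0, 0, 0, 1, 1]        # e.g. resume shortlisted (1) or not (0)
    groups   = ["A", "A", "A", "B", "B", "B", "B", "A"]
    diff = demographic_parity_diff(outcomes, groups, "A", "B")
    print(f"parity difference: {diff:+.2f}")   # -> +0.75 for this illustrative data
    if abs(diff) > 0.1:                        # illustrative tolerance, not a legal threshold
        print("potential disparate impact - review the model and training data")
```

A single metric such as this cannot prove fairness on its own, but it shows how the principle can be made measurable and auditable.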
3.4. Safety
Safety relates to both the security of user information and the welfare of individuals. It is essential to recognize hazards and focus on solutions to eliminate such issues. Users’ ownership of their data must be protected and preserved by using security techniques such as encryption and by giving users control over what data are used and in what context. This also aligns with the scope of the GDPR.
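As one hedged example of such a security technique, the sketch below encrypts a user record at rest with symmetric encryption using the Python cryptography package. The record contents and key handling are simplified placeholders; in a real deployment, keys would be held in a key-management service and access would be tied to recorded user consent.

```python
from cryptography.fernet import Fernet   # pip install cryptography

key = Fernet.generate_key()              # in practice, keep this in a key-management service
fernet = Fernet(key)

user_record = b'{"patient_id": "12345", "diagnosis": "..."}'   # hypothetical sensitive record
token = fernet.encrypt(user_record)      # ciphertext is what gets stored in the cloud
restored = fernet.decrypt(token)         # only holders of the key can recover the data

assert restored == user_record
```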
3.5. Accountability
Decision-making procedures should be auditable, particularly when AI handles private or sensitive information such as copyrighted material, identifying biometric information, or personal health records.
3.6. Privacy
Protecting user privacy while using AI techniques must be treated as the highest priority. The user’s permission must be obtained in order to use and retain their information. The strictest security measures must be followed to prevent the disclosure of sensitive data.
Lessons must be learnt from the lawsuits arising from Google’s Project Nightingale partnership with Ascension [
58], which involved gathering personal data and raised privacy concerns in terms of data sharing and the use of AI. There are various dilemmas when it comes to the applicability of AI. As an example, AI’s implementation in self-driving vehicles has raised huge ethical concerns. When the software was designed on a utilitarian approach, in a crash-type situation it would opt for the option with the fewest casualties; however, when it was programmed based on social contract theory, the autonomous vehicle could not make a decision, as it kept looping over pre-set conditions, which ultimately resulted in an accident because the vehicle did not move itself away from the hazardous situation [
50]. One of the biggest challenges is to enable AI to think similarly to humans and to follow the same ethical and moral conduct; however, with the growing autonomous and self-driving industry, there is no going back. Therefore, the only means to control ethical issues related to AI is to fully develop the relevant standards and regulations. As the authors mentioned earlier, risk impact assessment is merely a means for damage control (analyzing the impact of a breach or of a vulnerability if exploited). Likewise, for the cybersecurity threat landscape [
14], where threat actors are constantly evolving, and for regulating AI, where a number of implications are yet to be realized, only best practices and adherence to existing standards and policies can mitigate the risks associated with AI deployments in the industrial environment.
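As a hedged illustration of what such a risk impact assessment might record, the sketch below scores hypothetical AI threat scenarios on a simple likelihood-times-impact matrix and flags those above an acceptance threshold. The scenario names, the 1-5 scales, and the threshold are illustrative assumptions rather than requirements of the AI Act or any specific standard.

```python
# Hypothetical AI threat scenarios with (likelihood, impact) ratings on 1-5 scales.
threat_scenarios = {
    "training-data poisoning":        (3, 4),
    "model inversion / privacy leak": (2, 5),
    "biased decision output":         (4, 4),
    "cloud misconfiguration":         (3, 5),
}

THRESHOLD = 12   # illustrative: scores at or above this need mitigation before deployment

for scenario, (likelihood, impact) in sorted(
        threat_scenarios.items(),
        key=lambda item: item[1][0] * item[1][1],
        reverse=True):
    score = likelihood * impact
    status = "mitigate" if score >= THRESHOLD else "accept / monitor"
    print(f"{scenario:32s} score={score:2d} -> {status}")
```

Even such a simple matrix makes the point of the preceding paragraph visible: scoring an impact after the fact is damage control, not prevention.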
Table 3 elaborates on the ethical guidelines and existing directives for AI. The authors suggest that a gap analysis of the similarities between them could assist in bridging the compliance/regulatory gaps in the industrial environment.
A technology that is agile, intelligent, and value-driven has already set its course towards digitally transforming the environment. International policy-makers [
9,
10,
11], professional bodies [
59,
60,
61], and industries [
61,
62,
63] have realised the need for regulation, encouraging the smooth and wider deployment of AI in the industrial environment.
The three frameworks [
16,
57,
58] developed by different professional standards/regulatory bodies have a few attributes in common; however, a complete mapping or interoperability between the ethical frameworks has not been provided. This becomes a potential issue when industries try to implement a standardized approach. Another issue arises when industries have manufacturing regions/setups across the world (Europe, the USA, and China) and are therefore subject to different jurisdictions, data regulations, and compliance requirements. In circumstances where a production environment deploys different AI regulatory frameworks, the dissemination of information across the digital factory and supply chain, as well as data classification, becomes a complex process. Industry 5.0 is value-driven, and its vision may only be achieved by mapping synergies across the ethical, technical, innovative, and sustainable domains. The guidelines provided by the EU in [
9,
10,
11] have been the first to take the initiative in shaping the European Digital Strategy, developing standardized regulatory and legal frameworks for AI, and mitigating the potential risks. Aligning AI deployments with the provided Act and security controls [
14] is the only regulated way for now, imbued with the AI ethical principles (i.e., privacy, accountability, fairness of data, and transparency), which also contribute to and map onto GDPR principles. However, as discussed earlier, it is important to note that AI depends on various technologies (e.g., Big Data and Machine Learning). If any of these technologies have security gaps, this may lead to potential breaches in the AI domain as well; therefore, the adopting industries must make sure that their ethical and legal framework is compliant and is reflected across the interconnected emerging technologies.
This entry is adapted from the peer-reviewed paper 10.3390/s23031151