Sustainability of AI and Sustainability Claims

The potential of artificial intelligence (AI) and its manifold applications have fueled the discussion around how AI can be used to facilitate sustainable objectives. However, the technical, ethical, and legal literature on how AI itself, including its design, training, implementation, and use, can be sustainable is rather limited. At the same time, consumers pay increasing attention to sustainability information, while businesses increasingly engage in greenwashing practices, especially in relation to digital products and services, raising concerns about the effectiveness of the existing consumer protection framework in this regard. The objective of this paper is to contribute to the discussion on sustainable AI from a legal and consumer protection standpoint, focusing on the environmental and societal pillars of sustainability. After analyzing the multidisciplinary literature on the environmentally sustainable AI lifecycle, as well as the latest EU policies and initiatives regarding consumer protection and sustainability, we will examine whether the current consumer protection framework is sufficient to promote the sharing and substantiation of sustainability information in B2C contracts involving AI products and services. Moreover, we will assess whether AI-related EU initiatives can promote sustainable AI development. Finally, we will propose a set of recommendations capable of encouraging a sustainable and environmentally conscious AI lifecycle while enhancing information transparency among stakeholders, aligning the various EU policies and initiatives, and ultimately empowering consumers.

sustainability;artificial intelligence;sustainable AI;sustainability claims;consumer protection

1. AI HLEG Ethics Guidelines for Trustworthy AI

In April 2019, the High-Level Expert Group on Artificial Intelligence (AI HLEG) set up by the European Commission published the final version of its Ethics Guidelines for Trustworthy AI following a public consultation [1]. According to the guidelines, an AI system will be considered trustworthy when, throughout its lifecycle, it cumulatively meets the following components: (i) it is lawful, meaning that it complies with the applicable laws and regulations; (ii) it is ethical, meaning that it observes ethical principles and values; and (iii) it is technically and socially robust. For these components to materialize, a set of core ethical principles, as well as seven requirements based on technical and non-technical methods, should be met.
The ethical principles and requirements are identified in Table 1 below:
Table 1. Trustworthy AI ethical principles and requirements.
It should be noted that AI HLEG advises that, when implementing these ethical principles or “ethical imperatives” throughout the lifecycle of the technology, and especially in adherence to the principle of prevention of harm, vulnerable groups and relationships marked by information asymmetries, such as those between businesses and consumers, should be taken into account. At the same time, one of the proposed non-technical means of meeting these requirements is information transparency. In particular, the AI HLEG suggests that it is essential to provide stakeholders with clear and proactive information about the capabilities and limitations of AI, as well as about the means used to implement the seven requirements. The objective of this measure is to ensure that stakeholders have realistic expectations about the technology.
More specifically, in order to meet the requirement of diversity, non-discrimination, and fairness, which is closely related to the ethical principle of fairness itself, accessibility and universal design are pivotal, aside from avoiding unfair bias and promoting stakeholder participation in AI development. Under this sub-requirement, it is advised that AI products and services be accessible to consumers irrespective of their abilities. To this, we add that the accessibility requirement does not necessarily concern only the functionality of the product or service, but also the information provided about it. Information, including sustainability information and claims, should be presented in a clear, legible, and accessible manner for the consumer. Therefore, overly technical and specialized vocabulary should be avoided.
In addition, the requirement of environmental and societal well-being suggests that the sustainability of AI systems should be ensured and promoted throughout the AI value chain and lifecycle. To determine whether an AI product or service is sustainable, AI HLEG suggests a critical assessment of the resources and energy consumed during training [1]. In its Assessment List for Trustworthy AI (ALTAI) [2], in order to assess conformity with the societal and environmental well-being requirement, AI HLEG proposes the following self-assessment checklist:
  • “Are there potential negative impacts of the AI system on the environment?
    Which potential impact(s) do you identify?
 
  • Where possible, did you establish mechanisms to evaluate the environmental impact of the AI system’s development, deployment and/or use (for example, the amount of energy used and carbon emissions)?
    Did you define measures to reduce the environmental impact of the AI system throughout its lifecycle?” [2]
Ethical Principles           Requirements
Respect for human autonomy   Human agency and oversight
Prevention of harm           Technical robustness and safety
Fairness                     Privacy and data governance
Explicability                Transparency
                             Diversity, non-discrimination, and fairness
                             Environmental and societal well-being
                             Accountability
Notwithstanding the above, examples of possible methodologies or mechanisms to assess the environmental impact, or to mitigate it, are not provided.
Furthermore, although AI HLEG does not analyze or provide recommendations in relation to the lawfulness component of trustworthy AI, these soft law recommendations to some extent reflect already existing legal provisions and may influence future legislative initiatives. Especially in relation to the information transparency method, it clearly reflects principles embedded in various laws addressing information asymmetries, such as the GDPR (Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC) for the protection of data subjects, the Prospectus Regulation (Regulation (EU) 2017/1129 of the European Parliament and of the Council of 14 June 2017 on the prospectus to be published when securities are offered to the public or admitted to trading on a regulated market, and repealing Directive 2003/71/EC) for the protection of investors, and, of course, the panoply of EU consumer protection laws. In this regard, the ethical approach proposed in the guidelines is based on the fundamental rights enshrined in the EU Treaties and the Charter of Fundamental Rights of the EU (“EU Charter”). In relation to consumer protection, Article 38 of the EU Charter and Article 169 of the Treaty on the Functioning of the European Union (TFEU) are of relevance.
The guidelines and the assessment list are non-binding and therefore not enforceable by administrative authorities or courts. However, they can provide guidance to the various stakeholders involved in the AI lifecycle. From a consumer protection and sustainability claims standpoint, although the obligation to provide information to the consumer lies with the trader, a transparent and facilitated flow of information between the stakeholders of the AI value chain is essential to ensure the substantiation of such claims. Especially when the provider of information is not the designer, developer, or manufacturer of the AI, it is advisable that information regarding the sustainable features of the product or service be addressed, supported, and given to the trader once the product or service is put on the market, so that the trader can meet their own obligations vis-à-vis consumers. Such a practice can be enforced contractually. However, although contractually imposing such an obligation on the third-party designer, developer, or manufacturer can help the trader demonstrate, if requested by a supervisory authority or court, that substantiated information was provided to consumers, it presupposes, first, a formal contractual relationship with the third party that permits contract negotiation and, second, a certain level of market power of the AI trader over the third-party developer [3]. Therefore, the scope of application of this measure may be limited in the AI field.

2. Could Sustainability Information Be Included in the “Main Characteristics” of AI Products and Services?

As briefly mentioned in Section 4, under Article 6 of the Unfair Commercial Practices Directive, the trader cannot provide misleading information to the consumer [4]. The requirement not to mislead the consumer through untrue environmental or sustainability-related claims is, of course, included in this obligation. In other words, the trader cannot engage in greenwashing practices. For instance, this is the case when a trader states that, “due to its composition, how it has been manufactured or produced, how it can be disposed of and the reduction in energy or pollution expected from its use”, a product or service will have a positive impact on the environment, or a less negative impact than its competitors, without such a claim being true or, at least, verifiable [5]. According to a screening conducted by the European Commission and national consumer authorities, as many as 42% of market players may actually be engaging in some type of greenwashing [6].
For AI, this would mean, for example, that it would not be possible to advertise a certain algorithm as trained using 100% renewable energy if, in fact, the energy came from non-renewable sources, or if there is no adequate way to verify that the sources were indeed renewable. In the same manner, a trader using an AI algorithm to offer predictive maintenance to the consumer should be able, if requested, to adequately substantiate any sustainability benefits (for example, related to energy consumption and waste) that they claim to achieve.
Nonetheless, it is important to go one step further and ask whether there is margin in the current legislation to argue that, in certain cases, there can be a proactive requirement to offer sustainability-related information for AI-based products and services. In this regard, both the Consumer Rights Directive [Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011 on consumer rights (as amended)], in its Articles 5 and 6, and the Unfair Commercial Practices Directive, through Article 7(4), clearly establish that the consumer should be informed about “the main characteristics of the goods or services, to the extent appropriate to the medium and to the goods or services”. This obligation should be interpreted coherently across both legal instruments, meaning that a trader complying with these obligations under the Consumer Rights Directive is also complying with those under the Unfair Commercial Practices Directive, as sustained by the European Commission in the Directorate-General for Justice and Consumers’ Guidance Document on the application of the Consumer Rights Directive [7] (both this Guidance Document and the one on the Unfair Commercial Practices Directive are to be updated by 2022, as announced in the New Consumer Agenda, to take into account the changes introduced by the Omnibus Directive).
The question here is, in fact, whether in a world where consumers overwhelmingly find the environment to be important, where 57% of consumers are willing to change their purchasing habits based on sustainability considerations, 72% are willing to pay a premium for brands that promote sustainable or environmentally responsible behaviors, and 71% for brands that are fully transparent, sustainability-related information can, in certain cases, be considered part of the main characteristics of the product or service [8]. The question is particularly relevant for AI-based products and services because, as we have seen, AI can be both an engine for sustainability and a drain on natural resources, and consumers may want to know in which category the good or service they are buying falls before completing the transaction. Trustworthy AI, according to the Commission and AI HLEG, means both transparent and sustainable AI.
The answer to the abovementioned question is that, under the instruments analyzed in this section, there is no clear legally binding requirement to provide sustainability-related information specifically for AI-based products and services. While it is certainly true that consumers are aware of sustainability in general, it would be relevant to know whether they value it more, compared to other characteristics, when it relates to AI, in order to understand whether it should be considered one of AI’s “main characteristics”. Of course, conclusions can differ based on the particular application of the technology. For instance, if an autonomous vehicle can reduce emissions and fuel/electricity consumption by 40% due to the algorithm used for autonomous driving, one can certainly argue that this is very important information (maybe even a main characteristic of the product). Conversely, the fact that Gaming Console A consumes 5% less electricity than Gaming Console B due to some form of AI-based technology will probably not be the key factor driving the consumer’s decision to purchase.

3. The Commission’s Proposal on AI Regulation

From late February to mid-June 2020, the European Commission ran a public consultation on the expected proposal for a regulation on artificial intelligence and the policy options proposed in the White Paper: On Artificial Intelligence—A European approach to excellence and trust [6]. Following this public consultation, on 21 April 2021, the European Commission presented its proposal for an AI Act, putting forward a single set of rules to regulate artificial intelligence in the European Union.
The Proposal for an AI Act came four years after the European Parliament called upon the Commission to frame a legislative proposal for a set of civil law rules on robotics and artificial intelligence, arguably the starting point of the EU’s path toward a specific AI legal instrument. It opts for a risk-based approach to AI regulation, with most of its obligations reserved for high-risk AI, and it possesses the makings of a potentially effective legal instrument, with extraterritorial scope, detailed rules on market surveillance, and extremely high fines. With these characteristics, the proposal for an AI Act could have been designed in a manner that would further promote transparency and sustainability and reinforce consumer protection, information transparency, and the enforceability of fundamental rights in these matters. In this regard, it should be noted that the proposal for an AI Act imposes certain transparency obligations on the producer, as well as specific transparency obligations for certain AI uses. However, these information obligations focus on the use and consequences of an AI system, not on sustainability. Therefore, the issue of sustainability is mostly ignored in the proposal, with the exception of the possible integration of requirements related to environmental sustainability in voluntary codes of conduct (Article 69(2)) (for a more detailed assessment of the Proposal for an AI Act, see Cabral and Kindylidi [9] and Cabral [10]).
In a proposal establishing requirements ranging from risk management to data governance, and in which transparency takes a central role, one cannot avoid thinking that it would be easy to go further. In fact, at least for high-risk AI systems, establishing an obligation to detail sustainability impacts in the technical documentation, and to disclose said impacts to the consumer, would not be difficult, nor would it appear out of context in the current proposal. In addition, an important aspect that should not be disregarded is that the inclusion of rules or principles regarding the environment and sustainability in the final text of the regulation, even through a light-touch, principle-based approach, would make the EU’s fundamental rights standard, based on Article 37 of the EU Charter along with Article 3(3) TEU and Article 191 TFEU, unquestionably applicable [11][12]. Concomitantly, this would make action by the Court of Justice of the European Union (ECJ) to protect and develop the “European standard” more likely.

References

  1. High-Level Expert Group on Artificial Intelligence, AI Ethics Guidelines for Trustworthy AI. 2019. Available online: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (accessed on 5 October 2021).
  2. High-Level Expert Group on Artificial Intelligence, The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for Self-Assessment. 2020. Available online: https://futurium.ec.europa.eu/en/european-ai-alliance/pages/altai-assessment-list-trustworthy-artificial-intelligence (accessed on 10 October 2021).
  3. Kindylidi, I.; Antas de Barros, I. AI Training Datasets & Article 14 GDPR: A risk assessment for the proportionality exemption of the obligation to provide information. Law State Telecommun. Rev. 2021, 13, 1–27.
  4. Carvalho, J.M. Direito do Consumo, 7th ed.; Almedina: Coimbra, Portugal, 2020; ISBN 9789724088921.
  5. European Commission. Commission Staff Working Document Guidance on the Implementation/Application of Directive 2005/29/EC on Unfair Commercial Practices Accompanying the Document Communication from the Commission to the European Parliament, the Council, the European Economic. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52016SC0163&from=EN (accessed on 25 May 2016).
  6. European Commission. White Paper On Artificial Intelligence—A European Approach to Excellence and Trust; European Commission: Brussels, Belgium, 2020.
  7. European Commission. DG JUSTICE Guidance Document Concerning Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011 on Consumer Rights, Amending Council Directive 93/13/EEC and Directive 1999/44/EC of the European Parliament and of the Council a. 2014. Available online: https://ec.europa.eu/info/sites/info/files/crd_guidance_en_0_updated_0.pdf (accessed on 1 October 2021).
  8. Haller, K.; Lee, J.; Cheung, J. Meet the 2020 Consumers Driving Change: Why Brands Must Deliver on Omnipresence, Agility, and Sustainability. Available online: https://www.ibm.com/downloads/cas/EXK4XKX8 (accessed on 3 October 2021).
  9. Cabral, T.S.; Kindylidi, I. WhatNext.Law, Proposal for a Regulation on a European Approach for Artificial Intelligence: An Overview. Available online: https://whatnext.law/2021/05/05/proposal-for-a-regulation-on-a-european-approach-for-artificial-intelligence-an-overview-pt/ (accessed on 3 October 2021).
  10. Cabral, T. EU Law Live, The Proposal for an AI Regulation: Preliminary Assessment. Available online: https://eulawlive.com/oped-the-proposal-for-an-ai-regulation-preliminary-assessment-by-tiago-sergio-cabral/ (accessed on 14 October 2021).
  11. Cabral, T.S.; Silveira, A.; Abreu, J. UNIO EU Law Journal, The “mandatory” contact-tracing App “StayAway COVID”—A matter of European Union Law. Available online: https://officialblogofunio.com/2020/10/20/the-mandatory-contact-tracing-app-stayaway-covid-a-matter-of-european-union-law/ (accessed on 2 October 2021).
  12. Vilaça, J.L.C.; Silveira, A. The European federalisation process and the dynamics of fundamental rights. In Citizenship within the EU Federal Context; Cambridge University Press: Cambridge, UK, 2017; pp. 125–146.