Artificial intelligence (AI) refers to autonomous or semi-autonomous systems capable of interpreting data, generating inferences, and guiding decisions, thereby reshaping the foundations of work and organizational processes. Its rapid integration into productive settings gives rise to emerging risks, understood as new or evolving hazards that stem from human–machine interaction, algorithmic decision-making, and shifting sociotechnical conditions. Within occupational safety and health (OSH), these risks encompass novel cognitive, psychosocial, organizational, and ethical challenges, making it necessary to develop preventive frameworks that align technological innovation with human well-being, transparency, and responsible governance.
The concept of “artificial intelligence” (AI) was introduced by John McCarthy in 1955 [1]. Since then, the term has referred to the ability of machines to replicate human cognitive processes such as reasoning, learning, and problem-solving. AI is grounded in algorithms and computational models that enable systems to perform tasks previously dependent on human intelligence, including language comprehension, visual or auditory pattern recognition, complex decision-making, and language translation [2].
AI is a dynamic concept whose definition has been debated across international organizations. The Organisation for Economic Co-operation and Development (OECD) [3] and the European Union in the AI Act [4] converge in describing AI as a “machine-based system capable of operating with varying levels of autonomy and exhibiting post-deployment adaptability, which, with explicit or implicit objectives, infers from input data to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. UNESCO highlights the functional nature of AI, noting that systems built on data, hardware, and connectivity enable machines to emulate human abilities such as perception, problem-solving, linguistic interaction, and creativity [5]. These definitions, summarized in Table 1, reveal relevant nuances: while the OECD and the European Union emphasize autonomy, adaptability, and input–output relations, UNESCO stresses imitative capacities and the constitutive role of technological resources.
Table 1. Definitions of artificial intelligence according to leading international organizations and institutions.
| Institution | Main Definition | Key Elements/Nuances |
|---|---|---|
| OECD (2024, p. 3) [3] | “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.” | · “Explicit” or “implicit” objectives. · Produces outputs such as predictions, recommendations, decisions, or content. · Influences physical or virtual environments. · Varies in autonomy and in its ability to adapt after deployment (“autonomy” and “adaptiveness”). · Is “machine-based,” that is, grounded in machines/computation. |
| European Union (2024, Chapter 1, Article 3) [4] | “An AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” | Very similar to the OECD definition (co-aligned). Adds: · “Designed to operate with varying levels of autonomy.” · Adaptiveness after deployment. · Explicit recognition of inputs and outputs, and their effect on physical or virtual environments. · A clear indication that objectives may be explicit or implicit. |
| UNESCO (2021) [5] | “Built from data, hardware and connectivity, AI allows machines to mimic human intelligence such as perception, problem-solving, linguistic interaction or creativity.” | Emphasizes: · Imitation of human intelligence functions: perception, problem-solving, linguistic interaction, creativity. · Constitutive elements: data, hardware, connectivity. · A more descriptive focus on capabilities (what AI does) rather than on internal functioning or the degree of autonomy/adaptability. |
From an economic and business perspective, AI can be understood as a technological innovation process [6]. It refers to the social stock of knowledge used to create digital artifacts that, when applied to economic activity, emulate, and may enhance or replace, human cognitive capacities [7]. This process rests on AI’s capacity to generate value through prediction. Predictive AI (PAI) encompasses computational systems and machine learning and deep learning algorithms designed to interpret and anticipate events, support or automate decisions, and execute actions in controlled contexts. PAI is a higher-order technology, a driver of radical innovation, and a general-purpose technology [8]. It also fosters technological convergence, derivative innovations, complementarities with economic assets (particularly intangible assets and human capital), new business models, productivity and employment gains, and a long-term economic cycle [9].
The rapid emergence of generative AI (GAI), with revolutionary “killer apps” such as ChatGPT and Gemini, has created a clearly disruptive critical inflection point, that is, a moment when gradual technological developments trigger major changes in how work is organized and risks emerge, shaping new trajectories for occupational safety and health [10]. GAI is also a general-purpose technology and extends a key new value: the value of creation [11]. This value, driven by transformer-based machine learning and deep learning algorithms that generate digital artifacts, enhances AI’s performance and is profoundly transforming production [12] and work [13,14]. Progress in connectionist AI will not stop here: future algorithmic generations will produce far more advanced systems, with more agents, greater power, and enhanced capacities for learning, replication, and resource acquisition [15].
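To ground the mechanism this paragraph describes, the following minimal sketch shows transformer-based text generation using the Hugging Face `transformers` library. The choice of the small GPT-2 model and the prompt are illustrative assumptions, not elements of the cited studies.

```python
# A minimal illustration of transformer-based text generation, the class of
# algorithm the entry describes. GPT-2 is used only as a small, freely
# available stand-in for production generative models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Emerging occupational risks associated with AI include"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```

Production systems such as ChatGPT rely on far larger models and additional alignment layers, but the underlying generation loop is conceptually similar.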
Transformative AI (TAI) refers to highly capable systems that operate as independent and autonomous agents pursuing their objectives, and whose performance far exceeds that of human labor across a wide range of tasks, including many that are essential to the economy, work, and society [16,17]. The value associated with TAI is the value of transformation.
As their capabilities expand, predictive, generative, and transformative AIs emulate an increasing number of human skills, accelerating their potential to replace non-routine cognitive work. Technically, the emergence of TAI capable of producing ideas, generating innovations, and decoupling economic growth from human labor may occur in the coming decades [18]. Such TAI poses an existential risk: its superiority in prediction, creation, and transformation would grant it an economic advantage that could render humans redundant in many social domains, particularly work [19]. This would misalign AI with human progress, exacerbating the automation challenges, insufficient control, polarization, and inequality already observed with PAI and GAI. However, if TAI were aligned and directed toward human and organizational well-being, it could usher in a new era of growth and social prosperity through its capacity to enhance productivity, economic expansion, social welfare, and environmental protection [20].
In occupational risk prevention, these conceptual differences are not merely semantic but have significant practical implications. The notions of autonomy and adaptiveness make it possible to anticipate risks from systems that, once deployed, may behave in ways that are not fully predictable, creating uncertainty regarding supervision and control [21,22]. The reference to explicit or implicit objectives raises questions about responsibility allocation and failure management [23], directly influencing organizational prevention culture. The distinction between physical and virtual environments indicates that emerging risks also include digital dimensions such as surveillance, automated decision-making, and the management of workers’ data [24]. UNESCO’s focus on imitative human capacities introduces psychosocial and ethical risks related to human–machine interaction, the potential substitution of cognitive functions, and associated tensions in work organization [1]. Overall, these perspectives show that defining AI and its dimensions is not only a conceptual task but an essential first step for identifying and managing emerging risks in workplace settings.
In the workplace, AI plays a dual role that combines opportunities and challenges for occupational risk prevention. From a positive perspective, it can reduce workers’ exposure to hazardous environments through collaborative robots, drones, or intelligent monitoring systems [25]. AI also enhances ergonomic workstation design, anticipates physical overloads, and strengthens preventive management through predictive models capable of identifying accident patterns before they occur, as illustrated in the sketch below [21,22]. Moreover, it enables more personalized preventive training by tailoring content to worker characteristics, thereby improving the effectiveness of occupational safety and health education [22].
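As a minimal sketch of what such a predictive model might look like, the following Python example trains a standard classifier on synthetic shift-level data to flag conditions associated with elevated incident risk. The features (noise level, shift length, work pace), the data, and the model choice are all illustrative assumptions rather than elements of the cited studies; a real deployment would require validated features, domain expertise, and careful governance.

```python
# A hypothetical sketch of the kind of predictive model the entry describes:
# a classifier trained on past records to flag high-risk conditions before
# an accident occurs. All features and data here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# Synthetic stand-in for historical records: one row per shift, with
# hypothetical features such as noise level, shift length, and work pace.
n = 1000
X = np.column_stack([
    rng.normal(75, 10, n),   # noise_db: ambient noise level
    rng.uniform(4, 12, n),   # hours_worked: shift length
    rng.uniform(0, 1, n),    # task_pace: normalized work pace
])
# Illustrative label: incidents become more likely as load accumulates.
risk = 0.02 * (X[:, 0] - 75) + 0.3 * (X[:, 1] - 8) + 2.0 * (X[:, 2] - 0.5)
y = (risk + rng.normal(0, 1, n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# The classifier learns patterns that precede incidents; model choice,
# features, and decision thresholds would all require domain validation.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

In practice, such model outputs would feed into human-supervised preventive planning rather than automated decisions, consistent with the oversight concerns discussed below.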
However, alongside these advantages, new categories of risk arise that must be considered in preventive planning. Surveillance algorithms may create psychosocial tensions linked to perceptions of excessive control, affecting emotional well-being and organizational trust [1,23]. Human–machine interaction in digitalized environments can lead to additional cognitive load, misinterpretation of automated recommendations, or excessive reliance on technological systems [26]. Ethical and legal dilemmas also emerge regarding algorithmic transparency, biases in automated decision-making, and the allocation of responsibility in the event of failure [24]. These issues add to the broader impact of AI on employment, including skill polarization and work reorganization, which directly influence workers’ safety and health conditions [2].
Concrete examples of these challenges can already be observed in practice. In large-scale warehouse and logistics operations, the use of algorithmic management systems for task allocation and performance monitoring has been associated with work intensification, reduced autonomy, and increased musculoskeletal and psychosocial risks. Similar issues have been reported in road transport and delivery sectors, where AI-driven scheduling and monitoring systems have contributed to time pressure, fatigue, and elevated safety risks among drivers in several national contexts.
In this context, the connection between AI and emerging risks places occupational risk prevention in a strategic position. It must anticipate both the benefits and the threats that technology introduces into workplace environments [27]. This requires developing adaptive regulatory frameworks, innovative assessment methodologies, and intervention strategies that integrate not only the technical dimension but also the organizational, psychosocial, and ethical aspects of digital transformation.
The aim of this entry is to provide a preventive perspective on the intersection between AI and emerging risks in workplace environments. In this entry, a preventive perspective refers to an approach that explicitly prioritizes the early identification, anticipation, and mitigation of potential risks associated with AI adoption, rather than assuming that technological innovation will automatically lead to harm reduction. This perspective emphasizes the need for governance, human oversight, and precautionary design choices throughout the lifecycle of AI systems. Beyond approaches focused solely on technological innovation or AI’s economic potential, this analysis centers on implications for occupational safety and health. Occupational risk prevention (ORP) is presented as a framework capable of anticipating, interpreting, and managing the uncertainties arising from the adoption of intelligent systems at work [21,23,24]. This perspective helps identify benefits, such as reduced physical exposure, ergonomic improvements, and early incident detection, while also addressing emerging threats, including technostress and ethical dilemmas related to algorithmic transparency and responsibility allocation [1,23].
This entry aims to provide a synthetic resource (understood as a digitally generated source of information derived from data integration, modeling, and algorithmic inference rather than from direct human observation alone) for both the academic community and prevention professionals, offering an updated overview of the opportunities and risks associated with AI in the workplace. It also seeks to foster interdisciplinary debate on the need for more flexible regulatory frameworks, innovative assessment methodologies, and proactive preventive strategies capable of addressing rapid technological change with significant social implications [26]. Ultimately, it aspires to support the safe, ethical, and sustainable integration of AI in work environments, ensuring that technological advances translate into real improvements in workers’ health, safety, and well-being [24–26].