Artificial intelligence is increasingly used across all branches of the media system and has transformed the way professionals in the field work in recent years. Its applications now span the production, editing, distribution, and consumption of media content, including generative chatbots, automated transcription, writing, translation, and editing tools, as well as applications for image and video creation. Together, these applications have taken over a significant share of the activities traditionally carried out by media professionals. From a technological point of view, these uses rely primarily on machine learning, natural language processing, and computer vision techniques, complemented by generative models that automatically analyze, generate, and interpret text, sound, and images. Although these technologies contribute to greater efficiency, faster workflows, and lower operating costs, they also pose significant risks, particularly regarding the spread of false information. From a theoretical perspective, artificial intelligence goes beyond the status of a technological tool and is conceptualized as a communicative actor that actively intervenes in the generation, structuring, and circulation of messages, reshaping the relationships between producers, content, and audiences in the contemporary media environment.
Artificial intelligence (AI), particularly generative AI (GenAI), which includes systems capable of generating textual, visual, audio, or multimodal content, is expanding rapidly [1][2][3] and playing an increasingly important role in transforming the contemporary media ecosystem [4]. Interest in AI has grown significantly as its concrete applications expand across industry, society, and public policy [5]. AI-based applications are now widely used in all branches of the media sphere and continue to evolve rapidly [6], from journalism and social platforms to strategic communication, influencing content production, information distribution, message personalization, and decision-making.
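To make one of these uses more concrete, the following minimal sketch illustrates content-based message personalization, ranking candidate news items by their textual similarity to a user's reading history. It is an illustrative example only, not a method drawn from the cited literature: the article titles, the user history, and the choice of scikit-learn's TF-IDF representation with cosine similarity are assumptions made for demonstration, standing in for the far richer models used in production systems.

```python
# Minimal, illustrative sketch of content-based message personalization.
# This is NOT a method described in the cited literature; the article
# titles and "user history" below are hypothetical, and TF-IDF with
# cosine similarity stands in for richer production recommender models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical candidate news items and one user's reading history.
articles = [
    "Parliament debates new regulation of online political advertising",
    "Local football club wins the national championship final",
    "New AI tools speed up newsroom transcription and translation",
    "Stock markets react to central bank interest rate decision",
]
user_history = ["How newsrooms use AI chatbots to draft and edit stories"]

# Represent all texts as TF-IDF vectors in a shared vocabulary.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(user_history + articles)

# Rank candidate articles by cosine similarity to the user's history.
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
for score, title in sorted(zip(scores, articles), reverse=True):
    print(f"{score:.2f}  {title}")
```

In practice, newsroom personalization combines such content signals with behavioral data, editorial priorities, and policy constraints; the sketch only shows the basic natural language processing representation underlying the ranking step.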
The literature on the use of AI in the media has expanded rapidly in recent years, addressing applications such as automated journalism [7][8][9], content moderation [10][11], and audience analysis [12][13]. However, existing research is scattered across diverse disciplinary perspectives, including media studies, computer science, the social sciences, and public policy. As a result, few works provide an integrated, state-of-the-art synthesis of consolidated knowledge on the technological, ethical, and institutional implications of AI in the media.
Existing studies highlight benefits such as increased productivity, improved content accessibility, and the sustainability of data-driven journalism [14][15]. At the same time, research points to persistent risks, including the amplification of misinformation, diminished editorial control, and the erosion of public trust in media institutions [16][17].
In this context, media literacy and artificial intelligence literacy have become essential prerequisites for the responsible use of AI in the media. Both media professionals and the general public need to understand how algorithmic systems work, the limitations of automatically generated content, and the ethical implications of information automation [18][19]. At the same time, the literature emphasizes the need for clear editorial policies and institutional governance frameworks that regulate the use of AI in accordance with the fundamental values of the media [20].
Analytically, this entry maps the main areas of application of artificial intelligence in the media onto their technological, ethical, and institutional implications. The approach follows the media chain from content production through moderation and verification to governance, synthesizing the existing literature around common axes of analysis and providing a guidance tool for researchers, practitioners, and decision-makers.