Artificial intelligence (AI) is no longer science fiction. It has moved so close to our daily lives that legislation and standardization, for example in the form of ISO standards, already exist. Despite these developments, however, AI has barely entered the consciousness of ordinary IT users. In an academic context, the importance of AI is well recognized, and there are notable efforts to integrate AI into teaching and into the development of teaching, for example in curriculum development or even to pass an exam.
1. Prerequisites for Machine–Human Communication
Under what conditions do humans accept an artificial intelligence (AI) as a communication partner on a level playing field? To consider this question, researchers must clarify whether communication between humans differs from communication between a human and an AI. Researchers point out several differences:
- Capabilities: AI systems are typically designed to perform specific tasks and are not capable of the same level of understanding and general intelligence as a human being. This means that an AI may be able to perform certain tasks accurately but may not be able to understand or respond to complex or abstract concepts in the same way that a human can [1].
- Responses: AI systems are typically programmed to respond to specific inputs in a predetermined way. This means that the responses of an AI may be more limited and predictable than those of a human, who is capable of a wide range of responses based on their own experiences and understanding of the world [2].
- Empathy: AI systems do not have the ability to feel empathy or understand the emotions of others in the same way that a human can. This means that an AI may not be able to respond to emotional cues or provide emotional support in the same way that a human can [3].
- Learning: While AI systems can be trained to perform certain tasks more accurately over time, they do not have the ability to learn and adapt in the same way that a human can. This means that an AI may not be able to adapt to new situations or learn from its own experiences in the same way that a human can [4].
- Trust: Humans are very critical toward any kind of failure committed by an artificial system. When such a failure occurs, the level of trust in information delivered by an AI is clearly lower than it would be if the same information came from the lips of a human [5].
These differences illustrate that, in communication between humans and AIs, interpersonal behavior patterns cannot simply be assumed when evaluating the quality of the communication.
2. The Four-Sides Model in Communication
To analyze interpersonal interaction, researchers apply the four-sides model, also referred to as the four-ears model or the communication square
[6][7][8]. The four-sides model proposes that every communication has four different dimensions: factual, appeal, self-revealing, and relationship. The model suggests that these four dimensions are always present in communication, and that people can use them to understand the different aspects of a communication and the intentions of the speaker.
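To make the structure of the model more tangible for analyzing AI–human exchanges, the following minimal sketch (our own illustration, not part of the model or the cited literature) shows how a single AI utterance could be annotated on the four sides; the Python class, field names, and example feedback are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class FourSidesMessage:
    """One utterance annotated on the four sides of the communication square."""
    utterance: str        # the literal message that was sent
    factual: str          # factual side: what information is conveyed
    self_revealing: str   # self-revelation side: what the sender discloses about itself
    relationship: str     # relationship side: how the sender sees the receiver
    appeal: str           # appeal side: what the sender wants the receiver to do


# Hypothetical example: an AI returning feedback on a thesis chapter.
feedback = FourSidesMessage(
    utterance="The methodology chapter lacks a justification of the sample size.",
    factual="A justification of the sample size is missing.",
    self_revealing="The system applies formal methodological criteria.",
    relationship="The author is treated as capable of revising the chapter.",
    appeal="Please add a justification of the sample size.",
)
```

Annotating transcripts of student–AI exchanges in this way would allow the four dimensions of human and AI messages to be compared side by side.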
Other tools to understand the meaning of communication include the model developed by Richards
[9], which follows a similar line. The four-sides model and Richards’ concept of four kinds of meaning are both frameworks for understanding and analyzing communication, but they have different purposes and focus on different aspects. The Schulz von Thun four-sides approach models interpersonal communication, whereas Richards’ concept is a framework for understanding the different types of meaning that can be conveyed through language. Richards identified four types of meaning: denotative, connotative, emotional, and thematic, while the Schulz von Thun model focuses on the different dimensions of a communication. The main criticism of the four-sides model largely corresponds to general criticism of communication models
[10].
We apply the four-sides model in our research and thus benefit from a tool that allows us to analyze the different levels of communication without centering on a linguistic approach.
The four-sides model gives us a framework for analyzing communication between the AI and humans. The student or faculty member communicating with the AI, however, is aware of the source of the communication, so a framework is also needed to capture the various technology acceptance factors that influence the outcome of the communication situation. This leads to our second research question: What must be done to ensure that humans accept AI decisions in semi-structured decision situations (RQ2)?
3. Technology Acceptance Model
The technology acceptance model (TAM) has been used successfully to analyze and understand the requirements for achieving technology acceptance
[11]. Although TAM was introduced as early as 1989, the number of publications in which this model has been used as a basis for analyzing technology acceptance continues to increase
[12]. TAM has been criticized
[13] and modified several times. Venkatesh developed the widely used UTAUT model (unified theory of acceptance and use of technology)
Although more recent modeling approaches are available, researchers use TAM because, first, TAM is applied more frequently than UTAUT in analyses of AI adoption
[12], and second, TAM enjoys particularly strong support in the education sector
[15]. In addition, TAM has been shown to integrate successfully with a variety of different theoretical approaches
[15]. The combination of technology acceptance analysis with our chosen Schulz von Thun model of communication in the education sector argues for the use of TAM.
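To illustrate how TAM's core constructs are commonly operationalized, the sketch below aggregates hypothetical Likert-scale items into scores for perceived usefulness and perceived ease of use and combines them into a behavioral intention score. The items, values, and weights are assumptions made purely for illustration; actual TAM studies estimate these relationships empirically, typically with regression or structural equation modeling.

```python
from statistics import mean


def construct_score(item_ratings):
    """Average a construct's Likert items (e.g., on a 1-7 scale) into one score."""
    return mean(item_ratings)


# Hypothetical questionnaire responses of one participant rating an AI grading tool.
perceived_usefulness = construct_score([6, 5, 6])   # e.g., "Using the AI improves my grading work."
perceived_ease_of_use = construct_score([4, 5, 4])  # e.g., "Interacting with the AI is clear and understandable."

# In TAM, both constructs feed the behavioral intention to use the technology.
# The weights below are purely illustrative placeholders.
behavioral_intention = 0.6 * perceived_usefulness + 0.4 * perceived_ease_of_use
print(round(behavioral_intention, 2))
```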
4. Artificial Intelligence in Higher Education
Based on these models, our research aims to identify the technology acceptance of thesis marking performed by an AI. Zhang et al. found that assessment for an academic scholarship benefits from a rule-based cloud computing application system
[16]. This structured decision-making does not use an AI, but it shows relevant technology acceptance among the affected students
[17]. More than 70 scholars were interviewed to obtain information about the adaptability of AI for automatic short answer grading. The results showed that building trust, in particular by understanding how the AI conducts the grading, is of great value and importance. Trust-building was stronger when the AI proactively explained its decisions itself. In this case, the AI supported the grading and the final grade was given by a responsible lecturer. There is also research on the roles that AI may fill in the future: Kaur et al. state that AI will be of value for performing grading in an academic context
[18]. If AI is used for qualitative marking, the requirements for communication and cooperation go far beyond mere system performance compared with the simple case mentioned above. Current research
[19][20] shows that there may well be useful starting points for using AI as a co-decision-maker in academic education. But does this also extend to the evaluation of scientific work, for example bachelor theses, and if so, under what conditions? Hence, our third research question: Is the grading process in higher education, specifically the grading of a bachelor/master thesis, an acceptable field of application for AI (RQ3)?
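The contrast between structured, rule-based grading and an AI that proactively explains its decision can be illustrated with a deliberately simple toy example. The keyword-matching grader below is our own hypothetical sketch, not the system used in any of the cited studies; its only purpose is to show how a grading decision can be returned together with a human-readable explanation.

```python
def grade_short_answer(answer, required_keywords, max_points=10):
    """Toy rule-based grader that returns points plus an explanation of its decision."""
    answer_lower = answer.lower()
    hits = [kw for kw in required_keywords if kw.lower() in answer_lower]
    missing = [kw for kw in required_keywords if kw.lower() not in answer_lower]
    points = round(max_points * len(hits) / len(required_keywords))
    explanation = (
        f"Awarded {points}/{max_points} points: "
        f"covered {hits if hits else 'none'}; missing {missing if missing else 'nothing'}."
    )
    return points, explanation


points, why = grade_short_answer(
    answer="TAM explains acceptance via perceived usefulness.",
    required_keywords=["perceived usefulness", "perceived ease of use"],
)
print(points, why)
```

For qualitative thesis marking, the explanations would of course have to be far richer than a keyword list, which is exactly where the communication and acceptance requirements discussed above come into play.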
This entry is adapted from the peer-reviewed paper 10.3390/educsci13090865