Integrate Artificial Intelligence into Teaching: Comparison
Please note this is a comparison between Version 2 by Jessie Wu and Version 3 by Jessie Wu.

Artificial intelligence (AI) is no longer science fiction. It is so close to our daily life that there are already legislation and norms, such as ISO standards. Despite these developments, however, AI has barely entered the consciousness of ordinary IT users. In an academic context, the importance of AI is well recognized, and there are notable efforts to integrate AI into teaching and the development of teaching, for example in curriculum development or even in passing exams.

  • AI as a sender
  • higher education
  • semi-structured decisions
  • four-sides model

1. Prerequisites for Machine–Human Communication

Under what conditions do humans accept an artificial intelligence (AI) as a communication partner on a level playing field? To address this question, researchers must clarify whether communication among humans differs from communication between humans and an AI. Several differences stand out:
  • Capabilities: AI systems are typically designed to perform specific tasks and are not capable of the same level of understanding and general intelligence as a human being. This means that an AI may be able to perform certain tasks accurately but may not be able to understand or respond to complex or abstract concepts in the same way that a human can [1].
  • Responses: AI systems are typically programmed to respond to specific inputs in a predetermined way. This means that the responses of an AI may be more limited and predictable than those of a human, who is capable of a wide range of responses based on their own experiences and understanding of the world [2].
  • Empathy: AI systems do not have the ability to feel empathy or understand the emotions of others in the same way that a human can. This means that an AI may not be able to respond to emotional cues or provide emotional support in the same way that a human can [3].
  • Learning: While AI systems can be trained to perform certain tasks more accurately over time, they do not have the ability to learn and adapt in the same way that a human can. This means that an AI may not be able to adapt to new situations or learn from its own experiences in the same way that a human can [4].
  • Trust: Humans are very critical toward any kind of failure committed by an artificial system. After such a failure, the level of trust in information delivered by an AI is clearly lower than it would be if the information came from the lips of a human [5].
These differences illustrate that, in communication between humans and AIs, interpersonal behavior patterns cannot simply be assumed when evaluating the quality of the communication.

2. The Four-Sides Model in Communication

To analyze interpersonal interaction, researchers apply the four-sides model, also referred to as the four-ears model or the communication square [6][7][8]. The four-sides model proposes that every communication has four different dimensions: factual, appeal, self-revealing, and relationship. The model suggests that these four dimensions are always present in communication and that people can use them to understand the different aspects of a communication and the intentions of the speaker.
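The four dimensions can be made concrete as a simple data structure. The sketch below is a hypothetical illustration, not part of the model's formal literature; the class name, field names, and the example message are all assumptions chosen for this sketch.

```python
from dataclasses import dataclass

@dataclass
class FourSidesMessage:
    """One utterance annotated with the four dimensions of the
    Schulz von Thun four-sides model (hypothetical illustration)."""
    factual: str         # what the message states about the world
    appeal: str          # what the sender wants the receiver to do
    self_revealing: str  # what the sender discloses about themselves
    relationship: str    # what the message implies about sender and receiver

# Example: a supervisor's feedback "This chapter is missing citations."
msg = FourSidesMessage(
    factual="The chapter contains no citations.",
    appeal="Please add citations before resubmitting.",
    self_revealing="I read your chapter carefully.",
    relationship="I am assessing your work as your supervisor.",
)

# The model's core claim: all four dimensions are present in every message.
dimensions = ("factual", "appeal", "self_revealing", "relationship")
assert all(getattr(msg, d) for d in dimensions)
```

The point of the sketch is only that a single utterance carries all four readings at once; which dimension a receiver "hears" is exactly what the four-ears variant of the model describes.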
Other tools to understand the meaning of communication include the model developed by Richards [9], which follows a similar line. The four-sides model and Richards’ concept of four kinds of meaning are both frameworks for understanding and analyzing communication. However, they have different purposes and focus on different aspects of communication. The Schulz von Thun four-sides approach models interpersonal communication. Richards’ concept of four kinds of meaning, on the other hand, is a framework for understanding the different types of meaning that can be conveyed through language. Richards identified four types of meaning: denotative, connotative, emotional, and thematic, whereas the Schulz von Thun four-sides model is focused on understanding the different dimensions of communication. The main criticism of the four-sides model largely corresponds to general criticism of communication models [10].
Researchers apply the four-sides model in their research focus and benefit from a tool that allows them to analyze the different levels of communication without centering on a linguistic approach.
The four-sides model provides a framework for analyzing communication between the AI and humans. The student or faculty member communicating with the AI is, however, aware of the source of the communication. A framework is therefore needed to capture the various technology acceptance factors that influence the outcome of the communication situation. This leads to the second research question: What must be done to ensure that humans accept AI decisions in semi-structured decision situations (RQ2)?

3. Technology Acceptance Model

The technology acceptance model (TAM) is successfully used to analyze and understand the different requirements for reaching technology acceptance [11]. Although TAM was introduced as early as 1989, the number of publications in which this model has been used as a basis for analyzing technology acceptance continues to increase [12]. TAM has been criticized [13] and modified several times; Venkatesh et al. developed the widely used UTAUT model (unified theory of acceptance and use of technology) [14]. Although more recent modeling approaches are available, researchers use TAM because, first, TAM is applied more frequently than UTAUT in analyzing AI adoption [12], and second, TAM has very positive support particularly in the education sector [15]. In addition, TAM has been shown to integrate successfully with a variety of different theoretical approaches [15]. The combination of technology acceptance analysis with the chosen Schulz von Thun model of communication in the education sector argues for the use of TAM.

4. Artificial Intelligence in Higher Education

Based on these models, the researchers' work aims at identifying the technology acceptance of thesis marking done by an AI. Zhang et al. found that assessment for an academic scholarship benefits from a rule-based cloud computing application system [16]. This structured decision-making does not use an AI, but it shows relevant technology acceptance among the affected students [17]. More than 70 scholars were interviewed to obtain information about the adaptability of AI for automatic short answer grading. The results showed that building trust in how the AI conducts the grading is of great value and importance. Trust-building was stronger when the AI proactively explained its decisions itself. In this case, the AI supported the grading, and the final grade was given by a responsible lecturer. There is also research concerning the roles that AI may fill in the future: Kaur et al. state that AI will be of value for performing grading in an academic context [18]. If AI is used in qualitative marking, then the communication and cooperation requirements significantly exceed those of the simple case mentioned above. Current research [19][20] shows that there may well be useful starting points for using AI as a co-decision-maker in academic education. But does this extend to the evaluation of scientific work, for example bachelor theses, and under what conditions? Hence, research question 3: Is the grading process in higher education, explicitly the grading of a bachelor/master thesis, an acceptable field of application for AI (RQ3)?

References

  1. Korteling, J.E.H.; van de Boer-Visschedijk, G.C.; Blankendaal, R.A.M.; Boonekamp, R.C.; Eikelboom, A.R. Human- versus Artificial Intelligence. Front. Artif. Intell. 2021, 4, 622364.
  2. Hill, J.; Randolph Ford, W.; Farreras, I.G. Real conversations with artificial intelligence: A comparison between human–human online conversations and human–chatbot conversations. Comput. Hum. Behav. 2015, 49, 245–250.
  3. Alam, L.; Mueller, S. Cognitive Empathy as a Means for Characterizing Human-Human and Human-Machine Cooperative Work. In Proceedings of the International Conference on Naturalistic Decision Making, Orlando, FL, USA, 25–27 October 2022; Naturalistic Decision Making Association: Orlando, FL, USA, 2022; pp. 1–7.
  4. Shalev-Shwartz, S.; Ben-David, S. Understanding Machine Learning: From Theory to Algorithms; 14th Printing 2022; Cambridge University Press: Cambridge, UK, 2022; ISBN 978-1-107-05713-5.
  5. Alarcon, G.M.; Capiola, A.; Hamdan, I.A.; Lee, M.A.; Jessup, S.A. Differential biases in human-human versus human-robot interactions. Appl. Ergon. 2023, 106, 103858.
  6. Schulz von Thun, F. Miteinander Reden: Allgemeine Psychologie der Kommunikation; Rowohlt: Hamburg, Germany, 1990; ISBN 3499174898.
  7. Ebert, H. Kommunikationsmodelle: Grundlagen. In Praxishandbuch Berufliche Schlüsselkompetenzen: 50 Handlungskompetenzen für Ausbildung, Studium und Beruf; Becker, J.H., Ebert, H., Pastoors, S., Eds.; Springer: Berlin, Germany, 2018; pp. 19–24. ISBN 3662549247.
  8. Bause, H.; Henn, P. Kommunikationstheorien auf dem Prüfstand. Publizistik 2018, 63, 383–405.
  9. Richards, I.A. Practical Criticism: A Study of Literary Judgment; Kegan Paul, Trench, Trubner & Co., Ltd.: London, UK, 1930.
  10. Czernilofsky-Basalka, B. Kommunikationsmodelle: Was leisten sie? Fragmentarische Überlegungen zu einem weiten Feld. Quo Vadis Rom.—Z. Für Aktuelle Rom. 2014, 43, 24–32.
  11. Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. 1989, 13, 319.
  12. García de Blanes Sebastián, M.; Sarmiento Guede, J.R.; Antonovica, A. Tam versus utaut models: A contrasting study of scholarly production and its bibliometric analysis. TECHNO REVIEW Int. Technol. Sci. Soc. Rev./Rev. Int. De Tecnol. Cienc. Y Soc. 2022, 12, 1–27.
  13. Ajibade, P. Technology Acceptance Model Limitations and Criticisms: Exploring the Practical Applications and Use in Technology-Related Studies, Mixed-Method, and Qualitative Researches; University of Nebraska-Lincoln: Lincoln, NE, USA, 2018.
  14. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425.
  15. Granić, A. Technology Acceptance and Adoption in Education. In Handbook of Open, Distance and Digital Education; Zawacki-Richter, O., Jung, I., Eds.; Springer Nature: Singapore, 2023; pp. 183–197. ISBN 978-981-19-2079-0.
  16. Zhang, Y.; Liang, R.; Qi, Y.; Fu, X.; Zheng, Y. Assessing Graduate Academic Scholarship Applications with a Rule-Based Cloud System. In Artificial Intelligence in Education Technologies: New Development and Innovative Practices; Cheng, E.C.K., Wang, T., Schlippe, T., Beligiannis, G.N., Eds.; Springer Nature: Singapore, 2023; pp. 102–110. ISBN 978-981-19-8039-8.
  17. Schlippe, T.; Stierstorfer, Q.; Koppel, M.T.; Libbrecht, P. Artificial Intelligence in Education Technologies: New Development and Innovative Practices; Cheng, E.C.K., Wang, T., Schlippe, T., Beligiannis, G.N., Eds.; Springer Nature: Singapore, 2023; ISBN 978-981-19-8039-8.
  18. Kaur, S.; Tandon, N.; Matharou, G. Contemporary Trends in Education Transformation Using Artificial Intelligence. In Transforming Management Using Artificial Intelligence Techniques, 1st ed.; Garg, V., Agrawal, R., Eds.; CRC Press: Boca Raton, FL, USA, 2020; pp. 89–104. ISBN 9781003032410.
  19. Zhai, X.; Chu, X.; Chai, C.S.; Jong, M.S.Y.; Istenic, A.; Spector, M.; Liu, J.-B.; Yuan, J.; Li, Y.; Cai, N. A Review of Artificial Intelligence (AI) in Education from 2010 to 2020. Complexity 2021, 2021, 8812542.
  20. Razia, B.; Awwad, B.; Taqi, N. The relationship between artificial intelligence (AI) and its aspects in higher education. Dev. Learn. Organ. Int. J. 2023, 37, 21–23.