1. Introduction
In a rapidly evolving world, the integration of technology has revolutionized many aspects of our everyday lives, including education. Both educators and learners face the challenge of successfully embracing technological means to unlock interactive learning experiences and personalized instruction. In this context, the term educational technology emerged, along with the subsequent need for its interpretation. The Association for Educational Communications and Technology (2008) defines educational technology as “the study and ethical practice of facilitating learning and improving performance by creating, using and managing appropriate technological processes and resources”
[1]; that is, the systematic application of teaching methods, instructional techniques, multimedia, tools, and technology to continuously enhance learning, teaching, and academic outcomes. Huang
[2], emphasizing the various contexts in which learning might take place, states that educational technology encompasses the utilization of tools, technologies, processes, resources, and methods aimed at enhancing learning encounters across diverse settings, including formal, informal, non-formal, lifelong, on-demand, workplace, and just-in-time learning. This field has progressed from the early adoption of teaching tools to a rapid expansion in recent times, now incorporating a wide range of devices and approaches such as mobile technologies, virtual and augmented realities, simulations, immersive environments, collaborative learning, social networking, cloud computing, flipped classrooms, and various other innovations. A simple, concise, recent definition of educational technology refers to the application of technology in diverse educational environments, aiming to optimize learning and enhance educational achievements
[3].
Educational technology is widely employed in every aspect of the learning process, since it seems more appealing to students and teachers than traditional means. It is found at all educational stages, from preschool to university, in many types and forms. Specifically, technology usage in education has proved beneficial for students’ motivation and engagement [4,5,6,7], self-confidence [4], increased student understanding [4], increased instructional differentiation [4,5], increased exposure to more current content material [4,6], and accessible and located learning [5].
However, the use of technological tools is not a panacea for achieving every single learning objective. Even though it is often referred to as a means of maximizing learning outcomes while reducing costs, further research is needed to estimate its true cost-effectiveness in each context [8]. The “technology usage–improved learning outcomes” relationship is mediated and influenced by many factors, such as users’ learning styles [9,10], frequency of the technological system’s use [11], instructional methods [12,13], information and communication technologies (ICT) competency, and the technological system’s usability.
In 1998, the International Organization for Standardization established a widely accepted definition of usability. According to this definition, usability refers to the extent to which a product or system can be effectively, efficiently, and satisfactorily used by its users to achieve specific objectives in a particular environment
[14]. Bevan et al.
[15] emphasize usability as a result of interaction rather than an intrinsic characteristic of the product itself. Researchers initially focused primarily on the first two, objective dimensions (effectiveness and efficiency), but afterwards the need to evaluate the third, subjective dimension (satisfaction) arose
[16].
At this point, it is of paramount importance to highlight the distinction between usability and perceived usability. Even though they are strongly related concepts, they have some key differences. On the one hand, perceived usability indicates how easy to use a technology is perceived to be by its intended users. It is a subjective measure, and it can be influenced by many factors such as user expectations, ICT competency, past experiences, personal characteristics, and preferences. On the other hand, usability is linked to the tangible, actual degree of convenience and user-friendliness exhibited by a technological system. Thus, perceived usability is more related to the user’s perception and experience, while usability is more related to the technical aspects of the technology.
Perceived usability (satisfaction) seems to play an important role in students’ learning gain [17,18,19,20,21]. A user-friendly interface enables students and teachers to interact directly and effectively with a technological system without having to spend cognitive resources on learning the system, allowing them to focus on the learning content instead.
Literature findings highlight the disconnection between human–computer interaction research and technology-enhanced learning, as reflected in the lack of usability frameworks
[18]. Researchers reviewed the current body of scholarly works regarding the utilization of educational technology and how it is assessed in terms of perceived usability. Benchmarks for educational technology have already been created using the System Usability Scale (SUS) [22], the predominant instrument employed to gauge perceived usability. By conducting a systematic review using the PSSUQ/CSUQ, researchers can compare the results and findings from multiple studies that have used the PSSUQ and CSUQ in educational technology. This comparison allows for a deeper understanding of the factors affecting usability and user experience across different technological interventions. It is noteworthy that SUS and PSSUQ/CSUQ scores have proved to be highly correlated in recent studies [11,23].
Consequently, the significance of usability in educational technology is underscored, and all parties involved are aware of its meaning for technology-enhanced learning. In addition, access is now provided to a concentrated framework that includes many variables, such as educational stage, type of participant, study date, age of participants, and subject being learned. A comprehensive review of the existing literature using the PSSUQ and CSUQ ensures that decisions related to the implementation and improvement of educational technology are based on evidence and empirical data rather than anecdotal or subjective assessments. As a result, the development and improvement of technological tools and systems used for educational purposes can now rely on solid research data.
2. Post-Study System Usability Questionnaire
The Post-Study System Usability Questionnaire (PSSUQ) is an evaluation tool used to assess perceived usability, and it does not require a license for its usage. Its first version consists of 19 items (short version: 16 items) and utilizes a 7-point Likert scale where lower ratings signify greater degrees of perceived usability (satisfaction). In numerous studies, factor analysis has consistently identified three distinct factors (subscales) known as System Usefulness, Information Quality, and Interface Quality. It is highly reliable with a Cronbach’s alpha that spans from 0.83 to 0.96
[24]. The Computer System Usability Questionnaire (CSUQ) was developed afterwards with identical items and similar wording, using the present tense instead of the PSSUQ’s past tense. It is sensitive both to many variables (type of participant, age of participant, type of technology assessed, and users’ years of experience with and exposure to computer systems) and to small sample sizes. The PSSUQ/CSUQ covers all aspects of usability: effectiveness, efficiency, satisfaction, and learnability [25]. They have been described as well-validated tools used in various applications to evaluate the usability of educational technology systems [26].
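To make the scoring procedure concrete, the following Python sketch shows one way per-item 7-point responses could be aggregated into the three subscales and an overall score. The item-to-subscale grouping assumed here (items 1–8 for System Usefulness, 9–15 for Information Quality, 16–18 for Interface Quality, and all items for the overall score) follows a common layout of the 19-item version and should be verified against the exact questionnaire version in use; the response values are fabricated for illustration only.

```python
from statistics import mean

# Item-to-subscale mapping assumed for the 19-item PSSUQ/CSUQ; verify against
# the exact questionnaire version before use.
SUBSCALES = {
    "System Usefulness": range(1, 9),      # items 1-8
    "Information Quality": range(9, 16),   # items 9-15
    "Interface Quality": range(16, 19),    # items 16-18
}

def score_pssuq(responses: dict) -> dict:
    """Average 7-point responses (1 = strongly agree ... 7 = strongly disagree).

    Lower scores indicate higher perceived usability; unanswered items are
    skipped, since the questionnaire tolerates missing responses.
    """
    scores = {
        name: mean(responses[i] for i in items if i in responses)
        for name, items in SUBSCALES.items()
    }
    scores["Overall"] = mean(responses.values())  # mean of all answered items
    return scores

# Fabricated example: one respondent's answers to all 19 items.
answers = {**{i: 2 for i in range(1, 9)},
           **{i: 3 for i in range(9, 16)},
           **{i: 2 for i in range(16, 20)}}
print(score_pssuq(answers))
# e.g. System Usefulness = 2, Information Quality = 3, Interface Quality = 2, Overall ≈ 2.37
```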
There are several tools, techniques, and methodologies for evaluating usability. The rationale for using the PSSUQ (Post-Study System Usability Questionnaire) and CSUQ (Computer System Usability Questionnaire) scales lies in their effectiveness and extensive use as reliable tools for evaluating usability. Specifically, researchers employing these scales can measure users’ satisfaction with their interaction with a technological system and precisely identify the areas that need improvement in terms of usability.
Standardized usability questionnaires are widely accepted. However, they add value only when an interpretive framework is provided. Regarding PSSUQ/CSUQ scales, Sauro and Lewis
[27] aggregated 21 studies using the PSSUQ and provided benchmarks for the three subscales and the overall score: the benchmark means are 2.80 for System Usefulness, 3.02 for Information Quality, 2.49 for Interface Quality, and 2.82 for the overall score. Researchers and Human–Computer Interaction (HCI) experts can interpret a system’s perceived usability by examining its PSSUQ/CSUQ score in light of these benchmarks. Since the PSSUQ/CSUQ is highly correlated with SUS scores, it can also support the originally published SUS norms after being converted to a 0–100-point scale
[28].
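As an illustration of how these benchmarks could be applied, the sketch below compares a hypothetical set of subscale means against the values reported by Sauro and Lewis [27] and rescales a 7-point, lower-is-better mean to a 0–100, higher-is-better range. The rescaling is a simple linear transformation assumed here purely for illustration; it is not necessarily the conversion used when comparing against SUS norms [28].

```python
# Benchmarks aggregated by Sauro and Lewis [27] (7-point scale, lower = better).
BENCHMARKS = {
    "System Usefulness": 2.80,
    "Information Quality": 3.02,
    "Interface Quality": 2.49,
    "Overall": 2.82,
}

def compare_to_benchmarks(scores):
    """Flag each (sub)scale mean as better or worse than the published benchmark mean."""
    return {
        name: ("better than benchmark" if scores[name] < ref else "at or worse than benchmark")
        for name, ref in BENCHMARKS.items()
    }

def to_0_100(score):
    """Rescale a 7-point, lower-is-better mean to a 0-100, higher-is-better range.

    This linear transformation is an illustrative assumption, not necessarily
    the official conversion used to compare against SUS norms [28].
    """
    return (7 - score) / 6 * 100

# Fabricated subscale means for a hypothetical system under evaluation.
example = {"System Usefulness": 2.10, "Information Quality": 3.30,
           "Interface Quality": 2.50, "Overall": 2.60}
print(compare_to_benchmarks(example))
print(round(to_0_100(2.82), 1))  # the benchmark overall mean maps to about 69.7
```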
Tullis and Stetson
[29] found that 12 participants produced results equivalent to those of a larger sample in 90% of the cases. It has also been noted [30] that, as a questionnaire, the CSUQ uses positively worded statements. This may simplify the answering procedure, but it can also cause a kind of response bias.
As the PSSUQ is a common and widely used usability tool, it has been translated into many languages, mainly European [31]. There are versions in Greek [32], Arabic [31], Portuguese [33], Turkish [34], French [35], and Spanish [36].
Regarding demographics, gender does not seem to significantly affect PSSUQ scores in a dataset of 21 studies on dictation systems
[24]. Alhadreti
[11] examined a possible correlation between CSUQ ratings of the Blackboard platform and the age of participants in a dataset of 187 scholars affiliated with Umm Al-Qura University in Saudi Arabia. He found no statistically significant correlation between CSUQ score and age (p = 0.820, ns). Sonderegger et al.
[37] employed a 2 × 2 between-subjects design in a quasi-experiment using the PSSUQ, with two age groups (old and young). Between the two age groups, in a sample of 60 subjects, no significant difference was found (F < 1). In conclusion, the PSSUQ/CSUQ tools seem to be easily generalizable and able to provide stable reliability across different implementations
[38].
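As a final illustration, the kind of demographic check reported above (for example, whether participant age relates to overall CSUQ scores) could be run on one's own data with a simple correlation test; the variable values below are fabricated placeholders, not data from the cited studies.

```python
# Illustrative check of whether a demographic variable (here, age) correlates
# with overall PSSUQ/CSUQ scores; all data below are fabricated.
from scipy.stats import pearsonr

ages = [21, 25, 34, 41, 48, 53, 60, 65]
overall_scores = [2.4, 2.9, 2.2, 3.1, 2.7, 2.5, 3.0, 2.6]

r, p = pearsonr(ages, overall_scores)
print(f"r = {r:.2f}, p = {p:.3f}")
# A p-value well above 0.05 (as in the Blackboard study, p = 0.820) would suggest
# no statistically significant relationship between age and perceived usability.
```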