4. Application
How could DML be facilitated in the context of formal education? DML frameworks conceptualize education quality as the activation of cognitive, affective, and social skills
[21][35]. DML success in physical and online contexts depends on each individual’s idiosyncratic attributes in terms of personality, abilities, perceptions, and goals
[14]. Hence, DML at scale requires adaptation and differentiation to accommodate personalized needs. Education stakeholders need to orchestrate a wide range of activities and experiences to foster deep learning approaches
[14]. From the educator’s perspective, DML is a demanding challenge, as it entails extra effort for sophisticated planning, patience, mindfulness, and diligence
[14]. Information and communication technology could support DML when the latter is used for teaching and learning strategies such as knowledge synthesis, discussion, articulation, cooperation, and reflection
[13][15][37].
DML is even harder to achieve and maintain in online learning where learners’ dynamic emotional and motivational fluctuations are sometimes neglected
[38]. For instance, curiosity, interest, and goal orientation are essential because they directly influence cognitive learning processes
[39]. Quality e-learning aimed at higher-order processes should be organized around learner-centered, meaningful, and demanding activities that help students associate new information with existing knowledge and experiences
[40].
More specifically, DML is influenced by three types of factors: individual learner traits (e.g., personality, skills, emotions, motivation), contextual factors (e.g., teaching methods, assessment, teacher, class), and perceived contextual factors (e.g., workload, usefulness, relevance)
[38]. In the context of distance education, a systematic review has integrated fifteen influencing factors into a blended model for deep and meaningful e-learning in social virtual reality environments
The factors are organized into three classes, applying before and during learning: those relating to the learner (e.g., perceptions, technical skills), to the implemented instructional design as shaped by teacher perceptions and beliefs (e.g., learning theory, environment, activities), and to the technology used (e.g., access, usability).
In this vein, the Community of Inquiry theory was formulated to promote DML in tertiary education
[42]. Deriving from a social constructivist epistemology, its empirically supported premise is that effective distance education experiences should combine three crucial components: teaching, cognitive, and social presence. Teaching presence comprises the responsibilities and actions of educators, such as instructional design, direct instruction, and online facilitation. Cognitive and social presence relate to student behavior. Cognitive presence is “the extent to which the participants in any particular configuration of a community of inquiry are able to construct meaning through sustained communication”
[35]. Social presence is achieved when learners communicate purposively and build collectively shared identities in an environment of trust.
Online learning principally features flexible, self-regulated study. Even when it includes synchronous virtual meetings, i.e., teacher-led tutorials or group work, learner isolation is an inherently inhibiting factor
[37][43][44]. Active, challenging activities, cooperative problem-based tasks, and emotional empowerment are recommended to promote DML
[45]. Additionally, overlooking the importance of internal student incentives in distance education leads to high course attrition rates
[46]. When distance students cannot interact socially with their fellow students, they are more likely to abandon a course
[47]. This effect has been observed on a magnified scale in Massive Open Online Courses (MOOCs): global enrollment in a single MOOC can reach thousands or even hundreds of thousands of students, yet completion rates typically do not exceed ten percent
[14][48].
Excessive coursework is one common, DML-blocking mistake that educators commit despite their benevolent intentions. Too much work inevitably pushes students towards a surface approach to learning due to time pressure. Hence, reducing content is recommended so that learners have time to reflect on the studied subject
[18]. Another universal recommendation for teachers pursuing DML is to allow students to confront their own misconceptions. Learners should be encouraged to demonstrate and compare their constructed meanings and interpretations of the studied domain and to debate with each other
[18].
DML proposes an outcome- or competency-based design approach in e-learning
[49]. Research in distance education connects DML with active learning, peer communication, and collaboration
[50] as well as high levels of teaching and social presence
[14][51]. Meaningful e-learning relies on the quality rather than the quantity of meaningful online interactions of learners with content, instructors, and peers
[52]. These interactions should be designed around realistic experiences necessitating complex knowledge construction tasks with ample cooperation and reflection opportunities
[14][53][54]. Game-based and gamified interventions, such as serious games in both physical and online virtual settings, have produced evidence supporting DML
[55][56]. Distance courses designed with constructivist principles integrating community interactions, open-ended discussions, and team assignments into a flexible curriculum with fluid content achieve higher levels of learner satisfaction and deep learning
[57].
5. Evaluation
Summative student assessment in formal education serves one main purpose: to ascertain the degree to which course participants have achieved the intended learning outcomes. Its format, however, constitutes an indirect hint to students as to what is deemed most valuable to focus on and learn
[58]. Hence, a course aiming at deep, meaningful knowledge development should examine higher-order competencies. Proposed evaluation strategies include authentic, realistic performance tasks, self-evaluation, and peer assessment
[59][60]. Suggested assessment methods to encourage deep learning approaches are catalytic assessment, concept maps, problem-based learning, and e-portfolios
[18][61].
Catalytic assessment starts with a question that students have to tackle
[61]. The quest to find the right answer triggers first individual exploration and then discourse, often in dyads or larger teams, where students present and defend their choices. Catalytic assessment can be applied to large audiences in both physical and online settings, as demonstrated by the peer instruction method
[62].
Although concept maps are learning resources, their creation by students can be a form of assessment
[63]. Concept maps demonstrate how a person cognitively organizes their comprehension of a topic. Building links, hierarchical structures, and branches among related concepts, processes, and categories allows an accurate representation of students’ mental models.
Problem-based learning (PBL) is a learner-centered method that starts with a real, ill-defined problem
[11]. In order to solve the problem, students have to take initiative and direct their own learning in multiple ways: analyze the situation, identify its components, study sources, collect evidence, formulate and test hypotheses, communicate with peers, argue and take decisions, experiment, and validate their beliefs and assumptions.
Learning portfolios are collections of artifacts, nowadays mostly digital (e.g., essays, papers, projects, and other digital files), that students build gradually throughout a course
[59]. Portfolios, similarly to PBL, place the responsibility for and initiative of learning on each learner. Moreover, they strengthen learners’ agency and relatedness through personally meaningful values and connections. E-portfolios have the additional advantage of being transferable to other digital platforms and visible to social networks and other outlets, enabling a seamless transition from educational to professional roles and settings
[64]. In this way, portfolios encourage students’ intrinsic goal orientation.
6. Research Instruments
In an attempt to describe and classify the level, depth, complexity, and quality of student learning and understanding, Biggs and Collis formulated the Structure of the Observed Learning Outcome (SOLO) taxonomy, a hierarchy of five stages for learning outcomes
[65]. These categories are the following, from lowest to highest order:
- Prestructural: Unstructured, inappropriate work.
- Unistructural: Appropriate presentation of one relevant subject aspect.
- Multistructural: Appropriate presentation of several relevant but unconnected subject aspects.
- Relational: Integration of several relevant subject aspects.
- Extended Abstract: Creation of a coherent, holistic approach at a new abstraction level.
The SOLO taxonomy distinguishes two phases in student learning, whether intended or recorded. In the lowest, quantitative phase (stages 1 to 3), learning is mainly superficial and additive. In the qualitative phase (stages 4 and 5), learning results in advanced, deeper understanding, the ability to apply knowledge, reflective abstraction, and transfer. SOLO categories have correspondences with the six levels of Bloom’s revised taxonomy (remembering, understanding, applying, analyzing, evaluating, creating)
[66]. SOLO can be used by educators in the design and assessment stages of education: to formulate learning objectives, techniques, activities, and evaluation methods, and to assess students’ outcomes and performance
[67].
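The quantitative/qualitative phase split described above can be encoded as a small lookup. The sketch below is illustrative only: the level names come from the taxonomy itself, while the `SOLO_LEVELS` list and the `solo_phase` helper are hypothetical names introduced here.

```python
# Illustrative sketch (not from the SOLO literature): the five levels in
# ascending order and a helper mapping a level to its learning phase.
SOLO_LEVELS = [
    "prestructural",      # 1: unstructured, inappropriate work
    "unistructural",      # 2: one relevant subject aspect
    "multistructural",    # 3: several relevant but unconnected aspects
    "relational",         # 4: integration of several relevant aspects
    "extended abstract",  # 5: coherent, holistic approach at a new abstraction level
]

def solo_phase(level: str) -> str:
    """Return 'quantitative' for stages 1-3 and 'qualitative' for stages 4-5."""
    stage = SOLO_LEVELS.index(level.lower()) + 1  # raises ValueError for unknown levels
    return "quantitative" if stage <= 3 else "qualitative"
```

For example, `solo_phase("Relational")` returns `"qualitative"`, while `solo_phase("multistructural")` returns `"quantitative"`.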
DML can be researched with both qualitative and quantitative methods. A qualitative DML research approach is phenomenography
[68]. It constitutes a new research paradigm that aims to interpret differences in thought and experience based on individuals’ descriptions of their understanding
[69].
Validated quantitative research instruments that measure DML through self-report include the Study Process Questionnaire (SPQ)
[70], the Approaches and Study Skills Inventory for Students (ASSIST)
[71], the Motivated Strategies for Learning Questionnaire (MSLQ)
[72], and the Community of Inquiry framework survey
[73].
The SPQ, and more specifically the Revised Two-Factor Study Process Questionnaire (R-SPQ-2F), is an instrument developed by Biggs that measures two factors: deep and surface study approaches
[70]. It consists of twenty items, e.g., “my aim is to pass the course while doing as little work as possible” (surface study approach) and “I feel that virtually any topic can be highly interesting once I get into it” (deep study approach). Students’ replies are scored on a five-point scale from “this is never or very rarely true of me” to “this is always or almost always true of me”. R-SPQ-2F can be combined with the SOLO taxonomy to link student study strategies to learning outcomes
[74].
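The two-factor scoring can be sketched in a few lines. Note that the item-to-factor key below is an assumption for illustration (the commonly cited key assigns items 1, 2, 5, 6, 9, 10, 13, 14, 17, and 18 to the deep factor and the remaining ten to the surface factor) and should be verified against the published instrument [70]; the function name is hypothetical.

```python
# Hedged sketch of R-SPQ-2F scoring: sum a student's 1-5 Likert ratings into
# a deep total and a surface total. The item key below is the commonly cited
# assignment and is an assumption here -- verify against the instrument [70].
DEEP_ITEMS = {1, 2, 5, 6, 9, 10, 13, 14, 17, 18}
SURFACE_ITEMS = {3, 4, 7, 8, 11, 12, 15, 16, 19, 20}

def score_rspq2f(responses: dict[int, int]) -> dict[str, int]:
    """Map {item number: rating on 1-5} to the two factor totals (10-50 each)."""
    if set(responses) != DEEP_ITEMS | SURFACE_ITEMS:
        raise ValueError("expected ratings for all 20 items")
    if any(not 1 <= r <= 5 for r in responses.values()):
        raise ValueError("ratings must be on the 1-5 scale")
    return {
        "deep": sum(responses[i] for i in DEEP_ITEMS),
        "surface": sum(responses[i] for i in SURFACE_ITEMS),
    }
```

Under these assumptions, a student who rates every deep item 4 and every surface item 2 would score deep = 40 and surface = 20, indicating a predominantly deep study approach.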
ASSIST is a self-report questionnaire that reflects students’ relative preferences for three studying approaches: deep, surface, and strategic, stemming from the work of Entwistle and Ramsden
[71]. It contains three sections, the main one being the Revised Approaches to Studying Inventory (RASI). RASI includes 52 items, e.g., “I tend to read very little beyond what is actually required to pass” (surface approach), “Before tackling a problem or assignment, I first try to work out what lies behind it” (deep approach), and “I organize my study time carefully to make the best use of it” (strategic approach). Students are invited to mark their degree of (dis)agreement on a five-level Likert-type scale: agree, agree somewhat, unsure, disagree somewhat, disagree.
The MSLQ is based on Pintrich’s socio-cognitive assumption that learning depends primarily on the dynamic and contextual interplay between cognitive learning strategies and motivational orientation
[75]. The MSLQ measures 15 different motivation and learning strategy scales that can be used collectively or separately, e.g., intrinsic and extrinsic goals, self-efficacy, critical thinking, self-regulation, and management of resources
[72]. It contains 81 statements that students rate from 1 (not at all true of me) to 7 (very true of me), e.g., “I’m confident I can learn the basic concepts taught in this course” and “When studying for this course, I often try to explain the material to a classmate or friend”.
The Community of Inquiry framework survey was developed to measure the three primary scales of the studied model: cognitive, teaching, and social presence
[73]. It comprises 34 items, i.e., statements such as “The instructor clearly communicated important course goals” and “Course activities piqued my curiosity”. Responses are scored from 0 (strongly disagree) to 4 (strongly agree).