Ortega, E.; Martín, B.; González-Ávila, S. Student and Instructor Ratings in Geographic Information Systems. Encyclopedia. Available online: https://encyclopedia.pub/entry/54466 (accessed on 17 May 2024).
Student and Instructor Ratings in Geographic Information Systems

Geographic information system (GIS) education empowers engineering students to make informed decisions, integrate comprehensive data, and communicate effectively through maps and visualizations. In GIS education, it is common to employ problem-based learning, which can benefit from the advantages of peer assessment methods.

Keywords: geography education; problem-based learning; geographic information systems; peer assessment

1. Background

Geography, as an academic discipline within the spatial sciences, is devoted to the examination of the reciprocal influences between the environment and human societies. Geography investigates dimensions such as scale, movement, regions, human–environment interaction, location, and place. Essentially, geography functions as a comprehensive domain that addresses the interplay of environmental, social, and economic aspects [1]. For this reason, it is a fundamental discipline for any engineer who must solve problems related to territory and people during professional activities. The assimilation of geographical concepts by students is enhanced through the integration of computer tools based on geographic information systems (GISs) [2]. A GIS can be defined as a computer system used for the input, storage, transformation, visualization, mapping, and analysis of spatial and non-spatial data, which must necessarily have coordinates that position them at a location on the Earth [3]. Because a GIS can analyze spatial data alongside the associated quantitative and qualitative information, and because of its layered structure, the study of geographic information is based on the superimposition of layers in order to establish relationships between the information they contain [4][5]. These characteristics make GISs a tool that integrates a multitude of disciplines, and it is essential in territorial planning processes [6], environmental impact assessments [7], transport modeling and its effects [8], urban mobility [9], allocation of land uses [10], and landscape analysis [11], among others.
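The layer-superimposition idea described above can be sketched in a few lines. The following is an illustrative example only (the grids, criteria, and thresholds are invented, not taken from the entry): two co-registered raster layers covering the same area are combined cell by cell in a simple suitability overlay.

```python
# Illustrative sketch (values invented for the example): two
# co-registered raster layers over the same 3x3 grid, stored as
# plain Python lists of rows.
slope = [[5, 12, 30],
         [8, 25, 3],
         [40, 2, 10]]            # slope in degrees
protected = [[0, 0, 1],
             [0, 1, 0],
             [1, 0, 0]]          # 1 = cell lies in a protected area

# Overlay: combine the layers cell by cell. Here a cell is deemed
# suitable for development only where both criteria hold: slope
# below 15 degrees and outside any protected area.
suitable = [[s < 15 and not p for s, p in zip(srow, prow)]
            for srow, prow in zip(slope, protected)]

for row in suitable:
    print(row)
```

Real GIS packages perform the same operation on rasters with millions of cells, or on vector geometries via intersection, but the underlying logic of relating layers is the one shown here.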
Processing, classifying, and mapping data using GISs, i.e., GIS knowledge and skills, spatial thinking, and problem-solving, are key competencies for technically trained bachelor’s and master’s students [12]. For this reason, engineering-related bachelor’s and master’s curricula incorporate GISs, data processing, and mapping as important training competencies for students. The acquisition of these competencies will enable students to solve a wide range of problems related to land management [6], such as those mentioned above. The use of GISs also motivates students to develop relevant skills as they recognize the contributions of spatial analysis and geographical perspectives in their training [13]. Ref. [14] found that learning GISs helped students improve their spatial thinking, which in turn strongly correlated with their performance in the GIS course. Today, GISs are taught in departments such as geodesy, geography, photogrammetry, ecology, natural resources, forestry, civil engineering, landscaping, and urban design and planning [15].
In summary, the utilization of geographic data and spatial analysis has become indispensable in the domain of engineering pertaining to territory and the environment. Notably, GISs have established themselves in recent years as a foundational tool for engineers in this field. Consequently, GIS education holds paramount significance in engineering and the natural sciences, as it equips students with essential skills in spatial analysis, data integration, visualization, resource management, and infrastructure planning. It facilitates informed decision-making, comprehensive data integration, and effective communication through maps and visualizations.

2. Teaching of Geographic Information Systems

The teaching of GISs has a markedly practical nature. The instructor guides the students by explaining the different tools available and their application to specific cases, which must then be resolved by the students. Teaching GISs is complex; instructors and students often struggle with technical obstacles, file management, and complex software operations [16]. This can lead to what [17] calls “buttonology”, i.e., students focusing on learning how to point and click with the mouse to complete certain functions rather than engaging in the intended reasoning. The course in which the subject is taught can be a barrier, as students must have knowledge of other subjects to enable them to use the power of GISs to solve problems. This difficulty can be solved by incorporating different GIS subjects throughout the bachelor’s or master’s degree, starting with basic levels and increasing in difficulty to show more specific applications. The instructor must also ensure that students do not simply copy what the instructor has done but are able to understand the complexity of the spatial relationships between the data and identify the appropriate tools to solve each problem. The main barrier to achieving this is the limited number of hours available for teaching these subjects [18], which require a very high practical use of software. One solution is to provide students with practical cases to solve outside the classroom that have a positive impact on their final grade.

3. New Rating Techniques: Student Peer Assessment

To guarantee that students take an active role in the teaching–learning process, the learning environment must be interactive and cooperative. The instructor is responsible for designing a methodology to optimize learning, in accordance with the established objectives. The teaching methodology consists of a set of methods and techniques used by the teaching staff to undertake training actions and transmit knowledge and competencies designed to achieve certain objectives, in which the instructor teaches the student to learn, and to learn with a critical spirit throughout life [19][20]. The teaching–learning model requires aligning assessment methods and systems with competencies [21]. The rating or assessment system must therefore be useful both for students to learn and for instructors to improve their teaching [22][23]. Competence-oriented assessment involves four fundamental aspects [23]: (1) it must be a planned process that is coherent with the competencies to be achieved, which are in turn aligned with the professional activity; (2) it must specify the level of achievement or performance of competencies that is considered adequate; (3) it must be coherent with students’ active learning; and (4) it must be formative and continuous.
Assessment is a key aspect of any learning process [24]. It conditions what and how students learn and is the most useful tool instructors have to influence how students respond to the teaching and learning process [24][25][26]. The assessment process is based on collecting information by different means (written, oral, or observation), analyzing that information, making a judgement on it, and then reaching decisions based on that judgement. It is an action that continues throughout the teaching–learning process. Its functions are formative, regulatory, pedagogical, and communicative. The communicative function is fundamental because it contributes to the feedback of information between students and instructors, among the students themselves, and between instructors and students; that is, it facilitates interaction and cooperation.
Some authors consider that traditional assessment methods have become obsolete as they promote passivity and should be replaced by others that encourage dynamic learning [27]. In this context, it is increasingly common for instructors to incorporate new methods for assessing courses, despite relinquishing their dominant role in a competence that was traditionally their exclusive domain [28]. Students’ participation in the assessment method gives them the necessary skills to objectively analyze other types of documents [29].
These new assessment methods include self-assessment, in which students assess their own work; peer assessment, where students assess their peers; and co-assessment, in which both students and instructors score [30]. Peer assessment consists of “a process by which students evaluate their classmates in a reciprocal manner, applying assessment criteria” [31], or can be considered “a specific form of collaborative learning in which learners make an assessment of the learning process or product of all or some students or group of students” [32]. Peer assessment is therefore perfectly adapted to the framework of the European Higher Education Area, where assessment has become another activity within the teaching–learning process and contributes to the development of competencies [33]. Its implementation is very open: it can be anonymous or not, students can be assessors and/or assessed, and it can be carried out quantitatively or qualitatively and with or without feedback [34].
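One of the open design choices mentioned above (anonymous, reciprocal, quantitative) can be made concrete with a small sketch. Everything below is hypothetical, invented for illustration: student names, the rotation scheme, and the number of reviews per student are not from the cited studies.

```python
import random

# Hypothetical sketch: assign each student a fixed number of anonymous
# peers to assess. Rotating a shuffled roster guarantees that nobody
# reviews themselves and that everyone both gives and receives the
# same number of reviews (the "reciprocal" design described above).
def assign_reviewers(students, reviews_per_student=2, seed=42):
    roster = list(students)
    rng = random.Random(seed)       # fixed seed keeps the draw auditable
    rng.shuffle(roster)
    n = len(roster)
    assignments = {s: [] for s in roster}
    for offset in range(1, reviews_per_student + 1):
        for i, reviewer in enumerate(roster):
            assignments[reviewer].append(roster[(i + offset) % n])
    return assignments

pairs = assign_reviewers(["Ana", "Ben", "Carla", "Dev", "Eli"])
for reviewer in sorted(pairs):
    print(reviewer, "assesses", pairs[reviewer])
```

The rotation keeps the workload balanced with little instructor effort; swapping the fixed seed for a per-course value would change the pairings while preserving these guarantees.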

4. Current Practice in Peer Rating Assessment

Effective peer assessment requires the use of instruments to guide the students. One of the most common assessment tools is rubrics [35]. Rubrics are scoring guides that describe the specific characteristics of a task at various levels of performance in order to assess its execution [36]. The rubric is similar to a list of specific and essential criteria against which knowledge and/or competencies are assessed [37] in order to establish a gradation of the different criteria or elements that make up the task, competence, or content. To be helpful to students, it is essential for the instructor to offer clear instructions, provide a list of aspects to consider when determining the overall grade, and use a numerical scoring system for assessment [35]. This rating method is particularly suitable for courses with predominantly practical content and that rely on active student participation. Peer assessment has been applied in many disciplines such as computer applications [38][39], developmental psychology, music didactics and the psychology of language and thought [30], physical chemistry [40], environmental education [33], linear algebra [41], and online training [42], to name a few examples. While engaged in this evaluation process, students increase their capacity for analysis and synthesis, their organizational and professional communication skills, and their critical judgement [43][44][45], which they will later transfer to their own work [46]. Studies suggest that this enhancement in critical judgement correlates with improved writing skills; additionally, the more critical students were of their peers, the higher the grades they achieved [43]. Peer assessment can also enhance students’ performance [47], engagement with the course [48], and the quality of work they present [45]. Ref. [49] reported that the integration of peer assessment enabled over 60% of students to become more aware and reflective and learn from their mistakes.
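A rubric of the kind described above is, in essence, a table of criteria, performance-level descriptors, and weights. The sketch below uses a hypothetical rubric for a GIS map exercise (criteria, weights, and descriptors are invented for the example, not taken from the cited studies) and shows how the levels an assessor selects combine into a numerical grade:

```python
# Hypothetical rubric for a GIS map exercise. Each criterion carries a
# weight and descriptors for three performance levels; the grade is the
# weighted sum of the levels the assessor selects.
rubric = {
    "data preparation": {"weight": 0.3,
                         "levels": {1: "layers missing", 2: "minor gaps", 3: "complete"}},
    "spatial analysis": {"weight": 0.4,
                         "levels": {1: "wrong tools", 2: "partly correct", 3: "appropriate tools"}},
    "map layout":       {"weight": 0.3,
                         "levels": {1: "unreadable", 2: "adequate", 3: "clear and labelled"}},
}

def score(selected_levels, rubric, max_level=3, scale=10):
    """Turn the levels chosen by an assessor into a grade on a 0-10 scale."""
    total = sum(rubric[c]["weight"] * selected_levels[c] / max_level
                for c in rubric)
    return round(total * scale, 1)

grade = score({"data preparation": 3, "spatial analysis": 2, "map layout": 3}, rubric)
print(grade)  # 8.7
```

Making the descriptors explicit in this way is what gives student assessors the "list of aspects to consider" and the numerical scoring system that the literature recommends.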
This method allows students to receive more diverse feedback compared to feedback solely from the instructor. Its application in evaluating work carried out by students in a team also yields benefits. Ref. [50] concluded that the more abrupt the decrease in peer assessments, the more pronounced the rate of change in effort. Ref. [51] showed that including peer assessment in students’ assignments improves their perception of fairness in the rating process, as it is not solely in the hands of instructors. Other studies state that students find peer review useful [52][53] and they hold a consistent standard when giving scores to peers [54].
By receiving peer ratings, students also encountered distributive and procedural justice [55]; following the implementation of peer assessment, complaints from students about fairness in grading decreased, and their overall opinion about the course improved [30]. Instructors also obtain benefits from the application of peer review, in the form of greater student motivation and learning [40] and an increase in the competencies achieved with little cost in time and effort for instructors as the students carry out the task [30]. In addition, no significant differences were found between the ratings assigned by the instructors and those assigned by peers [55]. For all of these reasons, peer rating assessment is a useful and accurate tool that should be used as another learning aid [28], provided that the students feel comfortable and engaged in the learning process [56] and that it is designed and implemented in a thoughtful way to ensure its effectiveness [57].
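The kind of instructor-versus-peer comparison reported in [55] can be illustrated with two simple agreement statistics. The scores below are made up for the example; they are not data from the cited studies.

```python
from statistics import mean

# Illustrative agreement check (scores invented): compare instructor
# grades with the mean peer grade awarded to the same pieces of work.
instructor = [7.0, 8.5, 6.0, 9.0, 7.5]
peer_mean = [7.2, 8.0, 6.5, 8.8, 7.6]

diffs = [p - i for p, i in zip(peer_mean, instructor)]
bias = mean(diffs)                   # systematic peer-vs-instructor gap
mad = mean(abs(d) for d in diffs)    # average size of the disagreement
print(f"mean difference: {bias:+.2f}, mean absolute difference: {mad:.2f}")
```

A near-zero mean difference with a small mean absolute difference would indicate the kind of agreement between the two assessment sources that the studies describe; formal analyses would add a significance test or a correlation coefficient.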
However, it should be noted that the weaknesses of this assessment method may raise doubts about its reliability and validity and cause some educators to have little confidence in its use [58]. These weaknesses include the need for prior training, the time commitment required of students, and the difficulty of choosing explicit, clear, and simple criteria [59]. Previous experience has shown that the application of these assessment methods poses several problems, mainly related to the objectivity of the grades assigned [29]. Some authors highlight students’ possible misgivings about being assessed by their peers due to bias in the rating [41]; the influence of personal relationships on the grades assigned [53], which often causes them to be high [60]; the timing of the evaluation [35], with grades tending to be lower if the assessor has already presented his/her own work [61]; the view of some students that their peers lack the skills required to evaluate them [62]; or students’ belief that they themselves are not experts [57]. Some authors also argue that in the assessment of teamwork, individuals contribute effort in order to enhance others’ perceptions of them and thereby influence the ratings [63]. All of this leads some authors to consider the technique inadequate, as the resulting scores can differ greatly from those assigned by the instructor, who may be either kinder to the students [29] or the reverse [64]; however, other studies show the opposite, with no major differences between the two assessment sources [41][65]. Student participation in the assessment process must therefore be supervised [29] and blind in order to reduce pressure and lack of objectivity [34].

References

  1. Kerski, J.J. The role of GIS in Digital Earth education. Int. J. Digit. Earth 2008, 1, 326–346.
  2. Artvinli, E. The Contribution of Geographic Information Systems (GIS) to Geography Education and Secondary School Students’ Attitudes Related to GIS. Educ. Sci. Theory Pract. 2010, 10, 1277–1292.
  3. Burrough, P.A. Principles of Geographic Information Systems for Land Resources Assessment; Clarendon: Hong Kong, China, 1986.
  4. Heywood, I.; Cornelius, S.; Carver, S. An Introduction to Geographical Information Systems; Prentice Hall: Hoboken, NJ, USA, 2002.
  5. Longley, P.A.; Goodchild, M.F.; MacGuire, D.J.; Rhind, D.W. Geographical Information Systems: Principles, Techniques, Applications, and Management; Wiley and Sons: Hoboken, NJ, USA, 1999.
  6. Sikder, I.U. Knowledge-based spatial decision support systems: An assessment of environmental adaptability of crops. Expert Syst. Appl. 2009, 36, 5341–5347.
  7. Rodriguez-Bachiller, A.; Glasson, J. Expert Systems and GIS for Impact Assessment; Taylor and Francis: Abingdon, UK, 2004.
  8. Monzón, A.; López, E.; Ortega, E. Has HSR improved territorial cohesion in Spain? An accessibility analysis of the first 25 years: 1990–2015. Eur. Plan. Stud. 2019, 27, 513–532.
  9. Ortega, E.; Martín, B.; De Isidro, Á.; Cuevas-Wizner, R. Street walking quality of the ‘Centro’ district, Madrid. J. Maps 2020, 16, 184–194.
  10. Santé-Riveira, I.; Crecente-Maseda, R.; Miranda-Barrós, D. GIS-based planning support system for rural landuse allocation. Comput. Electron. Agric. 2008, 63, 257–273.
  11. Martín, B.; Ortega, E.; Martino, P.; Otero, I. Inferring landscape change from differences in landscape character between the current and a reference situation. Ecol. Indic. 2018, 90, 584–593.
  12. Schulze, W.; Kanwischer, D.; Reudenbach, C. Essential competences for GIS learning in higher education: A synthesis of international curricular documents in the GISandT domain. J. Geogr. High. Educ. 2013, 37, 257–275.
  13. Mkhongi, F.A.; Musakwa, W. Perspectives of GIS education in high schools: An evaluation of uMgungundlovu district, KwaZulu-Natal, South Africa. Educ. Sci. 2020, 10, 131.
  14. Lee, J.; Bednarz, R. Effect of GIS Learning on Spatial Thinking. J. Geogr. High. Educ. 2009, 33, 183–198.
  15. Demirci, A.; Kocaman, S. Türkiye’de coğrafya mezunlarının CBS ile ilgili alanlarda istihdam edilebilme durumlarının değerlendirilmesi. Marmara Coğraf. Derg. 2007, 16, 65–92.
  16. Radinsky, J.; Hospelhorn, E.; Melendez, J.W.; Riel, J.; Washington, S. Teaching American migrations with GIS census web maps: A modified “backwards design” approach in middle-school and college classrooms. J. Soc. Stud. Res. 2014, 38, 143–158.
  17. Marsh, M.J.; Golledge, R.G.; Battersby, S.E. Geospatial concept understanding and recognition in G6–College Students: A preliminary argument for minimal GIS. Ann. Assoc. Am. Geogr. 2009, 97, 696–712.
  18. Johansson, T. GIS in Instructor Education—Facilitating GIS Applications in Secondary School Geography. In Proceedings of the ScanGIS’2003, The 9th Scandinavian Research Conference on Geographical Information Science, Espoo, Finland, 4–6 June 2003.
  19. Fernández-March, A. Metodologías activas para la formación de competencias. Educ. Siglo XXI 2006, 24, 35–56.
  20. Rodríguez-Jaume, M.J. Espacio Europeo de Educación Superior y Metodologías Docentes Activas: Dossier de Trabajo; Universidad de Alicante: Alicante, Spain, 2009.
  21. Castejón, F.J.; Santos, M.L. Percepciones y dificultades en el empleo de metodologías participativas y evaluación formativa en el Grado de Ciencias de la Actividad Física. Rev. Electrón. Interuniv. Form. Profr. 2011, 14, 117–126.
  22. Antón, M.A. Docencia Universitaria: Concepciones y Evaluación de los Aprendizajes. Estudio de Casos. Ph.D. Thesis, Universidad de Burgos, Burgos, Spain, 2012.
  23. San Martín Gutiérrez, S.; Torres, N.J.; Sánchez-Beato, E.J. La evaluación del alumnado universitario en el Espacio Europeo de Educación Superior. Aula Abierta 2016, 44, 7–14.
  24. Double, K.S.; McGrane, J.A.; Hopfenbeck, T.N. The impact of peer assessment on academic performance: A meta-analysis of control group studies. Educ. Psychol. Rev. 2020, 32, 481–509.
  25. Brown, S.; Pickford, R. Evaluación de Habilidades y Competencias en Educación Superior; Narcea: Asturias, Spain, 2013.
  26. Panadero, E.; Alqassab, M. An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading. Assess. Eval. High. Educ. 2019, 44, 1253–1278.
  27. Zmuda, A. Springing into active learning. Educ. Leadersh. 2008, 66, 38–42.
  28. Rodríguez-Esteban, M.A.; Frechilla-Alonso, M.A.; Sáez-Pérez, M.P. Implementación de la evaluación por pares como herramienta de aprendizaje en grupos numerosos. Experiencia docente entre universidades. Adv. Build. Educ. 2018, 2, 66–82.
  29. Blanco, C.; Sánchez, P. Aplicando Evaluación por Pares: Análisis y Comparativa de distintas Técnicas. In Proceedings of the Actas Simposio-Taller Jenui 2012, Ciudad Real, Spain, 1–8 July 2012.
  30. Bernabé Valero, G.; Blasco Magraner, S. Actas de XI Jornadas de Redes de Investigación en Docencia Universitaria: Retos de Futuro en la Enseñanza Superior: Docencia e Investigación para Alcanzar la Excelencia Académica; Universidad de Alicante: Alicante, Spain, 2013; pp. 2057–2069.
  31. Sanmartí, N. 10 Ideas Clave: Evaluar para Aprender; Graó: Castellón, Spain, 2007.
  32. Ibarra, M.; Rodríguez, G.; Gómez, R. La evaluación entre iguales: Beneficios y estrategias para su práctica en la universidad. Rev. Educ. 2012, 359, 206–231.
  33. Bautista-Cerro, M.J.; Murga-Menoyo, M.A. La evaluación por pares: Una técnica para el desarrollo de competencias cívicas (autonomía y responsabilidad) en contextos formativos no presenciales. Estudio de caso. In XII Congreso Internacional de Teoría de la Educación (CITE2011); Universitat de Barcelona: Barcelona, Spain, 2011.
  34. Arruabarrena, R.; Sánchez, A.; Blanco, J.M.; Vadillo, J.A.; Usandizaga, I. Integration of good practices of active methodologies with the reuse of student-generated content. Int. J. Educ. Technol. High. Educ. 2019, 16, 10.
  35. Luaces, O.; Díez, J.; Bahamonde, A. A peer assessment method to provide feedback, consistent grading and reduce students’ burden in massive teaching settings. Comput. Educ. 2018, 126, 283–295.
  36. Andrade, H. Teaching with rubrics. Coll. Teach. 2005, 53, 27–31.
  37. Purchase, H.; Hamer, J. Peer-review in practice: Eight years of Aropä. Assess. Eval. High. Educ. 2018, 43, 1146–1165.
  38. Chang, C.C.; Tseng, K.H.; Lou, S.J. A comparative analysis of the consistency and difference among instructor assessment, student self-assessment and peer-assessment in a web-based portfolio assessment environment for high school students. Comput. Educ. 2012, 58, 303–320.
  39. Jaime, A.; Blanco, J.M.; Domínguez, C.; Sánchez, A.; Heras, J.; Usandizaga, I. Spiral and project-based learning with peer assessment in a computer science project management course. J. Sci. Educ. Technol. 2016, 25, 439–449.
  40. Monllor-Satoca, D.; Guillén, E.; Lana-Villarreal, T.; Bonete, P.; Gómez, R. La evaluación por pares (“peer review”) como método de enseñanza aprendizaje de la Química Física. In Jornadas de Redes de Investigación en Docencia Universitaria X. Alicante; Tortosa, M.T., Álvarez, J.D., Pellín, N., Eds.; Editorial Universitat Politècnica de València: Valencia, Spain, 2012.
  41. Delgado, J.; Medina, N.; Becerra, M. La evaluación por pares. Una alternativa de evaluación entre estudiantes universitarios. Rehuso Rev. Cienc. Humaníst. Soc. 2020, 5, 14–26.
  42. Loureiro, P.; Gomes, M.J. Online peer assessment for learning: Findings from higher education students. Educ. Sci. 2023, 13, 253.
  43. Yalch, M.M.; Vitale, E.M.; Fordand, J.K. Benefits of Peer Review on Students’ Writing. Psychol. Learn. Teach. 2019, 18, 317–325.
  44. Aston, K.J. ‘Why is this hard, to have critical thinking?’ Exploring the factors affecting critical thinking with international higher education students. Act. Learn. High. Educ. 2023.
  45. Väyrynen, K.; Lutovac, S.; Kaasila, R. Reflection on peer reviewing as a pedagogical tool in higher education. Act. Learn. High. Educ. 2023, 24, 291–303.
  46. Boud, D.; Cohen, R.; Sampson, J. Peer Learning and Assessment. Assess. Eval. High. Educ. 1999, 24, 413–426.
  47. Li, H.; Xiong, Y.; Hunter, C.V.; Guo, X.; Tywoniw, R. Does peer assessment promote student learning? A meta-analysis. Assess. Eval. High. Educ. 2020, 45, 193–211.
  48. Shishavan, H.B.; Jalili, M. Responding to student feedback: Individualising teamwork scores based on peer assessment. Int. J. Educ. Res. Open 2020, 1, 100019.
  49. Gómez, M.; Quesada, V. Coevaluación o Evaluación Compartida en el Contexto Universitario: La Percepción del Alumnado de Primer Curso. Rev. Iberoam. Eval. Educ. 2017, 10, 9–30.
  50. Román-Calderón, J.P.; Robledo-Ardila, C.; Velez-Calle, A. Global virtual teams in education: Do peer assessments motivate student effort? Stud. Educ. Eval. 2021, 70, 101021.
  51. Ion, G.; Díaz-Vicario, A.; Mercader, C. Making steps towards improved fairness in group work assessment: The role of students’ self- and peer-assessment. Act. Learn. High. Educ. 2023.
  52. Joh, J.; Plakans, L. Peer assessment in EFL teacher preparation: A longitudinal study of student perception. Lang. Teach. Res. 2021.
  53. Rød, J.K.; Nubdal, M. Double-blind multiple peer reviews to change students’ reading behaviour and help them develop their writing skills. J. Geogr. High. Educ. 2022, 46, 284–303.
  54. Chang, C.C.; Tseng, J.S. Student rating consistency in online peer assessment from the perspectives of individual and class. Stud. Educ. Eval. 2023, 79, 101306.
  55. Vander Schee, B.A.; Stovall, T.; Andrews, D. Using cross-course peer grading with content expertise, anonymity, and perceived justice. Act. Learn. High. Educ. 2022.
  56. Rotsaert, T.; Panadero, E.; Schellens, T. Anonymity as an instructional scaffold in peer assessment: Its effects on peer feedback quality and evolution in students’ perceptions about peer assessment skills. Eur. J. Psychol. Educ. 2018, 33, 75–99.
  57. Wanner, T.; Palmer, E. Formative self-and peer assessment for improved student learning: The crucial factors of design, instructor participation and feedback. Assess. Eval. High. Educ. 2018, 43, 1032–1047.
  58. Agrawal, A.; Rajapakse, D.C. Perceptions and practice of peer assessments: An empirical investigation. Int. J. Educ. Manag. 2018, 32, 975–989.
  59. Marín García, J.A. Los alumnos y los profesores como evaluadores. Aplicación a la calificación de presentaciones orales. Rev. Esp. Pedagog. 2009, 242, 79–98.
  60. Panadero, E.; Romero, M.; Strijbos, J.W. The impact of a rubric and friendship on peer assessment: Effects on construct validity, performance, and perceptions of fairness and comfort. Stud. Educ. Eval. 2013, 39, 195–203.
  61. McMillan, A.; Solanelles, P.; Rogers, B. Bias in student evaluations: Are my peers out to get me? Stud. Educ. Eval. 2021, 70, 101032.
  62. Barriopedro, M.; López, C.; Gómez, M.; Rivero, A. La coevaluación como estrategia para mejorar la dinámica del trabajo en grupo: Una experiencia en Ciencias del Deporte. Rev. Complut. Educ. 2016, 27, 571–584.
  63. Tavoletti, E.; Stephens, R.D.; Dong, L. The impact of peer evaluation on team effort, productivity, motivation and performance in global virtual teams. Team Perform. Manag. Int. J. 2019, 25, 334–347.
  64. Raposo, M.; Martínez, M. Evaluación educativa utilizando rúbrica: Un desafío para docentes y estudiantes universitarios. Educ. Educ. 2014, 17, 499–513.
  65. Conde, M.; Sanchez-Gonzalez, L.; Matellan-Olivera, V.; Rodriguez-Lera, F.J. Application of Peer Review Techniques in Engineering Education. Int. J. Eng. Educ. 2017, 33, 918–926.
Update Date: 29 Jan 2024