Expert and Inexpert Instructors Talking About Teaching: History

Using mixed-method social network analysis, researchers explored the discussions happening between instructors within a teaching-related network and how instructional expertise correlated with the content of those discussions. Instructional expertise, defined by the extent to which effective teaching practices were implemented, was measured for 82 faculty teaching at a Midwestern research university in the USA using the Faculty Inventory of Methods and Practices Associated with Competent Teaching (F-IMPACT).

  • social network analysis
  • evidence-based teaching practices
  • diffusion

1. Introduction

The importance of social connections to an individual instructor’s decision-making process regarding instructional practice has been well established [1][2][3][4][5]. For example, these studies have shown how social connections among faculty can influence the diffusion of instructional practices throughout a department, and some have attempted to uncover patterns in teaching-related discussions. Knowing the “who, what, when, where, why, and how” of teaching discussions within these networks can help inform efforts to facilitate broad-scale unit and/or institutional implementation of effective, evidence-based instructional practices.
Unfortunately, uncovering what actually happens during faculty–faculty interactions within these networks has proven difficult. The most common data collection methods are surveys and interviews, whose prompts typically ask something generic about identifying alters with whom the respondent talked about teaching [2][3][6]. Most studies assume that teaching interactions of any type involve the sharing of “good” practices, either by soliciting no clarification about what was discussed, by assuming the respondents had pedagogical expertise, or by making assumptions about the quality of the conversations [6][7][8].
There are also myriad issues with how expertise is defined and measured. For example, Van Waes et al. used three factors to define expertise, none of which have any direct relationship to the implementation of effective teaching practices [10]. The reliable identification of individuals with expertise in a network is important: a teaching network with no existing source of expertise cannot be expected to adopt best practices, and recent research suggests that even the presence of such expertise is not sufficient for the diffusion of best practices across the network [9].

2. Expertise in Teaching

An increasing number of studies have examined how teaching discussion networks differ with expertise. One such study looked at the relationship between a faculty member’s stage of instructional development and the faculty network used to communicate about teaching practices [10]. The study demonstrated a relationship between network size and stage of instructional development: experienced expert faculty members had larger networks than novices and experienced non-experts. It also showed that experienced experts had more diverse networks and less frequent teaching interactions. While the study provided insight into possible methods for investigating differences in teaching networks based on expertise, the method used to determine actual expertise was flawed.
The term “expert” in the study referred to experienced top performers who excel in a particular field, or to professionals who achieve at least a moderate degree of success in their occupation [11]. For Van Waes et al., an “expert faculty member” performs at a high level when implementing effective, student-centered teaching practices in the classroom [10]. However, to determine the instructional stages of the 30 faculty members they interviewed, they used three factors, none of which have any direct relationship to the implementation of effective teaching practices: years of teaching experience, scores on student evaluations of teaching (SETs), and department chair nomination. To be identified as an experienced expert, a faculty member had to have at least 10 years of teaching experience, perform in the top quartile on SETs, and be nominated by their department chair.
The combination of these three factors resembles the use of triangulation to determine expertise. While triangulation is a plausible approach, the factors used have interdependent limitations. Years of experience is not a reliable measure of expertise, and studies have found no significant relationship between years of teaching experience and the implementation of best practices [12][13][14]. Berger et al. did, however, show a significant increase in a faculty member’s sense of self-efficacy with years of teaching experience [12]. Research shows a significant positive bias in SET scores toward instructor years of experience, as well as a similar bias toward increasing instructor confidence [15]. The implication of Berger et al. is that years of experience covaries with confidence, and therefore with SET scores. Not only do studies demonstrate bias toward years of experience in student evaluations, but many have also demonstrated the presence of gender, racial, and cultural biases in SETs [16][17][18]. There is also no evidence that traditional affect-based SET scores correlate with measures of student learning or with the instructional practices used within a course. Finally, in the Van Waes et al. study, department supervisors provided no observational evidence of actual evidence-based practice implementation in their nominations, and supervisors could be subject to the same biases toward years of experience and/or increased instructor confidence [10].
Recently, studies have begun to use more quantitative and reliable methods that directly measure faculty members’ usage levels of effective teaching practices. Middleton et al. used the Approaches to Teaching Inventory (ATI) in combination with network metrics to measure faculty perceptions of their own teaching [19]. The ATI is a self-reported assessment consisting of items that fall into four dimensions: conceptual change intention, student-centered strategies, information transmission, and teacher-focused strategies [20]. Similarly, Reding et al. used the Teaching Practices Inventory (TPI) to examine the relationship between faculty member network elements and the implementation of effective teaching practices [9]. The self-reported TPI measures the use of multiple practices shown by research to support student learning and teaching effectiveness in STEM and social science courses [21]. Factors that support student learning include knowledge organization, reducing cognitive load, motivation, practice, feedback, metacognition, and group learning. Factors that support effective teaching include prior knowledge/beliefs, feedback on effectiveness, and gaining relevant knowledge and skills. Recently, the TPI was modified for validity in both in-person and online courses, with the modified version called the Faculty Inventory of Methods and Practices Associated with Competent Teaching (F-IMPACT) [22].
As self-reported surveys, instruments like the ATI, TPI, and F-IMPACT also have limitations; however, they were designed to directly measure the implementation of effective teaching practices. Researchers have adopted the Van Waes et al. definition of a teaching expert as a high-level implementer of effective teaching practices in the classroom [10], but have used the F-IMPACT instrument to measure the level of implementation more directly, with the F-IMPACT score representing an instructor’s level of expertise. With a valid measure of expertise established within the broad domain of teaching, researchers can examine how expert and inexpert instructors interact with their social connections, in an effort to better support the diffusion of evidence-based teaching practices.
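For illustration only, the following Python sketch shows how self-reported inventory responses could be aggregated into a per-instructor expertise score and dichotomized for later network analysis; the item names, the unweighted mean, and the median split are assumptions for this sketch and not the published F-IMPACT scoring rules.

```python
# Hypothetical sketch: aggregating self-reported inventory items into an
# expertise score and splitting respondents into "expert" / "inexpert" groups.
# Item names, the unweighted mean, and the median split are illustrative
# assumptions, not the published F-IMPACT rubric.
from statistics import median

# Each respondent's answers to inventory items (1 = never ... 5 = always).
responses = {
    "instructor_A": {"active_learning": 5, "formative_feedback": 4, "group_work": 5},
    "instructor_B": {"active_learning": 2, "formative_feedback": 3, "group_work": 1},
    "instructor_C": {"active_learning": 4, "formative_feedback": 4, "group_work": 3},
}

# A simple unweighted mean across items serves as the expertise score here.
scores = {name: sum(items.values()) / len(items) for name, items in responses.items()}

# Illustrative dichotomy: scoring above the sample median counts as "expert".
cutoff = median(scores.values())
labels = {name: ("expert" if s > cutoff else "inexpert") for name, s in scores.items()}

print(scores)
print(labels)
```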

3. Social Capital and Network Analysis

The importance of social connections in aiding the diffusion of evidence-based teaching practices has been supported by research grounded in a social capital theoretical framework. There are numerous definitions of social capital depending on the author, but within an educational context, it has been defined as “the knowledge and resources for teaching practice that are accessible through a social network” [23]. Social capital studies in higher education have investigated informal teaching advice networks, the identification of instructional leaders, the conditions related to the development of teaching-related ties, and the influence of social capital on long-term academic performance [7][8][9][24][25]. Social capital operates at many levels, including ego, sub-group, and whole network. This study operates under the ego-level perspective, which includes three intersecting elements: the resources embedded within the network; individual accessibility to these resources; and individual mobilization or actualization of these resources [26]. Studies interested in examining the diffusion of evidence-based practices, such as the current study, view teaching expertise as the resource and faculty members as the individuals.
We use Social Network Analysis (SNA) to quantify these components of social capital. SNA is an empirical method rooted in graph theory and is used to investigate relational concepts, processes, and patterns within a social network [27]. SNA views social structures as multi-faceted and consisting of network entities, which could be individuals, departments, organizations, etc., that have relationships based on some sort of interaction. In SNA, the entities are known as actors and their interactions are known as ties. To connect this with the components of social capital for this study, the actors are the faculty members, and their ties are their discussions related to teaching.
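A minimal sketch of this actor-and-tie representation, assuming Python with the networkx library and using invented faculty names and discussion topics, might look as follows:

```python
# Minimal sketch of a teaching-discussion network: actors are faculty members,
# directed ties are reported discussions. All names and topics are invented.
import networkx as nx

G = nx.DiGraph()

# Each edge: (ego reporting the discussion, alter named, topic of the tie).
discussions = [
    ("Alice", "Bob", "seeking advice about teaching"),
    ("Alice", "Carol", "sharing instructional materials"),
    ("Bob", "Carol", "discussing assessment"),
    ("Dana", "Alice", "seeking advice about teaching"),
]
for ego, alter, topic in discussions:
    G.add_edge(ego, alter, topic=topic)

# Ego-level quantities often used in social capital studies:
print("Alice's out-degree (alters she names):", G.out_degree("Alice"))
print("Alice's in-degree (others who name her):", G.in_degree("Alice"))
print("Degree centrality:", nx.degree_centrality(G))
```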
The ties between the actors are conduits for the diffusion of instructional expertise through their discussions. This diffusion through social capital relies on the three intersecting elements identified previously: instructional expertise must be embedded within the network, faculty members must have access to that expertise, and, finally, faculty members must mobilize, or implement, the expertise in their own courses. Assuming that instructional expertise exists within a faculty teaching discussion network, the topics of discussion must be examined in order to understand the accessibility and mobilization of practices. There are several methods for obtaining faculty teaching discussion network data. Depending on the scope of the research, some studies use a roster approach, where the names of all faculty members within a department are listed and each individual selects the types of discussions they have had with each colleague. This approach is typically used when researchers want to understand the whole network of a department or unit. Other studies focus on ego-level networks and employ name generators, where the respondent constructs the list of alters. Regardless of the data collection method, the instrument must provide some sort of prompt describing the type of teaching discussion that might occur. Because of the relational nature of networks, these prompts, and their interpretation by respondents, are instrumental to the overall analysis.
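To make the roster and name-generator formats concrete, the following hedged sketch (with invented survey fields and discussion categories) shows how either response format could be converted into typed ties for analysis:

```python
# Hypothetical sketch of turning survey responses into typed ties.
# Field names and discussion categories are invented; real instruments vary.

# Roster approach: every departmental colleague is listed, and the respondent
# checks which kinds of discussion (if any) they had with each person.
roster_response = {
    "respondent": "Alice",
    "colleagues": {
        "Bob":   ["advice about teaching", "sharing materials"],
        "Carol": [],
        "Dana":  ["discussing assessment"],
    },
}

# Name-generator approach: the respondent freely lists alters under each prompt.
name_generator_response = {
    "respondent": "Bob",
    "advice about teaching": ["Carol"],
    "sharing materials": ["Alice", "Dana"],
}

def roster_to_ties(resp):
    """One (ego, alter, type) tie per checked discussion category."""
    return [(resp["respondent"], alter, kind)
            for alter, kinds in resp["colleagues"].items()
            for kind in kinds]

def name_generator_to_ties(resp):
    """One tie per alter named under each prompt."""
    return [(resp["respondent"], alter, prompt)
            for prompt, alters in resp.items()
            if prompt != "respondent"
            for alter in alters]

ties = roster_to_ties(roster_response) + name_generator_to_ties(name_generator_response)
print(ties)
```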
The most common types of bias in SNA self-reports are social desirability bias, reference bias, and limited introspective ability [28]. When survey participants operate under social desirability bias, they tend to rate themselves “higher” in the hope of appearing more socially desirable. Social desirability can result in an inflated number of identified alters, increased reported frequency of interactions, or the selection of more advanced types of interactions. Reference bias occurs when respondents interpret scales and prompts differently, which can lead to misinterpretation of the content of teaching discussions and to unreciprocated interaction types. Introspective ability bias refers to an individual’s capacity to rate themselves objectively, and it can likewise result in the misidentification of alters and/or of the nature of their ties. Several strategies can limit these biases in SNA, including providing descriptive prompts with examples, which can be used in both ego-level and network-level studies to minimize ambiguity [29].
Some studies examining the presence, diffusion, and subsequent implementation of evidence-based practices throughout a teaching network tend to assume that teaching discussions of any type inherently involve the sharing of “good” practices. One such study used the term “teaching-related issues” to refer to discussion about teaching, with no additional clarification of what teaching-related content was actually discussed [6]. Another study interviewed 22 participants in semi-structured interviews with the prompt of discussing “methods or techniques they can use to better teach their students important skills, knowledge, or abilities” [7]. The issue with this prompt, in the context of the diffusion of specific effective practices, is that it assumes the respondent has a solid evidence-based pedagogical foundation. Respondents may instead have networks that are not beneficial because they consist of other faculty members who similarly lack a solid, evidence-based understanding of what their students need to succeed.
Another study that used a semi-structured interview approach used the prompt “In the past half year, who did you talk to about your teaching? More specifically, who do you talk to about the preparation of courses, teaching courses, student guidance or assessment, experiences with students and/or teaching? You do not have to include administrative or judicial aspects of teaching” [10]. While this prompt provided examples of what was meant by talking about teaching, it combined all levels of teaching discussion into a single prompt, making it impossible to parse what respondents were actually discussing in regard to teaching. Other studies that do parse the content of discussions into separate relational ties still make assumptions about the quality of the instructional conversations. Apkarian and Rasmussen used SNA to uncover formal and informal instructional leadership structures [8]. Although they investigated four instruction-specific relationships (advice about teaching, seeking instructional materials, discussing instructional matters, and instructional influence), assumptions about the degree to which respondents understood the differences between these types of discussion limit the validity of the results. The actual prompts were not provided, and there was no mention of examples being given to help respondents better understand what was being asked.

This entry is adapted from the peer-reviewed paper 10.3390/educsci13060591

References

  1. Borrego, M.; Henderson, C. Increasing the use of evidence-based teaching in STEM higher education: A comparison of eight change strategies. J. Eng. Educ. 2014, 103, 220–252.
  2. Lane, A.K.; Skvoretz, J.; Ziker, J.P.; Couch, B.A.; Earl, B.; Lewis, J.E.; McAlpin, J.D.; Prevost, L.B.; Shadle, S.E.; Stains, M. Investigating how faculty social networks and peer influence relate to knowledge and use of evidence-based teaching practices. Int. J. STEM Educ. 2019, 6, 1–14.
  3. Ma, S.; Herman, G.L.; West, M.; Tomkin, J.; Mestre, J. Studying STEM faculty communities of practice through social network analysis. J. High. Educ. 2019, 90, 773–799.
  4. McConnell, M.; Montplaisir, L.; Offerdahl, E.G. A model of peer effects on instructor innovation adoption. Int. J. STEM Educ. 2020, 7, 1–11.
  5. Shadle, S.E.; Liu, Y.; Lewis, J.E.; Minderhout, V. Building a community of transformation and a social network analysis of the POGIL project. Innov. High. Educ. 2018, 43, 475–490.
  6. Quardokus, K.; Henderson, C. Promoting instructional change: Using social network analysis to understand the informal structure of academic departments. High. Educ. 2015, 70, 315–335.
  7. Benbow, R.J.; Lee, C. Teaching-focused social networks among college faculty: Exploring conditions for the development of social capital. High. Educ. 2019, 78, 67–89.
  8. Apkarian, N.; Rasmussen, C. Instructional leadership structures across five university departments. High. Educ. 2021, 81, 865–887.
  9. Reding, T.; Moore, C.; Pelton, J.A.; Edwards, S. Barriers to Change: Social Network Interactions Not Sufficient for Diffusion of High-Impact Practices in STEM Teaching. Educ. Sci. 2022, 12, 512.
  10. Van Waes, S.; Van den Bossche, P.; Moolenaar, N.M.; De Maeyer, S.; Van Petegem, P. Know-who? Linking faculty’s networks to stages of instructional development. High. Educ. 2015, 70, 807–826.
  11. Boshuizen, H.P.; Bromme, R.; Gruber, H. Professional Learning: Gaps and Transitions on the Way from Novice to Expert; Kluwer Academic Publishers: Amsterdam, The Netherlands, 2014.
  12. Berger, J.L.; Girardet, C.; Vaudroz, C.; Crahay, M. Teaching experience, teachers’ beliefs, and self-reported classroom management practices: A coherent network. SAGE Open 2018, 8, 2158244017754119.
  13. Harris, D.N.; Sass, T.R. What Makes for a Good Teacher and Who Can Tell? Urban Institute: Washington, DC, USA, 2009.
  14. Irvine, J. Relationship between Teaching Experience and Teacher Effectiveness: Implications for Policy Decisions. J. Instr. Pedagog. 2019, 22, EJ1216895.
  15. McPherson, M.A.; Jewell, R.T.; Kim, M. What determines student evaluation scores? A random effects analysis of undergraduate economics classes. East. Econ. J. 2009, 35, 37–51.
  16. Fan, Y.; Shepherd, L.J.; Slavich, E.; Waters, D.; Stone, M.; Abel, R.; Johnston, E.L. Gender and cultural bias in student evaluations: Why representation matters. PLoS ONE 2019, 14, e0209749.
  17. Chávez, K.; Mitchell, K.M. Exploring bias in student evaluations: Gender, race, and ethnicity. PS Political Sci. Politics 2020, 53, 270–274.
  18. Carpenter, S.K.; Witherby, A.E.; Tauber, S.K. On students’ (mis)judgments of learning and teaching effectiveness. J. Appl. Res. Mem. Cogn. 2020, 9, 137–151.
  19. Middleton, J.A.; Krause, S.; Judson, E.; Ross, L.; Culbertson, R.; Hjelmstad, K.D.; Hjelmstad, K.L.; Chen, Y.C. A Social Network Analysis of Engineering Faculty Connections: Their Impact on Faculty Student-Centered Attitudes and Practices. Educ. Sci. 2022, 12, 108.
  20. Trigwell, K.; Prosser, M. Development and use of the approaches to teaching inventory. Educ. Psychol. Rev. 2004, 16, 409–424.
  21. Wieman, C.; Gilbert, S. The teaching practices inventory: A new tool for characterizing college and university teaching in mathematics and science. CBE Life Sci. Educ. 2014, 13, 552–569.
  22. Moore, C.; Cutucache, C.; Edwards, S.; Pelton, J.; Reding, T. Modification and validation of the Teaching Practices Inventory for online courses. In Proceedings of the 2021 Physics Education Research Conference, Virtual, 4–5 August 2021.
  23. Baker-Doyle, K.J.; Yoon, S.A. In search of practitioner-based social capital: A social network analysis tool for understanding and facilitating teacher collaboration in a US-based STEM professional development program. Prof. Dev. Educ. 2011, 37, 75–93.
  24. Thiele, L.; Sauer, N.C.; Kauffeld, S. Why extraversion is not enough: The mediating role of initial peer network centrality linking personality to long-term academic performance. High. Educ. 2018, 76, 789–805.
  25. Reding, T.E.; Dorn, B.; Grandgenett, N.; Siy, H.; Youn, J.; Zhu, Q.; Engelmann, C. Identification of the emergent leaders within a CSE professional development program. In Proceedings of the 11th Workshop in Primary and Secondary Computing Education (WiPSCE 2016), Münster, Germany, 13–15 October 2016; pp. 37–44.
  26. Lin, N.; Cook, K.; Burt, R.S. Social Capital: Theory and Research; Transaction Publishers: Piscataway, NJ, USA, 2001.
  27. Froehlich, D.E. Mapping mixed methods approaches to social network analysis in learning and education. In Mixed Methods Social Network Analysis; Routledge: London, UK, 2020; pp. 13–24.
  28. McDonald, J.D. Measuring personality constructs: The advantages and disadvantages of self-reports, informant reports and behavioural assessments. Enquire 2008, 1, 1–19.
  29. Choi, B.C.; Pak, A.W. A catalog of biases in questionnaires. Prev. Chronic Dis. 2005, 2, A13.