2. Expertise in Teaching
An increasing number of studies have examined differences in teaching discussion networks based on expertise. One such study looked at the relationship between a faculty member’s stage of instructional development and the faculty network used to communicate about teaching practices. The study demonstrated a relationship between network size and stage of instructional development: experienced expert faculty members had larger networks than novices and experienced non-experts. Experienced experts also had more diverse networks and less frequent teaching interactions. While this study provided insight into possible methods for investigating differences in teaching networks based on expertise, the method through which it determined actual expertise was flawed.
The term “expert” in the study referred to experienced top performers who excel in a particular field, or to professionals who achieve at least a moderate degree of success in their occupation. For Van Waes et al., an “expert faculty member” performs at a high level when implementing effective, student-centered teaching practices in the classroom. However, they used three factors, none of which bears a direct relationship to the implementation of effective teaching practices, to determine the instructional stages of the 30 faculty members interviewed for the study: years of teaching experience, scores on student evaluations of teaching (SETs), and department chair nomination. To be identified as an experienced expert, a faculty member had to have a minimum of 10 years of teaching experience, perform in the top quartile on SETs, and be nominated by their department chair.
The combination of these three factors resembles the use of triangulation to determine expertise. While triangulation is a plausible method, the factors used have interdependent limitations. Years of experience is not a reliable measure of expertise: studies have found no significant relationship between years of teaching experience and the implementation of best practices. Berger et al. did, however, show a significant increase in faculty members’ sense of self-efficacy with years of teaching experience. Research shows a significant positive bias in SET scores toward instructor years of experience, and a similar bias toward increasing instructor confidence. The implication of Berger et al. is that years of experience covaries with confidence and therefore with SET scores. Not only do studies demonstrate bias toward years of experience in student evaluations, but many have also demonstrated the presence of gender, racial, and cultural biases in SETs. There is also no evidence that traditional affect-based SET scores correlate with measures of student learning or with the instructional practices used within a course. Finally, in the Van Waes et al. study, department supervisors provided no observational evidence of actual evidence-based practice implementation within their nominations, and supervisors could be similarly biased by years of experience and/or increased instructor confidence.
Recently, studies have begun to use more quantitative and reliable methods that directly measure faculty members’ usage of effective teaching practices. Middleton et al. used the Approaches to Teaching Inventory (ATI) in combination with network metrics to measure faculty perceptions of their own teaching. The ATI is a self-reported assessment consisting of items that fall into four dimensions: conceptual change intention, student-centered strategies, information transmission, and teacher-focused strategies.
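As an illustration, subscale scores on a multidimensional self-report inventory such as the ATI are typically computed as the mean of the Likert-type items assigned to each dimension. The sketch below uses invented item assignments and responses, not the actual ATI items:

```python
# Hypothetical sketch of subscale scoring for a four-dimension
# self-report inventory (modeled loosely on the ATI). The
# item-to-dimension map and the responses are invented.
DIMENSIONS = {
    "conceptual_change_intention": ["q1", "q5"],
    "student_centered_strategies": ["q2", "q6"],
    "information_transmission": ["q3", "q7"],
    "teacher_focused_strategies": ["q4", "q8"],
}

def subscale_scores(responses):
    """Return the mean Likert response (1-5) for each dimension."""
    return {
        dim: sum(responses[item] for item in items) / len(items)
        for dim, items in DIMENSIONS.items()
    }

responses = {"q1": 4, "q2": 5, "q3": 2, "q4": 1,
             "q5": 5, "q6": 4, "q7": 2, "q8": 2}
scores = subscale_scores(responses)
print(scores["student_centered_strategies"])  # 4.5
```

A higher mean on the student-centered dimensions relative to the teacher-focused ones is the kind of profile such instruments are designed to surface.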
Similarly, Reding et al. used the Teaching Practices Inventory (TPI) to examine the relationship between faculty members’ network elements and the implementation of effective teaching practices. The self-reported TPI measures the use of multiple practices shown by research to support student learning and teaching effectiveness in STEM and social science courses. Factors that support student learning include knowledge organization, reduction of cognitive load, motivation, practice, feedback, metacognition, and group learning. Factors that support effective teaching include prior knowledge/beliefs, feedback on effectiveness, and gaining relevant knowledge and skills. Recently, the TPI was modified for validity in both in-person and online courses; the modified version is called the Faculty Inventory of Methods and Practices Associated with Competent Teaching (F-IMPACT).
As self-reported surveys, instruments like the ATI, TPI, and F-IMPACT also have limitations; however, these instruments have been designed to directly measure the implementation of effective teaching practices. Researchers have adopted the Van Waes et al. definition of a teaching expert as a high-level implementer of effective teaching practices in the classroom. However, researchers have used the F-IMPACT instrument to measure the level of implementation more directly, with the F-IMPACT score representing an instructor’s level of expertise. By establishing a valid measure of expertise within the broad domain of teaching, how expert and inexpert instructors interact with their social connections can be measured in an effort to better support the diffusion of evidence-based teaching practices.
3. Social Capital and Network Analysis
The importance of social connections in aiding the diffusion of evidence-based teaching practices has been supported by research grounded in a social capital theoretical framework. Definitions of social capital vary by author, but within an educational context it has been defined as “the knowledge and resources for teaching practice that are accessible through a social network”. Social capital studies in higher education have investigated informal teaching advice networks, the identification of instructional leaders, the conditions related to the development of teaching-related ties, and the influence of social capital on long-term academic performance. Social capital operates at many levels, including the ego, sub-group, and whole-network levels. This study operates under the ego-level perspective, which includes three intersecting elements: the resources embedded within the network, individual accessibility to these resources, and individual mobilization or actualization of these resources. Studies interested in examining the diffusion of evidence-based practices, such as the current study, view teaching expertise as the resource and faculty members as the individuals.
Researchers use Social Network Analysis (SNA) to quantify these components of social capital. SNA is an empirical method rooted in graph theory and is used to investigate relational concepts, processes, and patterns within a social network. SNA views social structures as multi-faceted, consisting of network entities (individuals, departments, organizations, etc.) that have relationships based on some form of interaction. In SNA, the entities are known as actors and their interactions are known as ties. To connect this with the components of social capital for this study, the actors are the faculty members, and their ties are their discussions related to teaching.
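This mapping can be sketched concretely as a graph. The network below is hypothetical (invented names, invented department labels) and simply illustrates two ego-level quantities discussed above, network size and departmental diversity, using a plain adjacency list:

```python
# Hypothetical teaching-discussion network as an adjacency list:
# keys are faculty members (actors); values are the colleagues
# they discuss teaching with (ties). All names are invented.
ties = {
    "Ana": {"Ben", "Cam", "Dee"},
    "Ben": {"Ana", "Cam"},
    "Cam": {"Ana", "Ben"},
    "Dee": {"Ana", "Eli"},
    "Eli": {"Dee"},
}
departments = {"Ana": "BIO", "Ben": "BIO", "Cam": "CHM",
               "Dee": "PHY", "Eli": "PHY"}

def ego_metrics(ego):
    """Ego network size and departmental diversity for one actor."""
    alters = ties[ego]
    size = len(alters)                                 # network size
    diversity = len({departments[a] for a in alters})  # distinct departments
    return size, diversity

print(ego_metrics("Ana"))  # (3, 3): three alters across three departments
```

In published studies these metrics are usually computed with dedicated SNA software; the point here is only that size and diversity reduce to simple counts over an actor’s ties.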
The ties between the actors are conduits for the diffusion of instructional expertise through their discussions. This diffusion through social capital relies on the three intersecting elements identified previously: instructional expertise being embedded within the network, faculty members having access to that expertise, and faculty members mobilizing, or implementing, the expertise in their own courses. Assuming that instructional expertise exists within a faculty teaching discussion network, the topics of discussion must be examined in order to understand the accessibility and mobilization of practices. There are several methods through which faculty teaching discussion network data are obtained. Depending on the scope of the research, some methods use a roster approach, in which the names of all faculty members within a department are listed and each individual selects the types of discussions they have with each. This approach is typically used when researchers want to better understand the whole network of a department or unit. Other times, researchers are focused on ego-level networks and may employ name generators, where the respondent constructs the list. Regardless of the data collection method, the instrument must provide some sort of prompt describing the type of teaching discussion that might occur. Due to the relational nature of networks, these prompts, and their interpretation by respondents, are instrumental to the overall analysis.
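Whichever collection approach is used, the responses reduce to the same underlying data structure: a directed edge list of (respondent, named colleague, discussion type) triples. The sketch below, with invented names and a single hypothetical “course design” prompt category, shows roster-style responses being flattened into such an edge list:

```python
# Hypothetical roster-style survey: each respondent indicates, for
# every colleague on the department roster, which discussion types
# they have had. Names and the prompt category are invented.
roster = ["Ana", "Ben", "Cam"]
responses = {
    "Ana": {"Ben": {"course design"}, "Cam": set()},
    "Ben": {"Ana": {"course design"}, "Cam": {"course design"}},
    "Cam": {"Ana": set(), "Ben": set()},
}

# Flatten to a directed edge list: (respondent, alter, tie type).
edges = [
    (ego, alter, tie)
    for ego in roster
    for alter, tie_types in responses[ego].items()
    for tie in tie_types
]
print(edges)  # three directed ties
```

Note that the structure is directed: Ben reports a tie to Cam that Cam does not reciprocate, which is exactly the kind of asymmetry the bias discussion below concerns.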
The most common types of bias in SNA self-reports are social desirability bias, reference bias, and introspective ability bias. When survey participants operate under social desirability bias, they tend to rate themselves “higher” in the hope of appearing more socially desirable. Social desirability can result in an inflated number of identified alters, increased frequency of interactions, or the selection of more advanced types of interactions. Reference bias occurs when respondents interpret scales and prompts differently, which can result in misinterpretation of the content of teaching discussions and unreciprocated interaction types. Introspective ability bias refers to an individual’s ability to rate themselves objectively, and can result in misidentification of alters and/or of the nature of their ties. There are several ways to limit these biases in SNA, including the provision of descriptive prompts with examples, which can be used in ego-level or network-level studies to minimize ambiguity.
Some studies determining the presence, diffusion, and subsequent implementation of evidence-based practices throughout a teaching network tend to assume that teaching discussions of any type inherently involve the sharing of “good” practices. One such study used the term “teaching-related issues” to refer to discussion about teaching, with no additional clarification of what teaching-related content was actually discussed. Another study interviewed 22 participants through semi-structured interviews with the prompt of discussing “methods or techniques they can use to better teach their students important skills, knowledge, or abilities”. The issue with this prompt, in the context of the diffusion of specific effective practices, is that it assumes the respondent has a solid evidence-based pedagogical foundation. Respondents may have networks that are not beneficial because they consist of other faculty members who similarly lack a solid, evidence-based understanding of what their students need to succeed.
Another study that used a semi-structured interview approach used the prompt “In the past half year, who did you talk to about your teaching? More specifically, who do you talk to about the preparation of courses, teaching courses, student guidance or assessment, experiences with students and/or teaching? You do not have to include administrative or judicial aspects of teaching”. While this prompt provided examples of what was meant by talking about teaching, it combined all levels of teaching discussion into one prompt, making it impossible to parse out what respondents were actually discussing. Other studies that do parse the content of discussions into separate relational ties still make assumptions about the quality of the instructional conversations. Apkarian and Rasmussen used SNA to uncover formal and informal instructional leadership structures. While they investigated four instruction-specific relationships (advice about teaching, seeking instructional materials, discussing instructional matters, and instructional influence), the assumption that respondents understood the differences between the various types of discussions limits the validity of the results. The actual prompts were not provided, and there was no mention of examples being given to help respondents better understand what was being asked.