This article demonstrates the contribution that the theory of regulatory compliance makes as a unifying framework for understanding structural and process quality in child care and early education.
How the Theory of Regulatory Compliance Explains the Relationship Between Structural and Process Quality, Data Distributions, Scoring Systems, and a New Scale for Parents
Richard Fiene PhD
Penn State Edna Bennett Pierce Prevention Research Center
June 2025
1. Introduction
Child care and early education (CCEE) quality has been defined in the research literature along a structural and process continuum: structural quality deals with hard, countable standards, while process quality deals with the softer side of quality, namely adult-child interactions. To add more substance to this continuum, process quality is the real heart of quality, getting at the essence of what happens in individual classrooms in the individual and group interactions among teachers and children. Structural quality measures are surrogates for quality, such as compliance with staff-child ratios, group sizes, or the number of violations of specific rules, regulations, or standards. Structural quality does not look at the softer elements of quality such as interactions or classroom atmosphere; it sometimes looks at the program curriculum, but generally does not. Structural quality is more concerned with health and safety standards, things that may harm children, rather than things that will enhance their environment; that is left to process quality.
Structural quality elements are generally present in licensing rules and regulations, while process quality elements are present in tools such as the Environmental Rating Scales (ERS) or the Classroom Assessment Scoring System (CLASS). The ERS and CLASS are generally not used on their own, although that was their original intent; rather, they are usually part of other quality initiatives or CCEE systems, such as QRIS (ERS) and Head Start (CLASS). Structural and process quality complement each other in a building-block way: structural quality provides the foundation, while process quality builds upon that foundation in an ever-expanding manner.
Another way of thinking about quality and its elements is to place structural and process quality on a spectrum line along with the associated quality interventions. For structural quality, the interventions include licensing, quality rating and improvement systems (QRIS), Head Start Performance Standards, accreditation, and professional development systems; for process quality, this is where the ERS and CLASS tools would go. Think of the quality spectrum as a prism splitting light into its various wavelengths and resulting colors.
How does the theory of regulatory compliance fit into all this? The theory provides the overarching and unifying framework depicting how structural quality and process quality work together. One of the main discoveries of the theory of regulatory compliance was demonstrating the importance of substantial regulatory compliance with structural quality rules. This discovery was made when a ceiling effect was identified in comparing structural to process quality, and this ceiling effect was found across all structural quality systems: licensing, Head Start, accreditation, and QRIS. Licensing demonstrates the greatest ceiling effect and, in some cases, a diminishing-returns effect when moving from substantial to full 100% compliance, but all of these structural quality systems demonstrate some form of a ceiling effect. Process quality follows a linear relationship and its data distribution is normally distributed, while structural quality follows a nonlinear relationship and its data distribution is positively skewed. Studies in CCEE over the past 50 years have clearly demonstrated these relationships with structural and process quality when it comes to measuring compliance with the rules, regulations, and standards of each view of quality.
The theory of regulatory compliance has led to a refocusing of licensing decision-making that takes substantial compliance into account when determining who gets a full license and who does not. It clearly demonstrates how, at times, substantial compliance is equivalent to full 100% compliance with all rules, regulations, or standards and, in some cases, is better than full compliance. This has also led to abbreviated, targeted, or focused inspections in which key predictor rules or high-risk rules are assessed, instituting a nuanced program monitoring approach called differential monitoring.
It has also led to identifying quality indicators and infusing quality into the licensing rule and regulatory landscape. The use of licensing and quality predictor indicators has been the cornerstone of the differential monitoring approach, and for good reason. These licensing and quality indicators can be looked upon as the anchors of structural and process quality. These key indicators statistically predict overall compliance with the full set of rules, regulations, and standards, and studies have repeatedly confirmed this relationship in licensing, QRIS, Head Start, accreditation, the ERS, and in the development of a new quality indicator scale.
From a statistical and methodological point of view, studies comparing structural and process quality have at times produced significant correlations, but these correlations are generally at the lower end of significance. The reason is that structural quality follows a ceiling effect, a nonlinear, positively skewed data distribution, which does not match the normal distributions found in process quality data. Researchers and scientists should therefore not be surprised to find that their correlations between process and structural quality elements are not statistically significant. With structural quality it is difficult to distinguish the truly high performers from the mediocre performers; with process quality, making that determination is much easier. With both structural and process quality it is equally easy to distinguish the high performers from the low performers.
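To make this point concrete, here is a minimal simulation sketch in Python (all distributional parameters are illustrative assumptions, not estimates from any of the studies discussed here) showing how a ceiling effect in violation-count data can attenuate its observed correlation with a normally distributed process quality score, even when both measures reflect the same underlying quality:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 400  # hypothetical number of programs

# Shared latent "true quality" driving both measures
latent = rng.normal(0, 1, n)

# Process quality (PQ): roughly normal on a 1-7 scale
pq = np.clip(4 + latent + rng.normal(0, 0.8, n), 1, 7)

# Structural quality (RC): violation counts piled up near zero
# (ceiling effect on compliance), giving a positively skewed distribution
rc = np.clip(np.round(np.exp(1.0 - 1.2 * latent + rng.normal(0, 0.5, n))), 0, 40)

print("PQ skew:", round(stats.skew(pq), 2), "RC skew:", round(stats.skew(rc), 2))
r, _ = stats.pearsonr(-rc, pq)      # negate counts so higher = better compliance
rho, _ = stats.spearmanr(-rc, pq)   # rank-based correlation for comparison
print("Pearson r:", round(r, 2), "Spearman rho:", round(rho, 2))
```

Under these assumptions the skewed, truncated RC distribution weakens the linear (Pearson) association relative to what the shared latent quality would otherwise produce.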
This difficulty in distinguishing between the high performers and the mediocre performers has led to the introduction of a new metric in structural quality called the Regulatory Compliance Scale (RCS). The reason for doing this is twofold: 1) the RCS fits more closely with the theory of regulatory compliance in demonstrating the importance of substantial compliance and in having a categorical sequencing; and 2) the categorical or ordinal sequence fits nicely with the existing process quality tools, which are organized and measured on an ordinal 1-7 scale. The RCS has been pilot tested in several jurisdictions, and it has demonstrated its ability to be a better measure when comparing structural quality to process quality than straight rule, regulation, or standard violation frequency data.
The above assertions have been addressed previously, but probably not in one place demonstrating the impact of the theory of regulatory compliance on structural and process quality. It is hoped that in the coming years research psychologists and regulatory scientists will attempt to replicate these findings so that the public policy implications can be carried to their logical end point: substantial compliance being a sufficient level of compliance for issuing a full license, and the institutionalization of differential monitoring throughout the CCEE field. For this to happen, the ceiling effect in structural quality needs to be replicated when compared to process quality.
2. Structural Quality (RC) and Process Quality (PQ) Data Distributions
This section provides the data distributions for a series of structural quality (RC) and process quality (PQ) studies, which show dramatically different frequencies and measures of central tendency. The structural quality data distributions have some very important limitations that will be noted, as well as some potential adjustments that can be made to the data sets to make statistical analyses more meaningful. These data distributions are from the USA and Canada.
It is obvious when one compares the PQ and RC data distributions that the RC distributions are much more skewed, their medians and means differ significantly, and their kurtosis values are much higher, which means that the data contain several outliers. These data distributions are provided for researchers who may be assessing structural quality (RC) data for the first time. These data have certain limitations that are not present in the more parametric distributions characteristic of process quality (PQ) data.
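For readers new to RC data, the following sketch (using a made-up violation-count vector, not data from any of the studies tabulated below) shows how the descriptive statistics reported in the table, along with skewness and kurtosis, might be computed:

```python
import numpy as np
from scipy import stats

# Hypothetical licensing violation counts for a small set of programs (RC data)
violations = np.array([0, 0, 0, 0, 1, 1, 2, 2, 3, 4, 5, 6, 8, 11, 15, 25])

print("sites:", violations.size)
print("mean:", violations.mean(), "sd:", violations.std(ddof=1))
print("p0, p25, p50, p75, p100:", np.percentile(violations, [0, 25, 50, 75, 100]))
print("skewness:", stats.skew(violations))
print("excess kurtosis:", stats.kurtosis(violations))  # large values flag outliers
```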
To deal with the skewness of RC data, weighted risk assessments have been suggested in order to introduce additional variance into the data distributions, and dichotomization of the data has also been used successfully with very skewed distributions. One of the problems with very skewed data distributions is that it is very difficult to distinguish high-performing providers from mediocre-performing providers because of the ever-present ceiling effect when comparing structural quality to process quality. Skewed data distributions pose no limitation, however, in distinguishing low-performing providers from their more successful peers.
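One way such a dichotomization might be implemented is sketched below; the cut point of two or fewer violations is an assumption chosen to stand in for substantial compliance, not a value prescribed by the studies discussed here:

```python
import numpy as np

# Hypothetical, positively skewed violation counts (RC data)
violations = np.array([0, 0, 0, 0, 1, 1, 2, 2, 3, 4, 5, 6, 8, 11, 15, 25])

# Dichotomize: 1 = substantial/full compliance (<= 2 violations), 0 = otherwise.
# The resulting binary variable can be analyzed with methods (e.g., logistic
# models) that do not assume a normally distributed outcome.
substantial = (violations <= 2).astype(int)

print("proportion in substantial compliance:", substantial.mean())
```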
The process quality (PQ) data, in contrast, are generally normally distributed and do not demonstrate any severe skewness. Sufficient variance is present, and there is no need for weighting because of the dispersion in the data. It is easy to distinguish high-performing providers from mediocre-performing providers and from low-performing providers.
For purposes of reading the following Table:
Data Set = the study that the data are drawn from. Sites = the number of sites in the particular study. Mean = the average of the scores. sd = standard deviation. p0 = the average score at the 0 percentile. p25 = the average score at the 25th percentile. p50 = the average score at the 50th percentile or the median. p75 = the average score at the 75th percentile. p100 = the average score at the 100th percentile.
| Data Set | Sites | Mean | sd | p0 | p25 | p50 | p75 | p100 | PQ or RC |
|---|---|---|---|---|---|---|---|---|---|
| ECERS total score PQ | 209 | 4.24 | 0.94 | 1.86 | 3.52 | 4.27 | 4.98 | 6.29 | PQ |
| FDCRS total score PQ | 163 | 3.97 | 0.86 | 1.71 | 3.36 | 4.03 | 4.62 | 5.54 | PQ |
| ECERS and FDCRS totals PQ | 372 | 4.12 | 0.91 | 1.71 | 3.43 | 4.12 | 4.79 | 6.29 | PQ |
| ECERS prek PQ | 48 | 4.15 | 0.74 | 2.56 | 3.6 | 4.15 | 4.65 | 5.56 | PQ |
| ECERS preschool PQ | 102 | 3.42 | 0.86 | 1.86 | 2.82 | 3.26 | 4.02 | 5.97 | PQ |
| ITERS PQ | 91 | 2.72 | 1.14 | 1.27 | 1.87 | 2.34 | 3.19 | 5.97 | PQ |
| FDCRS PQ | 146 | 2.49 | 0.8 | 1.21 | 1.87 | 2.42 | 2.93 | 4.58 | PQ |
| CCC RC | 104 | 5.51 | 5.26 | 0 | 2 | 4 | 8 | 25 | RC |
| FCC RC | 147 | 5.85 | 5.71 | 0 | 2 | 4 | 8.5 | 33 | RC |
| CCC RC | 482 | 7.44 | 6.78 | 0 | 2 | 6 | 11 | 38 | RC |
| FDC RC | 500 | 3.52 | 4.05 | 0 | 0 | 2 | 5 | 34 | RC |
| CI Total Violations RC | 422 | 3.33 | 3.77 | 0 | 1 | 2 | 5 | 24 | RC |
| CLASS ES PQ | 384 | 5.89 | 0.36 | 4.38 | 5.69 | 5.91 | 6.12 | 6.91 | PQ |
| CLASS CO PQ | 384 | 5.45 | 0.49 | 3.07 | 5.18 | 5.48 | 5.77 | 6.56 | PQ |
| CLASS IS PQ | 384 | 2.98 | 0.7 | 1.12 | 2.5 | 2.95 | 3.37 | 5.74 | PQ |
| CLASS TOTAL OF THREE SCALES PQ | 384 | 14.33 | 1.32 | 8.87 | 13.52 | 14.33 | 15.11 | 17.99 | PQ |
| ECERS PQ | 362 | 4.52 | 1.05 | 1.49 | 3.95 | 4.58 | 5.25 | 7 | PQ |
| FDCRS PQ | 207 | 4.5 | 1 | 1.86 | 3.83 | 4.66 | 5.31 | 6.71 | PQ |
| CCC RC | 585 | 5.3 | 5.33 | 0 | 2 | 4 | 8 | 51 | RC |
Studies Completed After 2020:
| Data Set | Sites | Mean | sd | p0 | p25 | p50 | p75 | p100 | PQ or RC |
|---|---|---|---|---|---|---|---|---|---|
| QRIS RC | 585 | 2.78 | 1.24 | 0 | 2 | 3 | 4 | 4 | RC |
| FDC RC | 2486 | 2.27 | 3.42 | 0 | 0 | 1 | 3 | 34 | RC |
| FDC PQ | 2486 | 1.35 | 1.26 | 0 | 0 | 1 | 2 | 4 | PQ |
| CCC RC | 199 | 7.77 | 8.62 | 0 | 3 | 6 | 10 | 61 | RC |
| CCC RC | 199 | 6.69 | 10.32 | 0 | 1 | 4 | 8 | 98 | RC |
| CCC RC | 199 | 6.77 | 7.91 | 0 | 1.5 | 4 | 8.5 | 57 | RC |
| QRIS RC | 199 | 1.06 | 1.32 | 0 | 0 | 1 | 2 | 4 | RC |
| CCC RC | 199 | 7.08 | 6.96 | 0 | 2.33 | 5.67 | 9.84 | 52 | RC |
| QRIS RC | 381 | 2.55 | 0.93 | 0 | 2 | 3 | 3 | 4 | RC |
| CCC RC | 1399 | 1.13 | 2.1 | 0 | 0 | 0 | 1 | 20 | RC |
| CCC RC | 153 | 5.28 | 5.97 | 0 | 1 | 3 | 6 | 32 | RC |
| FDC RC | 82 | 3.52 | 4.36 | 0 | 0 | 2 | 4 | 21 | RC |
3. Structural and Process Quality Scoring Systems in Child Care and Early Education
This section delves into the details of the scoring systems used for structural and process quality in child care and early education programs. These scoring systems have evolved significantly over the years, influenced by the tests and measurement research literature. Presently, a significant change has been proposed for measuring structural quality that needs to be shared and reacted to.
There are a great many similarities between the scoring systems for structural quality and process quality, although the content varies greatly. Both structural and process quality scoring systems measure compliance with specific rules, items, or standards in a similar fashion. Both can use a weighting of each rule, item, or standard, but that is not always the case. And usually the overall or global score is on a scale such as 1-7 (Environmental Rating Scales (ERS) and Classroom Assessment Scoring System (CLASS)) or 1-5 (Quality Rating and Improvement Systems (QRIS)), with the exception of licensing systems, in which violation counts have been used in the past.
The purpose of this section is to suggest the use of a scale, the Regulatory Compliance Scale (RCS), in place of violation counts so that licensing data distributions can more closely mirror what occurs in other structural quality systems, such as QRIS and accreditation systems, and throughout process quality systems (ERS and CLASS). The reason for suggesting this change is that studies conducted in the state of Washington and the Province of Saskatchewan determined that the RCS was more effective in distinguishing the relative quality of programs than violation count data.
Here is a potential scale structure that could be used in transposing violation count data to the Regulatory Compliance Scale (RCS) (Table 1). These thresholds or buckets for the RCS were determined by analyzing a multitude of regulatory compliance data sets drawn from the USA and Canadian Provinces, and these violation count cut points were found to be best at distinguishing the various levels of quality.
Table 1. Comparison of RCS with Regulatory Compliance Violation Counts.
| RCS | Violation Counts | Description |
|---|---|---|
| 7 | 0 | Full 100% Regulatory Compliance |
| 5 | 1-2 | Substantial Regulatory Compliance |
| 3 | 3-9 | Mediocre Regulatory Compliance |
| 1 | 10+ | Low Non-Optimal Regulatory Compliance |
This transformation of data from violation counts to the RCS could vary if a weighting system is used with the rules. If not, the above transformation has worked well in creating an ordinal/categorical ranking of regulatory compliance from unweighted violation count data. It fits with the prevailing theory of regulatory compliance, which emphasizes the importance of substantial compliance with rules in determining the quality of a setting.
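A minimal sketch of this transformation for unweighted violation counts, following the Table 1 cut points (the function name and example counts are illustrative):

```python
def rcs_from_violations(count: int) -> int:
    """Map an unweighted violation count to the Regulatory Compliance Scale (Table 1)."""
    if count == 0:
        return 7   # Full 100% regulatory compliance
    if count <= 2:
        return 5   # Substantial regulatory compliance
    if count <= 9:
        return 3   # Mediocre regulatory compliance
    return 1       # Low, non-optimal regulatory compliance

# Example: transform raw violation counts into RCS scores
print([rcs_from_violations(c) for c in [0, 1, 4, 12]])  # -> [7, 5, 3, 1]
```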
Moving to the Regulatory Compliance Scale mirrors the other scoring systems in structural quality and all the scoring systems in process quality. From an analytical point of view, this greatly simplifies future analyses and makes them more straightforward.
4. The Emergence of a New Early Childhood Program Quality Tool/Scale for Parents Measuring Both Structural and Process Quality in Selecting Child Care
This section provides an overview of what parents can look for when selecting high-quality child care for their children. It is based upon 50 years of research into what constitutes a high-quality child care and early education program, and it is drawn from the major quality initiatives that have been implemented throughout the USA and Canada during this time frame.
The key indicators that every parent should be looking for:
These are the basic structural and process quality key indicators that every parent should be looking for when selecting their child care. The more of these you see, the better.