
Cite


HandWiki. Homoscedasticity. Encyclopedia. Available online: https://encyclopedia.pub/entry/31956 (accessed on 19 June 2024).


In statistics, a sequence (or a vector) of random variables is homoscedastic (/ˌhoʊmoʊskəˈdæstɪk/) if all its random variables have the same finite variance; this is also known as homogeneity of variance. The complementary notion is called heteroscedasticity (/ˌhɛtəroʊskəˈdæstɪk/). The spellings homoskedasticity and heteroskedasticity are also frequently used. Assuming a variable is homoscedastic when in reality it is heteroscedastic results in unbiased but inefficient point estimates and in biased estimates of standard errors, and may result in overestimating the goodness of fit as measured by the Pearson coefficient.

Keywords: heteroscedasticity; random variables; heteroskedasticity

A standard assumption in a linear regression, [math]\displaystyle{ y_i= X_i \beta + \epsilon_i, i = 1,\ldots, N, }[/math] is that the variance of the disturbance term [math]\displaystyle{ \epsilon_i }[/math] is the same across observations, and in particular does not depend on the values of the explanatory variables [math]\displaystyle{ X_i. }[/math]^{[1]} This is one of the assumptions under which the Gauss–Markov theorem applies and ordinary least squares (OLS) gives the best linear unbiased estimator ("BLUE"). Homoscedasticity is not required for the coefficient estimates to be unbiased, consistent, and asymptotically normal, but it is required for OLS to be efficient.^{[2]} It is also required for the classical standard errors of the estimates to be unbiased and consistent, so it is required for accurate hypothesis testing, e.g. for a t-test of whether a coefficient is significantly different from zero.

A more formal way to state the assumption of homoskedasticity is that the diagonal entries of the variance–covariance matrix of [math]\displaystyle{ \epsilon }[/math] must all be the same number: [math]\displaystyle{ \operatorname{E}[\epsilon_i\epsilon_i] = \sigma^2 }[/math], where [math]\displaystyle{ \sigma^2 }[/math] is the same for all *i*.^{[3]} Note that this still allows the off-diagonal entries, the covariances [math]\displaystyle{ \operatorname{E}[\epsilon_i\epsilon_j] }[/math], to be nonzero, which is a separate violation of the Gauss–Markov assumptions known as serial correlation.

The matrices below are covariance matrices of the disturbance, with entries [math]\displaystyle{ \operatorname{E}[\epsilon_i\epsilon_j] }[/math], when there are just three observations across time. The disturbance in matrix A is homoskedastic; this is the simple case where OLS is the best linear unbiased estimator. The disturbances in matrices B and C are heteroskedastic: in matrix B the variance increases steadily across time, and in matrix C the variance depends on the value of x. The disturbance in matrix D is homoskedastic because the diagonal variances are constant, even though the off-diagonal covariances are nonzero; ordinary least squares is inefficient here for a different reason, serial correlation.

- [math]\displaystyle{ A = \sigma^2 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix}\;\;\;\;\;\;\; B = \sigma^2 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \\ \end{bmatrix}\;\;\;\;\;\;\; C = \sigma^2 \begin{bmatrix} x_1 & 0 & 0 \\ 0 & x_2 & 0 \\ 0 & 0 & x_3 \\ \end{bmatrix} \;\;\;\;\;\;\; D = \sigma^2 \begin{bmatrix} 1 & \rho & \rho^2 \\ \rho & 1 & \rho \\ \rho^2 &\rho & 1 \\ \end{bmatrix} }[/math]
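As a numerical sketch of the distinction above, the following Python snippet builds the four matrices and checks only the diagonals, since homoskedasticity says nothing about the off-diagonal covariances. The values chosen for [math]\displaystyle{ \sigma^2 }[/math], [math]\displaystyle{ \rho }[/math], and the [math]\displaystyle{ x_i }[/math] are arbitrary placeholders:

```python
import numpy as np

sigma2 = 1.0                 # placeholder value for sigma^2
rho = 0.5                    # placeholder serial-correlation parameter for D
x1, x2, x3 = 1.0, 4.0, 9.0   # placeholder regressor values for C

A = sigma2 * np.eye(3)                      # homoskedastic, no correlation
B = sigma2 * np.diag([1.0, 2.0, 3.0])       # variance grows across time
C = sigma2 * np.diag([x1, x2, x3])          # variance depends on x
D = sigma2 * np.array([[1,      rho,    rho**2],
                       [rho,    1,      rho   ],
                       [rho**2, rho,    1     ]])  # equal variances, serial correlation

def is_homoskedastic(cov):
    """Homoskedasticity only requires equal diagonal entries."""
    d = np.diag(cov)
    return bool(np.allclose(d, d[0]))

print([is_homoskedastic(M) for M in (A, B, C, D)])
```

Only A and D pass the check: D's disturbance is homoskedastic even though its off-diagonal structure violates a different Gauss–Markov assumption.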

If *y* is consumption, *x* is income, and [math]\displaystyle{ \epsilon }[/math] represents the whims of the consumer, and we are estimating [math]\displaystyle{ y_i = \beta x_i + \epsilon_i, }[/math] then if richer consumers' whims affect their spending more in absolute dollars, we might have [math]\displaystyle{ Var(\epsilon_i) = x_i \sigma^2, }[/math] rising with income, as in matrix C above.^{[3]}
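The consumption example can be illustrated with a small simulation; the parameter values below are hypothetical, chosen only to make the pattern of matrix C visible. The slope estimate remains close to the truth (heteroskedasticity does not bias OLS), but the residual spread grows with income:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
beta, sigma2 = 0.9, 0.04            # hypothetical true parameters

income = rng.uniform(10, 100, size=n)
# Var(eps_i) = income_i * sigma^2, matching matrix C above
eps = rng.normal(0.0, np.sqrt(sigma2 * income))
consumption = beta * income + eps

# OLS through the origin: the slope estimate stays consistent...
b_hat = (income @ consumption) / (income @ income)

# ...but the residual variance rises with income
resid = consumption - b_hat * income
low = resid[income < 40].var()
high = resid[income > 70].var()
print(b_hat, low, high)
```

With a large sample, `low` is markedly smaller than `high`, which is exactly the pattern heteroskedasticity tests look for.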

Residuals can be tested for homoscedasticity using the Breusch–Pagan test,^{[4]} which runs an auxiliary regression of the squared residuals on the independent variables. From this auxiliary regression, the explained sum of squares is divided by two to form a test statistic that follows a chi-squared distribution with degrees of freedom equal to the number of independent variables.^{[5]} The null hypothesis of this chi-squared test is homoscedasticity; the alternative hypothesis indicates heteroscedasticity. Because the Breusch–Pagan test is sensitive to departures from normality and to small sample sizes, the Koenker–Bassett or 'generalized Breusch–Pagan' test is commonly used instead.^{[6]} It takes the R-squared value from the auxiliary regression, multiplies it by the sample size, and uses the product as a test statistic with the same chi-squared distribution and degrees of freedom. Unlike the Koenker–Bassett test, the Breusch–Pagan test additionally requires the squared residuals to be divided by the residual sum of squares divided by the sample size, i.e. by the average squared residual.^{[6]} Groupwise heteroscedasticity can be tested with the Goldfeld–Quandt test.
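A minimal hand-rolled sketch of the Koenker–Bassett ('studentized' Breusch–Pagan) statistic, the sample size times the R-squared of the auxiliary regression, might look as follows; the simulated data and parameter values are hypothetical:

```python
import numpy as np

def koenker_bassett(X, e):
    """n * R-squared from regressing the squared OLS residuals e on X.
    X must include a constant column; the statistic is compared against a
    chi-squared distribution with (number of regressors minus 1) d.o.f."""
    n = len(e)
    g = e**2
    coef, *_ = np.linalg.lstsq(X, g, rcond=None)
    fitted = X @ coef
    r2 = 1.0 - ((g - fitted)**2).sum() / ((g - g.mean())**2).sum()
    return n * r2

# Demo on simulated data whose error spread grows with x:
rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(1, 10, size=n)
X = np.column_stack([np.ones(n), x])
y = 2 + 3 * x + rng.normal(0, x)            # heteroskedastic: sd proportional to x
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b
stat = koenker_bassett(X, e)
print(stat)  # compare against 3.84, the 5% chi-squared critical value for 1 d.o.f.
```

For this strongly heteroskedastic design the statistic far exceeds the critical value, so the null of homoscedasticity is rejected.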

Two or more normal distributions, [math]\displaystyle{ N(\mu_i,\Sigma_i) }[/math], are homoscedastic if they share a common covariance (or correlation) matrix, [math]\displaystyle{ \Sigma_i = \Sigma_j,\ \forall i,j }[/math]. Homoscedastic distributions are especially useful in deriving statistical pattern recognition and machine learning algorithms. One popular example of an algorithm that assumes homoscedasticity is Fisher's linear discriminant analysis.
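As a sketch of how the shared-covariance assumption enters Fisher's linear discriminant analysis, the snippet below pools the per-class covariance estimates (which is only justified when [math]\displaystyle{ \Sigma_0 = \Sigma_1 }[/math]) and forms the discriminant direction; all data and parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])               # common covariance: homoscedastic classes
L = np.linalg.cholesky(Sigma)
mu0, mu1 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
X0 = mu0 + rng.standard_normal((500, 2)) @ L.T
X1 = mu1 + rng.standard_normal((500, 2)) @ L.T

# Pooled covariance: the estimate LDA uses because it assumes Sigma_0 = Sigma_1
S0 = np.cov(X0, rowvar=False)
S1 = np.cov(X1, rowvar=False)
S_pooled = ((len(X0) - 1) * S0 + (len(X1) - 1) * S1) / (len(X0) + len(X1) - 2)

# Fisher's discriminant direction: w = S_pooled^{-1} (mu1_hat - mu0_hat)
w = np.linalg.solve(S_pooled, X1.mean(axis=0) - X0.mean(axis=0))

# Classify by projecting onto w and thresholding at the midpoint of the class means
t = w @ (X0.mean(axis=0) + X1.mean(axis=0)) / 2
acc = ((X1 @ w > t).mean() + (X0 @ w < t).mean()) / 2
print(acc)
```

When the homoscedasticity assumption fails, the pooled estimate no longer describes either class and quadratic discriminant analysis (separate covariances) is the usual alternative.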

The concept of homoscedasticity can be applied to distributions on spheres.^{[7]}

1. Kennedy, Peter. *A Guide to Econometrics*, 5th edition, p. 137.
2. Achen, Christopher H.; Shively, W. Phillips (1995). *Cross-Level Inference*. University of Chicago Press. pp. 47–48. ISBN 9780226002194. https://books.google.com/books?id=AEXUlWus4K4C&pg=PA47
3. Kennedy, Peter. *A Guide to Econometrics*, 5th edition, p. 136.
4. Breusch, T. S.; Pagan, A. R. (1979). "A Simple Test for Heteroscedasticity and Random Coefficient Variation". *Econometrica* 47 (5): 1287–1294. doi:10.2307/1911963. https://www.jstor.org/stable/1911963
5. Ullah, Muhammad Imdad (2012-07-26). "Breusch Pagan Test for Heteroscedasticity". https://itfeature.com/correlation-and-regression-analysis/ols-assumptions/breusch-pagan-test
6. Pryce, Gwilym. "Heteroscedasticity: Testing and Correcting in SPSS". pp. 12–18. http://reocities.com/Heartland/4205/SPSS/HeteroscedasticityTestingAndCorrectingInSPSS1.pdf
7. Hamsici, Onur C.; Martinez, Aleix M. (2007). "Spherical-Homoscedastic Distributions: The Equivalency of Spherical and Normal Distributions in Classification". *Journal of Machine Learning Research* 8: 1583–1623. http://jmlr.csail.mit.edu/papers/volume8/hamsici07a/hamsici07a.pdf

Subjects: Statistics & Probability

Entry Collection: HandWiki

Update Date: 31 Oct 2022