In statistics, the Bayesian information criterion (BIC) or Schwarz information criterion (also SIC, SBC, SBIC) is a criterion for model selection among a finite set of models; models with lower BIC are generally preferred. It is based, in part, on the likelihood function and it is closely related to the Akaike information criterion (AIC).
When fitting models, it is possible to increase the maximum likelihood by adding parameters, but doing so may result in overfitting. Both BIC and AIC attempt to resolve this problem by introducing a penalty term for the number of parameters in the model; the penalty term is larger in BIC than in AIC for sample sizes greater than 7, since $\ln(n)$ exceeds 2 once $n \geq 8$.[1]
The BIC was developed by Gideon E. Schwarz and published in a 1978 paper,[2] where he gave a Bayesian argument for adopting it.
Definition
The BIC is formally defined as

$$\mathrm{BIC} = k \ln(n) - 2 \ln(\hat{L}),$$

where:

$\hat{L}$ = the maximized value of the likelihood function of the model $M$, i.e. $\hat{L} = p(x \mid \hat{\theta}, M)$, where $\hat{\theta}$ are the parameter values that maximize the likelihood function and $x$ is the observed data;

$n$ = the number of data points in $x$, the number of observations, or equivalently, the sample size;

$k$ = the number of parameters estimated by the model. For example, in multiple linear regression, the estimated parameters are the intercept, the $q$ slope parameters, and the constant variance of the errors; thus, $k = q + 2$.
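Translated directly into code, the definition is a one-liner. The helper below is a minimal sketch (the function name and arguments are ours, not from any particular library), assuming the maximized log-likelihood $\ln(\hat{L})$ has already been computed:

```python
import numpy as np

def bic(log_likelihood: float, k: int, n: int) -> float:
    """Bayesian information criterion: BIC = k*ln(n) - 2*ln(L-hat).

    log_likelihood -- maximized log-likelihood, ln(L-hat)
    k              -- number of parameters estimated by the model
    n              -- number of observations (sample size)
    """
    return k * np.log(n) - 2.0 * log_likelihood
```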
Derivation
The BIC can be derived by integrating out the parameters of the model using Laplace's method, starting with the following model evidence:[5][6]: 217

$$p(x \mid M) = \int p(x \mid \theta, M) \, \pi(\theta \mid M) \, d\theta,$$

where $\pi(\theta \mid M)$ is the prior for $\theta$ under model $M$.

The log-likelihood, $\ln(p(x \mid \theta, M))$, is then expanded to a second order Taylor series about the MLE, $\hat{\theta}$, assuming it is twice differentiable as follows:

$$\ln(p(x \mid \theta, M)) = \ln(\hat{L}) - \frac{n}{2} (\theta - \hat{\theta})^{\mathsf{T}} \mathcal{I}(\hat{\theta}) (\theta - \hat{\theta}) + R(x, \theta),$$

where $\mathcal{I}(\hat{\theta})$ is the average observed information per observation, and $R(x, \theta)$ denotes the residual term. To the extent that $R(x, \theta)$ is negligible and $\pi(\theta \mid M)$ is relatively linear near $\hat{\theta}$, we can integrate out $\theta$ to get the following:

$$p(x \mid M) \approx \hat{L} \left(\frac{2\pi}{n}\right)^{k/2} \left|\mathcal{I}(\hat{\theta})\right|^{-1/2} \pi(\hat{\theta} \mid M).$$

As $n$ increases, we can ignore $\left|\mathcal{I}(\hat{\theta})\right|$ and $\pi(\hat{\theta} \mid M)$ as they are $O(1)$. Thus,

$$p(x \mid M) = \exp\left(\ln \hat{L} - \frac{k}{2} \ln(n) + O(1)\right) = \exp\left(-\frac{\mathrm{BIC}}{2} + O(1)\right),$$

where BIC is defined as above, and $\hat{L}$ either (a) is the Bayesian posterior mode or (b) uses the MLE and the prior $\pi(\theta \mid M)$ has nonzero slope at the MLE. Then the posterior probability of the model satisfies

$$p(M \mid x) \propto p(x \mid M) \, p(M) \approx \exp\left(-\frac{\mathrm{BIC}}{2}\right) p(M).$$
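This $O(1)$ behavior can be illustrated numerically on a model whose evidence has a closed form. The following sketch is our own construction, not from the article (the data, prior variance, and variable names are assumptions): for $x_i \sim N(\mu, 1)$ with conjugate prior $\mu \sim N(0, \tau^2)$, the exact log evidence is available, and its gap from $\ln\hat{L} - \tfrac{k}{2}\ln(n) = -\mathrm{BIC}/2$ stays bounded as $n$ grows.

```python
import numpy as np
from scipy.stats import norm

# Illustrative check (assumed setup): exact log evidence vs. -BIC/2 for
# x_i ~ N(mu, 1) with conjugate prior mu ~ N(0, tau^2).
rng = np.random.default_rng(0)
tau2 = 10.0  # assumed prior variance

for n in (50, 500, 5000):
    x = rng.normal(1.0, 1.0, size=n)
    xbar = x.mean()
    ss = np.sum((x - xbar) ** 2)

    # Maximized log-likelihood: the MLE of mu is the sample mean (sigma = 1).
    loglik_hat = -0.5 * n * np.log(2 * np.pi) - 0.5 * ss

    # Exact log evidence: integrating the Gaussian likelihood against the
    # conjugate prior gives ln(L-hat) + (1/2)ln(2*pi/n) + ln N(xbar; 0, tau^2 + 1/n).
    log_evidence = (loglik_hat
                    + 0.5 * np.log(2 * np.pi / n)
                    + norm.logpdf(xbar, loc=0.0, scale=np.sqrt(tau2 + 1.0 / n)))

    k = 1  # one estimated parameter: the mean
    minus_half_bic = loglik_hat - 0.5 * k * np.log(n)
    print(n, log_evidence - minus_half_bic)  # gap remains O(1) as n grows
```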
Usage
When picking from several models, ones with lower BIC values are generally preferred. The BIC is an increasing function of the error variance $\sigma_e^2$ and an increasing function of $k$. That is, unexplained variation in the dependent variable and the number of explanatory variables increase the value of BIC. However, a lower BIC does not necessarily indicate one model is better than another. Because it involves approximations, the BIC is merely a heuristic. In particular, differences in BIC should never be treated like transformed Bayes factors.
It is important to keep in mind that the BIC can be used to compare estimated models only when the numerical values of the dependent variable[b] are identical for all models being compared. The models being compared need not be nested, unlike the case when models are being compared using an F-test or a likelihood ratio test.[citation needed]
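As an illustration of comparing non-nested models, the sketch below (our own example; the data-generating process and candidate families are assumptions, not from the article) fits two different distributions to the same observations by maximum likelihood and ranks them by BIC:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.gamma(shape=2.0, scale=1.5, size=200)  # assumed example data
n = len(data)

# Two non-nested candidate models for the same dependent variable.
candidates = {
    "normal": (stats.norm, 2),  # fitted parameters: loc, scale -> k = 2
    "gamma": (stats.gamma, 3),  # fitted parameters: shape, loc, scale -> k = 3
}

for name, (dist, k) in candidates.items():
    params = dist.fit(data)                      # maximum-likelihood estimates
    loglik = np.sum(dist.logpdf(data, *params))  # ln(L-hat)
    bic = k * np.log(n) - 2.0 * loglik
    print(f"{name}: BIC = {bic:.1f}")            # lower BIC is preferred
```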
Properties
The BIC generally penalizes free parameters more strongly than the Akaike information criterion, though it depends on the size of $n$ and the relative magnitude of $n$ and $k$.
It is independent of the prior.
It can measure the efficiency of the parameterized model in terms of predicting the data.
It penalizes the complexity of the model, where complexity refers to the number of parameters in the model.

Limitations
The BIC suffers from two main limitations:
The above approximation is only valid for sample size $n$ much larger than the number $k$ of parameters in the model.
The BIC cannot handle complex collections of models, as in the variable selection (or feature selection) problem in high dimension.[7]
Gaussian special case
Under the assumption that the model errors or disturbances are independent and identically distributed according to a normal distribution and the boundary condition that the derivative of the log likelihood with respect to the true variance is zero, this becomes (up to an additive constant, which depends only on $n$ and not on the model):[8]

$$\mathrm{BIC} = n \ln(\hat{\sigma}_e^2) + k \ln(n),$$

where $\hat{\sigma}_e^2$ is the error variance. The error variance in this case is defined as

$$\hat{\sigma}_e^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \hat{x}_i)^2,$$

which is a biased estimator for the true variance.
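As a sketch of this special case (our own example; the simulated data and helper name are assumptions), the formula can be applied to ordinary least squares fits, remembering that the error variance counts as one of the $k$ estimated parameters:

```python
import numpy as np

def gaussian_bic(y, X):
    """BIC for OLS with i.i.d. Gaussian errors, up to an additive constant:
    n*ln(sigma_hat^2) + k*ln(n), with k = coefficients + error variance."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS coefficient estimates
    sigma2 = np.mean((y - X @ beta) ** 2)         # biased MLE of error variance
    k = X.shape[1] + 1
    n = len(y)
    return n * np.log(sigma2) + k * np.log(n)

rng = np.random.default_rng(2)
n = 100
x = rng.uniform(0.0, 10.0, size=n)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, size=n)  # truly linear relationship

X1 = np.column_stack([np.ones(n), x])             # intercept + slope
X2 = np.column_stack([np.ones(n), x, x ** 2])     # adds an unneeded quadratic
print(gaussian_bic(y, X1), gaussian_bic(y, X2))   # the linear model usually wins
```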
Notes
^ The AIC, AICc and BIC defined by Claeskens and Hjort[4] are the negatives of those defined in this article and in most other standard references.
^ A dependent variable is also called a response variable or an outcome variable. See Regression analysis.
References
^ See the review paper: Stoica, P.; Selen, Y. (2004), "Model-order selection: a review of information criterion rules", IEEE Signal Processing Magazine (July): 36–47, doi:10.1109/MSP.2004.1311138, S2CID 17338979.