If greater values of one variable mainly correspond with greater values of the other variable, and the same holds for lesser values (that is, the variables tend to show similar behavior), the covariance is positive.[2] In the opposite case, when greater values of one variable mainly correspond to lesser values of the other (that is, the variables tend to show opposite behavior), the covariance is negative. The sign of the covariance, therefore, shows the tendency in the linear relationship between the variables. The magnitude of the covariance is the geometric mean of the variances that are in common for the two random variables. The correlation coefficient normalizes the covariance by dividing by the geometric mean of the total variances for the two random variables.
A distinction must be made between (1) the covariance of two random variables, which is a population parameter that can be seen as a property of the joint probability distribution, and (2) the sample covariance, which in addition to serving as a descriptor of the sample, also serves as an estimated value of the population parameter.
For two jointly distributed real-valued random variables $X$ and $Y$ with finite second moments, the covariance is defined as the expected value of the product of their deviations from their individual expected values:
$$\operatorname{cov}(X,Y) = \operatorname{E}\big[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])\big]$$
where $\operatorname{E}[X]$ is the expected value of $X$, also known as the mean of $X$. The covariance is also sometimes denoted $\sigma_{XY}$ or $\sigma(X,Y)$, in analogy to variance. By using the linearity property of expectations, this can be simplified to the expected value of their product minus the product of their expected values:
$$\operatorname{cov}(X,Y) = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y].$$
The units of measurement of the covariance $\operatorname{cov}(X,Y)$ are those of $X$ times those of $Y$. By contrast, correlation coefficients, which depend on the covariance, are a dimensionless measure of linear dependence. (In fact, correlation coefficients can simply be understood as a normalized version of covariance.)
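As an illustration of this normalization, the following Python sketch (assuming NumPy; the paired measurements and their units are made up for the example) estimates a covariance and then rescales it by the two standard deviations to obtain the dimensionless Pearson correlation coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired measurements, e.g. heights (m) and weights (kg).
heights = rng.normal(1.75, 0.1, size=1000)                      # units: metres
weights = 40.0 + 30.0 * heights + rng.normal(0, 5, size=1000)   # units: kilograms

# Sample covariance: its units are metres times kilograms.
cov_hw = np.cov(heights, weights)[0, 1]

# Dividing by the two standard deviations removes the units,
# giving the dimensionless Pearson correlation coefficient.
corr_hw = cov_hw / (np.std(heights, ddof=1) * np.std(weights, ddof=1))

print(cov_hw, corr_hw, np.corrcoef(heights, weights)[0, 1])
```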
If the (real) random variable pair $(X,Y)$ can take on the values $(x_i,y_i)$ for $i = 1, \ldots, n$, with equal probabilities $p_i = 1/n$, then the covariance can be equivalently written in terms of the means $\operatorname{E}[X]$ and $\operatorname{E}[Y]$ as
$$\operatorname{cov}(X,Y) = \frac{1}{n}\sum_{i=1}^{n}(x_i - \operatorname{E}[X])(y_i - \operatorname{E}[Y]).$$
It can also be equivalently expressed, without directly referring to the means, as[5]
$$\operatorname{cov}(X,Y) = \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{1}{2}(x_i - x_j)(y_i - y_j) = \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j>i}(x_i - x_j)(y_i - y_j).$$
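A minimal sketch (assuming NumPy and arbitrary made-up data) checking numerically that the mean-based form and the pairwise-difference form above agree:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(size=50)

# Mean-based form: average product of deviations from the means.
cov_means = np.mean((x - x.mean()) * (y - y.mean()))

# Pairwise-difference form: no means appear explicitly.
cov_pairs = 0.5 * np.mean((x[:, None] - x[None, :]) * (y[:, None] - y[None, :]))

assert np.isclose(cov_means, cov_pairs)
print(cov_means, cov_pairs)
```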
More generally, if there are $n$ possible realizations of $(X,Y)$, namely $(x_i,y_i)$ but occurring with possibly unequal probabilities $p_i$ for $i = 1, \ldots, n$, then the covariance is
$$\operatorname{cov}(X,Y) = \sum_{i=1}^{n} p_i (x_i - \operatorname{E}[X])(y_i - \operatorname{E}[Y]).$$
In the case where two discrete random variables $X$ and $Y$ have a joint probability distribution, represented by elements $p_{ij}$ corresponding to the joint probabilities $P(X = x_i, Y = y_j)$, the covariance is calculated using a double summation over the indices of the matrix:
$$\operatorname{cov}(X,Y) = \sum_{i}\sum_{j} p_{ij}\,(x_i - \operatorname{E}[X])(y_j - \operatorname{E}[Y]).$$
Examples
Consider three independent random variables $A, B, C$ and two constants $q, r$:
$$X = qA + B, \qquad Y = rA + C,$$
$$\operatorname{cov}(X,Y) = qr\operatorname{var}(A).$$
In the special case $q = 1$ and $r = 1$, the covariance between $X$ and $Y$ is just the variance of $A$, and the name covariance is entirely appropriate.
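A quick Monte Carlo check of this identity (a sketch assuming NumPy; the particular distributions and constants below are arbitrary choices, not part of the example above):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
q, r = 2.0, -3.0

# Three independent random variables.
A = rng.normal(0.0, 1.5, size=n)      # var(A) = 2.25
B = rng.exponential(1.0, size=n)
C = rng.uniform(-1.0, 1.0, size=n)

X = q * A + B
Y = r * A + C

# The empirical covariance should be close to q * r * var(A) = -13.5.
print(np.cov(X, Y)[0, 1], q * r * 1.5**2)
```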
Suppose that $X$ and $Y$ have the following joint probability mass function,[6] in which the six central cells give the discrete joint probabilities $f(x,y)$ of the six hypothetical realizations $(x,y) \in \{(5,8),(6,8),(7,8),(5,9),(6,9),(7,9)\}$:
                 x
             5      6      7     f_Y(y)
  y    8     0      0.4    0.1    0.5
       9     0.3    0      0.2    0.5
  f_X(x)     0.3    0.4    0.3     1
$X$ can take on three values (5, 6 and 7) while $Y$ can take on two (8 and 9). Their means are $\mu_X = 5(0.3) + 6(0.4) + 7(0.3) = 6$ and $\mu_Y = 8(0.5) + 9(0.5) = 8.5$. Then,
$$\begin{aligned}\operatorname{cov}(X,Y) &= \sum_{(x,y)} f(x,y)\,(x - \mu_X)(y - \mu_Y) \\
&= (0)(5-6)(8-8.5) + (0.4)(6-6)(8-8.5) + (0.1)(7-6)(8-8.5) \\
&\quad + (0.3)(5-6)(9-8.5) + (0)(6-6)(9-8.5) + (0.2)(7-6)(9-8.5) \\
&= -0.1.\end{aligned}$$
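The same computation can be carried out mechanically from the joint probability table, for instance with a short NumPy sketch (the array layout mirrors the table above):

```python
import numpy as np

x = np.array([5.0, 6.0, 7.0])            # values of X (columns)
y = np.array([8.0, 9.0])                  # values of Y (rows)
p = np.array([[0.0, 0.4, 0.1],            # joint pmf, rows indexed by y, columns by x
              [0.3, 0.0, 0.2]])

mu_x = (p.sum(axis=0) * x).sum()          # marginal of X times x -> 6.0
mu_y = (p.sum(axis=1) * y).sum()          # marginal of Y times y -> 8.5

# Double summation over the cells of the table.
cov_xy = (p * np.outer(y - mu_y, x - mu_x)).sum()
print(mu_x, mu_y, cov_xy)                 # 6.0  8.5  -0.1
```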
Properties
Covariance with itself
The variance is a special case of the covariance in which the two variables are identical (that is, in which one variable is always equal to the other):[4]: 121
$$\operatorname{cov}(X,X) = \operatorname{var}(X) \equiv \sigma^2(X) \equiv \sigma_X^2.$$
Covariance of linear combinations
If $X$, $Y$, $W$, and $V$ are real-valued random variables and $a, b, c, d$ are real-valued constants, then the following facts are a consequence of the definition of covariance:
$$\begin{aligned}
\operatorname{cov}(X, a) &= 0 \\
\operatorname{cov}(X, X) &= \operatorname{var}(X) \\
\operatorname{cov}(X, Y) &= \operatorname{cov}(Y, X) \\
\operatorname{cov}(aX, bY) &= ab\operatorname{cov}(X, Y) \\
\operatorname{cov}(X + a, Y + b) &= \operatorname{cov}(X, Y) \\
\operatorname{cov}(aX + bY, cW + dV) &= ac\operatorname{cov}(X, W) + ad\operatorname{cov}(X, V) + bc\operatorname{cov}(Y, W) + bd\operatorname{cov}(Y, V)
\end{aligned}$$
For a sequence $X_1, \ldots, X_n$ of real-valued random variables and constants $a_1, \ldots, a_n$, we have
$$\operatorname{var}\left(\sum_{i=1}^{n} a_i X_i\right) = \sum_{i=1}^{n} a_i^2 \sigma^2(X_i) + 2\sum_{i,j\,:\,i<j} a_i a_j \operatorname{cov}(X_i, X_j) = \sum_{i,j} a_i a_j \operatorname{cov}(X_i, X_j).$$
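A sketch (assuming NumPy; the distributions and coefficients below are arbitrary) that checks the last identity, the variance of a linear combination, against a directly computed variance:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
a = np.array([1.0, -2.0, 0.5])

# Three correlated random variables built from a common source.
z = rng.normal(size=(3, n))
X = np.vstack([z[0], 0.7 * z[0] + z[1], -0.3 * z[1] + z[2]])

S = np.cov(X)                    # empirical covariance matrix of X_1, X_2, X_3
lhs = np.var(a @ X, ddof=1)      # var(sum_i a_i X_i), computed directly
rhs = a @ S @ a                  # sum_{i,j} a_i a_j cov(X_i, X_j)
print(lhs, rhs)                  # agree up to floating-point rounding
```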
Hoeffding's covariance identity
A useful identity to compute the covariance between two random variables $X, Y$ is Hoeffding's covariance identity:[7]
$$\operatorname{cov}(X,Y) = \int_{\mathbb{R}}\int_{\mathbb{R}} \left(F_{(X,Y)}(x,y) - F_X(x)F_Y(y)\right)\,dx\,dy$$
where $F_{(X,Y)}(x,y)$ is the joint cumulative distribution function of the random vector $(X,Y)$ and $F_X(x)$, $F_Y(y)$ are the marginals.
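A small numerical illustration of the identity (a sketch assuming NumPy): for $X$ uniform on $[0,1]$ and $Y = X$, the joint CDF is $F_{(X,Y)}(x,y) = \min(x,y)$ on the unit square and $\operatorname{cov}(X,Y) = \operatorname{var}(X) = 1/12$, which a grid approximation of the double integral recovers.

```python
import numpy as np

# Midpoint grid over the unit square (the integrand vanishes outside it here).
N = 2000
t = (np.arange(N) + 0.5) / N
xx, yy = np.meshgrid(t, t)

# F_{(X,Y)}(x,y) - F_X(x) F_Y(y) for X uniform on [0,1] and Y = X.
integrand = np.minimum(xx, yy) - xx * yy
cov_hoeffding = integrand.sum() / N**2

print(cov_hoeffding, 1 / 12)     # both approximately 0.0833
```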
Random variables whose covariance is zero are called uncorrelated.[4]: 121 Similarly, the components of random vectors whose covariance matrix is zero in every entry outside the main diagonal are also called uncorrelated.
The converse, however, is not generally true. For example, let $X$ be uniformly distributed in $[-1,1]$ and let $Y = X^2$. Clearly, $X$ and $Y$ are not independent, but
$$\operatorname{cov}(X,Y) = \operatorname{cov}(X, X^2) = \operatorname{E}[X \cdot X^2] - \operatorname{E}[X]\operatorname{E}[X^2] = \operatorname{E}[X^3] - \operatorname{E}[X]\operatorname{E}[X^2] = 0 - 0 = 0.$$
In this case, the relationship between $Y$ and $X$ is non-linear, while correlation and covariance are measures of linear dependence between two random variables. This example shows that if two random variables are uncorrelated, that does not in general imply that they are independent. However, if two variables are jointly normally distributed (but not if they are merely individually normally distributed), uncorrelatedness does imply independence.[9]
$X$ and $Y$ whose covariance is positive are called positively correlated, which implies that if $X > \operatorname{E}[X]$ then likely $Y > \operatorname{E}[Y]$. Conversely, $X$ and $Y$ with negative covariance are negatively correlated, and if $X > \operatorname{E}[X]$ then likely $Y < \operatorname{E}[Y]$.
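A short simulation of the example above (a sketch assuming NumPy): $X$ uniform on $[-1,1]$ and $Y = X^2$ have near-zero empirical covariance even though $Y$ is a deterministic function of $X$.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = x ** 2                     # Y is completely determined by X ...

print(np.cov(x, y)[0, 1])      # ... yet the covariance is approximately 0
```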
Relationship to inner products
Many of the properties of covariance can be extracted elegantly by observing that it satisfies similar properties to those of an inner product:
(1) bilinear: for constants $a$ and $b$ and random variables $X, Y, Z$, $\operatorname{cov}(aX + bY, Z) = a\operatorname{cov}(X, Z) + b\operatorname{cov}(Y, Z)$;
(2) symmetric: $\operatorname{cov}(X, Y) = \operatorname{cov}(Y, X)$;
(3) positive semi-definite: $\sigma^2(X) = \operatorname{cov}(X, X) \ge 0$ for all random variables $X$, and $\operatorname{cov}(X, X) = 0$ implies that $X$ is constant almost surely.
In fact these properties imply that the covariance defines an inner product over the quotient vector space obtained by taking the subspace of random variables with finite second moment and identifying any two that differ by a constant. (This identification turns the positive semi-definiteness above into positive definiteness.) That quotient vector space is isomorphic to the subspace of random variables with finite second moment and mean zero; on that subspace, the covariance is exactly the $L^2$ inner product of real-valued functions on the sample space.
As a result, for random variables with finite variance, the inequality
$$\left|\operatorname{cov}(X,Y)\right| \le \sqrt{\sigma^2(X)\,\sigma^2(Y)}$$
holds via the Cauchy–Schwarz inequality.
The sample covariances among $K$ variables based on $N$ observations of each, drawn from an otherwise unobserved population, are given by the $K \times K$ matrix $\overline{\mathbf{q}} = \left[q_{jk}\right]$ with the entries
$$q_{jk} = \frac{1}{N-1}\sum_{i=1}^{N}\left(X_{ij} - \bar{X}_j\right)\left(X_{ik} - \bar{X}_k\right),$$
which is an estimate of the covariance between variable $j$ and variable $k$.
The sample mean and the sample covariance matrix are unbiased estimates of the mean and the covariance matrix of the random vector $\mathbf{X}$, a vector whose $j$th element ($j = 1, \ldots, K$) is one of the random variables. The reason the sample covariance matrix has $N-1$ in the denominator rather than $N$ is essentially that the population mean $\operatorname{E}(\mathbf{X})$ is not known and is replaced by the sample mean $\bar{\mathbf{X}}$. If the population mean $\operatorname{E}(\mathbf{X})$ is known, the analogous unbiased estimate is given by
$$q_{jk} = \frac{1}{N}\sum_{i=1}^{N}\left(X_{ij} - \operatorname{E}(X_j)\right)\left(X_{ik} - \operatorname{E}(X_k)\right).$$
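For concreteness, a sketch (assuming NumPy and made-up data) that builds the sample covariance matrix with the $N-1$ denominator by hand and compares it with the library routine:

```python
import numpy as np

rng = np.random.default_rng(5)
N, K = 200, 3                          # N observations of K variables
data = rng.normal(size=(N, K))
data[:, 1] += 0.8 * data[:, 0]         # introduce some correlation

centered = data - data.mean(axis=0)    # subtract the sample mean of each variable
q = centered.T @ centered / (N - 1)    # K x K sample covariance matrix

assert np.allclose(q, np.cov(data, rowvar=False))
print(q)
```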
For a vector $\mathbf{X} = (X_1, X_2, \ldots, X_m)^{\mathsf{T}}$ of $m$ jointly distributed random variables with finite second moments, its auto-covariance matrix (also known as the variance–covariance matrix or simply the covariance matrix) $\operatorname{cov}(\mathbf{X}, \mathbf{X})$ (also denoted by $\Sigma(\mathbf{X})$ or $K_{\mathbf{X}\mathbf{X}}$) is defined as[10]: 335
$$\operatorname{cov}(\mathbf{X}, \mathbf{X}) = \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^{\mathsf{T}}\right] = \operatorname{E}\left[\mathbf{X}\mathbf{X}^{\mathsf{T}}\right] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^{\mathsf{T}}.$$
Let $\mathbf{X}$ be a random vector with covariance matrix $\Sigma$, and let $A$ be a matrix that can act on $\mathbf{X}$ on the left. The covariance matrix of the matrix-vector product $A\mathbf{X}$ is
$$\operatorname{cov}(A\mathbf{X}, A\mathbf{X}) = A\,\Sigma\,A^{\mathsf{T}}.$$
This is a direct result of the linearity of expectation.
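A quick numerical sanity check of this transformation rule (a sketch assuming NumPy; the matrix $A$ and the data below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000

X = rng.multivariate_normal(mean=[0.0, 1.0, -2.0],
                            cov=[[2.0, 0.3, 0.0],
                                 [0.3, 1.0, 0.5],
                                 [0.0, 0.5, 1.5]],
                            size=n).T           # shape (3, n): rows are variables

A = np.array([[1.0, -1.0, 0.0],
              [0.5,  2.0, 1.0]])                # maps 3 variables to 2

Sigma = np.cov(X)                               # empirical covariance of X
lhs = np.cov(A @ X)                             # covariance of the transformed vector
rhs = A @ Sigma @ A.T

print(np.allclose(lhs, rhs))                    # True: sample covariance is bilinear
```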
For real random vectors $\mathbf{X} \in \mathbb{R}^m$ and $\mathbf{Y} \in \mathbb{R}^n$, the $m \times n$ cross-covariance matrix is equal to[10]: 336
$$\operatorname{cov}(\mathbf{X}, \mathbf{Y}) = \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{Y} - \operatorname{E}[\mathbf{Y}])^{\mathsf{T}}\right] = \operatorname{E}\left[\mathbf{X}\mathbf{Y}^{\mathsf{T}}\right] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^{\mathsf{T}} \qquad \text{(Eq.2)}$$
where $\mathbf{Y}^{\mathsf{T}}$ is the transpose of the vector (or matrix) $\mathbf{Y}$.
The $(i,j)$-th element of this matrix is equal to the covariance $\operatorname{cov}(X_i, Y_j)$ between the $i$-th scalar component of $\mathbf{X}$ and the $j$-th scalar component of $\mathbf{Y}$. In particular, $\operatorname{cov}(\mathbf{Y}, \mathbf{X})$ is the transpose of $\operatorname{cov}(\mathbf{X}, \mathbf{Y})$.
Cross-covariance sesquilinear form of random vectors in a real or complex Hilbert space
More generally let $H_1 = (H_1, \langle\,\cdot,\cdot\,\rangle_1)$ and $H_2 = (H_2, \langle\,\cdot,\cdot\,\rangle_2)$ be Hilbert spaces over $\mathbb{R}$ or $\mathbb{C}$ with $\langle\,\cdot,\cdot\,\rangle$ antilinear in the first variable, and let $\mathbf{X}$, $\mathbf{Y}$ be $H_1$ resp. $H_2$ valued random variables. Then the covariance of $\mathbf{X}$ and $\mathbf{Y}$ is the sesquilinear form on $H_1 \times H_2$ (antilinear in the first variable) given by
$$\operatorname{cov}(\mathbf{X}, \mathbf{Y})(h_1, h_2) = \operatorname{E}\left[\langle h_1, \mathbf{X} - \operatorname{E}[\mathbf{X}]\rangle_1\,\langle \mathbf{Y} - \operatorname{E}[\mathbf{Y}], h_2\rangle_2\right].$$
When $\operatorname{E}[XY] \approx \operatorname{E}[X]\operatorname{E}[Y]$, the equation $\operatorname{cov}(X,Y) = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y]$ is prone to catastrophic cancellation if $\operatorname{E}[XY]$ and $\operatorname{E}[X]\operatorname{E}[Y]$ are not computed exactly, and thus should be avoided in computer programs when the data has not been centered before.[11] Numerically stable algorithms should be preferred in this case.[12]
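As an illustration (a sketch assuming NumPy and a synthetic large offset; this shows the simple centered two-pass approach, not necessarily the specific algorithms of the cited references), the naive formula loses precision when the means dominate the fluctuations:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
offset = 1.0e12                        # large mean relative to the fluctuations

x = offset + rng.normal(0.0, 1.0, size=n)
y = x + rng.normal(0.0, 1.0, size=n)   # true covariance cov(X, Y) = var(X) = 1

# Naive formula E[XY] - E[X]E[Y]: subtracts two huge, nearly equal numbers.
cov_naive = np.mean(x * y) - np.mean(x) * np.mean(y)

# Centered (two-pass) formula: subtract the means first, then average the products.
cov_centered = np.mean((x - x.mean()) * (y - y.mean()))

print(cov_naive, cov_centered)         # only the centered result is close to 1
```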
Comments
The covariance is sometimes called a measure of "linear dependence" between the two random variables. That does not mean the same thing as in the context of linear algebra (see linear dependence). When the covariance is normalized, one obtains the Pearson correlation coefficient, which gives the goodness of the fit for the best possible linear function describing the relation between the variables. In this sense covariance is a linear gauge of dependence.
Applications
In genetics and molecular biology
Covariance is an important measure in biology. Certain sequences of DNA are conserved more than others among species, and thus to study secondary and tertiary structures of proteins, or of RNA structures, sequences are compared in closely related species. If sequence changes are found, or no changes at all are found, in noncoding RNA (such as microRNA), such sequences are found to be necessary for common structural motifs, such as an RNA loop. In genetics, covariance serves as a basis for computation of the Genetic Relationship Matrix (GRM) (also known as the kinship matrix), enabling inference on population structure from samples with no known close relatives as well as estimation of the heritability of complex traits.[13]
In the theory of evolution and natural selection, the Price equation describes how a genetic trait changes in frequency over time. The equation uses a covariance between a trait and fitness to give a mathematical description of evolution and natural selection. It provides a way to understand the effects that gene transmission and natural selection have on the proportion of genes within each new generation of a population.[13][14]
In meteorological and oceanographic data assimilation
The covariance matrix is important in estimating the initial conditions required for running weather forecast models, a procedure known as data assimilation. The 'forecast error covariance matrix' is typically constructed between perturbations around a mean state (either a climatological or ensemble mean). The 'observation error covariance matrix' is constructed to represent the magnitude of combined observational errors (on the diagonal) and the correlated errors between measurements (off the diagonal). This is an example of its widespread application to Kalman filtering and more general state estimation for time-varying systems.
In micrometeorology
The eddy covariance technique is a key atmospheric measurement technique in which the covariance between the instantaneous deviation in vertical wind speed from its mean value and the instantaneous deviation in gas concentration from its mean value is the basis for calculating the vertical turbulent fluxes.
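As a rough sketch of the idea (assuming NumPy; the variable names, sampling rate, and synthetic data below are illustrative only, not a description of any particular instrument), the flux is estimated from the mean product of the two instantaneous deviations:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 20 * 60 * 30                       # e.g. 30 minutes of 20 Hz samples (illustrative)

w = rng.normal(0.0, 0.3, size=n)       # vertical wind speed (m/s), synthetic
c = 400.0 + 5.0 * w + rng.normal(0.0, 1.0, size=n)   # gas concentration, synthetic

w_prime = w - w.mean()                 # instantaneous deviation from the mean
c_prime = c - c.mean()

flux = np.mean(w_prime * c_prime)      # covariance: basis of the vertical flux estimate
print(flux)
```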
In signal processing
The covariance matrix is used to capture the spectral variability of a signal.[15]
^ Oxford Dictionary of Statistics, Oxford University Press, 2002, p. 104.
^ Park, Kun Il (2018). Fundamentals of Probability and Stochastic Processes with Applications to Communications. Springer. ISBN 978-3-319-68074-3.
^ Yuli Zhang; Huaiyu Wu; Lei Cheng (June 2012). "Some new deformation formulas about variance and covariance". Proceedings of 4th International Conference on Modelling, Identification and Control (ICMIC2012). pp. 987–992.
^ Dekking, Michel, ed. (2005). A modern introduction to probability and statistics: understanding why and how. Springer texts in statistics. London [Heidelberg]: Springer. ISBN 978-1-85233-896-1.
^ Gubner, John A. (2006). Probability and Random Processes for Electrical and Computer Engineers. Cambridge University Press. ISBN 978-0-521-86470-1.