In
statistical theory, the field of high-dimensional statistics studies data whose
dimension is larger (relative to the number of datapoints) than typically considered in classical
multivariate analysis. The area arose owing to the emergence of many modern data sets in which the dimension of the data vectors may be comparable to, or even larger than, the
sample size, so that justification for the use of traditional techniques, often based on asymptotic arguments with the dimension held fixed as the sample size increased, was lacking.[1][2]
There are several notions of high-dimensional analysis of statistical methods, including:
Non-asymptotic results, which hold for finite $n$ and $p$ (the number of data points and the dimension, respectively).
Kolmogorov asymptotics, which studies the asymptotic behaviour in which the ratio $n/p$ converges to a specific finite value.[3]
Examples
The most basic statistical model for the relationship between a covariate vector $x \in \mathbb{R}^p$ and a response variable $y \in \mathbb{R}$ is the linear model

$$y = x^\top \beta + \epsilon,$$

where $\beta \in \mathbb{R}^p$ is an unknown parameter vector, and $\epsilon$ is random noise with mean zero and variance $\sigma^2$. Given independent responses $Y_1, \ldots, Y_n$, with corresponding covariates $x_1, \ldots, x_n$, from this model, we can form the response vector $Y = (Y_1, \ldots, Y_n)^\top$ and design matrix $X = (x_1, \ldots, x_n)^\top \in \mathbb{R}^{n \times p}$. When $n \geq p$ and the design matrix has full column rank (i.e. its columns are linearly independent), the ordinary least squares estimator of $\beta$ is

$$\hat{\beta} := (X^\top X)^{-1} X^\top Y.$$

However, overfitting is a concern when $p$ is of comparable magnitude to $n$: the matrix $X^\top X$ in the definition of $\hat{\beta}$ may become ill-conditioned, with a small minimum eigenvalue. In such circumstances $\mathbb{E}\bigl(\|\hat{\beta} - \beta\|_2^2\bigr) = \sigma^2 \operatorname{tr}\bigl((X^\top X)^{-1}\bigr)$ will be large (since the trace of a matrix is the sum of its eigenvalues, and small eigenvalues of $X^\top X$ become large eigenvalues of its inverse). Even worse, when $p > n$, the matrix $X^\top X$ is singular, so the ordinary least squares estimator is not even defined. (See Section 1.2 and Exercise 1.2 in [1].)
The deterioration in estimation performance in high dimensions observed in the previous paragraph is not limited to the ordinary least squares estimator. In fact, statistical inference in high dimensions is intrinsically hard, a phenomenon known as the
curse of dimensionality, and it can be shown that no estimator can do better in a worst-case sense without additional information (see Example 15.10 of [2]). Nevertheless, the situation in high-dimensional statistics may not be hopeless when the data possess some low-dimensional structure. One common assumption for high-dimensional linear regression is that the vector of regression coefficients is
sparse, in the sense that most coordinates of $\beta$ are zero. Many statistical procedures, including the
Lasso, have been proposed to fit high-dimensional linear models under such sparsity assumptions.
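As a rough illustration of such a procedure (a sketch, not a recommendation: the dimensions, sparsity level, and regularisation parameter below are arbitrary choices), the Lasso implementation in scikit-learn can recover a sparse coefficient vector even when $p > n$:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 80, 200, 5                       # p > n, with only s non-zero coefficients

X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 3.0                             # sparse truth: first s coordinates are non-zero
y = X @ beta + 0.5 * rng.standard_normal(n)

model = Lasso(alpha=0.2).fit(X, y)         # alpha fixed arbitrarily; in practice it is tuned, e.g. by cross-validation
support = np.flatnonzero(model.coef_)
print("estimated non-zero coordinates:", support)
```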
Another example of a high-dimensional phenomenon arises in the estimation of a covariance matrix. Suppose that we observe $X_1, \ldots, X_n \in \mathbb{R}^p$, drawn independently from a distribution with mean zero and unknown covariance matrix $\Sigma$, and form the sample covariance matrix

$$\hat{\Sigma} := \frac{1}{n} \sum_{i=1}^n X_i X_i^\top.$$

In the low-dimensional setting where $n$ increases and $p$ is held fixed, $\hat{\Sigma}$ is a consistent estimator of $\Sigma$ in any matrix norm. When $p$ grows with $n$, on the other hand, this consistency result may fail to hold. As an illustration, suppose that each $X_i \sim N_p(0, I)$ and that $p/n \to \alpha \in (0, 1)$. If $\hat{\Sigma}$ were to consistently estimate $\Sigma = I$, then the eigenvalues of $\hat{\Sigma}$ should approach one as $n$ increases. It turns out that this is not the case in this high-dimensional setting. Indeed, the largest and smallest eigenvalues of $\hat{\Sigma}$ concentrate around $(1 + \sqrt{\alpha})^2$ and $(1 - \sqrt{\alpha})^2$, respectively, according to the limiting distribution derived by Tracy and Widom, and these clearly deviate from the unit eigenvalues of $\Sigma$. Further information on the asymptotic behaviour of the eigenvalues of $\hat{\Sigma}$ can be obtained from the Marchenko–Pastur law. From a non-asymptotic point of view, the maximum eigenvalue $\lambda_{\max}(\hat{\Sigma})$ satisfies

$$\lambda_{\max}(\hat{\Sigma}) \leq \bigl(1 + \sqrt{p/n} + \delta\bigr)^2$$

for any $\delta \geq 0$, with probability at least $1 - e^{-n\delta^2/2}$.[2]
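The eigenvalue behaviour just described can be reproduced with a short simulation; the NumPy sketch below uses $\alpha = p/n = 0.5$ (an arbitrary illustrative value) and compares the extreme eigenvalues of $\hat{\Sigma}$ with $(1 \pm \sqrt{\alpha})^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 1000                          # alpha = p / n = 0.5
alpha = p / n

X = rng.standard_normal((n, p))            # rows are i.i.d. N_p(0, I) observations
Sigma_hat = X.T @ X / n                    # sample covariance matrix
eigvals = np.linalg.eigvalsh(Sigma_hat)    # eigenvalues in ascending order

print(f"largest eigenvalue : {eigvals[-1]:.3f}  vs  (1 + sqrt(alpha))^2 = {(1 + alpha ** 0.5) ** 2:.3f}")
print(f"smallest eigenvalue: {eigvals[0]:.3f}  vs  (1 - sqrt(alpha))^2 = {(1 - alpha ** 0.5) ** 2:.3f}")
```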
Again, additional low-dimensional structure is needed for successful covariance matrix estimation in high dimensions. Examples of such structures include
sparsity,
low rankness and
bandedness. Similar remarks apply when estimating an inverse covariance matrix
(precision matrix).
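As a concrete instance of exploiting such structure, a banded covariance estimator keeps only the entries of the sample covariance matrix lying within a fixed distance of the diagonal. The sketch below is a minimal illustration (the bandwidth `k` and the data dimensions are arbitrary; in practice the bandwidth is typically chosen by cross-validation):

```python
import numpy as np

def banded_covariance(X, k):
    """Sample covariance with all entries further than k from the diagonal set to zero."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)                 # centre the observations
    Sigma_hat = Xc.T @ Xc / n               # sample covariance matrix
    i, j = np.indices((p, p))
    return np.where(np.abs(i - j) <= k, Sigma_hat, 0.0)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))          # n = 50 observations in p = 200 dimensions
Sigma_banded = banded_covariance(X, k=2)
```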
History
From an applied perspective, research in high-dimensional statistics was motivated by the realisation that advances in computing technology had dramatically increased the ability to collect and store
data, and that traditional statistical techniques such as those described in the examples above were often ill-equipped to handle the resulting challenges. Theoretical advances in the area can be traced back to the remarkable result of
Charles Stein in 1956,[4] where he proved that the usual estimator of a multivariate normal mean was
inadmissible with respect to squared error loss in three or more dimensions. Indeed, the
James-Stein estimator[5] provided the insight that in high-dimensional settings, one may obtain improved estimation performance through shrinkage, which reduces variance at the expense of introducing a small amount of bias. This
bias-variance tradeoff was further exploited in the context of high-dimensional
linear models by Hoerl and Kennard in 1970 with the introduction of
ridge regression.[6] Another major impetus for the field was provided by
Robert Tibshirani's work on the
Lasso in 1996, which used $\ell_1$ regularisation to achieve simultaneous model selection and parameter estimation in high-dimensional sparse linear regression.[7] Since then, a large number of other
shrinkage estimators have been proposed to exploit different low-dimensional structures in a wide range of high-dimensional statistical problems.
Topics in high-dimensional statistics
The following are examples of topics that have received considerable attention in the high-dimensional statistics literature in recent years:
Linear models in high dimensions. Linear models are one of the most widely used tools in statistics and its applications. As such, sparse linear regression is one of the most well-studied topics in high-dimensional statistical research. Building upon earlier works on
ridge regression and the
Lasso, several other
shrinkage estimators have been proposed and studied in this and related problems. They include
The Dantzig selector, which minimises the maximum covariate–residual correlation, instead of the residual sum of squares as in the Lasso, subject to an $\ell_1$ constraint on the coefficients.[8]
Elastic net, which combines the $\ell_1$ regularisation of the
Lasso with the $\ell_2$ regularisation of
ridge regression to allow highly correlated covariates to be simultaneously selected with similar regression coefficients.[9]
The
Group Lasso, which allows predefined groups of covariates to be selected jointly.[10]
The
Fused lasso, which regularises the difference between nearby coefficients when the regression coefficients reflect spatial or temporal relationships, so as to enforce a piecewise constant structure.[11]
High-dimensional variable selection. In addition to estimating the underlying parameter in regression models, another important topic is identifying the non-zero coefficients, as these correspond to variables that are needed in a final model. Each of the techniques listed under the previous heading can be used for this purpose, and is sometimes combined with ideas such as
subsampling through Stability Selection.[12][13]
High-dimensional covariance and precision matrix estimation. These problems were introduced above; see also
shrinkage estimation. Methods include tapering estimators[14] and the constrained $\ell_1$-minimisation estimator.[15]
Sparse principal component analysis.
Principal Component Analysis is another technique that breaks down in high dimensions; more precisely, under appropriate conditions, the leading eigenvector of the sample covariance matrix is an inconsistent estimator of its population counterpart when the ratio of the number of variables to the number of observations is bounded away from zero.[16] Under the assumption that this leading eigenvector is sparse (which can aid interpretability), consistency can be restored.[17]
Matrix completion. This topic, which concerns the task of filling in the missing entries of a partially observed matrix, became popular owing in large part to the
Netflix prize for predicting user ratings for films.
Graphical models for high-dimensional data. Graphical models are used to encode the conditional dependence structure between different variables. Under a Gaussianity assumption, the problem reduces to that of estimating a sparse precision matrix, discussed above.
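For example, under the Gaussianity assumption just mentioned, a sparse precision matrix can be estimated with the graphical lasso; the sketch below uses the scikit-learn implementation on synthetic data (the regularisation parameter `alpha` is an arbitrary illustrative choice):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))           # n = 200 observations of p = 20 variables

model = GraphicalLasso(alpha=0.1).fit(X)     # l1-penalised Gaussian maximum likelihood
precision = model.precision_                 # estimated (sparse) precision matrix

# Non-zero off-diagonal entries correspond to edges of the estimated graphical model.
edges = np.argwhere(np.triu(np.abs(precision) > 1e-8, k=1))
print(f"number of estimated edges: {len(edges)}")
```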
References
[6] Hoerl, Arthur E.; Kennard, Robert W. (1970). "Ridge Regression: Biased Estimation for Nonorthogonal Problems". Technometrics. 12 (1): 55–67. JSTOR 1267351.
[7] Tibshirani, Robert (1996). "Regression Shrinkage and Selection via the Lasso". Journal of the Royal Statistical Society, Series B (Methodological). 58 (1): 267–288. JSTOR 2346178.
[11] Tibshirani, Robert; Saunders, Michael; Rosset, Saharon; Zhu, Ji; Knight, Keith (2005). "Sparsity and Smoothness via the Fused Lasso". Journal of the Royal Statistical Society, Series B (Statistical Methodology). 67 (1): 91–108. JSTOR 3647602.