pcaFS

pcaFS performs Principal Component Analysis (PCA) on raw data.

Syntax

out = pcaFS(Y)

out = pcaFS(Y, Name, Value)

Description

The main differences with respect to the MATLAB function pca are:

1) it accepts the input Y also as a table;

2) it produces, in table format, the single and cumulative percentages of variance explained by the various components, together with the associated scree plot, in order to help decide how many components to retain;

3) it returns the loadings in table format and shows them graphically;

4) it provides guidelines about the automatic choice of the number of components;

5) it returns the communalities of each variable with respect to the first k principal components in table format;

6) it returns the orthogonal distance ($OD_i$) of each observation from the PCA subspace. For example, if the subspace is defined by the first two principal components, $OD_i$ is computed as:

\[ OD_i=|| z_i- V_{(2)} V_{(2)}' z_i || \]

where $z_i$ is the $i$-th row of the centered data matrix $Z$ of dimension $n \times p$ and $V_{(2)}=(v_1 \; v_2)$ is the matrix of size $p \times 2$ containing the first two eigenvectors of $Z'Z/(n-1)$. Observations with a large $OD_i$ are not well represented in the space of the principal components.

7) it returns the score distance ($SD_i$) of each observation. For example, if the subspace is defined by the first two principal components, $SD_i$ is computed as:

\[ SD_i=\sqrt{(z_i'v_1)^2/l_1+ (z_i'v_2)^2/l_2 } \]

where $l_1$ and $l_2$ are the first two eigenvalues of $Z'Z/(n-1)$ (a sketch computing both $OD_i$ and $SD_i$ follows this list).

8) it calls the app biplotFS, which enables the user to obtain an interactive biplot in which points, row labels or arrows can be shown or hidden. The app also makes it possible to control the length of the arrows and the position of the row points through two interactive slider bars, and to color the row points depending on the orthogonal distance ($OD_i$) of each observation from the PCA subspace. If the optional input argument bsb or bdp is specified, the app shows two additional tabs which enable the user to select the breakdown point of the analysis or the subset size to use in the svd. The units declared as outliers, or the units outside the subset, are shown in the biplot with filled circles.
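As a rough illustration of the definitions in points 6) and 7), here is a minimal sketch that computes $OD_i$ and $SD_i$ for a rank-2 subspace in plain MATLAB, without calling pcaFS; all variable names are illustrative and are not part of the function.

    X = randn(100,5);                 % toy data: n=100 observations, p=5 variables
    Z = X - mean(X);                  % centered data matrix Z
    n = size(Z,1);
    [~,S,V] = svd(Z,'econ');          % columns of V = eigenvectors of Z'Z/(n-1)
    l = diag(S).^2/(n-1);             % eigenvalues l_1 >= l_2 >= ...
    V2 = V(:,1:2);                    % V_{(2)}: first two eigenvectors
    T2 = Z*V2;                        % scores on the first two PCs
    OD = sqrt(sum((Z - T2*V2').^2,2));               % orthogonal distances OD_i
    SD = sqrt(T2(:,1).^2/l(1) + T2(:,2).^2/l(2));    % score distances SD_i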


Examples


  Use of pcaFS with the citiesItaly dataset.

    load citiesItaly;
    % Use all default options
    out=pcaFS(citiesItaly);

  Use of pcaFS on the ingredients dataset.

    load hald
    % Operate on the covariance matrix.
    out=pcaFS(ingredients,'standardize',false,'biplot',0);

    The call prints the following output in the command window:
    The first PC already explains more than 0.95^v variability
    In what follows we still extract the first 2 PCs
    Initial covariance matrix
               Y1       Y2       Y3       Y4  
              _____    _____    _____    _____
    
        Y1     1.00     0.23    -0.82    -0.25
        Y2     0.23     1.00    -0.14    -0.97
        Y3    -0.82    -0.14     1.00     0.03
        Y4    -0.25    -0.97     0.03     1.00
    
    Explained variance by PCs
               Eigenvalues    Explained_Variance    Explained_Variance_cum
               ___________    __________________    ______________________
    
        PC1      517.80             86.60                    86.60        
        PC2       67.50             11.29                    97.89        
        PC3       12.41              2.07                    99.96        
        PC4        0.24              0.04                   100.00        
    
    Loadings = correlations between variables and PCs
               PC1       PC2 
              ______    _____
    
        Y1      1.54     5.31
        Y2     15.44     0.16
        Y3     -0.66    -6.21
        Y4    -16.63     0.89
    
    Communalities
               PC1       PC2     PC1-PC2
              ______    _____    _______
    
        Y1      2.38    28.17     30.55 
        Y2    238.39     0.03    238.41 
        Y3      0.44    38.51     38.94 
        Y4    276.60     0.79    277.39 
    
    Units with the 5 largest values of (combined) score and orthogonal distance
        10     1     7     8    11
    
    

    Input Arguments


    Y — Input data. 2D array or table.

    n x p data matrix; n observations and p variables. Rows of Y represent observations, and columns represent variables.

    Missing values (NaNs) and infinite values (Infs) are allowed: observations (rows) containing missing or infinite values are automatically excluded from the computations.

    Data Types: single | double

    Name-Value Pair Arguments

    Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

    Example: 'bsb',[2 10:90 93] , 'bdp',0.4 , 'standardize',false , 'plots',0 , 'biplot',0 , 'dispresults',false , 'NumComponents',2

    bsb — units forming the subset on which to perform PCA. Vector.

    Vector containing the list of the units to use to compute the svd. The other units are projected into the space of the first two PCs. bsb can be either a numeric vector of length m (m<=n) containing the list of the units (e.g. 1:50), or a logical vector of length n containing true for the units which have to be used in the calculation of the svd. For example, bsb=true(n,1); bsb(13)=false; excludes unit number 13 from the svd.

    Note that if bsb is supplied, bdp must be empty.

    Example: 'bsb',[2 10:90 93]

    Data Types: double or logical
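    A hedged usage sketch of the logical form of bsb, assuming the citiesItaly dataset shipped with FSDA is available:

        load citiesItaly;
        n = size(citiesItaly,1);
        bsb = true(n,1);
        bsb(13) = false;                    % exclude unit 13 from the svd
        out = pcaFS(citiesItaly,'bsb',bsb);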

    bdp — breakdown point. Scalar.

    It measures the fraction of outliers the algorithm should resist. Any value greater than 0 and smaller than or equal to 0.5 is admissible. Note that if bdp is supplied, bsb must be empty.

    Example: 'bdp',0.4

    Data Types: double
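    A hedged usage sketch, again assuming the citiesItaly dataset; here the analysis resists up to 25 per cent of outliers:

        load citiesItaly;
        out = pcaFS(citiesItaly,'bdp',0.25);   % bsb is left empty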

    standardize — standardize data. Boolean.

    Boolean which specifies whether to standardize the variables, that is, to operate on the correlation matrix (default), or simply to remove the column means (in this last case the analysis is based on the covariance matrix).

    Example: 'standardize',false

    Data Types: boolean
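    A minimal sketch of the two options, using the hald data shipped with MATLAB (plots and the biplot app are switched off only to keep the sketch silent):

        load hald
        out1 = pcaFS(ingredients,'plots',0,'biplot',0);                      % correlation matrix (default)
        out2 = pcaFS(ingredients,'standardize',false,'plots',0,'biplot',0);  % covariance matrix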

    plots — plots on the screen. Scalar.

    If plots is 1 (default), the scree plot of the explained variance and the plot of the loadings for the first two PCs are shown on the screen.

    Example: 'plots',0

    Data Types: double

    biplot — launch app biplotFS. Scalar.

    If biplot is 1 (default), the app biplotFS is automatically launched. With this app it is possible to show in a dynamic way the row points (PC coordinates), the arrows and the row labels, and to control with a slider bar the length of the arrows and the spread of the row points.

    Example: 'biplot',0

    Data Types: double

    dispresults — show the results in the command window. Boolean.

    If dispresults is true, the percentage of variance explained, together with the loadings, the criteria for deciding the number of components to retain, and the 5 units with the largest (combined) score and orthogonal distance are shown in the command window.

    Example: 'dispresults',false

    Data Types: boolean

    NumComponents — the number of components desired. Scalar.

    Specified as a scalar integer $k$ satisfying $0 < k \leq p$. When specified, pcaFS returns the first $k$ columns of out.coeff and out.score. If NumComponents is not specified, pcaFS returns the minimum number of components whose cumulative explained variance is at least $0.95^p$ (for example, with $p=4$ the threshold is $0.95^4 \approx 81.45$ per cent; in the hald example above PC1 alone already exceeds it). If this threshold is already exceeded by the first PC, pcaFS still returns the first two PCs.

    Example: 'NumComponents',2

    Data Types: double
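    A hedged usage sketch retaining the first 3 principal components of the hald data:

        load hald
        out = pcaFS(ingredients,'NumComponents',3,'plots',0,'biplot',0);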

    Output Arguments


    out — Output structure.

    Structure which contains the following fields:
    Rtable

    p-by-p correlation matrix in table format.

    explained

    $p \times 3$ matrix whose columns contain, respectively:
    1st col = eigenvalues;
    2nd col = explained variance (in percentage);
    3rd col = cumulative explained variance (in percentage).

    explainedT

    the same as out.explained but in table format.

    coeff

    p-by-NumComponents matrix containing the ordered eigenvectors of the correlation (or covariance) matrix. The first column refers to the first eigenvector, and so on.

    Note that out.coeff'*out.coeff = I_NumComponents.

    coeffT

    the same as out.coeff but in table format.

    loadings

    p-by-NumComponents matrix containing the correlation coefficients between the original variables and the first NumComponents principal components.

    loadingsT

    the same as out.loadings but in table format.

    score

    the principal component scores. The rows of out.score correspond to observations, the columns to components. The covariance matrix of out.score is $\Lambda$, the diagonal matrix containing the eigenvalues of the correlation (or covariance) matrix.

    scoreT

    the same as out.score but in table format.

    communalities

    p-by-(2*NumComponents-1) matrix.

    The first NumComponents columns contain the communalities (variance extracted) by each of the first NumComponents principal components taken individually. Column NumComponents+1 contains the communalities cumulatively extracted by the first two principal components, column NumComponents+2 those extracted by the first three principal components, and so on.

    communalitiesT

    the same as out.communalities but in table format.

    orthDist

    orthogonal distance from PCA subspace.

    Column vector of length n containing the orthogonal distance of each observation from the PCA subspace.

    scoreDist

    score distance from centroid.

    Column vector of length n containing the score distance of each observation from the centroid, measured within the PCA subspace.

    The joint analysis of out.orthDist and out.scoreDist reveals the good leverage points, the orthogonal outliers and the bad leverage points.

    Good leverage points lie close to the PCA subspace but far from the regular observations: they have a large score distance and a low orthogonal distance. Orthogonal outliers have a large orthogonal distance from the PCA subspace but cannot be seen when we look only at their projection on the subspace: they have a low score distance and a large orthogonal distance. Bad leverage points have a large orthogonal distance and a projection on the PCA subspace that is remote from the typical projections: they have both a large score distance and a large orthogonal distance.
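    A hedged classification sketch based on these two fields; the cutoffs cutSD and cutOD are illustrative assumptions and are not exposed by pcaFS:

        cutSD = quantile(out.scoreDist,0.975);   % illustrative cutoff, an assumption
        cutOD = quantile(out.orthDist,0.975);    % illustrative cutoff, an assumption
        goodLeverage = out.scoreDist >  cutSD & out.orthDist <= cutOD;
        orthOutliers = out.scoreDist <= cutSD & out.orthDist >  cutOD;
        badLeverage  = out.scoreDist >  cutSD & out.orthDist >  cutOD;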


    This page has been automatically generated by our routine publishFS