σ-PCA: a building block for identifiable representations

Linear principal component analysis (PCA) learns orthogonal transformations that orient the axes to maximise variance, but it suffers from a subspace rotational indeterminacy: it cannot identify a unique orthogonal transformation (rotation) for axes that share the same variance, and so it fails to disentangle them.
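As a rough illustration (a toy NumPy sketch of our own choosing, not code from the paper; the Laplace sources, mixing matrix, and rotation angle below are arbitrary), the following shows that when two components share the same variance, PCA recovers their subspace but any rotation within that subspace fits the data equally well.

import numpy as np

rng = np.random.default_rng(0)

# Three independent sources: the first two share the same variance.
s = rng.laplace(size=(20000, 3))
s = (s - s.mean(axis=0)) / s.std(axis=0)
s *= np.array([1.0, 1.0, 0.3])          # standard deviations: equal, equal, smaller

# Mix the sources with a random orthogonal matrix (a rotation of the axes).
A, _ = np.linalg.qr(rng.normal(size=(3, 3)))
x = s @ A.T

# PCA via the eigendecomposition of the covariance matrix.
cov = np.cov(x, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
print("leading eigenvalues:", eigvals[order][:2])   # nearly equal: indeterminate

# Rotating the two equal-variance axes by any angle leaves the reconstruction
# error unchanged: PCA identifies the subspace, but not the individual axes.
W = eigvecs[:, order[:2]]
phi = np.pi / 5
Q = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])
for B in (W, W @ Q):
    err = np.mean(np.sum((x - x @ B @ B.T) ** 2, axis=1))
    print("reconstruction error:", round(err, 4))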

In our paper, we propose a method, which we call σ-PCA, that can eliminate the subspace rotational indeterminacy from linear PCA.

σ-PCA is a modification of conventional nonlinear PCA, which is itself a special case of linear independent component analysis (ICA).

We also delve deeper into the relationship between linear PCA, nonlinear PCA, and linear ICA — three methods with single-layer autoencoder formulations for learning special linear transformations from data.
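To make the autoencoder view concrete, here is a rough PyTorch sketch of the standard formulations (our illustrative reading, not the paper's code and not the σ-PCA objective itself): linear PCA minimises the reconstruction error ||x - W Wᵀx||² over a weight matrix W with orthonormal columns, while conventional nonlinear PCA inserts a nonlinearity h and minimises ||x - W h(Wᵀx)||². The choice of tanh, the optimiser, and the re-orthonormalisation step below are assumptions made for the sketch.

import torch

def reconstruction_loss(W, x, h=None):
    # Single-layer autoencoder with tied weights: encode y = x W, decode x_hat = y W^T.
    y = x @ W
    if h is not None:
        y = h(y)                        # nonlinear PCA applies a nonlinearity h
    return ((x - y @ W.T) ** 2).sum(dim=1).mean()

def orthonormalise(W):
    # Project W onto the nearest matrix with orthonormal columns (via SVD).
    u, _, vt = torch.linalg.svd(W, full_matrices=False)
    return u @ vt

torch.manual_seed(0)
x = torch.distributions.Laplace(0.0, 1.0).sample((5000, 4))
x = x - x.mean(dim=0)

W = torch.nn.Parameter(orthonormalise(torch.randn(4, 2)))
optimiser = torch.optim.Adam([W], lr=1e-2)

for _ in range(500):
    optimiser.zero_grad()
    loss = reconstruction_loss(W, x, h=torch.tanh)   # nonlinear PCA objective
    loss.backward()
    optimiser.step()
    with torch.no_grad():
        W.copy_(orthonormalise(W))                   # keep the columns orthonormal

print("nonlinear PCA loss:", reconstruction_loss(W, x, h=torch.tanh).item())
print("linear PCA loss:   ", reconstruction_loss(W, x).item())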

@misc{kanavati2023pca,
    title={$σ$-PCA: a unified neural model for linear and nonlinear principal component analysis},
    author={Fahdi Kanavati and Lucy Katsnith and Masayuki Tsuneki},
    year={2023},
    eprint={2311.13580},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}