I have 4096-dimensional vectors ([VLAD codes][1]), each of them representing an image.
I have to run PCA on them *without reducing their dimension*, on learning datasets with fewer than 4096 images (e.g., the [Holiday dataset][2] with <2k images), to obtain the rotation matrix `A`.
In [this](http://search.ieice.org/bin/summary.php?id=e99-d_10_2656) paper (which also explains why I need to run PCA without dimensionality reduction), the authors solve the problem with this approach:
>For efficient computation of A, we compute at most the first 1,024 eigenvectors by eigendecomposition of the covariance matrix, and
>the remaining orthogonal complements up to D-dimensional are filled using Gram-Schmidt orthogonalization.
Now, how can I implement this using C++ libraries? I'm using OpenCV, but [cv::PCA](http://docs.opencv.org/trunk/d3/d8d/classcv_1_1PCA.html) does not seem to offer such a strategy out of the box. Is there any way to do this?
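To make the question concrete, this is roughly what I imagine the approach looks like (an untested sketch, not the paper's actual code; `buildFullRotation`, the tolerance, and the assumption that the data is a CV_32F matrix with one descriptor per row are my own):

    #include <opencv2/core.hpp>

    // Untested sketch. `data` holds one 4096-dim descriptor per row
    // (N x D, CV_32F, N < D). Compute at most `maxEig` eigenvectors of
    // the covariance matrix with cv::PCA, then fill the remaining
    // D - maxEig rows with random vectors orthogonalized against
    // everything found so far (classical Gram-Schmidt).
    cv::Mat buildFullRotation(const cv::Mat& data, int maxEig = 1024)
    {
        CV_Assert(data.type() == CV_32F);
        const int D = data.cols;

        // Eigendecomposition of the covariance matrix; OpenCV caps the
        // number of retained components at maxEig.
        cv::PCA pca(data, cv::noArray(), cv::PCA::DATA_AS_ROW, maxEig);

        cv::Mat A = cv::Mat::zeros(D, D, CV_32F);
        pca.eigenvectors.copyTo(A.rowRange(0, pca.eigenvectors.rows));

        int filled = pca.eigenvectors.rows;
        while (filled < D)
        {
            // Random direction, then project out all rows found so far.
            cv::Mat v(1, D, CV_32F);
            cv::randu(v, cv::Scalar::all(-1), cv::Scalar::all(1));
            for (int i = 0; i < filled; ++i)
                v -= A.row(i).dot(v) * A.row(i);

            // Keep the residual only if it is numerically nonzero.
            double n = cv::norm(v);
            if (n > 1e-6)
            {
                v /= n;
                v.copyTo(A.row(filled++));
            }
        }
        return A; // rows form an orthonormal basis of R^D
    }

I suspect a single pass of classical Gram-Schmidt loses orthogonality in float precision once hundreds of rows are involved, so running the projection loop a second time on each candidate (reorthogonalization), or working in CV_64F, would presumably be safer. Is there a more direct or more robust way to do this?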
[1]: http://www.vlfeat.org/overview/encodings.html
[2]: http://lear.inrialpes.fr/people/jegou/data.php