Just finished the last course of Mathematics for Machine Learning by Imperial College London.
This course was about Principal Component Analysis, with the main topics being:
- Summary statistics of data: the mean (a measure of center) and the variance (a measure of spread).
- How to calculate inner products in 1D, and their generalization to multiple dimensions.
- Orthogonal projections onto 1D subspaces, and their generalization to multidimensional subspaces.
- PCA: how to project high-dimensional data onto a lower-dimensional subspace while losing as little information as possible. The idea behind PCA is to compress data. The steps are:
  - Normalize the data.
  - Compute the covariance matrix.
  - Compute the eigenvectors and eigenvalues; the eigenvectors with the largest eigenvalues are the directions that maximize the retained variance.
  - Project the data onto these principal components.
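As a quick illustration of the projection idea, here is a minimal sketch (my own, not from the course materials) of orthogonally projecting a vector onto a subspace spanned by the columns of a matrix `B`, using the standard formula pi(x) = B (BᵀB)⁻¹ Bᵀ x:

```python
import numpy as np

def project(x, B):
    # Orthogonal projection of x onto the column space of B:
    # solve (B^T B) lambda = B^T x, then map back with B
    return B @ np.linalg.solve(B.T @ B, B.T @ x)

x = np.array([1.0, 2.0, 3.0])
B = np.array([[1.0], [0.0], [0.0]])  # 1D subspace: the first coordinate axis
print(project(x, B))  # [1. 0. 0.]
```

In the 1D case this reduces to the familiar (bᵀx / bᵀb) b, and the same code handles multidimensional subspaces unchanged.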
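The PCA steps above can be sketched in a few lines of NumPy (a hypothetical `pca` helper of my own, not code from the course):

```python
import numpy as np

def pca(X, n_components):
    # Normalize: center each feature at zero mean
    X_centered = X - X.mean(axis=0)
    # Compute the covariance matrix of the features
    cov = np.cov(X_centered, rowvar=False)
    # Eigendecomposition; eigh is appropriate since the covariance matrix is symmetric
    eigenvalues, eigenvectors = np.linalg.eigh(cov)
    # Keep the eigenvectors with the largest eigenvalues (directions of maximum variance)
    order = np.argsort(eigenvalues)[::-1]
    components = eigenvectors[:, order[:n_components]]
    # Project the centered data onto the principal subspace
    return X_centered @ components

X = np.random.default_rng(0).normal(size=(100, 5))
Z = pca(X, 2)
print(Z.shape)  # (100, 2)
```

Projecting back with the same components gives the compressed reconstruction, which is where the "compression" intuition comes from.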
This last section of the course was really hard and challenging, but it showed me the importance of understanding the foundations of PCA and its applications.