Dimensionality reduction
Dimensionality reduction is one of the main applications of unsupervised learning. It can be understood as the process of reducing the number of random variables under consideration by obtaining a set of principal variables.[1]
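As a minimal sketch of the idea (assuming NumPy and scikit-learn are installed; the data below is random and purely illustrative), 100 samples described by 10 variables can be projected onto 3 principal variables:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))          # 100 samples, 10 features

pca = PCA(n_components=3)               # keep 3 principal components
X_reduced = pca.fit_transform(X)

print(X.shape, "->", X_reduced.shape)   # (100, 10) -> (100, 3)
print(pca.explained_variance_ratio_)    # fraction of variance per component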
Categories
Dimensionality reduction can be divided into two subcategories[2]; a short sketch contrasting them follows the list:
- Feature selection (keeps a subset of the original variables unchanged):
  - Wrappers
  - Filters
  - Embedded
- Feature extraction (derives new variables from combinations of the originals)
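The practical difference can be illustrated with a short, non-authoritative sketch (assuming scikit-learn; the Iris dataset and the choice of two output features are only for illustration): feature selection keeps some of the original columns as they are, while feature extraction builds new derived features.

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)       # (150, 4)

# Filter-style feature selection: score each original feature against
# the labels and keep the 2 best ones, unchanged.
selected = SelectKBest(f_classif, k=2).fit_transform(X, y)

# Feature extraction: project onto 2 new axes, each a linear
# combination of all 4 original features.
extracted = PCA(n_components=2).fit_transform(X)

print(selected.shape, extracted.shape)  # (150, 2) (150, 2)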
Algorithms
Some of the most common dimensionality reduction algorithms in machine learning are listed below[1], with a usage sketch after the list:
- Principal Component Analysis (PCA)
- Kernel Principal Component Analysis (Kernel PCA)
- Locally Linear Embedding (LLE)
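The following sketch applies the three algorithms to a toy nonlinear dataset (assuming scikit-learn; the Swiss-roll data and parameter values such as gamma and n_neighbors are illustrative choices, not recommendations):

from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA, KernelPCA
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1000, random_state=0)  # points on a 3-D manifold

# Linear projection onto 2 principal components.
X_pca = PCA(n_components=2).fit_transform(X)

# Kernel PCA: PCA in an implicit feature space induced by an RBF kernel,
# which lets the projection capture nonlinear structure.
X_kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.04).fit_transform(X)

# Locally Linear Embedding: preserves each point's local neighborhood
# geometry while unrolling the manifold into 2 dimensions.
X_lle = LocallyLinearEmbedding(
    n_components=2, n_neighbors=10, random_state=0
).fit_transform(X)

print(X_pca.shape, X_kpca.shape, X_lle.shape)  # (1000, 2) each

Kernel PCA and LLE both target nonlinear structure; the main design difference is that Kernel PCA works through a global kernel matrix over all samples, while LLE reconstructs each sample from its nearest neighbors.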