Dimensionality reduction

Dimensionality reduction is one of the main applications of [[unsupervised learning]]. It can be understood as the process of reducing the number of random variables under consideration by obtaining a set of principal variables.[1] High dimensionality has many costs, including redundant and irrelevant features that degrade the performance of some algorithms, difficulty in interpretation and visualization, and infeasible computation.[2]
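For illustration, the following is a minimal sketch of this idea, assuming the scikit-learn library is available (the dataset and the choice of 10 components are arbitrary illustrative values, not taken from a cited source): 64 correlated pixel features are replaced by a small set of principal variables that retain most of the variance.

<syntaxhighlight lang="python">
# Sketch: compress 64 pixel features into 10 principal components.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)      # shape (1797, 64)
pca = PCA(n_components=10).fit(X)        # learn the principal directions
X_reduced = pca.transform(X)             # shape (1797, 10)

# Fraction of the original variance retained by the 10 components.
print(pca.explained_variance_ratio_.sum())
</syntaxhighlight>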

== Categories ==

Dimensionality reduction can be divided into two subcategories[3]:
* Feature selection: keeping a subset of the original variables.
* Feature extraction: deriving a smaller set of new variables from the original ones.
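The distinction can be made concrete with a short sketch, assuming scikit-learn (the dataset and parameter values are illustrative choices): selection keeps two of the original columns, while extraction builds two new ones.

<syntaxhighlight lang="python">
# Feature selection vs. feature extraction on the same data.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)                        # shape (150, 4)

X_sel = SelectKBest(f_classif, k=2).fit_transform(X, y)  # 2 original columns kept
X_ext = PCA(n_components=2).fit_transform(X)             # 2 new columns derived
print(X_sel.shape, X_ext.shape)                          # (150, 2) (150, 2)
</syntaxhighlight>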

== Algorithms ==

Some of the most common dimensionality reduction algorithms in machine learning are listed as follows[1]:
* [[Kernel principal component analysis]] (Kernel PCA)
* [[Locally-Linear Embedding]]
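Both algorithms are available in scikit-learn; the following sketch (the toy dataset and parameter values are assumptions, not from a cited source) reduces the same data to two dimensions with each:

<syntaxhighlight lang="python">
# Nonlinear dimensionality reduction with Kernel PCA and LLE.
from sklearn.datasets import load_digits
from sklearn.decomposition import KernelPCA
from sklearn.manifold import LocallyLinearEmbedding

X, _ = load_digits(return_X_y=True)   # shape (1797, 64)

# Kernel PCA: PCA in an implicit feature space via the kernel trick.
X_kpca = KernelPCA(n_components=2, kernel="rbf").fit_transform(X)

# Locally-Linear Embedding: preserves local neighborhood geometry.
X_lle = LocallyLinearEmbedding(n_components=2, n_neighbors=10).fit_transform(X)

print(X_kpca.shape, X_lle.shape)      # (1797, 2) (1797, 2)
</syntaxhighlight>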

== Methods ==

Some common methods to perform dimensionality reduction are listed as follows<ref>[https://data-flair.training/blogs/dimensionality-reduction-tutorial/ What is Dimensionality Reduction – Techniques, Methods, Components]</ref>:

* Missing values: drop features whose fraction of missing values exceeds a chosen threshold, since heavily incomplete columns carry little usable information (the filter-style methods in this list are sketched in code below).
* Low variance: drop features whose variance is near zero, as almost-constant columns barely distinguish between samples.
* Decision trees: train a tree and keep only the features it actually uses for splits.
* Random forest: rank features by the importance scores of an ensemble of trees and keep the top-ranked subset.
* High correlation: when two features are highly correlated, drop one of them, since the pair is largely redundant.
* Backward feature elimination: starting from all features, repeatedly remove the feature whose removal hurts model performance the least.
* Factor analysis: model groups of correlated variables as arising from a smaller number of latent factors.
* Principal component analysis (PCA): project the data onto the orthogonal directions of maximum variance and keep the leading components.
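A minimal sketch of the three filter-style methods above (missing values, low variance, high correlation), assuming pandas and NumPy; the data frame, column names, and threshold values are all illustrative assumptions:

<syntaxhighlight lang="python">
import numpy as np
import pandas as pd

# Toy data: "c" is constant, "d" is nearly a copy of "a", "b" is mostly missing.
rng = np.random.default_rng(0)
df = pd.DataFrame({"a": rng.normal(size=100),
                   "b": rng.normal(size=100),
                   "c": np.ones(100)})
df["d"] = df["a"] + rng.normal(scale=0.01, size=100)
df.loc[df.index[:60], "b"] = np.nan

# Missing values: drop columns with more than 40% missing entries.
df = df[df.columns[df.isna().mean() <= 0.4]]

# Low variance: drop columns whose variance is (near) zero.
df = df[df.columns[df.var() > 1e-6]]

# High correlation: for each pair correlated above 0.95, drop one column.
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
df = df.drop(columns=[c for c in upper.columns if (upper[c] > 0.95).any()])

print(df.columns.tolist())  # columns surviving all three filters: ['a']
</syntaxhighlight>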



== References ==
<references />