Hierarchical clustering
Revision as of 14:20, 14 April 2020
Hierarchical clustering, also known as hierarchical cluster analysis, is an algorithm that groups similar objects into groups called clusters.[1]
Strategies
Strategies for hierarchical clustering generally fall into two types[2]:
- Divisive: a "top-down" approach in which all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy.
- Agglomerative: a "bottom-up" approach in which each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy.
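The agglomerative (bottom-up) strategy can be sketched in a few lines of pure Python. This is an illustrative toy, not a production implementation: it assumes one-dimensional points and single linkage (distance between the closest pair of members); real code would typically use a clustering library.

```python
def agglomerative(points, n_clusters):
    """Toy bottom-up clustering: merge the two closest clusters
    until only n_clusters remain (1-D points, single linkage)."""
    # Start with every observation in its own cluster.
    clusters = [[p] for p in points]

    def linkage(a, b):
        # Single linkage: distance between the closest pair of members.
        return min(abs(x - y) for x in a for y in b)

    while len(clusters) > n_clusters:
        # Find the pair of clusters with the smallest linkage distance.
        pairs = [(i, j) for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        i, j = min(pairs, key=lambda ij: linkage(clusters[ij[0]], clusters[ij[1]]))
        # Merge them; in hierarchical clustering this step is never undone.
        clusters[i] += clusters.pop(j)
    return clusters
```

For example, `agglomerative([1.0, 1.2, 5.0, 5.1, 9.0], 3)` first merges the two tightest pairs and returns the three groups `[1.0, 1.2]`, `[5.0, 5.1]`, and `[9.0]`.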
Advantages and disadvantages[3]
Advantages
- Hierarchical clustering does not require the number of clusters to be specified.
- It is easy to implement.
- Hierarchical clustering produces a dendrogram, which helps with understanding the data.
Disadvantages
- The hierarchical algorithm is greedy: it can never undo a previous step, so an early merge (or split) cannot be corrected later in the algorithm.
- The time complexity for the clustering can result in very long computation times in comparison with efficient algorithms such as K-means.
- For a large data set, it can be difficult to determine the correct number of clusters from the dendrogram.
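The trade-off above can be illustrated by "cutting" the hierarchy at a distance threshold: rather than fixing the number of clusters in advance (as K-means does), merging simply stops once the nearest clusters are farther apart than the chosen cut height. The sketch below is a hypothetical pure-Python helper under the same assumptions as before (1-D points, single linkage).

```python
def cut_by_distance(points, threshold):
    """Toy bottom-up clustering that stops merging at a distance
    threshold, so the number of clusters emerges from the cut."""
    clusters = [[p] for p in points]

    def linkage(a, b):
        # Single linkage: distance between the closest pair of members.
        return min(abs(x - y) for x in a for y in b)

    while len(clusters) > 1:
        pairs = [(i, j) for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        i, j = min(pairs, key=lambda ij: linkage(clusters[ij[0]], clusters[ij[1]]))
        # Stop once the closest clusters exceed the cut height:
        # the remaining groups are the clustering at that cut.
        if linkage(clusters[i], clusters[j]) > threshold:
            break
        clusters[i] += clusters.pop(j)
    return clusters
```

With `cut_by_distance([1.0, 1.2, 5.0, 5.1, 9.0], 1.0)`, merging stops once the tight pairs are formed, yielding three clusters without that number ever being specified.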
References
- ↑ "What is Hierarchical Clustering?", displayr.com
- ↑ "Intro to Hierarchical Clustering", Coursera
- ↑ "Hierarchical Clustering", Coursera