diff --git a/_episodes_rmd/07-hierarchical.Rmd b/_episodes_rmd/07-hierarchical.Rmd
index 1cc445cb..16f44d0e 100644
--- a/_episodes_rmd/07-hierarchical.Rmd
+++ b/_episodes_rmd/07-hierarchical.Rmd
@@ -264,7 +264,7 @@ dendrogram showing how the data is partitioned into clusters. But how do we inte
 > Use `hclust()` to implement hierarchical clustering using the
 > distance matrix `dist_m` and
 > the `complete` linkage method and plot the results as a dendrogram using
-> `plot()`.
+> `plot()`. Why are hierarchical clustering and the resulting dendrogram useful for clustering in this case?
 >
 > > ## Solution:
 > >
@@ -272,6 +272,9 @@ dendrogram showing how the data is partitioned into clusters. But how do we inte
 > > clust <- hclust(dist_m, method = "complete")
 > > plot(clust)
 > > ```
+> > Hierarchical clustering is particularly useful (compared to K-means) when we do not know the number of clusters
+> > before we perform clustering. It is useful in this case because we have assumed that we do not already know a
+> > suitable number of clusters in advance.
 > {: .solution}
{: .challenge}
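As a review note: the challenge and solution added in this patch rely on the lesson's `dist_m` distance matrix, which is built earlier in the episode. A minimal, self-contained R sketch of the same workflow is below, with a synthetic dataset standing in for the lesson's `dist_m` and an illustrative `cutree()` step that is not part of this patch:

```r
# Minimal sketch of the workflow described in the patch above.
# The synthetic data here is an assumption standing in for the
# lesson's `dist_m`, which is constructed earlier in the episode.
set.seed(42)
x <- matrix(rnorm(40), ncol = 2)   # 20 points in 2 dimensions
dist_m <- dist(x)                  # Euclidean distance matrix

# Complete-linkage hierarchical clustering, plotted as a dendrogram
clust <- hclust(dist_m, method = "complete")
plot(clust)

# Unlike K-means, the number of clusters can be chosen *after*
# inspecting the dendrogram, e.g. by cutting the tree into k groups:
groups <- cutree(clust, k = 3)
table(groups)
```

The `cutree()` step illustrates the point made in the new solution text: with hierarchical clustering, the number of clusters can be decided after inspecting the dendrogram rather than fixed up front as in K-means.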