Add short part to challenge 1, episode 7 to answer objective
Objective: "Understand when to use hierarchical clustering on high-dimensional data"
mallewellyn authored Mar 28, 2024
1 parent e9e2041 commit 22506b4
Showing 1 changed file with 4 additions and 1 deletion.
5 changes: 4 additions & 1 deletion _episodes_rmd/07-hierarchical.Rmd
@@ -264,14 +264,17 @@ dendrogram showing how the data is partitioned into clusters. But how do we inte
> Use `hclust()` to implement hierarchical clustering using the
> distance matrix `dist_m` and
> the `complete` linkage method and plot the results as a dendrogram using
> `plot()`.
> `plot()`. Why are hierarchical clustering and the resulting dendrogram useful for performing clustering in this case?
>
> > ## Solution:
> >
> > ```{r plotclustex, fig.cap=" "}
> > clust <- hclust(dist_m, method = "complete")
> > plot(clust)
> > ```
> > Hierarchical clustering is particularly useful (compared with K-means) when we do not know the number of clusters
> > before we perform clustering. That is the situation here: we have assumed we do not already know a suitable
> > number of clusters, and the dendrogram lets us choose one after inspecting the data (see the sketch after this challenge).
> {: .solution}
{: .challenge}
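
The solution's point, that hierarchical clustering does not require fixing the number of clusters in advance, can be illustrated with `cutree()`: the same dendrogram can be cut into clusters after it has been built. The sketch below is not part of this commit; it reuses `dist_m` and `clust` from the challenge, and the values `k = 3` and `h = 50` are arbitrary examples rather than values taken from the episode.

```r
# Build the dendrogram once, as in the challenge solution.
clust <- hclust(dist_m, method = "complete")

# Cut the same tree in two different ways without re-running the clustering:
groups_k <- cutree(clust, k = 3)   # assume we want 3 clusters
groups_h <- cutree(clust, h = 50)  # or cut at an assumed height of 50

# Inspect how many observations fall into each cluster.
table(groups_k)
```

With K-means, changing the number of clusters would mean re-running the algorithm; with `cutree()`, different choices of `k` or `h` simply re-cut the existing tree.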

