Clustering in Machine Learning


Clustering is an unsupervised machine learning method that organizes the objects in unlabelled data into categories. More formally, it can be defined as a method of grouping unlabelled data based on similarity: data points that are similar to each other are placed in one group, while dissimilar points end up in different groups.

How does clustering form groups from an unlabelled dataset? It does so by finding similarities or patterns in the data, such as color or shape, and then grouping objects that share the same features together while placing the remaining objects in other groups.

After the unlabelled data has been grouped into clusters, each cluster is assigned a unique ID so that it can be identified easily when dealing with a huge dataset. Clustering is somewhat similar to classification, but clustering works on unlabelled data whereas classification works on labeled data. It is normally used for the statistical analysis of data.

We can understand clustering with a real-world example: visiting a market. Inside the market, all the items are grouped by their similarities and features, such as fish, different kinds of meat, vegetables, and different kinds of fruit. It is easy to find things when they are sorted into groups, and this is a perfect everyday example of clustering.
Clustering is applied in many different fields, some of which are:

  1. Anomaly detection
  2. Image segmentation
  3. Social network analysis
  4. Data analysis

Clustering is also used by platforms such as Amazon, YouTube, and Netflix for sorting and grouping content and showing relevant recommendations. It is also used by e-commerce companies to display similar products.

Types of Clustering

In a dataset, some objects can belong to only a single group; grouping them is called hard clustering. Other data objects may belong to more than one group, and grouping those is called soft clustering. This is the broad classification. Let us check the clustering methods used in machine learning one by one.

  1. Partitioning clustering
  2. Density-based clustering
  3. Distribution model-based clustering
  4. Hierarchical clustering
  5. Fuzzy clustering

Partitioning Clustering

Partitioning clustering is also called centroid-based clustering because each cluster is built around a center point called a centroid, and data points are grouped according to that centroid. In partitioning clustering, the data points are divided into non-hierarchical groups; K-means clustering is an example.
In this clustering method, the dataset is split into a predefined number of groups, represented as K, so there are K groups. Each cluster has a center, chosen so that the points belonging to that cluster are closer (at minimum distance) to their own center than to the centers of the other clusters. A minimal K-means sketch is given below.
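
Below is a minimal sketch of partitioning clustering using scikit-learn's KMeans on a small synthetic dataset; the three-cluster setting and the make_blobs toy data are illustrative assumptions, not part of the original example.

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# toy 2-D dataset with three well-separated groups (illustrative only)
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# the number of clusters K must be chosen in advance for partitioning clustering
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(kmeans.cluster_centers_)  # one centroid per cluster
print(kmeans.labels_[:10])      # cluster id assigned to the first ten points
```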

Density-based Clustering

Density-based clustering is a way of clustering a dataset with respect to the density of its data points, connecting the denser data points into a cluster. This method joins regions of highly dense points into a single cluster.
In this method, the algorithm finds the dense areas in the dataset and then connects the dense areas with similar features into clusters. Because clusters are formed by connecting dense areas, their shape can be arbitrary. This type of clustering is difficult to apply when the dataset has highly varying densities or many dimensions. A sketch using DBSCAN, a popular density-based algorithm, is given below.
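
As a rough illustration, the sketch below runs scikit-learn's DBSCAN on the "two moons" toy dataset; the eps and min_samples values are assumptions chosen for this synthetic data.

```python
from sklearn.datasets import make_moons
from sklearn.cluster import DBSCAN

# two interleaving half-moons: an arbitrary shape that centroid-based methods handle poorly
X, _ = make_moons(n_samples=300, noise=0.05, random_state=42)

# eps is the neighborhood radius, min_samples is the density threshold (illustrative values)
labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(X)

print(set(labels))  # cluster ids found; -1 marks points treated as noise
```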

Distribution Model-Based Clustering

This method of clustering is based on probability: the data is divided into different clusters according to the likelihood that each data point belongs to a particular distribution. The grouping is made by assuming that the data follows some distribution, typically the Gaussian distribution.
Example: Expectation-Maximization (EM) clustering, as used to fit Gaussian mixture models.
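
The following is a minimal sketch of distribution-based clustering with scikit-learn's GaussianMixture, which is fitted with the EM algorithm; the three components and the toy data are illustrative assumptions.

```python
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# toy data drawn from three groups (illustrative only)
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# fit three Gaussian components using Expectation-Maximization
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)

hard_labels = gmm.predict(X)        # most likely component for each point
soft_labels = gmm.predict_proba(X)  # probability of each point under each component
print(soft_labels[:3].round(3))
```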

Hierarchical Clustering

Hierarchical clustering is somewhat similar to partitioning clustering, so it can be used as an alternative to it. The difference is that in partitioning clustering we need to pre-define the number of clusters, whereas here there is no need for that.
In this clustering method, a hierarchical tree called a dendrogram is created. We can obtain any number of clusters from that dendrogram by cutting it at the appropriate level, as sketched below.
Example: Agglomerative Hierarchical Algorithm
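
A minimal sketch of the dendrogram idea, using SciPy's hierarchical clustering utilities; the Ward linkage and the toy data are assumptions made for illustration.

```python
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=42)  # illustrative toy data

# build the dendrogram bottom-up using Ward linkage
Z = linkage(X, method="ward")

# "cut" the same dendrogram at different levels to get different numbers of clusters
labels_3 = fcluster(Z, t=3, criterion="maxclust")
labels_5 = fcluster(Z, t=5, criterion="maxclust")
print(len(set(labels_3)), len(set(labels_5)))  # 3 and 5 clusters from one tree
```

Note that, unlike K-means, the number of clusters is chosen only when the tree is cut, not before the tree is built.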

Fuzzy Clustering Algorithm

This type of clustering performs soft clustering. In soft clustering, a data point may belong to more than one group, so it is hard to assign such points to a single cluster. Instead, each data point is given a value for every cluster, which we call its membership, and the point is associated with each cluster according to its degree of membership. A from-scratch sketch of these updates is given below.
Example: the Fuzzy C-Means algorithm (also called Fuzzy K-Means clustering).
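
Below is a from-scratch sketch of the Fuzzy C-Means update rules in NumPy; the function name fuzzy_c_means, the fuzziness parameter m = 2, and the fixed iteration count are illustrative assumptions rather than a standard library API.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Illustrative Fuzzy C-Means: returns cluster centers and the membership matrix."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # random initial memberships; each row sums to 1
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # update centers as membership-weighted means of the data
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # update memberships from distances to each center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        U = 1.0 / d ** (2 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

Because each row of the membership matrix sums to 1, a single point can, for example, belong to one cluster with membership 0.7 and to another with membership 0.3.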

Clustering Applications

Clustering has many application-level uses; some of them are given below.

  1. Medical field: Clustering is used widely in medicine. In oncology, for example, it is used to distinguish cancer cells from other cells by splitting them into cancerous and non-cancerous groups.
  2. Search engines: Clustering is widely used in search engines, grouping similar results for a search query so that the most relevant results can be shown to the user.
  3. Market research: Businesses use clustering to segment customers based on their interests and choices.
  4. Biology: Clustering is used to group plants and animals according to different criteria, such as carnivores and herbivores.

Different Clustering Algorithms

As discussed above, there are different clustering types, and for each type there are different clustering algorithms. Many clustering algorithms are available, but only a few are commonly used. The right algorithm depends on the data we need to cluster: for example, some clustering algorithms need to know the number of clusters in the dataset, while others need a measure of distance between the data points.
Let us check some of the clustering algorithms that are commonly used in machine learning.

K-Means Clustering

This is the most popular clustering algorithm used in machine learning. It categorizes a dataset into different groups based on variance, with a time complexity that is linear in the number of data points, O(n), per iteration.

For K-means clustering, the number of clusters in the dataset must be specified in advance.
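
To complement the library example in the Partitioning Clustering section, here is a rough sketch of the two steps K-means repeats until the centroids stop moving; kmeans_step is a hypothetical helper name, and the data and loop length are illustrative.

```python
import numpy as np

def kmeans_step(X, centroids):
    """One K-means iteration: assign points to the nearest centroid, then recompute centroids."""
    # assignment step: distance from every point to every centroid
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # update step: each centroid becomes the mean of the points assigned to it
    new_centroids = np.array([
        X[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
        for k in range(len(centroids))
    ])
    return labels, new_centroids

rng = np.random.default_rng(0)
X = rng.random((200, 2))                          # illustrative data
centroids = X[rng.choice(200, 3, replace=False)]  # K = 3 randomly chosen starting centroids
for _ in range(10):                               # fixed number of iterations for simplicity
    labels, centroids = kmeans_step(X, centroids)
```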

Agglomerative Hierarchical Algorithm

This is an example of hierarchical clustering that builds a tree structure. In the agglomerative algorithm, clustering proceeds in a bottom-up fashion: each data point is considered a cluster at the beginning, and clusters are then merged step by step up the tree. A short scikit-learn sketch is given below.
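
A minimal sketch using scikit-learn's AgglomerativeClustering; the choice of Ward linkage, three clusters, and the toy data are illustrative assumptions.

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=42)  # illustrative toy data

# each point starts as its own cluster; pairs of clusters are merged until 3 remain
agg = AgglomerativeClustering(n_clusters=3, linkage="ward")
labels = agg.fit_predict(X)
print(labels[:10])
```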

Mean-Shift Algorithm

The mean-shift algorithm is a good example of centroid-based clustering. It shifts candidate center points toward denser regions of the data until they settle at the centers of clusters, and the data points are then assigned to the centroid they end up closest to. A minimal sketch is given below.
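
A minimal sketch of mean-shift clustering with scikit-learn; the bandwidth estimated via estimate_bandwidth and the toy data are illustrative assumptions.

```python
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)  # illustrative toy data

# bandwidth controls the size of the window used when shifting candidate centroids
bandwidth = estimate_bandwidth(X, quantile=0.2)
ms = MeanShift(bandwidth=bandwidth).fit(X)

print(ms.cluster_centers_)   # converged centroids
print(len(set(ms.labels_)))  # number of clusters found (no K was supplied up front)
```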

Affinity Propagation Algorithm

This algorithm differs from the ones discussed above in that it does not need the number of clusters to be specified. In this algorithm, data points exchange messages between pairs until the algorithm converges. The issue with this clustering method is its complexity, which is O(N²T), where N is the number of data points and T is the number of iterations. A minimal sketch is given below.
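
A minimal sketch with scikit-learn's AffinityPropagation; the toy data and default preference settings are illustrative assumptions.

```python
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=42)  # illustrative toy data

# no cluster count is supplied; exemplar points emerge from the pairwise message passing
ap = AffinityPropagation(random_state=0).fit(X)

print(len(ap.cluster_centers_indices_))  # number of clusters discovered
print(ap.labels_[:10])                   # cluster id for each point
```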