
Scikit-Learn's `DBSCAN` Clustering: A Complete Tutorial

Last updated: December 17, 2024

Clustering is a pivotal concept in machine learning, where the aim is to group a set of objects in such a way that objects in the same group are more similar to each other than to those in other groups. One powerful tool for clustering with a focus on detecting anomalies or discovering interesting structures is the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm, which is available in the Scikit-Learn library.

Understanding DBSCAN

DBSCAN groups points that lie in dense regions. It works by identifying core samples (points with enough neighbors within a given radius) and expanding clusters outward from them. Because clusters grow by density rather than by distance to a centroid, DBSCAN can detect clusters of arbitrary shape and explicitly labels outliers as noise, which sets it apart from algorithms such as k-means.

Key Parameters of DBSCAN

  • eps: The maximum distance between two samples for them to be considered as in the same neighborhood.
  • min_samples: The number of samples (or total weight) in a neighborhood for a point to be considered as a core point. This includes the point itself.
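To make these parameters concrete, here is a minimal sketch (the toy data and parameter values are illustrative assumptions, not from the tutorial) showing how a fitted DBSCAN exposes its decisions: core samples appear in core_sample_indices_, clustered points get non-negative labels, and noise is labeled -1:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy data: two tight groups plus one isolated point (illustrative values)
X = np.array([[1.0], [1.1], [1.2], [5.0], [5.1], [5.2], [10.0]])

db = DBSCAN(eps=0.5, min_samples=3).fit(X)

print(db.labels_)               # noise points are labeled -1
print(db.core_sample_indices_)  # indices of the core samples
```

Here the two tight groups become clusters 0 and 1, while the isolated point has no eps-neighborhood with min_samples points and is labeled -1.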

Equipped with these parameters, let's dive into using Scikit-Learn to apply DBSCAN clustering on a dataset.

Installing Scikit-Learn

Before diving into the code, ensure you have the scikit-learn library installed. Use pip to install it:

pip install scikit-learn

DBSCAN with Scikit-Learn: A Practical Example

Let's apply DBSCAN on a sample dataset to see how we can discover clusters:

from sklearn.datasets import make_moons
from sklearn.cluster import DBSCAN
import matplotlib.pyplot as plt

# Generate sample data
X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# Apply DBSCAN
dbscan = DBSCAN(eps=0.2, min_samples=5)
clusters = dbscan.fit_predict(X)

# Visualize the results
plt.scatter(X[:, 0], X[:, 1], c=clusters, cmap='rainbow')
plt.title('DBSCAN Clustering')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.show()

In the code above, we first create a synthetic dataset with make_moons, producing two interleaving half circles: a classic case where DBSCAN can outperform k-means because the clusters are non-convex and cannot be separated by distance to a centroid. We then fit DBSCAN and color each point by its assigned cluster label.
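That claim can be checked quantitatively. The sketch below (a comparison of my own, not part of the original example) scores both DBSCAN and k-means against the true moon labels using the adjusted Rand index, where 1.0 means a perfect match:

```python
from sklearn.datasets import make_moons
from sklearn.cluster import DBSCAN, KMeans
from sklearn.metrics import adjusted_rand_score

# Same synthetic data as the example, but keep the true labels y
X, y = make_moons(n_samples=300, noise=0.05, random_state=0)

db_labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

print("DBSCAN ARI:", adjusted_rand_score(y, db_labels))
print("KMeans ARI:", adjusted_rand_score(y, km_labels))
```

On this data DBSCAN should score close to 1.0, while k-means, which draws a straight boundary between centroids, scores noticeably lower.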

Tuning DBSCAN Parameters

The effectiveness of DBSCAN largely depends on the chosen parameters. Let's look at their roles:

eps: Too small, and most points will be treated as noise or split into many tiny clusters. Too large, and nearby dense clusters will merge into a single one.

min_samples: Lowering this value yields more clusters, including smaller ones. Raising it demands denser neighborhoods, so fewer clusters form and smaller but genuinely dense structures may be discarded as noise.
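A quick sweep makes these effects visible. The sketch below (the eps values are arbitrary choices for illustration) reruns DBSCAN on the moons data from the earlier example and reports cluster and noise counts for each setting:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.cluster import DBSCAN

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

results = {}
for eps in (0.05, 0.2, 0.5):
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(X)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)  # exclude the noise label
    n_noise = int(np.sum(labels == -1))
    results[eps] = (n_clusters, n_noise)
    print(f"eps={eps}: {n_clusters} clusters, {n_noise} noise points")
```

Expect the tiny eps to fragment the data and mark many points as noise, the moderate eps to recover the two moons, and the large eps to fuse them into one cluster.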

The general approach is to test several eps values while keeping min_samples around 5 or more, refining based on the resulting clusters. A useful heuristic is the k-distance graph: plot each point's distance to its k-th nearest neighbor (with k equal to min_samples) in sorted order, and read a candidate eps off the "elbow" where the curve bends sharply upward:

from sklearn.neighbors import NearestNeighbors
import numpy as np

# Use k equal to min_samples; when querying the training data,
# each point's first returned neighbor is the point itself.
neigh = NearestNeighbors(n_neighbors=5)
neigh.fit(X)
distances, _ = neigh.kneighbors(X)

# Sort each point's distance to its 5th returned neighbor
distances = np.sort(distances[:, -1])
plt.plot(distances)
plt.title('k-distance Graph')
plt.xlabel('Points sorted by k-distance')
plt.ylabel('Distance to 5th Nearest Neighbor')
plt.show()
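Reading the elbow off the plot is ultimately a judgment call. As a rough programmatic starting point, one can take a high percentile of the sorted k-distances; the 95th percentile used here is an assumption of mine, not a standard rule, and the result should always be refined by inspecting the clusters it produces:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neighbors import NearestNeighbors

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

k = 5  # match the min_samples you intend to use
neigh = NearestNeighbors(n_neighbors=k).fit(X)
distances, _ = neigh.kneighbors(X)
k_dist = np.sort(distances[:, -1])

# Crude rule of thumb (an assumption, not an official recipe):
# most points should have k neighbors within the candidate eps.
candidate = float(np.percentile(k_dist, 95))
print("candidate eps:", round(candidate, 3))
```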

Advantages and Limitations

Advantages:

  • DBSCAN does not require specifying the number of clusters beforehand.
  • Explicitly labels outliers as noise instead of forcing every point into a cluster.
  • Detects clusters of various shapes.

Limitations:

  • Choosing optimal parameters can be challenging without prior knowledge of the data.
  • A single global eps struggles when clusters have widely varying densities.
  • Tends to become computationally expensive on very large datasets.

Conclusion

DBSCAN is a versatile clustering method that works both in simple scenarios with well-defined dense clusters and in complex datasets containing noise and irregular shapes. Experimenting with its parameters on your own data can reveal structure that centroid-based methods miss.
