
Text Processing with Scikit-Learn's `CountVectorizer`

Last updated: December 17, 2024

Text processing is a fundamental step in many machine learning projects, particularly in the domain of natural language processing (NLP). Scikit-learn, a popular machine learning library in Python, offers several tools to facilitate text processing. One such tool is the CountVectorizer, which is useful for converting a collection of text documents to a matrix of token counts.

What is CountVectorizer?

CountVectorizer is part of scikit-learn’s feature_extraction.text module. It transforms text into a sparse matrix of integers, representing the count of each token (word, in most cases) appearing in the input text corpus.

Basic Usage

Let’s start with a simple example to illustrate how CountVectorizer works:

from sklearn.feature_extraction.text import CountVectorizer

# Define a corpus of text documents
corpus = [
    'This is the first document.',
    'This document is the second document.',
    'And this is the third one.',
    'Is this the first document?'
]

# Initialize a CountVectorizer object
vectorizer = CountVectorizer()

# Fit and transform the corpus
X = vectorizer.fit_transform(corpus)

# Convert the sparse matrix to a dense array
doc_term_matrix = X.toarray()

# Print the feature names (one per column of the matrix)
feature_names = vectorizer.get_feature_names_out()
print('Feature Names:', feature_names)

print('Document-Term Matrix:')
print(doc_term_matrix)

In this example, the CountVectorizer transforms the input corpus into a set of token counts. The get_feature_names_out() method retrieves the names of each feature (token), while toarray() converts the sparse matrix to a dense format, showing the count of each token in each document.

Basic Parameters

Scikit-learn's CountVectorizer offers several parameters to control its operation:

  • max_features: Limits the vocabulary to the most frequent terms across the corpus.
  • stop_words: Removes stop words (like 'and', 'is', etc.), either via the built-in 'english' list or a custom list you supply.
  • ngram_range: Considers sequences of n tokens, called n-grams. For example, ngram_range=(1, 2) will include both unigrams and bigrams.

Using Stop Words

Eliminating stop words can be crucial for reducing noise and focusing on more informative words.

# Initialize CountVectorizer with stop words
vectorizer_with_stopwords = CountVectorizer(stop_words='english')
X_stopwords = vectorizer_with_stopwords.fit_transform(corpus)
print('Feature Names after removing stop words:')
print(vectorizer_with_stopwords.get_feature_names_out())

Working with N-grams

To consider combinations of words, adjust the ngram_range parameter:

# Initialize CountVectorizer with an n-gram range
vectorizer_ngrams = CountVectorizer(ngram_range=(1, 2))
X_ngrams = vectorizer_ngrams.fit_transform(corpus)
print('Feature Names with n-gram ranges:')
print(vectorizer_ngrams.get_feature_names_out())

The above example considers both single words (unigrams) and pairs of words (bigrams) as features. Altering this parameter can significantly change the number of features the vectorizer produces.

Conclusion

The CountVectorizer in Scikit-learn is a powerful tool to convert text data into numerical data, allowing it to be fed into machine learning algorithms. By effectively configuring its parameters, you can optimize your text processing workflow to better suit the characteristics of your data, enabling the extraction of meaningful patterns and insights.

Understanding and leveraging this tool is a crucial skill in any machine learning toolkit, especially when dealing with language data. From removing stop words to experimenting with n-grams, CountVectorizer provides you with the flexibility to tailor your text processing pipeline as needed.

Next Article: Using `TfidfVectorizer` for Text Classification in Scikit-Learn

Previous Article: Extracting Image Patches with Scikit-Learn

Series: Scikit-Learn Tutorials
