Pandas: Drop columns whose average is less than a threshold

Updated: February 22, 2024 By: Guest Contributor


Pandas is a powerful library in Python for data manipulation and analysis. In this tutorial, we will explore how to drop columns in a DataFrame whose average value is below a specified threshold. This can be particularly useful when preprocessing data for machine learning or statistical analysis, enabling you to quickly eliminate features that do not meet certain criteria. We will start with basic examples and gradually move to more advanced techniques. Let’s dive in!

Getting Started

To start, ensure you have Pandas installed:

pip install pandas

Let’s create a sample DataFrame to work with:

import pandas as pd
import numpy as np

df = pd.DataFrame({
    'A': np.random.rand(10),
    'B': np.random.rand(10) * 10,
    'C': np.random.rand(10) * 100
})

This creates a DataFrame with three columns ('A', 'B', 'C') filled with random values on different scales. Now, let's define the problem concretely: we want to remove every column whose average value falls below a specified threshold. For example, with a threshold of 5, column 'A' would be removed, since its values lie in [0, 1) and its mean can never reach 5.
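Before dropping anything, it can help to inspect the per-column averages directly. A minimal sketch (seeded here so the output is reproducible; exact values will differ with other seeds):

```python
import pandas as pd
import numpy as np

np.random.seed(0)  # fixed seed so this sketch is reproducible
df = pd.DataFrame({
    'A': np.random.rand(10),          # values in [0, 1), mean around 0.5
    'B': np.random.rand(10) * 10,     # values in [0, 10), mean around 5
    'C': np.random.rand(10) * 100,    # values in [0, 100), mean around 50
})

# df.mean() returns a Series of per-column averages
print(df.mean())
```

Column 'A' will always fall below a threshold of 5, while 'B' sits right at the borderline and may land on either side from run to run.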

Basic Example

Here’s how to drop columns based on their average value:

threshold = 5

cols_to_drop = [col for col in df.columns if df[col].mean() < threshold]
df.drop(cols_to_drop, axis=1, inplace=True)


In this example, every column whose average falls below the threshold of 5 is collected into cols_to_drop and removed with df.drop(); inplace=True applies the change directly to the DataFrame rather than returning a modified copy.
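The same result can be reached without an explicit list comprehension, by boolean-indexing the columns with the Series that df.mean() returns. A sketch (seeded for reproducibility; it keeps columns at or above the threshold rather than dropping the rest in place):

```python
import pandas as pd
import numpy as np

np.random.seed(42)
df = pd.DataFrame({
    'A': np.random.rand(10),
    'B': np.random.rand(10) * 10,
    'C': np.random.rand(10) * 100,
})
threshold = 5

# df.mean() >= threshold is a boolean Series indexed by column name;
# df.loc[:, ...] keeps only the columns where it is True
df_kept = df.loc[:, df.mean() >= threshold]
```

This form is concise and leaves the original DataFrame untouched, which is often preferable to inplace mutation.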

Using Functions

For a more dynamic and reusable approach, we can encapsulate this logic inside a function:

def drop_cols_below_threshold(df, threshold):
    cols_to_drop = [col for col in df.columns if df[col].mean() < threshold]
    df.drop(cols_to_drop, axis=1, inplace=True)
    return df

# Usage:
df_filtered = drop_cols_below_threshold(df, 5)

This function, drop_cols_below_threshold, can now be placed in a utility library and reused across different projects and datasets, making it a valuable asset for any data scientist’s toolkit.
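One design note: because the function uses inplace=True, it modifies the DataFrame passed in as a side effect. If you would rather leave the caller's data untouched, a non-mutating variant is a small change (a sketch, not the article's original function):

```python
import pandas as pd
import numpy as np

def drop_cols_below_threshold(df, threshold):
    """Return a new DataFrame without columns whose mean is below threshold.

    Unlike the inplace version, the input DataFrame is left unchanged.
    """
    keep = [col for col in df.columns if df[col].mean() >= threshold]
    return df[keep]

# Toy data with known means: 'low' has mean 0, 'high' has mean 10
df = pd.DataFrame({'low': np.zeros(5), 'high': np.full(5, 10.0)})
result = drop_cols_below_threshold(df, 5)
# 'low' is filtered out of the result, while df itself keeps both columns
```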

Advanced Techniques

As we delve into more advanced techniques, let’s consider additional complexities, such as handling NaN values and only dropping columns based on the average of non-missing values.

To address NaN values, we adjust our strategy:

df['D'] = np.nan

def drop_cols_below_threshold_with_na(df, threshold):
    cols_to_drop = [col for col in df.columns if df[col].fillna(0).mean() < threshold]
    df.drop(cols_to_drop, axis=1, inplace=True)
    return df

# Configuration with handling NaN values:
th_with_na = 5
df_filtered_na = drop_cols_below_threshold_with_na(df, th_with_na)

This adjusted function, drop_cols_below_threshold_with_na, fills NaN values with 0 before computing the average. Note that fillna() operates on a copy here, so the stored data is not modified; however, treating every missing entry as 0 pulls the average downward, which may not always be what you want. As always, the approach should be tailored to the specific requirements of your data and analysis tasks.
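To see how zero-filling shifts the average, compare both calculations on a small column with toy values (a sketch):

```python
import pandas as pd
import numpy as np

s = pd.Series([10.0, np.nan, 20.0, np.nan])

zero_filled_mean = s.fillna(0).mean()  # (10 + 0 + 20 + 0) / 4 = 7.5
skipna_mean = s.mean()                 # skipna=True by default: (10 + 20) / 2 = 15.0
```

With a threshold of 10, the zero-filling strategy would drop this column (7.5 < 10), while averaging only the observed values would keep it (15.0 >= 10). The right choice depends on whether missing entries should count against a column.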

Furthermore, with Pandas there are often multiple ways to achieve the same result. Another approach relies on the mean() method's skipna parameter, which defaults to True, so NaN values are simply ignored when computing averages:

th_ignore_na = 5
cols_to_drop_ignore_na = [col for col in df.columns if df[col].mean(skipna=True) < th_ignore_na]
df.drop(cols_to_drop_ignore_na, axis=1, inplace=True)


This method leaves NaN values out of the calculation entirely, so the average reflects only the non-missing data. One caveat: a column that is entirely NaN (such as 'D' above) has a mean of NaN, and NaN < threshold evaluates to False, so such a column survives this filter, whereas the fillna(0) version drops it.
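A related practical wrinkle: calling mean() on a non-numeric column raises an error in recent pandas versions. Restricting the check to numeric columns with select_dtypes avoids this while leaving text columns in place. A sketch with hypothetical column names:

```python
import pandas as pd

df = pd.DataFrame({
    'A': [1.0, 2.0, 3.0],          # mean 2.0 -> below threshold
    'B': [10.0, 20.0, 30.0],       # mean 20.0 -> kept
    'label': ['x', 'y', 'z'],      # non-numeric: no meaningful mean
})
threshold = 5

# Compute averages only over numeric columns
numeric = df.select_dtypes(include='number')
cols_to_drop = [col for col in numeric.columns if numeric[col].mean() < threshold]
df_filtered = df.drop(columns=cols_to_drop)
```

Here only 'A' is dropped; 'label' is never considered for removal because it is excluded from the mean computation.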


Conclusion

Dropping columns based on their average value is a common data preprocessing step. By using Pandas, we explored several ways to achieve this, from basic to more advanced techniques. Whether integrating this functionality into custom functions or applying directly to your DataFrame, understanding how to manipulate data based on its statistical properties is an essential skill for data scientists and analysts alike.