Using DataFrame.dropna() method in Pandas

Updated: February 20, 2024 By: Guest Contributor


In this tutorial, we’ll explore the versatility of the DataFrame.dropna() method in Pandas, a powerful tool for handling missing data in data sets. Managing missing values is a critical step in pre-processing data for analysis, machine learning models, or data visualization. The dropna() method offers a flexible way to filter out missing data row-wise or column-wise. We will guide you through a variety of use cases, showcasing the method’s parameters and providing code examples ranging from basic to advanced scenarios.

Getting Started

Before diving into examples, ensure you have Pandas installed in your environment. You can do this via pip:

pip install pandas

Now, let’s import pandas and create a simple DataFrame to work with:

import pandas as pd
import numpy as np

# Creating a simple DataFrame
DF = pd.DataFrame({
  'A': [1, np.nan, 3],
  'B': [np.nan, 2, np.nan],
  'C': [np.nan, np.nan, 1]
})

This produces:

     A    B    C
0  1.0  NaN  NaN
1  NaN  2.0  NaN
2  3.0  NaN  1.0

Basic Usage

At its most basic, dropna() removes every row that contains at least one missing value:

DF_cleaned = DF.dropna()


Empty DataFrame
Columns: [A, B, C]
Index: []

In this example, since all rows have at least one NaN value, the entire DataFrame is emptied.
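By default, dropna() uses how='any', dropping a row if any of its values is missing. The alternative, how='all', drops a row only when every value is missing. A minimal sketch of the difference, using the same sample data (variable names here are illustrative):

```python
import pandas as pd
import numpy as np

DF = pd.DataFrame({
    'A': [1, np.nan, 3],
    'B': [np.nan, 2, np.nan],
    'C': [np.nan, np.nan, 1]
})

# how='any' (the default): drop a row if ANY value is NaN
any_dropped = DF.dropna(how='any')

# how='all': drop a row only if ALL of its values are NaN
all_dropped = DF.dropna(how='all')

print(len(any_dropped))  # 0 -- every row has at least one NaN
print(len(all_dropped))  # 3 -- no row is entirely NaN
```

Since no row here is entirely empty, how='all' leaves the DataFrame untouched, which is often the safer default when rows carry partial information.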

Column-wise Removal

To remove columns instead of rows, you can use the axis parameter:

DF_cleaned = DF.dropna(axis=1)


Empty DataFrame
Columns: []
Index: []

Again, since every column has at least one NaN, all columns are deleted.
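To see axis=1 remove only genuinely empty columns, combine it with how='all'. A small sketch, with an extra all-NaN column D added purely for illustration:

```python
import pandas as pd
import numpy as np

DF = pd.DataFrame({
    'A': [1, np.nan, 3],
    'B': [np.nan, 2, np.nan],
    'C': [np.nan, np.nan, 1],
    'D': [np.nan, np.nan, np.nan]  # entirely missing column
})

# Drop only the columns whose values are ALL NaN
DF_partial = DF.dropna(axis=1, how='all')

print(list(DF_partial.columns))  # ['A', 'B', 'C'] -- only D is removed
```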

Customizing Thresholds

For more control, you may use the thresh parameter to specify the minimum number of non-NA values a row must contain in order to be kept:

DF_threshold = DF.dropna(thresh=2)
print(DF_threshold)


     A    B    C
2  3.0  NaN  1.0

Only row 2 contains at least two non-NaN values, so it is the only row retained.
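To predict what a given thresh will keep, you can count the non-NaN values per row with notna().sum(axis=1); a row survives exactly when its count meets the threshold. A quick sketch:

```python
import pandas as pd
import numpy as np

DF = pd.DataFrame({
    'A': [1, np.nan, 3],
    'B': [np.nan, 2, np.nan],
    'C': [np.nan, np.nan, 1]
})

# Count of non-NaN values in each row
counts = DF.notna().sum(axis=1)
print(counts.tolist())  # [1, 1, 2]

# dropna(thresh=2) keeps exactly the rows with count >= 2
kept = DF.dropna(thresh=2)
print(kept.index.tolist())  # [2]
```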

Specific Columns

You can also target specific columns for dropping rows with missing values:

DF_col_specific = DF.dropna(subset=['A', 'C'])

This filters out rows based on the presence of NaN values in the specified columns. The output is:

     A    B    C
2  3.0  NaN  1.0
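The subset parameter also combines with how: with how='all', a row is dropped only when every listed column is missing, rather than when any of them is. A hedged sketch on the same data:

```python
import pandas as pd
import numpy as np

DF = pd.DataFrame({
    'A': [1, np.nan, 3],
    'B': [np.nan, 2, np.nan],
    'C': [np.nan, np.nan, 1]
})

# Drop a row only if BOTH 'A' and 'C' are NaN
DF_any_present = DF.dropna(subset=['A', 'C'], how='all')

print(DF_any_present.index.tolist())  # [0, 2] -- only row 1 is missing both
```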

Advanced Scenarios

Let’s explore some more complex scenarios where dropna() can be particularly useful.

Combining with Other Data Manipulation Methods

Often, you might want to chain dropna() with other Pandas methods for more efficient data cleaning:

DF_chained = DF.dropna(axis=1, how='all').fillna(0)

This technique is useful for quickly removing completely blank columns first, then replacing any remaining NaN values with a default. (Note the order matters: calling fillna(0) first would leave nothing for dropna() to remove.)
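A runnable sketch of this kind of chaining, with an extra all-NaN column added here purely for illustration:

```python
import pandas as pd
import numpy as np

DF = pd.DataFrame({
    'A': [1, np.nan, 3],
    'B': [np.nan, 2, np.nan],
    'D': [np.nan, np.nan, np.nan]  # completely blank column
})

# Remove fully-empty columns first, then fill the remaining NaN with 0
DF_chained = DF.dropna(axis=1, how='all').fillna(0)

print(list(DF_chained.columns))       # ['A', 'B'] -- D was dropped
print(DF_chained.isna().sum().sum())  # 0 -- no missing values remain
```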

Working with Large DataFrames

For larger DataFrames, specifying columns or setting a threshold can help manage performance and ensure important data isn’t inadvertently removed:

import pandas as pd
import numpy as np

# Dummy large DataFrame
rows, cols = 10000, 10
DF_large = pd.DataFrame(np.random.randn(rows, cols), columns=['Col' + str(i) for i in range(cols)])
DF_large.iloc[:100, :2] = np.nan

# Efficient cleaning
DF_clean = DF_large.dropna(axis='columns', thresh=rows-100)

In this scenario, thresh=rows-100 keeps every column that has at least rows-100 non-NaN values. The two corrupted columns contain exactly that many, so they just meet the bar and all columns survive; raising the threshold by one (thresh=rows-99) would drop them instead.
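Before choosing a threshold, it can help to inspect the per-column missing counts with isna().sum(). A sketch on the same dummy data, showing how a one-unit change in thresh flips whether the corrupted columns survive (the seeded generator is just for reproducibility):

```python
import pandas as pd
import numpy as np

rows, cols = 10000, 10
rng = np.random.default_rng(0)
DF_large = pd.DataFrame(rng.standard_normal((rows, cols)),
                        columns=['Col' + str(i) for i in range(cols)])
DF_large.iloc[:100, :2] = np.nan  # corrupt the first 100 rows of 2 columns

print(DF_large.isna().sum().max())  # 100 -- worst-case missing count per column

# thresh=rows-100: the corrupted columns have exactly rows-100 non-NaN, so all survive
kept_all = DF_large.dropna(axis='columns', thresh=rows - 100)

# thresh=rows-99: the two corrupted columns now fall short and are dropped
kept_strict = DF_large.dropna(axis='columns', thresh=rows - 99)

print(kept_all.shape[1], kept_strict.shape[1])  # 10 8
```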


Conclusion

The DataFrame.dropna() method in Pandas is a powerful ally in the battle against missing data. Its flexibility allows for a wide range of data cleaning strategies, from simple row or column removals to advanced techniques tailored for large datasets. Hopefully, this tutorial has illuminated the method’s capabilities and will assist in your data handling tasks.

Remember, though the impulse might be to quickly discard any data containing NaN, thoughtful application of dropna() can preserve valuable insights that would otherwise be lost.
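One last practical detail: dropna() preserves the original row labels, which can leave gaps in the index. If you want a clean 0..n-1 index afterwards, chain reset_index(drop=True) (recent pandas releases, 2.0 and later, also accept ignore_index=True directly in dropna() for the same effect). A sketch:

```python
import pandas as pd
import numpy as np

DF = pd.DataFrame({'A': [1, np.nan, 3], 'B': [4.0, 5.0, 6.0]})

kept = DF.dropna()
print(kept.index.tolist())  # [0, 2] -- original labels preserved, with a gap

# Relabel from zero; equivalent to dropna(ignore_index=True) on pandas >= 2.0
relabelled = DF.dropna().reset_index(drop=True)
print(relabelled.index.tolist())  # [0, 1]
```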