
Binary Classification with PyTorch: Implementing a Simple Feedforward Network

Last updated: December 14, 2024

Binary classification is a fundamental task in machine learning where we categorize data points into one of two distinct classes. In this article, we'll explore how to implement a simple feedforward neural network for binary classification using the PyTorch deep learning library. We will cover data preparation, model definition, training, and evaluation.

Prerequisites

Before we dive into the code, ensure you have PyTorch installed. The examples below also use scikit-learn to generate and split the synthetic dataset, so install both with pip if you haven't already:

pip install torch scikit-learn

Data Preparation

For binary classification, our dataset should consist of samples with features and corresponding labels (0 or 1). Let's start by creating some synthetic data for simplicity:

import torch
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Create synthetic data
data, labels = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=42)

# Standardize features
data = StandardScaler().fit_transform(data)

# Split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(data, labels, test_size=0.2, random_state=42)
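
It's worth a quick sanity check that the split looks as expected. With n_samples=1000 and test_size=0.2 above, we should end up with 800 training and 200 test samples:

# Optional sanity check on shapes and labels
print(X_train.shape, X_test.shape)  # (800, 2) (200, 2)
print(y_train[:10])                 # each label is 0 or 1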

Model Definition

The feedforward network will be simple, consisting of an input layer, one hidden layer, and an output layer with one neuron (representing the binary output). Using PyTorch, we define the model:

import torch.nn as nn
import torch.nn.functional as F

class SimpleFeedforward(nn.Module):
    def __init__(self):
        super(SimpleFeedforward, self).__init__()
        self.fc1 = nn.Linear(2, 10)  # Two input features, ten neurons in hidden layer
        self.fc2 = nn.Linear(10, 1)  # One output neuron for binary classification

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = torch.sigmoid(self.fc2(x))  # Sigmoid to output probability
        return x
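
Before moving on, an optional forward pass on a random dummy batch confirms the wiring: the model should produce one probability per sample (check_model and dummy_batch below are throwaway names used only for this check):

# Sanity check: forward pass on a random dummy batch
check_model = SimpleFeedforward()
dummy_batch = torch.randn(4, 2)  # 4 samples, 2 features each
print(check_model(dummy_batch).shape)  # torch.Size([4, 1]), values in (0, 1)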

Training the Model

We use binary cross-entropy (nn.BCELoss) as the loss function, which matches the sigmoid probability output of our model, and stochastic gradient descent (SGD) as the optimizer:

model = SimpleFeedforward()
criterion = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Convert data to PyTorch tensors
X_train = torch.tensor(X_train, dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.float32)

# Training loop
for epoch in range(100):
    model.train()

    # Zero the gradients
    optimizer.zero_grad()

    # Forward pass
    outputs = model(X_train)
    loss = criterion(outputs.squeeze(), y_train)

    # Backward pass and optimization
    loss.backward()
    optimizer.step()

    if (epoch+1) % 10 == 0:
        print(f'Epoch [{epoch+1}/100], Loss: {loss.item():.4f}')
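
As an aside, a common and more numerically stable variant is to have the model emit raw logits (no sigmoid in forward) and use nn.BCEWithLogitsLoss, which applies the sigmoid internally. Here is a minimal sketch of that pattern with the same architecture (the class name LogitsFeedforward is ours for illustration; the rest of this article keeps the sigmoid version above):

class LogitsFeedforward(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(2, 10)
        self.fc2 = nn.Linear(10, 1)

    def forward(self, x):
        # No sigmoid here: BCEWithLogitsLoss fuses it with the loss
        return self.fc2(F.relu(self.fc1(x)))

logit_model = LogitsFeedforward()
logit_criterion = nn.BCEWithLogitsLoss()
loss = logit_criterion(logit_model(X_train).squeeze(), y_train)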

Evaluating the Model

After training, evaluate the model's performance on the test set. Convert the test data into tensors and compute predictions:

# Convert test data to tensors
X_test = torch.tensor(X_test, dtype=torch.float32)
y_test = torch.tensor(y_test, dtype=torch.float32)

def evaluate(model, X_test, y_test):
    model.eval()  # Set the model to evaluation mode
    with torch.no_grad():  # No gradients needed for evaluation
        predictions = model(X_test).squeeze()
        predictions = (predictions >= 0.5).float()  # Threshold probabilities at 0.5
        accuracy = (predictions == y_test).sum().item() / len(y_test)
        print(f'Test Accuracy: {accuracy * 100:.2f}%')

evaluate(model, X_test, y_test)
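
Accuracy alone can hide which class the model gets wrong, so it can be worth also tallying a confusion matrix. Here is a minimal sketch using the tensors from above (the variable names are illustrative):

with torch.no_grad():
    preds = (model(X_test).squeeze() >= 0.5).float()

# Confusion-matrix counts: true/false positives and negatives
tp = ((preds == 1) & (y_test == 1)).sum().item()
tn = ((preds == 0) & (y_test == 0)).sum().item()
fp = ((preds == 1) & (y_test == 0)).sum().item()
fn = ((preds == 0) & (y_test == 1)).sum().item()
print(f'TP={tp} FP={fp} FN={fn} TN={tn}')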

Conclusion

In this article, we implemented a simple feedforward neural network using PyTorch to solve a binary classification problem. The process involved preparing data, constructing the model, and iterating through training and evaluation. This foundational approach can be extended and modified for more complex datasets and network architectures as you grow comfortable and proficient with PyTorch.
