
How to Set Up PyTorch and Start Your First Project

Last updated: December 14, 2024

PyTorch is an open-source machine learning library widely used for deep learning applications like computer vision and natural language processing. In this guide, we will walk through the process of setting up PyTorch and starting your first project, emphasizing clarity with code examples.

Getting Started with PyTorch

Before diving into setting up PyTorch, ensure you have Python installed on your system. Python 3.8 or later is recommended, as recent PyTorch releases no longer support older versions. You can download Python from the official Python website.
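
If you are not sure which version you have, a quick check from a Python prompt confirms it. The 3.8 floor used below is an assumption based on recent PyTorch releases; adjust it for the exact release you plan to install:

import sys

# Show the version of the interpreter currently in use
print(sys.version)

# Assumed minimum for recent PyTorch releases; newer releases may require more
assert sys.version_info >= (3, 8), "Please upgrade Python before installing PyTorch"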

Step 1: Setting Up a Virtual Environment

It's best practice to create a virtual environment for your Python projects to manage dependencies effectively. Use the following commands to create and activate a virtual environment:

# Install virtualenv if it's not installed
$ pip install virtualenv

# Create a virtual environment named `pytorch-env`
$ virtualenv pytorch-env

# Activate the virtual environment (Linux and macOS)
$ source pytorch-env/bin/activate

# Activate the virtual environment (Windows)
$ .\pytorch-env\Scripts\activate

Once the virtual environment is activated, your shell prompt should show the environment name (pytorch-env) as a prefix.
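
If you want to confirm that the environment's interpreter is the one actually in use, a quick check from a Python prompt inside the environment prints the interpreter path, which should point inside pytorch-env:

import sys

# The interpreter path should live inside the pytorch-env directory
print(sys.executable)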

Step 2: Installing PyTorch

With your virtual environment activated, you can now install PyTorch. The installation command can vary depending on your system configuration—CPU or GPU. You can visit the PyTorch website for the latest installation commands. Here’s an example command for installing PyTorch with CPU support:

$ pip install torch torchvision

For those with a CUDA-enabled GPU, the command may look like this:

$ pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

Make sure the CUDA build you choose (cu118 in this example) matches the CUDA version supported by your GPU and driver.
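
Whichever build you install, a short sanity check confirms that PyTorch imports correctly and reports whether a CUDA device is visible:

import torch

# Print the installed PyTorch version
print(torch.__version__)

# True only if a CUDA-capable GPU and a matching CUDA build are available
print(torch.cuda.is_available())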

Your First PyTorch Project: A Simple Linear Regression

After installing PyTorch, let's build a small project that walks through the core PyTorch workflow: defining a model, training it, and making predictions. We'll use a simple linear regression model that learns to predict y from x.

Step 3: Import PyTorch

Start by creating a new Python file, say linear_regression.py, and import the necessary libraries:

import torch
import torch.nn as nn
import torch.optim as optim

Step 4: Define the Model

Create a simple linear regression model using PyTorch's neural network module:

class LinearRegressionModel(nn.Module):
    def __init__(self):
        super(LinearRegressionModel, self).__init__()
        self.linear = nn.Linear(1, 1)  # one input and one output

    def forward(self, x):
        return self.linear(x)
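
Although the model is untrained at this point, you can already sanity-check it by passing a small dummy batch through it; the output should contain one prediction per input row (the values themselves are meaningless until training):

# Quick shape check with an untrained model (values are random at this stage)
model = LinearRegressionModel()
dummy_input = torch.tensor([[1.0], [2.0]])  # batch of 2 samples, 1 feature each
print(model(dummy_input).shape)             # torch.Size([2, 1])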

Step 5: Initialize Model, Loss, and Optimizer

Next, instantiate the model, define a loss function, and select an optimizer:

# Initialize the model, loss function and the optimizer
model = LinearRegressionModel()
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)
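
Here, nn.MSELoss averages the squared differences between predictions and targets, and SGD updates the weights in the direction that reduces that value. A tiny standalone example makes the loss computation concrete:

# MSE of predictions [1.0, 3.0] against targets [2.0, 5.0]:
# ((1 - 2)^2 + (3 - 5)^2) / 2 = (1 + 4) / 2 = 2.5
example_loss = nn.MSELoss()(torch.tensor([1.0, 3.0]), torch.tensor([2.0, 5.0]))
print(example_loss.item())  # 2.5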

Step 6: Train the Model

Use a simple training loop to update your model's weights:

# Sample data
X_train = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
y_train = torch.tensor([[2.0], [4.0], [6.0], [8.0]])

# Training loop
n_epochs = 1000
for epoch in range(n_epochs):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(X_train)

    # Compute the loss
    loss = criterion(y_pred, y_train)

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if (epoch+1) % 100 == 0:
        print(f'Epoch {epoch+1}: loss = {loss.item():.4f}')

After training, your model is ready to make predictions.
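
Because the training data follows y = 2x exactly, the learned parameters should converge to roughly a weight of 2 and a bias of 0, which you can verify directly:

# Inspect the learned parameters (expected: weight close to 2, bias close to 0)
weight = model.linear.weight.item()
bias = model.linear.bias.item()
print(f'Learned weight: {weight:.4f}, bias: {bias:.4f}')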

Step 7: Test the Model

Use your trained model to test new data:

with torch.no_grad():  # Disables gradient calculation
    X_test = torch.tensor([[5.0]])
    y_test_pred = model(X_test)
    print(f'Predicted value for input 5.0: {y_test_pred.item():.4f}')

And that's it! You've successfully set up PyTorch and built a simple linear regression model from scratch.

Conclusion

PyTorch offers the flexibility, scalability, and accessibility needed for developing sophisticated machine learning models. This guide serves as the first step on your PyTorch journey. As you gain more experience, you can explore more advanced topics, such as using GPUs for faster computation and implementing other types of neural networks.
