
PyTorch complete cheat sheet

Last updated: December 21, 2024

PyTorch is a Python-based scientific computing package that harnesses the power of graphics processing units while remaining flexible and fast. It is an open-source machine learning library widely used for applications such as deep learning, natural language processing, and computer vision. In this cheat sheet, we'll cover some of its essential functionality, including tensors, data loading, and key modules like nn, optim, and autograd.

1. Basic Operations with Tensors

Tensors are PyTorch's core data structure: multidimensional arrays, similar to NumPy's, that let you store and manipulate data of arbitrary dimensionality.

import torch

# Create a simple tensor
x = torch.tensor([1, 2, 3])
print(x)

# Tensor with random values
rand_tensor = torch.rand(3, 3)
print(rand_tensor)

# Tensor with all zeros
zero_tensor = torch.zeros(5, 5)
print(zero_tensor)

# Adding two tensors
y = torch.tensor([4, 5, 6])
z = x + y
print(z)
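
Beyond creation and addition, tensors support reshaping, matrix multiplication, and NumPy-style indexing and slicing. A quick sketch:

# Reshape to a different size with the same number of elements
a = torch.rand(2, 3)
b = a.reshape(3, 2)

# Matrix multiplication: (2, 3) @ (3, 2) -> (2, 2)
c = a @ b

# Indexing and slicing work like NumPy
print(a[0, 1])   # single element
print(a[:, 0])   # first column

# Inspect shape and dtype
print(c.shape, c.dtype)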

2. Data Loading with DataLoader

Loading data efficiently is crucial for any machine learning task. PyTorch provides the DataLoader class for this purpose; it batches and shuffles items from a Dataset and can load them in parallel using multiple worker processes.

from torch.utils.data import DataLoader, Dataset

class MyDataset(Dataset):
    def __init__(self, data):
        self.data = data

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

# Sample data
data = [i for i in range(100)]

# Create the dataset
dataset = MyDataset(data)

# Create the DataLoader
dataloader = DataLoader(dataset, batch_size=10, shuffle=True)

# Iterate through the DataLoader
for batch in dataloader:
    print(batch)
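
When the data already lives in tensors, torch.utils.data.TensorDataset can stand in for a custom Dataset class, and the num_workers argument enables loading in parallel worker processes. A minimal sketch (on Windows and macOS, code that uses workers should run under an if __name__ == '__main__': guard):

from torch.utils.data import TensorDataset

features = torch.rand(100, 10)
labels = torch.randint(0, 2, (100,))

# Pairs each feature row with its label; __getitem__ returns a tuple
tensor_dataset = TensorDataset(features, labels)

# num_workers > 0 loads batches in background worker processes
loader = DataLoader(tensor_dataset, batch_size=10, shuffle=True, num_workers=2)

for features_batch, labels_batch in loader:
    print(features_batch.shape, labels_batch.shape)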

3. Building Neural Networks using nn.Module

The torch.nn module provides powerful tools to design and train neural networks.

import torch.nn as nn

class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(10, 5)
        self.fc2 = nn.Linear(5, 2)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x

Here, we've defined a simple two-layer neural network, using nn.Linear for linear transformations and torch.relu as the activation function.
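
To sanity-check the network, we can instantiate it and run a forward pass on a batch of random inputs:

model = SimpleNN()

# A batch of 4 samples, each with 10 features (matching fc1's input size)
sample_input = torch.rand(4, 10)
output = model(sample_input)
print(output.shape)  # torch.Size([4, 2])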

4. Optimization and Loss Computation

PyTorch provides a variety of optimizers and loss functions through the torch.optim and torch.nn modules, respectively.

model = SimpleNN()

# Setting the loss function
loss_fn = nn.MSELoss()

# Setting the optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Sample training loop
for epoch in range(3):
    for batch in dataloader:
        # Forward pass (each batch of 10 scalars is treated as one
        # 10-feature input by nn.Linear(10, 5))
        outputs = model(batch.float())

        # Compute loss against random targets (for demonstration only)
        loss = loss_fn(outputs, torch.randn_like(outputs))
        
        # Backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f'Epoch {epoch+1}, Loss: {loss.item()}')
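
Swapping in a different optimizer or loss function is a one-line change; for example, Adam with cross-entropy (the learning rate below is just a common default):

# Adam adapts per-parameter learning rates
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# CrossEntropyLoss expects raw logits and integer class labels
loss_fn = nn.CrossEntropyLoss()

logits = model(torch.rand(4, 10))    # shape (4, 2)
targets = torch.randint(0, 2, (4,))  # class indices 0 or 1
loss = loss_fn(logits, targets)
print(loss.item())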

5. Automatic Differentiation with Autograd

PyTorch's autograd engine provides automatic differentiation, computing gradients of tensor operations on demand.

x = torch.tensor(2.0, requires_grad=True)
y = x**2
z = 3*y

# Compute gradients
z.backward()
print(x.grad)

By setting requires_grad=True, we tell PyTorch to record the operations applied to that tensor so it can compute derivatives with respect to it. Here z = 3x^2, so dz/dx = 6x, and x.grad evaluates to 12 at x = 2.
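
Conversely, gradient tracking can be switched off when it isn't needed, e.g. during inference, using torch.no_grad() or .detach():

with torch.no_grad():
    # Operations here are not recorded for differentiation
    w = x * 4
print(w.requires_grad)  # False

# detach() returns a new tensor cut off from the computation graph
x_detached = x.detach()
print(x_detached.requires_grad)  # False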

These key functionalities provide the building blocks for constructing and training deep learning models with PyTorch. With its imperative style and straightforward support for running on CPUs and GPUs, PyTorch remains a popular choice among researchers and developers alike. Moving computation to a GPU usually amounts to placing the model and its inputs on the same device, as in the sketch below.
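
A minimal device-selection sketch (it falls back to the CPU when no CUDA-capable GPU is present):

# Pick the GPU if one is available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# The model's parameters and its inputs must live on the same device
model = SimpleNN().to(device)
inputs = torch.rand(4, 10).to(device)
outputs = model(inputs)
print(outputs.device)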
