
Convert a NumPy array to a PyTorch tensor and vice versa

Last updated: April 14, 2023

This concise, practical article shows you how to convert NumPy arrays into PyTorch tensors and vice versa. Without any further ado, let’s get straight to the main points.

Turning NumPy arrays into PyTorch tensors

There are several built-in functions that can help us get the job done easily.

Using torch.from_numpy(ndarray)

This function creates a tensor from a NumPy array and shares the same memory. This means that any changes to the tensor will be reflected in the original array and vice versa.

Example:

import torch
import numpy as np

# create a sample NumPy array
arr = np.array([1, 2, 3])

# convert the NumPy array to a PyTorch tensor
tensor = torch.from_numpy(arr)

# print out the tensor
print(tensor)
# Output: tensor([1, 2, 3])

# try to modify the tensor
tensor[0] = 100

# print out the original array
print(arr)
# Output: [100   2   3]
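The sharing goes both ways. Here is a small sketch (using the same kind of array as above) that modifies the NumPy array and observes the change through the tensor:

import torch
import numpy as np

arr = np.array([1, 2, 3])
tensor = torch.from_numpy(arr)

# modify the original NumPy array
arr[1] = 50

# the change is visible through the tensor because they share memory
print(tensor)
# Output: tensor([ 1, 50,  3])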

Using torch.tensor(data)

This function creates a tensor from any data that can be converted to a NumPy array, such as a Python list, a tuple, or a NumPy array. It copies the data and does not share memory with the original data. You can also specify the dtype of the tensor as needed.

Example:

import torch
import numpy as np

# sample NumPy array
arr = np.array([
    [1, 2, 3],
    [4, 5, 6]
])

# convert NumPy array to PyTorch tensor
tensor = torch.tensor(arr)

# print the tensor
print(tensor)

Output:

tensor([[1, 2, 3],
        [4, 5, 6]])
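Since torch.tensor() copies the data anyway, you can request a specific dtype at the same time. A minimal sketch (the float32 choice is just for illustration):

import torch
import numpy as np

# sample NumPy array (integer dtype by default)
arr = np.array([
    [1, 2, 3],
    [4, 5, 6]
])

# copy the data into a tensor with an explicit dtype
tensor = torch.tensor(arr, dtype=torch.float32)

print(tensor)
# Output:
# tensor([[1., 2., 3.],
#         [4., 5., 6.]])

print(tensor.dtype)
# Output: torch.float32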

Using torch.Tensor()

Note: Do not confuse this with the previous method; here the T in Tensor is capitalized.

torch.Tensor() is an alias for the default tensor type, torch.FloatTensor. Like torch.tensor(), it creates a tensor from any data that can be converted to a NumPy array, copies the data, and does not share memory with the original data. The dtype of the resulting tensor is always float32 (the default tensor type).

Example:

import torch
import numpy as np

# sample NumPy array
arr = np.array([
    [1, 2, 3],
    [4, 5, 6]
])

# convert NumPy array to PyTorch tensor
tensor = torch.Tensor(arr)

# print the tensor
print(tensor)

# print the data type of the tensor
print(tensor.dtype)

Output:

tensor([[1., 2., 3.],
        [4., 5., 6.]])
torch.float32
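Because torch.Tensor() copies the data, later changes to the source array do not show up in the tensor. A quick sketch to illustrate:

import torch
import numpy as np

arr = np.array([1, 2, 3])

# torch.Tensor() copies the data into a new float32 tensor
tensor = torch.Tensor(arr)

# modify the original array
arr[0] = 100

# the tensor is unaffected because it owns its own copy of the data
print(tensor)
# Output: tensor([1., 2., 3.])

print(arr)
# Output: [100   2   3]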

Converting PyTorch tensors to NumPy arrays

You can convert a given PyTorch tensor to a NumPy array in several different ways. Let’s explore them one by one.

Using tensor.numpy()

The tensor.numpy() method returns a NumPy array that shares memory with the input tensor. This means that any changes to the output array will be reflected in the original tensor and vice versa.

Example:

import torch

torch.manual_seed(100)
my_tensor = torch.rand(2, 3)

# convert tensor to numpy array
arr = my_tensor.numpy()

# print the array
print(arr)

# modify the array
arr[0, 0] = 100

# print the original tensor
print(my_tensor)

Output:

[[0.1116643  0.8158431  0.26256198]
 [0.4838776  0.6765036  0.75391096]]
tensor([[100.0000,   0.8158,   0.2626],
        [  0.4839,   0.6765,   0.7539]])
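One caveat worth knowing: tensor.numpy() only works for tensors that live on the CPU and are not tracked by autograd. For such tensors, a common workaround (not covered above) is to detach them, and move them to the CPU if necessary, before converting. A minimal sketch:

import torch

# a tensor that requires gradients
t = torch.rand(2, 3, requires_grad=True)

# t.numpy() would raise a RuntimeError here;
# detach() returns a view of the same data without gradient tracking
arr = t.detach().numpy()
print(arr)

# for a tensor on the GPU, you would also move it to the CPU first:
# arr = t.detach().cpu().numpy()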

If you want to create a totally new, standalone NumPy array from a given tensor, please see the next solution.

Using tensor.clone().numpy()

tensor.clone().numpy() returns a NumPy array backed by a copy of the input tensor's data (clone() also keeps the copy connected to the computation graph, unlike detach()). Because the data is copied, any changes to the output array will not affect the original tensor, and vice versa.

Example:

import torch

my_tensor = torch.tensor([
    [1, 2, 3],
    [4, 5, 6]
])

# convert tensor to numpy array
arr = my_tensor.clone().numpy()

# print the array
print("NumPy array: \n", arr)

# modify the array
arr[0, 0] = 100

# print the original tensor
print("Original tensor: \n", my_tensor)

Output:

NumPy array: 
 [[1 2 3]
 [4 5 6]]
Original tensor: 
 tensor([[1, 2, 3],
        [4, 5, 6]])
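The independence works in the other direction too: since clone() copies the underlying data, modifying the original tensor afterwards leaves the array untouched. A short sketch:

import torch

my_tensor = torch.tensor([
    [1, 2, 3],
    [4, 5, 6]
])

# make an independent copy as a NumPy array
arr = my_tensor.clone().numpy()

# modify the original tensor
my_tensor[0, 0] = 100

# the copied array is unaffected
print(arr)
# Output:
# [[1 2 3]
#  [4 5 6]]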
