
PyTorch: How to create a tensor from a Python list

Last updated: April 14, 2023

When working with PyTorch, there are cases where you want to create a tensor directly from a Python list. For example, you may need a custom tensor with specific values that aren't easily produced by the built-in tensor creation functions, such as a pattern or sequence that torch.arange() or torch.linspace() can't generate.

This practical, code-first article shows you how to convert a Python list into a PyTorch tensor. Without any further ado, let’s get our hands dirty.

Turning Python lists into PyTorch tensors

We can get the job done easily by using the torch.tensor() function.

Example:

import torch

my_list = [1, 2, 3, 4, 5]
my_tensor = torch.tensor(my_list)

print(my_tensor)
# Output: tensor([1, 2, 3, 4, 5])

The code above creates a one-dimensional tensor with five elements. You can see the shape of my_tensor by using the shape attribute like so:

print(my_tensor.shape)
# Output: torch.Size([5])
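
If you don't pass a dtype, torch.tensor() infers one from the list's values; a list of Python integers, like the one above, should give you a 64-bit integer tensor:

print(my_tensor.dtype)
# Output: torch.int64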

If you have a nested list, such as a list of lists, you can create a multi-dimensional tensor with the same function.

Example:

import torch

my_list = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
my_tensor = torch.tensor(my_list)

print(my_tensor)

Output:

tensor([[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]])

In the example above, we create a two-dimensional tensor with three rows and three columns. You can verify that as follows:

print("Dimension of my_tensor:", my_tensor.dim())
print("Shape of my_tensor:", my_tensor.shape)

Specifying data type

You can also specify the data type of the output tensor by using the dtype argument of torch.tensor(). In the following example, we convert a list of floats into a tensor of 32-bit integers:

import torch

# a Python list of floats
my_list = [1., 2., 3., 4.]

# convert the list to a PyTorch tensor of integers
my_tensor = torch.tensor(my_list, dtype=torch.int32)

print(my_tensor.dtype)

Output:

torch.int32
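
One thing to keep in mind: casting floats to an integer dtype truncates the fractional part. A minimal sketch (with made-up values) to illustrate:

import torch

# floats with fractional parts (hypothetical values)
my_list = [1.5, 2.7, 3.9]

# the fractional parts are dropped during the conversion
my_tensor = torch.tensor(my_list, dtype=torch.int32)

print(my_tensor)
# Output: tensor([1, 2, 3], dtype=torch.int32)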
