PyTorch: Determine the memory usage of a tensor (in bytes)

Updated: April 18, 2023 By: Khue

To calculate the memory used by a PyTorch tensor in bytes, you can use the following formula:

memory_in_bytes = tensor.element_size() * tensor.nelement()

This will give you the size of the tensor data in memory, whether it lives on the CPU or the GPU. The element_size() method returns the number of bytes taken by a single element of the tensor, and the nelement() method returns the total number of elements in the tensor. Multiplying the two gives the desired result. To understand better, see the example below:

import torch

torch.manual_seed(100)

random_tensor = torch.rand(3, 4)

memory_usage_in_bytes = random_tensor.element_size() * random_tensor.nelement()

print(f"Memory Usage: {memory_usage_in_bytes} bytes")

Output:

Memory Usage: 48 bytes
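
Because element_size() depends on the tensor's dtype, the same shape can occupy very different amounts of memory. Here is a small comparison (the variable names are just for illustration):

import torch

x64 = torch.zeros(3, 4, dtype=torch.float64)  # 8 bytes per element
print(x64.element_size() * x64.nelement())    # 96 bytes

x8 = torch.zeros(3, 4, dtype=torch.int8)      # 1 byte per element
print(x8.element_size() * x8.nelement())      # 12 bytes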

If you want to get the result in bits, just multiply it by 8.
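
For instance, the 48-byte tensor from the first example works out to 48 * 8 = 384 bits:

memory_usage_in_bits = memory_usage_in_bytes * 8  # 384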

Note that the formula above reports the logical size of the tensor's data, which only coincides with the allocated memory for contiguous tensors, i.e., tensors whose data is stored in a single, densely packed block. For a non-contiguous tensor, such as a view or a slice of another tensor, the number can be misleading: a slice shares the storage of the original tensor and keeps that entire buffer alive, so the actual memory in use can be much larger than what the formula reports, while an expanded (broadcast) tensor reports more bytes than are really allocated. To make sure the result reflects the tensor's own allocation, you can call the contiguous() method before applying the formula; this copies the data into a fresh, densely packed buffer.

Example:

import torch

# Define a function to calculate the memory usage of a tensor in bytes
def memory_usage(tensor):
    return tensor.element_size() * tensor.nelement()

# Create a contiguous tensor of shape (2, 3) and dtype torch.float32
y = torch.tensor([[1., 2., 3.], [4., 5., 6.]])

# Create a non-contiguous tensor by transposing y
z = y.t()

# Make z contiguous by calling the contiguous() method
z = z.contiguous()

# Print the memory usage of z
print(memory_usage(z))

Output:

24
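
If you also want to see how much memory the underlying buffer actually occupies, recent PyTorch versions (2.0 and later) expose it through untyped_storage().nbytes(). The following is a minimal sketch comparing a slice with its contiguous copy; the variable names are just for illustration:

import torch

big = torch.rand(1000, 1000)  # 1,000,000 float32 elements, about 4 MB of storage

# A column slice is a non-contiguous view that shares big's storage
col = big[:, 0]

# The formula reports only the view's logical data size
print(col.element_size() * col.nelement())  # 4000 bytes

# ... but the view keeps the whole underlying buffer alive (PyTorch 2.0+)
print(col.untyped_storage().nbytes())  # 4000000 bytes

# A contiguous copy owns exactly the memory the formula reports
col_contig = col.contiguous()
print(col_contig.untyped_storage().nbytes())  # 4000 bytes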