PyTorch cheatsheet: Operations on tensors

Tensors are multi-dimensional arrays, similar to NumPy arrays, that PyTorch uses for efficient storage and computation of numerical data, most often in deep learning tasks.

In this Answer, we will walk through the most common operations we can perform on tensors in PyTorch. Before doing so, it helps to know what tensors are and how to create them.

Arithmetic operations on tensors

Arithmetic operations on tensors form the backbone of numerical computations in PyTorch. These operations encompass fundamental mathematical operations such as addition, subtraction, multiplication, division, exponentiation, and more, enabling manipulation, transformation, and computation on multi-dimensional data structures.

Basic operations

PyTorch provides functions to perform basic arithmetic operations on tensors, such as:

import torch
tensor1 = torch.tensor([[1, 2], [3, 4]])
tensor2 = torch.tensor([[5, 6], [7, 8]])
# Arithmetic operations (element-wise, except matmul)
addition = torch.add(tensor1, tensor2)                  # [[6, 8], [10, 12]]
subtraction = torch.sub(tensor1, tensor2)               # [[-4, -4], [-4, -4]]
multiplication = torch.mul(tensor1, tensor2)            # element-wise: [[5, 12], [21, 32]]
division = torch.div(tensor1, tensor2)                  # element-wise, returns floats
matrix_multiplication = torch.matmul(tensor1, tensor2)  # matrix product: [[19, 22], [43, 50]]
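
The same results can be obtained with Python's standard operators, which PyTorch overloads for tensors, and broadcasting lets tensors of compatible shapes be combined. The snippet below is a small supplementary illustration of both.

import torch
tensor1 = torch.tensor([[1, 2], [3, 4]])
tensor2 = torch.tensor([[5, 6], [7, 8]])
# Operator shorthands for the same arithmetic
addition = tensor1 + tensor2          # same as torch.add
product = tensor1 * tensor2           # same as torch.mul (element-wise)
matmul = tensor1 @ tensor2            # same as torch.matmul
# Broadcasting: the 1-D tensor is stretched across each row
row = torch.tensor([10, 20])
broadcast_sum = tensor1 + row         # [[11, 22], [13, 24]]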

Element-wise operations

We can also apply element-wise mathematical functions, which transform each element of the input independently and return a new tensor.

import torch
tensor1 = torch.tensor([[1, 2], [3, 4]], dtype=torch.float32)
tensor2 = torch.tensor([[5, 6], [7, 8]], dtype=torch.float32)
# Element-wise operations (each returns a new tensor)
exponential = torch.exp(tensor1)      # e raised to each element
logarithm = torch.log(tensor1)        # natural log of each element
square_root = torch.sqrt(tensor1)
absolute_value = torch.abs(tensor1)
sin = torch.sin(tensor1)              # trigonometric functions work in radians
cos = torch.cos(tensor1)
tan = torch.tan(tensor1)
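
Most of these functions also have in-place variants that end with a trailing underscore and modify the tensor directly instead of allocating a new one. A minimal sketch of the difference:

import torch
tensor1 = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
# Out-of-place: tensor1 is left unchanged
exp_copy = torch.exp(tensor1)
# In-place: the trailing underscore mutates tensor1 itself
tensor1.sqrt_()          # tensor1 now holds the square roots
tensor1.abs_()           # further in-place transforms modify the same storage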

Reduction operations

PyTorch provides a range of reduction operations that aggregate a tensor's values into a smaller result, such as the sum, mean, maximum, or minimum of all elements.

import torch
tensor1 = torch.tensor([[1, 2], [3, 4]], dtype=torch.float32)
tensor2 = torch.tensor([[5, 6], [7, 8]], dtype=torch.float32)
# Reduction operations (each collapses the tensor to a single value)
summation = torch.sum(tensor1)   # tensor(10.)
mean = torch.mean(tensor1)       # tensor(2.5000) -- requires a floating-point tensor
maximum = torch.max(tensor1)     # tensor(4.)
minimum = torch.min(tensor1)     # tensor(1.)
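
By default, these reductions collapse the whole tensor into a single value; passing a dim argument reduces along one dimension instead. A small sketch:

import torch
tensor1 = torch.tensor([[1, 2], [3, 4]], dtype=torch.float32)
# Reduce along a dimension instead of over the whole tensor
column_sums = torch.sum(tensor1, dim=0)     # tensor([4., 6.])
row_means = torch.mean(tensor1, dim=1)      # tensor([1.5000, 3.5000])
# torch.max with dim returns both the values and their indices
max_values, max_indices = torch.max(tensor1, dim=1)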

Gradient computation

PyTorch's autograd engine can compute gradients automatically, which is what backpropagation in neural networks relies on. Marking a tensor with requires_grad=True tells PyTorch to track the operations performed on it.

import torch
tensor1 = torch.tensor([[1, 2], [3, 4]], dtype=torch.float32, requires_grad=True) # Track operations on this tensor
tensor2 = torch.tensor([[5, 6], [7, 8]], dtype=torch.float32)
# Perform operations
tensor1_sum = tensor1.sum()
# Backpropagation
tensor1_sum.backward()
# Gradients are accumulated in .grad (here, a tensor of ones)
gradients = tensor1.grad
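
As a slightly richer sketch, the gradient of a sum of squares is twice the input, and autograd reproduces this element by element:

import torch
x = torch.tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
# y = sum(x_ij^2), so dy/dx_ij = 2 * x_ij
y = (x ** 2).sum()
y.backward()
print(x.grad)   # tensor([[2., 4.], [6., 8.]])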

Tensor manipulation operations

Tensor manipulation operations are fundamental techniques in PyTorch, allowing efficient handling, transformation, and processing of tensors. These operations encompass a wide range of functionalities, including indexing, slicing, reshaping, concatenation, splitting, and more.

Indexing and slicing

Indexing and slicing let us access individual elements or extract sub-tensors along different dimensions, while reshaping rearranges a tensor's elements into a new shape without changing its data.

import torch
tensor1 = torch.tensor([[1, 2], [3, 4]])
tensor2 = torch.tensor([[5, 6], [7, 8]])
# Indexing, slicing, and reshaping
indexing = tensor1[0, 1]                    # element at row 0, column 1 -> tensor(2)
slicing = tensor1[:, 1]                     # second column -> tensor([2, 4])
reshaping = torch.reshape(tensor1, (1, 4))  # -> tensor([[1, 2, 3, 4]])
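
Tensors can also be indexed with a boolean mask, which selects the elements where a condition holds; a brief sketch as a companion to plain slicing:

import torch
tensor1 = torch.tensor([[1, 2], [3, 4]])
# Boolean mask indexing: keep elements greater than 2
mask = tensor1 > 2                 # tensor([[False, False], [ True,  True]])
selected = tensor1[mask]           # tensor([3, 4]) -- returns a flattened 1-D tensor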

Concatenation and splitting

In PyTorch, concatenation combines tensors along an existing dimension, which is useful for joining smaller tensors into a larger one. Conversely, splitting divides a tensor into smaller chunks along a given dimension, based on a chunk size or a list of section sizes.

import torch
tensor1 = torch.tensor([[1, 2], [3, 4]])
tensor2 = torch.tensor([[5, 6], [7, 8]])
# Concatenation and splitting
concatenation = torch.cat((tensor1, tensor2), dim=1)               # [[1, 2, 5, 6], [3, 4, 7, 8]]
splitting = torch.split(tensor1, split_size_or_sections=1, dim=1)  # tuple of column tensors
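
A related operation is torch.stack, which joins tensors along a new dimension rather than an existing one; the sketch below contrasts it with torch.cat.

import torch
tensor1 = torch.tensor([[1, 2], [3, 4]])
tensor2 = torch.tensor([[5, 6], [7, 8]])
# torch.cat keeps the number of dimensions the same ...
cat_result = torch.cat((tensor1, tensor2), dim=0)      # shape (4, 2)
# ... while torch.stack adds a new leading dimension
stack_result = torch.stack((tensor1, tensor2), dim=0)  # shape (2, 2, 2)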

Conversions

Converting tensors to NumPy arrays and back makes PyTorch interoperable with NumPy, so data manipulation and computation can move seamlessly between the two frameworks.

import torch
tensor1 = torch.tensor([[1, 2], [3, 4]])
tensor2 = torch.tensor([[5, 6], [7, 8]])
# Conversion
numpy_array = tensor1.numpy()               # CPU tensor -> NumPy array (shares memory)
from_numpy = torch.from_numpy(numpy_array)  # NumPy array -> tensor (shares memory)
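
Because both conversions share the underlying memory for CPU tensors, modifying one side is visible from the other; the snippet below illustrates this.

import torch
import numpy as np
tensor1 = torch.tensor([[1, 2], [3, 4]])
numpy_array = tensor1.numpy()
# In-place changes to the tensor show up in the NumPy array (and vice versa)
tensor1[0, 0] = 99
print(numpy_array[0, 0])   # 99
array = np.ones(3)
shared_tensor = torch.from_numpy(array)
array[0] = 7.0
print(shared_tensor[0])    # tensor(7., dtype=torch.float64)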

Comparisons

Comparison operations in PyTorch perform element-wise comparisons and logical operations, returning boolean tensors that make it easy to evaluate conditions across a whole tensor at once.

import torch
tensor1 = torch.tensor([[1, 2], [3, 4]])
tensor2 = torch.tensor([[5, 6], [7, 8]])
# Comparison
elementwise_comparison = torch.eq(tensor1, tensor2)        # [[False, False], [False, False]]
logical_and = torch.logical_and(tensor1 > 2, tensor2 < 7)  # [[False, False], [False, False]]
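
The comparison operators (==, >, <) produce the same boolean tensors, and torch.where uses such a condition to pick values element-wise from two tensors; a brief sketch:

import torch
tensor1 = torch.tensor([[1, 2], [3, 4]])
tensor2 = torch.tensor([[5, 6], [7, 8]])
# Operator form of the comparison
greater_than = tensor1 > 2                             # [[False, False], [True, True]]
# torch.where picks from tensor1 where the condition holds, else from tensor2
chosen = torch.where(greater_than, tensor1, tensor2)   # [[5, 6], [3, 4]]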

Device management

Device management in PyTorch means placing tensors on the right hardware, such as a CPU or GPU, so that tensor operations, model training, and inference can take advantage of the available acceleration.

import torch
tensor1 = torch.tensor([[1, 2], [3, 4]])
tensor2 = torch.tensor([[5, 6], [7, 8]])
# Device management
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # Pick a GPU if one is available
tensor1_gpu = tensor1.to(device)  # Moves the tensor to the chosen device (copying if needed)
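
Tensors can also be created directly on a device, and operations generally require their operands to live on the same device; the sketch below reuses the device chosen above.

import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Create tensors directly on the target device
a = torch.tensor([[1, 2], [3, 4]], device=device)
b = torch.ones((2, 2), device=device)
# Operands must be on the same device for the operation to succeed
result = a + b
# Move a result back to the CPU, e.g. before converting to NumPy
result_cpu = result.cpu()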

In this Answer, we have walked through the most common operations we can perform on tensors, from arithmetic to tensor manipulation. These operations let programmers transform and manage tensors according to their needs, which is the foundation of writing efficient PyTorch code.

