Make A Simple PyTorch Autograd Computational Graph

Build an autograd backward graph by performing operations on PyTorch Autograd Tensors


Video Transcript


This PyTorch autograd tutorial will show you how to create a simple PyTorch autograd computational graph.


First, we import PyTorch

import torch


Then we check the PyTorch version we are using

print(torch.__version__)

We are using PyTorch 1.1.0.


To create our PyTorch computational graph, we'll need three Tensors.


We create the first tensor using torch.randn

grad_tensor_a = torch.randn(3, 3, requires_grad=True)

This will give us a 3 by 3 PyTorch tensor populated with random numbers drawn from a standard normal distribution.

Note that requires_grad has been set to True so that Autograd will track the operations performed on this tensor.
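
As a quick contrast (the variable names below are hypothetical, not from the video), a tensor created without requires_grad=True is not tracked by Autograd, so operations on it record no grad_fn:

no_grad_tensor = torch.randn(3, 3)  # requires_grad defaults to False
untracked_result = torch.mm(no_grad_tensor, no_grad_tensor)
print(untracked_result.requires_grad)  # prints: False
print(untracked_result.grad_fn)  # prints: None - nothing was recorded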


Let's print this first tensor so that we get an idea of what's inside of it:

print(grad_tensor_a)


Next, let's define the second tensor we'll use for the autograd computational graph example:

grad_tensor_b = torch.randn(3, 3, requires_grad=True)


It's defined identically to grad_tensor_a, so we don't print it; the only difference is the random numbers that populate it.


We also define a third tensor we'll use for the graph:

grad_tensor_c = torch.randn(3, 3, requires_grad=True)

Since it's constructed identically to grad_tensor_a and grad_tensor_b, we don't print it either.


Now, let's get to the graph building part.


When we perform computations on Tensors that have requires_grad set to True, PyTorch automatically constructs the computational graph for us.


So we can use PyTorch matrix multiplication to multiply two of our tensors together:

grad_tensor_multiplication = torch.mm(grad_tensor_a, grad_tensor_b)


When we print the variable we assigned the matrix multiplication to, we can see the result:

print(grad_tensor_multiplication)

Notice that the resulting Tensor has a grad_fn attribute.

Also notice that it says it's an MmBackward function.

We'll come back to what that means in a moment.


Next, let's continue building the computational graph by adding the matrix multiplication result to the third tensor created earlier:

grad_tensor_sum = grad_tensor_multiplication + grad_tensor_c


When we print the variable we assigned the tensor sum to, we can see the result:

print(grad_tensor_sum)

Notice that the resulting Tensor also has a grad_fn attribute.

Also notice that it says that it's an AddBackward function.


This tells you that, for both the sum result and the earlier multiplication result, PyTorch is not only building the computational graph but also recording which operation produced each node, whether it's MmBackward for a matrix multiplication or AddBackward for an addition.
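
As an aside (this check is not in the video), the tensors we created directly are leaf nodes of the graph, so they have no grad_fn of their own:

print(grad_tensor_a.grad_fn)  # prints: None - a user-created leaf tensor
print(grad_tensor_a.is_leaf)  # prints: True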


One thing you can do is access this function directly through the tensor's grad_fn attribute:

grad_tensor_sum.grad_fn

This returns the backward function object along with its location in memory.


More interesting, however, is that you can walk backwards through the graph manually to see which operations came before, using the next_functions attribute:

grad_tensor_sum.grad_fn.next_functions

We can see the MmBackward object from the multiplication step and an AccumulateGrad object, which accumulates gradients into the leaf tensor grad_tensor_c.
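
To make the traversal concrete, here is a minimal sketch (walk_graph is a hypothetical helper, not part of the video) that recursively follows next_functions to print every node in the backward graph:

def walk_graph(fn, depth=0):
    # Recursively print each backward node, indented by its depth in the graph
    if fn is None:
        return
    print('  ' * depth + type(fn).__name__)
    for next_fn, _ in fn.next_functions:
        walk_graph(next_fn, depth + 1)

walk_graph(grad_tensor_sum.grad_fn)

Run on grad_tensor_sum, this should print an AddBackward root, with the MmBackward node and its two AccumulateGrad leaves beneath it, plus the AccumulateGrad leaf for grad_tensor_c.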


Perfect - this PyTorch autograd tutorial showed you how to create a simple PyTorch autograd computational graph and then explored how PyTorch keeps track of the operations in it.
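
For reference, the whole example from this tutorial can be run end to end as one short script:

import torch

# Three 3x3 tensors tracked by Autograd
grad_tensor_a = torch.randn(3, 3, requires_grad=True)
grad_tensor_b = torch.randn(3, 3, requires_grad=True)
grad_tensor_c = torch.randn(3, 3, requires_grad=True)

# A matrix multiplication followed by an addition builds the backward graph
grad_tensor_multiplication = torch.mm(grad_tensor_a, grad_tensor_b)
grad_tensor_sum = grad_tensor_multiplication + grad_tensor_c

# Inspect how PyTorch recorded the operations
print(grad_tensor_sum.grad_fn)
print(grad_tensor_sum.grad_fn.next_functions)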
