This PyTorch autograd tutorial will show you how to create a PyTorch tensor that is tracked by autograd.

First, we import PyTorch.

```
import torch
```

Then we check the PyTorch version we are using.

```
print(torch.__version__)
```

We are using PyTorch 1.1.0.

For this PyTorch autograd example, we will create two PyTorch tensors.

The first tensor is created the normal way.

```
non_grad_tensor = torch.rand(2, 3, 4)
```

We assign the result to the Python variable non_grad_tensor, using torch.rand to create a PyTorch tensor that is 2x3x4.

The second tensor is created so that it’s a PyTorch autograd tensor.

```
yes_grad_tensor = torch.rand(2, 3, 4, requires_grad=True)
```

We assign it to the Python variable yes_grad_tensor, again using torch.rand.

It will be 2x3x4 as well, but this time we pass the additional argument requires_grad=True.

This will give us a 2x3x4 PyTorch autograd tensor that we can use for automatic differentiation in our deep learning networks.

For now, we will not calculate any type of gradient or do any backward pass.

We will just check to see if we were able to create a PyTorch tensor with autograd.
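Beyond inspecting the printed output, we can also verify this programmatically through each tensor's requires_grad attribute. Here is a minimal, self-contained sketch using the same variable names as above:

```
import torch

non_grad_tensor = torch.rand(2, 3, 4)
yes_grad_tensor = torch.rand(2, 3, 4, requires_grad=True)

# The requires_grad attribute reports whether autograd tracks the tensor
print(non_grad_tensor.requires_grad)  # False
print(yes_grad_tensor.requires_grad)  # True
```

This attribute check is handy in larger scripts where you can't eyeball the printed tensor.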

First, we print the non-autograd tensor to see what we got.

```
print(non_grad_tensor)
```

We see that it is a PyTorch tensor, we see that it is 2x3x4, and all the numbers have decimal places, which leads us to conclude that it is a floating point tensor.
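Rather than inferring the type from the decimal output, we can confirm it directly via the tensor's dtype and shape attributes. A small self-contained sketch (torch.rand returns a float32 tensor by default):

```
import torch

non_grad_tensor = torch.rand(2, 3, 4)

# Inspect the element type and shape directly
print(non_grad_tensor.dtype)   # torch.float32
print(non_grad_tensor.shape)   # torch.Size([2, 3, 4])
```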

Then we print the PyTorch autograd tensor to see what we got.

```
print(yes_grad_tensor)
```

So we print yes_grad_tensor, and we see another 2x3x4 PyTorch tensor filled with random floating point numbers.

Only this time, the printed output includes the option we set above: requires_grad=True.

Perfect! We were able to use PyTorch’s tensor creation option requires_grad to create a PyTorch autograd tensor.
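Although this tutorial stops short of computing gradients, a minimal sketch of what requires_grad enables might look like the following. The scalar "loss" here is simply the sum of the elements, chosen only for illustration:

```
import torch

yes_grad_tensor = torch.rand(2, 3, 4, requires_grad=True)

# Reduce to a scalar so we can call backward() on it
loss = yes_grad_tensor.sum()
loss.backward()

# The gradient of a sum with respect to each element is 1,
# so .grad is a 2x3x4 tensor of ones
print(yes_grad_tensor.grad.shape)  # torch.Size([2, 3, 4])
```

Because the tensor was created with requires_grad=True, autograd records the operations applied to it and populates its .grad attribute during the backward pass.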