How to Check if PyTorch is Using the GPU?

When working with deep learning models in PyTorch, it’s crucial to ensure that your model is running on the GPU for faster training and inference. Using a GPU instead of a CPU can significantly speed up computations, especially with large datasets and complex models. In this post, we’ll walk through how to check if PyTorch is utilizing the GPU and how to gather relevant information about the available CUDA devices, including GPU memory usage.

How Do I Check PyTorch GPU Availability?

PyTorch makes it easy to check if CUDA (NVIDIA’s parallel computing platform) is available and if your model can leverage the GPU. Here’s a simple way to do it:

1. Check if CUDA is Available

The first thing to check is whether CUDA is available on your system. You can do this with the torch.cuda.is_available() function:

Python
import torch

torch.cuda.is_available()

This will return True if CUDA is available on your machine, meaning you can use the GPU for computation. If it returns False, PyTorch will use the CPU instead.
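
For example, you can branch on this check to print a quick status message, including the CUDA version PyTorch was built against. This is a minimal sketch:

Python
import torch

if torch.cuda.is_available():
    # torch.version.cuda reports the CUDA version PyTorch was compiled with
    print(f'CUDA is available (CUDA version: {torch.version.cuda})')
else:
    print('CUDA is not available; PyTorch will fall back to the CPU.')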

2. Get the Number of Available GPU Devices

You can check the number of GPUs available on your system using:

Python
torch.cuda.device_count()
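
On a multi-GPU machine, you can combine this with the device name lookup from step 4 to list every visible device. Here’s a small sketch:

Python
import torch

# Print the index and name of every CUDA device PyTorch can see
for i in range(torch.cuda.device_count()):
    print(f'GPU {i}: {torch.cuda.get_device_name(i)}')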

3. Identify the Current GPU Device

To get the index of the currently active device, you can use:

Python
torch.cuda.current_device()
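
This returns an integer index (0 for the first GPU), which you can pass to other torch.cuda helpers. A minimal sketch:

Python
import torch

# The active device index feeds directly into other torch.cuda calls
idx = torch.cuda.current_device()
print('Active CUDA device index:', idx)
print('Active CUDA device name: ', torch.cuda.get_device_name(idx))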

4. Get the Name of the GPU

To retrieve the name of the GPU PyTorch is using, you can use:

Python
torch.cuda.get_device_name(0)
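
If you want more than the name, torch.cuda.get_device_properties() exposes details such as total memory and compute capability. A short sketch:

Python
import torch

# Device properties include the name, total memory, and compute capability
props = torch.cuda.get_device_properties(0)
print('Name:              ', props.name)
print('Total memory (GB): ', round(props.total_memory / 1024**3, 1))
print('Compute capability:', f'{props.major}.{props.minor}')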

5. Set the Device for PyTorch

To ensure PyTorch uses the GPU, you can explicitly set the device using the following line of code:

Python
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

This ensures the GPU is used when it’s available and falls back to the CPU otherwise.
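
Setting device only selects the target; you still need to move your model and tensors onto it with .to(device). Here’s a minimal sketch using a hypothetical one-layer model:

Python
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hypothetical model, used only to illustrate moving parameters to the device
model = nn.Linear(10, 2).to(device)
# Create the input tensor directly on the same device
x = torch.randn(4, 10, device=device)

output = model(x)
print(output.device)  # prints 'cuda:0' when a GPU is available, otherwise 'cpu'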

Gathering Additional Information When Using CUDA

If CUDA is being used, you might want to track memory usage. You can retrieve additional information like the GPU’s name and its memory usage with this code:

Python
if device.type == 'cuda':
    print(torch.cuda.get_device_name(0))
    print('Memory Usage:')
    # memory_allocated: GPU memory currently occupied by tensors
    print('Allocated:', round(torch.cuda.memory_allocated(0) / 1024**3, 1), 'GB')
    # memory_reserved: memory held by PyTorch's caching allocator (formerly memory_cached)
    print('Cached:   ', round(torch.cuda.memory_reserved(0) / 1024**3, 1), 'GB')
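
For a more detailed breakdown, torch.cuda.memory_summary() prints a full report from PyTorch’s caching allocator; this is optional and purely informational:

Python
import torch

if torch.cuda.is_available():
    # Human-readable report of allocated, reserved, and freed memory on device 0
    print(torch.cuda.memory_summary(device=0))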

Full Code Snippet for Checking CUDA Details

Here’s a complete snippet you can copy, paste, and run in your project to check if PyTorch is using the GPU and print detailed information about your CUDA setup:

Python
import torch

# Check if CUDA is available and set the device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)

# CUDA details if available
if device.type == 'cuda':
    print('CUDA Device Name:', torch.cuda.get_device_name(0))
    print('Number of Available GPUs:', torch.cuda.device_count())
    print('Current CUDA Device:', torch.cuda.current_device())
    print('Memory Usage:')
    print('  Allocated:', round(torch.cuda.memory_allocated(0) / 1024**3, 1), 'GB')
    print('  Cached:   ', round(torch.cuda.memory_reserved(0) / 1024**3, 1), 'GB')

Sample Output

When you run this code on a machine with a GPU, your output might look like this:

Plaintext
Using device: cuda
CUDA Device Name: GeForce GTX 1650
Number of Available GPUs: 1
Current CUDA Device: 0
Memory Usage:
  Allocated: 0.6 GB
  Cached:    0.9 GB

This output tells you that:

  • PyTorch is using the GPU (cuda).
  • The GPU device in use is a “GeForce GTX 1650”.
  • There is one GPU available.
  • Device 0 is the currently selected CUDA device.
  • 0.6 GB of memory is currently allocated, and 0.9 GB is cached on the GPU.

Conclusion

Knowing whether PyTorch is using the GPU and monitoring its memory usage is essential for optimizing deep learning workloads. With these simple commands, you can quickly verify your system’s GPU status, switch between CPU and GPU modes, and gather detailed memory usage statistics. This can greatly improve the efficiency of training large models and help you troubleshoot performance bottlenecks.
