My girlfriend needed PyTorch and TensorFlow for a machine learning course she’s taking. I have an RTX 3070, and she has a laptop with an integrated GPU. Models took forever to run on it, so I wanted to help. It took me multiple days to figure out how to get everything working, and I’m here to save you the trouble. With this guide, the install took ~10 minutes on a friend’s computer, almost all of it spent waiting for downloads.

It is very important to follow all the steps in the guide. Install Docker as instructed, and actually update your drivers.

TL;DR

For more details and debugging tips, scroll down.
Make sure your GPU drivers are up to date; you can update them here: Nvidia, or with the GeForce Experience app.
Open an administrator CMD and run:

wsl --install

Reboot your computer if you did not have WSL previously installed.

wsl --set-default-version 2
wsl --update
wsl --install -d Ubuntu-24.04

Open WSL and run:

sudo apt update
sudo apt install -y docker.io

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt install -y cuda-toolkit

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt update
sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

Test with:

sudo docker run -it --gpus all nvidia/cuda:12.4.1-runtime-ubuntu20.04 nvidia-smi

Done.

Install requirements

I will assume you are somewhat proficient with computers, and I will dive deeper into the harder parts. I will attempt to give maximum copy-pasteability. Make sure your GPU drivers are up to date; you can update them here: Nvidia, or with the GeForce Experience app. This is actually important, as the drivers bundle the necessary CUDA support: at the time of writing they bundle 12.4.89, and we will use a Docker image requiring 12.4.1.
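
If you want to check what you currently have, nvidia-smi ships with the Windows driver; run it in CMD or PowerShell:

nvidia-smi

The driver version is listed in the header, and the CUDA version shown next to it is the maximum CUDA runtime the driver supports; it should be at least 12.4 for the image we use below.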

Install WSL2 and Ubuntu

Open CMD and run:

wsl --install

Reboot your computer if you did not have WSL previously installed.

wsl --set-default-version 2
wsl --update
wsl --install -d Ubuntu-24.04

This will guide you through installing WSL if you don’t have it set up, and give helpful links if you happen to not have virtualization enabled. Log into your Ubuntu install; from now on, any commands specified will be run inside WSL.
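
To verify the distro registered correctly and is actually running under WSL 2, you can check from CMD or PowerShell:

wsl -l -v

Ubuntu-24.04 should show version 2; if it shows 1, convert it with wsl --set-version Ubuntu-24.04 2.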

If you already had WSL with Ubuntu-24.04, you may have an issue with the lack of systemd, which is needed for Docker to work. Run this inside WSL:

echo -e "[boot]\nsystemd=true" | sudo tee /etc/wsl.conf

Then, exit WSL, and run the following in CMD or Powershell:

wsl --shutdown

Proceed as usual.
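
If you want to confirm systemd actually came up after the restart, check what is running as PID 1 inside WSL:

# Should print "systemd"; "init" means the wsl.conf change did not take effect
ps -p 1 -o comm=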

Install CUDA on WSL

Run the following inside WSL

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt install -y cuda-toolkit

This installs whatever is the latest version of the CUDA libraries and helpers inside WSL. The WSL package does not bundle the drivers; WSL uses the drivers from Windows.
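
To sanity-check the toolkit install, you can ask nvcc for its version. Note that the toolkit typically installs under /usr/local/cuda without adding itself to your PATH, so call it by full path:

/usr/local/cuda/bin/nvcc --version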

Install Docker

To install Docker, run the following inside WSL:

sudo apt update
sudo apt install -y docker.io

There are other ways to install newer Docker versions (docs.docker.com/install), but we don’t need a newer version.
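
A quick way to confirm Docker itself works before involving the GPU:

sudo docker run --rm hello-world

If this fails with a "cannot connect to the Docker daemon" error, make sure the systemd step above went through and the service is running (sudo systemctl status docker).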

Install Nvidia Container Toolkit

The Nvidia Container Toolkit configures Docker so containers can use your GPU.

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt update
sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
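
If you are curious what the configure step did, it writes the Nvidia runtime into Docker’s daemon configuration; you can inspect the result with:

cat /etc/docker/daemon.json

You should see an nvidia entry under runtimes.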

Done

You are done. Whenever you want a Docker container to have access to your GPU, use the --gpus all flag. You can test it with:

sudo docker run -it --gpus all nvidia/cuda:12.4.1-runtime-ubuntu20.04 nvidia-smi

Bonus: Actually use it

I made an image with TensorFlow, PyTorch, and some other commonly used packages; you can try it with:

sudo docker run -it --gpus all ghcr.io/ttmx/tf-torch-docker:main bash

Run these commands inside of it to test your setup:

# Test that TensorFlow can access the GPU
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
# Test that PyTorch can access the GPU
python -c "import torch; print(torch.cuda.is_available())"

If this is working well, I recommend using VS Code dev containers for development. For this, install the Dev Containers extension from here, open whatever folder you want to work in, and create a .devcontainer/devcontainer.json file with the following content:

{
  "image": "ghcr.io/ttmx/tf-torch-docker:main",
  "runArgs": [
    "--gpus",
    "all",
    "--ipc=host"
  ],
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-python.python",
        "ms-toolsai.jupyter"
      ]
    }
  }
}

Feel free to change the image; I also recommend tensorflow/tensorflow:latest-gpu or pytorch/pytorch:2.2.2-cuda12.1-cudnn8-devel if you only want one of the libraries. Also feel free to change the extensions, but these will get you up and running with Jupyter notebooks.
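
For example, to quickly try the TensorFlow-only image outside of a dev container (this assumes python is on PATH inside the image, which it is for the official TensorFlow images):

sudo docker run -it --gpus all tensorflow/tensorflow:latest-gpu \
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"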

--ipc=host prevents errors caused by PyTorch running out of shared memory (its data loader workers pass tensors through it). You can replace this arg with --shm-size 2G (or a different number) if you know your shared memory needs better.
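
If you want to test a fixed shared memory size before baking it into the devcontainer config, the plain docker run equivalent looks like this:

sudo docker run -it --gpus all --shm-size 2g ghcr.io/ttmx/tf-torch-docker:main bash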

Press F1, type Dev Containers: Reopen in Container, and you are done. You can now develop with GPU acceleration on WSL2. To use Jupyter, just open your .ipynb file, select the Python 3.11 kernel, and you are good to go!

If you have questions, feel free to DM me on Twitter; I’d be happy to improve this guide.