Install CUDA on WSL (Ubuntu)

To use CUDA on WSL, first install WSL on the Windows host and then set up CUDA. Note the OS requirement: Windows 11, or Windows 10 Insider Preview Build 20150 or later. Regular Windows 10 also works as long as it is version 22H2 (the release that is not yet out of support). The NVIDIA GPU driver must also be installed on Windows in advance, and it must be a driver that supports WSL 2. Detailed instructions and information can be found in the official Microsoft documentation.
https://learn.microsoft.com/en-us/windows/ai/directml/gpu-cuda-in-wsl
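
To quickly confirm the prerequisites, you can run the following from Windows PowerShell (note that wsl --version is only available in recent, Store-distributed WSL releases):

winver          # opens a dialog showing the Windows edition and build number
wsl --version   # prints the WSL, kernel, and Windows versions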

There are several advantages to using CUDA on WSL 2 (Windows Subsystem for Linux 2):

1. Integrated environment

  • Being able to work on both Windows and Linux on one machine gives you an integrated development environment, making it easy to move between the two systems smoothly.

2. Seamless Migration

  • Existing Linux-based CUDA applications can be run on Windows without any special changes.

3. Debugging and Testing

  • By testing your CUDA application on WSL 2, you can confirm that it works under Linux while continuing to debug and develop on Windows.

4. Efficient Use of Resources

  • WSL 2 shares resources (CPU, memory, disk, etc.) with Windows, eliminating the need for a separate Linux machine.

5. Software Compatibility

  • Some scientific computing, data analysis, and machine learning libraries are only available in a Linux environment; WSL 2 allows you to use these libraries on a Windows machine.

6. Speed Up Development

  • Using CUDA on WSL 2 speeds up the cycle from code build to testing and debugging.

If your Windows 11 machine is equipped with an NVIDIA graphics card, the drivers for the NVIDIA GPU should be installed in Windows. In this state, the GPU is recognized even though WSL has just been installed. Check it with the following command, a tool that displays the status of your NVIDIA GPU.

nvidia-smi

Output like the following is displayed:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.98.01              Driver Version: 536.99       CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 4060 ...    On  | 00000000:01:00.0 Off |                  N/A |
| N/A   43C    P8               1W / 120W |    102MiB /  8188MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A        23      G   /Xwayland                                      N/A |
+---------------------------------------------------------------------------------------+

Running nvidia-smi (NVIDIA System Management Interface) will display detailed information about the NVIDIA GPUs on your system. Below is the main information typically displayed:

1. Driver Version

  • The version of the installed NVIDIA driver.

2. GPU Identification

  • GPU name, ID, bus ID, etc.

3. GPU Usage

  • GPU utilization (%), memory usage (MB), temperature (°C), etc.

4. Process Information

  • ID, name, and memory usage of processes using the GPU.

5. Power and Clock

  • GPU power usage (W) and clock speed (MHz).

6. Fan Speed

  • GPU’s fan speed (% or RPM).

7. Miscellaneous Settings and Status

  • Presence of ECC errors, performance mode, compute mode, etc.

This information is very useful for checking GPU health, identifying performance bottlenecks, and monitoring resource usage. In particular, nvidia-smi is a frequently used tool in the development and operation of applications that actively use GPU resources, such as machine learning, data analysis, gaming, and graphics rendering.
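
Besides the default table, nvidia-smi can also report selected fields in machine-readable form, which is convenient for monitoring scripts. A small sketch (the available field names can be listed with nvidia-smi --help-query-gpu):

# Print name, driver version, temperature, utilization, and used memory as CSV
nvidia-smi --query-gpu=name,driver_version,temperature.gpu,utilization.gpu,memory.used --format=csv
# Repeat every 5 seconds for simple monitoring
nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 5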

However, a question arises: if the GPU driver is installed in Windows but CUDA is not, will installing CUDA on the WSL side actually work?
From what we could find, it does: if the NVIDIA GPU driver is installed on Windows, WSL 2 uses that driver to access the GPU. So if you install the CUDA Toolkit within WSL, you can run CUDA-based workloads within the WSL environment.
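
You can observe this passthrough from inside WSL: the Windows driver's user-mode libraries are mapped into the distro. The path below is what typical setups show; treat it as an assumption:

# The Windows NVIDIA driver exposes its libraries to WSL here
ls /usr/lib/wsl/lib/
# Expect entries such as libcuda.so.1, libnvidia-ml.so.1, and nvidia-smi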

So install the CUDA Toolkit within WSL.
https://developer.nvidia.com/cuda-toolkit

wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
sudo mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/12.2.1/local_installers/cuda-repo-wsl-ubuntu-12-2-local_12.2.1-1_amd64.deb
sudo dpkg -i cuda-repo-wsl-ubuntu-12-2-local_12.2.1-1_amd64.deb
sudo cp /var/cuda-repo-wsl-ubuntu-12-2-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda
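
Before restarting, you can optionally confirm that the packages were installed. A quick check (paths assume the default install location):

# The toolkit lives under a versioned directory, with /usr/local/cuda symlinked to it
ls -l /usr/local/ | grep cuda
# The cuda meta-package pulls in cuda-toolkit-12-2 among others
dpkg -l | grep cuda-toolkit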

Shut down once.

sudo shutdown now
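
Note that shutdown inside WSL stops only that distro. To restart the entire WSL virtual machine instead, you can run the following from Windows PowerShell and then reopen your Ubuntu terminal:

wsl --shutdown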

After booting, check with the following commands.

nvcc is the NVIDIA CUDA Compiler command line tool used to compile CUDA programs.

The -V option is used to display nvcc version information.

Thus, the nvcc -V command will display the version information and release date of the installed CUDA compiler. This allows you to see which version of CUDA is installed on your system.

nvcc -V

An error occurs:

Command 'nvcc' not found, but can be installed with:
sudo apt install nvidia-cuda-toolkit

This error simply means that nvcc is not on the shell's PATH; the toolkit was installed under /usr/local/cuda, which is not searched by default. Check with the following command.

The command /usr/local/cuda/bin/nvcc -V displays version information using the nvcc tool at the specified path (/usr/local/cuda/bin/).

Specifically:

  • /usr/local/cuda/bin/: The bin directory under the default installation directory of the CUDA Toolkit. This directory contains CUDA-related executables and tools.
  • nvcc: The NVIDIA CUDA Compiler executable.
  • -V: Option to display nvcc version information.

Thus, the command /usr/local/cuda/bin/nvcc -V will display the version information of the CUDA compiler installed in the /usr/local/cuda/bin/ path. This is often used to check the version of CUDA installed in a particular path.

/usr/local/cuda/bin/nvcc -V
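
If the toolkit is installed correctly, this prints something like the following (build details vary by version):

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Cuda compilation tools, release 12.2, V12.2.140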

If typing the full path every time bothers you, enter the following commands. They append lines to the ~/.bashrc file so that the environment variables are set each time the shell starts; using echo with >> appends them directly, without opening an editor.

echo 'export PATH="/usr/local/cuda/bin:$PATH"' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"' >> ~/.bashrc
  1. echo 'export PATH="/usr/local/cuda/bin:$PATH"' >> ~/.bashrc:
    • This command adds the /usr/local/cuda/bin directory to the top of the PATH environment variable.
    • PATH is an environment variable that holds a list of directories where executables are searched. With this change, if CUDA executables are located in this directory, users will be able to execute them without specifying the full path (allowing nvcc -V instead of /usr/local/cuda/bin/nvcc -V).
  2. echo 'export LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"' >> ~/.bashrc:
    • This command adds the /usr/local/cuda/lib64 directory to the top of the LD_LIBRARY_PATH environment variable.
    • LD_LIBRARY_PATH is an environment variable that holds a list of directories to search for dynamic libraries on Linux systems. This change will ensure that if CUDA-related dynamic libraries are located in this directory, they will be loaded correctly at runtime.

Running these commands sets up the environment so that the CUDA tools and libraries can be found. If you want to check immediately without logging out, apply the changes by reloading the ~/.bashrc file.

. ~/.bashrc
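
After reloading, the shell should resolve nvcc from the new PATH:

which nvcc
# -> /usr/local/cuda/bin/nvcc
nvcc -V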

What is a .bashrc file?

The .bashrc file is a script file that is loaded when the Bash shell starts in interactive mode. This file is usually located in the user's home directory (~). By editing the .bashrc file, the shell environment can be customized and certain commands can be executed automatically.

Specifically, the following settings and customizations are commonly made in the .bashrc file:

  • Environment variable settings: for example, PATH or LD_LIBRARY_PATH.
  • Configuration of aliases: defining short forms for frequently used commands (see the example after this list).
  • Customizing the shell prompt.
  • Any other command you wish to automatically execute at shell startup.
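
For example, an alias definition in ~/.bashrc looks like this (ll is just a common illustrative name):

alias ll='ls -alF'   # short form for a frequently used command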

The previous command ( echo 'export PATH="/usr/local/cuda/bin:$PATH"' >> ~/.bashrc, etc.) adds CUDA-related directories to the environment variables. This will cause these environment variables to be set automatically each time a new terminal session is started.

For this setting to take effect, it is usually necessary to open a new terminal window or run the source ~/.bashrc command to reload the .bashrc file.
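
Once nvcc is on the PATH, a minimal smoke test is to compile and run a tiny CUDA program. This is only a sketch; hello.cu is an arbitrary file name:

cat << 'EOF' > hello.cu
#include <cstdio>

// Trivial kernel: each GPU thread prints its index.
__global__ void hello() {
    printf("Hello from GPU thread %d\n", threadIdx.x);
}

int main() {
    hello<<<1, 4>>>();        // launch 1 block of 4 threads
    cudaDeviceSynchronize();  // wait for the kernel and flush its printf output
    return 0;
}
EOF
nvcc hello.cu -o hello
./hello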

There is also a case in which you do not need to install CUDA on WSL itself.

If you are using Docker on WSL and using a GPU within a Docker container, you can install the NVIDIA Container Toolkit to allow the Docker container to utilize the host’s GPU. Once you have completed this configuration, you can run CUDA-based applications and tools within the container.
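
For example, the following verifies that a container can see the GPU, assuming the NVIDIA Container Toolkit is already configured (the image tag is an example; use one that exists on Docker Hub):

# Run nvidia-smi inside a CUDA base image using the host's GPU
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi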

However, the following points need to be considered:

  1. CUDA within containers: In order for a Docker container to take advantage of GPUs, it must have CUDA built into its container image. Many official deep learning and GPU computation Docker images (e.g. nvidia/cuda) already have CUDA and other necessary tools installed.
  2. CUDA on WSL: There is basically no need to install CUDA on WSL (Ubuntu) itself. This is because the actual GPU computation is done inside the Docker container. However, if you want to run CUDA-based tools or applications directly on the WSL (without using Docker), you will need to install CUDA on the WSL.
  3. Compatibility: Make sure that the CUDA version of the Docker image you are using is compatible with the NVIDIA driver version of the host system (one way to check this is sketched below).

In short, if you run GPU-based workloads only within a Docker container, you do not need to install CUDA on the WSL. However, if you want to run CUDA workloads on the WSL itself, you will need to install CUDA.
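
For the compatibility point above, one simple check is to compare the CUDA version the host driver supports (shown in the header of nvidia-smi) with the toolkit version inside the image. A sketch, noting that a devel-tagged image is assumed here because base images omit nvcc:

# Host side: "CUDA Version" in the header is the maximum the driver supports
nvidia-smi | head -n 4
# Container side: toolkit version baked into the image
docker run --rm --gpus all nvidia/cuda:12.2.0-devel-ubuntu22.04 nvcc --version

If the toolkit version in the image is newer than what the host driver supports, CUDA applications in the container may fail to initialize.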
