- Uncomment the `"runArgs"` section in `.devcontainer/devcontainer.json`.
- Uncomment the `nvidia` channel and `pytorch-cuda` package lines in `env.yml`.
- Install the Nvidia Container Toolkit and configure Docker by following the instructions in this guide. Focus on the sections "Installing with Apt" and "Configuring Docker", and do not follow the "Rootless mode" section.
  ⚠️ Windows users: First, verify that you can run `nvidia-smi` in the Ubuntu terminal. Next, install and configure the Nvidia Container Toolkit in the Ubuntu terminal.

  ⚠️ Important: When following the "Configuring Docker" step, use `sudo service docker restart` instead of `sudo systemctl restart docker`.

- After installing the devcontainer, verify GPU access by running the following command in the Visual Studio Code (devcontainer) terminal:
  ```shell
  nvidia-smi
  ```

  You should see GPU statistics displayed.
- Verify that the PyTorch CUDA runtime is installed and configured correctly by running the following code in a Python console:

  ```python
  import torch
  print(torch.cuda.is_available())  # This should print 'True' if CUDA support is enabled
  ```

  If the output is `True`, PyTorch is correctly set up to use CUDA. If it prints `False`, CUDA support may not be properly configured.
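For reference, the uncommented `runArgs` section of `.devcontainer/devcontainer.json` typically looks something like the sketch below. The surrounding fields are illustrative placeholders (keep whatever your repository's file already contains); `--gpus=all` is the flag that exposes the host GPUs to the container:

```jsonc
{
  // Hypothetical fields for illustration; keep your repository's values.
  "name": "my-project",
  "build": { "dockerfile": "Dockerfile" },
  // Exposes all host GPUs to the container; requires the Nvidia Container Toolkit.
  "runArgs": ["--gpus=all"]
}
```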
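Similarly, the relevant `env.yml` lines, once uncommented, might look like this sketch. Package names, pins, and the environment name are illustrative; keep the exact lines from your repository's file:

```yaml
name: my-env              # hypothetical environment name
channels:
  - pytorch
  - nvidia                # uncommented: needed to resolve CUDA builds
  - conda-forge
dependencies:
  - pytorch
  - pytorch-cuda=12.1     # uncommented: illustrative version, match your file's pin
```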
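As a rough sketch of the "Installing with Apt" and "Configuring Docker" steps, the core commands resemble the following. The apt-repository setup commands are deliberately omitted here, and NVIDIA revises them over time, so follow the linked guide for the current, complete instructions:

```shell
# After adding NVIDIA's apt repository (see the guide for those steps):
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Configure Docker to use the NVIDIA runtime:
sudo nvidia-ctk runtime configure --runtime=docker

# Restart Docker. Windows (WSL) users: run `sudo service docker restart` instead.
sudo systemctl restart docker
```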