
GPG error "public key is not available" in Ubuntu 20.04 CUDA 11.4.0 image while building #257

Open
wilkesreid opened this issue Apr 28, 2022 · 28 comments


@wilkesreid

wilkesreid commented Apr 28, 2022

1. Issue or feature description

The following Dockerfile does not build today (April 28, 2022), even though it built successfully yesterday:

FROM    nvidia/cuda:11.4.0-runtime-ubuntu20.04
RUN     apt-get update

The error is the following:

GPG error: https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64  InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY A4B469963BF863CC
The repository 'https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64  InRelease' is not signed.

2. Steps to reproduce the issue

Create the above Dockerfile and attempt to build it with docker build.
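
For anyone reproducing this, a minimal sketch of the build step, assuming the two lines above are saved as Dockerfile in an otherwise empty directory (the tag cuda-gpg-test is arbitrary):

# Build the minimal image; the failure appears in the apt-get update layer.
docker build -t cuda-gpg-test .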

3. Information to attach (optional if deemed irrelevant)

I am running Docker version 20.10.14, build a224086 on WSL 2 Ubuntu 20.04 on Windows 10 Pro, Version 21H2, OS Build 19044.1645

nvidia-smi on my host machine:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.65       Driver Version: 471.96       CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:0A:00.0  On |                  N/A |
|  0%   55C    P0    61W / 250W |   1515MiB /  8192MiB |    ERR!      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
@wilkesreid
Author

Possible duplicate of #258

@klueska
Contributor

klueska commented Apr 28, 2022

https://forums.developer.nvidia.com/t/notice-cuda-linux-repository-key-rotation/212772

@wilkesreid
Author

wilkesreid commented Apr 28, 2022

The instructions in that notice do not work in the docker image.

FROM  nvidia/cuda:11.4.0-runtime-ubuntu20.04
...
RUN   apt-key del 7fa2af80
ADD   https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb .
RUN   dpkg -i cuda-keyring_1.0-1_all.deb
...
RUN   apt-get update

Results in

Conflicting values set for option Signed-By regarding source https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /: /usr/share/keyrings/cuda-archive-keyring.gpg !=
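
For context, the conflict appears to come from the repo entry that already ships inside the base image (/etc/apt/sources.list.d/cuda.list, with no Signed-By option) clashing with the entry installed by cuda-keyring, which pins Signed-By to /usr/share/keyrings/cuda-archive-keyring.gpg. A minimal, untested sketch that removes the stale lists before installing the keyring (paths and the base tag are taken from this thread, not verified against every image):

FROM nvidia/cuda:11.4.0-runtime-ubuntu20.04

# Drop the old signing key and the stale repo lists baked into the base image,
# then install the new keyring package, which re-adds the repo with Signed-By set.
RUN apt-key del 7fa2af80 && \
    rm -f /etc/apt/sources.list.d/cuda.list /etc/apt/sources.list.d/nvidia-ml.list
ADD https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb /tmp/
RUN dpkg -i /tmp/cuda-keyring_1.0-1_all.deb && apt-get update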

@memray

memray commented Apr 28, 2022

> The instructions in that notice do not work in the docker image.
>
> FROM  nvidia/cuda:11.4.0-runtime-ubuntu20.04
> ...
> RUN   apt-key del 7fa2af80
> ADD   https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb .
> RUN   dpkg -i cuda-keyring_1.0-1_all.deb
> ...
> RUN   apt-get update
>
> Results in
>
> Conflicting values set for option Signed-By regarding source https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /: /usr/share/keyrings/cuda-archive-keyring.gpg !=

Tried following the methods in the announcement and they didn't work either. Well done, NVIDIA.

@memray

memray commented Apr 28, 2022

A workaround seems to do the trick for me. Add those lines before apt-get update

RUN rm /etc/apt/sources.list.d/cuda.list
RUN rm /etc/apt/sources.list.d/nvidia-ml.list

Also check out discussions here
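
In context, an untested sketch of that workaround applied to the Dockerfile from the original report. Note that it drops the NVIDIA repos rather than fixing their key, so packages from those repos can no longer be installed in later layers:

FROM nvidia/cuda:11.4.0-runtime-ubuntu20.04
# Remove the repo lists with the rotated key so apt-get update stops failing.
RUN rm /etc/apt/sources.list.d/cuda.list /etc/apt/sources.list.d/nvidia-ml.list
RUN apt-get update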

@jtran1999

I noticed different keys available today for the repo.

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /"
sudo apt-get update
sudo apt-get -y install cuda

@zestrells

This is what I used in the Dockerfile to fix this issue.

    && apt-key del 7fa2af80 \
    && curl -L -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-keyring_1.0-1_all.deb \
    && dpkg -i cuda-keyring_1.0-1_all.deb \
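
For completeness, an untested sketch of that fragment as a standalone RUN instruction. It assumes curl is already available in the image, and the repo path should match the base image's distribution (ubuntu2004 here, rather than the ubuntu1804 path in the snippet above):

RUN apt-key del 7fa2af80 \
    && curl -fsSL -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb \
    && dpkg -i cuda-keyring_1.0-1_all.deb \
    && rm cuda-keyring_1.0-1_all.deb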

@mirekphd

mirekphd commented Apr 30, 2022

None of the hacks above is sufficiently reliable yet, as NVIDIA is still working on the changes. Some of the latest CUDA and Ubuntu versions already work (images such as CUDA 11.6 for Ubuntu 20.04 can be rebuilt from their code on GitLab), but others (older CUDA/Ubuntu versions such as CUDA 11.2) may still fail.

So, given that the CUDA 11.1 to 11.6 toolkits are compatible with the same driver versions (>=450.80.02), it should be possible to adapt NVIDIA's Dockerfile for the latest CUDA version (11.6) to your OS of choice:
https://gitlab.com/nvidia/container-images/cuda/-/tree/master/dist/11.6.2

More info: https://gitlab.com/nvidia/container-images/cuda/-/issues/158

@372046933

nvidia/cuda:11.2.1-base-ubuntu20.04 was updated this afternoon. I pulled the updated image and the public key problem disappeared.

@kinue00

kinue00 commented May 7, 2022

> A workaround seems to do the trick for me. Add those lines before apt-get update
>
> RUN rm /etc/apt/sources.list.d/cuda.list
> RUN rm /etc/apt/sources.list.d/nvidia-ml.list
>
> Also check out discussions here

Seems to be working for me (Kaldi's official Dockerfile, ubuntu1804).

@VictorZuanazzi

We are having the same problem with the images nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04 and 11.3. Is there any overview of which images are fixed?

@zjuPeco

zjuPeco commented May 13, 2022

RUN apt-key del 7fa2af80
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu2004/x86_64/7fa2af80.pub

This works for me

@Co0perator

This may add nothing productive to the conversation, but I thought it worth mentioning:
Most people seem to be getting this error in Ubuntu 20.04 Docker containers, whereas I'm getting it on Ubuntu 20.04 Desktop.

@ghost

ghost commented May 24, 2022

> RUN apt-key del 7fa2af80
> RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub
> RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu2004/x86_64/7fa2af80.pub
>
> This works for me

This worked for me as well.

@LordNex

LordNex commented Jul 19, 2022

Having the same issue on a 4 GB Jetson Nano.

bfineran referenced this issue in neuralmagic/sparseml Jul 22, 2022
* remove sparseml Dockerfile cuda temporary error fix

the original error has been fixed (https://github.com/NVIDIA/nvidia-docker/issues/1632) and pulling the latest nvidia cuda ubuntu image and running the fix will produce an error

**test_plan**
build passes locally (second rm command gets a no such file error previously)

* make fix conditional

* command fix
desaixie referenced this issue in desaixie/DeepLIIF Jul 29, 2022
First error is from Ubuntu cuda GPG error "public key is not available": https://github.com/NVIDIA/nvidia-docker/issues/1632#issuecomment-1112667716.
Second error is from setup.py requiring README.md but Dockerfile does not copy README.md into the docker image.
@zanedurante

I found this issue while searching for my problem on Google (working with a GCP instance with an NVIDIA GPU, no Docker). Eventually, following the steps here worked for me: https://developer.nvidia.com/blog/updating-the-cuda-linux-gpg-repository-key/

@aycaecemgul

> I found this issue while searching for my problem on Google (working with a GCP instance with an NVIDIA GPU, no Docker). Eventually, following the steps here worked for me: https://developer.nvidia.com/blog/updating-the-cuda-linux-gpg-repository-key/

This is the only solution that worked for Ubuntu 20.04 desktop.
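
For reference, a sketch of the bare-metal (no Docker) variant of those steps on Ubuntu 20.04, based on the key-rotation notice and the keyring package already referenced in this thread; use it only if your machine actually consumes the ubuntu2004 repo:

# Remove the old repository signing key, then install NVIDIA's keyring package,
# which provides the rotated key and a properly signed repo entry.
sudo apt-key del 7fa2af80
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update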

@StatsGary

> A workaround seems to do the trick for me. Add those lines before apt-get update
>
> RUN rm /etc/apt/sources.list.d/cuda.list
> RUN rm /etc/apt/sources.list.d/nvidia-ml.list
>
> Also check out discussions here

This worked for me.

@pxorig

pxorig commented Jan 12, 2023

try this:

FROM nvidia/cuda:10.1-cudnn8-devel-ubuntu18.04
ENV LANG C.UTF-8
RUN  rm -rf /var/lib/apt/lists/* \
         /etc/apt/sources.list.d/cuda.list \
          /etc/apt/sources.list.d/nvidia-ml.list 
RUN apt-get update 

@ri-cao

ri-cao commented Mar 12, 2023

> A workaround seems to do the trick for me. Add those lines before apt-get update
>
> RUN rm /etc/apt/sources.list.d/cuda.list
> RUN rm /etc/apt/sources.list.d/nvidia-ml.list
>
> Also check out discussions here

This trick works for me. Thank you! My system is Ubuntu 22.04; the base Singularity image was built on Ubuntu 18.04.

@PolarisLight

> A workaround seems to do the trick for me. Add those lines before apt-get update
>
> RUN rm /etc/apt/sources.list.d/cuda.list
> RUN rm /etc/apt/sources.list.d/nvidia-ml.list
>
> Also check out discussions here

Thanks, this works for me.

@karray

karray commented May 17, 2023

> A workaround seems to do the trick for me. Add those lines before apt-get update
>
> RUN rm /etc/apt/sources.list.d/cuda.list
> RUN rm /etc/apt/sources.list.d/nvidia-ml.list
>
> Also check out discussions here

I encountered a similar issue with the nvidia/cuda Docker container, but the specific files mentioned in the workaround were not present in my container. Instead, the relevant file in my case was /etc/apt/sources.list.d/cuda-ubuntu2204-x86_64.list.

Therefore, it is advisable to check which apt source list files are actually present before attempting to delete them.
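
A defensive, untested variant of the workaround along those lines: use rm -f with a glob so the step neither fails when a file is absent nor misses the newer per-distribution file names:

# Remove whichever NVIDIA repo lists happen to be present in this image
# (cuda.list, nvidia-ml.list, cuda-ubuntu2204-x86_64.list, ...); -f keeps
# the build from failing when a particular file does not exist.
RUN rm -f /etc/apt/sources.list.d/cuda*.list /etc/apt/sources.list.d/nvidia-ml.list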

@lehcode

lehcode commented Apr 8, 2024

A working Docker solution is as follows; it also persists across the services being built:

ADD "https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin" "/tmp/cuda-ubuntu2004.pin"
ADD "https://developer.download.nvidia.com/compute/cuda/11.3.1/local_installers/cuda-repo-ubuntu2004-11-3-local_11.3.1-465.19.01-1_amd64.deb" "/tmp/cuda-repo-ubuntu2004-11-3-local_11.3.1-465.19.01-1_amd64.deb"
ADD "https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub" "/etc/apt/7fa2af80.pub"

RUN cat /etc/apt/7fa2af80.pub | apt-key add - && \
    dpkg -i /tmp/cuda-repo-ubuntu2004-11-3-local_11.3.1-465.19.01-1_amd64.deb && \
    apt update && apt -y install cuda && \
