
Run the example notebook "Using the API with NILMTK-CONTRIB" with GPU #57

w52191 opened this issue May 29, 2021 · 5 comments


w52191 commented May 29, 2021

Hi,
Has anyone successfully run "Using the API with NILMTK-CONTRIB" recently? I used

conda create -n nilm -c conda-forge -c nilmtk nilmtk-contrib
conda install cudatoolkit=11.0 cudnn
pip install tensorflow-gpu==2.4.0

to install the virtual environment. Then, when I tried to run the code, I got these warnings:

Started training for  WindowGRU
Joint training for  WindowGRU
............... Loading Data for training ...................
Loading data for  Dataport  dataset
Loading building ...  1
Loading data for meter ElecMeterID(instance=2, building=1, dataset='REDD')     
Done loading data all meters for this chunk.
Dropping missing values
Training processing
First model training for  fridge
WARNING:tensorflow:Layer gru will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer gru will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer gru will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer gru_1 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer gru_1 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer gru_1 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
Train on 10206 samples, validate on 1802 samples
Epoch 1/10
10206/10206 [==============================] - ETA: 0s - loss: 0.0099
Epoch 00001: val_loss improved from inf to 0.00629, saving model to windowgru-temp-weights-74894.h5
10206/10206 [==============================] - 78s 8ms/sample - loss: 0.0099 - val_loss: 0.0063
Epoch 2/10
10206/10206 [==============================] - ETA: 0s - loss: 0.0065
......

Any idea how to solve the issue? It's slower than the CPU (which takes around 45 s/epoch).

My config:
RTX3070
CUDA: 11.0
cudnn: 8.1
tensorflow-gpu: 2.4.0
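
For reference: this warning means TensorFlow is falling back to a generic (much slower) GPU kernel. In TF 2.x, tf.keras.layers.GRU only dispatches to the fused cuDNN kernel when the layer keeps its default configuration: tanh activation, sigmoid recurrent activation, recurrent_dropout=0, use_bias=True, reset_after=True, unroll=False, and unmasked (or right-padded) inputs. A minimal sketch of the two cases, with illustrative layer arguments rather than the actual nilmtk-contrib WindowGRU ones:

import tensorflow as tf

# Meets the cuDNN kernel criteria (all defaults), so TF can use the
# fused cuDNN implementation on a GPU.
fast_gru = tf.keras.layers.GRU(64,
                               activation='tanh',
                               recurrent_activation='sigmoid',
                               recurrent_dropout=0.0,
                               use_bias=True,
                               reset_after=True,
                               unroll=False)

# Any deviation (here, a non-default activation) forces the generic
# kernel and triggers the warning quoted above.
slow_gru = tf.keras.layers.GRU(64, activation='relu')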

@Rohitkr1997

I am facing a similar problem. Did you find a solution?


w52191 commented Jul 26, 2021

Unfortunately, no!

@sastry3009

@w52191 The problem happens because you installed the CUDA toolkit and TensorFlow GPU separately. Use the following command to create the environment; it should solve your issue:

conda create -n tf-gpu-cuda8 tensorflow-gpu cudatoolkit

Also refer to https://docs.anaconda.com/anaconda/user-guide/tasks/tensorflow/
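
After creating the environment this way, a quick sanity check (a generic snippet, not from the linked guide) confirms that the installed TensorFlow is a CUDA build and can see the card:

import tensorflow as tf

print(tf.__version__)
print(tf.test.is_built_with_cuda())            # True for a GPU-enabled build
print(tf.config.list_physical_devices('GPU'))  # should list the RTX 3070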

@Rebecca29

> @w52191 The problem happens because you installed the CUDA toolkit and TensorFlow GPU separately. Use the following command to create the environment; it should solve your issue:
>
> conda create -n tf-gpu-cuda8 tensorflow-gpu cudatoolkit
>
> Also refer to https://docs.anaconda.com/anaconda/user-guide/tasks/tensorflow/

Hi, I encountered the same issue when running the API, and I had already installed both of these packages in the same environment. Your command also seems to create a new env and install the same two packages. May I ask what the difference is?
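
The practical difference (an inference from how conda packaging works, not something the commenters confirmed) is that installing tensorflow-gpu and cudatoolkit together in one conda create lets conda's solver pick a tensorflow-gpu build whose pinned cudatoolkit and cuDNN versions match, whereas mixing a pip-installed tensorflow-gpu==2.4.0 with a separately chosen conda cudatoolkit leaves that compatibility unchecked. You can preview what conda would pin without creating anything (the env name here is arbitrary):

conda create -n tf-gpu --dry-run tensorflow-gpu cudatoolkit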

@Rebecca29

Is there any update on this issue? I have tried everything I could find online, and it is still not working.
