Replies: 4 comments
>>> howkong
[February 25, 2021, 11:12am]
I am trying to build a piece of software that runs DeepSpeech inference along with another TensorFlow-based network.
However, when I ran my software, I found that the two tasks simultaneously try to allocate large chunks of memory on all 4 of my GPUs, starving each other of memory and causing cuDNN runtime errors.
But when I run the two models in two separate processes, each with 2 visible GPUs, they run just fine.
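For concreteness, a rough sketch of that working two-process layout, assuming each process pins its GPUs with CUDA_VISIBLE_DEVICES before either framework initializes CUDA; the two runner scripts here are hypothetical placeholders, not code from the thread:

```python
import os
import subprocess

# Launch each model in its own process, pinned to 2 of the 4 GPUs via
# CUDA_VISIBLE_DEVICES, so the two runtimes never share a device.
procs = [
    # DeepSpeech inference sees only GPUs 0 and 1 (hypothetical script name)
    subprocess.Popen(
        ["python", "run_deepspeech_infer.py"],
        env={**os.environ, "CUDA_VISIBLE_DEVICES": "0,1"},
    ),
    # The other TensorFlow network sees only GPUs 2 and 3 (hypothetical script name)
    subprocess.Popen(
        ["python", "run_other_network.py"],
        env={**os.environ, "CUDA_VISIBLE_DEVICES": "2,3"},
    ),
]
for p in procs:
    p.wait()
```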
Since the `deepspeech-gpu` pip package doesn't depend on `tensorflow-gpu`, I assume that `deepspeech-gpu` has the TensorFlow runtime integrated, and that the integrated TensorFlow runtime is conflicting with the TensorFlow that I manually installed via pip.
Is my assumption correct? If it is, is there a way to run DeepSpeech on my existing TensorFlow installation?
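One general mitigation for the memory starvation described above, independent of whatever the bundled runtime does, is to stop the pip-installed TensorFlow from pre-allocating entire GPUs (by default it grabs nearly all memory on every visible device). A minimal sketch, assuming TensorFlow 2.x; this is a standard TensorFlow facility, not something confirmed by the thread to resolve the conflict:

```python
import tensorflow as tf

# Enable on-demand memory growth so this process only takes GPU memory as it
# needs it, leaving headroom for the DeepSpeech runtime on shared devices.
# Must run before any TensorFlow op initializes the GPUs.
gpus = tf.config.list_physical_devices("GPU")
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```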
[This is an archived TTS discussion thread from discourse.mozilla.org/t/how-to-run-deepspeech-on-an-existing-tensorflow-installation]