Replies: 19 comments
>>> gvoysey | [December 10, 2017, 2:58am]
I am attempting to build a version of the deepspeech-gpu bindings and the native_client for ARMv8 with GPU support. The target platform is NVIDIA's Jetson-class embedded systems -- the TX-1/2 in particular, but I have access to a PX2 as well. These systems run Ubuntu 16.04 LTS for aarch64, with CUDA 8.0, cuDNN 6, and compute capability 5.2.

I have the DeepSpeech repo as of commit e5757d21a38d40923c1de9b86597685f365150ee, the Mozilla fork of tensorflow as of commit 08894f64fc67b7a8031fc68cb838a27009c3e6e6, and bazel 0.5.4. My Python version is 3.5.2. I have added the --config=cuda option to the suggested build command.
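Roughly, my setup was produced like this (a sketch only: the repository URLs and the non-interactive configure variables are my assumptions, and the interactive prompts of ./configure may differ by version):

    # Check out the two repositories at the commits given above.
    git clone https://github.com/mozilla/DeepSpeech.git deepspeech
    (cd deepspeech && git checkout e5757d21a38d40923c1de9b86597685f365150ee)
    git clone https://github.com/mozilla/tensorflow.git
    cd tensorflow
    git checkout 08894f64fc67b7a8031fc68cb838a27009c3e6e6

    # Expose native_client inside the tensorflow tree so bazel can see it,
    # then configure for CUDA; TF_CUDA_COMPUTE_CAPABILITIES matches the
    # TX-1/2 (compute capability 5.2).
    ln -s ../deepspeech/native_client .
    TF_NEED_CUDA=1 TF_CUDA_COMPUTE_CAPABILITIES=5.2 ./configure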
Here's the session output:

    ubuntu@nvidia:~/Source/deepspeech/tensorflow$ bazel build -c opt --config=cuda --copt=-O3 //tensorflow:libtensorflow_cc.so //tensorflow:libtensorflow_framework.so //native_client:deepspeech //native_client:deepspeech_utils //native_client:libctc_decoder_with_kenlm.so //native_client:generate_trie
    ....
    [547 / 671] Compiling native_client/kenlm/util/double-conversion/bignum-dtoa.cc
    ERROR: /home/ubuntu/Source/deepspeech/tensorflow/native_client/BUILD:48:1: C++ compilation of rule '//native_client:deepspeech' failed (Exit 1).
    In file included from native_client/kenlm/util/double-conversion/bignum-dtoa.h:31:0,
                     from native_client/kenlm/util/double-conversion/bignum-dtoa.cc:30:
    native_client/kenlm/util/double-conversion/utils.h:71:2: error: #error Target architecture was not detected as supported by Double-Conversion.
     #error Target architecture was not detected as supported by Double-Conversion.
     ^
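If I read the error right, utils.h in the bundled double-conversion has a preprocessor guard that whitelists supported architectures, and aarch64 is not on the list in the copy vendored under kenlm. One local workaround I can imagine (a sketch; I have not confirmed the exact #if line, so inspect utils.h around line 71 before running this) would be to append __aarch64__ to the guard:

    # Hypothetical patch: add __aarch64__ to the architecture whitelist.
    # Assumes the guard already contains defined(__ARMEL__); adjust the
    # sed pattern if the macro list in this copy differs.
    sed -i 's/defined(__ARMEL__)/defined(__ARMEL__) || defined(__aarch64__)/' \
      native_client/kenlm/util/double-conversion/utils.h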
What is a more appropriate list of build targets to give bazel? I'm willing to go without the language model for now if I have to -- the raw output from the NN is good enough for my purposes right now.
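Concretely, dropping the two KenLM-dependent targets would leave something like the following (an untested sketch; since the failing rule is //native_client:deepspeech itself, the kenlm sources may still be compiled, so this alone may not avoid the error):

    # Reduced target list without the language-model components
    # (libctc_decoder_with_kenlm.so and generate_trie dropped):
    bazel build -c opt --config=cuda --copt=-O3 \
      //tensorflow:libtensorflow_cc.so \
      //tensorflow:libtensorflow_framework.so \
      //native_client:deepspeech \
      //native_client:deepspeech_utils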
[This is an archived TTS discussion thread from discourse.mozilla.org/t/arm-native-client-with-gpu-support]