Replies: 19 comments
>>> Shravan_Shetty
[December 14, 2020, 12:14pm]
I have been trying to create a Japanese model and have collected around 70
hours of audio.
While training the model in Docker, I receive these errors:
> root@80dfb52cfddf:/DeepSpeech# python -u DeepSpeech.py \
>     --train_files /home/anon/Downloads/jaSTTDatasets/final-train.csv \
>     --train_batch_size 24 \
>     --dev_files /home/anon/Downloads/jaSTTDatasets/final-dev.csv \
>     --dev_batch_size 24 \
>     --test_files /home/anon/Downloads/jaSTTDatasets/final-test.csv \
>     --test_batch_size 24 \
>     --epochs 5 \
>     --bytes_output_mode \
>     --checkpoint_dir /home/anon/Downloads/jaSTTDatasets/checkpoint
> I Could not find best validating checkpoint.
> I Loading most recent checkpoint from /home/anon/Downloads/jaSTTDatasets/checkpoint/train-976
> I Loading variable from checkpoint: beta1_power
> I Loading variable from checkpoint: beta2_power
> I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/bias
> I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/bias/Adam
> I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/bias/Adam_1
> I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/kernel
> I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/kernel/Adam
> I Loading variable from checkpoint: cudnn_lstm/rnn/multi_rnn_cell/cell_0/cudnn_compatible_lstm_cell/kernel/Adam_1
> I Loading variable from checkpoint: global_step
> I Loading variable from checkpoint: layer_1/bias
> I Loading variable from checkpoint: layer_1/bias/Adam
> I Loading variable from checkpoint: layer_1/bias/Adam_1
> I Loading variable from checkpoint: layer_1/weights
> I Loading variable from checkpoint: layer_1/weights/Adam
> I Loading variable from checkpoint: layer_1/weights/Adam_1
> I Loading variable from checkpoint: layer_2/bias
> I Loading variable from checkpoint: layer_2/bias/Adam
> I Loading variable from checkpoint: layer_2/bias/Adam_1
> I Loading variable from checkpoint: layer_2/weights
> I Loading variable from checkpoint: layer_2/weights/Adam
> I Loading variable from checkpoint: layer_2/weights/Adam_1
> I Loading variable from checkpoint: layer_3/bias
> I Loading variable from checkpoint: layer_3/bias/Adam
> I Loading variable from checkpoint: layer_3/bias/Adam_1
> I Loading variable from checkpoint: layer_3/weights
> I Loading variable from checkpoint: layer_3/weights/Adam
> I Loading variable from checkpoint: layer_3/weights/Adam_1
> I Loading variable from checkpoint: layer_5/bias
> I Loading variable from checkpoint: layer_5/bias/Adam
> I Loading variable from checkpoint: layer_5/bias/Adam_1
> I Loading variable from checkpoint: layer_5/weights
> I Loading variable from checkpoint: layer_5/weights/Adam
> I Loading variable from checkpoint: layer_5/weights/Adam_1
> I Loading variable from checkpoint: layer_6/bias
> I Loading variable from checkpoint: layer_6/bias/Adam
> I Loading variable from checkpoint: layer_6/bias/Adam_1
> I Loading variable from checkpoint: layer_6/weights
> I Loading variable from checkpoint: layer_6/weights/Adam
> I Loading variable from checkpoint: layer_6/weights/Adam_1
> I Loading variable from checkpoint: learning_rate
> I STARTING Optimization
> Epoch 0 | Training | Elapsed Time: 0:00:24 | Steps: 22 | Loss: 26.007061 E The following files caused an infinite (or NaN) loss: /home/anon/Downloads/jaSTTDatasets/processedAudio/5350.wav
> Epoch 0 | Training | Elapsed Time: 0:00:26 | Steps: 24 | Loss: inf E The following files caused an infinite (or NaN) loss: /home/anon/Downloads/jaSTTDatasets/processedAudio/21545.wav
> Epoch 0 | Training | Elapsed Time: 0:00:48 | Steps: 46 | Loss: inf E The following files caused an infinite (or NaN) loss: /home/anon/Downloads/jaSTTDatasets/processedAudio/12658.wav
> Epoch 0 | Training | Elapsed Time: 0:00:55 | Steps: 53 | Loss: inf E The following files caused an infinite (or NaN) loss: /home/anon/Downloads/jaSTTDatasets/processedAudio/5370.wav
> Epoch 0 | Training | Elapsed Time: 0:00:56 | Steps: 54 | Loss: inf E The following files caused an infinite (or NaN) loss: /home/anon/Downloads/jaSTTDatasets/processedAudio/12779.wav
> Epoch 0 | Training | Elapsed Time: 0:01:29 | Steps: 83 | Loss: inf E The following files caused an infinite (or NaN) loss: /home/anon/Downloads/jaSTTDatasets/processedAudio/5369.wav
> Epoch 0 | Training | Elapsed Time: 0:01:34 | Steps: 87 | Loss: inf E The following files caused an infinite (or NaN) loss: /home/anon/Downloads/jaSTTDatasets/processedAudio/708.wav
> Epoch 0 | Training | Elapsed Time: 0:02:19 | Steps: 126 | Loss: inf E The following files caused an infinite (or NaN) loss: /home/anon/Downloads/jaSTTDatasets/processedAudio/804.wav,/home/anon/Downloads/jaSTTDatasets/processedAudio/787.wav,/home/anon/Downloads/jaSTTDatasets/processedAudio/926.wav
> Epoch 0 | Training | Elapsed Time: 0:02:25 | Steps: 131 | Loss: inf E The following files caused an infinite (or NaN) loss: /home/anon/Downloads/jaSTTDatasets/processedAudio/966.wav
> Epoch 0 | Training | Elapsed Time: 0:03:17 | Steps: 172 | Loss: inf E The following files caused an infinite (or NaN) loss: /home/anon/Downloads/jaSTTDatasets/processedAudio/1412.wav,/home/anon/Downloads/jaSTTDatasets/processedAudio/1009.wav
> Epoch 0 | Training | Elapsed Time: 0:04:20 | Steps: 219 | Loss: inf E The following files caused an infinite (or NaN) loss: /home/anon/Downloads/jaSTTDatasets/processedAudio/549.wav,/home/anon/Downloads/jaSTTDatasets/processedAudio/138.wav
> Epoch 0 | Training | Elapsed Time: 0:04:49 | Steps: 239 | Loss: inf E The following files caused an infinite (or NaN) loss: /home/anon/Downloads/jaSTTDatasets/processedAudio/1445.wav
> Epoch 0 | Training | Elapsed Time: 0:05:30 | Steps: 267 | Loss: inf E The following files caused an infinite (or NaN) loss: /home/anon/Downloads/jaSTTDatasets/processedAudio/575.wav,/home/anon/Downloads/jaSTTDatasets/processedAudio/583.wav
> Epoch 0 | Training | Elapsed Time: 0:05:35 | Steps: 271 | Loss: inf E The following files caused an infinite (or NaN) loss: /home/anon/Downloads/jaSTTDatasets/processedAudio/25882.wav
> Epoch 0 | Training | Elapsed Time: 0:05:37 | Steps: 272 | Loss: inf E The following files caused an infinite (or NaN) loss: /home/anon/Downloads/jaSTTDatasets/processedAudio/543.wav,/home/anon/Downloads/jaSTTDatasets/processedAudio/660.wav
> Epoch 0 | Training | Elapsed Time: 0:06:34 | Steps: 310 | Loss: inf E The following files caused an infinite (or NaN) loss: /home/anon/Downloads/jaSTTDatasets/processedAudio/16574.wav
> Epoch 0 | Training | Elapsed Time: 0:06:36 | Steps: 311 | Loss: inf E The following files caused an infinite (or NaN) loss: /home/anon/Downloads/jaSTTDatasets/processedAudio/123.wav,/home/anon/Downloads/jaSTTDatasets/processedAudio/23026.wav,/home/anon/Downloads/jaSTTDatasets/processedAudio/25289.wav
> Epoch 0 | Training | Elapsed Time: 0:06:39 | Steps: 313 | Loss: inf E The following files caused an infinite (or NaN) loss: /home/anon/Downloads/jaSTTDatasets/processedAudio/16154.wav
> Epoch 0 | Training | Elapsed Time: 0:06:40 | Steps: 314 | Loss: inf E The following files caused an infinite (or NaN) loss: /home/anon/Downloads/jaSTTDatasets/processedAudio/3614.wav
A similar issue has already been addressed in the topic "I am getting
Validation Loss: inf"; however, the solution there states that we should just
run training and read the console messages for corrupt files.
Is there any other way to easily filter out the problematic files up front
(see the sketch at the end of this post)? The files seem to have a proper
header and contain audio when opened in VLC. They are also mono, 16 kHz.
I have picked 2 of the problem files at random and attached them; please
review them as well.
Since I have a small dataset, I would like to repair these files rather than
delete them.
The current CSV has a record of around 35,000 files.
problemFiles.zip
(171.2 KB)
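A minimal sketch of the kind of pre-filtering I have in mind, not a confirmed fix: it assumes the standard DeepSpeech CSV columns (wav_filename, wav_filesize, transcript), DeepSpeech's default 20 ms feature step, and that with `--bytes_output_mode` the label length is the UTF-8 byte count of the transcript. Under those assumptions, a clip with fewer feature windows than labels cannot be aligned by CTC and is a plausible source of the inf loss:

```python
import csv
import wave

FEATURE_STEP_MS = 20  # assumption: DeepSpeech's default --feature_win_step

def clip_looks_suspect(wav_path, transcript):
    """Return True if the clip is likely to trigger an inf/NaN CTC loss."""
    try:
        with wave.open(wav_path, "rb") as w:
            frames = w.getnframes()
            rate = w.getframerate()
            channels = w.getnchannels()
    except (wave.Error, EOFError):
        return True  # an unreadable WAV header is suspect by itself
    duration_ms = 1000.0 * frames / rate
    n_feature_windows = duration_ms / FEATURE_STEP_MS
    # With --bytes_output_mode each UTF-8 byte of the transcript is one label.
    label_len = len(transcript.encode("utf-8"))
    # CTC needs at least as many time steps as labels; also sanity-check format.
    return rate != 16000 or channels != 1 or n_feature_windows <= label_len

# Hypothetical usage against one of the CSVs named in this post.
with open("/home/anon/Downloads/jaSTTDatasets/final-train.csv",
          newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if clip_looks_suspect(row["wav_filename"], row["transcript"]):
            print("suspect:", row["wav_filename"])
```

Clips flagged this way could then be dropped from the CSV, or have their audio and transcripts checked by hand, before re-running training.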
[This is an archived DeepSpeech (STT) discussion thread from discourse.mozilla.org/t/help-with-japanese-model]