Replies: 1 comment
-
>>> othiele
-
>>> xiaoroubao1996
[December 19, 2020, 1:52pm]
Hi, I am very happy to use DeepSpeech, but I ran into a problem recently. When I fine-tune the model, the loss decreases steadily and then suddenly becomes very high, causing training to fail. The code is based on 0.9.3 and the environment is shown below.
tensorflow-gpu==1.14.0
CUDA version: 10.0
cuDNN: 7
OS Platform: Linux Ubuntu 7.5.0
Python: 3.6
Epoch 30 | Training | Elapsed Time: 0:16:17 | Steps: 916 | Loss: 17.802369
Epoch 30 | Validation | Elapsed Time: 0:01:41 | Steps: 54 | Loss: 16.134481 | Dataset: .../data/cleaned_dev_without_p.csv
I Saved new best validating model with loss 16.134481 to: .../ini_fin_test/keep_ini_fin_aug_big_cleaned_drop4/best_dev-1494871
Epoch 31 | Training | Elapsed Time: 0:16:18 | Steps: 916 | Loss: 17.626082
Epoch 31 | Validation | Elapsed Time: 0:01:40 | Steps: 54 | Loss: 15.901346 | Dataset: .../data/cleaned_dev_without_p.csv
I Saved new best validating model with loss 15.901346 to: .../ini_fin_test/keep_ini_fin_aug_big_cleaned_drop4/best_dev-1495787
Epoch 32 | Training | Elapsed Time: 0:16:12 | Steps: 916 | Loss: 17.451552
Epoch 32 | Validation | Elapsed Time: 0:01:40 | Steps: 54 | Loss: 15.847291 | Dataset: .../data/cleaned_dev_without_p.csv
I Saved new best validating model with loss 15.847291 to: .../ini_fin_test/keep_ini_fin_aug_big_cleaned_drop4/best_dev-1496703
Epoch 33 | Training | Elapsed Time: 0:16:15 | Steps: 916 | Loss: 17.296975
Epoch 33 | Validation | Elapsed Time: 0:01:41 | Steps: 54 | Loss: 15.649380 | Dataset: .../data/cleaned_dev_without_p.csv
I Saved new best validating model with loss 15.649380 to: .../ini_fin_test/keep_ini_fin_aug_big_cleaned_drop4/best_dev-1497619
Epoch 34 | Training | Elapsed Time: 0:16:05 | Steps: 916 | Loss: 219.944838
Epoch 34 | Validation | Elapsed Time: 0:01:41 | Steps: 54 | Loss: 638.858899 | Dataset: .../data/cleaned_dev_without_p.csv
Epoch 35 | Training | Elapsed Time: 0:15:33 | Steps: 916 | Loss: 700.915154
Epoch 35 | Validation | Elapsed Time: 0:01:40 | Steps: 54 | Loss: 637.616827 | Dataset: .../data/cleaned_dev_without_p.csv
Epoch 36 | Training | Elapsed Time: 0:15:29 | Steps: 916 | Loss: 699.750439
Epoch 36 | Validation | Elapsed Time: 0:01:41 | Steps: 54 | Loss: 636.648227 | Dataset: .../data/cleaned_dev_without_p.csv
Epoch 37 | Training | Elapsed Time: 0:15:33 | Steps: 916 | Loss: 698.731593
Epoch 37 | Validation | Elapsed Time: 0:01:41 | Steps: 54 | Loss: 635.740994 | Dataset: .../data/cleaned_dev_without_p.csv
Epoch 38 | Training | Elapsed Time: 0:15:33 | Steps: 916 | Loss: 697.751794
Epoch 38 | Validation | Elapsed Time: 0:01:40 | Steps: 54 | Loss: 634.852043 | Dataset: .../data/cleaned_dev_without_p.csv
Epoch 39 | Training | Elapsed Time: 0:15:35 | Steps: 916 | Loss: 696.783801
Epoch 39 | Validation | Elapsed Time: 0:01:40 | Steps: 54 | Loss: 633.970629 | Dataset: .../data/cleaned_dev_without_p.csv
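The divergence is visible in the log above: training loss reaches 17.296975 at epoch 33, jumps to 219.944838 at epoch 34, and both losses then stay in the 600-700 range. The last good checkpoint is the one saved at epoch 33 (best_dev-1497619). For reference only, a minimal sketch of resuming from that checkpoint directory with a smaller learning rate is shown below; the flag names are from the 0.9.x trainer (verify with python DeepSpeech.py --helpfull), the train CSV path is a hypothetical placeholder, and none of the values are the poster's actual settings:

# Sketch only: flag values are illustrative; '.../data/train.csv' is a placeholder, not from the post.
python -u DeepSpeech.py \
    --checkpoint_dir '.../ini_fin_test/keep_ini_fin_aug_big_cleaned_drop4' \
    --train_files '.../data/train.csv' \
    --dev_files '.../data/cleaned_dev_without_p.csv' \
    --learning_rate 0.00001 \
    --reduce_lr_on_plateau \
    --plateau_epochs 3 \
    --epochs 10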
python -u DeepSpeech.py \
    '.../ini_fin_test/keep_ini_fin_aug_big_cleaned_drop4' \
    '.../ini_fin_test/keep_ini_fin_aug_big_cleaned_drop4/summaries' \
    '.../test_output/output_ini_fin_aug_big_cleaned.json' \
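The flag names in that command appear to have been stripped when the thread was archived, leaving only the three argument values. Judging from what they point to (a checkpoint directory, a summaries directory, and a JSON results file), they plausibly correspond to --checkpoint_dir, --summary_dir and --test_output_file in the 0.9.3 trainer, but that is an assumption rather than something preserved in the post. An illustrative reconstruction:

# Reconstruction only: the original flag names were not preserved in the archive.
python -u DeepSpeech.py \
    --checkpoint_dir '.../ini_fin_test/keep_ini_fin_aug_big_cleaned_drop4' \
    --summary_dir '.../ini_fin_test/keep_ini_fin_aug_big_cleaned_drop4/summaries' \
    --test_output_file '.../test_output/output_ini_fin_aug_big_cleaned.json'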
[This is an archived discussion thread from discourse.mozilla.org/t/fine-tuning-in-chinese-model-and-loss-suddenly-increases]