Hello,
I found that you use this code to initialize the model with pre-trained parameters. However, when I train the model with the pre-trained weights, it re-initializes the ResNet parameters every epoch when model_fn is called. Is this intentional behavior? Thanks a lot.
if is_training:
    # Restore all pre-trained variables except the classification head and the global step.
    exclude = [base_architecture + '/logits', 'global_step']
    variables_to_restore = tf.contrib.slim.get_variables_to_restore(exclude=exclude)
    # Override the initializers of these variables with values from the pre-trained checkpoint.
    tf.train.init_from_checkpoint(pre_trained_model,
                                  {v.name.split(':')[0]: v for v in variables_to_restore})
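For reference, one way to make sure the pre-trained initialization only happens on the very first run is to skip it whenever the Estimator already has its own checkpoint to restore from. This is a minimal sketch, not the repository's code; it assumes model_dir (the Estimator's checkpoint directory) is available inside model_fn, e.g. passed through params, and uses TF 1.x APIs (tf.train.latest_checkpoint, tf.contrib.slim.get_variables_to_restore, tf.train.init_from_checkpoint).

import tensorflow as tf  # TF 1.x

def maybe_init_from_pretrained(pre_trained_model, base_architecture, model_dir):
    """Initialize backbone weights from the pre-trained checkpoint, but only
    if the Estimator has not yet written its own checkpoint to model_dir."""
    if tf.train.latest_checkpoint(model_dir):
        # A checkpoint already exists in model_dir, so the Estimator will
        # restore from it; skip the pre-trained init entirely.
        return
    exclude = [base_architecture + '/logits', 'global_step']
    variables_to_restore = tf.contrib.slim.get_variables_to_restore(exclude=exclude)
    tf.train.init_from_checkpoint(
        pre_trained_model,
        {v.name.split(':')[0]: v for v in variables_to_restore})

Note that even without such a guard, tf.train.init_from_checkpoint only overrides variable initializers, and the Estimator restores from the latest checkpoint in model_dir when one exists, so whether trained weights are actually clobbered depends on that restore order.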