I am using your repo as a source to implement an SRGAN. I have some differences, especially in the dataset.
I have two .npy files containing 8000 patches of size 16×16 (both LR and SR), and I made the corresponding modifications to implement mini-batch training using tf.train.shuffle_batch.
During training, memory usage grows by roughly 1 GB every 50 iterations, and sooner or later, depending on the batch size, CPU memory consumption reaches its maximum and training stops.
Perhaps my problem is naive; I am a newbie with TensorFlow. What would you recommend to prevent such high memory consumption?
Hi luicho21, it sounds like your whole dataset is being loaded into memory over and over. You will probably have to adjust the input pipeline a bit more to use your data.
Your dataset sounds like it may be small enough to fit in memory, so I would recommend using tf.data.Dataset.from_tensor_slices.
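For reference, here is a minimal sketch of what that could look like. The file names, patch shapes, and batch size below are assumptions, so adjust them to your setup; the idea is just to load each .npy file with np.load once and let tf.data handle shuffling and batching instead of tf.train.shuffle_batch:

```python
import numpy as np
import tensorflow as tf

# Hypothetical file names -- replace with your own .npy paths.
# Each array is loaded into memory exactly once.
lr_patches = np.load("lr_patches.npy")  # e.g. shape (8000, 16, 16, channels)
hr_patches = np.load("hr_patches.npy")

# Build the input pipeline: shuffle over all 8000 patches,
# draw mini-batches, and repeat indefinitely for training.
dataset = (tf.data.Dataset.from_tensor_slices((lr_patches, hr_patches))
           .shuffle(buffer_size=8000)
           .batch(16)
           .repeat())

iterator = dataset.make_one_shot_iterator()
lr_batch, hr_batch = iterator.get_next()

# lr_batch and hr_batch can then be fed into the network graph in place of
# the tensors previously returned by tf.train.shuffle_batch.
```

Since the arrays are read from disk a single time and only sliced into batches afterwards, memory usage should stay flat across iterations rather than growing every step.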