
OOM problem #5

Open
luissalgueiro opened this issue Sep 26, 2018 · 1 comment

Comments

@luissalgueiro

Hi @trevor-m

I am using your repo as a basis to implement an SRGAN. I have some differences, especially in the dataset.

I have two .npy files that contain 8000 patches of size 16×16 (both LR and SR), and I made the corresponding modifications to implement mini-batch training using tf.train.shuffle_batch.

During training, memory usage grows by roughly 1 GB every 50 iterations, and sooner or later, depending on the batch size, CPU memory consumption reaches its maximum and training stops.

Perhaps my problem is naive, as I am a newbie with TensorFlow. What would you recommend to prevent such high memory consumption?
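One common cause of memory that grows with every iteration in TF1 is code inside the training loop that keeps adding ops to the graph. A quick way to rule that out (not necessarily the cause here; the sketch below assumes a generic tf.train.shuffle_batch setup, since the actual code is not in the issue) is to finalize the graph before the loop:

```python
import tensorflow as tf

# ...build the model and the tf.train.shuffle_batch input pipeline here...

# Freeze the graph before entering the training loop. If anything inside the
# loop still tries to create new ops (a frequent source of steadily growing
# memory in TF1), TensorFlow raises an error pointing at the offending call.
tf.get_default_graph().finalize()

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    # ...training loop: sess.run(train_op)...
    coord.request_stop()
    coord.join(threads)
```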

@trevor-m
Owner

Hi luicho21, it sounds like your whole dataset is being loaded into memory over and over. You will probably have to adjust the input pipeline a bit more to use your data.

Your dataset sounds like it may be small enough to fit in memory, so I would recommend using tf.data.Dataset.from_tensor_slices.
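A minimal sketch of what that could look like for this kind of dataset (the file names, shapes, batch size, and step count below are placeholders, since the actual code is not shown in the issue):

```python
import numpy as np
import tensorflow as tf

# Hypothetical file names and shapes -- the issue only mentions two .npy files
# holding 8000 LR/HR patch pairs of size 16x16.
lr_patches = np.load("lr_patches.npy")   # e.g. shape (8000, 16, 16, channels)
hr_patches = np.load("hr_patches.npy")

# Build the input pipeline once, outside the training loop.
dataset = tf.data.Dataset.from_tensor_slices((lr_patches, hr_patches))
dataset = dataset.shuffle(buffer_size=lr_patches.shape[0])  # full-dataset shuffle
dataset = dataset.repeat()                                  # iterate indefinitely
dataset = dataset.batch(16)                                 # placeholder batch size
dataset = dataset.prefetch(1)

iterator = dataset.make_one_shot_iterator()
lr_batch, hr_batch = iterator.get_next()

with tf.Session() as sess:
    for step in range(1000):  # placeholder number of steps
        lr_np, hr_np = sess.run([lr_batch, hr_batch])
        # ...run the SRGAN train ops here, ideally consuming lr_batch/hr_batch
        # directly in the graph instead of going through feed_dict...
```

Note that from_tensor_slices embeds the NumPy arrays as constants in the graph, which is fine for a few megabytes of 16×16 patches; for data too large for that, the usual alternative is a placeholder plus an initializable iterator.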
