I just replaced input1.txt and input2.txt with my own dataset. When the error "local variable 'final_features' referenced before assignment" occurred, I uncommented the line `final_features = feature_extractor('11_0.tif','16_0.tif')` in Train.py. Then I ran Train.py again, and after a few minutes it showed:
Resource exhausted: OOM when allocating tensor with shape [1,64,224,224] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
@illutheplanet This could occur for several reasons, such as:
I would suggest initializing final_features as np.empty([..,...]), with its first dimension equal to the number of samples you use.
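A minimal sketch of that pre-allocation, assuming a dataset size and feature length (both `num_samples` and `feature_dim` below are placeholder values, not taken from Train.py):

```python
import numpy as np

# Assumed sizes for illustration only -- replace with your dataset's values.
num_samples = 100    # number of image pairs in your dataset
feature_dim = 4096   # length of each extracted feature vector

# Pre-allocate so final_features exists before any conditional assignment,
# avoiding the "referenced before assignment" error.
final_features = np.empty([num_samples, feature_dim])

# Then fill one row per sample, e.g.:
# for i, (img1, img2) in enumerate(pairs):
#     final_features[i] = feature_extractor(img1, img2)
```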
Your GPU runs out of memory because of the large filter sizes and the outputs that need to be stored at intermediate steps. I would suggest storing variables to disk and, wherever possible, continuing after closing your session. Please keep in mind that you cannot clear a session while traversing or computing your network graph.
Did you run it successfully? What are the files "modelh6.h5" and "model.json"?