
How to visualize the result? Is there a tool that can do it? #168

Open
sanersbug opened this issue Jul 25, 2018 · 33 comments
Comments

@sanersbug

The result is a submission.json file. How do I visualize it?

@newcoder0531

newcoder0531 commented Jul 25, 2018

I hope this helps:

from pycocotools import mask as cocomask
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import json
import os
from skimage import measure

def delete_zero_bfstr(ss):
    # strip leading zeros: image file names are zero-padded,
    # but the image_id values in the prediction file are not
    return ss.lstrip('0')

def find_id_ann(ann, imgid):
    # collect all annotations that belong to one image id
    return [anni for anni in ann if str(anni['image_id']) == imgid]

with open('E:/torch/open-solution-mapping-challenge/tmp/prediction.json', 'r') as f:
    prediction_json = json.load(f)

testimages_dir = 'E:/torch/open-solution-mapping-challenge/data/test_images'
testimages_list = os.listdir(testimages_dir)

for image_id in testimages_list:
    img_filepath = os.path.join(testimages_dir, image_id)
    img = mpimg.imread(img_filepath)
    img_real = mpimg.imread(img_filepath)
    mask = np.zeros(img.shape[:2])  # single channel, same height/width as the image
    img_id = delete_zero_bfstr(image_id.split('.')[0])
    img_annlist = find_id_ann(prediction_json, img_id)
    for ann in img_annlist:
        m = cocomask.decode(ann['segmentation'])  # RLE -> binary mask
        mask += m
    mask = mask > 0
    contours = measure.find_contours(mask, 0.5)
    img.flags.writeable = True   # imread may return a read-only array
    img[:, :, 0][mask] = 255     # paint the predicted buildings red
    plt.figure()
    plt.subplot(1, 2, 1)
    plt.title('original image')
    plt.imshow(img_real)
    plt.axis('off')
    plt.subplot(1, 2, 2)
    plt.title('masked image')
    plt.imshow(img)
    for contour in contours:
        plt.plot(contour[:, 1], contour[:, 0], color='red', linewidth=1)
    plt.axis('off')
    plt.show()

It is best to run this in a Python interpreter. If you want to display the images one by one, set a breakpoint inside the for loop.
Please rewrite the paths: prediction.json is the same as submission.json, and adjust the test_images path as well.

Now that you have a submission.json, can you tell me how big your training and validation sets are, and how many epochs you trained?

Is there a big difference between the cross-validation results during training and the results in the evaluation?
I have run into this problem myself, so I hope you can tell me your situation.

@sanersbug

sanersbug commented Jul 25, 2018

@newcoder0531 Wow, thank you very much! After I complete it, I'll answer your questions, just wait.

@sanersbug

@newcoder0531
I just started this project a few days ago, so I also have many problems.
I trained with the default parameters:

  1. I used the data included in 'annotation-small.json'; the number of epochs is also the default (100).
  2. My result is very bad, and I don't know how to improve it. The validation accuracy is only about 0.7.
    [screenshot of the results]

@newcoder0531

@sanersbug
If this is the result from your submission.json, then you have not hit the same problem as me.
I think your result is correct. The inaccuracy comes from the learning rate and the training set: you should fit the network on all of the training images and gradually reduce the learning rate in neptune.yaml. To do that, load your 'best.torch' from the checkpoint/unet path under your experiment_dir.
Issue #160 can help you load the model.
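For reference, loading a saved checkpoint before continuing training generally looks like the sketch below. This is a minimal, hypothetical example with a stand-in network; the repo's actual U-Net class and checkpoint layout may differ:

```python
import torch
import torch.nn as nn

# Stand-in network; in practice this would be the U-Net defined in the repo.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())

# Save a checkpoint (the repo stores its best weights as best.torch).
torch.save(model.state_dict(), 'best.torch')

# Reload the weights, then continue training with a reduced learning rate.
model.load_state_dict(torch.load('best.torch'))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # lowered lr
```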
Also, what operating system is your computer running, Linux or Windows? I want to collect some information to help solve my current problem.

@sanersbug

@newcoder0531 OK, thank you. My computer system is Linux (Ubuntu 16.04, CUDA 8.0).

@sanersbug

@newcoder0531 Another question: do you know how to train with your own data? I have made the labels as 'tif' files, but I don't know how to produce the 'annotation.json' file. Is there any tool to do this? Thanks a lot!

@newcoder0531

@sanersbug Sorry, I don't have experience in this area. I think it will be hard to complete the annotation yourself.
Maybe the links below can help.
http://cocodataset.org/#format-data
https://www.crowdai.org/challenges/mapping-challenge
Good luck!
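For what it's worth, the bbox/area part of a single COCO-style annotation can be assembled from a binary mask with plain NumPy. This is only a sketch of the format: the polygon/RLE 'segmentation' field and the 'images'/'categories' sections are omitted, and the category id here is just an illustrative value (check your own annotation file):

```python
import json
import numpy as np

def mask_to_annotation(mask, image_id, ann_id, category_id=100):
    """Build a minimal COCO-style annotation dict from a binary mask."""
    ys, xs = np.nonzero(mask)
    x0, y0 = int(xs.min()), int(ys.min())
    w, h = int(xs.max()) - x0 + 1, int(ys.max()) - y0 + 1
    return {
        'id': ann_id,
        'image_id': image_id,
        'category_id': category_id,
        'bbox': [x0, y0, w, h],  # COCO convention: [x, y, width, height]
        'area': int(mask.sum()),
        'iscrowd': 0,
        # 'segmentation' would hold polygons or RLE; see the COCO format page.
    }

mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:5, 3:7] = 1                                  # a small 3x4 "building"
ann = mask_to_annotation(mask, image_id=1, ann_id=1)
print(json.dumps(ann))                              # one entry of the 'annotations' list
```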

@sanersbug

@newcoder0531 thank you very much!

@jakubczakon
Collaborator

jakubczakon commented Jul 26, 2018

@sanersbug This project supports image (png) labels but you are welcome to change the loaders to suit your needs.

It should be modified here:

https://github.com/neptune-ml/open-solution-mapping-challenge/blob/master/src/loaders.py#L40-L42 I believe

@sanersbug

@jakubczakon OK, thank you, I will try it!

@sanersbug

@jakubczakon Sorry to disturb you again. I have tried to train with my own data, but it's too hard for me.
I don't even know where or how to modify the code you pointed to. I have looked at the code for a long time, but I have no idea. Could you give me some hints?

@animeshsahu80

[screenshot of the error]
I am getting this error. What should I do?

@jakubczakon
Collaborator

@animeshsahu80 looking at this issue pandas-dev/pandas#24839 it may be due to the numpy/pandas versions.
Which versions of those libraries are you using?
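A quick snippet to print the installed versions (just for checking; nothing repo-specific):

```python
import numpy as np
import pandas as pd

print('numpy :', np.__version__)
print('pandas:', pd.__version__)
```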

@animeshsahu80

[screenshots of the installed numpy and pandas versions]

@animeshsahu80

Also, when I try to print(img.flags) I get this:
[screenshot of the img.flags output]

@animeshsahu80

Also, just to check whether the code works on my machine, I ran only 1 epoch. Will there be any kind of segmentation after running prediction?

@jakubczakon
Collaborator

Which operating system are you using?

And I don't think there will be any reasonable predictions after 1 epoch.

@animeshsahu80

Ubuntu 16.04

@jakubczakon
Collaborator

That is unexpected, as I am working on this system and it runs just fine.
It seems the fix could be changing

img = img.DO_SOMETHING()

to

img = img.copy().DO_SOMETHING()

Can you point to the exact place in the code where it fails?
Is it https://github.com/neptune-ml/open-solution-mapping-challenge/blob/master/src/utils.py#L277 ?
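The underlying failure mode can be reproduced with a synthetic array (this is not the repo's loader, just a sketch of why `.copy()` helps): writing into a read-only NumPy array raises an error, while a copy is always writable.

```python
import numpy as np

img = np.zeros((4, 4, 3), dtype=np.uint8)
img.flags.writeable = False    # simulate a read-only array from an image loader

writable = img.copy()          # a copy owns its data and is writable
writable[:, :, 0] = 255        # this in-place assignment now succeeds
print(writable.flags.writeable)  # True
```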

@animeshsahu80

animeshsahu80 commented Jul 25, 2019

with open('E:/torch/open-solution-mapping-challenge/tmp/prediction.json','r') as f:
    prediction_json=json.load(f)

testimages_dir='E:/torch/open-solution-mapping-challenge/data/test_images'
testimages_list=os.listdir(testimages_dir)

for image_id in testimages_list:
    img_filepath=os.path.join(testimages_dir,image_id)
    img=mpimg.imread(img_filepath)
    img_real=mpimg.imread(img_filepath)
    mask=np.zeros(img.shape)[:,:,0]
    img_id=delete_zero_bfstr(image_id.split('.')[0])
    img_annlist=find_id_ann(prediction_json,img_id)
    for ann in img_annlist:
        m=cocomask.decode(ann['segmentation'])
        mask+=m
    mask=mask>0
    contours = measure.find_contours(mask, 0.5)
    img.flags.writeable=True
    img[:,:,0][mask]=255
    plt.figure()
    plt.subplot(1,2,1)
    plt.title('origin image')
    plt.imshow(img_real)
    plt.axis('off')
    plt.subplot(1,2,2)
    plt.title('masked image')
    plt.imshow(img)
    for n, contour in enumerate(contours):
        plt.plot(contour[:, 1], contour[:, 0], color='red',linewidth=1)
    plt.axis('off')
    plt.show()

I am using the code mentioned above, and I am getting the error on this line:
img.flags.writeable=True

@jakubczakon
Collaborator

jakubczakon commented Jul 25, 2019

Instead of

img.flags.writeable=True

can you try this

img = img.copy() 
img[:,:,0][mask]=255

or this

img_copy = img.copy() 
img_copy[:,:,0][mask]=255

@jakubczakon
Collaborator

jakubczakon commented Jul 25, 2019

Also, what is mpimg in img=mpimg.imread(img_filepath)?
I usually load with PIL.Image.open(img_filepath, 'r'); can you try loading with PIL instead?
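Going through PIL and np.array gives a fresh, writable array. A minimal sketch with an in-memory image standing in for a real file (with a file on disk you would use Image.open(img_filepath) instead):

```python
import numpy as np
from PIL import Image

pil_img = Image.new('RGB', (8, 8), color=(10, 20, 30))  # stand-in test image
img = np.array(pil_img)        # np.array() copies into a writable buffer

print(img.shape)               # (8, 8, 3)
print(img.flags.writeable)     # True
img[:, :, 0][img[:, :, 0] < 50] = 255   # masking a channel now works
```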

@animeshsahu80

Thanks!!
Sure, I'll try with PIL.

@animeshsahu80

Thanks, both PIL and the .copy() method work, but I am getting this output:
[screenshot of an empty prediction]
Is this because of running only 1 epoch?

@jakubczakon
Collaborator

I think so. Basically, if your mAP is still at zero during training, I wouldn't expect anything other than empty predictions.

@lunaalvarez

Hi,
I see that all evaluation steps in the code run on the validation image set. Is there a reason we are not using test_images, and how can we add them to the evaluation? Thanks

@jakubczakon
Collaborator

jakubczakon commented Apr 29, 2020

Hi @lunaalvarez,

For test images (and any other folders), it is done in the following way:

python main.py predict_on_dir \
--pipeline_name unet_tta_scoring_model \
--chunk_size 1000 \
--dir_path path/to/inference_directory \
--prediction_path path/to/predictions.json

We have this predict_on_dir command.
Once that is done, you'll get the predictions.json, which you can calculate metrics on.
You can also use the exploration notebook to inspect the results.

I hope this helps!

@lunaalvarez

Oops, quite obvious really. Don't know how I missed it. Thanks for the quick reply :)

@willhunger

Hi, @jakubczakon .
I got the prediction based on the default test_images (all the training parameters are default, with annotation-small.json).

 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.729
 Average Precision  (AP) @[ IoU=0.50      | area= small | maxDets=100 ] = 0.102
 Average Precision  (AP) @[ IoU=0.50      | area= large | maxDets=100 ] = 0.825
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.796
 Average Recall     (AR) @[ IoU=0.50      | area= small | maxDets=100 ] = 0.353
 Average Recall     (AR) @[ IoU=0.50      | area= large | maxDets=100 ] = 0.882
2020-08-04 11-21-15 mapping-challenge >>> Mean precision on validation is 0.7294194291809457
2020-08-04 11-21-15 mapping-challenge >>> Mean recall on validation is 0.7956491552881277

And I followed the code above to visualize the results, but I got weird mask pictures.
[screenshot of masks that do not match the image]

It seems the model learned something strange; these masks don't belong to the original picture.
Did the visualization procedure go wrong? I mean, the precision is near 0.73, so the result shouldn't look like this.
What do you think?

@jakubczakon
Collaborator

It seems like some image indexing in the prediction visualization is wrong -> the model clearly learned to find the buildings.

Did you run the exploration notebook from the repo to create it? -> Go to notebook

I just want to make sure that we are on the same page with what is not working :)

@willhunger

Thank you, @jakubczakon . The reason I got this error is that the default prediction dataset is /data/raw/val/images; I just mixed that up with test_images.

@jakubczakon
Collaborator

Understood.
Does it work now?

@willhunger

@jakubczakon It worked, thank you.
