mrxs file format #13
First, regarding the strange zero-padded regions: I think they are actually part of the original image. Could you open it in QuPath and verify that you get the same? The reason why it might be especially slow is that the zero-padded regions are assumed to be tissue, as glass is filtered based on how far a pixel is from pure white. I could add an option to include zero-valued (uint) pixels in the glass class so that they get filtered as well. Also, you opened both of these in FP, right? The same WSI in the MRXS and TIFF format? Could you send me the logs for both? Include everything. I don't care about inference, just what is displayed when opening the WSI. I assume that the wrong magnification is somehow used for the MRXS format. If so, it could be quick to fix. Also, for the TIFF image the segmentation looks OK; was this without extracting the H-image? |
Yes, I opened them both in FP.
Log for MRXS:
Log for TIFF:
A 20X objective was used, as far as I can tell.
Yea, I did not extract the H-image. It works quite well on the original image without separating stains. It does pick up some smaller patches and areas without epithelium, but it is pretty good otherwise. However, I haven't tested it on different combinations of markers yet. |
Just from naively reading the logs, I see no reason why it would produce such strange predictions on an image from the MRXS format compared to the TIFF format. Strange indeed. I guess I would need to see one of those images and do some checks. Hence, please share one or two of those images, and I can see if I can find the issue. Please share both the TIFF and MRXS, as you said it worked on the TIFF format. |
Could you send me your email so I can send a link to the data download? |
My e-mail is made publicly available on my user profile: [email protected] |
Apologies, I didn't realize that. Thanks! |
But does that mean that extracting the H-image was not necessary? The main reason why you were getting worse performance was using the MRXS format directly (instead of the TIFF format), which produced some strange predictions? As a sanity check, I will try to read patches using OpenSlide in Python and see if there is a difference between the patches when extracting from the (assumed) same pyramid level. I will also see if I get the same behaviour as you get in FP, of course. |
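A sanity check along those lines could look like the sketch below (openslide-python assumed installed; the paths and coordinates are placeholders, and it assumes the two files share level-0 coordinates):

```python
import numpy as np

def closest_level(downsamples, target):
    """Index of the pyramid level whose downsample factor is nearest to target."""
    return min(range(len(downsamples)), key=lambda i: abs(downsamples[i] - target))

def read_patch(path, location, target_downsample, size=(512, 512)):
    """Read one RGB patch from a WSI at the level closest to target_downsample."""
    import openslide  # kept local so closest_level() works without openslide installed
    slide = openslide.OpenSlide(path)
    level = closest_level(slide.level_downsamples, target_downsample)
    patch = slide.read_region(location, level, size).convert("RGB")
    slide.close()
    return np.asarray(patch)

# a = read_patch("slide.mrxs", (20000, 20000), 4.0)
# b = read_patch("slide.tif", (20000, 20000), 4.0)
# print(np.abs(a.astype(int) - b.astype(int)).mean())  # near 0 if they agree
```

Note that `read_region` takes the location in level-0 coordinates, so a large mean difference between the two patches would point at a level/magnification mismatch rather than a pixel-encoding one.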
In this case using the whole WSI without deconvolution worked, but I'm not sure if it will always work for other images with DAB brown, as we are going to try a panel of different markers that will label different cells and compartments in the tissue. The channel that will be consistent across all of them is probably going to be the H channel. However, I will try the whole image first before trying the H channel every time.

Yes, I get worse performance when using MRXS directly instead of the tiff format.

Thanks. That approach using Python makes sense. Let me know if there is anything else I can try, to make troubleshooting easier for you. I appreciate you're busy, so thanks for responding promptly.
|
Interesting. I have been quite busy lately with my PhD work, and therefore have not had any time to provide an H-image extractor for you, but I can look into this later this week. In order to use it you will have to use Pipelines instead of the methods added to the Process widget. I can give you guidelines on how to import it and use it in your workflow when I have it ready. How is it going with training segmentation models in MIB? Are you getting any further? Also, have you gotten a trained model yet which you have imported into FP and are using for inference, to produce predictions you can import into QuPath? I guess you saw that I have added multi-class support for two of the scripts relevant for our pipeline (therefore, you can now train multi-class models and deploy them with FP without any issues):
Only tile import from MIB to QuPath remains, which is not detrimental as long as you can import predictions from FP (it is actually better to import predictions from FP than from MIB, if you want to run inference on a full WSI). Therefore, it has not been prioritized. It is also trickier to add support for. |
Hi @andreped With pipelines, is this the documentation?: We still haven't started training segmentation models, as the users are starting to generate the training data. I've shown them how to use FP and QuPath on tiff files. I wrote a PowerShell script to convert a folder of tiffs into pyramidal tiffs using vips, so they can avoid using the command line. Not the most elegant, but it works. If it's of use to you, let me know and I can share it. Thanks a lot for the multi-class support. I am keen on getting started once we have training data. Cheers |
@pr4deepr I guess you saw my reply by mail. In conclusion, having support for the raw mrxs format is outside the scope of our software, as this is a formatting issue that the developers of the mrxs format themselves and/or OpenSlide should fix. You will likely have the same issues in QuPath if you were to use the format directly, unless Peter Bankhead found some fix and integrated it. To read more about the issue, see this thread. However, I would recommend converting your WSIs to a non-proprietary format, which should make it easier to use them in the future. Alternatively, you could look into fixing the mrxs format using 3DHistech's slide converter, which produces the same format but fixes the encoding issue (info on the software can be found here). |
Hi @andreped Thanks, I did see the reply today. So, QuPath does open MRXS files, and it seems to work fine in our hands. But it is possible that there could be compatibility issues down the line. I will stick to converting WSIs to a non-proprietary format. It will make life easier in the long run.
Just curious, is this worth adding to FastPathology? Also, as a note, vips does allow conversion of mrxs to pyramidal tiffs; I just realised it yesterday. It can be a bit slow sometimes. Cheers |
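For reference, a vips-based conversion like the one mentioned can also be scripted from Python with pyvips. A minimal sketch, assuming pyvips/libvips are installed and using the jpeg/quality-85 settings mentioned later in this thread (paths and tile sizes are placeholders):

```python
def tiffsave_args(quality=85):
    """Saving options for a tiled, pyramidal, JPEG-compressed TIFF."""
    return dict(tile=True, pyramid=True, compression="jpeg", Q=quality,
                tile_width=256, tile_height=256, bigtiff=True)

def convert_to_pyramidal(src, dst, quality=85):
    """Convert any libvips-readable slide (tiff, mrxs, ...) to pyramidal TIFF."""
    import pyvips  # kept local so tiffsave_args() is importable without libvips
    image = pyvips.Image.new_from_file(src, access="sequential")
    image.tiffsave(dst, **tiffsave_args(quality))

# convert_to_pyramidal("slide.mrxs", "slide_pyramidal.tif")
```

`access="sequential"` lets libvips stream the image instead of loading it fully into memory, which matters for WSI-sized inputs.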
@pr4deepr If glass-rich regions are imputed with zero-valued pixels, then yes. Filtering redundant patches makes inference a lot faster. You probably found that inference on the full mrxs WSI in FP was quite slow. That is because all black regions are assumed to be tissue. I will attempt to add a fix before the weekend, but it might take a week until it is added to the official release of FP, as it requires that FAST is updated first.
Conversion is slow because the images are so large. Reading and writing large images from disk is rarely fast. Honestly, I was surprised it wasn't even slower the first time I did it :P But was the mrxs format the default format that you got from the scanner, or was it stored in another format? If it is NDPI, then it is already supported by FP. We are also working on adding support for Olympus' CellSens VSI, which hopefully will be added in the upcoming release of FP. Sorry for not replying until now. It has been some hectic weeks, and I forgot to reply the first time I saw it. |
@pr4deepr I have now added a fix such that the zero-padded background is included in the glass class, which means that patches containing either lots of glass or the zero-padded regions are skipped. This should speed up inference on your data by quite a bit! The solution is not ready yet, but I have made a PR #153 to FAST. When it has been merged, a new release of FAST will be made, and then I will add these changes to FP. I will let you know when it is ready for you to try :]
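The fix described above amounts to treating zero-valued pixels like glass when deciding which patches to skip. A minimal numpy sketch of that idea (the thresholds here are illustrative, not FAST's actual values):

```python
import numpy as np

def keep_patch(patch, white_thresh=220, max_background=0.85):
    """Return True if an RGB uint8 patch likely contains tissue.

    Counts both near-white glass and the pure-zero padding found in some
    mrxs slides as background; a patch dominated by background is skipped.
    """
    gray = np.asarray(patch, dtype=np.float64).mean(axis=-1)
    background = (gray >= white_thresh) | (gray == 0.0)
    return float(background.mean()) <= max_background
```

Without the `gray == 0.0` term, a zero-padded patch would pass the glass filter (black is far from white) and be sent to the model, which is why inference on the padded mrxs slides was so slow.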
Thanks a lot for the update @andreped. Looking forward to the new release.
Actually, converting larger tiffs to pyramidal tiffs using vips is faster than converting comparatively smaller mrxs files, but I don't know if the file being a tiff in the first place helps, or if it is something to do with the mrxs format.
The mrxs files we used are the ones exported using the 3DHistech converter. Essentially, the slide scanner acquires the images as mrxs files and stores them on a server. We use the 3DHistech software to export and convert the files from mrxs to mrxs. Cheers |
OK, but everything works fine with FP when you correct the mrxs-formatted WSIs? So it is really just the filtering of zero-padded regions that remains, at least for now? |
@pr4deepr Just wanted to ask one last time whether you were able to use FP as you would want with the mrxs files? I believe it should work if you were able to reformat the files using the 3DHistech converter tool. Could you check whether they work in the new release of FP? |
I'll test it and get back to you. |
@pr4deepr any update? |
I was having some trouble with the TensorRT installation on Windows. Now, when I convert mrxs to tiff using the 3DHistech exporter and load this converted tiff into FP, I get this error:
Converting the tiff exported from 3DHistech into a pyramidal tiff via vips seems to solve this problem.
Running inference using OpenVINO should be quite fast, but that uses either the integrated GPU (Intel) or the CPU. Most likely the pathologists that you are working with do not have a machine with a dedicated GPU, and therefore OpenVINO is probably the optimal inference engine for them. TensorRT only works with dedicated GPUs (NVIDIA).

Regarding the TIFF image you are testing: note that only 1 level is registered, which means that it is not pyramidal, as you mentioned. FAST depends on having image planes that can actually be kept in memory. QuPath pyramidalizes the image for you, which is why you might be able to read it there, but in general I would suggest working with pyramidal, tiled images. It makes life easier for you, Pete, us, and everyone.

I believe our platform can read the mrxs format, given that it is stored in a pyramidal, tiled form, and that 3DHistech has not corrupted their own format (which they are also able to fix, in a separate software/plugin). Since this problem is solved, I will close this issue, but feel free to reopen it if you run into new issues with the format. However, most of the trouble you had was outside the scope of what we can do in FAST/FP. |
Thanks! Just adding this here for future reference.
What doesn't work:
What works:
Thanks for all the help @andreped. Appreciate it! |
Thought it was better to create a new issue instead of using the other one for the mrxs file format:
#6 (comment)
I can open an mrxs file in FP now, but the file opens like this:
When I run predictions using the Epithelium models in NoCodeSeg, it takes a while and the end result looks like this on an inset:
However, when I export the mrxs file to a tiled tiff using the CaseViewer software, followed by conversion in vips to a pyramidal tiff (jpeg, quality = 85), and then run predictions, I get this:
The image magnification is 10X for both.
I could share the dataset with you, but it will have to be via email.
We wouldn't need any agreements. It's just that I can't share it directly, as it's a cross-institutional project.
The owner will have to share it with you directly.
Cheers
Pradeep