This project contains code to train, test, and deploy semantic segmentation models. The training data may be available upon request.
If you find this code useful, please cite:
bibtex
| Original | Classified |
|---|---|
Scripts may be run from the root directory of this project, or from any of the sub-directories.
Most of the scripts rely on our `tfmodels` package for CNN models, or on the `svs_reader` package for efficient and extensible reading of SVS-format whole slide images.
```bash
pip install numpy opencv-contrib-python openslide-python tensorflow-gpu pandas
git clone https://github.com/BioImageInformatics/gleason_grade
cd gleason_grade
git clone https://github.com/BioImageInformatics/svs_reader
git clone https://github.com/BioImageInformatics/tfmodels
```
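Note that `openslide-python` is only a Python binding; it needs the native OpenSlide library on the system. On Ubuntu that library is commonly pulled in through the package manager (the package name below is an assumption about your environment):

```bash
# Assumption: Ubuntu/Debian package that provides the native OpenSlide library
# required by openslide-python.
sudo apt-get install openslide-tools
```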
Tested on Ubuntu with Python 2.7 and 3.6.
- Creating data from image / mask pairs
- Training a segmentation network using `tfmodels`
- Validating performance on image / mask pairs
- Applying the model to a whole slide (`svs`) image (the core sequence is sketched just below)
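A minimal sketch of that workflow, assuming the scripts are run from the directories shown in the layout below; the arguments are hypothetical since each script defines its own options:

```bash
# Hypothetical sequence; each script defines its own options, so check them before running.
python data/save_tfrecord.py    # build tfrecords from the image / mask pairs
cd densenet                     # or densenet_small/, fcn8s/, ...
python train.py                 # writes logs/ and snapshots/ under an experiment directory
python test.py                  # validate on held-out image / mask pairs
```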
```
gleason_grade/
|__ data/
    |__ train_jpg (1)
    |__ train_mask (2)
    |__ save_tfrecord.py (3)
    |__ ...misc utility scripts...
|__ densenet/ (4)
    |__ densenet.py (5)
    |__ train.py (6)
    |__ test.py (7)
    |__ experiment_name/ (8)
        |__ logs/ (8a)
        |__ snapshots/ (8b)
        |__ inference/ (8c)
        |__ debug/ (8d)
    ...
|__ densenet_small/
    ...
|__ fcn8s/
    ...
|__ notebooks/ (9)
|__ tfhub/ (10)
    |__ create_tfhub_training.py (11)
    |__ retrain.py (12)
    |__ deploy_retrained.py (13)
    |__ test_retrained.py (14)
    |__ run_retrain.sh (15)
    |__ run_deploy.sh (16)
```
1. A directory with the source training images.
2. A directory with the source training masks, name-matched to (1).
3. Utility for translating (1) and (2) into `tfrecord` format for training (a conversion sketch follows this list).
4. Model directory. Each model gets its own directory for organizing snapshots and results.
5. The model definition file. This extends one of the base classes in `tfmodels`.
6. Training script. Each model directory has a copy.
7. Testing script. Each model directory has a copy.
8. By default, trained models populate a folder with the structure:
    - (8a) TensorFlow logs for visualization via `tensorboard`
    - (8b) model snapshots for restoring
    - (8c) placeholder for inference outputs generated by this model
    - (8d) placeholder for miscellaneous debugging output: images, masks, etc.
9. A set of Jupyter notebooks for running various experiments and collecting results. Notably, `colorize_numpy.ipynb` reads the output files in a given directory and produces a color-coded PNG based on a given color scheme (see the colorizing sketch after this list).
10. TensorFlow Hub experiments.
11. Translates images in (1) and (2) into labelled tiles for classifier training.
12. The retraining script from `tensorflow/examples/image_retraining` (an example invocation follows this list).
13. Script to apply retrained Hub modules to SVS files.
14. Runs a test on retrained Hub classifiers.
15. Utility script to hold options for retraining.
16. Utility script to hold options for deployment.
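A minimal sketch of the image/mask-to-`tfrecord` conversion referenced at (3). The directory names match the layout above, but the feature keys, mask file extension, and encoding are assumptions; `data/save_tfrecord.py` defines the actual format.

```python
# Hypothetical sketch of (3): pair JPEGs in data/train_jpg with name-matched masks in
# data/train_mask and serialize them into a single tfrecord file.
import glob, os
import cv2
import tensorflow as tf  # on older TF 1.x, use tf.python_io.TFRecordWriter instead of tf.io

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

with tf.io.TFRecordWriter('gleason_train.tfrecord') as writer:
    for img_path in sorted(glob.glob('data/train_jpg/*.jpg')):
        base = os.path.splitext(os.path.basename(img_path))[0]
        mask_path = os.path.join('data/train_mask', base + '.png')  # assumed mask naming
        image = cv2.imread(img_path)
        mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
        example = tf.train.Example(features=tf.train.Features(feature={
            'image': _bytes_feature(cv2.imencode('.jpg', image)[1].tobytes()),
            'mask': _bytes_feature(cv2.imencode('.png', mask)[1].tobytes()),
        }))
        writer.write(example.SerializeToString())
```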
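The colorizing step in (9) amounts to mapping each class index in a predicted mask to an RGB color. A minimal sketch, assuming the inference outputs are saved as `.npy` class-index arrays and using a made-up palette; the notebook defines the real color scheme and file handling.

```python
# Hypothetical recreation of the colorize step: class-index mask -> color-coded PNG.
import glob
import numpy as np
import cv2

# BGR color per class index (placeholder palette).
PALETTE = np.array([
    [ 50, 200,  50],   # class 0
    [  0, 255, 255],   # class 1
    [  0, 128, 255],   # class 2
    [  0,   0, 255],   # class 3
    [128, 128, 128],   # class 4
], dtype=np.uint8)

for npy_path in glob.glob('densenet/experiment_name/inference/*.npy'):
    mask = np.load(npy_path)    # 2D array of class indices
    color = PALETTE[mask]       # H x W x 3 color-coded image
    cv2.imwrite(npy_path.replace('.npy', '.png'), color)
```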
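The retraining script at (12) follows the `tensorflow/examples/image_retraining` interface, with `run_retrain.sh` (15) holding the chosen options. A rough, hypothetical illustration of the kind of call it wraps; the tile directory and module URL are placeholders, and the real flags live in the shell scripts:

```bash
# Hypothetical invocation mirroring the tensorflow/examples/image_retraining usage;
# the actual options are held in tfhub/run_retrain.sh and tfhub/run_deploy.sh.
python tfhub/retrain.py \
  --image_dir tfhub/training_tiles \
  --tfhub_module https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1
```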