
Merge pull request #185 from felenitaribeiro/paper
paper edits
felenitaribeiro authored Oct 30, 2023
2 parents 9e31372 + 61c4404 commit b922078
Showing 2 changed files with 1 addition and 1 deletion.
Binary file modified paper/figure1.png
2 changes: 1 addition & 1 deletion paper/paper.md
@@ -99,7 +99,7 @@ The pre-trained model consists of a 3D U-Net model [@cicek_2016] initially train

The SMILE-UHURA challenge dataset [@Chatterjee_Mattern_Dubost_Schreiber_Nürnberger_Speck_2023] was collected as part of the StudyForrest project [@forstmann_multi-modal_2014]. It consists of 3D multi-slab time-of-flight magnetic resonance angiography (MRA) data acquired at a 7T Siemens MAGNETOM magnetic resonance scanner [@hanke_high-resolution_2014] with an isotropic resolution of 300$\mu$m. Twenty right-handed individuals (21-38 years, 12 males) participated in the study, but we used 14 samples for model training (the "train" and "validate" sets available through the challenge). Before model training, the MRA data were pre-processed as described below.

- Data augmentation was performed to increase the amount of training data and to leverage the self-similarity of large and small vessels. The input data were cropped at random locations and sizes at each training epoch and then resized to 64×64×64 using nearest-neighbor interpolation (patch 1). This procedure is equivalent to zooming in or out for patches smaller or larger than 64×64×64. We generated multiple copies (5 more copies per patch) of each of these patches and applied rotation by 90°, 180°, and 270° (copies 1-3) or blurring using two different Gaussian filters (copies 4 and 5) were applied, totalling six copies per patch at each epoch. Four patches (or batch size equal to 4) are generated for each input data and training epoch.
+ Data augmentation was performed to increase the amount of training data and to leverage the self-similarity of large and small vessels. The input data are cropped at random locations and sizes at each training epoch and then resized to 64×64×64 using nearest-neighbor interpolation (patch 1). The minimum size for each dimension of the cropped patch is 32, and the maximum is the dimension size of the original image. This procedure is equivalent to zooming in or out for patches smaller or larger than 64×64×64. We generated multiple copies (5 more copies per patch) of each of these patches and applied rotation by 90°, 180°, and 270° (copies 1-3) or blurring using two different Gaussian filters (copies 4 and 5), totalling six copies per patch at each epoch. Four unique patches are generated per training sample and training epoch; with data augmentation, that amounts to 24 images per training sample and epoch (4 patches × 6 copies). By increasing the number of unique patches per training sample and epoch and setting the minimum size for each dimension of the cropped patch to 32, pre-trained models were more stable across a range of random seeds (used to initialize model weights).
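The updated augmentation scheme can be sketched roughly as follows. This is an illustrative NumPy/SciPy sketch only: the rotation plane, the two Gaussian sigmas, and the example volume size are assumptions, not values stated in the paper.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

rng = np.random.default_rng(0)

def random_patch(volume, out_size=64, min_size=32):
    # Crop a random sub-volume (each dimension drawn from [min_size, full size]),
    # then resize to out_size^3 with nearest-neighbour interpolation (order=0).
    sizes = [int(rng.integers(min_size, d + 1)) for d in volume.shape]
    starts = [int(rng.integers(0, d - s + 1)) for d, s in zip(volume.shape, sizes)]
    crop = volume[tuple(slice(st, st + s) for st, s in zip(starts, sizes))]
    return zoom(crop, [out_size / s for s in crop.shape], order=0)

def six_copies(patch, sigmas=(0.5, 1.0)):
    # Copies 1-3: rotations by 90/180/270 degrees; copies 4-5: Gaussian blur
    # with two different filters (axes and sigmas are assumed values).
    rotations = [np.rot90(patch, k, axes=(0, 1)) for k in (1, 2, 3)]
    blurred = [gaussian_filter(patch, s) for s in sigmas]
    return [patch] + rotations + blurred  # six copies per patch

volume = rng.random((128, 128, 128)).astype(np.float32)  # assumed input size
batch = [c for _ in range(4) for c in six_copies(random_patch(volume))]
print(len(batch))  # 4 patches x 6 copies = 24 images per sample and epoch
```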

We pre-trained three distinct models, each using a specific set of labels: one using manually corrected labels provided for the challenge and the two others using the OMELETTE 1 (O1) and OMELETTE 2 (O2) labels. The OMELETTE labels were generated in an automated fashion [@mattern_2021] using two different sets of parameters. Each model was trained for 5000 epochs with an initial learning rate of 0.001, which was reduced when the loss reached a plateau using ReduceLROnPlateau. The Tversky loss [@salehi_tversky_2017; @chatterjee_ds6_2022] defined the learning objective, with α = 0.3 and β = 0.7.
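The Tversky index generalises the Dice coefficient by weighting false positives and false negatives separately; with α = 0.3 and β = 0.7 as above, missed vessel voxels (false negatives) are penalised more heavily than spurious ones. A minimal NumPy sketch of the loss, assuming probability-valued predictions and a binary target (the function name and the smoothing term `eps` are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    # pred: predicted probabilities in [0, 1]; target: binary ground truth.
    tp = np.sum(pred * target)          # true positives
    fp = np.sum(pred * (1.0 - target))  # false positives, weighted by alpha
    fn = np.sum((1.0 - pred) * target)  # false negatives, weighted by beta
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

# A perfect prediction yields zero loss; beta > alpha makes missed
# foreground (vessel) voxels costlier than false detections.
t = np.array([1.0, 1.0, 0.0, 0.0])
print(tversky_loss(t, t))  # 0.0
```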

