---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- SKMTEA
thumbnail: null
tags:
- multitask-image-reconstruction-image-segmentation
- MTLRS
- ATOMMIC
- pytorch
model-index:
- name: MTL_MTLRS_SKMTEA_poisson2d_4x
results: []
---
## Model Overview
Multi-Task Learning for MRI Reconstruction and Segmentation (MTLRS) for 4x accelerated MRI reconstruction and MRI segmentation on the SKM-TEA dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model, you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/MTL/rs/SKMTEA/conf).
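If you prefer a local copy of the `.atommic` checkpoint instead of referencing the remote URL shown below, one option is the Hugging Face Hub CLI. This is only a sketch: it assumes the `huggingface_hub` CLI is installed, and the `./checkpoints` target directory is a placeholder.

```bash
# Download the pre-trained checkpoint from the Hugging Face Hub (assumes huggingface_hub is installed).
huggingface-cli download wdika/MTL_MTLRS_SKMTEA_poisson2d_4x \
    MTL_MTLRS_SKMTEA_poisson2d_4x.atommic \
    --local-dir ./checkpoints
```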
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/MTL_MTLRS_SKMTEA_poisson2d_4x/blob/main/MTL_MTLRS_SKMTEA_poisson2d_4x.atommic
mode: test
```
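With these lines added to one of the configuration files linked above, inference can typically be launched through the ATOMMIC command-line runner. The configuration path below is an assumption based on the project layout; substitute the test configuration you are actually using.

```bash
# Run the model in test mode with a configuration that points at the pre-trained checkpoint.
# The config path is a placeholder; use the corresponding file from projects/MTL/rs/SKMTEA/conf.
atommic run -c projects/MTL/rs/SKMTEA/conf/test/mtlrs.yaml
```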
### Usage
You need to download the SKM-TEA dataset to effectively use this model. Check the [SKM-TEA](https://github.com/wdika/atommic/blob/main/projects/MTL/rs/SKMTEA/README.md) page for more information.
## Model Architecture
```yaml
model:
  model_name: MTLRS
  joint_reconstruction_segmentation_module_cascades: 5
  task_adaption_type: multi_task_learning
  use_reconstruction_module: true
  reconstruction_module_recurrent_layer: IndRNN
  reconstruction_module_conv_filters:
    - 64
    - 64
    - 2
  reconstruction_module_conv_kernels:
    - 5
    - 3
    - 3
  reconstruction_module_conv_dilations:
    - 1
    - 2
    - 1
  reconstruction_module_conv_bias:
    - true
    - true
    - false
  reconstruction_module_recurrent_filters:
    - 64
    - 64
    - 0
  reconstruction_module_recurrent_kernels:
    - 1
    - 1
    - 0
  reconstruction_module_recurrent_dilations:
    - 1
    - 1
    - 0
  reconstruction_module_recurrent_bias:
    - true
    - true
    - false
  reconstruction_module_depth: 2
  reconstruction_module_time_steps: 8
  reconstruction_module_conv_dim: 2
  reconstruction_module_num_cascades: 1
  reconstruction_module_dimensionality: 2
  reconstruction_module_no_dc: true
  reconstruction_module_keep_prediction: true
  reconstruction_module_accumulate_predictions: true
  segmentation_module: AttentionUNet
  segmentation_module_input_channels: 1
  segmentation_module_output_channels: 4
  segmentation_module_channels: 64
  segmentation_module_pooling_layers: 2
  segmentation_module_dropout: 0.0
  segmentation_loss:
    dice: 1.0
  dice_loss_include_background: true # always set to true if the background is removed
  dice_loss_to_onehot_y: false
  dice_loss_sigmoid: false
  dice_loss_softmax: false
  dice_loss_other_act: none
  dice_loss_squared_pred: false
  dice_loss_jaccard: false
  dice_loss_flatten: false
  dice_loss_reduction: mean_batch
  dice_loss_smooth_nr: 1e-5
  dice_loss_smooth_dr: 1e-5
  dice_loss_batch: true
  dice_metric_include_background: true # always set to true if the background is removed
  dice_metric_to_onehot_y: false
  dice_metric_sigmoid: false
  dice_metric_softmax: false
  dice_metric_other_act: none
  dice_metric_squared_pred: false
  dice_metric_jaccard: false
  dice_metric_flatten: false
  dice_metric_reduction: mean_batch
  dice_metric_smooth_nr: 1e-5
  dice_metric_smooth_dr: 1e-5
  dice_metric_batch: true
  segmentation_classes_thresholds: [0.5, 0.5, 0.5, 0.5]
  segmentation_activation: sigmoid
  reconstruction_loss:
    l1: 1.0
  kspace_reconstruction_loss: false
  total_reconstruction_loss_weight: 0.5
  total_segmentation_loss_weight: 0.5
```
## Training
```yaml
optim:
  name: adam
  lr: 1e-4
  betas:
    - 0.9
    - 0.98
  weight_decay: 0.0
  sched:
    name: InverseSquareRootAnnealing
    min_lr: 0.0
    last_epoch: -1
    warmup_ratio: 0.1

trainer:
  strategy: ddp
  accelerator: gpu
  devices: 1
  num_nodes: 1
  max_epochs: 10
  precision: 16-mixed
  enable_checkpointing: false
  logger: false
  log_every_n_steps: 50
  check_val_every_n_epoch: -1
  max_steps: -1
```
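Training with this setup is normally started the same way, by pointing the ATOMMIC runner at the training configuration. The path below is an assumption based on the project layout linked earlier.

```bash
# Launch training (single GPU, DDP strategy as configured above).
atommic run -c projects/MTL/rs/SKMTEA/conf/train/mtlrs.yaml
```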
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, along with the chosen coil sensitivity maps estimation method, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/MTL/rs/SKMTEA/conf/targets) configuration files.
Evaluation can be performed using the [reconstruction](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) and [segmentation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) evaluation scripts for the reconstruction and segmentation tasks, respectively, with `--evaluation_type per_slice`.
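As a rough sketch of how such an evaluation call might look (the positional arguments for the targets and predictions directories are assumptions; check each script's `--help` for the exact interface):

```bash
# Per-slice evaluation of the reconstruction and segmentation outputs.
# Directory arguments are placeholders; only --evaluation_type per_slice is taken from this card.
python tools/evaluation/reconstruction.py path/to/targets path/to/predictions --evaluation_type per_slice
python tools/evaluation/segmentation.py path/to/targets path/to/predictions --evaluation_type per_slice
```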
### Results
#### Evaluation against SENSE targets
| Acceleration | MSE | NMSE | PSNR | SSIM | DICE | F1 | HD95 | IOU |
|---|---|---|---|---|---|---|---|---|
| 4x | 0.001105 +/- 0.001758 | 0.0211 +/- 0.02706 | 30.48 +/- 5.296 | 0.8324 +/- 0.1064 | 0.8889 +/- 0.1177 | 0.2471 +/- 0.203 | 7.594 +/- 3.673 | 0.2182 +/- 0.1944 |
## Limitations
This model was trained on the axial plane of the SKM-TEA dataset for 4x accelerated MRI reconstruction and MRI segmentation with Multi-Task Learning (MTL).
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Desai AD, Schmidt AM, Rubin EB, et al. SKM-TEA: A Dataset for Accelerated MRI Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation. 2022.