---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- StanfordKnees2019
thumbnail: null
tags:
- image-reconstruction
- RIM
- ATOMMIC
- pytorch
model-index:
- name: REC_RIM_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
results: []
---
## Model Overview
Recurrent Inference Machines (RIM) for 12x accelerated MRI Reconstruction on the StanfordKnees2019 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
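A quick way to confirm the installation is to check that the package is visible to pip and imports cleanly (a minimal sketch; adjust to your environment):
```bash
# Sanity checks after installation
pip show atommic            # confirms the installed version and location
python -c "import atommic"  # confirms the package imports cleanly
```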
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_RIM_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM/blob/main/REC_RIM_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM.atommic
mode: test
```
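With `pretrained`, `checkpoint`, and `mode` set as above in one of the linked configuration files, inference can be launched through the ATOMMIC command-line entry point. This is a sketch: the config path and filename below are placeholders, not the actual names in the repository, so pick the matching YAML from the linked `conf` directory.
```bash
# Run the test/inference stage with a configuration file that contains the
# pretrained/checkpoint/mode keys shown above.
# NOTE: the config path is a placeholder; use the corresponding YAML from
# projects/REC/StanfordKnees2019/conf in the ATOMMIC repository.
atommic run -c projects/REC/StanfordKnees2019/conf/test/rim.yaml
```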
### Usage
To use this model effectively you need to download the Stanford Knees 2019 dataset. Check the [StanfordKnees2019](https://github.com/wdika/atommic/blob/main/projects/REC/StanfordKnees2019/README.md) page for more information.
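The raw k-space is hosted on mridata.org; one common route is the `mridata` Python client, sketched below. The UUID list file is a placeholder and the preprocessing steps are not shown here, so follow the linked README for the authoritative instructions.
```bash
# Hypothetical sketch: fetch the raw k-space from mridata.org with the
# mridata client, then preprocess as described in the StanfordKnees2019 README.
pip install mridata
mridata batch_download stanford_knees_uuids.txt  # uuid list file is a placeholder
```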
## Model Architecture
```yaml
model:
  model_name: CIRIM
  recurrent_layer: GRU
  conv_filters:
  - 64
  - 64
  - 2
  conv_kernels:
  - 5
  - 3
  - 3
  conv_dilations:
  - 1
  - 2
  - 1
  conv_bias:
  - true
  - true
  - false
  recurrent_filters:
  - 64
  - 64
  - 0
  recurrent_kernels:
  - 1
  - 1
  - 0
  recurrent_dilations:
  - 1
  - 1
  - 0
  recurrent_bias:
  - true
  - true
  - false
  depth: 2
  time_steps: 8
  conv_dim: 2
  num_cascades: 1
  no_dc: true
  keep_prediction: true
  accumulate_predictions: true
  dimensionality: 2
  reconstruction_loss:
    wasserstein: 1.0
```
## Training
```yaml
optim:
  name: adamw
  lr: 1e-4
  betas:
  - 0.9
  - 0.999
  weight_decay: 0.0
  sched:
    name: InverseSquareRootAnnealing
    min_lr: 0.0
    last_epoch: -1
    warmup_ratio: 0.1

trainer:
  strategy: ddp_find_unused_parameters_false
  accelerator: gpu
  devices: 1
  num_nodes: 1
  max_epochs: 20
  precision: 16-mixed
  enable_checkpointing: false
  logger: false
  log_every_n_steps: 50
  check_val_every_n_epoch: -1
  max_steps: -1
```
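These optimizer, scheduler, and trainer settings are consumed by the same command-line entry point used for inference. A minimal sketch for training or fine-tuning follows; the config filename is again a placeholder for one of the YAMLs in the linked `conf` directory, and starting from this checkpoint requires setting the `checkpoint` key as shown earlier.
```bash
# Train or fine-tune with the training configuration (placeholder filename).
# To fine-tune from this model, set pretrained/checkpoint in the YAML as above.
atommic run -c projects/REC/StanfordKnees2019/conf/train/rim.yaml
```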
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, accompanied with the chosen coil sensitivity maps estimation method, you can use [targets](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with --evaluation_type per_slice.
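Putting the two steps together, a hedged sketch of the workflow is shown below. The targets config name and the directory arguments of the evaluation script are assumptions; check the linked configuration files and script for the exact interface.
```bash
# 1) Compute the SENSE targets from the raw k-space (placeholder config name).
atommic run -c projects/REC/StanfordKnees2019/conf/targets/targets.yaml

# 2) Score the reconstructions slice by slice (directory arguments assumed).
python tools/evaluation/reconstruction.py \
    path/to/targets path/to/reconstructions \
    --evaluation_type per_slice
```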
### Results

#### Evaluation against SENSE targets

| Acceleration | MSE | NMSE | PSNR | SSIM |
|---|---|---|---|---|
| 12x | 0.001278 ± 0.006025 | 0.04409 ± 0.1243 | 31.53 ± 6.786 | 0.7692 ± 0.3035 |
## Limitations
This model was trained on batch0 of the StanfordKnees2019 dataset, using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression to 1 coil, so the results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Epperson K, Rt R, Sawyer AM, et al. Creation of Fully Sampled MR Data Repository for Compressed SENSEing of the Knee. SMRT Conference 2013;2013:1.