GTEx citation update.
README.md
# Model Card for Phikon-v2

Phikon-v2 is a Vision Transformer Large pre-trained with the DINOv2 self-supervised method on PANCAN-XL, a dataset of 450M 20x-magnification histology images sampled from 60K whole-slide images. PANCAN-XL only incorporates publicly available datasets: CPTAC (6,193 WSI) and TCGA (29,502 WSI) for malignant tissue, and GTEx (13,302 WSI) for normal tissue.

Phikon-v2 improves upon [Phikon](https://huggingface.co/owkin/phikon), our previous foundation model pre-trained with iBOT on 40M histology images from TCGA (6K WSI), on a large variety of weakly-supervised tasks tailored for biomarker discovery. Phikon-v2 is evaluated on external cohorts to avoid any data contamination with the PANCAN-XL pre-training dataset, and benchmarked against an exhaustive panel of representation-learning and foundation models.
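Since the card declares `library_name: transformers`, the model can presumably be used for tile-level feature extraction via the standard `AutoModel` interface. Below is a minimal sketch; the function name `extract_cls_embedding` is illustrative (not part of the model's API), and it assumes the Hugging Face repository id `owkin/phikon-v2` and the usual ViT convention that the first output token is the [CLS] summary embedding.

```python
def extract_cls_embedding(image, model_id="owkin/phikon-v2"):
    """Return the CLS-token embedding for a single histology tile.

    A minimal sketch assuming the standard `transformers` AutoModel
    interface; `model_id` is assumed to be the Phikon-v2 repository.
    """
    # Imports are kept inside the function so that merely defining the
    # sketch does not require the (heavy) dependencies to be loaded.
    import torch
    from transformers import AutoImageProcessor, AutoModel

    processor = AutoImageProcessor.from_pretrained(model_id)
    model = AutoModel.from_pretrained(model_id)
    model.eval()

    inputs = processor(image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # The first token of the last hidden state is the [CLS] token,
    # conventionally used as the tile-level feature vector.
    return outputs.last_hidden_state[:, 0, :]
```

In a weakly-supervised pipeline of the kind described above, embeddings extracted this way for every tile of a slide would then be aggregated by a downstream slide-level model.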