kiankaydee committed
Commit 8f0bd34 · 1 Parent(s): cbdd05e
dump code
README.md
CHANGED
@@ -1,13 +1,17 @@
 # Masked Autoencoders are Scalable Learners of Cellular Morphology
-Official repo for Recursion's
-
-Paper:
+Official repo for Recursion's two recently accepted papers:
+- Spotlight full-length paper at [CVPR 2024](https://cvpr.thecvf.com/Conferences/2024/AcceptedPapers) -- Masked Autoencoders for Microscopy are Scalable Learners of Cellular Biology
+  - Paper: link to be shared soon!
+- Spotlight workshop paper at [NeurIPS 2023 Generative AI & Biology workshop](https://openreview.net/group?id=NeurIPS.cc/2023/Workshop/GenBio)
+  - Paper: https://arxiv.org/abs/2309.16064

![vit_diff_mask_ratios](https://github.com/recursionpharma/maes_microscopy/assets/109550980/c15f46b1-cdb9-41a7-a4af-bdc9684a971d)


## Provided code
-
+See the repo for the ingredients required to define our MAEs. Users seeking to re-implement training will need to stitch together the Encoder and Decoder modules according to their use case.
+
+Furthermore, the baseline Vision Transformer backbone used in this work can be built with the following code snippet from timm:
```
import timm.models.vision_transformer as vit

@@ -29,7 +33,7 @@ def vit_base_patch16_256(**kwargs):
    return vit.vit_base_patch16_224(**default_kwargs)
```

-Additional code will be released as the date of the workshop gets closer.
-
## Provided models
-
+A publicly available model for research can be found via Nvidia's BioNemo platform, which handles inference and auto-scaling for you: https://www.rxrx.ai/phenom
+
+We are not able to release model weights at this time.
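
For a runnable starting point, the backbone helper referenced in the diff can be sketched as follows. This is a minimal sketch rather than the repo's actual implementation: the diff elides the real `default_kwargs`, so `img_size=256` and `pretrained=False` below are illustrative assumptions that happen to produce a 256-resolution ViT-B/16 from timm's `vit_base_patch16_224` builder.

```
import timm.models.vision_transformer as vit


def vit_base_patch16_256(**kwargs):
    # Sketch only: the repo's real default_kwargs are elided in the diff above.
    # img_size=256 (256x256 inputs) and pretrained=False are assumptions here.
    default_kwargs = dict(
        img_size=256,
        pretrained=False,
    )
    default_kwargs.update(kwargs)  # allow callers to override any default
    # timm's vit_base_patch16_224 builder accepts overrides such as img_size,
    # so it can serve as the entry point for a 256x256 ViT-B/16 backbone.
    return vit.vit_base_patch16_224(**default_kwargs)
```

A quick smoke test such as `vit_base_patch16_256()(torch.randn(1, 3, 256, 256))` should run end to end; the 3-channel input is a further assumption, since microscopy data may use a different channel count.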
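
The Provided code section notes that re-implementing training means stitching the Encoder and Decoder modules together yourself. One standard ingredient of that MAE pipeline is per-sample random masking of patch tokens before the encoder. The helper below is a generic illustration of that step adapted from the usual MAE recipe, not this repo's implementation; the function name, signature, and default `mask_ratio=0.75` are assumptions.

```
import torch


def random_masking(patch_tokens, mask_ratio=0.75):
    # Generic MAE-style masking, shown for illustration only.
    # Keeps a random subset of patch tokens per sample and returns:
    #   kept        -- visible tokens fed to the encoder
    #   mask        -- 1 for masked positions, 0 for visible (used by the loss)
    #   ids_restore -- indices to restore original patch order before decoding
    batch, num_patches, dim = patch_tokens.shape
    num_keep = int(num_patches * (1 - mask_ratio))

    noise = torch.rand(batch, num_patches, device=patch_tokens.device)
    ids_shuffle = noise.argsort(dim=1)        # random permutation per sample
    ids_restore = ids_shuffle.argsort(dim=1)  # inverse permutation

    ids_keep = ids_shuffle[:, :num_keep]
    kept = torch.gather(patch_tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, dim))

    mask = torch.ones(batch, num_patches, device=patch_tokens.device)
    mask[:, :num_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)  # reorder to original patch order
    return kept, mask, ids_restore
```

In the usual MAE setup, the decoder then receives the encoded visible tokens plus learned mask tokens reordered with `ids_restore`, and the reconstruction loss is computed only where `mask == 1`.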