Update README.md
README.md
CHANGED
@@ -4,16 +4,40 @@ license: apache-2.0
 
 ## Overview
 
-[SatlasPretrain](https://satlas-pretrain.allen.ai) is a large-scale remote sensing image understanding dataset
+[SatlasPretrain](https://satlas-pretrain.allen.ai) is a large-scale remote sensing image understanding dataset,
+intended for pre-training powerful foundation models on a variety of types of satellite and aerial images.
+It pairs remote sensing images with hundreds of millions of labels derived from [OpenStreetMap](https://www.openstreetmap.org/), [WorldCover](https://esa-worldcover.org/), other existing datasets, and new manual annotation.
 
+Quick links:
+
+- [Learn more about the dataset](https://satlas-pretrain.allen.ai)
+- [Download the dataset](https://github.com/allenai/satlas/blob/main/SatlasPretrain.md)
+- [Fine-tune the pre-trained foundation models](https://github.com/allenai/satlaspretrain_models/)
+
+## Dataset
+
+The dataset is contained in the tar files in the `dataset/` folder of this repository.
+Our [Github repository](https://github.com/allenai/satlas/blob/main/SatlasPretrain.md) contains details about the format of the dataset and how to use it, as well as pre-training code.
+
+The dataset is released under [ODC-BY](https://github.com/allenai/satlas/blob/main/DataLicense).
+
+## Models
+
+The models here are Swin Transformer and Resnet backbones pre-trained on the different types of remote sensing images in SatlasPretrain:
+
+- Sentinel-2
+- Sentinel-1
+- Landsat 8/9
+- 0.5 - 2 m/pixel aerial imagery
 
 The pre-trained backbones are expected to improve performance on a wide range of remote sensing and geospatial tasks, such as planetary and environmental monitoring.
-They have already been deployed to develop robust models for detecting solar farms, wind turbines, offshore platforms, and tree cover in [Satlas](https://satlas.allen.ai), a platform for global geospatial data generated by AI from satellite imagery.
+They have already been deployed to develop robust models for detecting solar farms, wind turbines, offshore platforms, and tree cover in [Satlas](https://satlas.allen.ai), a platform for accessing global geospatial data generated by AI from satellite imagery.
+
+[See here](https://github.com/allenai/satlaspretrain_models/) for details on how to use the models and the expected inputs.
+
+The model weights are released under [ODC-BY](https://github.com/allenai/satlas/blob/main/DataLicense).
 
+### Usage and Input Normalization
 
 The backbones can be loaded for fine-tuning on downstream tasks:
 
@@ -26,13 +50,7 @@ The backbones can be loaded for fine-tuning on downstream tasks:
 swin_state_dict = {k[len(swin_prefix):]: v for k, v in full_state_dict.items() if k.startswith(swin_prefix)}
 model.load_state_dict(swin_state_dict)
 
-- `satlas-model-v1-highres.pth`: inputs 8-bit RGB high-resolution images, with 0-255 RGB values normalized to 0-1 by dividing by 255.
-- `satlas-model-v1-lowres.pth`: inputs the TCI image from Sentinel-2 L1C scenes, which is an 8-bit image already processed from the B04 (red), B03 (green), and B02 (blue) bands. Normalize the 0-255 RGB values to 0-1 by dividing by 255.
-
-Please see [the SatlasPretrain github](https://github.com/allenai/satlas/blob/main/SatlasPretrain.md) for more examples and usage options.
-Models that use nine Sentinel-2 bands are also available there.
+They can also be easily initialized using the lightweight [`satlaspretrain_models` package](https://github.com/allenai/satlaspretrain_models/).
 
 ## Code
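For the tar files in the `dataset/` folder mentioned in the Dataset section of the diff, a minimal extraction sketch using Python's standard `tarfile` module; the archive name below is hypothetical, so substitute an actual tar file from `dataset/`:

```python
import tarfile
from pathlib import Path

# Hypothetical archive name -- replace with an actual tar file from dataset/.
archive = Path("dataset/satlaspretrain_example.tar")
out_dir = Path("satlaspretrain_data")
out_dir.mkdir(parents=True, exist_ok=True)

# Extract the archive; see the SatlasPretrain GitHub docs for the layout of
# the extracted images and labels.
with tarfile.open(archive) as tar:
    tar.extractall(out_dir)
```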
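The second hunk shows only the tail of the backbone-loading snippet, so here is a self-contained sketch of the same pattern. The `torchvision.models.swin_v2_b()` constructor and the `backbone.backbone.` key prefix are assumptions for illustration; check the SatlasPretrain README for the exact architecture and prefix of each checkpoint.

```python
import torch
import torchvision

# Backbone architecture; swin_v2_b is an assumption here -- use the constructor
# named in the SatlasPretrain README for the checkpoint you downloaded.
model = torchvision.models.swin_v2_b()

# Load the full pre-trained checkpoint (backbone plus pre-training heads).
full_state_dict = torch.load("satlas-model-v1-highres.pth", map_location="cpu")

# Keep only the backbone weights and strip their prefix so the keys line up
# with the torchvision module. The prefix is an assumption; inspect
# full_state_dict.keys() to confirm it for your checkpoint.
swin_prefix = "backbone.backbone."
swin_state_dict = {k[len(swin_prefix):]: v for k, v in full_state_dict.items()
                   if k.startswith(swin_prefix)}
model.load_state_dict(swin_state_dict)
```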
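The lines removed in the second hunk document the expected input normalization: 8-bit RGB values scaled from 0-255 to 0-1 by dividing by 255, for both the high-resolution model and the Sentinel-2 TCI input. A small preprocessing sketch of that step, with a hypothetical file name and the backbone loaded in the previous snippet:

```python
import numpy as np
import torch
from PIL import Image

# Read an 8-bit RGB tile (hypothetical path) and scale 0-255 values to 0-1,
# as described for satlas-model-v1-highres.pth and the Sentinel-2 TCI image.
image = np.array(Image.open("example_tile.png").convert("RGB"), dtype=np.float32) / 255.0

# HWC -> CHW, add a batch dimension, and run the backbone loaded above.
batch = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0)
with torch.no_grad():
    output = model(batch)
```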
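For the `satlaspretrain_models` route added in the second hunk, a sketch of what initialization through that package can look like. The `Weights` manager, the `get_pretrained_model` call, and the model identifier string are assumptions based on the package's documented usage; confirm the exact names in the satlaspretrain_models README.

```python
import satlaspretrain_models

# Both the Weights manager and the identifier below are assumptions --
# check the satlaspretrain_models README for the supported model identifiers.
weights_manager = satlaspretrain_models.Weights()
model = weights_manager.get_pretrained_model("Sentinel2_SwinB_SI_RGB")
```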