---
license: cc-by-4.0
size_categories:
- 1K<n<10K
task_categories:
- image-segmentation
pretty_name: 'OAM-TCD: A globally diverse dataset of high-resolution tree cover maps'
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: height
dtype: int16
- name: width
dtype: int16
- name: annotation
dtype: image
- name: oam_id
dtype: string
- name: license
dtype: string
- name: biome
dtype: int8
- name: crs
dtype: string
- name: bounds
sequence: float32
length: 4
- name: validation_fold
dtype: int8
- name: biome_name
dtype: string
- name: lat
dtype: float32
- name: lon
dtype: float32
- name: segments
dtype: string
- name: meta
dtype: string
- name: coco_annotations
dtype: string
splits:
- name: train
num_bytes: 3450583573.0
num_examples: 4169
- name: test
num_bytes: 360073480.0
num_examples: 439
download_size: 3550643933
dataset_size: 3810657053.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
tags:
- trees
- biology
- ecology
- forest
---
# Dataset Card for OAM-TCD: A globally diverse dataset of high-resolution tree cover maps
![Example annotation for image 1445](example_test_annotation_1445.jpg)
_Annotation example in OAM-TCD (ID 1445); RGB image licensed CC BY 4.0, attribution: contributors of OIN._
_Left: RGB aerial image; Middle: annotations distinguished by instance ID; Right: annotations identified by class (blue = tree, orange = canopy)_
## Dataset Details
OAM-TCD is a dataset of high-resolution (10 cm/px) tree cover maps with instance-level masks for 280k trees and 56k tree groups.
Images in the dataset are provided as 2048x2048 px RGB GeoTIFF tiles. The dataset can be used to train both instance segmentation models and semantic segmentation models.
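The dataset can be loaded directly with the Hugging Face `datasets` library. A minimal sketch, assuming this repository's ID (`restor/tcd`); adjust for the license-specific variants described below:

```python
# Minimal loading sketch using the Hugging Face `datasets` library.
from datasets import load_dataset

ds = load_dataset("restor/tcd")  # assumed repository ID

sample = ds["train"][0]
print(sample["image"].size)   # (2048, 2048) PIL RGB tile
print(sample["biome_name"])   # matched terrestrial biome, if any
print(sample["oam_id"])       # source survey ID on OpenAerialMap
```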
For more information, please read [our preprint on arXiv](https://arxiv.org/abs/2407.11743). The paper was accepted at NeurIPS 2024 in the Datasets and Benchmarks track; the citation will be updated once the proceedings are online.
[![](https://zenodo.org/badge/DOI/10.5281/zenodo.11617167.svg)](https://doi.org/10.5281/zenodo.11617167)
Please contact josh [at] restor.eco with any questions, or post an issue on the associated GitHub repository for support.
### Dataset Description
- **Curated by:** Restor / ETH Zurich
- **Funded by:** Restor / ETH Zurich, supported by a Google.org AI for Social Good grant (ID: TF2012-096892, AI and ML for advancing the monitoring of Forest Restoration)
- **License:** CC-BY 4.0
OIN declares that all imagery contained within is licensed [CC BY 4.0](https://github.com/openimagerynetwork/oin-register); however, some images are labelled as CC BY-NC 4.0 or CC BY-SA 4.0 in their metadata. Annotations are predominantly released under a CC BY 4.0 license, with around 10% licensed as CC BY-NC 4.0 or CC BY-SA 4.0. These less permissively licensed images are distributed in separate repositories to avoid any ambiguity for downstream use.
To ensure that image providers' rights are upheld, we split these images into license-specific repositories, allowing users to pick whichever combination of compatible licenses is appropriate for their application. We initially released model variants trained on CC BY + CC BY-NC imagery. CC BY-SA imagery was removed from the training split, but it can be used for evaluation.
The other repositories/datasets are:
- `restor/tcd-nc` containing only `CC BY-NC 4.0` licensed images
- `restor/tcd-sa` containing only `CC BY-SA 4.0` licensed images
### Dataset Sources
All imagery in the dataset is sourced from OpenAerialMap (OAM, part of the Open Imagery Network / OIN).
## Uses
![Prediction map over city of Zurich using a model trained on OAM-TCD](zurich_predictions_side_by_side_small.jpg)
_Tree semantic segmentation for Zurich, predicted at 10 cm/px. Predictions with a confidence
of < 0.4 are hidden. Left - 10 cm RGB orthomosaic provided by the Swiss Federal Office of
Topography swisstopo/SWISSIMAGE 10 cm (2022), Right - prediction heatmap using `restor/tcd-segformer-mit-b5`.
Base map tiles by Stamen Design, under CC BY 4.0. Data by OpenStreetMap, under ODbL._
We anticipate that most users of the dataset wish to map tree cover in aerial orthomosaics, either captured by drones/unmanned aerial vehicles (UAVs) or from aerial surveys such as those provided by governmental organisations.
### Direct Use
The dataset supports applications where the user provides an RGB input image and expects a tree (canopy) map as output. Depending on the type of trained model, the result could be a binary segmentation mask or a list of detected tree/tree-group instances. Aside from our baseline releases, the dataset can be combined with other license-compatible data sources to train models. It can also act as a benchmark for other tree detection models: we specify a test split which users can evaluate against, although there is currently no formal infrastructure or leaderboard for this.
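As a sketch of this workflow, the snippet below runs semantic segmentation inference with the `restor/tcd-segformer-mit-b5` checkpoint shown in the figure above. It assumes the checkpoint follows the standard Hugging Face Transformers SegFormer interface and that label 1 corresponds to tree cover; check the model card before relying on either assumption.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

checkpoint = "restor/tcd-segformer-mit-b5"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

image = Image.open("my_tile.tif").convert("RGB")  # your own ~10 cm/px RGB tile
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample logits to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
tree_mask = upsampled.argmax(dim=1)[0].numpy()  # assumed: 1 = tree, 0 = background
```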
### Out-of-Scope Use
The dataset does not contain detailed annotations for trees that are in closed canopy, i.e. trees that are touching. Thus the current release is not suitable for training models to delineate individual trees in closed canopy forest. The dataset contains images at a fixed resolution of 10 cm/px. Models trained on this dataset at nominal resolution may under-perform if applied to images with significantly different resolutions (e.g. satellite imagery).
The dataset does not directly support applications related to carbon sequestration measurement (e.g. carbon credit verification) or above ground biomass estimation, as it does not contain any structural or species information which is required for accurate allometric calculations (Reiersen et al., 2021). Similarly, models trained on the dataset should not be used for any decision-making or policy applications without further validation on appropriate data, particularly if being tested in locations that are under-represented in the dataset.
## Dataset Structure
The dataset contains pairs of images, semantic masks and object segments (instance polygons). The masks contain instance-level annotations for (1) individual **trees** and (2) groups of trees, which we label **canopy**. For training our models we binarise the masks. Metadata from OAM for each image is provided and described below.
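As a sketch, a binary mask for semantic segmentation training could be derived as follows, assuming zero-valued pixels in `annotation` denote background (the instance/class encoding itself is carried by the `segments` and `coco_annotations` fields):

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("restor/tcd", split="train")  # assumed repository ID

# Collapse the instance/class annotation into a binary tree-cover mask,
# assuming zero-valued pixels are background.
mask = np.array(ds[0]["annotation"])
binary_mask = (mask > 0).astype(np.uint8)
```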
The dataset is released with suggested training and test splits, stratified by biome; these splits were used to derive the results presented in the main paper. Where known, each image is also tagged with its terrestrial biome index in [-1, 14], determined by intersecting tile polygons with reference biome polygons; an index of -1 means that no biome could be matched. Tiles sourced from a given OAM image are isolated to a single fold (and split) to avoid train/test leakage.
k-fold cross-validation indices within the training set are also provided: each training image is assigned an integer in [0, 4] indicating its validation fold. Users are free to pick their own validation protocol (for example, splitting the data into biome folds), but results may not be directly comparable with those in the release paper.
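For example, one round of cross-validation using the provided fold indices might look like this (a sketch using the `datasets` API):

```python
from datasets import load_dataset

train = load_dataset("restor/tcd", split="train")  # assumed repository ID

# Hold out fold 0 for validation and fit on the remaining four folds.
val_set = train.filter(lambda ex: ex["validation_fold"] == 0)
fit_set = train.filter(lambda ex: ex["validation_fold"] != 0)
```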
## Dataset Creation
### Curation Rationale
The use-case within Restor (Crowther et al., 2022) is to feed into a broader framework for restoration site assessment. Many users of the Restor platform are stakeholders in restoration projects; some have access to tools like UAVs and are interested in providing data for site monitoring. Our goal was to facilitate training tree canopy detection models that would work robustly in any location. The dataset was curated with this diversity challenge in mind - it contains images from around the world and (by serendipity) covers most terrestrial biome classes.
It was important during the curation process that the data sources be open-access and so we selected OpenAerialMap as our image source. OAM contains a large amount of permissively licensed global imagery at high resolution (chosen to be < 10 cm/px for our application).
### Source Data
#### Data Collection and Processing
We used the OAM API to download a list of surveys on the platform. Using the metadata, we discarded surveys with a ground sample distance greater than 10 cm/px (for example, satellite imagery). The remaining sites were binned into 1-degree square regions across the world: some sites in OAM have been uploaded as multiple assets, and naive random sampling would tend to pick several from the same location, so binning mitigates this over-representation. We then sampled sites from each bin, and random non-empty tiles from each site, until we reached around 5000 tiles; this number was arbitrarily constrained by our estimated annotation budget. A sketch of this sampling strategy is shown below.
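The sketch below illustrates the binned sampling strategy. It is a simplified reconstruction rather than our exact pipeline, and the `sites` structure (dicts with `lat`, `lon` and a list of candidate `tiles`) is hypothetical.

```python
import random
from collections import defaultdict

def sample_tiles(sites, n_tiles=5000):
    """Bin survey sites into 1-degree cells, then draw tiles evenly across
    cells so locations with many uploaded assets are not over-represented."""
    bins = defaultdict(list)
    for site in sites:
        bins[(int(site["lat"]), int(site["lon"]))].append(site)

    sampled = []
    while len(sampled) < n_tiles and bins:
        for cell in list(bins):
            site = random.choice(bins[cell])
            if site["tiles"]:
                sampled.append(site["tiles"].pop())
            else:
                # Retire exhausted sites, and empty cells with them.
                bins[cell].remove(site)
                if not bins[cell]:
                    del bins[cell]
            if len(sampled) >= n_tiles:
                break
    return sampled
```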
Interestingly, we made no attempt to filter for images containing trees, but in practice there are few negative (tree-free) images in the dataset. Similarly, we did not filter for images captured in a particular season, so the dataset contains trees without leaves.
#### Who are the source data producers?
The images are provided by users of OpenAerialMap / contributors of Open Imagery Network.
### Annotations
#### Annotation process
Annotation was outsourced to commercial data labelling companies who provided access to teams of professional annotators. We experimented with several labelling providers and compensation strategies.
Annotators were provided with a guideline document containing examples of how we expected images to be labelled. This document evolved over the course of the project as we encountered edge cases and questions from annotation teams. As described in the main paper, annotators were instructed to attempt to label open canopy trees (i.e. trees that were not touching) individually. Where possible, trees in small groups should also be labelled individually; we suggested five trees as an upper bound. Annotators were encouraged to look for cues that indicate whether an object is a tree, such as the presence of (relatively long) shadows and crown shyness (inter-crown spacing). Larger groups of trees, or ambiguous regions, were labelled as "canopy". Annotators were provided with full-size image tiles (2048 x 2048 px) and most images were annotated by a single person from a team of several annotators.
There are numerous structures for annotator compensation, for example paying per polygon, per image or by total annotation time. The images in OAM-TCD are complex and per-image payment was excluded early on, as reported annotation times varied significantly. Anecdotally, we found that the most practical compensation structure was to pay for a fixed block of annotation time, with regular review meetings with labelling team managers. Overall, the cost per image was between 5 and 10 USD and the total annotation cost was approximately 25k USD. Unfortunately we do not have accurate estimates of the time spent annotating all images, but we did advise annotators to flag an image for review if they spent more than 45-60 minutes on it.
#### Who are the annotators?
We did not have direct contact with any annotators and their identities were anonymised during communication, for example when providing feedback through managers.
#### Personal and Sensitive Information
Contact information is present in the metadata for imagery. We do not distribute this data directly, but each image tile is accompanied by a URL pointing to a JSON document on OpenAerialMap where it is publicly available. Otherwise, the imagery is provided at a low enough resolution that it is not possible to identify individual people.
The image tiles in the dataset contain geospatial information which is not obfuscated. However, since one of the purposes of OpenAerialMap is humanitarian mapping (e.g. tracing objects for inclusion in OpenStreetMap), accurate location information is required and uploaders are aware that this information is available to other users. We also assume that image providers had the right to capture imagery where they did, including following local regulations that govern UAV activity.
An argument for keeping accurate geospatial information is that annotations can be verified against independent sources, for example global land cover maps. The annotations can also be combined with other datasets, such as multispectral satellite imagery or products like the Global Ecosystem Dynamics Investigation (GEDI; Dubayah et al., 2020).
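As an illustrative sketch, a tile's footprint can be reprojected to WGS84 with `pyproj` for such cross-referencing. We assume `bounds` is ordered `[minx, miny, maxx, maxy]` in the tile's native `crs`; note that per-tile `lat`/`lon` fields are also provided directly.

```python
from pyproj import Transformer

def tile_centre_wgs84(bounds, crs):
    """Return the (lat, lon) centre of a tile, e.g. for matching the tile
    against an external land cover or satellite product."""
    minx, miny, maxx, maxy = bounds  # assumed ordering in the tile's CRS
    to_wgs84 = Transformer.from_crs(crs, "EPSG:4326", always_xy=True)
    lon, lat = to_wgs84.transform((minx + maxx) / 2, (miny + maxy) / 2)
    return lat, lon
```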
## General dataset statistics
The dataset contains 5072 image tiles sourced from OpenAerialMap; of these, 4608 are licensed CC BY 4.0, 272 CC BY-NC 4.0 and 192 CC BY-SA 4.0. As described earlier, we split these images into separate repositories to keep licensing distinct. Only around 5% of imagery in the training split carries the less permissive non-commercial license, and we are re-training models on only the CC BY portion of the data to maximise accessibility and re-use.
Across all license-specific repositories, the training split contains 4406 images and the test split contains 666 images (this repository holds the CC BY 4.0 subset: 4169 training and 439 test images). All images have the same size (2048x2048 px) and the same ground sample distance (10 cm/px). The geographic distribution of the dataset is shown below:
![Global distribution of annotations in the OAM-TCD dataset](annotation_map.png)
_Global distribution of annotations in the OAM-TCD dataset_
Table 1, below, shows the number of tiles corresponding to each of the 14 terrestrial biomes described by Olson et al. (2001).
The majority of the dataset covers (1) tropical and temperate broadleaf forest. Some biomes are clearly under-represented - notably (6) boreal forest/taiga; (9) flooded grasslands and savannas; (11) tundra; and (14) mangrove. Some of these biomes, mangrove in particular, are likely under-represented due to our sampling method (by binned location), as their geographic extent is relatively small. These statistics could be used to guide subsequent data collection in a more targeted fashion.
![Biome distribution](biome_distribution_table.jpeg)
_Distribution of images in terrestrial biomes, and in each of the suggested cross-validation folds_
It is important to note that the biome classification is purely spatial: without inspecting images individually, one cannot make assumptions about what type of landscape was actually imaged, or whether it is a natural ecosystem representative of that biome. We do not currently annotate images with a land use category, but this would potentially be a useful secondary measure of diversity in the dataset.
## Bias, Risks, and Limitations
There are several potential sources of bias in our dataset. The first is geographic, related to where users of OAM are likely to capture data - accessible locations that are amenable to UAV flights. Some locations and countries place strong restrictions on UAV possession and use, for example. One of the use-cases for OAM is providing traceable imagery for OpenStreetMap which is also likely to bias what sorts of scenes users capture.
The second is bias from annotators, who were not ecologists. Benchmark results from models trained on the dataset suggest that overall label quality is sufficient for accurate semantic segmentation. However, for instance segmentation, annotators had the freedom to choose whether or not to label trees individually. This naturally resulted in some inconsistency in what annotators determined to be a tree, and at what point they annotated a group of trees as a group. We discuss in the main paper the issue of conflicting definitions of "tree" among researchers and monitoring protocols.
The example annotations above highlight some of these inconsistencies. Some annotators labelled individual trees within group labels; in the bottom plot most palm trees are individually segmented, but some groups are not. A future goal for the project is to improve label consistency, identify incorrect labels and split group labels into individual trees. After annotation was complete, we contracted two different labelling organisations to review (and re-label) subsets of the data; we have not released this data yet, but plan to in the future.
The greatest risk that we foresee in releasing this dataset is usage in out-of-scope scenarios, for example using trained models, without additional validation, on imagery from regions or biomes that the dataset does not represent. Similarly, there is a risk that users apply models in inappropriate ways, such as measuring canopy cover on imagery captured during periods of abscission (when trees lose their leaves). It is important that users carefully consider timing (seasonality) when comparing time-series predictions.
While we believe that the risk of malicious or unethical use is low - given that other global tree maps exist and are readily available - it is possible that models trained on the dataset could be used to identify areas of tree cover for illegal logging or other forms of land exploitation. Given that our models can segment tree cover at high resolution, they could also be used for automated surveillance or military mapping purposes.
### Recommendations
Please read the bias information above and take it into account when using the dataset. Ensure that you have a good validation protocol in place before using a model trained on this dataset.
## Citation
If you use OAM-TCD in your own work or research, please cite our arXiv paper and reference the dataset DOI.
**BibTeX:**
After the paper is peer reviewed, this citation will be updated.
```
@misc{veitchmichaelis2024oamtcdgloballydiversedataset,
  title={OAM-TCD: A globally diverse dataset of high-resolution tree cover maps},
  author={Josh Veitch-Michaelis and Andrew Cottam and Daniella Schweizer and Eben N. Broadbent and David Dao and Ce Zhang and Angelica Almeyda Zambrano and Simeon Max},
  year={2024},
  eprint={2407.11743},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2407.11743},
}
```
## Dataset Card Authors
Josh Veitch-Michaelis (josh [at] restor.eco)
## Dataset Card Contact
Please contact josh [at] restor.eco if you have any queries about the dataset, including requests for image removal if you believe your rights have been infringed.
### Further Examples
![Example annotation for image 1594](example_test_annotation_1594.jpg)
![Example annotation for image 2242](example_test_annotation_2242.jpg)
![Example annotation for image 555](example_test_annotation_555.jpg)
_Annotation examples in OAM-TCD (IDs 1594, 2242, 555); all RGB images licensed CC BY 4.0, attribution: contributors of OIN._
### References
[1] Gyri Reiersen, David Dao, Björn Lütjens, Konstantin Klemmer, Xiaoxiang Zhu, and Ce Zhang. Tackling the overestimation of forest carbon with deep learning and aerial imagery. CoRR, abs/2107.11320, 2021.
[2] Thomas W. Crowther, Stephen M. Thomas, Johan van den Hoogen, Niamh Robmann, Alfredo Chavarría, Andrew Cottam, et al. Restor: Transparency and connectivity for the global environmental movement. One Earth, 5(5):476–481, 2022.
[3] Ralph Dubayah, James Bryan Blair, Scott Goetz, Lola Fatoyinbo, Matthew Hansen, et al. The global ecosystem dynamics investigation: High-resolution laser ranging of the earth’s forests and topography. Science of Remote Sensing, 1:100002, June 2020.