---
dataset_info:
  - config_name: large_100
    features:
      - name: lrs
        sequence:
          array4_d:
            shape:
              - 3
              - 16
              - 16
              - 16
            dtype: float32
      - name: hr
        dtype:
          array4_d:
            shape:
              - 3
              - 64
              - 64
              - 64
            dtype: float32
    splits:
      - name: train
        num_bytes: 268237120
        num_examples: 80
      - name: validation
        num_bytes: 33529640
        num_examples: 10
      - name: test
        num_bytes: 33529640
        num_examples: 10
    download_size: 329464088
    dataset_size: 335296400
  - config_name: large_50
    features:
      - name: lrs
        sequence:
          array4_d:
            shape:
              - 3
              - 16
              - 16
              - 16
            dtype: float32
      - name: hr
        dtype:
          array4_d:
            shape:
              - 3
              - 64
              - 64
              - 64
            dtype: float32
    splits:
      - name: train
        num_bytes: 134118560
        num_examples: 40
      - name: validation
        num_bytes: 16764820
        num_examples: 5
      - name: test
        num_bytes: 16764820
        num_examples: 5
    download_size: 164732070
    dataset_size: 167648200
  - config_name: small_50
    features:
      - name: lrs
        sequence:
          array4_d:
            shape:
              - 3
              - 4
              - 4
              - 4
            dtype: float32
      - name: hr
        dtype:
          array4_d:
            shape:
              - 3
              - 16
              - 16
              - 16
            dtype: float32
    splits:
      - name: train
        num_bytes: 2220320
        num_examples: 40
      - name: validation
        num_bytes: 277540
        num_examples: 5
      - name: test
        num_bytes: 277540
        num_examples: 5
    download_size: 2645696
    dataset_size: 2775400

---

# Super-resolution of Velocity Fields in Three-dimensional Fluid Dynamics

This dataset loader attempts to reproduce the data used in Wang et al. (2024)'s experiments on super-resolution of 3D turbulence.

## References

- Wang et al. (2024): "Discovering Symmetry Breaking in Physical Systems with Relaxed Group Convolution"

## Usage

For a given configuration (e.g. `large_50`):

```python
>>> import datasets
>>> ds = datasets.load_dataset("dl2-g32/jhtdb", name="large_50")
>>> ds
DatasetDict({
    train: Dataset({
        features: ['lrs', 'hr'],
        num_rows: 40
    })
    validation: Dataset({
        features: ['lrs', 'hr'],
        num_rows: 5
    })
    test: Dataset({
        features: ['lrs', 'hr'],
        num_rows: 5
    })
})
```
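
The other configurations are loaded the same way; only the `name` argument changes. To list the available configurations programmatically, a minimal sketch (it queries the Hub, so it needs network access):

```python
from datasets import get_dataset_config_names

# Expected to return the configurations documented above,
# e.g. ['large_100', 'large_50', 'small_50'].
print(get_dataset_config_names("dl2-g32/jhtdb"))
```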

Each split contains the input `lrs`, a sequence of low resolution samples from times t - ws/2, ..., t, ..., t + ws/2 (ws = window size), and `hr`, the high resolution sample at time t. All the parameters per data point are specified in the corresponding `metadata_*.csv`.

Specifically, for the default configuration, each datapoint has 3 low resolution samples and 1 high resolution sample. Each of the former has shape (3, 16, 16, 16) and the latter has shape (3, 64, 64, 64).
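
As a quick sanity check, the sketch below loads one training example and inspects its shapes (assuming `numpy` is installed and using the `large_50` configuration described above):

```python
import numpy as np
import datasets

ds = datasets.load_dataset("dl2-g32/jhtdb", name="large_50")
example = ds["train"][0]

# Window of low resolution frames and its high resolution target.
lrs = np.asarray(example["lrs"], dtype=np.float32)  # (window, 3, 16, 16, 16)
hr = np.asarray(example["hr"], dtype=np.float32)    # (3, 64, 64, 64)
print(lrs.shape, hr.shape)

# The middle frame of the window corresponds to the same time t as `hr`.
center = lrs[lrs.shape[0] // 2]
print(center.shape)  # (3, 16, 16, 16)
```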

## Replication

This dataset is entirely generated by `scripts/generate.py`, and each configuration is fully specified by its corresponding `scripts/*.yaml` file.

### Usage

```bash
python -m scripts.generate --config scripts/small_100.yaml --token edu.jhu.pha.turbulence.testing-201311
```

This will create two folders under `datasets/jhtdb`:

  1. A `tmp` folder that stores all samples across runs and serves as a cache.
  2. The corresponding subset folder, named after the configuration (`small_100` for the command above). This folder will contain a `metadata_*.csv` and a data `*.zip` for each split (see the sketch after this list).
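
The per-sample generation parameters end up in those `metadata_*.csv` files. A minimal sketch for inspecting them (the path and filename are assumptions based on the layout above, the columns are whatever `scripts/generate.py` writes, and `pandas` is assumed to be installed):

```python
import pandas as pd

# Assumed location of the training split's metadata for the small_100 subset.
meta = pd.read_csv("datasets/jhtdb/small_100/metadata_train.csv")
print(meta.columns.tolist())
print(meta.head())
```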

Note:

- For the small variants, the default token is enough, but for the large variants a token has to be requested. More details here.
- For reference, the `large_100` configuration takes ~15 minutes to generate, for a total of ~300 MB.