Start dataset card (#1), opened by KevinZ
This view is limited to 50 files because it contains too many changes.
- README.md +7 -99
- cosmos_video_decoder.py +0 -93
- magvit2.ckpt +0 -3
- train_v1.1/actions/l_hand_closure.bin → train_v0/actions.bin +2 -2
- train_v0/metadata.json +1 -0
- {train_v1.1 → train_v0}/segment_ids.bin +2 -2
- train_v0/size.txt +1 -0
- {val_v1.1 → train_v0}/video.bin +2 -2
- train_v1.1/actions/driving_command.bin +0 -3
- train_v1.1/actions/joint_pos.bin +0 -3
- train_v1.1/actions/neck_desired.bin +0 -3
- train_v1.1/actions/r_hand_closure.bin +0 -3
- train_v1.1/metadata.json +0 -1
- train_v1.1/video.bin +0 -3
- train_v2.0/metadata.json +0 -1
- train_v2.0/metadata/metadata_0.json +0 -1
- train_v2.0/metadata/metadata_1.json +0 -1
- train_v2.0/metadata/metadata_10.json +0 -1
- train_v2.0/metadata/metadata_11.json +0 -1
- train_v2.0/metadata/metadata_12.json +0 -1
- train_v2.0/metadata/metadata_13.json +0 -1
- train_v2.0/metadata/metadata_14.json +0 -1
- train_v2.0/metadata/metadata_15.json +0 -1
- train_v2.0/metadata/metadata_16.json +0 -1
- train_v2.0/metadata/metadata_17.json +0 -1
- train_v2.0/metadata/metadata_18.json +0 -1
- train_v2.0/metadata/metadata_19.json +0 -1
- train_v2.0/metadata/metadata_2.json +0 -1
- train_v2.0/metadata/metadata_20.json +0 -1
- train_v2.0/metadata/metadata_21.json +0 -1
- train_v2.0/metadata/metadata_22.json +0 -1
- train_v2.0/metadata/metadata_23.json +0 -1
- train_v2.0/metadata/metadata_24.json +0 -1
- train_v2.0/metadata/metadata_25.json +0 -1
- train_v2.0/metadata/metadata_26.json +0 -1
- train_v2.0/metadata/metadata_27.json +0 -1
- train_v2.0/metadata/metadata_28.json +0 -1
- train_v2.0/metadata/metadata_29.json +0 -1
- train_v2.0/metadata/metadata_3.json +0 -1
- train_v2.0/metadata/metadata_30.json +0 -1
- train_v2.0/metadata/metadata_31.json +0 -1
- train_v2.0/metadata/metadata_32.json +0 -1
- train_v2.0/metadata/metadata_33.json +0 -1
- train_v2.0/metadata/metadata_34.json +0 -1
- train_v2.0/metadata/metadata_35.json +0 -1
- train_v2.0/metadata/metadata_36.json +0 -1
- train_v2.0/metadata/metadata_37.json +0 -1
- train_v2.0/metadata/metadata_38.json +0 -1
- train_v2.0/metadata/metadata_39.json +0 -1
- train_v2.0/metadata/metadata_4.json +0 -1
README.md
CHANGED
@@ -1,104 +1,12 @@
 ---
 license: apache-2.0
 pretty_name: 1X World Model Challenge Dataset
 size_categories:
 - 10M<n<100M
-
 ---
 Dataset for the [1X World Model Challenge](https://github.com/1x-technologies/1xgpt).
 
 Download with:
 ```
 huggingface-cli download 1x-technologies/worldmodel --repo-type dataset --local-dir data
 ```
-
-Changes from v1.1:
-- New train and val datasets of 100 hours, replacing the v1.1 datasets
-- Blur applied to faces
-- Shared a new raw video dataset under CC-BY-NC-SA 4.0: https://huggingface.co/datasets/1x-technologies/worldmodel_raw_data
-- Example scripts to decode Cosmos-tokenized bins (`cosmos_video_decoder.py`) and to load frame data (`unpack_data.py`)
-
-Contents of train/val_v2.0:
-
-The training dataset is sharded into 100 independent shards. The definitions are as follows:
-
-- **video_{shard}.bin**: 8x8x8 image patches at 30 Hz, with a 17-frame temporal window, encoded using the [NVIDIA Cosmos Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer) "Cosmos-Tokenizer-DV8x8x8".
-- **segment_idx_{shard}.bin**: maps each frame `i` to its corresponding segment index. You may want to use this to separate non-contiguous frames from different videos (transitions).
-- **states_{shard}.bin**: state arrays (defined below in `Index-to-State Mapping`) stored in `np.float32` format. For frame `i`, the corresponding state is `states_{shard}[i]`.
-- **metadata**: the `metadata.json` file provides high-level information about the entire dataset, while the `metadata_{shard}.json` files contain details specific to each shard.
-
-#### Index-to-State Mapping (NEW)
-```
-{
-    0: HIP_YAW
-    1: HIP_ROLL
-    2: HIP_PITCH
-    3: KNEE_PITCH
-    4: ANKLE_ROLL
-    5: ANKLE_PITCH
-    6: LEFT_SHOULDER_PITCH
-    7: LEFT_SHOULDER_ROLL
-    8: LEFT_SHOULDER_YAW
-    9: LEFT_ELBOW_PITCH
-    10: LEFT_ELBOW_YAW
-    11: LEFT_WRIST_PITCH
-    12: LEFT_WRIST_ROLL
-    13: RIGHT_SHOULDER_PITCH
-    14: RIGHT_SHOULDER_ROLL
-    15: RIGHT_SHOULDER_YAW
-    16: RIGHT_ELBOW_PITCH
-    17: RIGHT_ELBOW_YAW
-    18: RIGHT_WRIST_PITCH
-    19: RIGHT_WRIST_ROLL
-    20: NECK_PITCH
-    21: Left hand closure state (0 = open, 1 = closed)
-    22: Right hand closure state (0 = open, 1 = closed)
-    23: Linear Velocity
-    24: Angular Velocity
-}
-```
-
-Previous version: v1.1
-
-- **magvit2.ckpt** - weights for the [MAGVIT2](https://github.com/TencentARC/Open-MAGVIT2) image tokenizer we used. We provide both the encoder (tokenizer) and decoder (de-tokenizer) weights.
-
-Contents of train/val_v1.1:
-- **video.bin** - 16x16 image patches at 30 Hz, each patch vector-quantized into 2^18 possible integer values. These can be decoded into 256x256 RGB images using the provided `magvit2.ckpt` weights.
-- **segment_ids.bin** - for each frame, `segment_ids[i]` uniquely identifies the segment that frame `i` came from. You may want to use this to separate non-contiguous frames from different videos (transitions).
-- **actions/** - a folder of action arrays stored in `np.float32` format. For frame `i`, the corresponding action is given by `joint_pos[i]`, `driving_command[i]`, `neck_desired[i]`, and so on. The shapes and definitions of the arrays are as follows (N is the number of frames):
-  - **joint_pos** `(N, 21)`: joint positions. See `Index-to-Joint Mapping` below.
-  - **driving_command** `(N, 2)`: linear and angular velocities.
-  - **neck_desired** `(N, 1)`: desired neck pitch.
-  - **l_hand_closure** `(N, 1)`: left hand closure state (0 = open, 1 = closed).
-  - **r_hand_closure** `(N, 1)`: right hand closure state (0 = open, 1 = closed).
-
-#### Index-to-Joint Mapping (OLD)
-```
-{
-    0: HIP_YAW
-    1: HIP_ROLL
-    2: HIP_PITCH
-    3: KNEE_PITCH
-    4: ANKLE_ROLL
-    5: ANKLE_PITCH
-    6: LEFT_SHOULDER_PITCH
-    7: LEFT_SHOULDER_ROLL
-    8: LEFT_SHOULDER_YAW
-    9: LEFT_ELBOW_PITCH
-    10: LEFT_ELBOW_YAW
-    11: LEFT_WRIST_PITCH
-    12: LEFT_WRIST_ROLL
-    13: RIGHT_SHOULDER_PITCH
-    14: RIGHT_SHOULDER_ROLL
-    15: RIGHT_SHOULDER_YAW
-    16: RIGHT_ELBOW_PITCH
-    17: RIGHT_ELBOW_YAW
-    18: RIGHT_WRIST_PITCH
-    19: RIGHT_WRIST_ROLL
-    20: NECK_PITCH
-}
-```
-
-We also provide a small `val_v1.1` data split containing held-out examples not seen in the training set, in case you want to try evaluating your model on held-out frames.
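For reference, the removed card text above fully specifies the v2.0 shard layout. Below is a minimal loading sketch for one shard under the assumptions that follow, none of which are confirmed by the diff itself: states are taken as one 25-dim float32 vector per frame (matching state indices 0-24 above), the segment-index dtype is a guess (int32), and the paths mirror this repo's `train_v2.0` layout. The 17-frame windowing and int32 latent shape come from the deleted `cosmos_video_decoder.py` further down.

```
# Sketch: load one v2.0 shard (shapes/dtypes partly assumed, see note above).
import json
import math
import numpy as np

shard = 0
root = "data/train_v2.0"

with open(f"{root}/metadata/metadata_{shard}.json") as f:
    n = json.load(f)["shard_num_frames"]

# Per-frame robot state: one 25-dim float32 vector (state indices 0-24 above).
states = np.fromfile(f"{root}/states_{shard}.bin", dtype=np.float32).reshape(n, 25)

# Frame -> segment index, for dropping transitions across videos (int32 assumed).
segment_idx = np.fromfile(f"{root}/segment_idx_{shard}.bin", dtype=np.int32)

# Cosmos-DV8x8x8 latents: each 17-frame window of 256x256 video -> (3, 32, 32) int32.
video = np.memmap(f"{root}/video_{shard}.bin", dtype=np.int32, mode="r",
                  shape=(math.ceil(n / 17), 3, 32, 32))
```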
cosmos_video_decoder.py
DELETED
@@ -1,93 +0,0 @@
"""
NOTE: Download the Cosmos-Tokenizer repository and pre-trained model weights before running this script.
For full installation and setup instructions, please refer to:
https://github.com/NVIDIA/Cosmos-Tokenizer#readme
"""

import json  # needed for json.load below; missing from the original script
import math
from pathlib import Path

import av
import numpy as np
import torch

from cosmos_tokenizer.utils import tensor2numpy
from cosmos_tokenizer.video_lib import CausalVideoTokenizer

input_dir = Path("../worldmodel/val_v2.0")
output_dir = Path("/tmp/reconst_1xgpt/")
model_name = "Cosmos-Tokenizer-DV8x8x8"
decoder_path = Path("pretrained_ckpts") / model_name / "decoder.jit"

print(f"Input directory exists: {input_dir.exists()}")
print(f"Decoder path exists: {decoder_path.exists()}")

rank = 0
metadata_path = input_dir / f"metadata_{rank}.json"
if not metadata_path.exists():
    raise FileNotFoundError(f"Metadata file not found at {metadata_path}")

with open(metadata_path, "r") as f:
    metadata_shard = json.load(f)

total_frames = metadata_shard["shard_num_frames"]
print(f"Total frames: {total_frames}")

# Each record holds the Cosmos-DV8x8x8 latents for a 17-frame window of 256x256 video.
encoded_video_dataset = np.memmap(
    input_dir / f"video_{rank}.bin",
    dtype=np.int32,
    mode="r",
    shape=(math.ceil(total_frames / 17), 3, 32, 32),
)
print(f"Encoded video dataset shape: {encoded_video_dataset.shape}")

try:
    decoder = CausalVideoTokenizer(checkpoint_dec=str(decoder_path))
    if decoder._dec_model is None:
        raise RuntimeError(f"Failed to load decoder model from {decoder_path}")
    print("Decoder initialized successfully.")
except Exception as e:
    raise RuntimeError(f"Error loading decoder: {str(e)}") from e

batch_size = 1
fps = 30
output_dir.mkdir(parents=True, exist_ok=True)
output_file = output_dir / "reconstructed_video.mp4"

# Decode one batch first to discover the output frame dimensions.
first_batch = torch.from_numpy(encoded_video_dataset[0:1]).cuda()
with torch.no_grad():
    first_output = decoder.decode(first_batch).float()
_, _, height, width = first_output.shape[-4:]

print(f"Output video dimensions: {width}x{height}")

ec = av.open(str(output_file), mode="w")
es = ec.add_stream("hevc_nvenc", rate=fps)
es.width = width
es.height = height

num_batches = math.ceil(len(encoded_video_dataset) / batch_size)
for i in range(num_batches):
    start_idx = i * batch_size
    end_idx = min((i + 1) * batch_size, len(encoded_video_dataset))

    batch = torch.from_numpy(encoded_video_dataset[start_idx:end_idx]).cuda()
    with torch.no_grad():
        # [B, 3, 17, 256, 256]
        reconstructed_batch = decoder.decode(batch)

    # (B, 17, 256, 256, 3)
    reconstructed_batch = tensor2numpy(reconstructed_batch)

    for this_batch in reconstructed_batch:
        for single_frame in this_batch:  # temporal dimension: one (256, 256, 3) frame
            for packet in es.encode(av.VideoFrame.from_ndarray(single_frame, format="rgb24")):
                ec.mux(packet)

    print(f"Processed batch {i + 1}/{num_batches}", flush=True)
    if i == 100:  # stop early; this was a smoke test, not a full-shard decode
        break

# Flush any packets still buffered in the encoder before closing the container.
for packet in es.encode():
    ec.mux(packet)
ec.close()
print(f"Video saved to: {output_file}")
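A few practical notes on the deleted script, grounded in its own code: it expects the Cosmos-Tokenizer repository with the `Cosmos-Tokenizer-DV8x8x8` decoder checkpoint under `pretrained_ckpts/`, a CUDA device, and an FFmpeg/PyAV build with NVIDIA NVENC support (it encodes with `hevc_nvenc`). Because of the `if i == 100: break`, it decodes only the first ~101 batches rather than the full shard.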
magvit2.ckpt
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:5324feabfec9599e2989a5c377bbc211bc05a96fa963f6d96f3ee8be0a83fac9
-size 1151275392
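Every binary in this diff appears as a Git LFS pointer: a small text stub with `version`, `oid`, and `size` lines, as in the `magvit2.ckpt` entry above. A small sketch for reading such stubs, e.g. to audit expected download sizes before fetching; `read_lfs_pointer` is a hypothetical helper, not part of any repo tooling.

```
# Parse a Git LFS pointer stub into its fields (hypothetical helper).
def read_lfs_pointer(path: str) -> dict:
    fields = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    fields["size"] = int(fields["size"])  # payload size in bytes
    return fields

# e.g. read_lfs_pointer("magvit2.ckpt")["size"] == 1151275392 (~1.07 GiB)
```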
train_v1.1/actions/l_hand_closure.bin → train_v0/actions.bin
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:4443c81737cbd73ab5bdf79fd615d85a88e296a52e1d864e572ffe751a7fd585
+size 237474780
train_v0/metadata.json
ADDED
@@ -0,0 +1 @@
+{"unet": "1x-technologies/worldmodel_unet_v0", "query": "", "num_images": 11873739, "s": 20, "vocab_size": 1000}
{train_v1.1 → train_v0}/segment_ids.bin
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:9165bbdaed2a30a479908708eddbdcc464b9a8a2d1988789cf3136578de77df2
+size 47494956
train_v0/size.txt
ADDED
@@ -0,0 +1 @@
+11873739
{val_v1.1 → train_v0}/video.bin
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:3bc3d2775b4ee677fb1401e9d4bc2db3d5cec11cc6609c99d18363b021b4e769
+size 9498991200
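The new train_v0 pointer sizes can be cross-checked against `train_v0/metadata.json` and `size.txt` above. The v0 metadata does not state a token dtype, so the 2-byte-per-token figure below is an inference (a 16-bit integer comfortably holds `vocab_size` 1000); under that assumption the arithmetic works out exactly.

```
# Cross-check train_v0 LFS sizes against its metadata (dtypes are inferences).
num_images, s = 11873739, 20                  # from train_v0/metadata.json

assert num_images * s * s * 2 == 9498991200   # video.bin, assuming 2-byte tokens
assert num_images * 4 == 47494956             # segment_ids.bin, assuming int32
assert 237474780 // num_images == 20          # actions.bin: 20 bytes/frame, i.e.
                                              # 5 float32 values (layout undocumented)
```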
train_v1.1/actions/driving_command.bin
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:a640b146582d2f41a4186b7b8094e050439fea3fd5da0b136838697e0b5be6c0
-size 86660672
train_v1.1/actions/joint_pos.bin
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:93112b9540ef28009d11b370462b9b925575966f87e82e35fac2429f0839326b
-size 909937056
train_v1.1/actions/neck_desired.bin
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:f339fe53d40ea5d6d9014abbd397d4e74a96728b76a10eb6c0f7009a4648e77d
-size 129991008
train_v1.1/actions/r_hand_closure.bin
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:01b255b71f0b5a60b05458bcc832bca7f8a6a96f7dc943179a38ffe260715197
-size 43330336
train_v1.1/metadata.json
DELETED
@@ -1 +0,0 @@
-{"token_dtype": "uint32", "s": 16, "h": 16, "w": 16, "vocab_size": 262144, "hz": 30, "tokenizer_ckpt": "data/magvit2.ckpt", "num_images": 10832584}
train_v1.1/video.bin
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:d08bc89dfb07486ab5d53696b6cecc249768e47a82b7e2701e6fffd5b3d15874
-size 11092566016
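Unlike v0, the deleted v1.1 metadata states its layout outright (`token_dtype` uint32 on a 16x16 token grid), and the deleted pointer sizes agree with it and with the array shapes the old README documented:

```
# The deleted v1.1 pointer sizes match the deleted v1.1 metadata exactly.
num_images = 10832584                            # from train_v1.1/metadata.json

assert num_images * 16 * 16 * 4 == 11092566016   # video.bin: uint32 tokens, 16x16 grid
assert num_images * 21 * 4 == 909937056          # actions/joint_pos.bin, (N, 21) float32
assert num_images * 2 * 4 == 86660672            # actions/driving_command.bin, (N, 2)
assert num_images * 1 * 4 == 43330336            # actions/r_hand_closure.bin, (N, 1)
```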
train_v2.0/metadata.json
DELETED
@@ -1 +0,0 @@
-{"num_shards": 100, "query": null, "hz": 30, "num_images": 11254161}
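The deleted v2.0 top-level metadata pinned 11254161 frames across 100 shards, while each `metadata_{shard}.json` below carried its own count. A sketch, assuming all 100 shard files are available locally (this view is truncated at 50 files), that re-derives the total; note that every visible shard reports 112542 frames and 100 * 112542 = 11254200, so at least one unseen shard must have held fewer.

```
# Re-derive the v2.0 frame total from the per-shard metadata files
# (assumes all 100 files are present locally; this diff view is truncated).
import json

total = 0
for i in range(100):                          # num_shards from train_v2.0/metadata.json
    with open(f"train_v2.0/metadata/metadata_{i}.json") as f:
        total += json.load(f)["shard_num_frames"]

assert total == 11254161                      # top-level num_images
```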
train_v2.0/metadata/metadata_0.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 0}

train_v2.0/metadata/metadata_1.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 1}

train_v2.0/metadata/metadata_10.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 10}

train_v2.0/metadata/metadata_11.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 11}

train_v2.0/metadata/metadata_12.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 12}

train_v2.0/metadata/metadata_13.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 13}

train_v2.0/metadata/metadata_14.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 14}

train_v2.0/metadata/metadata_15.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 15}

train_v2.0/metadata/metadata_16.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 16}

train_v2.0/metadata/metadata_17.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 17}

train_v2.0/metadata/metadata_18.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 18}

train_v2.0/metadata/metadata_19.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 19}

train_v2.0/metadata/metadata_2.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 2}

train_v2.0/metadata/metadata_20.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 20}

train_v2.0/metadata/metadata_21.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 21}

train_v2.0/metadata/metadata_22.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 22}

train_v2.0/metadata/metadata_23.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 23}

train_v2.0/metadata/metadata_24.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 24}

train_v2.0/metadata/metadata_25.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 25}

train_v2.0/metadata/metadata_26.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 26}

train_v2.0/metadata/metadata_27.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 27}

train_v2.0/metadata/metadata_28.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 28}

train_v2.0/metadata/metadata_29.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 29}

train_v2.0/metadata/metadata_3.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 3}

train_v2.0/metadata/metadata_30.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 30}

train_v2.0/metadata/metadata_31.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 31}

train_v2.0/metadata/metadata_32.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 32}

train_v2.0/metadata/metadata_33.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 33}

train_v2.0/metadata/metadata_34.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 34}

train_v2.0/metadata/metadata_35.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 35}

train_v2.0/metadata/metadata_36.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 36}

train_v2.0/metadata/metadata_37.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 37}

train_v2.0/metadata/metadata_38.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 38}

train_v2.0/metadata/metadata_39.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 39}

train_v2.0/metadata/metadata_4.json
DELETED
@@ -1 +0,0 @@
-{"shard_num_frames": 112542, "shard_ind": 4}