---
license: apache-2.0
task_categories:
- text-to-video
language:
- en
tags:
- data-juicer
- multimodal
- text-to-video
---
# <span style="font-family: 'Courier New', monospace; font-weight: bold">Data-Juicer Sandbox: A Comprehensive Suite for Multimodal Data-Model Co-development</span>

## Project Description

The emergence of large-scale multi-modal generative models has drastically advanced artificial intelligence, introducing unprecedented levels of performance and functionality. 
However, optimizing these models remains challenging due to historically isolated paths of model-centric and data-centric developments, leading to suboptimal outcomes and inefficient resource utilization. 
In response, we present a novel sandbox suite tailored for integrated data-model co-development. This sandbox provides a comprehensive experimental platform, enabling rapid iteration and insight-driven refinement of both data and models. 
Our proposed "Probe-Analyze-Refine" workflow, validated through application to [T2V-Turbo](https://github.com/Ji4chenLi/t2v-turbo), achieves a new state of the art on the [VBench leaderboard](https://huggingface.co/spaces/Vchitect/VBench_Leaderboard) with a 1.09% improvement over T2V-Turbo. Our experiment code and models are released at [Data-Juicer Sandbox](https://github.com/modelscope/data-juicer/blob/main/docs/Sandbox.md).


## Dataset Information

- The whole dataset is available [here](http://dail-wlcb.oss-cn-wulanchabu.aliyuncs.com/MM_data/our_refined_data/Data-Juicer-T2V/data_juicer_t2v_optimal_data_pool.zip) (about 227.5 GB).
- Number of samples: 147,176 (videos included; ~12.09% of the original dataset is kept). A loading sketch is shown after this list.
- The original dataset totals 1,217k instances from [InternVid](https://github.com/OpenGVLab/InternVideo/tree/main/Data/InternVid) (606k), [Panda-70M](https://github.com/snap-research/Panda-70M) (605k), and [MSR-VTT](https://www.microsoft.com/en-us/research/publication/msr-vtt-a-large-video-description-dataset-for-bridging-video-and-language/) (6k).
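
For convenience, here is a minimal inspection sketch. It assumes (this card does not specify the archive layout) that the extracted zip contains a JSONL metadata file whose records follow Data-Juicer's multimodal convention, i.e. a `text` caption field and a `videos` list of file paths; the file name below is hypothetical.

```python
# Minimal sketch for inspecting the refined data pool after unzipping.
# Assumptions (not guaranteed by this card): a JSONL metadata file with
# "text" (caption) and "videos" (list of relative paths) per record.
import json
from pathlib import Path

def iter_samples(jsonl_path):
    """Yield one metadata record per non-empty line of the JSONL file."""
    with open(jsonl_path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

if __name__ == "__main__":
    # Hypothetical path; adjust to wherever the archive was extracted.
    meta = Path("data_juicer_t2v_optimal_data_pool/data.jsonl")
    count = 0
    for sample in iter_samples(meta):
        count += 1
        if count <= 3:  # peek at a few records
            print(sample.get("text"), sample.get("videos"))
    print(f"total samples: {count}")  # expected: 147,176
```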


## Refining Recipe

```yaml
# global parameters
project_name: 'Data-Juicer-recipes-T2V-optimal'
dataset_path: '/path/to/your/dataset'  # path to your dataset directory or file
export_path: '/path/to/your/dataset.jsonl'

np: 4  # number of subprocesses used to process the dataset

# process schedule
# a list of several process operators with their arguments
process:
  - video_nsfw_filter:
      hf_nsfw_model: Falconsai/nsfw_image_detection
      score_threshold: 0.000195383
      frame_sampling_method: uniform
      frame_num: 3
      reduce_mode: avg
      any_or_all: any
      mem_required: '1GB'
  - video_frames_text_similarity_filter:
      hf_clip: openai/clip-vit-base-patch32
      min_score: 0.306337
      max_score: 1.0
      frame_sampling_method: uniform
      frame_num: 3
      horizontal_flip: false
      vertical_flip: false
      reduce_mode: avg
      any_or_all: any
      mem_required: '10GB'
```
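
With Data-Juicer installed, a recipe like this is typically applied via its `dj-process` entry point, e.g. `dj-process --config recipe.yaml`. For intuition only, the sketch below re-implements the gist of the two filters for a single (video, caption) pair: sample 3 frames uniformly, average the per-frame NSFW probability from `Falconsai/nsfw_image_detection`, average the per-frame CLIP text similarity from `openai/clip-vit-base-patch32`, and keep the sample only if both thresholds (copied from the recipe) are satisfied. This is not Data-Juicer's actual implementation; preprocessing and aggregation details may differ.

```python
# Rough, illustrative re-implementation of the two filters above for one
# (video, caption) pair. NOT Data-Juicer's code: frame sampling,
# preprocessing, and score aggregation details may differ.
import cv2
import torch
from PIL import Image
from transformers import (AutoImageProcessor, AutoModelForImageClassification,
                          CLIPModel, CLIPProcessor)

def sample_frames(video_path, frame_num=3):
    """Uniformly sample `frame_num` frames from a video as PIL images."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(frame_num):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i * total / frame_num))
        ok, bgr = cap.read()
        if ok:
            frames.append(Image.fromarray(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)))
    cap.release()
    return frames

@torch.no_grad()
def keep_sample(video_path, caption,
                nsfw_threshold=0.000195383, min_clip_score=0.306337):
    frames = sample_frames(video_path)

    # NSFW score: mean probability of the "nsfw" class over sampled frames
    # (assumes the model's config exposes an "nsfw" label, as this one does).
    # Models are loaded inline for brevity; cache them in real use.
    nsfw_proc = AutoImageProcessor.from_pretrained("Falconsai/nsfw_image_detection")
    nsfw_model = AutoModelForImageClassification.from_pretrained(
        "Falconsai/nsfw_image_detection")
    probs = nsfw_model(**nsfw_proc(images=frames,
                                   return_tensors="pt")).logits.softmax(dim=-1)
    nsfw_score = probs[:, nsfw_model.config.label2id["nsfw"]].mean().item()

    # Text-frame similarity: mean cosine similarity between the caption
    # embedding and each frame embedding.
    clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    out = clip(**clip_proc(text=[caption], images=frames,
                           return_tensors="pt", padding=True, truncation=True))
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    clip_score = (img @ txt.T).mean().item()

    # Keep only samples that pass both filters, mirroring the recipe.
    return nsfw_score <= nsfw_threshold and clip_score >= min_clip_score
```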