---
backbone:
- diffusion
domain:
- multi-modal
frameworks:
- pytorch
license: cc-by-nc-nd-4.0
metrics:
- realism
- image-video similarity
studios:
- damo/Image-to-Video
tags:
- image2video generation
- diffusion model
- 图到视频
- 图生视频
- 图片生成视频
- 生成
tasks:
- image-to-video
widgets:
- examples:
- inputs:
- data: XXX/test.jpg
name: image_path
name: 1
title: 示例1
inferencespec:
cpu: 4
gpu: 1
gpu_memory: 15000
memory: 32000
inputs:
- name: image_path
title: 图片的路径
type: str
validator:
max_words: /
task: image-to-video
---
# I2VGen-XL: A High-Definition Image-to-Video Generation Model
The **I2VGen-XL** project addresses the task of generating high-definition videos from input images. **I2VGen-XL** is one of the HD video generation foundation models developed by DAMO Academy. Its core consists of two stages that address semantic consistency and video quality respectively, with approximately 3.7 billion parameters in total. The model was pre-trained on a large-scale mixture of video and image data and fine-tuned on a small amount of high-quality data; because this data is broadly distributed and diverse in category, the model generalizes well to different types of input. Compared to existing video generation models, **I2VGen-XL** has clear advantages in clarity, texture, semantics, and temporal continuity.
Additionally, many of the design concepts and details of **I2VGen-XL** (such as the core UNet) are inherited from our publicly available work **VideoComposer**. For details, please refer to [VideoComposer](https://videocomposer.github.io) and the [ModelScope](https://github.com/modelscope/modelscope) code repository for this project.
<center>
<p align="center">
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/image/Fig_twostage.png"/><br/>
Fig.1 I2VGen-XL
<p>
</center>
<font color="#dd0000">Project experience address:</font> <font color="#0000ff">https://modelscope.cn/studios/damo/I2VGen-XL-Demo/summary</font>
## Introduction
As shown in Fig.2, **I2VGen-XL** is a video latent diffusion model (VLDM). It performs spatio-temporal modeling in the latent space with our purpose-built spatio-temporal UNet (ST-UNet) and reconstructs the final video through a decoder (for model details, please refer to [VideoComposer](https://videocomposer.github.io)). To generate 720P videos, we divide **I2VGen-XL** into two stages: the first stage ensures semantic consistency at low resolution, while the second stage uses a new VLDM to denoise the output, increase the resolution, and improve both temporal and spatial consistency. Through joint optimization of the model, data, and training, **I2VGen-XL** has the following characteristics:
- High-definition & widescreen: it directly generates 720P (1280x720) videos. Compared to existing open-source projects, not only is the resolution effectively improved, but the widescreen output also suits more scenarios.
- Continuity: specific training and inference strategies significantly improve the stability of detail generation in both the temporal and spatial dimensions.
- Good texture: training on video data collected in specific styles noticeably improves the texture of the generated videos, enabling sci-fi, cinematic-color, cartoon, sketch, and other styles.
- No watermark: the model is trained on our large-scale internal watermark-free video/image dataset and fine-tuned on high-quality data, so the generated watermark-free videos can be used on more video platforms with fewer restrictions.
Below are some examples generated by the model:
<center>
<p align="center">
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/image/fig1_overview.jpg"/>
<br/>
Fig.2 VLDM
<p>
</center>
**For display purposes, this page shows low-resolution GIFs, which reduce video quality; for the 720P results, please refer to the corresponding video links below.**
<center>
<table><center>
<tr>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/dragon2_rank_02-00-0021-001024.gif"/>
</center></td>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/laoshu_rank_02-01-0810-001024.gif"/>
</center></td>
</tr>
<tr>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/424319402790.mp4">HQ Video</a>
</center></td>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/423628044217.mp4">HQ Video</a>
</center></td>
</tr>
<tr>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/ac10af0b1c524b778aff60be5b7ecc4f_2_02_00_0065_rank_02-00-1256-001024.gif"/>
</center></td>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/ast_rank_02-00-0773-001024.gif"/>
</center></td>
</tr>
<tr>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/423965629168.mp4">HQ Video</a>
</center></td>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/423969933887.mp4">HQ Video</a>
</center></td>
</tr>
<tr>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/e3733444344741f1970cf2e92e617182_1_02_00_0199.gif"/>
</center></td>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/b307dad96c3d440e80514b1b3f3be5fd_1_rank_02-00-0068-000000.gif"/>
</center></td>
</tr>
<tr>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/423966661082.mp4">HQ Video</a>
</center></td>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/424613631285.mp4">HQ Video</a>
</center></td>
</tr>
<tr>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/robot1_rank_02-01-0009-009999.gif"/>
</center></td>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/d82ed4ad01034243ba88eaf9311c1edf_3_02_01_0193.gif"/>
</center></td>
</tr>
<tr>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/424612211915.mp4">HQ Video</a>
</center></td>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/424613123188.mp4">HQ Video</a>
</center></td>
</tr>
<tr>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/airship_0_rank_02-00-000000_rank_02-00-0653-001024.gif"/>
</center></td>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/airship_1_rank_02-01-000000_rank_02-00-1428-001024.gif"/>
</center></td>
</tr>
<tr>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/424616459162.mp4">HQ Video</a>
</center></td>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/424614735831.mp4">HQ Video</a>
</center></td>
</tr>
<tr>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/0ba38f2f287f446dac8de87291073e0c_3_rank_02-01-0118-000000.gif"/>
</center></td>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/03b401c825a2479eaf7b1b3252683a4b_3_02_00_0110_rank_02-00-1009-001024.gif"/>
</center></td>
</tr>
<tr>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/424617591002.mp4">HQ Video</a>
</center></td>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/423631572030.mp4">HQ Video</a>
</center></td>
</tr>
<tr>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/3e89356e6bd3470aaf3900b1b34c3ec2_0_rank_02-01-0126-000000.gif"/>
</center></td>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/6fd21439fce644afa3a2e9b057956d0f_0000000_rank_02-01-0159-001024.gif"/>
</center></td>
</tr>
<tr>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/423629092176.mp4">HQ Video</a>
</center></td>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/424616071017.mp4">HQ Video</a>
</center></td>
</tr>
<tr>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/293fdf76aa404971b1fbb66baf9cbaac_1_02_00_0123_rank_02-00-0288-001024.gif"/>
</center></td>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/426a7bee22034a88872dc8277ddbbf06_0_02_01_0023_rank_02-01-1090-001024.gif"/>
</center></td>
</tr>
<tr>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/424317682762.mp4">HQ Video</a>
</center></td>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/424313138794.mp4">HQ Video</a>
</center></td>
</tr>
<tr>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/a15bb09862b74b3c983a54b379912f81_0_02_00_0055_rank_02-01-0443-001024.gif"/>
</center></td>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/7716d91802614bf9a99174c05bd08f32_3_02_01_0157_rank_02-01-1199-001024.gif"/>
</center></td>
</tr>
<tr>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/423631376023.mp4">HQ Video</a>
</center></td>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/424616459198.mp4">HQ Video</a>
</center></td>
</tr>
<tr>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/indian_rank_02-00-0800-001024.gif"/>
</center></td>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/bike_rank_02-01-0007-001024.gif"/>
</center></td>
</tr>
<tr>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/424314646086.mp4">HQ Video</a>
</center></td>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/424610479196.mp4">HQ Video</a>
</center></td>
</tr>
<tr>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/panda_rank_02-01-0007-009999.gif"/>
</center></td>
<td ><center>
<img src="https://huggingface.co/damo-vilab/MS-Image2Video/resolve/main/assets/gif/bf19a66dca0a47799923c47249982ffd_0000000_rank_02-01-0960-001024.gif"/>
</center></td>
</tr>
<tr>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/424321438157.mp4">HQ Video</a>
</center></td>
<td ><center>
<a href="https://cloud.video.taobao.com/play/u/null/p/1/e/6/t/1/424614283086.mp4">HQ Video</a>
</center></td>
</tr>
</table>
</center>
> [<font color="#dd0000">Update 2023.08.25</font>] ModelScope released version 1.8.4, and the I2VGen-XL model has been updated to model weights v1.1.0.
### Dependencies
First, make sure the `ffmpeg` command is installed on your system. If it is not, you can install it with:
```bash
sudo apt-get update && sudo apt-get install -y ffmpeg libsm6 libxext6
```
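As an optional sanity check before running the pipelines (which rely on `ffmpeg` for video encoding), you can confirm from Python that the binary is actually reachable on `PATH`. The helper below is an illustrative sketch of ours, not part of the official project:

```python
import shutil

def ffmpeg_available() -> bool:
    """Return True if the `ffmpeg` executable is on PATH."""
    return shutil.which("ffmpeg") is not None

if __name__ == "__main__":
    # Warn early instead of failing mid-pipeline during video export.
    if not ffmpeg_available():
        print("ffmpeg not found; install it with the apt-get command above")
```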
Second, the **I2VGen-XL** project is built on the ModelScope codebase; the following dependencies need to be installed:
```bash
pip install modelscope==1.8.4
pip install xformers==0.0.20
pip install torch==2.0.1
pip install "open_clip_torch>=2.0.2"
pip install opencv-python-headless
pip install opencv-python
pip install "einops>=0.4"
pip install rotary-embedding-torch
pip install fairscale
pip install scipy
pip install imageio
pip install pytorch-lightning
pip install torchsde
```
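To quickly see which of the packages above are still missing from the current environment, a small standard-library helper can be used. This is our own sketch (the helper name and `REQUIRED` list are not part of the project):

```python
from importlib.metadata import version, PackageNotFoundError

# Distribution names as they appear in the pip commands above.
REQUIRED = [
    "modelscope", "xformers", "torch", "open_clip_torch",
    "opencv-python", "einops", "rotary-embedding-torch",
    "fairscale", "scipy", "imageio", "pytorch-lightning", "torchsde",
]

def missing_packages(names=REQUIRED):
    """Return the subset of `names` with no installed distribution."""
    missing = []
    for name in names:
        try:
            version(name)
        except PackageNotFoundError:
            missing.append(name)
    return missing

if __name__ == "__main__":
    print(missing_packages() or "all dependencies installed")
```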
### Inference
For more experiments, please stay tuned for our upcoming technical report and open-source code release.
### Code Example
```python
from modelscope.pipelines import pipeline
from modelscope.outputs import OutputKeys
pipe = pipeline(task='image-to-video', model='damo/Image-to-Video', model_revision='v1.1.0')
IMG_PATH = 'test.jpg'  # your image path (a URL or a local file)
output_video_path = pipe(IMG_PATH, output_video='./output.mp4')[OutputKeys.OUTPUT_VIDEO]
print(output_video_path)
```
To additionally generate a super-resolution (upscaled) video, use the following example:
```python
from modelscope.pipelines import pipeline
from modelscope.outputs import OutputKeys
# If you only have one GPU, make sure it has more than 50 GB of memory;
# otherwise use two GPUs and assign them via the `device` argument.
pipe1 = pipeline(task='image-to-video', model='damo/Image-to-Video', model_revision='v1.1.0', device='cuda:0')
pipe2 = pipeline(task='video-to-video', model='damo/Video-to-Video', model_revision='v1.1.0', device='cuda:0')
# image to video
output_video_path = pipe1("test.jpg", output_video='./i2v_output.mp4')[OutputKeys.OUTPUT_VIDEO]
# video super-resolution
p_input = {'video_path': output_video_path}
new_output_video_path = pipe2(p_input, output_video='./v2v_output.mp4')[OutputKeys.OUTPUT_VIDEO]
```
For more super-resolution details, please visit <a href="https://modelscope.cn/models/damo/Video-to-Video/summary">Video-to-Video</a>. We also provide a user interface: <a href="https://modelscope.cn/studios/damo/I2VGen-XL-Demo/summary">I2VGen-XL-Demo</a>.
### Limitations
The model in the **I2VGen-XL** project still has the following limitations:
- Limited ability to generate small objects: some errors may occur when generating smaller objects.
- Limited ability to generate fast-moving objects: artifacts and implausible results may appear for fast motion.
- Slow generation speed: generating high-definition videos noticeably slows down inference.
In addition, our research found that the spatial quality of the generated video and the speed of its temporal change are, to some extent, mutually exclusive; for this project we chose a compromise model that balances the two.
**If you are trying out our model, we suggest first obtaining a first-stage video whose semantics match your expectations (when running offline, you can modify `Seed` in the `configuration.json` file to generate different videos), and only then running the second-stage video refinement (as that process is time-consuming). This improves efficiency and makes it easier to get good results.**
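Following the note above, changing the seed between first-stage runs can be scripted. This is a minimal sketch; the exact location of `configuration.json` (inside the downloaded model directory) and the top-level `Seed` key are assumptions based on the note, so check your local file:

```python
import json
from pathlib import Path

def set_seed(config_path, seed):
    """Rewrite the `Seed` field of a configuration.json file in place."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    config["Seed"] = seed  # assumed top-level key, per the note above
    path.write_text(json.dumps(config, indent=2, ensure_ascii=False))
    return seed
```

After each seed change, re-run the first-stage pipeline until the semantics look right, then run the second stage once.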
## Training Data
Our training data comes from a wide range of sources and has the following properties:
- Mixed training: the model is trained on videos and images at a 7:1 ratio to ensure video generation quality.
- Wide class distribution: the dataset, billions of samples in total, covers most real-world categories, including people, animals, vehicles, science fiction, scenes, and so on.
- Wide source distribution: the data comes from open-source datasets, video websites, and other internal sources, with varying resolutions and aspect ratios.
- Curated high-quality data: to improve generation quality, we constructed approximately 200,000 high-quality samples for fine-tuning the pre-trained model.
More powerful and flexible video generation models will continue to be released, and the technical report behind them is currently being written. Please stay tuned.
## References
```
@article{videocomposer2023,
title={VideoComposer: Compositional Video Synthesis with Motion Controllability},
author={Wang, Xiang* and Yuan, Hangjie* and Zhang, Shiwei* and Chen, Dayou* and Wang, Jiuniu and Zhang, Yingya and Shen, Yujun and Zhao, Deli and Zhou, Jingren},
journal={arXiv preprint arXiv:2306.02018},
year={2023}
}
@inproceedings{videofusion2023,
title={VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation},
author={Luo, Zhengxiong and Chen, Dayou and Zhang, Yingya and Huang, Yan and Wang, Liang and Shen, Yujun and Zhao, Deli and Zhou, Jingren and Tan, Tieniu},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2023}
}
```
## License Agreement
Our code and model weights are available for personal/academic research only; commercial use is currently not supported.
## Contact Us
If you would like to contact our algorithm/product team, or to join our algorithm team (internship or full-time), please feel free to email us at <[email protected]>.