lixinhao committed
Commit 8b526c0 · verified · 1 Parent(s): 1c0d9de

Update README.md

Files changed (1): README.md +4 -1
README.md CHANGED
@@ -96,7 +96,10 @@ VideoChat-Flash-7B is constructed upon UMT-L (300M) and Qwen2-7B, employing only
 First, you need to install [flash attention2](https://github.com/Dao-AILab/flash-attention) and some other modules. We provide a simple installation example below:
 ```
 pip install transformers==4.39.2
-pip install timm
+pip install av
+pip install imageio
+pip install decord
+pip install opencv-python
 pip install flash-attn --no-build-isolation
 ```
 Then you could use our model:
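The added packages (av, imageio, decord, opencv-python) are video-decoding dependencies. Below is a minimal sketch of the kind of frame loading they enable, using decord with an assumed local `video.mp4` and a hypothetical 16-frame uniform sample; it is illustrative only and not the README's own usage code.

```
# Illustrative only: uniformly sample frames from a video with decord,
# the kind of preprocessing the added dependencies support.
import numpy as np
from decord import VideoReader, cpu

video_path = "video.mp4"   # assumed local file
num_frames = 16            # assumed sampling budget

vr = VideoReader(video_path, ctx=cpu(0))                       # decode on CPU
indices = np.linspace(0, len(vr) - 1, num_frames).astype(int)  # uniform sampling
frames = vr.get_batch(indices).asnumpy()                       # (16, H, W, 3) uint8 array
print(frames.shape)
```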