Update README.md
README.md CHANGED
@@ -96,7 +96,10 @@ VideoChat-Flash-7B is constructed upon UMT-L (300M) and Qwen2-7B, employing only
 First, you need to install [flash attention2](https://github.com/Dao-AILab/flash-attention) and some other modules. We provide a simple installation example below:
 ```
 pip install transformers==4.39.2
-pip install
+pip install av
+pip install imageio
+pip install decord
+pip install opencv-python
 pip install flash-attn --no-build-isolation
 ```
 Then you could use our model:
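For context, the packages added in this diff (av, imageio, decord, opencv-python) are video I/O dependencies, while flash-attn enables FlashAttention-2 at inference time. Below is a minimal sketch of what "use our model" typically looks like after this install step, assuming a hypothetical repo id `OpenGVLab/VideoChat-Flash-7B` and the standard transformers/decord APIs; the actual model id and inference call should follow the model card.

```python
# Minimal sketch (not part of the diff): load the model with FlashAttention-2 enabled
# and uniformly sample frames from a video with decord.
import torch
import numpy as np
from decord import VideoReader, cpu
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "OpenGVLab/VideoChat-Flash-7B"  # placeholder repo id; use the id from the model card

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(
    MODEL_ID,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # requires the flash-attn install above
).cuda().eval()

# Uniformly sample 16 frames from a local video file.
vr = VideoReader("example.mp4", ctx=cpu(0))
indices = np.linspace(0, len(vr) - 1, num=16).astype(int)
frames = vr.get_batch(indices).asnumpy()  # (16, H, W, 3) uint8 array

# Pass `frames` to the model's chat/generate interface as documented in the model card.
```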