Update README.md
README.md CHANGED
@@ -94,10 +94,13 @@ VideoChat-Flash-2B is constructed upon UMT-L (300M) and Qwen2_5-2B, employing on
 ## 🚀 How to use the model
 
 
-
-
-
-
+We provide a simple conversation example for using our model. You need to install [FlashAttention-2](https://github.com/Dao-AILab/flash-attention) to use our visual encoder.
+```
+pip install transformers==4.39.2
+pip install timm
+pip install flash-attn --no-build-isolation
+```
+Then you can use our model:
 ```python
 from transformers import AutoModel, AutoTokenizer
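The hunk ends right after the import line, so the rest of the usage snippet is not visible in this diff. Below is a minimal sketch of how the loading step might look, assuming the checkpoint is published on the Hugging Face Hub under a repo id like `OpenGVLab/VideoChat-Flash-Qwen2_5-2B` (hypothetical here) and ships custom modeling code that `transformers` loads via `trust_remote_code=True`; the actual inference call is defined by that remote code.

```python
from transformers import AutoModel, AutoTokenizer

# Hypothetical repo id; substitute the actual VideoChat-Flash-2B checkpoint name.
model_path = "OpenGVLab/VideoChat-Flash-Qwen2_5-2B"

# trust_remote_code=True is required because the model ships its own architecture code.
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).half().cuda()

# The video-chat inference interface (e.g. a helper that takes a video path and a
# text prompt) comes from the model's remote code; check the model card for the
# exact call signature before running inference.
```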