lixinhao committed on
Commit 1c0d9de · verified · 1 Parent(s): 9c4dc80

Update README.md

Files changed (1)
  1. README.md +1 -3
README.md CHANGED
@@ -93,9 +93,7 @@ VideoChat-Flash-7B is constructed upon UMT-L (300M) and Qwen2-7B, employing only
 
 ## 🚀 How to use the model
 
-
-
-We provide the simple conversation process for using our model. You need to install [flash attention2](https://github.com/Dao-AILab/flash-attention) to use our visual encoder.
+First, you need to install [flash attention2](https://github.com/Dao-AILab/flash-attention) and some other modules. We provide a simple installation example below:
 ```
 pip install transformers==4.39.2
 pip install timm
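
The hunk is cut off after the first two pip commands. A minimal sketch of what a complete setup matching the new wording could look like, assuming flash attention2 is installed from PyPI as `flash-attn` (the full README may instead build it from source; only the two pinned commands above are confirmed by this diff):

```
# Versions shown in the diff
pip install transformers==4.39.2
pip install timm
# Assumption: flash attention2 installed from PyPI; --no-build-isolation is the
# commonly recommended flag for building its CUDA kernels against an existing torch.
pip install flash-attn --no-build-isolation
```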