Update README.md
README.md CHANGED
@@ -19,6 +19,12 @@ It is _**the largest open-source vision/vision-language foundation model (14B)**
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/QmVXOyr4uFQLx5Q-WLn9-.png)
 
+## How to Run?
+
+Please refer to this [README](https://github.com/OpenGVLab/InternVL/tree/main/llava#internvl-for-multimodal-dialogue-using-llava) to run this model.
+
+Note: We have retained the original documentation of LLaVA 1.5 as a more detailed manual. In most cases, you will only need to refer to the new documentation that we have added.
+
 ## Model details
 
 **Model type:**
@@ -56,12 +62,6 @@ The primary intended users of the model are researchers and hobbyists in compute
 ## Evaluation dataset
 A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.
 
-## How to Run?
-
-Please refer to this [README](https://github.com/OpenGVLab/InternVL/tree/main/llava#internvl-for-multimodal-dialogue-using-llava) to run this model.
-
-Note: We have retained the original documentation of LLaVA 1.5 as a more detailed manual. In most cases, you will only need to refer to the new documentation that we have added.
-
 ## Acknowledgement
 
 This model card is adapted from [LLaVA's model card](https://huggingface.co/liuhaotian/llava-v1.5-13b). Thanks for their awesome work!
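For context on the "How to Run?" section this commit moves: below is a minimal sketch of what running the model through the linked LLaVA-based codebase typically looks like, following the upstream LLaVA 1.5 quick-start pattern. The repo id, image file, and generation settings are illustrative assumptions, not values from this card; the linked README remains the authoritative guide.

```python
# Minimal sketch, assuming the LLaVA-1.5-style codebase shipped in
# InternVL/llava is installed (e.g. `pip install -e .` in that directory).
# Function names follow upstream LLaVA 1.5; the model path below is a
# hypothetical example, not confirmed by this card.
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

model_path = "OpenGVLab/InternVL-Chat-ViT-6B-Vicuna-13B"  # hypothetical repo id

# LLaVA's quick-start passes a lightweight args object into eval_model,
# which loads the checkpoint, preprocesses the image, and prints the answer.
args = type("Args", (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": "Describe this image in detail.",
    "conv_mode": None,
    "image_file": "view.jpg",   # path or URL to a test image (assumed)
    "sep": ",",
    "temperature": 0,           # greedy decoding
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

eval_model(args)
```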