Update README.md
README.md (changed)
@@ -43,14 +43,14 @@ For more details about data preparation, please see [here](https://github.com/Op
| name | image size | MMMU<br>(val) | MMMU<br>(test) | MathVista<br>(testmini) | MMB<br>(test) | MMB-CN<br>(test) | MMVP | MME | ScienceQA<br>(image) | POPE | TextVQA<br>(val) | SEEDv1<br>(image) | VizWiz<br>(test) | GQA<br>(test) |
| ------------------ | ---------- | ------------- | -------------- | ----------------------- | ------------- | ---------------- | ---- | -------- | -------------------- | ---- | ------- | ----------------- | ---------------- | ------------- |
| GPT-4V\* | unknown | 56.8 | 55.7 | 49.9 | 77.0 | 74.4 | 38.7 | 1409/517 | - | - | 78.0 | 71.6 | - | - |
| Gemini Ultra\* | unknown | 59.4 | - | 53.0 | - | - | - | - | - | - | 82.3 | - | - | - |
| Gemini Pro\* | unknown | 47.9 | - | 45.2 | 73.6 | 74.3 | 40.7 | 1497/437 | - | - | 74.6 | 70.7 | - | - |
| Qwen-VL-Plus\* | unknown | 45.2 | 40.8 | 43.3 | 67.0 | 70.7 | - | 1681/502 | - | - | 78.9 | 65.7 | - | - |
| Qwen-VL-Max\* | unknown | 51.4 | 46.8 | 51.0 | 77.6 | 75.7 | - | - | - | - | 79.5 | - | - | - |
| | | | | | | | | | | | | | | |
| LLaVA-NeXT-34B | 672x672 | 51.1 | 44.7 | 46.5 | 79.3 | 79.0 | - | 1631/397 | 81.8 | 87.7 | 69.5 | 75.9 | 63.8 | 67.1 |
| InternVL-Chat-V1.2 | 448x448 | 51.6 | 46.2 | 47.7 | 82.2 | 81.2 | 56.7 | 1672/509 | 83.3 | 88.0 | 69.7 | 75.6 | 60.0 | 64.0 |

- MMBench results are collected from the [leaderboard](https://mmbench.opencompass.org.cn/leaderboard).
- In most benchmarks, InternVL-Chat-V1.2 achieves better performance than LLaVA-NeXT-34B.

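As a quick check of that claim, the comparison can be reproduced directly from the table values. The snippet below is a minimal sketch with scores transcribed by hand from the two open-model rows above (it is not an evaluation script from this repo); MME and MMVP are omitted because MME is a score pair and the MMVP entry for LLaVA-NeXT-34B is missing.

```python
# Hand-copied scores from the README table above (higher is better).
llava_next_34b = {
    "MMMU (val)": 51.1, "MMMU (test)": 44.7, "MathVista": 46.5,
    "MMB (test)": 79.3, "MMB-CN (test)": 79.0, "ScienceQA": 81.8,
    "POPE": 87.7, "TextVQA": 69.5, "SEEDv1": 75.9, "VizWiz": 63.8, "GQA": 67.1,
}
internvl_chat_v1_2 = {
    "MMMU (val)": 51.6, "MMMU (test)": 46.2, "MathVista": 47.7,
    "MMB (test)": 82.2, "MMB-CN (test)": 81.2, "ScienceQA": 83.3,
    "POPE": 88.0, "TextVQA": 69.7, "SEEDv1": 75.6, "VizWiz": 60.0, "GQA": 64.0,
}

# Count the benchmarks where InternVL-Chat-V1.2 is ahead.
wins = [name for name, score in internvl_chat_v1_2.items() if score > llava_next_34b[name]]
print(f"InternVL-Chat-V1.2 leads on {len(wins)}/{len(internvl_chat_v1_2)} benchmarks")
# -> InternVL-Chat-V1.2 leads on 8/11 benchmarks
```
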
@@ -65,7 +65,7 @@ The hyperparameters used for finetuning are listed in the following table.
| Hyperparameter | Trainable Param | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| ------------------ | ---------------- | ----------------- | ------------- | ------ | ---------- | ------------ |
| InternVL-Chat-V1.2 | 40B (full model) | 512 | 1e-5 | 1 | 2048 | 0.05 |

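For orientation, the row above can be read as a plain training configuration. The sketch below is only illustrative: the `FinetuneConfig` class and the GPU-count arithmetic are assumptions made for this example, not the actual finetuning script of this repository.

```python
from dataclasses import dataclass

@dataclass
class FinetuneConfig:
    """Hypothetical container mirroring the hyperparameter table above."""
    trainable_params: str = "40B (full model)"  # full-parameter finetuning
    global_batch_size: int = 512                # effective batch size per optimizer step
    learning_rate: float = 1e-5
    num_epochs: int = 1
    max_length: int = 2048                      # maximum sequence length in tokens
    weight_decay: float = 0.05

cfg = FinetuneConfig()

# Example: derive gradient accumulation for an assumed 64-GPU setup with
# a per-GPU micro-batch of 4 (these two numbers are not from the README).
num_gpus, per_gpu_batch = 64, 4
grad_accum_steps = cfg.global_batch_size // (num_gpus * per_gpu_batch)
print(f"gradient_accumulation_steps = {grad_accum_steps}")  # -> 2
```
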
## Model Details