Visual Question Answering
Transformers
Safetensors
llava
image-text-to-text
AIGC
LLaVA
Inference Endpoints
huangfx1020 committed
Commit 605edd6 · verified · 1 Parent(s): c240d7f

Update README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -30,7 +30,8 @@ Specifically, (1) we first construct **a large-scale and high-quality human-rela
 ## Result
 human-llava has a good performance in both general and special fields
 
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/668782ad3a2276d5d0dea273/n2YAkXsnVKxCp15eE4dKp.png)
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/668782ad3a2276d5d0dea273/FjdsGPvIjRvJj2XjshsMS.png)
 
 
 ## News and Update 🔥🔥🔥