Visual Question Answering · Transformers · Safetensors · llava · image-text-to-text · AIGC · LLaVA · Inference Endpoints
ponytail committed (verified)
Commit 93e0bf8 · 1 Parent(s): fc1ad54

Update README.md

Files changed (1)
README.md +1 -1
README.md CHANGED
@@ -33,7 +33,7 @@ human-llava has a good performance in both general and special fields
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64259db7d3e6fdf87e4792d0/zFuyEPb6ZOt-HHadE2K9-.png)
 
 ## News and Update 🔥🔥🔥
-* Sep.12, 2024. **🤗[HumanCaption-10M](https://huggingface.co/OpenFace-CQUPT/HumanCaption-10M), is released!👍👍👍**
+* Sep.12, 2024. **🤗[HumanCaption-10M](https://huggingface.co/datasets/OpenFace-CQUPT/HumanCaption-10M), is released!👍👍👍**
 * Sep.8, 2024. **🤗[HumanLLaVA-llama-3-8B](https://huggingface.co/OpenFace-CQUPT/Human_LLaVA), is released!👍👍👍**
 
 