Visual Question Answering · Transformers · Safetensors · llava · image-text-to-text · AIGC · LLaVA · Inference Endpoints
ponytail committed · Commit 40a1b9b · verified · 1 Parent(s): 2068c4e

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -4,7 +4,7 @@ base_model: meta-llama/Meta-Llama-3-8B-Instruct
 library_name: transformers
 tags:
 - AIGC
-- LlaVA
+- LLaVA
 datasets:
 - OpenFace-CQUPT/FaceCaption-15M
 metrics:
@@ -68,7 +68,7 @@ print(predict)
 HumanCaption-10M(self construct): Coming Soon!
 
 #### Instruction Tuning Stage
-All public data sets have been filtered, and we will consider publishing all processed text in the future
+**All public data sets have been filtered, and we will consider publishing all processed text in the future**
 
 HumanCaptionHQ-300K(self construct): Coming Soon!
 
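For context, the second hunk's header carries the line `print(predict)`, i.e. the README's inference example ends by printing the model's prediction. Below is a minimal, hypothetical sketch of what such a call could look like with the standard `transformers` llava API; the model ID, image path, prompt format, and generation settings are assumptions for illustration, not the repository's actual snippet.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

# Hypothetical model ID: substitute this repository's actual ID.
model_id = "OpenFace-CQUPT/Human_LLaVA"

processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Any local face image; llava-style prompts mark where the image goes.
image = Image.open("face.jpg")
prompt = "USER: <image>\nDescribe this person. ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)

# Decode and print the prediction, mirroring the README's final line.
predict = processor.decode(output[0], skip_special_tokens=True)
print(predict)
```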