MJ-Bench/DDPO-alignment-gpt-4v
Text-to-Image · stable-diffusion · stable-diffusion-diffusers · DDPO
arXiv: 2407.04842
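Since the repository is tagged stable-diffusion-diffusers, the checkpoint should be loadable through the diffusers library. The snippet below is a minimal sketch, assuming the repo contains a full StableDiffusionPipeline (rather than only a fine-tuned UNet or LoRA weights); the prompt and output filename are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DDPO-aligned checkpoint from the Hub.
# Assumption: the repo is stored as a complete diffusers pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "MJ-Bench/DDPO-alignment-gpt-4v",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate a sample image (example prompt, not from the model card).
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("sample.png")
```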
Commit History for README.md (branch: main)
Upload README.md with huggingface_hub · ca1adea (verified) · yichaodu committed on Jul 9, 2024
Update README.md · ead69eb (verified) · Zhaorun committed on Jul 9, 2024
Upload README.md with huggingface_hub · f23a498 (verified) · yichaodu committed on Jul 8, 2024