## Model Details

This model has been fine-tuned to convert short prompts into detailed prompts suitable for Stable Diffusion or Flux-based image generation. The expanded prompts let users express more specific and nuanced requests, resulting in higher-quality and more coherent images.
### Model Description
- Developed by: Imran Ali
- Model type: T5 (Text-to-Text Transfer Transformer)
- Language(s) (NLP): English
- License: apache-2.0
- Finetuned from model: t5-small
- Demo: Demo Space
## How to Get Started with the Model
Use the code below to get started with the model. It requires the transformers and sentencepiece packages.
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Load the tokenizer and model
tokenizer = T5Tokenizer.from_pretrained("imranali291/flux-prompt-enhancer")
model = T5ForConditionalGeneration.from_pretrained("imranali291/flux-prompt-enhancer")

# Example input
input_text = "Futuristic cityscape at twilight descent."

# Tokenize input
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

# Generate the enhanced prompt; sampling settings encourage varied, detailed output
output = model.generate(input_ids, max_length=128, eos_token_id=tokenizer.eos_token_id, do_sample=True, top_p=0.9, temperature=0.7, repetition_penalty=2.5)

# Decode and print the enhanced prompt
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
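
The enhanced prompt can then be passed to an image-generation pipeline. The sketch below is one possible setup using the diffusers library; the FLUX.1-schnell checkpoint, the CPU-offload call, and the sampling settings are illustrative assumptions rather than part of this model.

```python
# Minimal sketch: generate an image from the enhanced prompt with diffusers.
# Assumes `diffusers` and a GPU are available; black-forest-labs/FLUX.1-schnell
# is only an example checkpoint; any supported Stable Diffusion or Flux model works.
import torch
from diffusers import FluxPipeline

enhanced_prompt = tokenizer.decode(output[0], skip_special_tokens=True)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # offload idle weights to CPU to reduce VRAM usage

image = pipe(
    enhanced_prompt,
    num_inference_steps=4,  # the schnell variant is distilled for few steps
    guidance_scale=0.0,     # schnell does not use classifier-free guidance
).images[0]
image.save("enhanced_prompt_image.png")
```

If you are targeting Stable Diffusion rather than Flux, the same idea applies with StableDiffusionPipeline and a Stable Diffusion checkpoint.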