GPT-2
A GPT-2 model fine-tuned on an Urdu news dataset using a causal language modeling (CLM) objective.
How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned tokenizer and model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("Imran1/gpt2-urdu-news")
model = AutoModelForCausalLM.from_pretrained("Imran1/gpt2-urdu-news")
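Continuing the snippet above, a minimal generation sketch with the pipeline API; the seed value, the Urdu prompt, and the generation parameters are illustrative choices, not part of the original card:

from transformers import pipeline, set_seed

# Reuse the objects loaded above in a text-generation pipeline.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
set_seed(42)  # fix the seed so sampled outputs are reproducible
# Placeholder Urdu prompt ("this news"); swap in your own text.
generator("یہ خبر", max_length=50, num_return_sequences=2)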
Training data
The model was fine-tuned from the GPT-2 base checkpoint for text generation on only 1,000 samples of Urdu news, due to resource limitations, so output quality may be limited.
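For reference, a minimal sketch of what such a CLM fine-tune can look like with the Trainer API; the file name urdu_news.csv, the "text" column, and the hyperparameters below are assumptions, not the exact recipe used here:

from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical data file: any text dataset with a "text" column works the same way.
dataset = load_dataset("csv", data_files="urdu_news.csv")["train"].select(range(1000))

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# mlm=False selects the causal LM objective (next-token prediction).
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-urdu-news",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()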
Evaluation results
Training loss: 3.042
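For context, a cross-entropy loss of 3.042 corresponds to a training perplexity of roughly exp(3.042) ≈ 21.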