---
base_model:
- openai-community/gpt2
language:
- en
- ta
license: mit
tags:
- gpt2
- text-generation
- QnQ
datasets:
- varshil27/1mg-train-data-LLama2-formatted
- karthikqnq/1mgdataset
- anjandash/java-8m-methods-v2
metrics:
- accuracy
---
# QnQGPT Model
This is a custom GPT model based on the GPT-2 architecture.
## Model Details
- Model Type: GPT-2
- Base Model: gpt2
- Training Data: [Describe your training data]
- Use Cases: [Describe intended use cases]
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("karthikqnq/qnqgpt")
tokenizer = AutoTokenizer.from_pretrained("karthikqnq/qnqgpt")

# Generate a continuation of the prompt
text = "Hello, how are"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, pad_token_id=tokenizer.eos_token_id)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
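The snippet above uses greedy decoding with default settings. For more varied output you can enable sampling. The sketch below wraps generation in a helper with common sampling parameters (`temperature`, `top_p`); the parameter values shown are illustrative defaults, not tuned recommendations for this model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer


def generate(prompt, model_name="karthikqnq/qnqgpt",
             max_new_tokens=50, temperature=0.8, top_p=0.95):
    """Load the model and sample a continuation of `prompt`."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,  # cap on newly generated tokens
        do_sample=True,                 # sample instead of greedy decoding
        temperature=temperature,        # higher = more random
        top_p=top_p,                    # nucleus sampling cutoff
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Call it as `generate("Hello, how are")`; lowering `temperature` makes the output more deterministic.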
## Training Details
[Add your training details here]
## Limitations
[Add model limitations here]
## License
This model is released under the MIT License.