Update README.md

README.md CHANGED
@@ -1,13 +1,42 @@
 ---
 title: English Tamil
 emoji: 🐢
-colorFrom:
-colorTo:
+colorFrom: green
+colorTo: gray
 sdk: gradio
 sdk_version: 4.23.0
 app_file: app.py
 pinned: false
 license: mit
 ---
+### Model Information
 
-
+**Training Details**
+
+- **This model has been fine-tuned for English-to-Tamil translation.**
+- **Training Duration: over 10 hours**
+- **Loss Achieved: 0.6**
+- **Model Architecture**
+- **It is based on the Transformer architecture and is optimized for sequence-to-sequence tasks.**
+
+## Inference
+1. **How to use the model in a notebook**:
+```python
+# Load the model and tokenizer directly from the Hugging Face Hub
+import torch
+from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+checkpoint = "suriya7/English-to-Tamil"
+tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
+
+def language_translator(text):
+    tokenized = tokenizer([text], return_tensors='pt')
+    out = model.generate(**tokenized, max_length=128)
+    return tokenizer.decode(out[0], skip_special_tokens=True)
+
+text_to_translate = "hardwork never fail"
+output = language_translator(text_to_translate)
+print(output)
+```
+Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
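
Beyond the single-sentence helper added to the README above, the same checkpoint can also be run in batches and, when available, on a GPU. The sketch below is illustrative rather than part of the committed README; the example sentences, the `device` handling, and the `num_beams=4` setting are assumptions, not values taken from this Space.

```python
# Illustrative batched-inference sketch (not part of the committed README).
# Assumes the same suriya7/English-to-Tamil checkpoint used above.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "suriya7/English-to-Tamil"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint).to(device)

# Hypothetical example inputs; padding lets the sentences share one batch.
sentences = ["hardwork never fail", "knowledge is power"]
batch = tokenizer(sentences, return_tensors="pt", padding=True).to(device)

with torch.no_grad():
    out = model.generate(**batch, max_length=128, num_beams=4)

for src, tgt in zip(sentences, tokenizer.batch_decode(out, skip_special_tokens=True)):
    print(f"{src} -> {tgt}")
```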
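
The front matter above declares `sdk: gradio` (4.23.0) with `app_file: app.py`, but this diff does not include `app.py` itself. A minimal sketch of what such an app could look like, reusing the translation helper from the README, follows; the interface labels and layout are assumptions, not the Space's actual code.

```python
# Hypothetical app.py for a Gradio Space wrapping the translator.
# This is a sketch; the Space's real app.py is not shown in this diff.
import gradio as gr
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "suriya7/English-to-Tamil"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def language_translator(text):
    # Same helper as in the README: tokenize, generate, decode.
    tokenized = tokenizer([text], return_tensors="pt")
    out = model.generate(**tokenized, max_length=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)

demo = gr.Interface(
    fn=language_translator,
    inputs=gr.Textbox(label="English text"),
    outputs=gr.Textbox(label="Tamil translation"),
    title="English to Tamil",
)

if __name__ == "__main__":
    demo.launch()
```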