---
{}
---
## Example Usage
This section demonstrates how to use the `XiaoZhang98/byT5-DRS` model with the Hugging Face Transformers library to generate a Discourse Representation Structure (DRS) for an example sentence.
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("XiaoZhang98/byT5-DRS")
model = T5ForConditionalGeneration.from_pretrained("XiaoZhang98/byT5-DRS")

# Example sentence
example = "I am a student."

# Tokenize and prepare the input
x = tokenizer(example, return_tensors="pt", padding=True, truncation=True, max_length=512)["input_ids"]

# Generate output; raise max_new_tokens so the DRS is not cut off
# at the short default generation length
output = model.generate(x, max_new_tokens=512)

# Decode and print the output text
pred_text = tokenizer.decode(output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(pred_text)
```
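Because byT5 is a byte-level model, its tokenizer needs no learned subword vocabulary: each UTF-8 byte `b` maps to token id `b + 3`, with ids 0, 1, and 2 reserved for the pad, end-of-sequence, and unknown tokens (as implemented in the Hugging Face `ByT5Tokenizer`). A minimal sketch of this id scheme, with no model download required; the helper `byt5_ids` is illustrative and not part of the library:

```python
def byt5_ids(text: str) -> list[int]:
    """Map a string to ByT5-style token ids: UTF-8 byte value + 3.

    Ids 0, 1, and 2 are reserved for <pad>, </s>, and <unk>,
    so the 256 possible byte values occupy ids 3..258.
    Note: illustrative helper, not a library function.
    """
    return [b + 3 for b in text.encode("utf-8")]


ids = byt5_ids("I am a student.")
print(ids[:4])   # ids for "I", " ", "a", "m" -> [76, 35, 100, 112]
print(len(ids))  # one id per UTF-8 byte: 15 for this ASCII sentence
```

This is also why the `max_length=512` above is measured in bytes, not words: a sentence consumes one token per UTF-8 byte, so non-ASCII text uses more of the budget than its character count suggests.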