---
license: apache-2.0
tags:
model-index:
- name: data2vec-nlp-base
  results: []
---

# Data2Vec NLP Base

This model was converted from the original `fairseq` checkpoint.
The original weights are available at https://dl.fbaipublicfiles.com/fairseq/data2vec/nlp_base.pt

Example usage:
```python
from transformers import RobertaTokenizer, Data2VecTextConfig, Data2VecTextForSequenceClassification

# Released versions of `transformers` expose the data2vec text model
# under the `Data2VecText*` class names.
tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
config = Data2VecTextConfig.from_pretrained("edugp/data2vec-nlp-base")
model = Data2VecTextForSequenceClassification.from_pretrained("edugp/data2vec-nlp-base", config=config)
# The classification head is randomly initialized: fine-tune this model before use.

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

prediction_logits = outputs.logits
```
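
Because the classification head is freshly initialized, the logits above are only meaningful after fine-tuning. Below is a minimal fine-tuning sketch using the `transformers` `Trainer`; the dataset (`imdb`), label count, subset size, and hyperparameters are illustrative assumptions, not part of the original conversion.

```python
from datasets import load_dataset
from transformers import (
    RobertaTokenizer,
    Data2VecTextForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# Assumption: binary sentiment classification on IMDB; any labeled
# text-classification dataset works the same way.
dataset = load_dataset("imdb")
tokenizer = RobertaTokenizer.from_pretrained("roberta-large")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

model = Data2VecTextForSequenceClassification.from_pretrained(
    "edugp/data2vec-nlp-base", num_labels=2
)

training_args = TrainingArguments(
    output_dir="data2vec-imdb",  # illustrative values throughout
    per_device_train_batch_size=8,
    num_train_epochs=1,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=training_args,
    # Small subsets keep the sketch quick; use the full splits for real training.
    train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=dataset["test"].shuffle(seed=42).select(range(500)),
)
trainer.train()
```

After fine-tuning, predictions follow the usual pattern: `model(**inputs).logits.argmax(dim=-1)`.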