pipeline_tag: feature-extraction
---

[xlm-roberta-base](https://huggingface.co/xlm-roberta-base) fine-tuned for sentence embeddings with [SimCSE](http://dx.doi.org/10.18653/v1/2021.emnlp-main.552) (Gao et al., EMNLP 2021).

See a similar English model released by Gao et al.: https://huggingface.co/princeton-nlp/unsup-simcse-roberta-base.

Fine-tuning was done using the [reference implementation of unsupervised SimCSE](https://github.com/princeton-nlp/SimCSE) and the 1M sentences from English Wikipedia released by the authors.

As a sentence representation, we used the average of the last hidden states (`pooler_type=avg`), which is compatible with Sentence-BERT.
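The average pooling above can be sketched as follows. This is a minimal illustration, not the training code: the dummy tensor stands in for `model(**batch).last_hidden_state` from a `transformers` model, and the shapes are made up for the example.

```python
import torch


def avg_pool(last_hidden_state, attention_mask):
    """Average the last hidden states over non-padding tokens
    (SimCSE's pooler_type=avg)."""
    mask = attention_mask.unsqueeze(-1).type_as(last_hidden_state)
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)


# Stand-in for model(**batch).last_hidden_state:
# batch of 2 sentences, seq_len 4, hidden size 8.
hidden = torch.arange(2 * 4 * 8, dtype=torch.float32).reshape(2, 4, 8)
mask = torch.tensor([[1, 1, 1, 0], [1, 1, 0, 0]])  # second sentence is shorter

emb = avg_pool(hidden, mask)
print(emb.shape)  # torch.Size([2, 8])
```

Because padding positions are masked out before the sum and the divisor counts only real tokens, sentences of different lengths get comparable embeddings.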
Fine-tuning command:

```bash