Fix typo
README.md
CHANGED
@@ -101,7 +101,7 @@ pprint(results)
 We evaluate ModernBERT across a range of tasks, including natural language understanding (GLUE), general retrieval (BEIR), long-context retrieval (MLDR), and code retrieval (CodeSearchNet and StackQA).

 **Key highlights:**
-- On GLUE, ModernBERT-base surpasses other similarly-sized encoder models, and ModernBERT-large is second only to
+- On GLUE, ModernBERT-base surpasses other similarly-sized encoder models, and ModernBERT-large is second only to Deberta-v3-large.
 - For general retrieval tasks, ModernBERT performs well on BEIR in both single-vector (DPR-style) and multi-vector (ColBERT-style) settings.
 - Thanks to the inclusion of code data in its training mixture, ModernBERT as a backbone also achieves new state-of-the-art code retrieval results on CodeSearchNet and StackQA.
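For context on the single-vector (DPR-style) setting the highlights mention: it amounts to encoding queries and documents into one dense vector each and ranking by similarity. Below is a minimal, illustrative sketch of that setup with ModernBERT as the backbone via `transformers`. The `answerdotai/ModernBERT-base` checkpoint name and the mean-pooling choice are assumptions here, not something this commit specifies, and a real DPR-style retriever would be fine-tuned on a retrieval objective rather than used off the shelf.

```python
# Hypothetical sketch: single-vector (DPR-style) retrieval with ModernBERT.
# Checkpoint name and mean pooling are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "answerdotai/ModernBERT-base"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

def embed(texts):
    # Tokenize a batch and mean-pool the last hidden states into one vector per text.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # (B, T, 1)
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # masked mean pooling
    return torch.nn.functional.normalize(pooled, dim=-1)   # unit vectors

query = embed(["what is dense passage retrieval?"])
docs = embed([
    "DPR encodes queries and passages into single dense vectors.",
    "ColBERT keeps one vector per token and scores with MaxSim.",
])
scores = query @ docs.T  # cosine similarity, since vectors are normalized
print(scores)
```

Because the embeddings are L2-normalized, the dot product equals cosine similarity, so ranking documents by `scores` gives the DPR-style retrieval order; the multi-vector (ColBERT-style) setting instead keeps per-token vectors and scores with MaxSim.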