Update README.md
README.md CHANGED
@@ -210,6 +210,11 @@ metrics:
 - spbleu
 - chrf++
 inference: false
+
+co2_eq_emissions:
+  emissions: 104310000
+  source: "No Language Left Behind: Scaling Human-Centered Machine Translation"
+  hardware_used: "NVIDIA A100"
 ---
 
 # NLLB-200
@@ -219,7 +224,7 @@ This is the model card of NLLB-200's distilled 600M variant.
 Here are the [metrics](https://tinyurl.com/nllb200densedst600mmetrics) for that particular checkpoint.
 
 - Information about training algorithms, parameters, fairness constraints or other applied approaches, and features. The exact training algorithm, data and the strategies to handle data imbalances for high and low resource languages that were used to train NLLB-200 is described in the paper.
-- Paper or other resource for more information NLLB Team et al, No Language Left Behind: Scaling Human-Centered Machine Translation
+- Paper or other resource for more information: [NLLB Team et al, No Language Left Behind: Scaling Human-Centered Machine Translation](https://huggingface.co/papers/2207.04672)
 - License: CC-BY-NC
 - Where to send questions or comments about the model: https://github.com/facebookresearch/fairseq/issues
 
@@ -250,4 +255,4 @@ SentencePiece model is released along with NLLB-200.
 • Our model has been tested on the Wikimedia domain with limited investigation on other domains supported in NLLB-MD. In addition, the supported languages may have variations that our model is not capturing. Users should make appropriate assessments.
 
 ## Carbon Footprint Details
-• The carbon dioxide (CO2e) estimate is reported in Section 8.8.
+• The carbon dioxide (CO2e) estimate is reported in Section 8.8 and in the model card.