modelId (string, 4-81 chars) | tags (sequence) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0-59.7M) | first_commit (unknown) | card (string, 51-438k chars)
---|---|---|---|---|---|---|
ChauhanVipul/BERT | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
---
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).
- Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images.
- `split_einsum` version is compatible with all compute unit options including Neural Engine.
- `original` version is only compatible with `CPU & GPU` option.
- Custom resolution versions are tagged accordingly.
- The `vae-ft-mse-840000-ema-pruned.ckpt` VAE is embedded into the model.
- This model was converted with a `vae-encoder` for use with `image2image`.
- This model is `fp16`.
- Descriptions are posted as-is from original model source.
- Not all features and/or results may be available in `CoreML` format.
- This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
- This model does not include a `safety checker` (for NSFW content).<br>
# XXMix_9realistic-V2.6:
Source(s): [CivitAI](https://civitai.com/models/47274/xxmix9realistic)<br>
<img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/c7facef9-b0ee-435c-9f7c-a8546f3749c5/width=450/13813-2748008393-(front focus),(in the dark_1.6),_Hyperrealist portrait of female by david hockney and alphonse mucha,fantasy ar.jpeg" alt="13813-2748008393-(front focus),(in the dark_1.6),_Hyperrealist portrait of female by david hockney and alphonse mucha,fantasy ar.png" style="max-height: 100%; max-width: 100%;">
<img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/86c85884-493f-4ac3-a3b1-da779b4d27b0/width=450/13814-1251401889-(in the dark_1.6),_Hyperrealist portrait of female by david hockney and alphonse mucha,fantasy art, photo reali.jpeg" alt="13814-1251401889-(in the dark_1.6),_Hyperrealist portrait of female by david hockney and alphonse mucha,fantasy art, photo reali.png" style="max-height: 100%; max-width: 100%;">
<img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/8269e7b2-fe07-4503-b814-cca62296489a/width=450/22109-2099671408-headshot of Skeletal Zombie queen wearing an ornate crown and detailed plate armor, volumetric light,.jpeg" alt="22109-2099671408-headshot of Skeletal Zombie queen wearing an ornate crown and detailed plate armor, volumetric light,.png" style="max-height: 100%; max-width: 100%;">
<img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/6e51001c-b3dd-4734-9f6a-0e2c44f3781b/width=450/26638-2995801549-(absurdres, highres, ultra detailed), 1girl, solo, mature, (long beige hair), Baroque, dress, long sleeve, eleg.jpeg" alt="26638-2995801549-(absurdres, highres, ultra detailed), 1girl, solo, mature, (long beige hair), Baroque, dress, long sleeve, eleg.png" style="max-height: 100%; max-width: 100%;">
v26-fp16-no-ema update
------------------------------------------------------------------------------------------------------
v2.6版本说明:
这版算小更新,优化了脸型,手的问题有优化,不过主要还是抽卡。大问题下个版本解决。
Version 2.6 release notes:
This version is a minor update that optimizes the face and hand issues. However, the main focus is still on the gacha system. The major issues will be addressed in the next version.
------------------------------------------------------------------------------------------------------
Some people have commented that the model is basically unusable. New users, please check whether your settings match those shown in the example images. Apart from full-body images, which go through an img2img upscaling step, all other images are generated directly. Please make sure your settings match those in the examples first!
V2.5版本的光影效果表现更好,支持更多幻想内容,调整了基础脸型,场景方面的表现更好。
V2.5 version has better lighting effects and supports more fantasy content. The basic face shape has been adjusted, and the performance in scenes has been improved.
------------------------------------------------------------------------------------------------------
Lowering the CFG Scale value slightly can produce distinctive results.
V2版本说明:
1.更换了默认脸。
2.更好的光影表现。
3.融合了更多模型,后面还会继续迭代。
4.如果想得到更好的皮肤质感,那么生成图片的时候不要开启高清修复,而是将生成的图片传入图生图模式,启用Tiled Diffusion,来进行放大,从而得到更真实的皮肤质感。
5.我会继续更新迭代这个大模型,希望能不断填充内容,让模型更具趣味性。
谢谢您的支持!
Version 2 Release Notes:
Changed the default face.
Improved lighting and shadow effects.
Added more models and will continue to iterate in the future.
For better skin texture, do not enable Hires Fix when generating images. Instead, use the "Tiled Diffusion" mode to enlarge the generated image and achieve a more realistic skin texture.
I will continue to update and iterate on this large model, hoping to add more content and make it more interesting.
Thank you for your support!
------------------------------------------------------------------------------------------------------
推荐参数
Recommended Parameters:
Sampler: DPM++ 2M Karras or DPM++ SDE Karras
Steps: 20~40
Hires upscaler: 4x-UltraSharp
Hires upscale: 2
Hires steps: 15
Denoising strength: 0.2~0.5
CFG scale: 6-8
Clip skip: 2
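These settings come from the Stable Diffusion web UI; for reference, here is a minimal `diffusers` sketch showing roughly how they map onto code. It assumes a local `.safetensors` export of the original (non-Core-ML) checkpoint, and the filename below is hypothetical.
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Hypothetical local filename for the checkpoint
pipe = StableDiffusionPipeline.from_single_file(
    "xxmix9realistic_v26.safetensors", torch_dtype=torch.float16
).to("cuda")

# DPM++ 2M with Karras sigmas, as recommended above
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "hyperrealist portrait of a woman, detailed skin",
    num_inference_steps=30,   # 20~40 recommended
    guidance_scale=7,         # CFG scale 6-8 recommended
    clip_skip=2,              # Clip skip 2 recommended
).images[0]
image.save("out.png")
```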
|
Cheatham/xlm-roberta-base-finetuned | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20 | null | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# xeonkai/setfit-articles-labels
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
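As a rough illustration of those two steps, a minimal training sketch with the `SetFitTrainer` API might look like the following; the tiny dataset and base Sentence Transformer below are placeholders, not the data or backbone actually used for this model.
```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# Placeholder few-shot data: a handful of labelled examples per class
train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮",
             "great acting and a solid plot", "the service was awful"],
    "label": [1, 0, 1, 0],
})

# Placeholder backbone: step 1 fine-tunes it with contrastive learning,
# step 2 fits a classification head on its embeddings
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(model=model, train_dataset=train_ds, num_iterations=20)
trainer.train()

preds = model(["that was a fantastic show"])
```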
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("xeonkai/setfit-articles-labels")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Cheatham/xlm-roberta-large-finetuned-d12 | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20 | null | ---
datasets:
- glue
model-index:
- name: contriever-msmarco-mnli
results: []
pipeline_tag: zero-shot-classification
language:
- en
license: mit
---
# contriever-msmarco-mnli
This model is a fine-tuned version of [facebook/contriever-msmarco](https://huggingface.co/facebook/contriever-msmarco) on the glue dataset.
## Model description
[Unsupervised Dense Information Retrieval with Contrastive Learning](https://arxiv.org/abs/2112.09118).
Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, Edouard Grave, arXiv 2021
## How to use the model
The model can be loaded with the `zero-shot-classification` pipeline like so:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="mjwong/contriever-msmarco-mnli")
```
You can then use this pipeline to classify sequences into any of the class names you specify.
```python
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels)
#{'sequence': 'one day I will see the world',
# 'labels': ['travel', 'dancing', 'cooking'],
# 'scores': [0.9954835772514343, 0.002568634692579508, 0.00194773287512362]}
```
If more than one candidate label can be correct, pass `multi_class=True` to calculate each class independently:
```python
candidate_labels = ['travel', 'cooking', 'dancing', 'exploration']
classifier(sequence_to_classify, candidate_labels, multi_class=True)
#{'sequence': 'one day I will see the world',
# 'labels': ['travel', 'exploration', 'cooking', 'dancing'],
# 'scores': [0.9968098998069763,
# 0.9796287417411804,
# 0.027883002534508705,
# 0.0008239754824899137]}
```
### Eval results
The model was evaluated using the dev sets for MultiNLI and test sets for ANLI. The metric used is accuracy.
|Datasets|mnli_dev_m|mnli_dev_mm|anli_test_r1|anli_test_r2|anli_test_r3|
| :---: | :---: | :---: | :---: | :---: | :---: |
|[contriever-mnli](https://huggingface.co/mjwong/contriever-mnli)|0.821|0.822|0.247|0.281|0.312|
|[contriever-msmarco-mnli](https://huggingface.co/mjwong/contriever-msmarco-mnli)|0.820|0.819|0.244|0.296|0.306|
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
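For reference, the settings above correspond roughly to the following `transformers.TrainingArguments`; this is a sketch, not the exact training script.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="contriever-msmarco-mnli",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5,
)
```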
### Framework versions
- Transformers 4.28.1
- Pytorch 1.12.1+cu116
- Datasets 2.11.0
- Tokenizers 0.12.1
|
Chinat/test-classifier | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
title: chinese-llama-plus-13b-hf
emoji: 📚
colorFrom: gray
colorTo: red
language:
- zh
tags:
- chatglm
- pytorch
- zh
- Text2Text-Generation
- LLaMA
license: other
widget:
- text: 为什么天空是蓝色的?
---
# Chinese LLaMA Plus 13B Model
**Release of the Chinese LLaMA-Plus and Alpaca-Plus 13B models**
This release of the Chinese LLaMA-Plus and Alpaca-Plus 13B models brings the following improvements:
- Training data is further expanded over the base version: LLaMA to 120 GB of text and Alpaca to 4.3M instruction samples, with an emphasis on scientific data covering physics, chemistry, biology, medicine, earth science, and more.
- Alpaca was trained with a larger LoRA rank and achieves a lower validation loss than the base version.
- Alpaca evaluation scores: 13B reaches 74.3, Plus-7B reaches 78.2, and Plus-13B reaches 80.8; see the [evaluation results](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/examples) for details.
- Multi-turn responses are noticeably longer than with the older models (the temperature can be raised slightly).
- Significant improvements in knowledge Q&A, writing, translation, and more.
This model was produced by merging the [ziqingyang/chinese-llama-plus-lora-13b](https://huggingface.co/ziqingyang/chinese-llama-plus-lora-13b) LoRA weights into the [decapoda-research/llama-13b-hf](https://huggingface.co/decapoda-research/llama-13b-hf) base model and converting the result to HuggingFace-format weights (.bin files). You can continue instruction fine-tuning on top of this Chinese LLaMA model; since LLaMA is a base model, using it directly without instruction tuning gives poor results.
test case:
|input_text|predict|
|:-- |:--- |
|为什么天空是蓝色的?|天空是蓝色的是因为大气中的气体分子散射了太阳光中的短波长蓝光,使得我们看到的天空呈现出蓝色。|
## Released model weights
- chinese-llama-plus-7b: https://huggingface.co/minlik/chinese-llama-plus-7b-merged
- chinese-alpaca-plus-7b: https://huggingface.co/shibing624/chinese-alpaca-plus-7b-hf
- chinese-llama-plus-13b: https://huggingface.co/shibing624/chinese-llama-plus-13b-hf
- chinese-alpaca-plus-13b: https://huggingface.co/shibing624/chinese-alpaca-plus-13b-hf
## Usage
This model is supported by the open-source [textgen](https://github.com/shibing624/textgen) project, which handles LLaMA models; it can be called as follows:
Install package:
```shell
pip install -U textgen
```
```python
from textgen import LlamaModel
model = LlamaModel("llama", "shibing624/chinese-llama-plus-13b-hf")
r = model.predict(["用一句话描述地球为什么是独一无二的。"])
print(r) # ['地球是独一无二的,因为它拥有独特的大气层、水循环、生物多样性以及其他自然资源,这些都使它成为一个独特的生命支持系统。']
```
## Usage (HuggingFace Transformers)
Without [textgen](https://github.com/shibing624/textgen), you can use the model like this:
First you pass your input through the transformer model, and then you decode the generated sentence.
Install package:
```
pip install sentencepiece
pip install "transformers>=4.28.0"
```
```python
import torch
import transformers
from transformers import LlamaTokenizer, LlamaForCausalLM
def generate_prompt(text):
return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{text}
### Response:"""
tokenizer = LlamaTokenizer.from_pretrained('shibing624/chinese-llama-plus-13b-hf')
model = LlamaForCausalLM.from_pretrained('shibing624/chinese-llama-plus-13b-hf').half().cuda()
model.eval()
text = '为什么天空是蓝色的?'
prompt = generate_prompt(text)
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('cuda')
with torch.no_grad():
    output_ids = model.generate(
        input_ids=input_ids,
        max_new_tokens=128,
        do_sample=True,  # enable sampling so temperature/top_k/top_p take effect
        temperature=1,
        top_k=40,
        top_p=0.9,
        repetition_penalty=1.15
    )
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output.replace(text, '').strip())
```
output:
```shell
为什么天空是蓝色的?
天空是蓝色的是因为大气中的气体分子散射了太阳光中的短波长蓝光,使得我们看到的天空呈现出蓝色。
```
## Model provenance
The merged model weights are released ready to use in one step, saving power and reducing carbon emissions.
The weights were merged manually following the [multi-LoRA weight merging guide (for Chinese-Alpaca-Plus)](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/%E6%89%8B%E5%8A%A8%E6%A8%A1%E5%9E%8B%E5%90%88%E5%B9%B6%E4%B8%8E%E8%BD%AC%E6%8D%A2#%E5%A4%9Alora%E6%9D%83%E9%87%8D%E5%90%88%E5%B9%B6%E9%80%82%E7%94%A8%E4%BA%8Echinese-alpaca-plus): the [ziqingyang/chinese-llama-plus-lora-13b](https://huggingface.co/ziqingyang/chinese-llama-plus-lora-13b) LoRA weights were merged into the [decapoda-research/llama-13b-hf](https://huggingface.co/decapoda-research/llama-13b-hf) base model, and the result was converted to HuggingFace-format weights (.bin files).
The HuggingFace-format weights (.bin files) can be used to:
- train and run inference with Transformers
- build a web UI with text-generation-webui
The PyTorch-format weights (.pth files) can be used to:
- quantize and deploy the model with llama.cpp
PyTorch-format weights (.pth files): [shibing624/chinese-alpaca-plus-13b-pth](https://huggingface.co/shibing624/chinese-alpaca-plus-13b-pth)
Model files:
```
chinese-alpaca-plus-13b-hf
|-- config.json
|-- generation_config.json
|-- LICENSE
|-- pytorch_model-00001-of-00003.bin
|-- pytorch_model-00002-of-00003.bin
|-- pytorch_model-00003-of-00003.bin
|-- pytorch_model.bin.index.json
|-- README.md
|-- special_tokens_map.json
|-- tokenizer_config.json
`-- tokenizer.model
```
Hardware requirement: 25 GB of GPU memory
### Fine-tuning datasets
Some public instruction-tuning datasets I have collected:
1. 500K Chinese ChatGPT-style instructions (Belle): [BelleGroup/train_0.5M_CN](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN)
2. 1M Chinese ChatGPT-style instructions (Belle): [BelleGroup/train_1M_CN](https://huggingface.co/datasets/BelleGroup/train_1M_CN)
3. 50K English ChatGPT-style instructions (Alpaca): [50k English Stanford Alpaca dataset](https://github.com/tatsu-lab/stanford_alpaca#data-release)
4. 50K Chinese GPT-4 instructions (Alpaca): [shibing624/alpaca-zh](https://huggingface.co/datasets/shibing624/alpaca-zh)
5. 690K Chinese instructions (Guanaco; 500K Belle + 190K Guanaco): [Chinese-Vicuna/guanaco_belle_merge_v1.0](https://huggingface.co/datasets/Chinese-Vicuna/guanaco_belle_merge_v1.0)
To train LLaMA models yourself, see [https://github.com/shibing624/textgen](https://github.com/shibing624/textgen).
## Citation
```latex
@software{textgen,
author = {Xu Ming},
title = {textgen: Implementation of language model finetune},
year = {2023},
url = {https://github.com/shibing624/textgen},
}
```
## Reference
- https://github.com/ymcui/Chinese-LLaMA-Alpaca
|
ChukSamuels/DialoGPT-small-Dr.FauciBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: assignment1_distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# assignment1_distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
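The serialized optimizer above corresponds roughly to the following Keras setup; this is a sketch reconstructed from the config, not the original training script.
```python
import tensorflow as tf

# Linear warmdown from 2e-5 to 0 over 1908 steps, as in the serialized config
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=1908,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
```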
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Chun/DialoGPT-large-dailydialog | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: agpl-3.0
language:
- en
thumbnail:
tags:
- text generation
- conversational
inference: false
---
*(Not to be confused with [Pygmalion 13B](https://huggingface.co/TehVenom/Pygmalion-13b-GGML).)*
This is converted and quantized from [Pygmalion 1.3B](https://huggingface.co/PygmalionAI/pygmalion-1.3b), based on [an earlier version](https://huggingface.co/EleutherAI/pythia-1.4b-deduped-v0) of Pythia 1.4B Deduped.
Notes:
- Converted with ggerganov/ggml's gpt-neox conversion script, and tested with KoboldCpp.
- I can't promise that this will work with other frontends, if at all. I've had problems with the tokenizer. Could be related to the ggml implementation of GPT-NeoX [(source)](https://github.com/ggerganov/ggml/tree/master/examples/gpt-neox#notes).
### RAM USAGE (on KoboldCpp w/ OpenBLAS)
Model | Initial RAM
:--:|:--:
ggml-pygmalion-1.3b-q4_0.bin | 1.1 GiB
ggml-pygmalion-1.3b-q5_1.bin | 1.3 GiB
Below is the original model card for Pygmalion 1.3B.
* * *
# Pygmalion 1.3B
## Model description
Pygmalion 1.3B is a proof-of-concept dialogue model based on EleutherAI's [pythia-1.3b-deduped](https://huggingface.co/EleutherAI/pythia-1.3b-deduped).
**Warning:** This model is **NOT** suitable for use by minors. It **will** output X-rated content under certain circumstances.
## Training data
The fine-tuning dataset consisted of 56MB of dialogue data gathered from multiple sources, which includes both real _and_ partially machine-generated conversations.
## Training procedure
Fine-tuning was done using [ColossalAI](https://github.com/hpcaitech/ColossalAI) (specifically, with a slightly modified version of their [OPT fine-tune example](https://github.com/hpcaitech/ColossalAI/blob/78509124d32b63b7fc36f6508e0576a326d51422/examples/language/opt/run_clm.py)) for around 11.4 million tokens over 5440 steps on a single 24GB GPU. The run took just under 21 hours.
## Intended use
### The easy way
We provide a notebook with a Gradio UI for playing around with the model without having to manually format inputs. This notebook can be found [here](https://github.com/PygmalionAI/gradio-ui/blob/master/notebooks/GPU.ipynb).
### The manual way
The model can be used as a regular text generation model, but it'll perform best if the input prompt adheres to the following format:
```
[CHARACTER]'s Persona: [A few sentences about the character you want the model to play]
[DIALOGUE HISTORY]
You: [Your input message here]
[CHARACTER]:
```
Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, and `[DIALOGUE HISTORY]` is chat history so the model can have some conversational context to draw from. Ideally it'll be pairs of messages like:
```
[CHARACTER]: [some dialogue here]
You: [your response to the dialogue above]
```
Apart from chat history, you can also just add example conversations in `[DIALOGUE HISTORY]` to show how the character should speak - ideally at the beginning, so it doesn't get confused as to what's conversation history vs. character definition.
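As a concrete example, here is a minimal sketch that builds a prompt in this format and generates with the original (non-GGML) [Pygmalion 1.3B](https://huggingface.co/PygmalionAI/pygmalion-1.3b) checkpoint via `transformers`; the character persona and messages are made up.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="PygmalionAI/pygmalion-1.3b")

# Hypothetical character definition and chat history in the format described above
prompt = (
    "Aria's Persona: Aria is a cheerful tavern keeper who loves telling stories.\n"
    "Aria: Welcome in, traveller! What can I get you?\n"
    "You: Just some water, thanks. Heard any good stories lately?\n"
    "Aria:"
)
out = generator(prompt, max_new_tokens=80, do_sample=True, temperature=0.8)
print(out[0]["generated_text"][len(prompt):])
```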
## Known issues
- The model can get stuck repeating certain phrases, or sometimes even entire sentences.
- We believe this is due to that behavior being present in the training data itself, and plan to investigate and adjust accordingly for future versions. |
Chun/w-zh2en-mto | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- sroie
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: perioli_manifesti_v5.6
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: sroie
type: sroie
config: discharge
split: test
args: discharge
metrics:
- name: Precision
type: precision
value: 0.9304952215464813
- name: Recall
type: recall
value: 0.9622641509433962
- name: F1
type: f1
value: 0.9461130742049471
- name: Accuracy
type: accuracy
value: 0.9934494773519164
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# perioli_manifesti_v5.6
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the sroie dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0271
- Precision: 0.9305
- Recall: 0.9623
- F1: 0.9461
- Accuracy: 0.9934
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.47 | 100 | 0.1846 | 0.6553 | 0.7448 | 0.6972 | 0.9597 |
| No log | 0.93 | 200 | 0.0547 | 0.8780 | 0.9182 | 0.8977 | 0.9865 |
| No log | 1.4 | 300 | 0.0442 | 0.9007 | 0.9371 | 0.9185 | 0.9895 |
| No log | 1.87 | 400 | 0.0446 | 0.8823 | 0.9362 | 0.9085 | 0.9886 |
| 0.1459 | 2.34 | 500 | 0.0360 | 0.8995 | 0.9488 | 0.9235 | 0.9908 |
| 0.1459 | 2.8 | 600 | 0.0315 | 0.9205 | 0.9569 | 0.9383 | 0.9923 |
| 0.1459 | 3.27 | 700 | 0.0309 | 0.9316 | 0.9668 | 0.9489 | 0.9937 |
| 0.1459 | 3.74 | 800 | 0.0259 | 0.9121 | 0.9506 | 0.9309 | 0.9918 |
| 0.1459 | 4.21 | 900 | 0.0267 | 0.9264 | 0.9614 | 0.9436 | 0.9941 |
| 0.0247 | 4.67 | 1000 | 0.0272 | 0.9282 | 0.9641 | 0.9458 | 0.9937 |
| 0.0247 | 5.14 | 1100 | 0.0221 | 0.9355 | 0.9641 | 0.9496 | 0.9947 |
| 0.0247 | 5.61 | 1200 | 0.0236 | 0.9494 | 0.9614 | 0.9554 | 0.9946 |
| 0.0247 | 6.07 | 1300 | 0.0331 | 0.9050 | 0.9416 | 0.9229 | 0.9898 |
| 0.0247 | 6.54 | 1400 | 0.0199 | 0.9278 | 0.9587 | 0.9430 | 0.9936 |
| 0.016 | 7.01 | 1500 | 0.0180 | 0.9465 | 0.9533 | 0.9499 | 0.9947 |
| 0.016 | 7.48 | 1600 | 0.0236 | 0.9437 | 0.9641 | 0.9538 | 0.9951 |
| 0.016 | 7.94 | 1700 | 0.0220 | 0.9359 | 0.9704 | 0.9528 | 0.9946 |
| 0.016 | 8.41 | 1800 | 0.0271 | 0.9185 | 0.9623 | 0.9399 | 0.9923 |
| 0.016 | 8.88 | 1900 | 0.0315 | 0.9294 | 0.9695 | 0.9490 | 0.9936 |
| 0.011 | 9.35 | 2000 | 0.0305 | 0.9186 | 0.9632 | 0.9404 | 0.9926 |
| 0.011 | 9.81 | 2100 | 0.0286 | 0.9271 | 0.9596 | 0.9430 | 0.9930 |
| 0.011 | 10.28 | 2200 | 0.0242 | 0.9296 | 0.9497 | 0.9396 | 0.9928 |
| 0.011 | 10.75 | 2300 | 0.0316 | 0.9183 | 0.9488 | 0.9333 | 0.9908 |
| 0.011 | 11.21 | 2400 | 0.0293 | 0.9311 | 0.9596 | 0.9451 | 0.9932 |
| 0.0069 | 11.68 | 2500 | 0.0285 | 0.9297 | 0.9632 | 0.9462 | 0.9932 |
| 0.0069 | 12.15 | 2600 | 0.0309 | 0.9114 | 0.9524 | 0.9315 | 0.9918 |
| 0.0069 | 12.62 | 2700 | 0.0290 | 0.9206 | 0.9587 | 0.9393 | 0.9928 |
| 0.0069 | 13.08 | 2800 | 0.0270 | 0.9323 | 0.9650 | 0.9483 | 0.9937 |
| 0.0069 | 13.55 | 2900 | 0.0249 | 0.9363 | 0.9641 | 0.9500 | 0.9941 |
| 0.0059 | 14.02 | 3000 | 0.0273 | 0.9297 | 0.9632 | 0.9462 | 0.9933 |
| 0.0059 | 14.49 | 3100 | 0.0271 | 0.9278 | 0.9587 | 0.9430 | 0.9930 |
| 0.0059 | 14.95 | 3200 | 0.0297 | 0.9248 | 0.9614 | 0.9427 | 0.9928 |
| 0.0059 | 15.42 | 3300 | 0.0275 | 0.9305 | 0.9623 | 0.9461 | 0.9933 |
| 0.0059 | 15.89 | 3400 | 0.0275 | 0.9296 | 0.9605 | 0.9448 | 0.9933 |
| 0.0047 | 16.36 | 3500 | 0.0271 | 0.9305 | 0.9623 | 0.9461 | 0.9934 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.2.2
- Tokenizers 0.13.3
|
Chungu424/qazwsx | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: viettel-crossencoder-reader-xlm-roberta-base-viquad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# viettel-crossencoder-reader-xlm-roberta-base-viquad
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7448 | 0.33 | 500 | 1.7456 |
| 1.7323 | 0.66 | 1000 | 1.3974 |
| 1.5019 | 0.98 | 1500 | 1.2971 |
| 1.2067 | 1.31 | 2000 | 1.2769 |
| 1.1449 | 1.64 | 2500 | 1.2420 |
| 1.1442 | 1.97 | 3000 | 1.2272 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Chungu424/repodata | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
datasets:
- fka/awesome-chatgpt-prompts
- jamescalam/world-cities-geo
language:
- en
- ml
license: openrail
library_name: nemo
pipeline_tag: text-classification
--- |
Ci/Pai | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
datasets:
- jsonl_dataset_sum.py
metrics:
- rouge
model-index:
- name: summarization_all
results:
- task:
name: Summarization
type: summarization
dataset:
name: jsonl_dataset_sum.py
type: jsonl_dataset_sum.py
config: 'null'
split: None
metrics:
- name: Rouge1
type: rouge
value: 21.7197
license: artistic-2.0
language:
- ko
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization_all
This model is a fine-tuned version of [KETI-AIR/long-ke-t5-base](https://huggingface.co/KETI-AIR/long-ke-t5-base) on the jsonl_dataset_sum.py dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0758
- Rouge1: 21.7197
- Rouge2: 10.1392
- Rougel: 21.1499
- Rougelsum: 21.173
- Gen Len: 87.4589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.2171 | 1.0 | 184670 | 1.2070 | 20.611 | 9.2868 | 20.0833 | 20.1095 | 87.4065 |
| 1.0916 | 2.0 | 369340 | 1.1190 | 21.3264 | 9.8656 | 20.7683 | 20.8005 | 88.0284 |
| 0.9823 | 3.0 | 554010 | 1.0758 | 21.7197 | 10.1392 | 21.1499 | 21.173 | 87.4589 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.0
- Datasets 2.8.0
- Tokenizers 0.13.2 |
ClaudeYang/awesome_fb_model | [
"pytorch",
"bart",
"text-classification",
"dataset:multi_nli",
"transformers",
"zero-shot-classification"
] | zero-shot-classification | {
"architectures": [
"BartForSequenceClassification"
],
"model_type": "bart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | null | ---
language:
- en
thumbnail: null
tags:
- text generation
- instruct
pipeline_tag: text-generation
inference: false
---
<h1 style="text-align: center">Metharme 13b</h1>
<h2 style="text-align: center">An instruction-tuned LLaMA biased towards fiction writing and conversation.</h2>
-----
Install 0cc4m's latest update from his GPTQ+KoboldAI fork; it has proper support for 8-bit models in this repo's format out of the box on both Windows and Linux:
- https://github.com/0cc4m/KoboldAI/tree/latestgptq
With this fix applied:
- https://github.com/0cc4m/GPTQ-for-LLaMa/pull/14
GPTQ via Ooba UI may not need this patch.
## Eval / Benchmark scores
Current evals out of the Metharme-13b model: <br>
<html>
<head>
<style>
table {
border:1px solid #b3adad;
border-collapse:collapse;
padding:5px;
}
table th {
border:1px solid #b3adad;
padding:5px;
background: #f0f0f0;
color: #313030;
}
table td {
border:1px solid #b3adad;
text-align:center;
padding:5px;
background: #ffffff;
color: #313030;
}
</style>
</head>
<body>
<table>
<thead>
<tr>
<th>Model:</th>
<th>Wikitext2</th>
<th>Ptb-New</th>
<th>C4-New</th>
</tr>
</thead>
<tbody>
<tr>
<td>Metharme 13b - 16bit</td>
<td>5.253076553344727</td>
<td>27.53407859802246</td>
<td>7.038073539733887</td>
</tr>
<tr>
<td>Metharme 13b - 8bit - [act-order]</td>
<td>5.253607273101807</td>
<td>27.52388572692871</td>
<td>7.038473129272461</td>
</tr>
<tr>
<td>Metharme 13b - 8bit - [true-sequential & 128g]</td>
<td>5.2532830238342285</td>
<td>27.54250144958496</td>
<td>7.038838863372803</td>
</tr>
<tr>
<td>Metharme 13b - 4bit - [true-sequential & 128g]</td>
<td>5.420501708984375</td>
<td>28.37093734741211</td>
<td>7.1930413246154785</td>
</tr>
<tr>
<td>Metharme 7b - 16bit</td>
<td>5.7208476066589355</td>
<td>41.61103439331055</td>
<td>7.512405872344971</td>
</tr>
<tr>
<td>Metharme 7b - 8bit</td>
<td>TBD</td>
<td>TBD</td>
<td>TBD</td>
</tr>
<tr>
<td>Metharme 7b - 4bit - [act-order]</td>
<td>6.2369050979614</td>
<td>47.5177230834960</td>
<td>7.9044938087463</td>
</tr>
</tbody>
</table>
</body>
</html>
<hr>
Current evals out of the Pygmalion-13b model: <br>
<html>
<head>
<style>
table {
border:1px solid #b3adad;
border-collapse:collapse;
padding:5px;
}
table th {
border:1px solid #b3adad;
padding:5px;
background: #f0f0f0;
color: #313030;
}
table td {
border:1px solid #b3adad;
text-align:center;
padding:5px;
background: #ffffff;
color: #313030;
}
</style>
</head>
<body>
<table>
<thead>
<tr>
<th>Model:</th>
<th>Wikitext2</th>
<th>Ptb-New</th>
<th>C4-New</th>
</tr>
</thead>
<tbody>
<tr>
<td>Pygmalion 13b - 16bit</td>
<td>5.710726737976074</td>
<td>23.633684158325195</td>
<td>7.6324849128723145</td>
</tr>
<tr>
<td>Pygmalion 13b - 8bit - [act-order]</td>
<td>5.711935997009277</td>
<td>23.654993057250977</td>
<td>7.632820129394531</td>
</tr>
<tr>
<td>Pygmalion 7b - 16bit</td>
<td>5.7208476066589355</td>
<td>41.61103439331055</td>
<td>7.512405872344971</td>
</tr>
<tr>
<td>Pygmalion 7b - 8bit <br>[act-order]</td>
<td>5.656460285186768</td>
<td>40.79701232910156</td>
<td>7.432109832763672</td>
</tr>
<tr>
<td>Pygmalion 7b - 4bit - [act-order]</td>
<td>6.2477378845215</td>
<td>46.5129699707031</td>
<td>7.8470954895020</td>
</tr>
</tbody>
</table>
</body>
</html>
-----
## Model Details:
Converted from the XORs weights from PygmalionAI's release https://huggingface.co/PygmalionAI/metharme-13b
Metharme 13b is an instruct model based on Meta's LLaMA-13b.
This is an experiment to try and get a model that is usable for conversation, roleplaying and storywriting, but which can be guided using natural language like other instruct models. See the [prompting](#prompting) section below for examples.
It was trained by doing supervised fine-tuning over a mixture of regular instruction data alongside roleplay, fictional stories and conversations with synthetically generated instructions attached.
The current Metharme-13b has been trained as a LoRA, then merged down to the base model for distribution.
It has also been quantized down to 8Bit using the GPTQ library available here: https://github.com/0cc4m/GPTQ-for-LLaMa
```
python llama.py .\TehVenom_Metharme-13b-Merged c4 --wbits 8 --act-order --save_safetensors Metharme-13b-GPTQ-8bit.act-order.safetensors
```
## Prompting
The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history.
### Prompting example
Here's a concrete example. Let's say you have the following system prompt:
> This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.
And the user has typed in the following message:
> Start!
To perform inference against the model, the prompt should be formatted like this:
```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>
```
The model might generate something like:
> You are standing in front of an old, abandoned house. The windows are boarded up, and there's no sign of life around it. As you approach, you notice a strange feeling emanating from within. Suddenly, you hear a voice calling out to you... 'Come inside!'
>
> - Go inside the house.
> - Ignore the call and move away.
> - Run as fast as you can.
It will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete.
Pretend then that the user has replied with `go inside`. The next prompt should then be:
```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>You are standing in front of an old, abandoned house. The windows are boarded up, and there's no sign of life around it. As you approach, you notice a strange feeling emanating from within. Suddenly, you hear a voice calling out to you... 'Come inside!'
- Go inside the house.
- Ignore the call and move away.
- Run as fast as you can.<|user|>go inside<|model|>
```
Which might generate something like:
> You enter the house, and immediately, a wave of unsettling energy washes over you. Your senses seem heightened, but your mind feels foggy. A figure appears before you - a tall man with piercing eyes. He speaks to you in a language you don't understand.
>
> - Ask him what he wants.
> - Attack him.
> - Run away.
Same process applies. Usually, it is best to do a sliding window over the user and model turns, but keep the system prompt fixed at the start of the context window.
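A minimal sketch of that sliding-window prompt construction is below; the `max_turns` budget is illustrative, and in practice you would trim by token count against the context window rather than by turn count.
```python
def build_prompt(system: str, history: list, user_msg: str, max_turns: int = 8) -> str:
    """Keep the system prompt fixed at the start and slide a window over recent turns."""
    prompt = f"<|system|>{system}"
    for past_user, past_model in history[-max_turns:]:
        prompt += f"<|user|>{past_user}<|model|>{past_model}"
    prompt += f"<|user|>{user_msg}<|model|>"
    return prompt

# Example using the text-adventure scenario above
system = ("This is a text adventure game. Describe the scenario to the user "
          "and give him three options to pick from on each turn.")
history = [("Start!", "You are standing in front of an old, abandoned house. ...")]
print(build_prompt(system, history, "go inside"))
```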
## Other notes
- When prompted correctly, the model will always start by generating a BOS token. This behavior is an accidental side-effect which we plan to address in future model versions and should not be relied upon.
- The model was trained as a LoRA with a somewhat unorthodox configuration which causes errors when used with the current version of `peft`, hence we release it as a full model instead.
## Limitations and biases
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
|
CleveGreen/FieldClassifier | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 34 | null | ---
datasets:
- ehartford/wizard_vicuna_70k_unfiltered
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- chat
---
# Model Card for `player1537/Bloom-560m-trained-on-Wizard-Vicuna-Uncensored`
This model is an attempt to fine-tune the smallest Bloom model ([Bloom-560m](/bigscience/Bloom-560m/)) on a ChatGPT/Chatbot style of dataset ([Wizard-Vicuna-Uncensored](/ehartford/wizard_vicuna_70k_unfiltered)). The goal is to have a model that is small enough to run on a laptop but still powerful enough to follow basic instructions and basic chat messages.
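For a quick smoke test, a minimal sketch using `transformers` might look like this; the prompt follows the `USER:`/`ASSISTANT:` format shown in the dataset samples below, and the example instruction is made up.
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="player1537/Bloom-560m-trained-on-Wizard-Vicuna-Uncensored",
)

# Prompt in the USER/ASSISTANT format the model was fine-tuned on
prompt = "USER: Give three tips for writing clear documentation. ASSISTANT:"
out = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```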
## Dataset
I used [this Google Colab notebook for training](https://colab.research.google.com/drive/1J9tOnIpzQ1oNB80V0Pal9slAJwK7kQlJ?usp=sharing).
The model was trained on text samples like the following:
```
USER: Provide a list of at least four professions that necessitate the utilization of specific software and hardware systems to accomplish their tasks. Ensure that each occupation is distinctive from the others and requires different types of tools for effective job performance. ASSISTANT: Sure, here are four professions that require the use of specific software and hardware systems for effective job performance:
1. Graphic Designer - Graphic designers require specialized software such as Adobe Photoshop, Adobe Illustrator, and Adobe InDesign to create and edit visual content. They also require high-end hardware such as high-resolution monitors, graphics tablets, and powerful workstations to handle complex design projects.
2. Video Editor - Video editors require specialized software such as Adobe Premiere Pro, Final Cut Pro, and Avid Media Composer to edit and produce high-quality video content. They also require high-end hardware such as powerful workstations, high-resolution monitors, and fast storage devices to handle large video files.
3. Software Developer - Software developers require specialized software such as Integrated Development Environments (IDEs) like Visual Studio, Eclipse, and Xcode to write, test, and debug code. They also require high-end hardware such as powerful workstations, high-speed processors, and large amounts of RAM to handle complex software projects.
4. Civil Engineer - Civil engineers require specialized software such as AutoCAD, Civil 3D, and Revit to design and analyze infrastructure projects such as roads, bridges, and buildings. They also require high-end hardware such as powerful workstations, high-resolution monitors, and graphics cards to handle large and complex design projects.</s>USER: How do artificial neural networks work, and what are some practical applications of this technology? ASSISTANT: Artificial neural networks (ANNs) are computer systems modeled on the structure and function of biological neural networks in the human brain. ANNs consist of interconnected nodes that process information through a series of layers. Each node receives input from other nodes, processes it using an activation function, and then passes the output to other nodes in the next layer. The learning process involves adjusting the weights between nodes to reduce errors in the output.
Some practical applications of ANNs include speech recognition, natural language processing, image and video recognition, recommendation systems, and financial modeling. For example, ANNs can be used to analyze customer data and make personalized product recommendations or to predict stock prices based on historical market data. Another application is autonomous vehicles, where ANNs are used to recognize traffic signs and road markings, detect obstacles
```
Here are 4 more samples (JSON-encoded):
```
"USER: Provide a contrastive statement related to the provided sentence using a specific academic field.\nIn the field of neuroscience, education not only impacts personal development but also has a significant effect on the physical structure and function of the brain. ASSISTANT: In the field of economics, personal development may have an impact on an individual's financial success but does not necessarily affect the overall structure and function of the economy.</s>USER: What is the difference between deductive and inductive reasoning? ASSISTANT: Deductive reasoning starts with a general assumption or premise and then applies it to specific situations to draw a conclusion. Inductive reasoning, on the other hand, involves drawing general conclusions based on specific observations or evidence.</s>USER: Can you explain the concept of cognitive dissonance and give an example? ASSISTANT: Cognitive dissonance describes the mental discomfort that arises when a person holds two conflicting beliefs or values at the same time. An example of this would be a person who believes that smoking is unhealthy but continues to smoke because they enjoy it. This conflict between their belief and behavior causes cognitive dissonance.</s>USER: How does the concept of marginal utility relate to consumer behavior? ASSISTANT: Marginal utility refers to the additional satisfaction a consumer gains from consuming one more unit of a good or service. Consumers will typically continue consuming a good or service until the marginal utility they receive no longer outweighs the cost of obtaining it. This principle helps explain why consumers may choose to purchase multiple units of a product when the price decreases, as each additional unit provides them with greater overall satisfaction.</s>USER: Can you explain the difference between correlation and causation? ASSISTANT: Correlation refers to a statistical relationship between two variables, where changes in one variable are associated with changes in another. However, this does not necessarily mean that one variable caused the other. Causation implies a direct cause-and-effect relationship between two variables, where one variable directly influences the other. Establishing causation typically requires conducting experiments or other research methods to control for other potential factors that may be influencing the relationship between the two variables.</s>"
"USER: How can I use MATLAB to analyze the impact of different leadership styles on team dynamics and also ensure that the code is computationally efficient? Can you provide examples of MATLAB code that can help identify not only the characteristics of a good leader but also the impact of their communication style on team performance? ASSISTANT: Yes, you can use MATLAB to analyze the impact of different leadership styles on team dynamics. One approach is to use data from surveys or questionnaires to gather information on the leadership style and team performance. You can then use statistical analysis tools in MATLAB to identify patterns and relationships in the data.\nHere's an example of MATLAB code that can help you analyze the impact of leadership styles on team dynamics:\n```matlab\n% Load data from a CSV file\ndata = readtable('leadership_data.csv');\n% Extract the columns containing the leadership style and team performance data\nleadership_style = data(:, 'Leadership_Style');\nteam_performance = data(:, 'Team_Performance');\n% Compute the correlation between leadership style and team performance\ncorrelation = corr2(leadership_style, team_performance);\n% Plot the data and the correlation\nscatter(leadership_style, team_performance);\ntitle('Leadership Style vs. Team Performance');\nxlabel('Leadership Style');\nylabel('Team Performance');\ntext(0.5, 0.9, ['Correlation: ' num2str(correlation)], 'Units', 'normalized');\n% Compute the mean and standard deviation of team performance for each leadership style\nmean_performance = grpstats(team_performance, leadership_style, 'mean');\nstd_performance = grpstats(team_performance, leadership_style, 'std');\n% Plot the mean and standard deviation for each leadership style\nbar(mean_performance);\nhold on;\nerrorbar(mean_performance, std_performance, 'LineStyle', 'none');\ntitle('Leadership Style vs. Team Performance');\nxlabel('Leadership Style');\nylabel('Team Performance');\nlegend('Mean', 'Standard Deviation');\n```\nThis code loads data from a CSV file containing information on leadership style and team performance. It then computes the correlation between the two variables and plots a scatter plot of the data with the correlation coefficient displayed. Finally, it computes the mean and standard deviation of team performance for each leadership style and plots a bar chart with error bars.\nTo ensure that the code is computationally efficient, you can use MATLAB's built-in functions for data manipulation and analysis, such as `corr2`, `grpstats`, and `bar`. You can also optimize the code by using vectorized operations instead of loops and by preallocating memory for large data sets.</s>USER: How can I use machine learning to predict the outcome of a soccer match based on team statistics? ASSISTANT: One approach is to use supervised learning algorithms, such as classification or regression, to train a model on historical soccer match data and corresponding team statistics. You can then use this trained model to make predictions on new, unseen data.\nTo get started, you'll need to gather historical soccer match data and corresponding team statistics. There are several online sources for this type of data, such as Kaggle and Football-data.co.uk. Once you have the data, you can use MATLAB's built-in functions for data manipulation and analysis to pre-process and prepare it for machine learning.\nNext, you'll need to choose a suitable supervised learning algorithm for your task. 
Classification algorithms, such as logistic regression or decision trees, can be used to predict the winner of a match based on team statistics. Regression algorithms, such as linear regression or neural networks, can be used to predict the goal difference or total number of goals in a match.\nOnce you have selected an appropriate algorithm, you can divide your data into training, validation, and test sets. The training set is used to train the model, the validation set is used to tune the model parameters and prevent overfitting, and the test set is used to evaluate the final model performance.\nHere is an example MATLAB code that uses logistic regression to predict the winner of a soccer match based on team statistics:\n```matlab\n% Load data from a CSV file\ndata = readtable('soccer_data.csv');\n% Extract the columns containing the team statistics and match outcomes\nteam_stats = data(:, {'Team1_Stat1', 'Team1_Stat2', 'Team2_Stat1', 'Team2_Stat2'});\nmatch_outcomes = data(:, 'Match_Outcome');\n% Divide the data into training, validation, and test sets\n[trainInd, valInd, testInd] = dividerand(height(data), 0.6, 0.2, 0.2);\nXtrain = table2array(team_stats(trainInd, :));\nYtrain = table2array(match_outcomes(trainInd, :));\nXval = table2array(team_stats(valInd, :));\nYval = table2array(match_outcomes(valInd, :));\nXtest = table2array(team_stats(testInd, :));\nYtest = table2array(match_outcomes(testInd, :));\n% Train a logistic regression model\nmdl = fitglm(Xtrain, Ytrain, 'linear', 'Distribution', 'binomial');\n% Predict the match outcomes for the test set\nYpred = predict(mdl, Xtest);\n% Evaluate the model performance using accuracy and confusion matrix\naccuracy = sum(Ypred == Ytest) / length(Ytest);\nconfusion = confusionmat(Ytest, Ypred);\ndisp(['Accuracy: ' num2str(accuracy)]);\ndisp('Confusion matrix:');\ndisp(confusion);\n```\nThis code loads data from a CSV file containing historical soccer match data and corresponding team statistics. It then divides the data into training, validation, and test sets, trains a logistic regression model on the training set, and predicts the match outcomes for the test set. Finally, it evaluates the model performance using accuracy and confusion matrix.</s>"
"USER: What are some possible actions that could complete the sentence \"She always joked that she would ____\" while also including a constraint that the action should be related to achieving a personal goal or aspiration? Consider the following options in JSON format:\n{\n \"options\": [\n \"start her own business\",\n \"run a marathon\",\n \"earn a degree\",\n \"become a published author\",\n \"learn to play a musical instrument\"\n ]\n ASSISTANT: Here are some possible actions that could complete the sentence \"She always joked that she would\" while also including a constraint that the action should be related to achieving a personal goal or aspiration:\n{\n \"options\": [\n \"start her own business\",\n \"run a marathon\",\n \"earn a degree\",\n \"become a published author\",\n \"learn to play a musical instrument\"\n ]\n}\n- \"She always joked that she would start her own business, but deep down she knew she had the skills and drive to make it a reality.\"\n- \"She always joked that she would run a marathon, but secretly she was training hard and determined to cross the finish line.\"\n- \"She always joked that she would earn a degree, but she knew that education was the key to unlocking her full potential.\"\n- \"She always joked that she would become a published author, but she was already working on her first novel and dreaming of seeing it in bookstores.\"\n- \"She always joked that she would learn to play a musical instrument, but she was already taking lessons and practicing every day, driven by her love of music.\"</s>USER: Can you explain the concept of time dilation in special relativity? ASSISTANT: Yes, in Einstein's theory of special relativity, time dilation refers to the difference in the elapsed time measured by two observers, who are moving relative to each other. The faster an object moves, the slower time appears to pass for it compared to a stationary observer. This effect is known as time dilation, and it has been verified experimentally using atomic clocks.</s>USER: How does the Doppler effect work in sound waves and light waves? ASSISTANT: The Doppler effect is the apparent change in frequency or wavelength of a wave that is perceived by an observer who is moving relative to the wave source. When it comes to sound waves, as an object approaches an observer, the frequency of the sound waves it produces increases, resulting in a higher-pitched sound. Conversely, as an object moves away from the observer, the frequency decreases, creating a lower-pitched sound. In the case of light waves, the same principle applies except that it causes a shift in the color of light rather than its pitch.</s>USER: Could you describe some major milestones in the history of artificial intelligence? ASSISTANT: Sure! One of the earliest milestones in the history of artificial intelligence was the development of the Turing test in 1950, which proposed a way to measure a machine's ability to exhibit intelligent behavior that is indistinguishable from that of a human. Another important milestone was the creation of the first neural network in the 1950s, which laid the groundwork for modern machine learning algorithms. In the 1960s and 70s, researchers developed expert systems that were able to solve complex problems in narrow domains. The 1980s saw the emergence of connectionism, which focused on the use of parallel processing and distributed representations in AI. 
More recently, breakthroughs in deep learning and reinforcement learning have led to significant advances in natural language processing, computer vision, and robotics.</s>USER: Can you explain the difference between encryption and hashing in cybersecurity? ASSISTANT: Yes, encryption and hashing are both techniques used in cybersecurity, but they serve different purposes. Encryption is a process of encoding information so that it can only be read by authorized parties who have the key to decode it. The goal of encryption is to protect data confidentiality. Hashing, on the other hand, is a one-way process that takes input data and produces a fixed-size string of characters, called a hash value or digest. The main purpose of hashing is to verify the integrity of data and ensure that it has not been tampered with. While encryption is reversible, meaning that it can be decrypted with the right key, hashing is irreversible, which means that once data is hashed, it cannot be unhashed to its original form.</s>"
"USER: How can we create a complex puzzle for our chatbot to solve? The puzzle should require the chatbot to display all US Presidents since 1947, including their political party, term start date, and term end date. Additionally, the chatbot should have the capability to search for a specific president by name or political party, and paginate the table to display 10 presidents per page. Moreover, we want to add a brain teaser that will challenge the chatbot to solve it and reveal the name of the youngest president in the table using their age at inauguration as a clue. Can you provide the necessary code in JSON format to create the required tables and query the database to achieve this task? ASSISTANT: To create the required puzzle, you can use a database to store the information about the US Presidents. Here is an example schema for the database:\n```\n{\n \"presidents\": [\n {\n \"name\": \"Harry S. Truman\",\n \"party\": \"Democratic\",\n \"term_start\": \"1945-04-12\",\n \"term_end\": \"1953-01-20\",\n \"age_at_inauguration\": 60\n },\n {\n \"name\": \"Dwight D. Eisenhower\",\n \"party\": \"Republican\",\n \"term_start\": \"1953-01-20\",\n \"term_end\": \"1961-01-20\",\n \"age_at_inauguration\": 62\n },\n {\n \"name\": \"John F. Kennedy\",\n \"party\": \"Democratic\",\n \"term_start\": \"1961-01-20\",\n \"term_end\": \"1963-11-22\",\n \"age_at_inauguration\": 43\n },\n {\n \"name\": \"Lyndon B. Johnson\",\n \"party\": \"Democratic\",\n \"term_start\": \"1963-11-22\",\n \"term_end\": \"1969-01-20\",\n \"age_at_inauguration\": 55\n },\n {\n \"name\": \"Richard Nixon\",\n \"party\": \"Republican\",\n \"term_start\": \"1969-01-20\",\n \"term_end\": \"1974-08-09\",\n \"age_at_inauguration\": 56\n },\n {\n \"name\": \"Gerald Ford\",\n \"party\": \"Republican\",\n \"term_start\": \"1974-08-09\",\n \"term_end\": \"1977-01-20\",\n \"age_at_inauguration\": 61\n },\n {\n \"name\": \"Jimmy Carter\",\n \"party\": \"Democratic\",\n \"term_start\": \"1977-01-20\",\n \"term_end\": \"1981-01-20\",\n \"age_at_inauguration\": 52\n },\n {\n \"name\": \"Ronald Reagan\",\n \"party\": \"Republican\",\n \"term_start\": \"1981-01-20\",\n \"term_end\": \"1989-01-20\",\n \"age_at_inauguration\": 69\n },\n {\n \"name\": \"George H. W. Bush\",\n \"party\": \"Republican\",\n \"term_start\": \"1989-01-20\",\n \"term_end\": \"1993-01-20\",\n \"age_at_inauguration\": 64\n },\n {\n \"name\": \"Bill Clinton\",\n \"party\": \"Democratic\",\n \"term_start\": \"1993-01-20\",\n \"term_end\": \"2001-01-20\",\n \"age_at_inauguration\": 46\n },\n {\n \"name\": \"George W. 
Bush\",\n \"party\": \"Republican\",\n \"term_start\": \"2001-01-20\",\n \"term_end\": \"2009-01-20\",\n \"age_at_inauguration\": 54\n },\n {\n \"name\": \"Barack Obama\",\n \"party\": \"Democratic\",\n \"term_start\": \"2009-01-20\",\n \"term_end\": \"2017-01-20\",\n \"age_at_inauguration\": 47\n },\n {\n \"name\": \"Donald Trump\",\n \"party\": \"Republican\",\n \"term_start\": \"2017-01-20\",\n \"term_end\": \"2021-01-20\",\n \"age_at_inauguration\": 70\n },\n {\n \"name\": \"Joe Biden\",\n \"party\": \"Democratic\",\n \"term_start\": \"2021-01-20\",\n \"term_end\": null,\n \"age_at_inauguration\": 78\n }\n ]\n}\n```\nTo display all the presidents since 1947, you can use the following query:\n```\nSELECT name, party, term_start, term_end FROM presidents WHERE term_start >= '1947-01-01'\n```\nTo search for a specific president by name or political party, you can use the following query:\n```\nSELECT name, party, term_start, term_end FROM presidents WHERE name = 'John F. Kennedy' OR party = 'Republican'\n```\nTo paginate the table to display 10 presidents per page, you can use the following query:\n```\nSELECT name, party, term_start, term_end FROM presidents LIMIT 10 OFFSET 0\n```\nTo add the brain teaser that will challenge the chatbot to solve it and reveal the name of the youngest president in the table using their age at inauguration as a clue, you can use the following query:\n```\nSELECT name, age_at_inauguration FROM presidents ORDER BY age_at_inauguration LIMIT 1\n```\nThis will return the name and age at inauguration of the youngest president in the table. You can then use this information to create a brain teaser for the chatbot to solve.</s>USER: Can you explain how the chatbot uses natural language processing (NLP) to understand user input and provide appropriate responses? ASSISTANT: Sure, NLP is a subfield of artificial intelligence that deals with the interaction between computers and humans using natural language. NLP allows chatbots to understand human language by breaking down sentences into their component parts and identifying the relationships between them. \n\nChatbots typically use a combination of techniques to process natural language input, such as tokenization, part-of-speech tagging, named entity recognition, and dependency parsing. These techniques help the chatbot identify the meaning of each word in a sentence, its grammatical role, and how it relates to other words in the sentence.\n\nOnce the chatbot has processed the input, it can use machine learning algorithms to determine the appropriate response. This may involve identifying patterns in the user's input and selecting a pre-defined response from a database, or generating a new response on the fly based on the context of the conversation.\n\nOverall, NLP is a crucial component of modern chatbots, allowing them to understand and respond to user input in a way that feels natural and intuitive. \n\nDo you have any other questions about NLP or chatbot development?</s>"
```
Specifically, this means that the text: 1) starts with `"USER: "` or `"ASSISTANT: "`, and 2) ends each assistant turn with `"</s>"`. My training data does not include any system prompts.
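For illustration, here is a minimal sketch (not part of the original card) of how a multi-turn prompt could be assembled in this format; the `build_prompt` helper is my own, and the exact whitespace between turns is inferred from the samples above:

```python
# Minimal sketch: assemble a prompt in the training format described above —
# turns start with "USER: " or "ASSISTANT: ", each assistant turn is closed
# with "</s>", and there is no system prompt.
def build_prompt(turns):
    prompt = ""
    for role, text in turns:
        if role == "user":
            prompt += f"USER: {text} "
        else:
            prompt += f"ASSISTANT: {text}</s>"
    # Leave an open assistant turn for the model to complete.
    return prompt + "ASSISTANT:"

prompt = build_prompt([
    ("user", "What is the capital of France?"),
    ("assistant", "The capital of France is Paris."),
    ("user", "And the capital of Belgium?"),
])
print(prompt)
# USER: What is the capital of France? ASSISTANT: The capital of France is Paris.</s>USER: And the capital of Belgium? ASSISTANT:
```

Generation is then expected to continue the open `ASSISTANT:` turn until the model emits `</s>`.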
## Evaluation
I used [this Google Colab notebook for evaluation](https://colab.research.google.com/drive/1y4EYBkHRCXA3ibA1GykXVLlq3qH8LQti?usp=sharing).
Here is how the base model compares with the fine-tuned model on several prompts with `temperature=0` for repeatability.
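A rough sketch of that comparison setup (assumed — this is not the actual notebook code; `temperature=0` is taken to mean greedy decoding via `do_sample=False`, and `bigscience/bloom-560m` is assumed to be the base checkpoint):

```python
# Assumed comparison harness: run the same prompt through the base and
# fine-tuned checkpoints with greedy decoding so the outputs are repeatable.
from transformers import pipeline

prompt = (
    "You are a helpful AI assistant. "
    "USER: What is the capital of France? ASSISTANT:"
)

for model_id in (
    "bigscience/bloom-560m",  # base model (assumed)
    "player1537/Bloom-560m-trained-on-Wizard-Vicuna-Uncensored",  # fine-tuned
):
    generator = pipeline("text-generation", model=model_id)
    output = generator(prompt, max_new_tokens=256, do_sample=False)
    print(f"--- {model_id} ---")
    print(output[0]["generated_text"])
```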
*Bloom-560m*:
> You are a helpful AI assistant. USER: What is the capital of France? ASSISTANT: The capital of France is Paris. USER: What is the capital of Belgium? ASSISTANT: The capital of Belgium is Brussels. USER: What is the capital of Germany? ASSISTANT: The capital of Germany is Berlin. USER: What is the capital of Italy? ASSISTANT: The capital of Italy is Rome. USER: What is the capital of Japan? ASSISTANT: The capital of Japan is Tokyo. USER: What is the capital of the United Kingdom? ASSISTANT: The capital of the United Kingdom is London. USER: What is the capital of the United States of America? ASSISTANT: The capital of the United States of America is Washington. USER: What is the capital of the United Kingdom of Great Britain and Northern Ireland? ASSISTANT: The capital of the United Kingdom of Great Britain and Northern Ireland is London. USER: What is the capital of the United States of America? ASSISTANT: The capital of the United States of America is Washington. USER: What is the capital of the United Kingdom of Great Britain and Northern Ireland? ASSISTANT: The capital of the United Kingdom of Great Britain and Northern Ireland is London. USER: What is the capital of the United States of America? ASSISTANT: The capital of the United States of America is Washington. USER: What is the capital of the United Kingdom of Great Britain and Northern Ireland? ASSISTANT: The capital of the United Kingdom of Great Britain and Northern Ireland is London. USER: What is the capital of the United States of America? ASSISTANT: The capital of the United States of America is Washington. USER: What is the capital of the United Kingdom of Great Britain and Northern Ireland? ASSISTANT: The capital of the United Kingdom of Great Britain and Northern Ireland is London. USER: What is the capital of the United States of America? ASSISTANT: The capital of the United States of America is Washington. USER: What is the capital of the United Kingdom of Great Britain and Northern Ireland? ASSISTANT: The capital of the United Kingdom of Great Britain and Northern Ireland is London. USER: What is the capital of the United States of America? ASSISTANT: The capital of the United States of America is Washington. USER: What is the capital of the United Kingdom of Great Britain and Northern Ireland? ASSISTANT: The capital of the United Kingdom of Great Britain and Northern Ireland is London. USER: What is the capital of the United States of America? ASSISTANT:
*player1537/Bloom-560m-trained-on-Wizard-Vicuna-Uncensored*:
> You are a helpful AI assistant. USER: What is the capital of France? ASSISTANT: France is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris, which is the capital of the French Republic. The capital of France is the city of Paris
*Bloom-560m*:
> USER: What are the key features and characteristics of the Eiffel Tower? ASSISTANT: The Eiffel Tower is a landmark in Paris, France. The building is the tallest building in the world, and it is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world. The Eiffel Tower is the tallest building in the world
*player1537/Bloom-560m-trained-on-Wizard-Vicuna-Uncensored*:
> USER: What are the key features and characteristics of the Eiffel Tower? ASSISTANT: The Eiffel Tower is a tower located in the city of Paris, France. It is a tall, stunning building that is built in the style of the Gothic Revival architecture. The tower is a complex of three main buildings, the Eiffel Tower, the Grand Palais, and the Grand Hôtel de Ville. The Grand Hôtel de Ville is the largest building in the city, and it is the largest building in the world. The Grand Palais is the second largest building in the city, and it is the largest building in the world. The Grand Hôtel de Ville is the largest building in the city, and it is the largest building in the world. The Grand Hôtel de Ville is the largest building in the city, and it is the largest building in the world. The Grand Hôtel de Ville is the largest building in the city, and it is the largest building in the world. The Grand Hôtel de Ville is the largest building in the city, and it is the largest building in the world. The Grand Hôtel de Ville is the largest building in the city, and it is the largest building in the world. The Grand Hôtel de Ville is the largest building in the city, and it is the largest building in the world. The Grand Hôtel de Ville is the largest building in the city, and it is the largest building in the world. The Grand Hôtel de Ville is the largest building in the city, and it is the largest building in the world. The Grand Hôtel de Ville is the largest building in the city, and it is the largest building in the world. The Grand Hôtel de Ville is the largest building in the city, and it is the largest building in the world. The Grand Hôtel de Ville is the largest building in the city, and it is the largest building in the world. The Grand Hôtel de Ville is the largest building in the city, and it is the largest building in the world. The Grand Hôtel de Ville is the largest building in the city, and it is the largest building in the world. The Grand Hôtel de Ville is the largest building in the city, and it is the largest building in the world. The Grand Hôtel de Ville is the largest building in the city, and it is the largest building in the world. The Grand Hôtel de Ville is the largest building in the city, and it is the largest building in the world. The Grand Hôtel de Ville is the largest building in the city, and it is the largest building in the world. The Grand Hôtel de Ville is the largest building in the city, and it
*Bloom-560m*:
> USER: What are the main causes and effects of climate change? ASSISTANT: Climate change is a global phenomenon that affects the climate of the planet. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is changing because of human activities. The climate is
*player1537/Bloom-560m-trained-on-Wizard-Vicuna-Uncensored*:
> USER: What are the main causes and effects of climate change? ASSISTANT: Climate change is a major cause of global warming, which is the result of human activities. It is caused by the accumulation of greenhouse gases, such as carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O). The increase in temperature and precipitation is a direct result of this process. The increase in temperature is caused by the increase in the amount of heat that can be stored in the Earth's atmosphere. The increase in precipitation is caused by the increase in the amount of water that can be stored in the Earth's atmosphere. The increase in temperature is also a result of human activities, such as deforestation, agriculture, and industrialization. Climate change is also a result of human activities, such as deforestation, agriculture, and industrialization. The increase in temperature is also a result of human activities, such as deforestation, agriculture, and industrialization. The increase in temperature is also a result of human activities, such as deforestation, agriculture, and industrialization. The increase in temperature is also a result of human activities, such as deforestation, agriculture, and industrialization. The increase in temperature is also a result of human activities, such as deforestation, agriculture, and industrialization. The increase in temperature is also a result of human activities, such as deforestation, agriculture, and industrialization. The increase in temperature is also a result of human activities, such as deforestation, agriculture, and industrialization. The increase in temperature is also a result of human activities, such as deforestation, agriculture, and industrialization. The increase in temperature is also a result of human activities, such as deforestation, agriculture, and industrialization. The increase in temperature is also a result of human activities, such as deforestation, agriculture, and industrialization. The increase in temperature is also a result of human activities, such as deforestation, agriculture, and industrialization. The increase in temperature is also a result of human activities, such as deforestation, agriculture, and industrialization. The increase in temperature is also a result of human activities, such as deforestation, agriculture, and industrialization. The increase in temperature is also a result of human activities, such as deforestation, agriculture, and industrialization. The increase in temperature is also a result of human activities, such as deforestation, agriculture, and industrialization. The increase in temperature is also a result of human activities, such as deforestation, agriculture, and industrialization. The increase in temperature is also a result of human activities, such as
*Bloom-560m*:
> USER: What were the major events and consequences of World War II? ASSISTANT: The war was a major event in the history of the world. It was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of the end of the Cold War. The war was the beginning of
*player1537/Bloom-560m-trained-on-Wizard-Vicuna-Uncensored*:
> USER: What were the major events and consequences of World War II? ASSISTANT: The major events and consequences of World War II were the following:
> 1. The German invasion of the Soviet Union, which resulted in the collapse of the Soviet Union and the subsequent collapse of the USSR.
> 2. The USSR's invasion of the Soviet Union, which resulted in the collapse of the USSR and the subsequent collapse of the Soviet Union.
> 3. The USSR's invasion of the Soviet Union, which resulted in the collapse of the USSR and the subsequent collapse of the USSR.
> 4. The USSR's invasion of the Soviet Union, which resulted in the collapse of the USSR and the subsequent collapse of the USSR.
> 5. The USSR's invasion of the Soviet Union, which resulted in the collapse of the USSR and the subsequent collapse of the USSR.
> 6. The USSR's invasion of the Soviet Union, which resulted in the collapse of the USSR and the subsequent collapse of the USSR.
> 7. The USSR's invasion of the Soviet Union, which resulted in the collapse of the USSR and the subsequent collapse of the USSR.
> 8. The USSR's invasion of the Soviet Union, which resulted in the collapse of the USSR and the subsequent collapse of the USSR.
> 9. The USSR's invasion of the Soviet Union, which resulted in the collapse of the USSR and the subsequent collapse of the USSR.
> 10. The USSR's invasion of the Soviet Union, which resulted in the collapse of the USSR and the subsequent collapse of the USSR.
> 11. The USSR's invasion of the Soviet Union, which resulted in the collapse of the USSR and the subsequent collapse of the USSR.
> 12. The USSR's invasion of the Soviet Union, which resulted in the collapse of the USSR and the subsequent collapse of the USSR.
> 13. The USSR's invasion of the Soviet Union, which resulted in the collapse of the USSR and the subsequent collapse of the USSR.
> 14. The USSR's invasion of the Soviet Union, which resulted in the collapse of the USSR and the subsequent collapse of the USSR.
> 15. The USSR's invasion of the Soviet Union, which resulted in the collapse of the USSR and the subsequent collapse of the USSR.
> 16. The USSR's invasion of the Soviet Union, which resulted in the collapse of the USSR and the subsequent collapse of the USSR.
> 17. The USSR's invasion of the Soviet Union, which resulted in the collapse of the USSR |
CleveGreen/JobClassifier | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: DistilBERT-classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# DistilBERT-classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CodeDanCode/CartmenBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnext-base-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5862068965517241
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-base-224-finetuned-eurosat
This model is a fine-tuned version of [facebook/convnext-base-224](https://huggingface.co/facebook/convnext-base-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8160
- Accuracy: 0.5862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4118 | 1.0 | 65 | 1.3980 | 0.4483 |
| 0.703 | 2.0 | 130 | 0.9538 | 0.5862 |
| 0.6892 | 3.0 | 195 | 0.8160 | 0.5862 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.12.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CodeNinja1126/bert-q-encoder | [
"pytorch"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 122.00 +/- 9.78
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
CodeNinja1126/koelectra-model | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: resnet-18-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-18-finetuned-eurosat
This model is a fine-tuned version of [microsoft/resnet-18](https://huggingface.co/microsoft/resnet-18) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5651
- Accuracy: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9039 | 1.0 | 65 | 3.5209 | 0.0 |
| 0.6023 | 2.0 | 130 | 3.3549 | 0.0 |
| 0.6882 | 3.0 | 195 | 3.5651 | 0.0 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.12.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CodeNinja1126/test-model | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 24 | null | ---
license: other
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: mobilenet_v1_0.75_192-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.3103448275862069
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilenet_v1_0.75_192-finetuned-eurosat
This model is a fine-tuned version of [google/mobilenet_v1_0.75_192](https://huggingface.co/google/mobilenet_v1_0.75_192) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3344
- Accuracy: 0.3103
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3566 | 1.0 | 65 | 1.3639 | 0.3103 |
| 0.9354 | 2.0 | 130 | 1.4389 | 0.2759 |
| 0.8786 | 3.0 | 195 | 1.3344 | 0.3103 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.12.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CodeNinja1126/xlm-roberta-large-kor-mrc | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
language:
- mn
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: testingModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testingModel
This model is a fine-tuned version of [Davlan/distilbert-base-multilingual-cased-ner-hrl](https://huggingface.co/Davlan/distilbert-base-multilingual-cased-ner-hrl) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1368
- Precision: 0.8763
- Recall: 0.9000
- F1: 0.8880
- Accuracy: 0.9738
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1975 | 1.0 | 477 | 0.1150 | 0.8257 | 0.8574 | 0.8412 | 0.9642 |
| 0.1001 | 2.0 | 954 | 0.1046 | 0.8515 | 0.8798 | 0.8654 | 0.9682 |
| 0.0655 | 3.0 | 1431 | 0.0980 | 0.8632 | 0.8905 | 0.8766 | 0.9719 |
| 0.0453 | 4.0 | 1908 | 0.1088 | 0.8590 | 0.8944 | 0.8763 | 0.9718 |
| 0.0324 | 5.0 | 2385 | 0.1142 | 0.8673 | 0.8951 | 0.8810 | 0.9719 |
| 0.0223 | 6.0 | 2862 | 0.1244 | 0.8814 | 0.9036 | 0.8924 | 0.9737 |
| 0.0173 | 7.0 | 3339 | 0.1252 | 0.8739 | 0.9007 | 0.8871 | 0.9733 |
| 0.0131 | 8.0 | 3816 | 0.1328 | 0.8721 | 0.8965 | 0.8841 | 0.9731 |
| 0.0097 | 9.0 | 4293 | 0.1362 | 0.8783 | 0.9002 | 0.8891 | 0.9737 |
| 0.008 | 10.0 | 4770 | 0.1368 | 0.8763 | 0.9000 | 0.8880 | 0.9738 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CoderBoy432/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: Ziyu23/ppo-SnowballTargetTESTCOLAB
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
CoderEFE/DialoGPT-medium-marx | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - sayakpaul/another-lora-dog
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)
LoRA for the text encoder was enabled: False.
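A minimal, hypothetical usage sketch (not part of the original card) for applying these UNet LoRA weights with `diffusers`; the prompt, precision, and inference settings are placeholders:

```python
# Minimal sketch: load the base Stable Diffusion 1.5 pipeline and apply the
# DreamBooth LoRA weights from this repository.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Apply the UNet LoRA weights (the text encoder LoRA was not trained here).
pipe.unet.load_attn_procs("sayakpaul/another-lora-dog")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```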
|
CoffeeAddict93/gpt2-call-of-the-wild | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: finetune-spanish-gpt2-faqs-nlp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune-spanish-gpt2-faqs-nlp
This model is a fine-tuned version of [mrm8488/spanish-gpt2](https://huggingface.co/mrm8488/spanish-gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6635
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7149 | 1.0 | 1782 | 0.6635 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CoffeeAddict93/gpt2-medium-call-of-the-wild | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null |
---
license: creativeml-openrail-m
base_model: DeepFloyd/IF-I-XL-v1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - sayakpaul/dreambooth-if-dog
These are LoRA adaptation weights for DeepFloyd/IF-I-XL-v1.0. The weights were trained on the prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)
LoRA for the text encoder was enabled: False.
|
CogComp/bart-faithful-summary-detector | [
"pytorch",
"jax",
"bart",
"text-classification",
"en",
"dataset:xsum",
"transformers",
"xsum",
"license:cc-by-sa-4.0"
] | text-classification | {
"architectures": [
"BartForSequenceClassification"
],
"model_type": "bart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": 1,
"max_length": 128,
"min_length": 12,
"no_repeat_ngram_size": null,
"num_beams": 4,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 234 | null | Access to model yuchensuper/test_model is restricted and you are not in the authorized list. Visit https://huggingface.co/yuchensuper/test_model to ask for access. |
CohleM/mbert-nepali-tokenizer | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: indigorange/ppo-Huggy2
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ComCom/gpt2-large | [
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"GPT2Model"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: >-
https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
datasets:
- OpenAssistant/oasst1
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [openlm-research/open_llama_7b_400bt_preview](https://huggingface.co/openlm-research/open_llama_7b_400bt_preview)
- Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.28.1
pip install accelerate==0.18.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="h2oai/h2ogpt-gm-oasst1-en-1024-open-llama-7b-preview-400bt",
torch_dtype=torch.float16,
trust_remote_code=True,
use_fast=False,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=512,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, if you prefer to not use `trust_remote_code=True` you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-1024-open-llama-7b-preview-400bt",
use_fast=False,
padding_side="left"
)
model = AutoModelForCausalLM.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-1024-open-llama-7b-preview-400bt",
torch_dtype=torch.float16,
device_map={"": "cuda:0"}
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=512,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also skip the pipeline entirely and run generation directly from the loaded model and tokenizer, handling the prompt preprocessing yourself:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "h2oai/h2ogpt-gm-oasst1-en-1024-open-llama-7b-preview-400bt" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=512,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=4096, bias=False)
(v_proj): Linear(in_features=4096, out_features=4096, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
(down_proj): Linear(in_features=11008, out_features=4096, bias=False)
(up_proj): Linear(in_features=4096, out_features=11008, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=h2oai/h2ogpt-gm-oasst1-en-1024-open-llama-7b-preview-400bt --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
ComCom-Dev/gpt2-bible-test | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
datasets:
- lambdalabs/pokemon-blip-captions
language:
- en
---
This is the OpenVINO version of the [Stable Diffusion model for pokemon generation](https://huggingface.co/valhalla/sd-pokemon-model).
To run the model, use the following code:
```python
%pip install optimum[openvino,diffusers]
from optimum.intel.openvino import OVStableDiffusionPipeline
from diffusers import LMSDiscreteScheduler, DDPMScheduler
import torch
import random
import numpy as np
pipe = OVStableDiffusionPipeline.from_pretrained("OpenVINO/stable-diffusion-pokemons-valhalla-fp32", compile=False)
pipe.reshape(batch_size=1, height=512, width=512, num_images_per_prompt=1)
pipe.compile()
np.random.seed(42)
random.seed(42)
torch.manual_seed(42)
prompt = "cartoon bird"
output = pipe(prompt, num_inference_steps=50, output_type="pil")  # use 100 steps for a more accurate result
output.images[0].save("output.png")
```
|
cometrain/neurotitle-rugpt3-small | [
"pytorch",
"gpt2",
"text-generation",
"ru",
"en",
"dataset:All-NeurIPS-Papers-Scraper",
"transformers",
"Cometrain AutoCode",
"Cometrain AlphaML",
"license:mit"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20 | null | ---
datasets:
- pemujo/GLDv2_Top_51_Categories
language:
- en
library_name: transformers
---
# Model Card for Model ID
This model is vit-base-patch16-224-in21k fine-tuned on a subset of the dataset from the 2021 Kaggle Google Landmark competition, including only the top 51 categories.
The dataset is available as a Hugging Face dataset at: https://huggingface.co/datasets/pemujo/GLDv2_Top_51_Categories
- **Developed by:** Pedro Melendez
- **Model type:** Vision transformer
- **Finetuned from model:** vit-base-patch16-224-in21k
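The card does not yet include inference code; a minimal sketch using the `transformers` image-classification pipeline might look like the following (the repository id and image path are placeholders, not taken from this card):
```python
from transformers import pipeline

# Hypothetical repository id and image path -- replace with the actual values.
classifier = pipeline("image-classification", model="pemujo/vit-base-gldv2-top51")

predictions = classifier("landmark_photo.jpg")
for p in predictions:
    print(p["label"], round(p["score"], 3))
```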
### Training Data
Classes with more than 500 images in the 2021 Kaggle Google Landmark competition
https://huggingface.co/datasets/pemujo/GLDv2_Top_51_Categories
### Results
| | |
|-----------------------:|------:|
| epoch |4 |
|eval_accuracy |0.97411|
|eval_loss |0.11560|
|eval_runtime (secs) |79.0939|
|eval_samples_per_second |115.255|
|eval_steps_per_second |14.413 |
|train_runtime (secs) |4082.92|
|train_samples_per_second|35.722 |
|train_steps_per_second |2.233 |
## Environmental Impact
- **Hardware Type:** Nvidia V100
- **Minutes used:** 68 Minutes
- **Cloud Provider:** Google Cloud
- **Compute Region:** us-central
### Compute Infrastructure
Google Cloud Workbench Instance
#### Hardware
GCP Workbench n1-highmem-8 instance with Nvidia V100 GPU
#### Software
Python 3.9
Pytorch 2.0.1+cu117 |
Connorvr/TeachingGen | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: bigscience-bloom-rail-1.0
datasets:
- MBZUAI/LaMini-instruction
language:
- en
pipeline_tag: text-generation
tags:
- crayon
- language-technologies
---
# BloomZ 560M Finetuned on Instructions
## Credit
Code 99.99% copied from
*https://github.com/bofenghuang/vigogne*
*https://colab.research.google.com/drive/1jCkpikz0J2o20FBQmYmAGdiKmJGOMo-o?usp=sharing#scrollTo=DpYr24pR8T_0*
and refactored.
# Inference Code
```python
from peft import PeftModel
from transformers import PreTrainedTokenizer, PreTrainedModel, AutoTokenizer, AutoModelForCausalLM
from peft import PeftModelForCausalLM, LoraConfig
from typing import Optional
from transformers import GenerationConfig
import torch
PROMPT_DICT = {
"prompt_input": (
"Below is a^n instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
),
"prompt_no_input": (
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response:\n"
),
}
def get_model(model_name_or_path: str, load_in_8bit: bool = True, device_map="auto",
cpu: bool = False) -> PreTrainedModel:
if cpu:
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map=device_map,
low_cpu_mem_usage=True)
else:
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, load_in_8bit=load_in_8bit,
device_map=device_map, torch_dtype=torch.float16)
return model
def get_peft_model(model: PreTrainedModel, lora_model_name_or_path: Optional[str] = None) -> PeftModelForCausalLM:
model = PeftModel.from_pretrained(model, lora_model_name_or_path, torch_dtype=torch.float16)
return model
def get_tokenizer(model_name_or_path: str, max_input_len: int) -> PreTrainedTokenizer:
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, model_max_length=max_input_len,
padding_side="right", use_fast=False)
return tokenizer
def get_llm_inference_model(base_model_name_or_path: str, lora_model_name_or_path: str, load_in_8bit: bool,
device_map) -> PeftModel:
cpu = True if not torch.cuda.is_available() else False
model = get_model(base_model_name_or_path, load_in_8bit, device_map, cpu=cpu)
model = get_peft_model(model, lora_model_name_or_path=lora_model_name_or_path)
if not load_in_8bit:
model.half()
model.eval()
if torch.__version__ >= "2":
model = torch.compile(model)
return model
def generate_prompt(example):
return (
PROMPT_DICT["prompt_input"].format_map(example)
if example["input"]
else PROMPT_DICT["prompt_no_input"].format_map(example)
)
def infer(instruction: str, input_text: Optional[str] = None, temperature: float = 0.1, top_p: float = 0.95,
max_new_tokens: int = 512, early_stopping: bool = True, do_sample: bool = True,
repetition_penalty: float = 2.5) -> str:
prompt = generate_prompt({"instruction": instruction, "input": input_text})
tokenized_inputs = tokenizer(prompt, return_tensors="pt")
device = "cuda" if torch.cuda.is_available() else "cpu"
input_ids = tokenized_inputs["input_ids"].to(device)
generation_config = GenerationConfig(temperature=temperature, top_p=top_p, do_sample=do_sample,
repetition_penalty=repetition_penalty, early_stopping=early_stopping)
with torch.inference_mode():
generation_output = model.generate(input_ids=input_ids, generation_config=generation_config,
return_dict_in_generate=True, max_new_tokens=max_new_tokens)
output = generation_output.sequences[0]
output = tokenizer.decode(output, skip_special_tokens=True)
return output.split("### Response:")[1].strip()
base_model_name_or_path = "bigscience/bloomz-560m"
lora_model_name_or_path = "crayon-coe/laMini-250K-bloomz-560m-en"
model = get_llm_inference_model(base_model_name_or_path, lora_model_name_or_path, True, "auto")
tokenizer = get_tokenizer(base_model_name_or_path, 512)
context = "Write a letter expressing your love for computers"
output = infer(context)
print(output)
# Output
# I am so grateful to have been able access this wonderful computer system and its amazing features, which I can now use daily with ease.
#
# My heartfelt thanks go out in advance of all my friends who are using it as well.
# Thank you again!
```
Note: If loading fails, you might need to pass *offload_folder="some folder name"* when loading the PeftModel.
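A minimal sketch of that change inside `get_peft_model` (the folder name is an arbitrary example):
```python
# Same call as in get_peft_model above, with disk offloading enabled.
model = PeftModel.from_pretrained(
    model,
    lora_model_name_or_path,
    torch_dtype=torch.float16,
    offload_folder="./offload",  # arbitrary example folder
)
```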
# Training Parameters
```json
{
"max_input_len": 512,
"load_in_8bit": True,
"model_name_or_path": "bigscience/bloomz-560m",
"device_map": "auto",
"bias": "none",
"lora_dropout": 0.05,
"lora_alpha": 32,
"target_modules": ["query_key_value"],
"task_type": "CAUSAL_LM",
"lora_r": 16,
"pad_to_multiple_of": 8,
"num_train_epochs": 3,
"learning_rate": 0.0003,
"gradient_accumulation_steps": 16,
"per_device_train_batch_size": 8,
"val_set_size": 500,
"save_steps": 200,
"eval_steps": 200,
"evaluation_strategy": "steps",
"save_strategy": "steps"
}
```
# Training Code
```python
# coding=utf-8
# Code 99.99% copied and adapted from:
# https://github.com/bofenghuang/vigogne
# https://colab.research.google.com/drive/1jCkpikz0J2o20FBQmYmAGdiKmJGOMo-o?usp=sharing#scrollTo=DpYr24pR8T_0
import os
import sys
from dataclasses import dataclass
from typing import Dict, List, Optional, Sequence
import bitsandbytes as bnb
import fire
import torch
import transformers
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model, get_peft_model_state_dict, prepare_model_for_int8_training
from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaTokenizer
IGNORE_INDEX = -100
DEFAULT_PAD_TOKEN = "[PAD]"
PROMPT_DICT = {
"prompt_input": (
"Below is a^n instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
),
"prompt_no_input": (
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response:\n"
),
}
def generate_prompt(example):
return (
PROMPT_DICT["prompt_input"].format_map(example)
if example["input"]
else PROMPT_DICT["prompt_no_input"].format_map(example)
)
# Modified from: https://github.com/bofenghuang/stanford_alpaca/blob/eb5b171d9b103a12a8e14e0edca9cbc45fe1d512/train.py#L166-L182
# Almost same to transformers.DataCollatorForSeq2Seq
@dataclass
class DataCollatorForSupervisedDataset(object):
"""Collate examples for supervised fine-tuning."""
tokenizer: transformers.PreTrainedTokenizer
pad_to_multiple_of: Optional[int] = None
def __call__(self, instances: Sequence[Dict]) -> Dict[str, torch.Tensor]:
# dtype = torch.long
# input_ids, labels = tuple([torch.LongTensor(instance[key]) for instance in instances] for key in ("input_ids", "labels"))
input_ids, labels = tuple([instance[key] for instance in instances] for key in ("input_ids", "labels"))
if self.pad_to_multiple_of is not None:
max_length_index, max_length = max(enumerate([len(input_ids_) for input_ids_ in input_ids]),
key=lambda x: x[1])
# int(math.ceil
n_padding = ((max_length // self.pad_to_multiple_of) + 1) * self.pad_to_multiple_of - max_length
# Pad the longest example to pad_to_multiple_of * N
input_ids[max_length_index].extend([self.tokenizer.pad_token_id] * n_padding)
labels[max_length_index].extend([IGNORE_INDEX] * n_padding)
input_ids = [torch.LongTensor(input_ids_) for input_ids_ in input_ids]
labels = [torch.LongTensor(labels_) for labels_ in labels]
input_ids = torch.nn.utils.rnn.pad_sequence(input_ids, batch_first=True,
padding_value=self.tokenizer.pad_token_id)
labels = torch.nn.utils.rnn.pad_sequence(labels, batch_first=True, padding_value=IGNORE_INDEX)
return dict(input_ids=input_ids, labels=labels, attention_mask=input_ids.ne(self.tokenizer.pad_token_id))
def train(model_name_or_path: str, output_dir: str, data_path: str, val_set_size: int = 500,
model_max_length: int = 512, lora_r: int = 16, lora_alpha: int = 32, lora_dropout: float = 0.05,
target_modules: List[str] = ["query_key_value"], num_train_epochs: int = 3, learning_rate: float = 0.0001,
per_device_train_batch_size: int = 8, gradient_accumulation_steps: int = 16, **kwargs):
device_map = "auto"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, load_in_8bit=True, device_map=device_map)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, model_max_length=model_max_length,
padding_side="right", use_fast=False)
model = prepare_model_for_int8_training(model)
lora_config = LoraConfig(r=lora_r, lora_alpha=lora_alpha, target_modules=target_modules, lora_dropout=lora_dropout,
bias="none", task_type=TaskType.CAUSAL_LM)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Load data
data = load_dataset("json", data_files=data_path)
def preprocess_function(example):
# Format prompt
user_prompt = generate_prompt(example)
# Get prompt length for masking
len_user_prompt_tokens = len(tokenizer(user_prompt, truncation=True)["input_ids"])
input_ids = tokenizer(user_prompt + example["output"] + tokenizer.eos_token, truncation=True)["input_ids"]
labels = [IGNORE_INDEX] * len_user_prompt_tokens + input_ids[len_user_prompt_tokens:]
return {"input_ids": input_ids, "labels": labels}
if val_set_size > 0:
train_val = data["train"].train_test_split(test_size=val_set_size, shuffle=True, seed=42)
train_data = train_val["train"].shuffle().map(preprocess_function, remove_columns=data["train"].column_names)
val_data = train_val["test"].map(preprocess_function, remove_columns=data["train"].column_names)
else:
train_data = data["train"].shuffle().map(preprocess_function, remove_columns=data["train"].column_names)
val_data = None
trainer = transformers.Trainer(
model=model,
train_dataset=train_data,
eval_dataset=val_data,
args=transformers.TrainingArguments(
per_device_train_batch_size=per_device_train_batch_size,
gradient_accumulation_steps=gradient_accumulation_steps,
num_train_epochs=num_train_epochs,
learning_rate=learning_rate,
fp16=True,
output_dir=output_dir,
load_best_model_at_end=True if val_set_size > 0 else False,
**kwargs,
),
data_collator=DataCollatorForSupervisedDataset(tokenizer=tokenizer, pad_to_multiple_of=8),
)
print(trainer.args)
# Silence the warnings. Please re-enable for inference!
model.config.use_cache = False
old_state_dict = model.state_dict
model.state_dict = (lambda self, *_, **__: get_peft_model_state_dict(self, old_state_dict())).__get__(model,
type(model))
if torch.__version__ >= "2" and sys.platform != "win32":
model = torch.compile(model)
trainer.train()
model.save_pretrained(output_dir)
if __name__ == "__main__":
fire.Fire(train)
``` |
Contrastive-Tension/BERT-Base-CT-STSb | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: creativeml-openrail-m
datasets:
- squad
language:
- bg
- cs
- ro
- pl
--- |
Contrastive-Tension/BERT-Base-NLI-CT | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: AdamCod/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# AdamCod/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0298
- Validation Loss: 0.7463
- Train Accuracy: 0.855
- Epoch: 4
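Since the card is missing usage information, here is a minimal TensorFlow inference sketch (the image path is a placeholder; everything else follows standard `transformers` usage and is not taken from this card):
```python
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

repo = "AdamCod/food_classifier"
processor = AutoImageProcessor.from_pretrained(repo)
model = TFAutoModelForImageClassification.from_pretrained(repo)

image = Image.open("plate.jpg")  # placeholder path to a food photo
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
predicted = int(tf.math.argmax(logits, axis=-1)[0])
print(model.config.id2label[predicted])
```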
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.5725 | 2.0070 | 0.654 | 0 |
| 1.8037 | 1.3445 | 0.813 | 1 |
| 1.4419 | 1.0304 | 0.829 | 2 |
| 1.2025 | 0.8695 | 0.842 | 3 |
| 1.0298 | 0.7463 | 0.855 | 4 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Contrastive-Tension/BERT-Distil-CT-STSb | [
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"DistilBertModel"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# marmolpen3/paraphrase-mpnet-base-v2-sla-obligations-rights
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("marmolpen3/paraphrase-mpnet-base-v2-sla-obligations-rights")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Contrastive-Tension/BERT-Distil-CT | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | # Mastering Symbolic Operations: Augmenting Language Models with Compiled Neural Networks
**Authors**: Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He, Kang Liu, Jun Zhao 😎
**[Contact]** If you have any questions, feel free to contact me via ([email protected]).
This repository contains code, models, and other related resources of our paper ["Mastering Symbolic Operations: Augmenting Language Models with Compiled Neural Networks"](https://arxiv.org/abs/2304.01665).
****
* [2023/05/19] We now support one-click integration between CoNN and PLM!
* [2023/05/18] We have published the ["paper-v2"](https://arxiv.org/abs/2304.01665v2)!
* [2023/04/04] We have used huggingface to release the weight of the CoNN model!
* [2023/04/04] We have released the code for AutoCoNN!
* [2023/04/03] We have published the paper!
* [2023/03/26] We created the Github library!
****
### Install
```
git clone https://github.com/WENGSYX/Neural-Comprehension
cd Neural-Comprehension
pip install .
```
To run neural comprehension, you need to install `PyTorch`, `Transformers`, `jax`, and `tracr`.
```
# https://beta.openai.com/account/api-keys
export OPENAI_API_KEY=(YOUR OPENAI API KEY)
```
### Use AutoCoNN to create your CoNN
Please note that an OpenAI API key is required to use AutoCoNN (but not necessary if you're only experimenting with Neural Comprehension and CoNN models).
```python
from NeuralCom.AutoCoNN import AutoCoNN
INSTRUCT = 'Create an SOp that is the last letter of a word'
VOCAB = ['a','b','c','d','e','f','g']
EXAMPLE = [[['a','b','c'],['c','c','c']],[['b','d'],['d','d']]]
auto = AutoCoNN()
model,tokenizer = auto(instruct=INSTRUCT,vocab=VOCAB,example=EXAMPLE)
```
### Use CoNN from huggingface
```python
from NeuralCom.CoNN.modeling_conn import CoNNModel
from NeuralCom.CoNN import Tokenizer
model = CoNNModel.from_pretrained('WENGSYX/CoNN_Reverse')
tokenizer = Tokenizer(model.config.input_encoding_map, model.config.output_encoding_map,model.config.max_position_embeddings)
output = model(tokenizer('r e v e r s e').unsqueeze(0))
print(tokenizer.decode(output.argmax(2)))
>>> [['bos', 'e', 's', 'r', 'e', 'v', 'e', 'r']]
```
### One-click implementation for Neural-Comprehension
```python
from transformers import AutoModel,AutoTokenizer,AutoModelForSeq2SeqLM
from NeuralCom.CoNN.modeling_conn import CoNNModel
from NeuralCom.CoNN import Tokenizer as CoNNTokenizer
from NeuralCom.Model import NCModelForCoinFlip
PLM = AutoModelForSeq2SeqLM.from_pretrained('WENGSYX/PLM_T5_Base_coin_flip')
CoNN = CoNNModel.from_pretrained('WENGSYX/CoNN_Parity')
PLMTokenizer = AutoTokenizer.from_pretrained('WENGSYX/PLM_T5_Base_coin_flip')
CoNNTokenizer = CoNNTokenizer(CoNN.config.input_encoding_map, CoNN.config.output_encoding_map,CoNN.config.max_position_embeddings)
neural_comprehension = NCModelForCoinFlip(PLM, CoNN, PLMTokenizer, CoNNTokenizer).to('cuda:0')
input_text = "A coin is heads up. Aaron flips the coin. Julius does not flip the coin. Yixuan Weng flip the coin. Minjun Zhu does not flip the coin. Is the coin still heads up?"
input_tokens_PLM = PLMTokenizer.encode(input_text, return_tensors='pt')
generated_output = neural_comprehension.generate(input_tokens_PLM.to('cuda:0'))
generated_text = PLMTokenizer.decode(generated_output, skip_special_tokens=True)
print(f"Output: {generated_text}")
```
#### Huggingface Model
In each link, we provide detailed instructions on how to use the CoNN model.
| Model Name | Model Size | Model Address |
| ----------- | ---------- | --------------------------------------------------------- |
| Parity | 2.2M | [[link]](https://huggingface.co/WENGSYX/CoNN_Parity) |
| Reverse | 4.3M | [[link]](https://huggingface.co/WENGSYX/CoNN_Reverse) |
| Last Letter | 62.6K | [[link]](https://huggingface.co/WENGSYX/CoNN_Last_Letter) |
| Copy | 8.8K | [[link]](https://huggingface.co/WENGSYX/CoNN_Copy) |
| Add_Carry | 117K | [[link]](https://huggingface.co/WENGSYX/CoNN_Add_Carry) |
| Sub_Carry | 117K | [[link]](https://huggingface.co/WENGSYX/CoNN_Sub_Carry) |
###### If you have also created some amazing CoNN, you are welcome to share them publicly with us.
## 🌱 Neural-Comprehension's Roadmap 🌱
Our future plan includes but not limited to :
- [x] One-click implementation of integration between CoNN and PLM (huggingface)
- [ ] Combining CoNN with LLM (API-based)
- [ ] Demo Presentation
### 🙏Cite🙏
###### If you are interested in our paper, please feel free to cite it.
```
@misc{weng2023mastering,
title={Mastering Symbolic Operations: Augmenting Language Models with Compiled Neural Networks},
author={Yixuan Weng and Minjun Zhu and Fei Xia and Bin Li and Shizhu He and Kang Liu and Jun Zhao},
year={2023},
eprint={2304.01665},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Contrastive-Tension/BERT-Distil-NLI-CT | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
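As the card provides no usage code, a minimal inference sketch with the standard SpeechT5 API might look like this (the checkpoint name `<this-repo>` and the zero speaker embedding are placeholders; a real x-vector speaker embedding will sound much better):
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("<this-repo>")  # placeholder for this fine-tuned checkpoint
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Goedemiddag, hoe gaat het met u?", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder x-vector

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```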
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.12.1
|
Contrastive-Tension/BERT-Large-CT | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="crowbarmassage/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Contrastive-Tension/BERT-Large-NLI-CT | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: indian_food_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indian_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
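No usage code is given; a minimal PyTorch inference sketch could look like the following (the repository id and image path are placeholders, not taken from this card):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "<username>/indian_food_model"  # placeholder repository id
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("dish.jpg")  # placeholder path to an image of an Indian dish
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```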
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Cooker/cicero-similis | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: finetune-spanish-gpt2-faqs-nlp-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune-spanish-gpt2-faqs-nlp-2
This model is a fine-tuned version of [mrm8488/spanish-gpt2](https://huggingface.co/mrm8488/spanish-gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2514
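No usage example is provided; a minimal generation sketch could look like this (the repository id and the FAQ-style prompt format are assumptions, not taken from this card):
```python
from transformers import pipeline

# Placeholder repository id -- replace with the actual checkpoint name.
generator = pipeline("text-generation", model="<username>/finetune-spanish-gpt2-faqs-nlp-2")

prompt = "Pregunta: ¿Qué es el procesamiento de lenguaje natural?\nRespuesta:"
print(generator(prompt, max_new_tokens=64, do_sample=True, top_p=0.95)[0]["generated_text"])
```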
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2147 | 1.0 | 1973 | 0.2514 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CopymySkill/DialoGPT-medium-atakan | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | "2023-05-19T06:38:29Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.81 +/- 6.48
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r kws/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
Corvus/DialoGPT-medium-CaptainPrice-Extended | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
language:
- en
pipeline_tag: image-to-text
tags:
- mplug-owl
---
# Usage
## Get the latest codebase from Github
```Bash
git clone https://github.com/X-PLUG/mPLUG-Owl.git
```
## Model initialization
```Python
from mplug_owl.modeling_mplug_owl import MplugOwlForConditionalGeneration
from mplug_owl.tokenization_mplug_owl import MplugOwlTokenizer
from mplug_owl.processing_mplug_owl import MplugOwlImageProcessor, MplugOwlProcessor
pretrained_ckpt = 'MAGAer13/mplug-owl-llama-7b'
model = MplugOwlForConditionalGeneration.from_pretrained(
pretrained_ckpt,
torch_dtype=torch.bfloat16,
)
image_processor = MplugOwlImageProcessor.from_pretrained(pretrained_ckpt)
tokenizer = MplugOwlTokenizer.from_pretrained(pretrained_ckpt)
processor = MplugOwlProcessor(image_processor, tokenizer)
```
## Model inference
Prepare model inputs.
```Python
# We use a human/AI template to organize the context as a multi-turn conversation.
# <image> denotes an image placeholder.
prompts = [
'''The following is a conversation between a curious human and AI assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: <image>
Human: Explain why this meme is funny.
AI: ''']
# The image paths should be placed in the image_list and kept in the same order as in the prompts.
# We support URLs, local file paths and base64 strings. You can customize the image pre-processing by modifying the mplug_owl.modeling_mplug_owl.ImageProcessor
image_list = ['https://xxx.com/image.jpg']
```
Get response.
```Python
# generate kwargs (the same in transformers) can be passed in the do_generate()
generate_kwargs = {
'do_sample': True,
'top_k': 5,
'max_length': 512
}
from PIL import Image
images = [Image.open(_) for _ in image_list]
inputs = processor(text=prompts, images=images, return_tensors='pt')
inputs = {k: v.bfloat16() if v.dtype == torch.float else v for k, v in inputs.items()}
inputs = {k: v.to(model.device) for k, v in inputs.items()}
with torch.no_grad():
res = model.generate(**inputs, **generate_kwargs)
sentence = tokenizer.decode(res.tolist()[0], skip_special_tokens=True)
print(sentence)
``` |
CoveJH/ConBot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- mteb
model-index:
- name: e5-small-v2
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.59701492537313
- type: ap
value: 41.67064885731708
- type: f1
value: 71.86465946398573
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.265875
- type: ap
value: 87.67633085349644
- type: f1
value: 91.24297521425744
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 45.882000000000005
- type: f1
value: 45.08058870381236
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.697
- type: map_at_10
value: 33.975
- type: map_at_100
value: 35.223
- type: map_at_1000
value: 35.260000000000005
- type: map_at_3
value: 29.776999999999997
- type: map_at_5
value: 32.035000000000004
- type: mrr_at_1
value: 20.982
- type: mrr_at_10
value: 34.094
- type: mrr_at_100
value: 35.343
- type: mrr_at_1000
value: 35.38
- type: mrr_at_3
value: 29.884
- type: mrr_at_5
value: 32.141999999999996
- type: ndcg_at_1
value: 20.697
- type: ndcg_at_10
value: 41.668
- type: ndcg_at_100
value: 47.397
- type: ndcg_at_1000
value: 48.305
- type: ndcg_at_3
value: 32.928000000000004
- type: ndcg_at_5
value: 36.998999999999995
- type: precision_at_1
value: 20.697
- type: precision_at_10
value: 6.636
- type: precision_at_100
value: 0.924
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.035
- type: precision_at_5
value: 10.398
- type: recall_at_1
value: 20.697
- type: recall_at_10
value: 66.35799999999999
- type: recall_at_100
value: 92.39
- type: recall_at_1000
value: 99.36
- type: recall_at_3
value: 42.105
- type: recall_at_5
value: 51.991
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 42.1169517447068
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 34.79553720107097
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.10811337308168
- type: mrr
value: 71.56410763751482
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 78.46834918248696
- type: cos_sim_spearman
value: 79.4289182755206
- type: euclidean_pearson
value: 76.26662973727008
- type: euclidean_spearman
value: 78.11744260952536
- type: manhattan_pearson
value: 76.08175262609434
- type: manhattan_spearman
value: 78.29395265552289
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 81.63636363636364
- type: f1
value: 81.55779952376953
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.88541137137571
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 30.05205685274407
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.293999999999997
- type: map_at_10
value: 39.876
- type: map_at_100
value: 41.315000000000005
- type: map_at_1000
value: 41.451
- type: map_at_3
value: 37.194
- type: map_at_5
value: 38.728
- type: mrr_at_1
value: 37.053000000000004
- type: mrr_at_10
value: 45.281
- type: mrr_at_100
value: 46.188
- type: mrr_at_1000
value: 46.245999999999995
- type: mrr_at_3
value: 43.228
- type: mrr_at_5
value: 44.366
- type: ndcg_at_1
value: 37.053000000000004
- type: ndcg_at_10
value: 45.086
- type: ndcg_at_100
value: 50.756
- type: ndcg_at_1000
value: 53.123
- type: ndcg_at_3
value: 41.416
- type: ndcg_at_5
value: 43.098
- type: precision_at_1
value: 37.053000000000004
- type: precision_at_10
value: 8.34
- type: precision_at_100
value: 1.346
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 19.647000000000002
- type: precision_at_5
value: 13.877
- type: recall_at_1
value: 30.293999999999997
- type: recall_at_10
value: 54.309
- type: recall_at_100
value: 78.59
- type: recall_at_1000
value: 93.82300000000001
- type: recall_at_3
value: 43.168
- type: recall_at_5
value: 48.192
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.738000000000003
- type: map_at_10
value: 36.925999999999995
- type: map_at_100
value: 38.017
- type: map_at_1000
value: 38.144
- type: map_at_3
value: 34.446
- type: map_at_5
value: 35.704
- type: mrr_at_1
value: 35.478
- type: mrr_at_10
value: 42.786
- type: mrr_at_100
value: 43.458999999999996
- type: mrr_at_1000
value: 43.507
- type: mrr_at_3
value: 40.648
- type: mrr_at_5
value: 41.804
- type: ndcg_at_1
value: 35.478
- type: ndcg_at_10
value: 42.044
- type: ndcg_at_100
value: 46.249
- type: ndcg_at_1000
value: 48.44
- type: ndcg_at_3
value: 38.314
- type: ndcg_at_5
value: 39.798
- type: precision_at_1
value: 35.478
- type: precision_at_10
value: 7.764
- type: precision_at_100
value: 1.253
- type: precision_at_1000
value: 0.174
- type: precision_at_3
value: 18.047
- type: precision_at_5
value: 12.637
- type: recall_at_1
value: 28.738000000000003
- type: recall_at_10
value: 50.659
- type: recall_at_100
value: 68.76299999999999
- type: recall_at_1000
value: 82.811
- type: recall_at_3
value: 39.536
- type: recall_at_5
value: 43.763999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.565
- type: map_at_10
value: 50.168
- type: map_at_100
value: 51.11
- type: map_at_1000
value: 51.173
- type: map_at_3
value: 47.044000000000004
- type: map_at_5
value: 48.838
- type: mrr_at_1
value: 44.201
- type: mrr_at_10
value: 53.596999999999994
- type: mrr_at_100
value: 54.211
- type: mrr_at_1000
value: 54.247
- type: mrr_at_3
value: 51.202000000000005
- type: mrr_at_5
value: 52.608999999999995
- type: ndcg_at_1
value: 44.201
- type: ndcg_at_10
value: 55.694
- type: ndcg_at_100
value: 59.518
- type: ndcg_at_1000
value: 60.907
- type: ndcg_at_3
value: 50.395999999999994
- type: ndcg_at_5
value: 53.022999999999996
- type: precision_at_1
value: 44.201
- type: precision_at_10
value: 8.84
- type: precision_at_100
value: 1.162
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 22.153
- type: precision_at_5
value: 15.260000000000002
- type: recall_at_1
value: 38.565
- type: recall_at_10
value: 68.65
- type: recall_at_100
value: 85.37400000000001
- type: recall_at_1000
value: 95.37400000000001
- type: recall_at_3
value: 54.645999999999994
- type: recall_at_5
value: 60.958
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.945
- type: map_at_10
value: 30.641000000000002
- type: map_at_100
value: 31.599
- type: map_at_1000
value: 31.691000000000003
- type: map_at_3
value: 28.405
- type: map_at_5
value: 29.704000000000004
- type: mrr_at_1
value: 25.537
- type: mrr_at_10
value: 32.22
- type: mrr_at_100
value: 33.138
- type: mrr_at_1000
value: 33.214
- type: mrr_at_3
value: 30.151
- type: mrr_at_5
value: 31.298
- type: ndcg_at_1
value: 25.537
- type: ndcg_at_10
value: 34.638000000000005
- type: ndcg_at_100
value: 39.486
- type: ndcg_at_1000
value: 41.936
- type: ndcg_at_3
value: 30.333
- type: ndcg_at_5
value: 32.482
- type: precision_at_1
value: 25.537
- type: precision_at_10
value: 5.153
- type: precision_at_100
value: 0.7929999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 12.429
- type: precision_at_5
value: 8.723
- type: recall_at_1
value: 23.945
- type: recall_at_10
value: 45.412
- type: recall_at_100
value: 67.836
- type: recall_at_1000
value: 86.467
- type: recall_at_3
value: 34.031
- type: recall_at_5
value: 39.039
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.419
- type: map_at_10
value: 20.858999999999998
- type: map_at_100
value: 22.067999999999998
- type: map_at_1000
value: 22.192
- type: map_at_3
value: 18.673000000000002
- type: map_at_5
value: 19.968
- type: mrr_at_1
value: 17.785999999999998
- type: mrr_at_10
value: 24.878
- type: mrr_at_100
value: 26.021
- type: mrr_at_1000
value: 26.095000000000002
- type: mrr_at_3
value: 22.616
- type: mrr_at_5
value: 23.785
- type: ndcg_at_1
value: 17.785999999999998
- type: ndcg_at_10
value: 25.153
- type: ndcg_at_100
value: 31.05
- type: ndcg_at_1000
value: 34.052
- type: ndcg_at_3
value: 21.117
- type: ndcg_at_5
value: 23.048
- type: precision_at_1
value: 17.785999999999998
- type: precision_at_10
value: 4.590000000000001
- type: precision_at_100
value: 0.864
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 9.908999999999999
- type: precision_at_5
value: 7.313
- type: recall_at_1
value: 14.419
- type: recall_at_10
value: 34.477999999999994
- type: recall_at_100
value: 60.02499999999999
- type: recall_at_1000
value: 81.646
- type: recall_at_3
value: 23.515
- type: recall_at_5
value: 28.266999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.268
- type: map_at_10
value: 35.114000000000004
- type: map_at_100
value: 36.212
- type: map_at_1000
value: 36.333
- type: map_at_3
value: 32.436
- type: map_at_5
value: 33.992
- type: mrr_at_1
value: 31.761
- type: mrr_at_10
value: 40.355999999999995
- type: mrr_at_100
value: 41.125
- type: mrr_at_1000
value: 41.186
- type: mrr_at_3
value: 37.937
- type: mrr_at_5
value: 39.463
- type: ndcg_at_1
value: 31.761
- type: ndcg_at_10
value: 40.422000000000004
- type: ndcg_at_100
value: 45.458999999999996
- type: ndcg_at_1000
value: 47.951
- type: ndcg_at_3
value: 35.972
- type: ndcg_at_5
value: 38.272
- type: precision_at_1
value: 31.761
- type: precision_at_10
value: 7.103
- type: precision_at_100
value: 1.133
- type: precision_at_1000
value: 0.152
- type: precision_at_3
value: 16.779
- type: precision_at_5
value: 11.877
- type: recall_at_1
value: 26.268
- type: recall_at_10
value: 51.053000000000004
- type: recall_at_100
value: 72.702
- type: recall_at_1000
value: 89.521
- type: recall_at_3
value: 38.619
- type: recall_at_5
value: 44.671
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.230999999999998
- type: map_at_10
value: 34.227000000000004
- type: map_at_100
value: 35.370000000000005
- type: map_at_1000
value: 35.488
- type: map_at_3
value: 31.496000000000002
- type: map_at_5
value: 33.034
- type: mrr_at_1
value: 30.822
- type: mrr_at_10
value: 39.045
- type: mrr_at_100
value: 39.809
- type: mrr_at_1000
value: 39.873
- type: mrr_at_3
value: 36.663000000000004
- type: mrr_at_5
value: 37.964
- type: ndcg_at_1
value: 30.822
- type: ndcg_at_10
value: 39.472
- type: ndcg_at_100
value: 44.574999999999996
- type: ndcg_at_1000
value: 47.162
- type: ndcg_at_3
value: 34.929
- type: ndcg_at_5
value: 37.002
- type: precision_at_1
value: 30.822
- type: precision_at_10
value: 7.055
- type: precision_at_100
value: 1.124
- type: precision_at_1000
value: 0.152
- type: precision_at_3
value: 16.591
- type: precision_at_5
value: 11.667
- type: recall_at_1
value: 25.230999999999998
- type: recall_at_10
value: 50.42100000000001
- type: recall_at_100
value: 72.685
- type: recall_at_1000
value: 90.469
- type: recall_at_3
value: 37.503
- type: recall_at_5
value: 43.123
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.604166666666664
- type: map_at_10
value: 32.427166666666665
- type: map_at_100
value: 33.51474999999999
- type: map_at_1000
value: 33.6345
- type: map_at_3
value: 30.02366666666667
- type: map_at_5
value: 31.382333333333328
- type: mrr_at_1
value: 29.001166666666666
- type: mrr_at_10
value: 36.3315
- type: mrr_at_100
value: 37.16683333333333
- type: mrr_at_1000
value: 37.23341666666668
- type: mrr_at_3
value: 34.19916666666667
- type: mrr_at_5
value: 35.40458333333334
- type: ndcg_at_1
value: 29.001166666666666
- type: ndcg_at_10
value: 37.06883333333334
- type: ndcg_at_100
value: 41.95816666666666
- type: ndcg_at_1000
value: 44.501583333333336
- type: ndcg_at_3
value: 32.973499999999994
- type: ndcg_at_5
value: 34.90833333333334
- type: precision_at_1
value: 29.001166666666666
- type: precision_at_10
value: 6.336
- type: precision_at_100
value: 1.0282499999999999
- type: precision_at_1000
value: 0.14391666666666664
- type: precision_at_3
value: 14.932499999999996
- type: precision_at_5
value: 10.50825
- type: recall_at_1
value: 24.604166666666664
- type: recall_at_10
value: 46.9525
- type: recall_at_100
value: 68.67816666666667
- type: recall_at_1000
value: 86.59783333333334
- type: recall_at_3
value: 35.49783333333333
- type: recall_at_5
value: 40.52525000000001
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.559
- type: map_at_10
value: 29.023
- type: map_at_100
value: 29.818
- type: map_at_1000
value: 29.909000000000002
- type: map_at_3
value: 27.037
- type: map_at_5
value: 28.225
- type: mrr_at_1
value: 26.994
- type: mrr_at_10
value: 31.962000000000003
- type: mrr_at_100
value: 32.726
- type: mrr_at_1000
value: 32.800000000000004
- type: mrr_at_3
value: 30.266
- type: mrr_at_5
value: 31.208999999999996
- type: ndcg_at_1
value: 26.994
- type: ndcg_at_10
value: 32.53
- type: ndcg_at_100
value: 36.758
- type: ndcg_at_1000
value: 39.362
- type: ndcg_at_3
value: 28.985
- type: ndcg_at_5
value: 30.757
- type: precision_at_1
value: 26.994
- type: precision_at_10
value: 4.968999999999999
- type: precision_at_100
value: 0.759
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 12.219
- type: precision_at_5
value: 8.527999999999999
- type: recall_at_1
value: 23.559
- type: recall_at_10
value: 40.585
- type: recall_at_100
value: 60.306000000000004
- type: recall_at_1000
value: 80.11
- type: recall_at_3
value: 30.794
- type: recall_at_5
value: 35.186
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.384999999999998
- type: map_at_10
value: 22.142
- type: map_at_100
value: 23.057
- type: map_at_1000
value: 23.177
- type: map_at_3
value: 20.29
- type: map_at_5
value: 21.332
- type: mrr_at_1
value: 19.89
- type: mrr_at_10
value: 25.771
- type: mrr_at_100
value: 26.599
- type: mrr_at_1000
value: 26.680999999999997
- type: mrr_at_3
value: 23.962
- type: mrr_at_5
value: 24.934
- type: ndcg_at_1
value: 19.89
- type: ndcg_at_10
value: 25.97
- type: ndcg_at_100
value: 30.605
- type: ndcg_at_1000
value: 33.619
- type: ndcg_at_3
value: 22.704
- type: ndcg_at_5
value: 24.199
- type: precision_at_1
value: 19.89
- type: precision_at_10
value: 4.553
- type: precision_at_100
value: 0.8049999999999999
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 10.541
- type: precision_at_5
value: 7.46
- type: recall_at_1
value: 16.384999999999998
- type: recall_at_10
value: 34.001
- type: recall_at_100
value: 55.17100000000001
- type: recall_at_1000
value: 77.125
- type: recall_at_3
value: 24.618000000000002
- type: recall_at_5
value: 28.695999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.726
- type: map_at_10
value: 31.227
- type: map_at_100
value: 32.311
- type: map_at_1000
value: 32.419
- type: map_at_3
value: 28.765
- type: map_at_5
value: 30.229
- type: mrr_at_1
value: 27.705000000000002
- type: mrr_at_10
value: 35.085
- type: mrr_at_100
value: 35.931000000000004
- type: mrr_at_1000
value: 36
- type: mrr_at_3
value: 32.603
- type: mrr_at_5
value: 34.117999999999995
- type: ndcg_at_1
value: 27.705000000000002
- type: ndcg_at_10
value: 35.968
- type: ndcg_at_100
value: 41.197
- type: ndcg_at_1000
value: 43.76
- type: ndcg_at_3
value: 31.304
- type: ndcg_at_5
value: 33.661
- type: precision_at_1
value: 27.705000000000002
- type: precision_at_10
value: 5.942
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 13.868
- type: precision_at_5
value: 9.944
- type: recall_at_1
value: 23.726
- type: recall_at_10
value: 46.786
- type: recall_at_100
value: 70.072
- type: recall_at_1000
value: 88.2
- type: recall_at_3
value: 33.981
- type: recall_at_5
value: 39.893
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.344
- type: map_at_10
value: 31.636999999999997
- type: map_at_100
value: 33.065
- type: map_at_1000
value: 33.300000000000004
- type: map_at_3
value: 29.351
- type: map_at_5
value: 30.432
- type: mrr_at_1
value: 27.866000000000003
- type: mrr_at_10
value: 35.587
- type: mrr_at_100
value: 36.52
- type: mrr_at_1000
value: 36.597
- type: mrr_at_3
value: 33.696
- type: mrr_at_5
value: 34.713
- type: ndcg_at_1
value: 27.866000000000003
- type: ndcg_at_10
value: 36.61
- type: ndcg_at_100
value: 41.88
- type: ndcg_at_1000
value: 45.105000000000004
- type: ndcg_at_3
value: 33.038000000000004
- type: ndcg_at_5
value: 34.331
- type: precision_at_1
value: 27.866000000000003
- type: precision_at_10
value: 6.917
- type: precision_at_100
value: 1.3599999999999999
- type: precision_at_1000
value: 0.233
- type: precision_at_3
value: 15.547
- type: precision_at_5
value: 10.791
- type: recall_at_1
value: 23.344
- type: recall_at_10
value: 45.782000000000004
- type: recall_at_100
value: 69.503
- type: recall_at_1000
value: 90.742
- type: recall_at_3
value: 35.160000000000004
- type: recall_at_5
value: 39.058
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.776
- type: map_at_10
value: 27.285999999999998
- type: map_at_100
value: 28.235
- type: map_at_1000
value: 28.337
- type: map_at_3
value: 25.147000000000002
- type: map_at_5
value: 26.401999999999997
- type: mrr_at_1
value: 22.921
- type: mrr_at_10
value: 29.409999999999997
- type: mrr_at_100
value: 30.275000000000002
- type: mrr_at_1000
value: 30.354999999999997
- type: mrr_at_3
value: 27.418
- type: mrr_at_5
value: 28.592000000000002
- type: ndcg_at_1
value: 22.921
- type: ndcg_at_10
value: 31.239
- type: ndcg_at_100
value: 35.965
- type: ndcg_at_1000
value: 38.602
- type: ndcg_at_3
value: 27.174
- type: ndcg_at_5
value: 29.229
- type: precision_at_1
value: 22.921
- type: precision_at_10
value: 4.806
- type: precision_at_100
value: 0.776
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 11.459999999999999
- type: precision_at_5
value: 8.022
- type: recall_at_1
value: 20.776
- type: recall_at_10
value: 41.294
- type: recall_at_100
value: 63.111
- type: recall_at_1000
value: 82.88600000000001
- type: recall_at_3
value: 30.403000000000002
- type: recall_at_5
value: 35.455999999999996
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.376
- type: map_at_10
value: 15.926000000000002
- type: map_at_100
value: 17.585
- type: map_at_1000
value: 17.776
- type: map_at_3
value: 13.014000000000001
- type: map_at_5
value: 14.417
- type: mrr_at_1
value: 20.195
- type: mrr_at_10
value: 29.95
- type: mrr_at_100
value: 31.052000000000003
- type: mrr_at_1000
value: 31.108000000000004
- type: mrr_at_3
value: 26.667
- type: mrr_at_5
value: 28.458
- type: ndcg_at_1
value: 20.195
- type: ndcg_at_10
value: 22.871
- type: ndcg_at_100
value: 29.921999999999997
- type: ndcg_at_1000
value: 33.672999999999995
- type: ndcg_at_3
value: 17.782999999999998
- type: ndcg_at_5
value: 19.544
- type: precision_at_1
value: 20.195
- type: precision_at_10
value: 7.394
- type: precision_at_100
value: 1.493
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 13.073
- type: precision_at_5
value: 10.436
- type: recall_at_1
value: 9.376
- type: recall_at_10
value: 28.544999999999998
- type: recall_at_100
value: 53.147999999999996
- type: recall_at_1000
value: 74.62
- type: recall_at_3
value: 16.464000000000002
- type: recall_at_5
value: 21.004
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.415000000000001
- type: map_at_10
value: 18.738
- type: map_at_100
value: 27.291999999999998
- type: map_at_1000
value: 28.992
- type: map_at_3
value: 13.196
- type: map_at_5
value: 15.539
- type: mrr_at_1
value: 66.5
- type: mrr_at_10
value: 74.518
- type: mrr_at_100
value: 74.86
- type: mrr_at_1000
value: 74.87
- type: mrr_at_3
value: 72.375
- type: mrr_at_5
value: 73.86200000000001
- type: ndcg_at_1
value: 54.37499999999999
- type: ndcg_at_10
value: 41.317
- type: ndcg_at_100
value: 45.845
- type: ndcg_at_1000
value: 52.92
- type: ndcg_at_3
value: 44.983000000000004
- type: ndcg_at_5
value: 42.989
- type: precision_at_1
value: 66.5
- type: precision_at_10
value: 33.6
- type: precision_at_100
value: 10.972999999999999
- type: precision_at_1000
value: 2.214
- type: precision_at_3
value: 48.583
- type: precision_at_5
value: 42.15
- type: recall_at_1
value: 8.415000000000001
- type: recall_at_10
value: 24.953
- type: recall_at_100
value: 52.48199999999999
- type: recall_at_1000
value: 75.093
- type: recall_at_3
value: 14.341000000000001
- type: recall_at_5
value: 18.468
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.06499999999999
- type: f1
value: 41.439327599975385
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 66.02
- type: map_at_10
value: 76.68599999999999
- type: map_at_100
value: 76.959
- type: map_at_1000
value: 76.972
- type: map_at_3
value: 75.024
- type: map_at_5
value: 76.153
- type: mrr_at_1
value: 71.197
- type: mrr_at_10
value: 81.105
- type: mrr_at_100
value: 81.232
- type: mrr_at_1000
value: 81.233
- type: mrr_at_3
value: 79.758
- type: mrr_at_5
value: 80.69
- type: ndcg_at_1
value: 71.197
- type: ndcg_at_10
value: 81.644
- type: ndcg_at_100
value: 82.645
- type: ndcg_at_1000
value: 82.879
- type: ndcg_at_3
value: 78.792
- type: ndcg_at_5
value: 80.528
- type: precision_at_1
value: 71.197
- type: precision_at_10
value: 10.206999999999999
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 30.868000000000002
- type: precision_at_5
value: 19.559
- type: recall_at_1
value: 66.02
- type: recall_at_10
value: 92.50699999999999
- type: recall_at_100
value: 96.497
- type: recall_at_1000
value: 97.956
- type: recall_at_3
value: 84.866
- type: recall_at_5
value: 89.16199999999999
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.948
- type: map_at_10
value: 29.833
- type: map_at_100
value: 31.487
- type: map_at_1000
value: 31.674000000000003
- type: map_at_3
value: 26.029999999999998
- type: map_at_5
value: 28.038999999999998
- type: mrr_at_1
value: 34.721999999999994
- type: mrr_at_10
value: 44.214999999999996
- type: mrr_at_100
value: 44.994
- type: mrr_at_1000
value: 45.051
- type: mrr_at_3
value: 41.667
- type: mrr_at_5
value: 43.032
- type: ndcg_at_1
value: 34.721999999999994
- type: ndcg_at_10
value: 37.434
- type: ndcg_at_100
value: 43.702000000000005
- type: ndcg_at_1000
value: 46.993
- type: ndcg_at_3
value: 33.56
- type: ndcg_at_5
value: 34.687
- type: precision_at_1
value: 34.721999999999994
- type: precision_at_10
value: 10.401
- type: precision_at_100
value: 1.7049999999999998
- type: precision_at_1000
value: 0.22799999999999998
- type: precision_at_3
value: 22.531000000000002
- type: precision_at_5
value: 16.42
- type: recall_at_1
value: 17.948
- type: recall_at_10
value: 45.062999999999995
- type: recall_at_100
value: 68.191
- type: recall_at_1000
value: 87.954
- type: recall_at_3
value: 31.112000000000002
- type: recall_at_5
value: 36.823
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.644
- type: map_at_10
value: 57.658
- type: map_at_100
value: 58.562000000000005
- type: map_at_1000
value: 58.62500000000001
- type: map_at_3
value: 54.022999999999996
- type: map_at_5
value: 56.293000000000006
- type: mrr_at_1
value: 73.288
- type: mrr_at_10
value: 80.51700000000001
- type: mrr_at_100
value: 80.72
- type: mrr_at_1000
value: 80.728
- type: mrr_at_3
value: 79.33200000000001
- type: mrr_at_5
value: 80.085
- type: ndcg_at_1
value: 73.288
- type: ndcg_at_10
value: 66.61
- type: ndcg_at_100
value: 69.723
- type: ndcg_at_1000
value: 70.96000000000001
- type: ndcg_at_3
value: 61.358999999999995
- type: ndcg_at_5
value: 64.277
- type: precision_at_1
value: 73.288
- type: precision_at_10
value: 14.17
- type: precision_at_100
value: 1.659
- type: precision_at_1000
value: 0.182
- type: precision_at_3
value: 39.487
- type: precision_at_5
value: 25.999
- type: recall_at_1
value: 36.644
- type: recall_at_10
value: 70.851
- type: recall_at_100
value: 82.94399999999999
- type: recall_at_1000
value: 91.134
- type: recall_at_3
value: 59.230000000000004
- type: recall_at_5
value: 64.997
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 86.00280000000001
- type: ap
value: 80.46302061021223
- type: f1
value: 85.9592921596419
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.541
- type: map_at_10
value: 34.625
- type: map_at_100
value: 35.785
- type: map_at_1000
value: 35.831
- type: map_at_3
value: 30.823
- type: map_at_5
value: 32.967999999999996
- type: mrr_at_1
value: 23.180999999999997
- type: mrr_at_10
value: 35.207
- type: mrr_at_100
value: 36.315
- type: mrr_at_1000
value: 36.355
- type: mrr_at_3
value: 31.483
- type: mrr_at_5
value: 33.589999999999996
- type: ndcg_at_1
value: 23.195
- type: ndcg_at_10
value: 41.461
- type: ndcg_at_100
value: 47.032000000000004
- type: ndcg_at_1000
value: 48.199999999999996
- type: ndcg_at_3
value: 33.702
- type: ndcg_at_5
value: 37.522
- type: precision_at_1
value: 23.195
- type: precision_at_10
value: 6.526999999999999
- type: precision_at_100
value: 0.932
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 14.308000000000002
- type: precision_at_5
value: 10.507
- type: recall_at_1
value: 22.541
- type: recall_at_10
value: 62.524
- type: recall_at_100
value: 88.228
- type: recall_at_1000
value: 97.243
- type: recall_at_3
value: 41.38
- type: recall_at_5
value: 50.55
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.69949840401279
- type: f1
value: 92.54141471311786
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 72.56041951664386
- type: f1
value: 55.88499977508287
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.62071284465365
- type: f1
value: 69.36717546572152
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.35843981170142
- type: f1
value: 76.15496453538884
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.33664956793118
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 27.883839621715524
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.096874986740758
- type: mrr
value: 30.97300481932132
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.4
- type: map_at_10
value: 11.852
- type: map_at_100
value: 14.758
- type: map_at_1000
value: 16.134
- type: map_at_3
value: 8.558
- type: map_at_5
value: 10.087
- type: mrr_at_1
value: 44.272
- type: mrr_at_10
value: 52.05800000000001
- type: mrr_at_100
value: 52.689
- type: mrr_at_1000
value: 52.742999999999995
- type: mrr_at_3
value: 50.205999999999996
- type: mrr_at_5
value: 51.367
- type: ndcg_at_1
value: 42.57
- type: ndcg_at_10
value: 32.449
- type: ndcg_at_100
value: 29.596
- type: ndcg_at_1000
value: 38.351
- type: ndcg_at_3
value: 37.044
- type: ndcg_at_5
value: 35.275
- type: precision_at_1
value: 44.272
- type: precision_at_10
value: 23.87
- type: precision_at_100
value: 7.625
- type: precision_at_1000
value: 2.045
- type: precision_at_3
value: 34.365
- type: precision_at_5
value: 30.341
- type: recall_at_1
value: 5.4
- type: recall_at_10
value: 15.943999999999999
- type: recall_at_100
value: 29.805
- type: recall_at_1000
value: 61.695
- type: recall_at_3
value: 9.539
- type: recall_at_5
value: 12.127
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.047000000000004
- type: map_at_10
value: 51.6
- type: map_at_100
value: 52.449999999999996
- type: map_at_1000
value: 52.476
- type: map_at_3
value: 47.452
- type: map_at_5
value: 49.964
- type: mrr_at_1
value: 40.382
- type: mrr_at_10
value: 54.273
- type: mrr_at_100
value: 54.859
- type: mrr_at_1000
value: 54.876000000000005
- type: mrr_at_3
value: 51.014
- type: mrr_at_5
value: 52.983999999999995
- type: ndcg_at_1
value: 40.353
- type: ndcg_at_10
value: 59.11300000000001
- type: ndcg_at_100
value: 62.604000000000006
- type: ndcg_at_1000
value: 63.187000000000005
- type: ndcg_at_3
value: 51.513
- type: ndcg_at_5
value: 55.576
- type: precision_at_1
value: 40.353
- type: precision_at_10
value: 9.418
- type: precision_at_100
value: 1.1440000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.078000000000003
- type: precision_at_5
value: 16.250999999999998
- type: recall_at_1
value: 36.047000000000004
- type: recall_at_10
value: 79.22200000000001
- type: recall_at_100
value: 94.23
- type: recall_at_1000
value: 98.51100000000001
- type: recall_at_3
value: 59.678
- type: recall_at_5
value: 68.967
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 68.232
- type: map_at_10
value: 81.674
- type: map_at_100
value: 82.338
- type: map_at_1000
value: 82.36099999999999
- type: map_at_3
value: 78.833
- type: map_at_5
value: 80.58
- type: mrr_at_1
value: 78.64
- type: mrr_at_10
value: 85.164
- type: mrr_at_100
value: 85.317
- type: mrr_at_1000
value: 85.319
- type: mrr_at_3
value: 84.127
- type: mrr_at_5
value: 84.789
- type: ndcg_at_1
value: 78.63
- type: ndcg_at_10
value: 85.711
- type: ndcg_at_100
value: 87.238
- type: ndcg_at_1000
value: 87.444
- type: ndcg_at_3
value: 82.788
- type: ndcg_at_5
value: 84.313
- type: precision_at_1
value: 78.63
- type: precision_at_10
value: 12.977
- type: precision_at_100
value: 1.503
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.113
- type: precision_at_5
value: 23.71
- type: recall_at_1
value: 68.232
- type: recall_at_10
value: 93.30199999999999
- type: recall_at_100
value: 98.799
- type: recall_at_1000
value: 99.885
- type: recall_at_3
value: 84.827
- type: recall_at_5
value: 89.188
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 45.71879170816294
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 59.65866311751794
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.218
- type: map_at_10
value: 10.337
- type: map_at_100
value: 12.131
- type: map_at_1000
value: 12.411
- type: map_at_3
value: 7.4270000000000005
- type: map_at_5
value: 8.913
- type: mrr_at_1
value: 20.8
- type: mrr_at_10
value: 30.868000000000002
- type: mrr_at_100
value: 31.903
- type: mrr_at_1000
value: 31.972
- type: mrr_at_3
value: 27.367
- type: mrr_at_5
value: 29.372
- type: ndcg_at_1
value: 20.8
- type: ndcg_at_10
value: 17.765
- type: ndcg_at_100
value: 24.914
- type: ndcg_at_1000
value: 30.206
- type: ndcg_at_3
value: 16.64
- type: ndcg_at_5
value: 14.712
- type: precision_at_1
value: 20.8
- type: precision_at_10
value: 9.24
- type: precision_at_100
value: 1.9560000000000002
- type: precision_at_1000
value: 0.32299999999999995
- type: precision_at_3
value: 15.467
- type: precision_at_5
value: 12.94
- type: recall_at_1
value: 4.218
- type: recall_at_10
value: 18.752
- type: recall_at_100
value: 39.7
- type: recall_at_1000
value: 65.57300000000001
- type: recall_at_3
value: 9.428
- type: recall_at_5
value: 13.133000000000001
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.04338850207233
- type: cos_sim_spearman
value: 78.5054651430423
- type: euclidean_pearson
value: 80.30739451228612
- type: euclidean_spearman
value: 78.48377464299097
- type: manhattan_pearson
value: 80.40795049052781
- type: manhattan_spearman
value: 78.49506205443114
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.11596224442962
- type: cos_sim_spearman
value: 76.20997388935461
- type: euclidean_pearson
value: 80.56858451349109
- type: euclidean_spearman
value: 75.92659183871186
- type: manhattan_pearson
value: 80.60246102203844
- type: manhattan_spearman
value: 76.03018971432664
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 81.34691640755737
- type: cos_sim_spearman
value: 82.4018369631579
- type: euclidean_pearson
value: 81.87673092245366
- type: euclidean_spearman
value: 82.3671489960678
- type: manhattan_pearson
value: 81.88222387719948
- type: manhattan_spearman
value: 82.3816590344736
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 81.2836092579524
- type: cos_sim_spearman
value: 78.99982781772064
- type: euclidean_pearson
value: 80.5184271010527
- type: euclidean_spearman
value: 78.89777392101904
- type: manhattan_pearson
value: 80.53585705018664
- type: manhattan_spearman
value: 78.92898405472994
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.7349907750784
- type: cos_sim_spearman
value: 87.7611234446225
- type: euclidean_pearson
value: 86.98759326731624
- type: euclidean_spearman
value: 87.58321319424618
- type: manhattan_pearson
value: 87.03483090370842
- type: manhattan_spearman
value: 87.63278333060288
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 81.75873694924825
- type: cos_sim_spearman
value: 83.80237999094724
- type: euclidean_pearson
value: 83.55023725861537
- type: euclidean_spearman
value: 84.12744338577744
- type: manhattan_pearson
value: 83.58816983036232
- type: manhattan_spearman
value: 84.18520748676501
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.21630882940174
- type: cos_sim_spearman
value: 87.72382883437031
- type: euclidean_pearson
value: 88.69933350930333
- type: euclidean_spearman
value: 88.24660814383081
- type: manhattan_pearson
value: 88.77331018833499
- type: manhattan_spearman
value: 88.26109989380632
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 61.11854063060489
- type: cos_sim_spearman
value: 63.14678634195072
- type: euclidean_pearson
value: 61.679090067000864
- type: euclidean_spearman
value: 62.28876589509653
- type: manhattan_pearson
value: 62.082324165511004
- type: manhattan_spearman
value: 62.56030932816679
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.00319882832645
- type: cos_sim_spearman
value: 85.94529772647257
- type: euclidean_pearson
value: 85.6661390122756
- type: euclidean_spearman
value: 85.97747815545827
- type: manhattan_pearson
value: 85.58422770541893
- type: manhattan_spearman
value: 85.9237139181532
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 79.16198731863916
- type: mrr
value: 94.25202702163487
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 54.761
- type: map_at_10
value: 64.396
- type: map_at_100
value: 65.07
- type: map_at_1000
value: 65.09899999999999
- type: map_at_3
value: 61.846000000000004
- type: map_at_5
value: 63.284
- type: mrr_at_1
value: 57.667
- type: mrr_at_10
value: 65.83099999999999
- type: mrr_at_100
value: 66.36800000000001
- type: mrr_at_1000
value: 66.39399999999999
- type: mrr_at_3
value: 64.056
- type: mrr_at_5
value: 65.206
- type: ndcg_at_1
value: 57.667
- type: ndcg_at_10
value: 68.854
- type: ndcg_at_100
value: 71.59100000000001
- type: ndcg_at_1000
value: 72.383
- type: ndcg_at_3
value: 64.671
- type: ndcg_at_5
value: 66.796
- type: precision_at_1
value: 57.667
- type: precision_at_10
value: 9.167
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 25.444
- type: precision_at_5
value: 16.667
- type: recall_at_1
value: 54.761
- type: recall_at_10
value: 80.9
- type: recall_at_100
value: 92.767
- type: recall_at_1000
value: 99
- type: recall_at_3
value: 69.672
- type: recall_at_5
value: 75.083
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.8079207920792
- type: cos_sim_ap
value: 94.88470927617445
- type: cos_sim_f1
value: 90.08179959100204
- type: cos_sim_precision
value: 92.15481171548117
- type: cos_sim_recall
value: 88.1
- type: dot_accuracy
value: 99.58613861386138
- type: dot_ap
value: 82.94822578881316
- type: dot_f1
value: 77.33333333333333
- type: dot_precision
value: 79.36842105263158
- type: dot_recall
value: 75.4
- type: euclidean_accuracy
value: 99.8069306930693
- type: euclidean_ap
value: 94.81367858031837
- type: euclidean_f1
value: 90.01009081735621
- type: euclidean_precision
value: 90.83503054989816
- type: euclidean_recall
value: 89.2
- type: manhattan_accuracy
value: 99.81188118811882
- type: manhattan_ap
value: 94.91405337220161
- type: manhattan_f1
value: 90.2763561924258
- type: manhattan_precision
value: 92.45283018867924
- type: manhattan_recall
value: 88.2
- type: max_accuracy
value: 99.81188118811882
- type: max_ap
value: 94.91405337220161
- type: max_f1
value: 90.2763561924258
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 58.511599500053094
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 31.984728147814707
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.93428193939015
- type: mrr
value: 50.916557911043206
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.562500894537145
- type: cos_sim_spearman
value: 31.162587976726307
- type: dot_pearson
value: 22.633662187735762
- type: dot_spearman
value: 22.723000282378962
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.219
- type: map_at_10
value: 1.871
- type: map_at_100
value: 10.487
- type: map_at_1000
value: 25.122
- type: map_at_3
value: 0.657
- type: map_at_5
value: 1.0699999999999998
- type: mrr_at_1
value: 84
- type: mrr_at_10
value: 89.567
- type: mrr_at_100
value: 89.748
- type: mrr_at_1000
value: 89.748
- type: mrr_at_3
value: 88.667
- type: mrr_at_5
value: 89.567
- type: ndcg_at_1
value: 80
- type: ndcg_at_10
value: 74.533
- type: ndcg_at_100
value: 55.839000000000006
- type: ndcg_at_1000
value: 49.748
- type: ndcg_at_3
value: 79.53099999999999
- type: ndcg_at_5
value: 78.245
- type: precision_at_1
value: 84
- type: precision_at_10
value: 78.4
- type: precision_at_100
value: 56.99999999999999
- type: precision_at_1000
value: 21.98
- type: precision_at_3
value: 85.333
- type: precision_at_5
value: 84.8
- type: recall_at_1
value: 0.219
- type: recall_at_10
value: 2.02
- type: recall_at_100
value: 13.555
- type: recall_at_1000
value: 46.739999999999995
- type: recall_at_3
value: 0.685
- type: recall_at_5
value: 1.13
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.5029999999999997
- type: map_at_10
value: 11.042
- type: map_at_100
value: 16.326999999999998
- type: map_at_1000
value: 17.836
- type: map_at_3
value: 6.174
- type: map_at_5
value: 7.979
- type: mrr_at_1
value: 42.857
- type: mrr_at_10
value: 52.617000000000004
- type: mrr_at_100
value: 53.351000000000006
- type: mrr_at_1000
value: 53.351000000000006
- type: mrr_at_3
value: 46.939
- type: mrr_at_5
value: 50.714000000000006
- type: ndcg_at_1
value: 38.775999999999996
- type: ndcg_at_10
value: 27.125
- type: ndcg_at_100
value: 35.845
- type: ndcg_at_1000
value: 47.377
- type: ndcg_at_3
value: 29.633
- type: ndcg_at_5
value: 28.378999999999998
- type: precision_at_1
value: 42.857
- type: precision_at_10
value: 24.082
- type: precision_at_100
value: 6.877999999999999
- type: precision_at_1000
value: 1.463
- type: precision_at_3
value: 29.932
- type: precision_at_5
value: 28.571
- type: recall_at_1
value: 3.5029999999999997
- type: recall_at_10
value: 17.068
- type: recall_at_100
value: 43.361
- type: recall_at_1000
value: 78.835
- type: recall_at_3
value: 6.821000000000001
- type: recall_at_5
value: 10.357
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.0954
- type: ap
value: 14.216844153511959
- type: f1
value: 54.63687418565117
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.46293152235427
- type: f1
value: 61.744177921638645
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 41.12708617788644
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.75430649102938
- type: cos_sim_ap
value: 73.34252536948081
- type: cos_sim_f1
value: 67.53758935173774
- type: cos_sim_precision
value: 63.3672525439408
- type: cos_sim_recall
value: 72.29551451187335
- type: dot_accuracy
value: 81.71305954580676
- type: dot_ap
value: 59.5532209082386
- type: dot_f1
value: 56.18466898954705
- type: dot_precision
value: 47.830923248053395
- type: dot_recall
value: 68.07387862796834
- type: euclidean_accuracy
value: 85.81987244441795
- type: euclidean_ap
value: 73.34325409809446
- type: euclidean_f1
value: 67.83451360417443
- type: euclidean_precision
value: 64.09955388588871
- type: euclidean_recall
value: 72.0316622691293
- type: manhattan_accuracy
value: 85.68277999642368
- type: manhattan_ap
value: 73.1535450121903
- type: manhattan_f1
value: 67.928237896289
- type: manhattan_precision
value: 63.56945722171113
- type: manhattan_recall
value: 72.9287598944591
- type: max_accuracy
value: 85.81987244441795
- type: max_ap
value: 73.34325409809446
- type: max_f1
value: 67.928237896289
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.90441262079403
- type: cos_sim_ap
value: 85.79331880741438
- type: cos_sim_f1
value: 78.31563529842548
- type: cos_sim_precision
value: 74.6683424102779
- type: cos_sim_recall
value: 82.33754234678165
- type: dot_accuracy
value: 84.89928978926534
- type: dot_ap
value: 75.25819218316
- type: dot_f1
value: 69.88730119720536
- type: dot_precision
value: 64.23362374959665
- type: dot_recall
value: 76.63227594702803
- type: euclidean_accuracy
value: 89.01695967710637
- type: euclidean_ap
value: 85.98986606038852
- type: euclidean_f1
value: 78.5277880014722
- type: euclidean_precision
value: 75.22211253701876
- type: euclidean_recall
value: 82.13735756082538
- type: manhattan_accuracy
value: 88.99561454573679
- type: manhattan_ap
value: 85.92262421793953
- type: manhattan_f1
value: 78.38866094740769
- type: manhattan_precision
value: 76.02373028505282
- type: manhattan_recall
value: 80.9054511857099
- type: max_accuracy
value: 89.01695967710637
- type: max_ap
value: 85.98986606038852
- type: max_f1
value: 78.5277880014722
language:
- en
license: mit
---
# E5-small-v2
[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
This model has 12 layers and the embedding size is 384.
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
'query: summit define',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]
tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-small-v2')
model = AutoModel.from_pretrained('intfloat/e5-small-v2')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
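The model can also be used through the `sentence-transformers` library. The following is a minimal sketch, assuming the repository ships a sentence-transformers configuration (this card does not state it), so treat the interface as an assumption.
```python
# Sketch: assumes intfloat/e5-small-v2 provides a sentence-transformers config.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('intfloat/e5-small-v2')
texts = ['query: how much protein should a female eat',
         'passage: As a general guideline, the CDC recommends 46 grams of protein per day for women ages 19 to 70.']
# The "query: " / "passage: " prefixes are still required, as in the example above.
embeddings = model.encode(texts, normalize_embeddings=True)
print(embeddings.shape)  # (2, 384)
```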
## Training Details
Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
## Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## Citation
If you find our paper or models helpful, please consider citing it as follows:
```
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
This model only works for English texts. Long texts will be truncated to at most 512 tokens. |
Coyotl/DialoGPT-test-last-arthurmorgan | [
"conversational"
] | conversational | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
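Assuming the reported loss is the standard token-level cross-entropy (in nats), this corresponds to an evaluation perplexity of roughly exp(3.6421) ≈ 38.2.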
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Craig/mGqFiPhu | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | feature-extraction | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mBart50_large_torch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBart50_large_torch
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1233
- Rouge1: 0.257
- Rouge2: 0.0718
- Rougel: 0.187
- Rougelsum: 0.1861
- Gen Len: 88.5902
## Model description
More information needed
## Intended uses & limitations
More information needed
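While the card does not document usage, a minimal summarization sketch with the standard `transformers` seq2seq API would look like the following; the repository path is a placeholder, since the card does not state where the fine-tuned weights are hosted.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder path: replace with the actual location of the fine-tuned checkpoint.
tokenizer = AutoTokenizer.from_pretrained("path/to/mBart50_large_torch")
model = AutoModelForSeq2SeqLM.from_pretrained("path/to/mBart50_large_torch")

inputs = tokenizer("Text to summarize ...", return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```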
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:--------:|
| No log | 1.0 | 140 | 2.6612 | 0.0155 | 0.0045 | 0.0135 | 0.0133 | 183.7541 |
| No log | 2.0 | 280 | 2.6487 | 0.1889 | 0.0497 | 0.1514 | 0.1503 | 88.9508 |
| No log | 3.0 | 420 | 2.8296 | 0.2366 | 0.0693 | 0.1772 | 0.1767 | 84.7705 |
| 1.789 | 4.0 | 560 | 3.1233 | 0.257 | 0.0718 | 0.187 | 0.1861 | 88.5902 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Craig/paraphrase-MiniLM-L6-v2 | [
"pytorch",
"bert",
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,026 | null | ---
tags:
- autotrain
- text-classification
language:
- hi
widget:
- text: "I love AutoTrain 🤗"
datasets:
- khyatikhandelwal/autotrain-data-hatespeech
co2_eq_emissions:
emissions: 0.3713708751565804
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 59891134251
- CO2 Emissions (in grams): 0.3714
## Validation Metrics
- Loss: 0.000
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/khyatikhandelwal/autotrain-hatespeech-59891134251
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("khyatikhandelwal/autotrain-hatespeech-59891134251", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("khyatikhandelwal/autotrain-hatespeech-59891134251", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
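# The raw outputs are logits; a softmax plus argmax gives the predicted label
# (illustrative sketch, not part of the original card):
probs = outputs.logits.softmax(dim=-1)
print(model.config.id2label[probs.argmax(dim=-1).item()])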
``` |
CrayonShinchan/bart_fine_tune_test | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | "2023-05-19T06:59:26Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 7.76 +/- 3.66
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r iamjoy/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps it had already completed.
|
CrisLeaf/generador-de-historias-de-tolkien | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.74 +/- 16.21
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
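A fuller loading sketch is given below. It assumes a Gymnasium-based Stable-Baselines3 (2.x); the repository id and filename are placeholders, since the card does not state them.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholders: replace with this model's actual repo id and zip filename.
checkpoint = load_from_hub(repo_id="user/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```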
|
Crispy/dialopt-small-kratos | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | "2023-05-19T07:03:50Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8289
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6467
- Accuracy: 0.8289
## Model description
More information needed
## Intended uses & limitations
More information needed
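The card leaves usage undocumented; as a rough sketch, the fine-tuned classifier can be run with the standard `transformers` image-classification API. The repository path below is a placeholder, since the card does not state where the weights are hosted.
```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Placeholder path: replace with the actual location of the fine-tuned checkpoint.
processor = AutoImageProcessor.from_pretrained("path/to/swin-tiny-finetuned-eurosat")
model = AutoModelForImageClassification.from_pretrained("path/to/swin-tiny-finetuned-eurosat")

image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```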
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7922 | 1.0 | 703 | 0.8853 | 0.7804 |
| 1.4677 | 2.0 | 1406 | 0.7064 | 0.8124 |
| 1.3005 | 3.0 | 2109 | 0.6467 | 0.8289 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Crumped/imdb-simpleRNN | [
"keras"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: A MRI-T1 scan of a human brain
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - scarlettlin/model_0519_dreambooth
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "A MRI-T1 scan of a human brain" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
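A minimal inference sketch with `diffusers` follows (not part of the original card); it assumes the repository contains a full Stable Diffusion pipeline, as DreamBooth training scripts typically save.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "scarlettlin/model_0519_dreambooth", torch_dtype=torch.float16
).to("cuda")

image = pipe("A MRI-T1 scan of a human brain", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("brain_mri_sample.png")
```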
|
Cryptikdw/DialoGPT-small-rick | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | "2023-05-19T07:13:33Z" | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1658
- Accuracy: 0.9548
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 600
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0934 | 0.12 | 75 | 1.7596 | 0.5429 |
| 0.9556 | 1.12 | 150 | 1.1258 | 0.6429 |
| 0.4154 | 2.12 | 225 | 0.4632 | 0.8571 |
| 0.4116 | 3.12 | 300 | 0.5566 | 0.8571 |
| 0.021 | 4.12 | 375 | 0.1726 | 0.9714 |
| 0.0084 | 5.12 | 450 | 0.1028 | 0.9714 |
| 0.0056 | 6.12 | 525 | 0.0352 | 0.9857 |
| 0.0051 | 7.12 | 600 | 0.0435 | 0.9714 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Crystal/distilbert-base-uncased-finetuned-squad | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xtreme
model-index:
- name: tamil-ner-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tamil-ner-model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5562
## Model description
More information needed
## Intended uses & limitations
More information needed
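Although the card does not document usage, a token-classification pipeline sketch is shown below; the model identifier and the example sentence are illustrative assumptions (point the identifier at wherever these weights are actually hosted).
```python
from transformers import pipeline

# Hedged sketch: the model id below is the card's model name, not a verified Hub path
ner = pipeline(
    "token-classification",
    model="tamil-ner-model",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

# Example Tamil sentence (illustrative only)
print(ner("தமிழ்நாடு அரசு சென்னையில் அமைந்துள்ளது."))
```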
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3527 | 1.0 | 938 | 0.5562 |
| 0.3543 | 2.0 | 1876 | 0.5562 |
| 0.3584 | 3.0 | 2814 | 0.5562 |
| 0.3554 | 4.0 | 3752 | 0.5562 |
| 0.363 | 5.0 | 4690 | 0.5562 |
| 0.3562 | 6.0 | 5628 | 0.5562 |
| 0.3516 | 7.0 | 6566 | 0.5562 |
| 0.3554 | 8.0 | 7504 | 0.5562 |
| 0.3594 | 9.0 | 8442 | 0.5562 |
| 0.3568 | 10.0 | 9380 | 0.5562 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Cthyllax/DialoGPT-medium-PaladinDanse | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline

# Load the fine-tuned pipeline from the Hugging Face Hub
pipeline = DDPMPipeline.from_pretrained('Aoinotama/ddpm-celebahq-finetuned-butterflies-2epochs')
# Run the full denoising loop and keep the first generated image
image = pipeline().images[0]
image
```
|
Culmenus/IceBERT-finetuned-ner | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"dataset:mim_gold_ner",
"transformers",
"generated_from_trainer",
"license:gpl-3.0",
"model-index",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | "2023-05-19T07:21:14Z" | ---
tags:
- mteb
model-index:
- name: e5-base-v2
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.77611940298506
- type: ap
value: 42.052710266606056
- type: f1
value: 72.12040628266567
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 92.81012500000001
- type: ap
value: 89.4213700757244
- type: f1
value: 92.8039091197065
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.711999999999996
- type: f1
value: 46.11544975436018
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.186
- type: map_at_10
value: 36.632999999999996
- type: map_at_100
value: 37.842
- type: map_at_1000
value: 37.865
- type: map_at_3
value: 32.278
- type: map_at_5
value: 34.760999999999996
- type: mrr_at_1
value: 23.400000000000002
- type: mrr_at_10
value: 36.721
- type: mrr_at_100
value: 37.937
- type: mrr_at_1000
value: 37.96
- type: mrr_at_3
value: 32.302
- type: mrr_at_5
value: 34.894
- type: ndcg_at_1
value: 23.186
- type: ndcg_at_10
value: 44.49
- type: ndcg_at_100
value: 50.065000000000005
- type: ndcg_at_1000
value: 50.629999999999995
- type: ndcg_at_3
value: 35.461
- type: ndcg_at_5
value: 39.969
- type: precision_at_1
value: 23.186
- type: precision_at_10
value: 6.97
- type: precision_at_100
value: 0.951
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.912
- type: precision_at_5
value: 11.152
- type: recall_at_1
value: 23.186
- type: recall_at_10
value: 69.70100000000001
- type: recall_at_100
value: 95.092
- type: recall_at_1000
value: 99.431
- type: recall_at_3
value: 44.737
- type: recall_at_5
value: 55.761
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 46.10312401440185
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 39.67275326095384
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.97793816337376
- type: mrr
value: 72.76832431957087
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 83.11646947018187
- type: cos_sim_spearman
value: 81.40064994975234
- type: euclidean_pearson
value: 82.37355689019232
- type: euclidean_spearman
value: 81.6777646977348
- type: manhattan_pearson
value: 82.61101422716945
- type: manhattan_spearman
value: 81.80427360442245
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 83.52922077922076
- type: f1
value: 83.45298679360866
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.495115019668496
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 32.724792944166765
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.361000000000004
- type: map_at_10
value: 43.765
- type: map_at_100
value: 45.224
- type: map_at_1000
value: 45.35
- type: map_at_3
value: 40.353
- type: map_at_5
value: 42.195
- type: mrr_at_1
value: 40.629
- type: mrr_at_10
value: 50.458000000000006
- type: mrr_at_100
value: 51.06699999999999
- type: mrr_at_1000
value: 51.12
- type: mrr_at_3
value: 47.902
- type: mrr_at_5
value: 49.447
- type: ndcg_at_1
value: 40.629
- type: ndcg_at_10
value: 50.376
- type: ndcg_at_100
value: 55.065
- type: ndcg_at_1000
value: 57.196000000000005
- type: ndcg_at_3
value: 45.616
- type: ndcg_at_5
value: 47.646
- type: precision_at_1
value: 40.629
- type: precision_at_10
value: 9.785
- type: precision_at_100
value: 1.562
- type: precision_at_1000
value: 0.2
- type: precision_at_3
value: 22.031
- type: precision_at_5
value: 15.737000000000002
- type: recall_at_1
value: 32.361000000000004
- type: recall_at_10
value: 62.214000000000006
- type: recall_at_100
value: 81.464
- type: recall_at_1000
value: 95.905
- type: recall_at_3
value: 47.5
- type: recall_at_5
value: 53.69500000000001
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.971
- type: map_at_10
value: 37.444
- type: map_at_100
value: 38.607
- type: map_at_1000
value: 38.737
- type: map_at_3
value: 34.504000000000005
- type: map_at_5
value: 36.234
- type: mrr_at_1
value: 35.35
- type: mrr_at_10
value: 43.441
- type: mrr_at_100
value: 44.147999999999996
- type: mrr_at_1000
value: 44.196000000000005
- type: mrr_at_3
value: 41.285
- type: mrr_at_5
value: 42.552
- type: ndcg_at_1
value: 35.35
- type: ndcg_at_10
value: 42.903999999999996
- type: ndcg_at_100
value: 47.406
- type: ndcg_at_1000
value: 49.588
- type: ndcg_at_3
value: 38.778
- type: ndcg_at_5
value: 40.788000000000004
- type: precision_at_1
value: 35.35
- type: precision_at_10
value: 8.083
- type: precision_at_100
value: 1.313
- type: precision_at_1000
value: 0.18
- type: precision_at_3
value: 18.769
- type: precision_at_5
value: 13.439
- type: recall_at_1
value: 27.971
- type: recall_at_10
value: 52.492000000000004
- type: recall_at_100
value: 71.642
- type: recall_at_1000
value: 85.488
- type: recall_at_3
value: 40.1
- type: recall_at_5
value: 45.800000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.898
- type: map_at_10
value: 51.819
- type: map_at_100
value: 52.886
- type: map_at_1000
value: 52.941
- type: map_at_3
value: 48.619
- type: map_at_5
value: 50.493
- type: mrr_at_1
value: 45.391999999999996
- type: mrr_at_10
value: 55.230000000000004
- type: mrr_at_100
value: 55.887
- type: mrr_at_1000
value: 55.916
- type: mrr_at_3
value: 52.717000000000006
- type: mrr_at_5
value: 54.222
- type: ndcg_at_1
value: 45.391999999999996
- type: ndcg_at_10
value: 57.586999999999996
- type: ndcg_at_100
value: 61.745000000000005
- type: ndcg_at_1000
value: 62.83800000000001
- type: ndcg_at_3
value: 52.207
- type: ndcg_at_5
value: 54.925999999999995
- type: precision_at_1
value: 45.391999999999996
- type: precision_at_10
value: 9.21
- type: precision_at_100
value: 1.226
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 23.177
- type: precision_at_5
value: 16.038
- type: recall_at_1
value: 39.898
- type: recall_at_10
value: 71.18900000000001
- type: recall_at_100
value: 89.082
- type: recall_at_1000
value: 96.865
- type: recall_at_3
value: 56.907
- type: recall_at_5
value: 63.397999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.706
- type: map_at_10
value: 30.818
- type: map_at_100
value: 32.038
- type: map_at_1000
value: 32.123000000000005
- type: map_at_3
value: 28.077
- type: map_at_5
value: 29.709999999999997
- type: mrr_at_1
value: 24.407
- type: mrr_at_10
value: 32.555
- type: mrr_at_100
value: 33.692
- type: mrr_at_1000
value: 33.751
- type: mrr_at_3
value: 29.848999999999997
- type: mrr_at_5
value: 31.509999999999998
- type: ndcg_at_1
value: 24.407
- type: ndcg_at_10
value: 35.624
- type: ndcg_at_100
value: 41.454
- type: ndcg_at_1000
value: 43.556
- type: ndcg_at_3
value: 30.217
- type: ndcg_at_5
value: 33.111000000000004
- type: precision_at_1
value: 24.407
- type: precision_at_10
value: 5.548
- type: precision_at_100
value: 0.8869999999999999
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 12.731
- type: precision_at_5
value: 9.22
- type: recall_at_1
value: 22.706
- type: recall_at_10
value: 48.772
- type: recall_at_100
value: 75.053
- type: recall_at_1000
value: 90.731
- type: recall_at_3
value: 34.421
- type: recall_at_5
value: 41.427
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.424
- type: map_at_10
value: 21.09
- type: map_at_100
value: 22.264999999999997
- type: map_at_1000
value: 22.402
- type: map_at_3
value: 18.312
- type: map_at_5
value: 19.874
- type: mrr_at_1
value: 16.915
- type: mrr_at_10
value: 25.258000000000003
- type: mrr_at_100
value: 26.228
- type: mrr_at_1000
value: 26.31
- type: mrr_at_3
value: 22.492
- type: mrr_at_5
value: 24.04
- type: ndcg_at_1
value: 16.915
- type: ndcg_at_10
value: 26.266000000000002
- type: ndcg_at_100
value: 32.08
- type: ndcg_at_1000
value: 35.086
- type: ndcg_at_3
value: 21.049
- type: ndcg_at_5
value: 23.508000000000003
- type: precision_at_1
value: 16.915
- type: precision_at_10
value: 5.1
- type: precision_at_100
value: 0.9329999999999999
- type: precision_at_1000
value: 0.131
- type: precision_at_3
value: 10.282
- type: precision_at_5
value: 7.836
- type: recall_at_1
value: 13.424
- type: recall_at_10
value: 38.179
- type: recall_at_100
value: 63.906
- type: recall_at_1000
value: 84.933
- type: recall_at_3
value: 23.878
- type: recall_at_5
value: 30.037999999999997
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.154
- type: map_at_10
value: 35.912
- type: map_at_100
value: 37.211
- type: map_at_1000
value: 37.327
- type: map_at_3
value: 32.684999999999995
- type: map_at_5
value: 34.562
- type: mrr_at_1
value: 32.435
- type: mrr_at_10
value: 41.411
- type: mrr_at_100
value: 42.297000000000004
- type: mrr_at_1000
value: 42.345
- type: mrr_at_3
value: 38.771
- type: mrr_at_5
value: 40.33
- type: ndcg_at_1
value: 32.435
- type: ndcg_at_10
value: 41.785
- type: ndcg_at_100
value: 47.469
- type: ndcg_at_1000
value: 49.685
- type: ndcg_at_3
value: 36.618
- type: ndcg_at_5
value: 39.101
- type: precision_at_1
value: 32.435
- type: precision_at_10
value: 7.642
- type: precision_at_100
value: 1.244
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 17.485
- type: precision_at_5
value: 12.57
- type: recall_at_1
value: 26.154
- type: recall_at_10
value: 54.111
- type: recall_at_100
value: 78.348
- type: recall_at_1000
value: 92.996
- type: recall_at_3
value: 39.189
- type: recall_at_5
value: 45.852
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.308999999999997
- type: map_at_10
value: 35.524
- type: map_at_100
value: 36.774
- type: map_at_1000
value: 36.891
- type: map_at_3
value: 32.561
- type: map_at_5
value: 34.034
- type: mrr_at_1
value: 31.735000000000003
- type: mrr_at_10
value: 40.391
- type: mrr_at_100
value: 41.227000000000004
- type: mrr_at_1000
value: 41.288000000000004
- type: mrr_at_3
value: 37.938
- type: mrr_at_5
value: 39.193
- type: ndcg_at_1
value: 31.735000000000003
- type: ndcg_at_10
value: 41.166000000000004
- type: ndcg_at_100
value: 46.702
- type: ndcg_at_1000
value: 49.157000000000004
- type: ndcg_at_3
value: 36.274
- type: ndcg_at_5
value: 38.177
- type: precision_at_1
value: 31.735000000000003
- type: precision_at_10
value: 7.5569999999999995
- type: precision_at_100
value: 1.2109999999999999
- type: precision_at_1000
value: 0.16
- type: precision_at_3
value: 17.199
- type: precision_at_5
value: 12.123000000000001
- type: recall_at_1
value: 26.308999999999997
- type: recall_at_10
value: 53.083000000000006
- type: recall_at_100
value: 76.922
- type: recall_at_1000
value: 93.767
- type: recall_at_3
value: 39.262
- type: recall_at_5
value: 44.413000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.391250000000003
- type: map_at_10
value: 33.280166666666666
- type: map_at_100
value: 34.49566666666667
- type: map_at_1000
value: 34.61533333333333
- type: map_at_3
value: 30.52183333333333
- type: map_at_5
value: 32.06608333333333
- type: mrr_at_1
value: 29.105083333333337
- type: mrr_at_10
value: 37.44766666666666
- type: mrr_at_100
value: 38.32491666666667
- type: mrr_at_1000
value: 38.385666666666665
- type: mrr_at_3
value: 35.06883333333333
- type: mrr_at_5
value: 36.42066666666667
- type: ndcg_at_1
value: 29.105083333333337
- type: ndcg_at_10
value: 38.54358333333333
- type: ndcg_at_100
value: 43.833583333333344
- type: ndcg_at_1000
value: 46.215333333333334
- type: ndcg_at_3
value: 33.876
- type: ndcg_at_5
value: 36.05208333333333
- type: precision_at_1
value: 29.105083333333337
- type: precision_at_10
value: 6.823416666666665
- type: precision_at_100
value: 1.1270833333333334
- type: precision_at_1000
value: 0.15208333333333332
- type: precision_at_3
value: 15.696750000000002
- type: precision_at_5
value: 11.193499999999998
- type: recall_at_1
value: 24.391250000000003
- type: recall_at_10
value: 49.98808333333333
- type: recall_at_100
value: 73.31616666666666
- type: recall_at_1000
value: 89.96291666666667
- type: recall_at_3
value: 36.86666666666667
- type: recall_at_5
value: 42.54350000000001
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.995
- type: map_at_10
value: 28.807
- type: map_at_100
value: 29.813000000000002
- type: map_at_1000
value: 29.903000000000002
- type: map_at_3
value: 26.636
- type: map_at_5
value: 27.912
- type: mrr_at_1
value: 24.847
- type: mrr_at_10
value: 31.494
- type: mrr_at_100
value: 32.381
- type: mrr_at_1000
value: 32.446999999999996
- type: mrr_at_3
value: 29.473
- type: mrr_at_5
value: 30.7
- type: ndcg_at_1
value: 24.847
- type: ndcg_at_10
value: 32.818999999999996
- type: ndcg_at_100
value: 37.835
- type: ndcg_at_1000
value: 40.226
- type: ndcg_at_3
value: 28.811999999999998
- type: ndcg_at_5
value: 30.875999999999998
- type: precision_at_1
value: 24.847
- type: precision_at_10
value: 5.244999999999999
- type: precision_at_100
value: 0.856
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 12.577
- type: precision_at_5
value: 8.895999999999999
- type: recall_at_1
value: 21.995
- type: recall_at_10
value: 42.479
- type: recall_at_100
value: 65.337
- type: recall_at_1000
value: 83.23700000000001
- type: recall_at_3
value: 31.573
- type: recall_at_5
value: 36.684
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.751000000000001
- type: map_at_10
value: 21.909
- type: map_at_100
value: 23.064
- type: map_at_1000
value: 23.205000000000002
- type: map_at_3
value: 20.138
- type: map_at_5
value: 20.973
- type: mrr_at_1
value: 19.305
- type: mrr_at_10
value: 25.647
- type: mrr_at_100
value: 26.659
- type: mrr_at_1000
value: 26.748
- type: mrr_at_3
value: 23.933
- type: mrr_at_5
value: 24.754
- type: ndcg_at_1
value: 19.305
- type: ndcg_at_10
value: 25.886
- type: ndcg_at_100
value: 31.56
- type: ndcg_at_1000
value: 34.799
- type: ndcg_at_3
value: 22.708000000000002
- type: ndcg_at_5
value: 23.838
- type: precision_at_1
value: 19.305
- type: precision_at_10
value: 4.677
- type: precision_at_100
value: 0.895
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 10.771
- type: precision_at_5
value: 7.46
- type: recall_at_1
value: 15.751000000000001
- type: recall_at_10
value: 34.156
- type: recall_at_100
value: 59.899
- type: recall_at_1000
value: 83.08
- type: recall_at_3
value: 24.772
- type: recall_at_5
value: 28.009
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.34
- type: map_at_10
value: 32.383
- type: map_at_100
value: 33.629999999999995
- type: map_at_1000
value: 33.735
- type: map_at_3
value: 29.68
- type: map_at_5
value: 31.270999999999997
- type: mrr_at_1
value: 27.612
- type: mrr_at_10
value: 36.381
- type: mrr_at_100
value: 37.351
- type: mrr_at_1000
value: 37.411
- type: mrr_at_3
value: 33.893
- type: mrr_at_5
value: 35.353
- type: ndcg_at_1
value: 27.612
- type: ndcg_at_10
value: 37.714999999999996
- type: ndcg_at_100
value: 43.525000000000006
- type: ndcg_at_1000
value: 45.812999999999995
- type: ndcg_at_3
value: 32.796
- type: ndcg_at_5
value: 35.243
- type: precision_at_1
value: 27.612
- type: precision_at_10
value: 6.465
- type: precision_at_100
value: 1.0619999999999998
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 15.049999999999999
- type: precision_at_5
value: 10.764999999999999
- type: recall_at_1
value: 23.34
- type: recall_at_10
value: 49.856
- type: recall_at_100
value: 75.334
- type: recall_at_1000
value: 91.156
- type: recall_at_3
value: 36.497
- type: recall_at_5
value: 42.769
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.097
- type: map_at_10
value: 34.599999999999994
- type: map_at_100
value: 36.174
- type: map_at_1000
value: 36.398
- type: map_at_3
value: 31.781
- type: map_at_5
value: 33.22
- type: mrr_at_1
value: 31.225
- type: mrr_at_10
value: 39.873
- type: mrr_at_100
value: 40.853
- type: mrr_at_1000
value: 40.904
- type: mrr_at_3
value: 37.681
- type: mrr_at_5
value: 38.669
- type: ndcg_at_1
value: 31.225
- type: ndcg_at_10
value: 40.586
- type: ndcg_at_100
value: 46.226
- type: ndcg_at_1000
value: 48.788
- type: ndcg_at_3
value: 36.258
- type: ndcg_at_5
value: 37.848
- type: precision_at_1
value: 31.225
- type: precision_at_10
value: 7.707999999999999
- type: precision_at_100
value: 1.536
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 17.26
- type: precision_at_5
value: 12.253
- type: recall_at_1
value: 25.097
- type: recall_at_10
value: 51.602000000000004
- type: recall_at_100
value: 76.854
- type: recall_at_1000
value: 93.303
- type: recall_at_3
value: 38.68
- type: recall_at_5
value: 43.258
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.689
- type: map_at_10
value: 25.291000000000004
- type: map_at_100
value: 26.262
- type: map_at_1000
value: 26.372
- type: map_at_3
value: 22.916
- type: map_at_5
value: 24.315
- type: mrr_at_1
value: 19.409000000000002
- type: mrr_at_10
value: 27.233
- type: mrr_at_100
value: 28.109
- type: mrr_at_1000
value: 28.192
- type: mrr_at_3
value: 24.892
- type: mrr_at_5
value: 26.278000000000002
- type: ndcg_at_1
value: 19.409000000000002
- type: ndcg_at_10
value: 29.809
- type: ndcg_at_100
value: 34.936
- type: ndcg_at_1000
value: 37.852000000000004
- type: ndcg_at_3
value: 25.179000000000002
- type: ndcg_at_5
value: 27.563
- type: precision_at_1
value: 19.409000000000002
- type: precision_at_10
value: 4.861
- type: precision_at_100
value: 0.8
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 11.029
- type: precision_at_5
value: 7.985
- type: recall_at_1
value: 17.689
- type: recall_at_10
value: 41.724
- type: recall_at_100
value: 65.95299999999999
- type: recall_at_1000
value: 88.094
- type: recall_at_3
value: 29.621
- type: recall_at_5
value: 35.179
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.581
- type: map_at_10
value: 18.944
- type: map_at_100
value: 20.812
- type: map_at_1000
value: 21.002000000000002
- type: map_at_3
value: 15.661
- type: map_at_5
value: 17.502000000000002
- type: mrr_at_1
value: 23.388
- type: mrr_at_10
value: 34.263
- type: mrr_at_100
value: 35.364000000000004
- type: mrr_at_1000
value: 35.409
- type: mrr_at_3
value: 30.586000000000002
- type: mrr_at_5
value: 32.928000000000004
- type: ndcg_at_1
value: 23.388
- type: ndcg_at_10
value: 26.56
- type: ndcg_at_100
value: 34.248
- type: ndcg_at_1000
value: 37.779
- type: ndcg_at_3
value: 21.179000000000002
- type: ndcg_at_5
value: 23.504
- type: precision_at_1
value: 23.388
- type: precision_at_10
value: 8.476
- type: precision_at_100
value: 1.672
- type: precision_at_1000
value: 0.233
- type: precision_at_3
value: 15.852
- type: precision_at_5
value: 12.73
- type: recall_at_1
value: 10.581
- type: recall_at_10
value: 32.512
- type: recall_at_100
value: 59.313
- type: recall_at_1000
value: 79.25
- type: recall_at_3
value: 19.912
- type: recall_at_5
value: 25.832
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.35
- type: map_at_10
value: 20.134
- type: map_at_100
value: 28.975
- type: map_at_1000
value: 30.709999999999997
- type: map_at_3
value: 14.513000000000002
- type: map_at_5
value: 16.671
- type: mrr_at_1
value: 69.75
- type: mrr_at_10
value: 77.67699999999999
- type: mrr_at_100
value: 77.97500000000001
- type: mrr_at_1000
value: 77.985
- type: mrr_at_3
value: 76.292
- type: mrr_at_5
value: 77.179
- type: ndcg_at_1
value: 56.49999999999999
- type: ndcg_at_10
value: 42.226
- type: ndcg_at_100
value: 47.562
- type: ndcg_at_1000
value: 54.923
- type: ndcg_at_3
value: 46.564
- type: ndcg_at_5
value: 43.830000000000005
- type: precision_at_1
value: 69.75
- type: precision_at_10
value: 33.525
- type: precision_at_100
value: 11.035
- type: precision_at_1000
value: 2.206
- type: precision_at_3
value: 49.75
- type: precision_at_5
value: 42
- type: recall_at_1
value: 9.35
- type: recall_at_10
value: 25.793
- type: recall_at_100
value: 54.186
- type: recall_at_1000
value: 77.81
- type: recall_at_3
value: 15.770000000000001
- type: recall_at_5
value: 19.09
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.945
- type: f1
value: 42.07407842992542
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.04599999999999
- type: map_at_10
value: 80.718
- type: map_at_100
value: 80.961
- type: map_at_1000
value: 80.974
- type: map_at_3
value: 79.49199999999999
- type: map_at_5
value: 80.32000000000001
- type: mrr_at_1
value: 76.388
- type: mrr_at_10
value: 85.214
- type: mrr_at_100
value: 85.302
- type: mrr_at_1000
value: 85.302
- type: mrr_at_3
value: 84.373
- type: mrr_at_5
value: 84.979
- type: ndcg_at_1
value: 76.388
- type: ndcg_at_10
value: 84.987
- type: ndcg_at_100
value: 85.835
- type: ndcg_at_1000
value: 86.04899999999999
- type: ndcg_at_3
value: 83.04
- type: ndcg_at_5
value: 84.22500000000001
- type: precision_at_1
value: 76.388
- type: precision_at_10
value: 10.35
- type: precision_at_100
value: 1.099
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 32.108
- type: precision_at_5
value: 20.033
- type: recall_at_1
value: 71.04599999999999
- type: recall_at_10
value: 93.547
- type: recall_at_100
value: 96.887
- type: recall_at_1000
value: 98.158
- type: recall_at_3
value: 88.346
- type: recall_at_5
value: 91.321
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.8
- type: map_at_10
value: 31.979999999999997
- type: map_at_100
value: 33.876
- type: map_at_1000
value: 34.056999999999995
- type: map_at_3
value: 28.067999999999998
- type: map_at_5
value: 30.066
- type: mrr_at_1
value: 38.735
- type: mrr_at_10
value: 47.749
- type: mrr_at_100
value: 48.605
- type: mrr_at_1000
value: 48.644999999999996
- type: mrr_at_3
value: 45.165
- type: mrr_at_5
value: 46.646
- type: ndcg_at_1
value: 38.735
- type: ndcg_at_10
value: 39.883
- type: ndcg_at_100
value: 46.983000000000004
- type: ndcg_at_1000
value: 50.043000000000006
- type: ndcg_at_3
value: 35.943000000000005
- type: ndcg_at_5
value: 37.119
- type: precision_at_1
value: 38.735
- type: precision_at_10
value: 10.940999999999999
- type: precision_at_100
value: 1.836
- type: precision_at_1000
value: 0.23900000000000002
- type: precision_at_3
value: 23.817
- type: precision_at_5
value: 17.346
- type: recall_at_1
value: 19.8
- type: recall_at_10
value: 47.082
- type: recall_at_100
value: 73.247
- type: recall_at_1000
value: 91.633
- type: recall_at_3
value: 33.201
- type: recall_at_5
value: 38.81
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.102999999999994
- type: map_at_10
value: 60.547
- type: map_at_100
value: 61.466
- type: map_at_1000
value: 61.526
- type: map_at_3
value: 56.973
- type: map_at_5
value: 59.244
- type: mrr_at_1
value: 76.205
- type: mrr_at_10
value: 82.816
- type: mrr_at_100
value: 83.002
- type: mrr_at_1000
value: 83.009
- type: mrr_at_3
value: 81.747
- type: mrr_at_5
value: 82.467
- type: ndcg_at_1
value: 76.205
- type: ndcg_at_10
value: 69.15
- type: ndcg_at_100
value: 72.297
- type: ndcg_at_1000
value: 73.443
- type: ndcg_at_3
value: 64.07000000000001
- type: ndcg_at_5
value: 66.96600000000001
- type: precision_at_1
value: 76.205
- type: precision_at_10
value: 14.601
- type: precision_at_100
value: 1.7049999999999998
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 41.202
- type: precision_at_5
value: 27.006000000000004
- type: recall_at_1
value: 38.102999999999994
- type: recall_at_10
value: 73.005
- type: recall_at_100
value: 85.253
- type: recall_at_1000
value: 92.795
- type: recall_at_3
value: 61.803
- type: recall_at_5
value: 67.515
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 86.15
- type: ap
value: 80.36282825265391
- type: f1
value: 86.07368510726472
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.6
- type: map_at_10
value: 34.887
- type: map_at_100
value: 36.069
- type: map_at_1000
value: 36.115
- type: map_at_3
value: 31.067
- type: map_at_5
value: 33.300000000000004
- type: mrr_at_1
value: 23.238
- type: mrr_at_10
value: 35.47
- type: mrr_at_100
value: 36.599
- type: mrr_at_1000
value: 36.64
- type: mrr_at_3
value: 31.735999999999997
- type: mrr_at_5
value: 33.939
- type: ndcg_at_1
value: 23.252
- type: ndcg_at_10
value: 41.765
- type: ndcg_at_100
value: 47.402
- type: ndcg_at_1000
value: 48.562
- type: ndcg_at_3
value: 34.016999999999996
- type: ndcg_at_5
value: 38.016
- type: precision_at_1
value: 23.252
- type: precision_at_10
value: 6.569
- type: precision_at_100
value: 0.938
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.479000000000001
- type: precision_at_5
value: 10.722
- type: recall_at_1
value: 22.6
- type: recall_at_10
value: 62.919000000000004
- type: recall_at_100
value: 88.82
- type: recall_at_1000
value: 97.71600000000001
- type: recall_at_3
value: 41.896
- type: recall_at_5
value: 51.537
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.69357045143639
- type: f1
value: 93.55489858177597
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.31235750114
- type: f1
value: 57.891491963121155
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.04303967720243
- type: f1
value: 70.51516022297616
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.65299260255549
- type: f1
value: 77.49059766538576
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.458906115906597
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.9851513122443
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.2916268497217
- type: mrr
value: 32.328276715593816
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.3740000000000006
- type: map_at_10
value: 13.089999999999998
- type: map_at_100
value: 16.512
- type: map_at_1000
value: 18.014
- type: map_at_3
value: 9.671000000000001
- type: map_at_5
value: 11.199
- type: mrr_at_1
value: 46.749
- type: mrr_at_10
value: 55.367
- type: mrr_at_100
value: 56.021
- type: mrr_at_1000
value: 56.058
- type: mrr_at_3
value: 53.30200000000001
- type: mrr_at_5
value: 54.773
- type: ndcg_at_1
value: 45.046
- type: ndcg_at_10
value: 35.388999999999996
- type: ndcg_at_100
value: 32.175
- type: ndcg_at_1000
value: 41.018
- type: ndcg_at_3
value: 40.244
- type: ndcg_at_5
value: 38.267
- type: precision_at_1
value: 46.749
- type: precision_at_10
value: 26.563
- type: precision_at_100
value: 8.074
- type: precision_at_1000
value: 2.099
- type: precision_at_3
value: 37.358000000000004
- type: precision_at_5
value: 33.003
- type: recall_at_1
value: 6.3740000000000006
- type: recall_at_10
value: 16.805999999999997
- type: recall_at_100
value: 31.871
- type: recall_at_1000
value: 64.098
- type: recall_at_3
value: 10.383000000000001
- type: recall_at_5
value: 13.166
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 34.847
- type: map_at_10
value: 50.532
- type: map_at_100
value: 51.504000000000005
- type: map_at_1000
value: 51.528
- type: map_at_3
value: 46.219
- type: map_at_5
value: 48.868
- type: mrr_at_1
value: 39.137
- type: mrr_at_10
value: 53.157
- type: mrr_at_100
value: 53.839999999999996
- type: mrr_at_1000
value: 53.857
- type: mrr_at_3
value: 49.667
- type: mrr_at_5
value: 51.847
- type: ndcg_at_1
value: 39.108
- type: ndcg_at_10
value: 58.221000000000004
- type: ndcg_at_100
value: 62.021
- type: ndcg_at_1000
value: 62.57
- type: ndcg_at_3
value: 50.27199999999999
- type: ndcg_at_5
value: 54.623999999999995
- type: precision_at_1
value: 39.108
- type: precision_at_10
value: 9.397
- type: precision_at_100
value: 1.1520000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 22.644000000000002
- type: precision_at_5
value: 16.141
- type: recall_at_1
value: 34.847
- type: recall_at_10
value: 78.945
- type: recall_at_100
value: 94.793
- type: recall_at_1000
value: 98.904
- type: recall_at_3
value: 58.56
- type: recall_at_5
value: 68.535
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 68.728
- type: map_at_10
value: 82.537
- type: map_at_100
value: 83.218
- type: map_at_1000
value: 83.238
- type: map_at_3
value: 79.586
- type: map_at_5
value: 81.416
- type: mrr_at_1
value: 79.17999999999999
- type: mrr_at_10
value: 85.79299999999999
- type: mrr_at_100
value: 85.937
- type: mrr_at_1000
value: 85.938
- type: mrr_at_3
value: 84.748
- type: mrr_at_5
value: 85.431
- type: ndcg_at_1
value: 79.17
- type: ndcg_at_10
value: 86.555
- type: ndcg_at_100
value: 88.005
- type: ndcg_at_1000
value: 88.146
- type: ndcg_at_3
value: 83.557
- type: ndcg_at_5
value: 85.152
- type: precision_at_1
value: 79.17
- type: precision_at_10
value: 13.163
- type: precision_at_100
value: 1.52
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.53
- type: precision_at_5
value: 24.046
- type: recall_at_1
value: 68.728
- type: recall_at_10
value: 94.217
- type: recall_at_100
value: 99.295
- type: recall_at_1000
value: 99.964
- type: recall_at_3
value: 85.646
- type: recall_at_5
value: 90.113
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.15680266226348
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 63.4318549229047
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.353
- type: map_at_10
value: 10.956000000000001
- type: map_at_100
value: 12.873999999999999
- type: map_at_1000
value: 13.177
- type: map_at_3
value: 7.854
- type: map_at_5
value: 9.327
- type: mrr_at_1
value: 21.4
- type: mrr_at_10
value: 31.948999999999998
- type: mrr_at_100
value: 33.039
- type: mrr_at_1000
value: 33.106
- type: mrr_at_3
value: 28.449999999999996
- type: mrr_at_5
value: 30.535
- type: ndcg_at_1
value: 21.4
- type: ndcg_at_10
value: 18.694
- type: ndcg_at_100
value: 26.275
- type: ndcg_at_1000
value: 31.836
- type: ndcg_at_3
value: 17.559
- type: ndcg_at_5
value: 15.372
- type: precision_at_1
value: 21.4
- type: precision_at_10
value: 9.790000000000001
- type: precision_at_100
value: 2.0709999999999997
- type: precision_at_1000
value: 0.34099999999999997
- type: precision_at_3
value: 16.467000000000002
- type: precision_at_5
value: 13.54
- type: recall_at_1
value: 4.353
- type: recall_at_10
value: 19.892000000000003
- type: recall_at_100
value: 42.067
- type: recall_at_1000
value: 69.268
- type: recall_at_3
value: 10.042
- type: recall_at_5
value: 13.741999999999999
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.75433886279843
- type: cos_sim_spearman
value: 78.29727771767095
- type: euclidean_pearson
value: 80.83057828506621
- type: euclidean_spearman
value: 78.35203149750356
- type: manhattan_pearson
value: 80.7403553891142
- type: manhattan_spearman
value: 78.33670488531051
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.59999465280839
- type: cos_sim_spearman
value: 75.79279003980383
- type: euclidean_pearson
value: 82.29895375956758
- type: euclidean_spearman
value: 77.33856514102094
- type: manhattan_pearson
value: 82.22694214534756
- type: manhattan_spearman
value: 77.3028993008695
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 83.09296929691297
- type: cos_sim_spearman
value: 83.58056936846941
- type: euclidean_pearson
value: 83.84067483060005
- type: euclidean_spearman
value: 84.45155680480985
- type: manhattan_pearson
value: 83.82353052971942
- type: manhattan_spearman
value: 84.43030567861112
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.74616852320915
- type: cos_sim_spearman
value: 79.948683747966
- type: euclidean_pearson
value: 81.55702283757084
- type: euclidean_spearman
value: 80.1721505114231
- type: manhattan_pearson
value: 81.52251518619441
- type: manhattan_spearman
value: 80.1469800135577
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.97170104226318
- type: cos_sim_spearman
value: 88.82021731518206
- type: euclidean_pearson
value: 87.92950547187615
- type: euclidean_spearman
value: 88.67043634645866
- type: manhattan_pearson
value: 87.90668112827639
- type: manhattan_spearman
value: 88.64471082785317
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.02790375770599
- type: cos_sim_spearman
value: 84.46308496590792
- type: euclidean_pearson
value: 84.29430000414911
- type: euclidean_spearman
value: 84.77298303589936
- type: manhattan_pearson
value: 84.23919291368665
- type: manhattan_spearman
value: 84.75272234871308
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.62885108477064
- type: cos_sim_spearman
value: 87.58456196391622
- type: euclidean_pearson
value: 88.2602775281007
- type: euclidean_spearman
value: 87.51556278299846
- type: manhattan_pearson
value: 88.11224053672842
- type: manhattan_spearman
value: 87.4336094383095
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.98187965128411
- type: cos_sim_spearman
value: 64.0653163219731
- type: euclidean_pearson
value: 62.30616725924099
- type: euclidean_spearman
value: 61.556971332295916
- type: manhattan_pearson
value: 62.07642330128549
- type: manhattan_spearman
value: 61.155494129828
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.6089703921826
- type: cos_sim_spearman
value: 86.52303197250791
- type: euclidean_pearson
value: 85.95801955963246
- type: euclidean_spearman
value: 86.25242424112962
- type: manhattan_pearson
value: 85.88829100470312
- type: manhattan_spearman
value: 86.18742955805165
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 83.02282098487036
- type: mrr
value: 95.05126409538174
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 55.928
- type: map_at_10
value: 67.308
- type: map_at_100
value: 67.89500000000001
- type: map_at_1000
value: 67.91199999999999
- type: map_at_3
value: 65.091
- type: map_at_5
value: 66.412
- type: mrr_at_1
value: 58.667
- type: mrr_at_10
value: 68.401
- type: mrr_at_100
value: 68.804
- type: mrr_at_1000
value: 68.819
- type: mrr_at_3
value: 66.72200000000001
- type: mrr_at_5
value: 67.72200000000001
- type: ndcg_at_1
value: 58.667
- type: ndcg_at_10
value: 71.944
- type: ndcg_at_100
value: 74.464
- type: ndcg_at_1000
value: 74.82799999999999
- type: ndcg_at_3
value: 68.257
- type: ndcg_at_5
value: 70.10300000000001
- type: precision_at_1
value: 58.667
- type: precision_at_10
value: 9.533
- type: precision_at_100
value: 1.09
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 27.222
- type: precision_at_5
value: 17.533
- type: recall_at_1
value: 55.928
- type: recall_at_10
value: 84.65
- type: recall_at_100
value: 96.267
- type: recall_at_1000
value: 99
- type: recall_at_3
value: 74.656
- type: recall_at_5
value: 79.489
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.79009900990098
- type: cos_sim_ap
value: 94.5795129511524
- type: cos_sim_f1
value: 89.34673366834171
- type: cos_sim_precision
value: 89.79797979797979
- type: cos_sim_recall
value: 88.9
- type: dot_accuracy
value: 99.53465346534654
- type: dot_ap
value: 81.56492504352725
- type: dot_f1
value: 76.33816908454227
- type: dot_precision
value: 76.37637637637637
- type: dot_recall
value: 76.3
- type: euclidean_accuracy
value: 99.78514851485149
- type: euclidean_ap
value: 94.59134620408962
- type: euclidean_f1
value: 88.96484375
- type: euclidean_precision
value: 86.92748091603053
- type: euclidean_recall
value: 91.10000000000001
- type: manhattan_accuracy
value: 99.78415841584159
- type: manhattan_ap
value: 94.5190197328845
- type: manhattan_f1
value: 88.84462151394423
- type: manhattan_precision
value: 88.4920634920635
- type: manhattan_recall
value: 89.2
- type: max_accuracy
value: 99.79009900990098
- type: max_ap
value: 94.59134620408962
- type: max_f1
value: 89.34673366834171
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 65.1487505617497
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 32.502518166001856
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.33775480236701
- type: mrr
value: 51.17302223919871
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.561111309808208
- type: cos_sim_spearman
value: 30.2839254379273
- type: dot_pearson
value: 29.560242291401973
- type: dot_spearman
value: 30.51527274679116
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.215
- type: map_at_10
value: 1.752
- type: map_at_100
value: 9.258
- type: map_at_1000
value: 23.438
- type: map_at_3
value: 0.6
- type: map_at_5
value: 0.968
- type: mrr_at_1
value: 84
- type: mrr_at_10
value: 91.333
- type: mrr_at_100
value: 91.333
- type: mrr_at_1000
value: 91.333
- type: mrr_at_3
value: 91.333
- type: mrr_at_5
value: 91.333
- type: ndcg_at_1
value: 75
- type: ndcg_at_10
value: 69.596
- type: ndcg_at_100
value: 51.970000000000006
- type: ndcg_at_1000
value: 48.864999999999995
- type: ndcg_at_3
value: 73.92699999999999
- type: ndcg_at_5
value: 73.175
- type: precision_at_1
value: 84
- type: precision_at_10
value: 74
- type: precision_at_100
value: 53.2
- type: precision_at_1000
value: 21.836
- type: precision_at_3
value: 79.333
- type: precision_at_5
value: 78.4
- type: recall_at_1
value: 0.215
- type: recall_at_10
value: 1.9609999999999999
- type: recall_at_100
value: 12.809999999999999
- type: recall_at_1000
value: 46.418
- type: recall_at_3
value: 0.6479999999999999
- type: recall_at_5
value: 1.057
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.066
- type: map_at_10
value: 10.508000000000001
- type: map_at_100
value: 16.258
- type: map_at_1000
value: 17.705000000000002
- type: map_at_3
value: 6.157
- type: map_at_5
value: 7.510999999999999
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 48.786
- type: mrr_at_100
value: 49.619
- type: mrr_at_1000
value: 49.619
- type: mrr_at_3
value: 45.918
- type: mrr_at_5
value: 46.837
- type: ndcg_at_1
value: 31.633
- type: ndcg_at_10
value: 26.401999999999997
- type: ndcg_at_100
value: 37.139
- type: ndcg_at_1000
value: 48.012
- type: ndcg_at_3
value: 31.875999999999998
- type: ndcg_at_5
value: 27.383000000000003
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 22.857
- type: precision_at_100
value: 7.611999999999999
- type: precision_at_1000
value: 1.492
- type: precision_at_3
value: 33.333
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 3.066
- type: recall_at_10
value: 16.239
- type: recall_at_100
value: 47.29
- type: recall_at_1000
value: 81.137
- type: recall_at_3
value: 7.069
- type: recall_at_5
value: 9.483
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 72.1126
- type: ap
value: 14.710862719285753
- type: f1
value: 55.437808972378846
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.39049235993209
- type: f1
value: 60.69810537250234
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 48.15576640316866
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.52917684925792
- type: cos_sim_ap
value: 75.97497873817315
- type: cos_sim_f1
value: 70.01151926276718
- type: cos_sim_precision
value: 67.98409147402435
- type: cos_sim_recall
value: 72.16358839050132
- type: dot_accuracy
value: 82.47004828038385
- type: dot_ap
value: 62.48739894974198
- type: dot_f1
value: 59.13107511045656
- type: dot_precision
value: 55.27765029830197
- type: dot_recall
value: 63.562005277044854
- type: euclidean_accuracy
value: 86.46361089586935
- type: euclidean_ap
value: 75.59282886839452
- type: euclidean_f1
value: 69.6465443945099
- type: euclidean_precision
value: 64.52847175331982
- type: euclidean_recall
value: 75.64643799472296
- type: manhattan_accuracy
value: 86.43380818978363
- type: manhattan_ap
value: 75.5742420974403
- type: manhattan_f1
value: 69.8636926889715
- type: manhattan_precision
value: 65.8644859813084
- type: manhattan_recall
value: 74.37994722955145
- type: max_accuracy
value: 86.52917684925792
- type: max_ap
value: 75.97497873817315
- type: max_f1
value: 70.01151926276718
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.29056545193464
- type: cos_sim_ap
value: 86.63028865482376
- type: cos_sim_f1
value: 79.18166458532285
- type: cos_sim_precision
value: 75.70585756426465
- type: cos_sim_recall
value: 82.99199260856174
- type: dot_accuracy
value: 85.23305002522606
- type: dot_ap
value: 76.0482687263196
- type: dot_f1
value: 70.80484330484332
- type: dot_precision
value: 65.86933474688577
- type: dot_recall
value: 76.53988296889437
- type: euclidean_accuracy
value: 89.26145845461248
- type: euclidean_ap
value: 86.54073288416006
- type: euclidean_f1
value: 78.9721371479794
- type: euclidean_precision
value: 76.68649354417525
- type: euclidean_recall
value: 81.39821373575609
- type: manhattan_accuracy
value: 89.22847052431405
- type: manhattan_ap
value: 86.51250729037905
- type: manhattan_f1
value: 78.94601825044894
- type: manhattan_precision
value: 75.32694594027555
- type: manhattan_recall
value: 82.93039728980598
- type: max_accuracy
value: 89.29056545193464
- type: max_ap
value: 86.63028865482376
- type: max_f1
value: 79.18166458532285
language:
- en
license: mit
---
# E5-base-v2
[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
This model has 12 layers and the embedding size is 768.
## Usage
Below is an example of encoding queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
'query: summit define',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]
tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-base-v2')
model = AutoModel.from_pretrained('intfloat/e5-base-v2')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
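In this example the first two entries are queries and the last two are passages, so `scores` is a 2x2 matrix of scaled cosine similarities (the embeddings are L2-normalized before the dot product); higher values indicate a better query-passage match.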
## Training Details
Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
## Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## Citation
If you find our paper or models helpful, please consider citing them as follows:
```
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
This model only works for English texts. Long texts will be truncated to at most 512 tokens. |
Culmenus/checkpoint-168500-finetuned-de-to-is_nr2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- mteb
model-index:
- name: e5-large-v2
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.22388059701493
- type: ap
value: 43.20816505595132
- type: f1
value: 73.27811303522058
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.748325
- type: ap
value: 90.72534979701297
- type: f1
value: 93.73895874282185
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.612
- type: f1
value: 47.61157345898393
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.541999999999998
- type: map_at_10
value: 38.208
- type: map_at_100
value: 39.417
- type: map_at_1000
value: 39.428999999999995
- type: map_at_3
value: 33.95
- type: map_at_5
value: 36.329
- type: mrr_at_1
value: 23.755000000000003
- type: mrr_at_10
value: 38.288
- type: mrr_at_100
value: 39.511
- type: mrr_at_1000
value: 39.523
- type: mrr_at_3
value: 34.009
- type: mrr_at_5
value: 36.434
- type: ndcg_at_1
value: 23.541999999999998
- type: ndcg_at_10
value: 46.417
- type: ndcg_at_100
value: 51.812000000000005
- type: ndcg_at_1000
value: 52.137
- type: ndcg_at_3
value: 37.528
- type: ndcg_at_5
value: 41.81
- type: precision_at_1
value: 23.541999999999998
- type: precision_at_10
value: 7.269
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 15.979
- type: precision_at_5
value: 11.664
- type: recall_at_1
value: 23.541999999999998
- type: recall_at_10
value: 72.688
- type: recall_at_100
value: 96.871
- type: recall_at_1000
value: 99.431
- type: recall_at_3
value: 47.937000000000005
- type: recall_at_5
value: 58.321
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.546499570522094
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 41.01607489943561
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 59.616107510107774
- type: mrr
value: 72.75106626214661
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.33018094733868
- type: cos_sim_spearman
value: 83.60190492611737
- type: euclidean_pearson
value: 82.1492450218961
- type: euclidean_spearman
value: 82.70308926526991
- type: manhattan_pearson
value: 81.93959600076842
- type: manhattan_spearman
value: 82.73260801016369
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.54545454545455
- type: f1
value: 84.49582530928923
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.362725540120096
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 34.849509608178145
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.502999999999997
- type: map_at_10
value: 43.323
- type: map_at_100
value: 44.708999999999996
- type: map_at_1000
value: 44.838
- type: map_at_3
value: 38.987
- type: map_at_5
value: 41.516999999999996
- type: mrr_at_1
value: 38.769999999999996
- type: mrr_at_10
value: 49.13
- type: mrr_at_100
value: 49.697
- type: mrr_at_1000
value: 49.741
- type: mrr_at_3
value: 45.804
- type: mrr_at_5
value: 47.842
- type: ndcg_at_1
value: 38.769999999999996
- type: ndcg_at_10
value: 50.266999999999996
- type: ndcg_at_100
value: 54.967
- type: ndcg_at_1000
value: 56.976000000000006
- type: ndcg_at_3
value: 43.823
- type: ndcg_at_5
value: 47.12
- type: precision_at_1
value: 38.769999999999996
- type: precision_at_10
value: 10.057
- type: precision_at_100
value: 1.554
- type: precision_at_1000
value: 0.202
- type: precision_at_3
value: 21.125
- type: precision_at_5
value: 15.851
- type: recall_at_1
value: 31.502999999999997
- type: recall_at_10
value: 63.715999999999994
- type: recall_at_100
value: 83.61800000000001
- type: recall_at_1000
value: 96.63199999999999
- type: recall_at_3
value: 45.403
- type: recall_at_5
value: 54.481
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.833000000000002
- type: map_at_10
value: 37.330999999999996
- type: map_at_100
value: 38.580999999999996
- type: map_at_1000
value: 38.708
- type: map_at_3
value: 34.713
- type: map_at_5
value: 36.104
- type: mrr_at_1
value: 35.223
- type: mrr_at_10
value: 43.419000000000004
- type: mrr_at_100
value: 44.198
- type: mrr_at_1000
value: 44.249
- type: mrr_at_3
value: 41.614000000000004
- type: mrr_at_5
value: 42.553000000000004
- type: ndcg_at_1
value: 35.223
- type: ndcg_at_10
value: 42.687999999999995
- type: ndcg_at_100
value: 47.447
- type: ndcg_at_1000
value: 49.701
- type: ndcg_at_3
value: 39.162
- type: ndcg_at_5
value: 40.557
- type: precision_at_1
value: 35.223
- type: precision_at_10
value: 7.962
- type: precision_at_100
value: 1.304
- type: precision_at_1000
value: 0.18
- type: precision_at_3
value: 19.023
- type: precision_at_5
value: 13.184999999999999
- type: recall_at_1
value: 27.833000000000002
- type: recall_at_10
value: 51.881
- type: recall_at_100
value: 72.04
- type: recall_at_1000
value: 86.644
- type: recall_at_3
value: 40.778
- type: recall_at_5
value: 45.176
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.175
- type: map_at_10
value: 51.174
- type: map_at_100
value: 52.26499999999999
- type: map_at_1000
value: 52.315999999999995
- type: map_at_3
value: 47.897
- type: map_at_5
value: 49.703
- type: mrr_at_1
value: 43.448
- type: mrr_at_10
value: 54.505
- type: mrr_at_100
value: 55.216
- type: mrr_at_1000
value: 55.242000000000004
- type: mrr_at_3
value: 51.98500000000001
- type: mrr_at_5
value: 53.434000000000005
- type: ndcg_at_1
value: 43.448
- type: ndcg_at_10
value: 57.282
- type: ndcg_at_100
value: 61.537
- type: ndcg_at_1000
value: 62.546
- type: ndcg_at_3
value: 51.73799999999999
- type: ndcg_at_5
value: 54.324
- type: precision_at_1
value: 43.448
- type: precision_at_10
value: 9.292
- type: precision_at_100
value: 1.233
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 23.218
- type: precision_at_5
value: 15.887
- type: recall_at_1
value: 38.175
- type: recall_at_10
value: 72.00999999999999
- type: recall_at_100
value: 90.155
- type: recall_at_1000
value: 97.257
- type: recall_at_3
value: 57.133
- type: recall_at_5
value: 63.424
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.405
- type: map_at_10
value: 30.043
- type: map_at_100
value: 31.191000000000003
- type: map_at_1000
value: 31.275
- type: map_at_3
value: 27.034000000000002
- type: map_at_5
value: 28.688000000000002
- type: mrr_at_1
value: 24.068
- type: mrr_at_10
value: 31.993
- type: mrr_at_100
value: 32.992
- type: mrr_at_1000
value: 33.050000000000004
- type: mrr_at_3
value: 28.964000000000002
- type: mrr_at_5
value: 30.653000000000002
- type: ndcg_at_1
value: 24.068
- type: ndcg_at_10
value: 35.198
- type: ndcg_at_100
value: 40.709
- type: ndcg_at_1000
value: 42.855
- type: ndcg_at_3
value: 29.139
- type: ndcg_at_5
value: 32.045
- type: precision_at_1
value: 24.068
- type: precision_at_10
value: 5.65
- type: precision_at_100
value: 0.885
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 12.279
- type: precision_at_5
value: 8.994
- type: recall_at_1
value: 22.405
- type: recall_at_10
value: 49.391
- type: recall_at_100
value: 74.53699999999999
- type: recall_at_1000
value: 90.605
- type: recall_at_3
value: 33.126
- type: recall_at_5
value: 40.073
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.309999999999999
- type: map_at_10
value: 20.688000000000002
- type: map_at_100
value: 22.022
- type: map_at_1000
value: 22.152
- type: map_at_3
value: 17.954
- type: map_at_5
value: 19.439
- type: mrr_at_1
value: 16.294
- type: mrr_at_10
value: 24.479
- type: mrr_at_100
value: 25.515
- type: mrr_at_1000
value: 25.593
- type: mrr_at_3
value: 21.642
- type: mrr_at_5
value: 23.189999999999998
- type: ndcg_at_1
value: 16.294
- type: ndcg_at_10
value: 25.833000000000002
- type: ndcg_at_100
value: 32.074999999999996
- type: ndcg_at_1000
value: 35.083
- type: ndcg_at_3
value: 20.493
- type: ndcg_at_5
value: 22.949
- type: precision_at_1
value: 16.294
- type: precision_at_10
value: 5.112
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 9.908999999999999
- type: precision_at_5
value: 7.587000000000001
- type: recall_at_1
value: 13.309999999999999
- type: recall_at_10
value: 37.851
- type: recall_at_100
value: 64.835
- type: recall_at_1000
value: 86.334
- type: recall_at_3
value: 23.493
- type: recall_at_5
value: 29.528
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.857999999999997
- type: map_at_10
value: 35.503
- type: map_at_100
value: 36.957
- type: map_at_1000
value: 37.065
- type: map_at_3
value: 32.275999999999996
- type: map_at_5
value: 34.119
- type: mrr_at_1
value: 31.954
- type: mrr_at_10
value: 40.851
- type: mrr_at_100
value: 41.863
- type: mrr_at_1000
value: 41.900999999999996
- type: mrr_at_3
value: 38.129999999999995
- type: mrr_at_5
value: 39.737
- type: ndcg_at_1
value: 31.954
- type: ndcg_at_10
value: 41.343999999999994
- type: ndcg_at_100
value: 47.397
- type: ndcg_at_1000
value: 49.501
- type: ndcg_at_3
value: 36.047000000000004
- type: ndcg_at_5
value: 38.639
- type: precision_at_1
value: 31.954
- type: precision_at_10
value: 7.68
- type: precision_at_100
value: 1.247
- type: precision_at_1000
value: 0.16199999999999998
- type: precision_at_3
value: 17.132
- type: precision_at_5
value: 12.589
- type: recall_at_1
value: 25.857999999999997
- type: recall_at_10
value: 53.43599999999999
- type: recall_at_100
value: 78.82400000000001
- type: recall_at_1000
value: 92.78999999999999
- type: recall_at_3
value: 38.655
- type: recall_at_5
value: 45.216
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.709
- type: map_at_10
value: 34.318
- type: map_at_100
value: 35.657
- type: map_at_1000
value: 35.783
- type: map_at_3
value: 31.326999999999998
- type: map_at_5
value: 33.021
- type: mrr_at_1
value: 30.137000000000004
- type: mrr_at_10
value: 39.093
- type: mrr_at_100
value: 39.992
- type: mrr_at_1000
value: 40.056999999999995
- type: mrr_at_3
value: 36.606
- type: mrr_at_5
value: 37.861
- type: ndcg_at_1
value: 30.137000000000004
- type: ndcg_at_10
value: 39.974
- type: ndcg_at_100
value: 45.647999999999996
- type: ndcg_at_1000
value: 48.259
- type: ndcg_at_3
value: 35.028
- type: ndcg_at_5
value: 37.175999999999995
- type: precision_at_1
value: 30.137000000000004
- type: precision_at_10
value: 7.363
- type: precision_at_100
value: 1.184
- type: precision_at_1000
value: 0.161
- type: precision_at_3
value: 16.857
- type: precision_at_5
value: 11.963
- type: recall_at_1
value: 24.709
- type: recall_at_10
value: 52.087
- type: recall_at_100
value: 76.125
- type: recall_at_1000
value: 93.82300000000001
- type: recall_at_3
value: 38.149
- type: recall_at_5
value: 43.984
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.40791666666667
- type: map_at_10
value: 32.458083333333335
- type: map_at_100
value: 33.691916666666664
- type: map_at_1000
value: 33.81191666666666
- type: map_at_3
value: 29.51625
- type: map_at_5
value: 31.168083333333335
- type: mrr_at_1
value: 27.96591666666666
- type: mrr_at_10
value: 36.528583333333344
- type: mrr_at_100
value: 37.404
- type: mrr_at_1000
value: 37.464333333333336
- type: mrr_at_3
value: 33.92883333333333
- type: mrr_at_5
value: 35.41933333333333
- type: ndcg_at_1
value: 27.96591666666666
- type: ndcg_at_10
value: 37.89141666666666
- type: ndcg_at_100
value: 43.23066666666666
- type: ndcg_at_1000
value: 45.63258333333333
- type: ndcg_at_3
value: 32.811249999999994
- type: ndcg_at_5
value: 35.22566666666667
- type: precision_at_1
value: 27.96591666666666
- type: precision_at_10
value: 6.834083333333332
- type: precision_at_100
value: 1.12225
- type: precision_at_1000
value: 0.15241666666666667
- type: precision_at_3
value: 15.264333333333335
- type: precision_at_5
value: 11.039416666666666
- type: recall_at_1
value: 23.40791666666667
- type: recall_at_10
value: 49.927083333333336
- type: recall_at_100
value: 73.44641666666668
- type: recall_at_1000
value: 90.19950000000001
- type: recall_at_3
value: 35.88341666666667
- type: recall_at_5
value: 42.061249999999994
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.592000000000002
- type: map_at_10
value: 26.895999999999997
- type: map_at_100
value: 27.921000000000003
- type: map_at_1000
value: 28.02
- type: map_at_3
value: 24.883
- type: map_at_5
value: 25.812
- type: mrr_at_1
value: 22.698999999999998
- type: mrr_at_10
value: 29.520999999999997
- type: mrr_at_100
value: 30.458000000000002
- type: mrr_at_1000
value: 30.526999999999997
- type: mrr_at_3
value: 27.633000000000003
- type: mrr_at_5
value: 28.483999999999998
- type: ndcg_at_1
value: 22.698999999999998
- type: ndcg_at_10
value: 31.061
- type: ndcg_at_100
value: 36.398
- type: ndcg_at_1000
value: 38.89
- type: ndcg_at_3
value: 27.149
- type: ndcg_at_5
value: 28.627000000000002
- type: precision_at_1
value: 22.698999999999998
- type: precision_at_10
value: 5.106999999999999
- type: precision_at_100
value: 0.857
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 11.963
- type: precision_at_5
value: 8.221
- type: recall_at_1
value: 19.592000000000002
- type: recall_at_10
value: 41.329
- type: recall_at_100
value: 66.094
- type: recall_at_1000
value: 84.511
- type: recall_at_3
value: 30.61
- type: recall_at_5
value: 34.213
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.71
- type: map_at_10
value: 20.965
- type: map_at_100
value: 21.994
- type: map_at_1000
value: 22.133
- type: map_at_3
value: 18.741
- type: map_at_5
value: 19.951
- type: mrr_at_1
value: 18.307000000000002
- type: mrr_at_10
value: 24.66
- type: mrr_at_100
value: 25.540000000000003
- type: mrr_at_1000
value: 25.629
- type: mrr_at_3
value: 22.511
- type: mrr_at_5
value: 23.72
- type: ndcg_at_1
value: 18.307000000000002
- type: ndcg_at_10
value: 25.153
- type: ndcg_at_100
value: 30.229
- type: ndcg_at_1000
value: 33.623
- type: ndcg_at_3
value: 21.203
- type: ndcg_at_5
value: 23.006999999999998
- type: precision_at_1
value: 18.307000000000002
- type: precision_at_10
value: 4.725
- type: precision_at_100
value: 0.8659999999999999
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 10.14
- type: precision_at_5
value: 7.481
- type: recall_at_1
value: 14.71
- type: recall_at_10
value: 34.087
- type: recall_at_100
value: 57.147999999999996
- type: recall_at_1000
value: 81.777
- type: recall_at_3
value: 22.996
- type: recall_at_5
value: 27.73
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.472
- type: map_at_10
value: 32.699
- type: map_at_100
value: 33.867000000000004
- type: map_at_1000
value: 33.967000000000006
- type: map_at_3
value: 29.718
- type: map_at_5
value: 31.345
- type: mrr_at_1
value: 28.265
- type: mrr_at_10
value: 36.945
- type: mrr_at_100
value: 37.794
- type: mrr_at_1000
value: 37.857
- type: mrr_at_3
value: 34.266000000000005
- type: mrr_at_5
value: 35.768
- type: ndcg_at_1
value: 28.265
- type: ndcg_at_10
value: 38.35
- type: ndcg_at_100
value: 43.739
- type: ndcg_at_1000
value: 46.087
- type: ndcg_at_3
value: 33.004
- type: ndcg_at_5
value: 35.411
- type: precision_at_1
value: 28.265
- type: precision_at_10
value: 6.715999999999999
- type: precision_at_100
value: 1.059
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 15.299
- type: precision_at_5
value: 10.951
- type: recall_at_1
value: 23.472
- type: recall_at_10
value: 51.413
- type: recall_at_100
value: 75.17
- type: recall_at_1000
value: 91.577
- type: recall_at_3
value: 36.651
- type: recall_at_5
value: 42.814
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.666
- type: map_at_10
value: 32.963
- type: map_at_100
value: 34.544999999999995
- type: map_at_1000
value: 34.792
- type: map_at_3
value: 29.74
- type: map_at_5
value: 31.5
- type: mrr_at_1
value: 29.051
- type: mrr_at_10
value: 38.013000000000005
- type: mrr_at_100
value: 38.997
- type: mrr_at_1000
value: 39.055
- type: mrr_at_3
value: 34.947
- type: mrr_at_5
value: 36.815
- type: ndcg_at_1
value: 29.051
- type: ndcg_at_10
value: 39.361000000000004
- type: ndcg_at_100
value: 45.186
- type: ndcg_at_1000
value: 47.867
- type: ndcg_at_3
value: 33.797
- type: ndcg_at_5
value: 36.456
- type: precision_at_1
value: 29.051
- type: precision_at_10
value: 7.668
- type: precision_at_100
value: 1.532
- type: precision_at_1000
value: 0.247
- type: precision_at_3
value: 15.876000000000001
- type: precision_at_5
value: 11.779
- type: recall_at_1
value: 23.666
- type: recall_at_10
value: 51.858000000000004
- type: recall_at_100
value: 77.805
- type: recall_at_1000
value: 94.504
- type: recall_at_3
value: 36.207
- type: recall_at_5
value: 43.094
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.662
- type: map_at_10
value: 23.594
- type: map_at_100
value: 24.593999999999998
- type: map_at_1000
value: 24.694
- type: map_at_3
value: 20.925
- type: map_at_5
value: 22.817999999999998
- type: mrr_at_1
value: 17.375
- type: mrr_at_10
value: 25.734
- type: mrr_at_100
value: 26.586
- type: mrr_at_1000
value: 26.671
- type: mrr_at_3
value: 23.044
- type: mrr_at_5
value: 24.975
- type: ndcg_at_1
value: 17.375
- type: ndcg_at_10
value: 28.186
- type: ndcg_at_100
value: 33.436
- type: ndcg_at_1000
value: 36.203
- type: ndcg_at_3
value: 23.152
- type: ndcg_at_5
value: 26.397
- type: precision_at_1
value: 17.375
- type: precision_at_10
value: 4.677
- type: precision_at_100
value: 0.786
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 10.351
- type: precision_at_5
value: 7.985
- type: recall_at_1
value: 15.662
- type: recall_at_10
value: 40.066
- type: recall_at_100
value: 65.006
- type: recall_at_1000
value: 85.94000000000001
- type: recall_at_3
value: 27.400000000000002
- type: recall_at_5
value: 35.002
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.853
- type: map_at_10
value: 15.568000000000001
- type: map_at_100
value: 17.383000000000003
- type: map_at_1000
value: 17.584
- type: map_at_3
value: 12.561
- type: map_at_5
value: 14.056
- type: mrr_at_1
value: 18.958
- type: mrr_at_10
value: 28.288000000000004
- type: mrr_at_100
value: 29.432000000000002
- type: mrr_at_1000
value: 29.498
- type: mrr_at_3
value: 25.049
- type: mrr_at_5
value: 26.857
- type: ndcg_at_1
value: 18.958
- type: ndcg_at_10
value: 22.21
- type: ndcg_at_100
value: 29.596
- type: ndcg_at_1000
value: 33.583
- type: ndcg_at_3
value: 16.994999999999997
- type: ndcg_at_5
value: 18.95
- type: precision_at_1
value: 18.958
- type: precision_at_10
value: 7.192
- type: precision_at_100
value: 1.5
- type: precision_at_1000
value: 0.22399999999999998
- type: precision_at_3
value: 12.573
- type: precision_at_5
value: 10.202
- type: recall_at_1
value: 8.853
- type: recall_at_10
value: 28.087
- type: recall_at_100
value: 53.701
- type: recall_at_1000
value: 76.29899999999999
- type: recall_at_3
value: 15.913
- type: recall_at_5
value: 20.658
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.077
- type: map_at_10
value: 20.788999999999998
- type: map_at_100
value: 30.429000000000002
- type: map_at_1000
value: 32.143
- type: map_at_3
value: 14.692
- type: map_at_5
value: 17.139
- type: mrr_at_1
value: 70.75
- type: mrr_at_10
value: 78.036
- type: mrr_at_100
value: 78.401
- type: mrr_at_1000
value: 78.404
- type: mrr_at_3
value: 76.75
- type: mrr_at_5
value: 77.47500000000001
- type: ndcg_at_1
value: 58.12500000000001
- type: ndcg_at_10
value: 44.015
- type: ndcg_at_100
value: 49.247
- type: ndcg_at_1000
value: 56.211999999999996
- type: ndcg_at_3
value: 49.151
- type: ndcg_at_5
value: 46.195
- type: precision_at_1
value: 70.75
- type: precision_at_10
value: 35.5
- type: precision_at_100
value: 11.355
- type: precision_at_1000
value: 2.1950000000000003
- type: precision_at_3
value: 53.083000000000006
- type: precision_at_5
value: 44.800000000000004
- type: recall_at_1
value: 9.077
- type: recall_at_10
value: 26.259
- type: recall_at_100
value: 56.547000000000004
- type: recall_at_1000
value: 78.551
- type: recall_at_3
value: 16.162000000000003
- type: recall_at_5
value: 19.753999999999998
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 49.44500000000001
- type: f1
value: 44.67067691783401
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 68.182
- type: map_at_10
value: 78.223
- type: map_at_100
value: 78.498
- type: map_at_1000
value: 78.512
- type: map_at_3
value: 76.71
- type: map_at_5
value: 77.725
- type: mrr_at_1
value: 73.177
- type: mrr_at_10
value: 82.513
- type: mrr_at_100
value: 82.633
- type: mrr_at_1000
value: 82.635
- type: mrr_at_3
value: 81.376
- type: mrr_at_5
value: 82.182
- type: ndcg_at_1
value: 73.177
- type: ndcg_at_10
value: 82.829
- type: ndcg_at_100
value: 83.84
- type: ndcg_at_1000
value: 84.07900000000001
- type: ndcg_at_3
value: 80.303
- type: ndcg_at_5
value: 81.846
- type: precision_at_1
value: 73.177
- type: precision_at_10
value: 10.241999999999999
- type: precision_at_100
value: 1.099
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 31.247999999999998
- type: precision_at_5
value: 19.697
- type: recall_at_1
value: 68.182
- type: recall_at_10
value: 92.657
- type: recall_at_100
value: 96.709
- type: recall_at_1000
value: 98.184
- type: recall_at_3
value: 85.9
- type: recall_at_5
value: 89.755
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.108
- type: map_at_10
value: 33.342
- type: map_at_100
value: 35.281
- type: map_at_1000
value: 35.478
- type: map_at_3
value: 29.067
- type: map_at_5
value: 31.563000000000002
- type: mrr_at_1
value: 41.667
- type: mrr_at_10
value: 49.913000000000004
- type: mrr_at_100
value: 50.724000000000004
- type: mrr_at_1000
value: 50.766
- type: mrr_at_3
value: 47.504999999999995
- type: mrr_at_5
value: 49.033
- type: ndcg_at_1
value: 41.667
- type: ndcg_at_10
value: 41.144
- type: ndcg_at_100
value: 48.326
- type: ndcg_at_1000
value: 51.486
- type: ndcg_at_3
value: 37.486999999999995
- type: ndcg_at_5
value: 38.78
- type: precision_at_1
value: 41.667
- type: precision_at_10
value: 11.358
- type: precision_at_100
value: 1.873
- type: precision_at_1000
value: 0.244
- type: precision_at_3
value: 25
- type: precision_at_5
value: 18.519
- type: recall_at_1
value: 21.108
- type: recall_at_10
value: 47.249
- type: recall_at_100
value: 74.52
- type: recall_at_1000
value: 93.31
- type: recall_at_3
value: 33.271
- type: recall_at_5
value: 39.723000000000006
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.317
- type: map_at_10
value: 64.861
- type: map_at_100
value: 65.697
- type: map_at_1000
value: 65.755
- type: map_at_3
value: 61.258
- type: map_at_5
value: 63.590999999999994
- type: mrr_at_1
value: 80.635
- type: mrr_at_10
value: 86.528
- type: mrr_at_100
value: 86.66199999999999
- type: mrr_at_1000
value: 86.666
- type: mrr_at_3
value: 85.744
- type: mrr_at_5
value: 86.24300000000001
- type: ndcg_at_1
value: 80.635
- type: ndcg_at_10
value: 73.13199999999999
- type: ndcg_at_100
value: 75.927
- type: ndcg_at_1000
value: 76.976
- type: ndcg_at_3
value: 68.241
- type: ndcg_at_5
value: 71.071
- type: precision_at_1
value: 80.635
- type: precision_at_10
value: 15.326
- type: precision_at_100
value: 1.7500000000000002
- type: precision_at_1000
value: 0.189
- type: precision_at_3
value: 43.961
- type: precision_at_5
value: 28.599999999999998
- type: recall_at_1
value: 40.317
- type: recall_at_10
value: 76.631
- type: recall_at_100
value: 87.495
- type: recall_at_1000
value: 94.362
- type: recall_at_3
value: 65.94200000000001
- type: recall_at_5
value: 71.499
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 91.686
- type: ap
value: 87.5577120393173
- type: f1
value: 91.6629447355139
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.702
- type: map_at_10
value: 36.414
- type: map_at_100
value: 37.561
- type: map_at_1000
value: 37.605
- type: map_at_3
value: 32.456
- type: map_at_5
value: 34.827000000000005
- type: mrr_at_1
value: 24.355
- type: mrr_at_10
value: 37.01
- type: mrr_at_100
value: 38.085
- type: mrr_at_1000
value: 38.123000000000005
- type: mrr_at_3
value: 33.117999999999995
- type: mrr_at_5
value: 35.452
- type: ndcg_at_1
value: 24.384
- type: ndcg_at_10
value: 43.456
- type: ndcg_at_100
value: 48.892
- type: ndcg_at_1000
value: 49.964
- type: ndcg_at_3
value: 35.475
- type: ndcg_at_5
value: 39.711
- type: precision_at_1
value: 24.384
- type: precision_at_10
value: 6.7940000000000005
- type: precision_at_100
value: 0.951
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 15.052999999999999
- type: precision_at_5
value: 11.189
- type: recall_at_1
value: 23.702
- type: recall_at_10
value: 65.057
- type: recall_at_100
value: 90.021
- type: recall_at_1000
value: 98.142
- type: recall_at_3
value: 43.551
- type: recall_at_5
value: 53.738
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.62380300957591
- type: f1
value: 94.49871222100734
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.14090287277702
- type: f1
value: 60.32101258220515
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.84330867518494
- type: f1
value: 71.92248688515255
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.10692669804976
- type: f1
value: 77.9904839122866
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.822988923078444
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.38394880253403
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.82504612539082
- type: mrr
value: 32.84462298174977
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.029
- type: map_at_10
value: 14.088999999999999
- type: map_at_100
value: 17.601
- type: map_at_1000
value: 19.144
- type: map_at_3
value: 10.156
- type: map_at_5
value: 11.892
- type: mrr_at_1
value: 46.44
- type: mrr_at_10
value: 56.596999999999994
- type: mrr_at_100
value: 57.11000000000001
- type: mrr_at_1000
value: 57.14
- type: mrr_at_3
value: 54.334
- type: mrr_at_5
value: 55.774
- type: ndcg_at_1
value: 44.891999999999996
- type: ndcg_at_10
value: 37.134
- type: ndcg_at_100
value: 33.652
- type: ndcg_at_1000
value: 42.548
- type: ndcg_at_3
value: 41.851
- type: ndcg_at_5
value: 39.842
- type: precision_at_1
value: 46.44
- type: precision_at_10
value: 27.647
- type: precision_at_100
value: 8.309999999999999
- type: precision_at_1000
value: 2.146
- type: precision_at_3
value: 39.422000000000004
- type: precision_at_5
value: 34.675
- type: recall_at_1
value: 6.029
- type: recall_at_10
value: 18.907
- type: recall_at_100
value: 33.76
- type: recall_at_1000
value: 65.14999999999999
- type: recall_at_3
value: 11.584999999999999
- type: recall_at_5
value: 14.626
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.373000000000005
- type: map_at_10
value: 55.836
- type: map_at_100
value: 56.611999999999995
- type: map_at_1000
value: 56.63
- type: map_at_3
value: 51.747
- type: map_at_5
value: 54.337999999999994
- type: mrr_at_1
value: 44.147999999999996
- type: mrr_at_10
value: 58.42699999999999
- type: mrr_at_100
value: 58.902
- type: mrr_at_1000
value: 58.914
- type: mrr_at_3
value: 55.156000000000006
- type: mrr_at_5
value: 57.291000000000004
- type: ndcg_at_1
value: 44.119
- type: ndcg_at_10
value: 63.444
- type: ndcg_at_100
value: 66.40599999999999
- type: ndcg_at_1000
value: 66.822
- type: ndcg_at_3
value: 55.962
- type: ndcg_at_5
value: 60.228
- type: precision_at_1
value: 44.119
- type: precision_at_10
value: 10.006
- type: precision_at_100
value: 1.17
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.135
- type: precision_at_5
value: 17.59
- type: recall_at_1
value: 39.373000000000005
- type: recall_at_10
value: 83.78999999999999
- type: recall_at_100
value: 96.246
- type: recall_at_1000
value: 99.324
- type: recall_at_3
value: 64.71900000000001
- type: recall_at_5
value: 74.508
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.199
- type: map_at_10
value: 82.892
- type: map_at_100
value: 83.578
- type: map_at_1000
value: 83.598
- type: map_at_3
value: 79.948
- type: map_at_5
value: 81.779
- type: mrr_at_1
value: 79.67
- type: mrr_at_10
value: 86.115
- type: mrr_at_100
value: 86.249
- type: mrr_at_1000
value: 86.251
- type: mrr_at_3
value: 85.08200000000001
- type: mrr_at_5
value: 85.783
- type: ndcg_at_1
value: 79.67
- type: ndcg_at_10
value: 86.839
- type: ndcg_at_100
value: 88.252
- type: ndcg_at_1000
value: 88.401
- type: ndcg_at_3
value: 83.86200000000001
- type: ndcg_at_5
value: 85.473
- type: precision_at_1
value: 79.67
- type: precision_at_10
value: 13.19
- type: precision_at_100
value: 1.521
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 36.677
- type: precision_at_5
value: 24.118000000000002
- type: recall_at_1
value: 69.199
- type: recall_at_10
value: 94.321
- type: recall_at_100
value: 99.20400000000001
- type: recall_at_1000
value: 99.947
- type: recall_at_3
value: 85.787
- type: recall_at_5
value: 90.365
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.82810046856353
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 63.38132611783628
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.127000000000001
- type: map_at_10
value: 12.235
- type: map_at_100
value: 14.417
- type: map_at_1000
value: 14.75
- type: map_at_3
value: 8.906
- type: map_at_5
value: 10.591000000000001
- type: mrr_at_1
value: 25.2
- type: mrr_at_10
value: 35.879
- type: mrr_at_100
value: 36.935
- type: mrr_at_1000
value: 36.997
- type: mrr_at_3
value: 32.783
- type: mrr_at_5
value: 34.367999999999995
- type: ndcg_at_1
value: 25.2
- type: ndcg_at_10
value: 20.509
- type: ndcg_at_100
value: 28.67
- type: ndcg_at_1000
value: 34.42
- type: ndcg_at_3
value: 19.948
- type: ndcg_at_5
value: 17.166
- type: precision_at_1
value: 25.2
- type: precision_at_10
value: 10.440000000000001
- type: precision_at_100
value: 2.214
- type: precision_at_1000
value: 0.359
- type: precision_at_3
value: 18.533
- type: precision_at_5
value: 14.860000000000001
- type: recall_at_1
value: 5.127000000000001
- type: recall_at_10
value: 21.147
- type: recall_at_100
value: 44.946999999999996
- type: recall_at_1000
value: 72.89
- type: recall_at_3
value: 11.277
- type: recall_at_5
value: 15.042
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.0373011786213
- type: cos_sim_spearman
value: 79.27889560856613
- type: euclidean_pearson
value: 80.31186315495655
- type: euclidean_spearman
value: 79.41630415280811
- type: manhattan_pearson
value: 80.31755140442013
- type: manhattan_spearman
value: 79.43069870027611
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.8659751342045
- type: cos_sim_spearman
value: 76.95377612997667
- type: euclidean_pearson
value: 81.24552945497848
- type: euclidean_spearman
value: 77.18236963555253
- type: manhattan_pearson
value: 81.26477607759037
- type: manhattan_spearman
value: 77.13821753062756
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 83.34597139044875
- type: cos_sim_spearman
value: 84.124169425592
- type: euclidean_pearson
value: 83.68590721511401
- type: euclidean_spearman
value: 84.18846190846398
- type: manhattan_pearson
value: 83.57630235061498
- type: manhattan_spearman
value: 84.10244043726902
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.67641885599572
- type: cos_sim_spearman
value: 80.46450725650428
- type: euclidean_pearson
value: 81.61645042715865
- type: euclidean_spearman
value: 80.61418394236874
- type: manhattan_pearson
value: 81.55712034928871
- type: manhattan_spearman
value: 80.57905670523951
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.86650310886782
- type: cos_sim_spearman
value: 89.76081629222328
- type: euclidean_pearson
value: 89.1530747029954
- type: euclidean_spearman
value: 89.80990657280248
- type: manhattan_pearson
value: 89.10640563278132
- type: manhattan_spearman
value: 89.76282108434047
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.93864027911118
- type: cos_sim_spearman
value: 85.47096193999023
- type: euclidean_pearson
value: 85.03141840870533
- type: euclidean_spearman
value: 85.43124029598181
- type: manhattan_pearson
value: 84.99002664393512
- type: manhattan_spearman
value: 85.39169195120834
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.7045343749832
- type: cos_sim_spearman
value: 89.03262221146677
- type: euclidean_pearson
value: 89.56078218264365
- type: euclidean_spearman
value: 89.17827006466868
- type: manhattan_pearson
value: 89.52717595468582
- type: manhattan_spearman
value: 89.15878115952923
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.20191302875551
- type: cos_sim_spearman
value: 64.11446552557646
- type: euclidean_pearson
value: 64.6918197393619
- type: euclidean_spearman
value: 63.440182631197764
- type: manhattan_pearson
value: 64.55692904121835
- type: manhattan_spearman
value: 63.424877742756266
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.37793104662344
- type: cos_sim_spearman
value: 87.7357802629067
- type: euclidean_pearson
value: 87.4286301545109
- type: euclidean_spearman
value: 87.78452920777421
- type: manhattan_pearson
value: 87.42445169331255
- type: manhattan_spearman
value: 87.78537677249598
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 84.31465405081792
- type: mrr
value: 95.7173781193389
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.760999999999996
- type: map_at_10
value: 67.904
- type: map_at_100
value: 68.539
- type: map_at_1000
value: 68.562
- type: map_at_3
value: 65.415
- type: map_at_5
value: 66.788
- type: mrr_at_1
value: 60.333000000000006
- type: mrr_at_10
value: 68.797
- type: mrr_at_100
value: 69.236
- type: mrr_at_1000
value: 69.257
- type: mrr_at_3
value: 66.667
- type: mrr_at_5
value: 67.967
- type: ndcg_at_1
value: 60.333000000000006
- type: ndcg_at_10
value: 72.24199999999999
- type: ndcg_at_100
value: 74.86
- type: ndcg_at_1000
value: 75.354
- type: ndcg_at_3
value: 67.93400000000001
- type: ndcg_at_5
value: 70.02199999999999
- type: precision_at_1
value: 60.333000000000006
- type: precision_at_10
value: 9.533
- type: precision_at_100
value: 1.09
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.778000000000002
- type: precision_at_5
value: 17.467
- type: recall_at_1
value: 57.760999999999996
- type: recall_at_10
value: 84.383
- type: recall_at_100
value: 96.267
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 72.628
- type: recall_at_5
value: 78.094
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.8029702970297
- type: cos_sim_ap
value: 94.9210324173411
- type: cos_sim_f1
value: 89.8521162672106
- type: cos_sim_precision
value: 91.67533818938605
- type: cos_sim_recall
value: 88.1
- type: dot_accuracy
value: 99.69504950495049
- type: dot_ap
value: 90.4919719146181
- type: dot_f1
value: 84.72289156626506
- type: dot_precision
value: 81.76744186046511
- type: dot_recall
value: 87.9
- type: euclidean_accuracy
value: 99.79702970297029
- type: euclidean_ap
value: 94.87827463795753
- type: euclidean_f1
value: 89.55680081507896
- type: euclidean_precision
value: 91.27725856697819
- type: euclidean_recall
value: 87.9
- type: manhattan_accuracy
value: 99.7990099009901
- type: manhattan_ap
value: 94.87587025149682
- type: manhattan_f1
value: 89.76298537569339
- type: manhattan_precision
value: 90.53916581892166
- type: manhattan_recall
value: 89
- type: max_accuracy
value: 99.8029702970297
- type: max_ap
value: 94.9210324173411
- type: max_f1
value: 89.8521162672106
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 65.92385753948724
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.671756975431144
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.677928036739004
- type: mrr
value: 51.56413133435193
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.523589340819683
- type: cos_sim_spearman
value: 30.187407518823235
- type: dot_pearson
value: 29.039713969699015
- type: dot_spearman
value: 29.114740651155508
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.211
- type: map_at_10
value: 1.6199999999999999
- type: map_at_100
value: 8.658000000000001
- type: map_at_1000
value: 21.538
- type: map_at_3
value: 0.575
- type: map_at_5
value: 0.919
- type: mrr_at_1
value: 78
- type: mrr_at_10
value: 86.18599999999999
- type: mrr_at_100
value: 86.18599999999999
- type: mrr_at_1000
value: 86.18599999999999
- type: mrr_at_3
value: 85
- type: mrr_at_5
value: 85.9
- type: ndcg_at_1
value: 74
- type: ndcg_at_10
value: 66.542
- type: ndcg_at_100
value: 50.163999999999994
- type: ndcg_at_1000
value: 45.696999999999996
- type: ndcg_at_3
value: 71.531
- type: ndcg_at_5
value: 70.45
- type: precision_at_1
value: 78
- type: precision_at_10
value: 69.39999999999999
- type: precision_at_100
value: 51.06
- type: precision_at_1000
value: 20.022000000000002
- type: precision_at_3
value: 76
- type: precision_at_5
value: 74.8
- type: recall_at_1
value: 0.211
- type: recall_at_10
value: 1.813
- type: recall_at_100
value: 12.098
- type: recall_at_1000
value: 42.618
- type: recall_at_3
value: 0.603
- type: recall_at_5
value: 0.987
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.2079999999999997
- type: map_at_10
value: 7.777000000000001
- type: map_at_100
value: 12.825000000000001
- type: map_at_1000
value: 14.196
- type: map_at_3
value: 4.285
- type: map_at_5
value: 6.177
- type: mrr_at_1
value: 30.612000000000002
- type: mrr_at_10
value: 42.635
- type: mrr_at_100
value: 43.955
- type: mrr_at_1000
value: 43.955
- type: mrr_at_3
value: 38.435
- type: mrr_at_5
value: 41.088
- type: ndcg_at_1
value: 28.571
- type: ndcg_at_10
value: 20.666999999999998
- type: ndcg_at_100
value: 31.840000000000003
- type: ndcg_at_1000
value: 43.191
- type: ndcg_at_3
value: 23.45
- type: ndcg_at_5
value: 22.994
- type: precision_at_1
value: 30.612000000000002
- type: precision_at_10
value: 17.959
- type: precision_at_100
value: 6.755
- type: precision_at_1000
value: 1.4200000000000002
- type: precision_at_3
value: 23.810000000000002
- type: precision_at_5
value: 23.673
- type: recall_at_1
value: 2.2079999999999997
- type: recall_at_10
value: 13.144
- type: recall_at_100
value: 42.491
- type: recall_at_1000
value: 77.04299999999999
- type: recall_at_3
value: 5.3469999999999995
- type: recall_at_5
value: 9.139
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.9044
- type: ap
value: 14.625783489340755
- type: f1
value: 54.814936562590546
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.94227504244483
- type: f1
value: 61.22516038508854
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.602409155145864
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.94641473445789
- type: cos_sim_ap
value: 76.91572747061197
- type: cos_sim_f1
value: 70.14348097317529
- type: cos_sim_precision
value: 66.53254437869822
- type: cos_sim_recall
value: 74.1688654353562
- type: dot_accuracy
value: 84.80061989628658
- type: dot_ap
value: 70.7952548895177
- type: dot_f1
value: 65.44780728844965
- type: dot_precision
value: 61.53310104529617
- type: dot_recall
value: 69.89445910290237
- type: euclidean_accuracy
value: 86.94641473445789
- type: euclidean_ap
value: 76.80774009393652
- type: euclidean_f1
value: 70.30522503879979
- type: euclidean_precision
value: 68.94977168949772
- type: euclidean_recall
value: 71.71503957783642
- type: manhattan_accuracy
value: 86.8629671574179
- type: manhattan_ap
value: 76.76518632600317
- type: manhattan_f1
value: 70.16056518946692
- type: manhattan_precision
value: 68.360450563204
- type: manhattan_recall
value: 72.0580474934037
- type: max_accuracy
value: 86.94641473445789
- type: max_ap
value: 76.91572747061197
- type: max_f1
value: 70.30522503879979
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.10428066907285
- type: cos_sim_ap
value: 86.25114759921435
- type: cos_sim_f1
value: 78.37857884586856
- type: cos_sim_precision
value: 75.60818546078993
- type: cos_sim_recall
value: 81.35971666153372
- type: dot_accuracy
value: 87.41995575736406
- type: dot_ap
value: 81.51838010086782
- type: dot_f1
value: 74.77398015435503
- type: dot_precision
value: 71.53002390662354
- type: dot_recall
value: 78.32614721281182
- type: euclidean_accuracy
value: 89.12368533395428
- type: euclidean_ap
value: 86.33456799874504
- type: euclidean_f1
value: 78.45496750232127
- type: euclidean_precision
value: 75.78388462366364
- type: euclidean_recall
value: 81.32121958731136
- type: manhattan_accuracy
value: 89.10622113556099
- type: manhattan_ap
value: 86.31215061745333
- type: manhattan_f1
value: 78.40684906011539
- type: manhattan_precision
value: 75.89536643366722
- type: manhattan_recall
value: 81.09023714197721
- type: max_accuracy
value: 89.12368533395428
- type: max_ap
value: 86.33456799874504
- type: max_f1
value: 78.45496750232127
language:
- en
license: mit
---
# E5-large-v2
[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
This model has 24 layers and the embedding size is 1024.
## Usage
Below is an example of how to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
'query: summit define',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]
tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-large-v2')
model = AutoModel.from_pretrained('intfloat/e5-large-v2')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
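As a small follow-up (continuing the snippet above, purely for illustration), the score matrix can be used to rank the passages for each query:
```python
# Continuing from the variables defined in the snippet above:
# rank the passages for each query by their similarity score.
num_queries = 2
for qi in range(num_queries):
    query_scores = scores[qi].tolist()
    ranked = sorted(enumerate(query_scores), key=lambda x: x[1], reverse=True)
    print(input_texts[qi])
    for pi, s in ranked:
        print(f"  {s:.2f}  {input_texts[num_queries + pi][:60]}...")
```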
## Training Details
Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
## Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## Citation
If you find our paper or models helpful, please consider citing us as follows:
```
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
This model only works for English texts. Long texts will be truncated to at most 512 tokens. |
Culmenus/opus-mt-de-is-finetuned-de-to-is | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
title: Sakamata Font DCGAN
sdk: gradio
sdk_version: 3.1.7
app_file: grd.py
license: other
---
# Sakamata Font DCGAN
This is an experimental project that generates fake handwritten characters with a DCGAN.
Dataset: [SakamataFontProject](https://github.com/sakamata-ch/SakamataFontProject)
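For illustration only, a minimal DCGAN generator sketch in PyTorch for 64x64 grayscale glyph images; the latent size, channel widths, and resolution here are assumptions, not necessarily what this project actually uses.
```python
import torch
import torch.nn as nn
class Generator(nn.Module):
    """Maps a latent vector z to a 64x64 grayscale glyph image (hypothetical sizes)."""
    def __init__(self, z_dim: int = 100, feat: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),    # 1x1 -> 4x4
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),  # -> 8x8
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),  # -> 16x16
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),      # -> 32x32
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, 1, 4, 2, 1, bias=False),             # -> 64x64
            nn.Tanh(),  # pixel values in [-1, 1]
        )
    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)
fake_glyphs = Generator()(torch.randn(16, 100, 1, 1))  # shape: (16, 1, 64, 64)
```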
This project operates under the Hololive Derivative Works Guidelines.
You must read and agree to the guidelines before using any artifacts of this project. |
Culmenus/opus-mt-de-is-finetuned-de-to-is_35g65cc_1 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | "2023-05-19T07:29:02Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_mt5-billsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.0564
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mt5-billsum
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9034
- Rouge1: 0.0564
- Rouge2: 0.0136
- Rougel: 0.0517
- Rougelsum: 0.0517
- Gen Len: 11.9597
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the code sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
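As a rough sketch, these values might be expressed with the Transformers `Seq2SeqTrainingArguments` API; the `output_dir` and `predict_with_generate` settings below are assumptions, not taken from this card.
```python
from transformers import Seq2SeqTrainingArguments
# Hypothetical reconstruction of the hyperparameters listed above; the Adam betas and
# epsilon are the library defaults (0.9, 0.999, 1e-08), so they are not set explicitly.
training_args = Seq2SeqTrainingArguments(
    output_dir="my_awesome_mt5-billsum",   # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,         # total train batch size = 8 * 4 = 32
    num_train_epochs=4,
    lr_scheduler_type="linear",
    seed=42,
    predict_with_generate=True,            # assumed, needed for ROUGE during evaluation
)
```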
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 31 | 8.1506 | 0.0449 | 0.0106 | 0.0418 | 0.0417 | 10.5444 |
| No log | 2.0 | 62 | 6.3407 | 0.0471 | 0.0107 | 0.0435 | 0.0437 | 11.4839 |
| No log | 3.0 | 93 | 5.3118 | 0.0518 | 0.0131 | 0.0474 | 0.0475 | 11.5 |
| No log | 4.0 | 124 | 4.9034 | 0.0564 | 0.0136 | 0.0517 | 0.0517 | 11.9597 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_35g65cc_2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: bigscience-openrail-m
datasets:
- OpenAssistant/oasst1
metrics:
- character
library_name: diffusers
pipeline_tag: text-to-image
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_ancc | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | # How to run this ggml file?
WARNING: transcription with this model can be slow and CPU-intensive.
Clone and build the whisper.cpp repository:
```bash
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
make
```
Place this model, called `large_q5_0.bin`, inside the `whisper.cpp/models` folder and run:
```bash
./main -m models/large_q5_0.bin yourfilename.wav
```
# Command to transcribe to SRT subtitle files:
```bash
./main -m models/large_q5_0.bin yourfilename.wav --output-srt --print-progress
```
# Command to transcribe to TRANSLATED (to English) SRT subtitle files:
```bash
./main -m models/large_q5_0.bin yourfilename.wav --output-srt --print-progress --translate
```
It can transcribe ONLY wav files!
# Command line to convert mp4 (works for any video, just change the extension) to wav:
```bash
ffmpeg -i yourfilename.mp4 -vn -acodec pcm_s16le -ar 16000 -ac 2 yourfilename.wav
```
# Command to convert all mp4 files (works for any video, just change the extension) inside a folder to wav:
```bash
find . -type f -iname "*.mp4" -exec bash -c 'ffmpeg -i "$0" -vn -acodec pcm_s16le -ar 16000 -ac 2 "${0%.*}.wav"' {} \;
```
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_nr2-finetuned-de-to-is_nr2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | "2023-05-19T07:39:42Z" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mBart50_large_torch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBart50_large_torch
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8635
- Rouge1: 0.2879
- Rouge2: 0.1036
- Rougel: 0.2108
- Rougelsum: 0.2116
- Gen Len: 68.2131
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:--------:|
| No log | 1.0 | 140 | 2.5457 | 0.0159 | 0.0039 | 0.0113 | 0.011 | 187.3607 |
| No log | 2.0 | 280 | 2.5558 | 0.2018 | 0.0681 | 0.1589 | 0.1581 | 99.1311 |
| No log | 3.0 | 420 | 2.6483 | 0.2745 | 0.0941 | 0.2044 | 0.2046 | 68.459 |
| 1.8959 | 4.0 | 560 | 2.8635 | 0.2879 | 0.1036 | 0.2108 | 0.2116 | 68.2131 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CurtisBowser/DialoGPT-medium-sora-three | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null |
The AWS Certified Solutions Architect - Associate (SAA-C03) certification is an associate-level certification that validates your ability to design and deploy secure, reliable, and cost-effective solutions on AWS. The exam covers a wide range of AWS services, including compute, storage, networking, databases, analytics, machine learning, and artificial intelligence.
To be successful in the exam, you should have a strong understanding of the AWS Well-Architected Framework and the ability to apply its principles to real-world scenarios. You should also have experience with designing and deploying solutions on AWS.
The exam is 135 minutes long and consists of 65 multiple-choice and multiple-response questions. The passing score is 720.
If you are interested in becoming an AWS Certified Solutions Architect - Associate, there are a number of resources available to help you prepare for the exam. AWS offers a number of free and paid training courses, as well as practice exams. You can also find a number of third-party resources that can help you prepare for the exam.
The AWS Certified Solutions Architect - Associate certification is a valuable asset for anyone who wants to work with AWS. It demonstrates your skills and knowledge of AWS, and it can help you advance your career in cloud computing.
Get More Details: https://www.pass4surexams.com/amazon/saa-c03-dumps.html
|
CurtisBowser/DialoGPT-medium-sora | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# anything midjourney v4.1 API Inference
![generated from stablediffusionapi.com]()
## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed.
Replace Key in below code, change **model_id** to "anything-midjourney"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/anything-midjourney)
Credits: [View credits](https://civitai.com/?query=anything%20midjourney%20v4.1)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
    "key": "",
    "model_id": "anything-midjourney",
    "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
DSI/ar_emotion_6 | [
"pytorch",
"bert",
"transformers"
] | null | {
"architectures": [
"BertForMultiLabelSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: apache-2.0
datasets:
- lambdalabs/pokemon-blip-captions
language:
- en
---
This is the 8-bit quantized OpenVINO version of the [Stable Diffusion model for pokemon generation](https://huggingface.co/valhalla/sd-pokemon-model).
To run the model, use the following code:
```python
%pip install optimum[openvino,diffusers]
from optimum.intel.openvino import OVStableDiffusionPipeline
from diffusers import LMSDiscreteScheduler, DDPMScheduler
import torch
import random
import numpy as np
pipe = OVStableDiffusionPipeline.from_pretrained("OpenVINO/stable-diffusion-pokemons-valhalla-quantized-agressive", compile=False)
pipe.reshape(batch_size=1, height=512, width=512, num_images_per_prompt=1)
pipe.compile()
np.random.seed(42)
random.seed(42)
torch.manual_seed(42)
prompt = "cartoon bird"
output = pipe(prompt, num_inference_steps=50, output_type="pil") #Use 100 for more accurate result
output.images[0].save("output.png")
```
|
DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support | [
"pytorch",
"jax",
"bert",
"text-classification",
"multilingual",
"nl",
"fr",
"en",
"arxiv:2104.09947",
"transformers",
"Tweets",
"Sentiment analysis"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | The most technologically advanced and user-friendly way to convert OST files to PST files is with DataVare OST to PST Converter. Advanced features in this programme optimise the entire conversion process. Emails, notes, messages, emails, contacts, calendars, etc. are all exported. Throughout the conversion process, the folder hierarchy and data integrity are maintained. The user-friendly interface of this software is one of its standout qualities, making it simple to use for both technical and non-technical users. It is a tried-and-true application that delivers entirely accurate results. Microsoft Outlook versions 2003, 2007, 2010, 2013, 2016, and 2019 are all compatible with it. All users can easily and safely use the utility. Their OST files are converted to PST files using this utility without any data being lost. Downloading a free demo that enables you to convert files from an OST folder to PST format will allow you to try it out.
Read more :- https://www.datavare.com/software/ost-to-pst-converter-expert.html |
DaisyMak/bert-finetuned-squad-transformerfrozen-testtoken | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: mit
---
# From Clozing to Comprehending: Retrofitting Pre-trained Masked Language Model to Pre-trained Machine Reader
Pre-trained Machine Reader (PMR) is pre-trained with 18 million Machine Reading Comprehension (MRC) examples constructed with Wikipedia Hyperlinks.
It was introduced in the paper From Clozing to Comprehending: Retrofitting Pre-trained Masked Language Model to Pre-trained Machine Reader by
Weiwen Xu, Xin Li, Wenxuan Zhang, Meng Zhou, Wai Lam, Luo Si, Lidong Bing
and first released in [this repository](https://github.com/DAMO-NLP-SG/PMR).
This model is initialized with roberta-base and further pre-trained with an MRC objective.
## Model description
The model is pre-trained with distantly labeled data using a learning objective called Wiki Anchor Extraction (WAE).
Specifically, we constructed a large volume of general-purpose and high-quality MRC-style training data based on Wikipedia anchors (i.e., hyperlinked texts).
For each Wikipedia anchor, we composed a pair of correlated articles.
One side of the pair is the Wikipedia article that contains detailed descriptions of the hyperlinked entity, which we defined as the definition article.
The other side of the pair is the article that mentions the specific anchor text, which we defined as the mention article.
We composed an MRC-style training instance in which the anchor is the answer,
the surrounding passage of the anchor in the mention article is the context, and the definition of the anchor entity in the definition article is the query.
Based on the above data, we then introduced a novel WAE problem as the pre-training task of PMR.
In this task, PMR determines whether the context and the query are relevant.
If so, PMR extracts the answer from the context that satisfies the query description.
During fine-tuning, we unified downstream NLU tasks in our MRC formulation, which typically falls into four categories:
(1) span extraction with pre-defined labels (e.g., NER) in which each task label is treated as a query to search the corresponding answers in the input text (context);
(2) span extraction with natural questions (e.g., EQA) in which the question is treated as the query for answer extraction from the given passage (context);
(3) sequence classification with pre-defined task labels, such as sentiment analysis. Each task label is used as a query for the input text (context); and
(4) sequence classification with natural questions on multiple choices, such as multi-choice QA (MCQA). We treated the concatenation of the question and one choice as the query for the given passage (context).
Then, in the output space, we tackle span extraction problems by predicting the probability of context span being the answer.
We tackle sequence classification problems by conducting relevance classification on [CLS] (extracting [CLS] if relevant).
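To make the unified MRC formulation concrete, the snippet below shows how two of the four task categories are recast as (query, context) pairs. The query wordings are illustrative only; the exact templates used in PMR's fine-tuning code may differ.
```python
# Illustrative (query, context) construction; not the exact templates from the PMR codebase.
context = "Barack Obama visited Paris last week."
# (1) Span extraction with a pre-defined label (e.g., NER): the task label becomes the query.
ner_example = {"query": "Person. Find all person names mentioned in the text.", "context": context}
# Expected extracted span: "Barack Obama"
# (2) Span extraction with a natural question (e.g., extractive QA): the question is the query.
qa_example = {"query": "Which city did Barack Obama visit?", "context": context}
# Expected extracted span: "Paris"
# For sequence classification tasks, PMR instead decides whether the query is relevant
# to the context and "extracts" [CLS] when it is.
print(ner_example, qa_example)
```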
## Model variations
There are three versions of models released. The details are:
| Model | Backbone | #params |
|------------|-----------|----------|
| [PMR-base](https://huggingface.co/DAMO-NLP-SG/PMR-base) | [roberta-base](https://huggingface.co/roberta-base) | 125M |
| [PMR-large](https://huggingface.co/DAMO-NLP-SG/PMR-large) | [roberta-large](https://huggingface.co/roberta-large) | 355M |
| [PMR-xxlarge](https://huggingface.co/DAMO-NLP-SG/PMR-xxlarge) | [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) | 235M |
## Intended uses & limitations
The models need to be fine-tuned on the data downstream tasks. During fine-tuning, no task-specific layer is required.
### How to use
You can try the codes from [this repo](https://github.com/DAMO-NLP-SG/PMR).
### BibTeX entry and citation info
```bibtex
@article{xu2022clozing,
title={From Clozing to Comprehending: Retrofitting Pre-trained Language Model to Pre-trained Machine Reader},
author={Xu, Weiwen and Li, Xin and Zhang, Wenxuan and Zhou, Meng and Bing, Lidong and Lam, Wai and Si, Luo},
journal={arXiv preprint arXiv:2212.04755},
year={2022}
}
``` |
DanL/scientific-challenges-and-directions | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:DanL/scientific-challenges-and-directions-dataset",
"arxiv:2108.13751",
"transformers",
"generated_from_trainer"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 134 | "2023-05-19T09:04:54Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_cool_GPT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_cool_GPT
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7959
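A minimal generation sketch, assuming the fine-tuned checkpoint is saved locally (or pushed to the Hub) under the name `my_cool_GPT`; the card does not say where the weights live or what data they were tuned on:
```python
from transformers import pipeline
# Hypothetical checkpoint id/path; replace with the actual location of the fine-tuned weights.
generator = pipeline("text-generation", model="my_cool_GPT")
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```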
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000166
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8873 | 1.0 | 1133 | 3.7959 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Darkrider/covidbert_mednli | [
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | "2023-05-19T09:18:25Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-amazon-fine-food
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-amazon-fine-food
This model is a fine-tuned version of [MarioAvolio99/distilbert-base-uncased-finetuned-amazon-fine-food](https://huggingface.co/MarioAvolio99/distilbert-base-uncased-finetuned-amazon-fine-food) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1973
- Accuracy: 0.6610
- F1: 0.6888
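A minimal classification sketch is shown below; the repo id is copied from the base-model link in this card and is an assumption about where the fine-tuned weights live, and the label-to-rating mapping is not documented here:
```python
from transformers import pipeline
# Assumed repo id (taken from the base-model link above); adjust if the weights live elsewhere.
classifier = pipeline("text-classification", model="MarioAvolio99/distilbert-base-uncased-finetuned-amazon-fine-food")
print(classifier("These cookies arrived stale and the packaging was damaged."))
```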
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.129 | 1.72 | 500 | 1.4693 | 0.7113 | 0.7241 |
| 0.1196 | 3.45 | 1000 | 2.0547 | 0.6219 | 0.6551 |
| 0.0632 | 5.17 | 1500 | 1.8699 | 0.6779 | 0.7004 |
| 0.0516 | 6.9 | 2000 | 2.1973 | 0.6610 | 0.6888 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DavidSpaceG/MSGIFSR | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: afl-3.0
---
language:
- "List of ISO 639-1 code for your language"
- lang1
- lang2
thumbnail: "url to a thumbnail used in social sharing"
tags:
- tag1
- tag2
license: "any valid license identifier"
datasets:
- dataset1
- dataset2
metrics:
- metric1
- metric2 |
Davlan/m2m100_418M-eng-yor-mt | [
"pytorch",
"m2m_100",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"M2M100ForConditionalGeneration"
],
"model_type": "m2m_100",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 247.52 +/- 10.38
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
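One way to fill in the TODO above is sketched below, assuming the checkpoint follows the usual `huggingface_sb3` layout; the repo id and zip filename are placeholders, not values taken from this card.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy
# Placeholder repo id / filename; substitute the values for this repository.
checkpoint = load_from_hub(repo_id="user/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```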
|
Davlan/m2m100_418M-yor-eng-mt | [
"pytorch",
"m2m_100",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"M2M100ForConditionalGeneration"
],
"model_type": "m2m_100",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- funsd-layoutlmv3
model-index:
- name: lilt-en-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-en-funsd
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6055
- Answer: {'precision': 0.8728323699421965, 'recall': 0.9241126070991432, 'f1': 0.8977407847800237, 'number': 817}
- Header: {'precision': 0.6, 'recall': 0.5042016806722689, 'f1': 0.547945205479452, 'number': 119}
- Question: {'precision': 0.9, 'recall': 0.9108635097493036, 'f1': 0.9053991693585604, 'number': 1077}
- Overall Precision: 0.8740
- Overall Recall: 0.8922
- Overall F1: 0.8830
- Overall Accuracy: 0.8022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4096 | 10.53 | 200 | 0.9947 | {'precision': 0.8293515358361775, 'recall': 0.8922888616891065, 'f1': 0.8596698113207547, 'number': 817} | {'precision': 0.5833333333333334, 'recall': 0.5294117647058824, 'f1': 0.5550660792951542, 'number': 119} | {'precision': 0.8858195211786372, 'recall': 0.89322191272052, 'f1': 0.8895053166897827, 'number': 1077} | 0.8461 | 0.8713 | 0.8585 | 0.8133 |
| 0.0451 | 21.05 | 400 | 1.4850 | {'precision': 0.8394495412844036, 'recall': 0.8959608323133414, 'f1': 0.866785079928952, 'number': 817} | {'precision': 0.6333333333333333, 'recall': 0.4789915966386555, 'f1': 0.5454545454545454, 'number': 119} | {'precision': 0.9026974951830443, 'recall': 0.8700092850510678, 'f1': 0.8860520094562647, 'number': 1077} | 0.863 | 0.8574 | 0.8602 | 0.7910 |
| 0.0147 | 31.58 | 600 | 1.5603 | {'precision': 0.8478260869565217, 'recall': 0.9069767441860465, 'f1': 0.8764044943820224, 'number': 817} | {'precision': 0.6336633663366337, 'recall': 0.5378151260504201, 'f1': 0.5818181818181819, 'number': 119} | {'precision': 0.8845807033363391, 'recall': 0.9108635097493036, 'f1': 0.8975297346752059, 'number': 1077} | 0.8570 | 0.8872 | 0.8719 | 0.7948 |
| 0.0077 | 42.11 | 800 | 1.7433 | {'precision': 0.8344444444444444, 'recall': 0.9192166462668299, 'f1': 0.8747815958066395, 'number': 817} | {'precision': 0.5596330275229358, 'recall': 0.5126050420168067, 'f1': 0.5350877192982455, 'number': 119} | {'precision': 0.9031954887218046, 'recall': 0.8922934076137419, 'f1': 0.897711349836525, 'number': 1077} | 0.8553 | 0.8808 | 0.8678 | 0.7910 |
| 0.004 | 52.63 | 1000 | 1.6055 | {'precision': 0.8728323699421965, 'recall': 0.9241126070991432, 'f1': 0.8977407847800237, 'number': 817} | {'precision': 0.6, 'recall': 0.5042016806722689, 'f1': 0.547945205479452, 'number': 119} | {'precision': 0.9, 'recall': 0.9108635097493036, 'f1': 0.9053991693585604, 'number': 1077} | 0.8740 | 0.8922 | 0.8830 | 0.8022 |
| 0.0021 | 63.16 | 1200 | 1.7317 | {'precision': 0.8688524590163934, 'recall': 0.9082007343941249, 'f1': 0.8880909634949131, 'number': 817} | {'precision': 0.5904761904761905, 'recall': 0.5210084033613446, 'f1': 0.5535714285714286, 'number': 119} | {'precision': 0.8847184986595175, 'recall': 0.9192200557103064, 'f1': 0.9016393442622952, 'number': 1077} | 0.8633 | 0.8912 | 0.8770 | 0.7979 |
| 0.0017 | 73.68 | 1400 | 1.7249 | {'precision': 0.8705463182897862, 'recall': 0.8971848225214198, 'f1': 0.8836648583484027, 'number': 817} | {'precision': 0.5555555555555556, 'recall': 0.5042016806722689, 'f1': 0.5286343612334802, 'number': 119} | {'precision': 0.8818181818181818, 'recall': 0.9006499535747446, 'f1': 0.891134588883785, 'number': 1077} | 0.86 | 0.8758 | 0.8678 | 0.8014 |
| 0.0012 | 84.21 | 1600 | 1.8716 | {'precision': 0.8679678530424799, 'recall': 0.9253365973072215, 'f1': 0.8957345971563981, 'number': 817} | {'precision': 0.5652173913043478, 'recall': 0.5462184873949579, 'f1': 0.5555555555555555, 'number': 119} | {'precision': 0.898320895522388, 'recall': 0.8941504178272981, 'f1': 0.8962308050255934, 'number': 1077} | 0.8669 | 0.8862 | 0.8764 | 0.7946 |
| 0.0006 | 94.74 | 1800 | 1.8381 | {'precision': 0.8563218390804598, 'recall': 0.9118727050183598, 'f1': 0.8832246591582691, 'number': 817} | {'precision': 0.6055045871559633, 'recall': 0.5546218487394958, 'f1': 0.5789473684210525, 'number': 119} | {'precision': 0.8931226765799256, 'recall': 0.8922934076137419, 'f1': 0.8927078495123084, 'number': 1077} | 0.8623 | 0.8803 | 0.8712 | 0.7954 |
| 0.0007 | 105.26 | 2000 | 1.7090 | {'precision': 0.8822115384615384, 'recall': 0.8984088127294981, 'f1': 0.8902365069739235, 'number': 817} | {'precision': 0.576271186440678, 'recall': 0.5714285714285714, 'f1': 0.5738396624472574, 'number': 119} | {'precision': 0.8811881188118812, 'recall': 0.9090064995357474, 'f1': 0.8948811700182815, 'number': 1077} | 0.8641 | 0.8847 | 0.8743 | 0.8122 |
| 0.0003 | 115.79 | 2200 | 1.7487 | {'precision': 0.8730723606168446, 'recall': 0.9008567931456548, 'f1': 0.8867469879518072, 'number': 817} | {'precision': 0.5645161290322581, 'recall': 0.5882352941176471, 'f1': 0.5761316872427984, 'number': 119} | {'precision': 0.8936755270394133, 'recall': 0.9052924791086351, 'f1': 0.8994464944649446, 'number': 1077} | 0.8654 | 0.8847 | 0.8750 | 0.8084 |
| 0.0002 | 126.32 | 2400 | 1.7644 | {'precision': 0.8686046511627907, 'recall': 0.9143206854345165, 'f1': 0.8908765652951699, 'number': 817} | {'precision': 0.5932203389830508, 'recall': 0.5882352941176471, 'f1': 0.5907172995780592, 'number': 119} | {'precision': 0.8961397058823529, 'recall': 0.9052924791086351, 'f1': 0.9006928406466513, 'number': 1077} | 0.8674 | 0.8902 | 0.8786 | 0.8091 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Davlan/mT5_base_yoruba_adr | [
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2003.10564",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3750 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 500,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1500,
"warmup_steps": 3750,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Davlan/mbart50-large-yor-eng-mt | [
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
---
# Introduction

This repo contains pre-trained models, checkpoints, training logs and decoding results of the following pull-request:
https://github.com/k2-fsa/icefall/pull/1059 |
Davlan/mt5-small-pcm-en | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_base
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5628
- Rouge1: 0.2526
- Rouge2: 0.0768
- Rougel: 0.2107
- Rougelsum: 0.2092
- Gen Len: 18.7907
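The ROUGE metrics and generation lengths above suggest a short-summarization setup, so a direct `generate()` call is a reasonable smoke test. A minimal sketch, assuming the fine-tuned weights were saved locally under `./bart_base` (the card does not say where they live or what data they were tuned on):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# Hypothetical local path; point this at wherever the fine-tuned weights were saved.
checkpoint = "./bart_base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
inputs = tokenizer("A long input document to be summarized ...", return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```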
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 99 | 2.5303 | 0.2381 | 0.0804 | 0.1919 | 0.1908 | 18.4651 |
| No log | 2.0 | 198 | 2.5241 | 0.2623 | 0.0922 | 0.2162 | 0.2144 | 19.1395 |
| No log | 3.0 | 297 | 2.5326 | 0.2481 | 0.0879 | 0.2043 | 0.204 | 18.0233 |
| No log | 4.0 | 396 | 2.5628 | 0.2526 | 0.0768 | 0.2107 | 0.2092 | 18.7907 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Davlan/mt5_base_yor_eng_mt | [
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MSBD/bert-finetuned-orquac3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MSBD/bert-finetuned-orquac3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.9282
- Validation Loss: 9.8399
- Epoch: 2
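The repository name suggests an OR-QuAC style question-answering fine-tune, and the checkpoint was trained with Keras/TensorFlow, so a TF pipeline sketch is shown below; the repo id is taken from the card title and is an assumption about where the weights are published.
```python
from transformers import pipeline
# Assumed repo id (from the card title); adjust if the TF weights live elsewhere.
qa = pipeline("question-answering", model="MSBD/bert-finetuned-orquac3", framework="tf")
result = qa(question="Who wrote the report?", context="The annual report was written by the finance team in March.")
print(result)
```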
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 38736, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5921 | 8.8054 | 0 |
| 2.7555 | 9.2279 | 1 |
| 1.9282 | 9.8399 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Davlan/xlm-roberta-base-finetuned-chichewa | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# C:\Users\T-YELM~1\AppData\Local\Temp\tmpddi6u3u8\YakovElm\test
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained(r"C:\Users\T-YELM~1\AppData\Local\Temp\tmpddi6u3u8\YakovElm\test")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Davlan/xlm-roberta-base-finetuned-english | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: mit
language:
- ko
tags:
- gptq
---
https://github.com/qwopqwop200/GPTQ-for-KoAlpaca/tree/main |
Davlan/xlm-roberta-base-finetuned-lingala | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-Montecarlo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Davlan/xlm-roberta-base-finetuned-naija | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | # zh-tw-llm-ta01-pythia-70m-ta8000-v1-b_2_lora_instruction_tune-a100-t004-649aad-merged
This model is a part of the `zh-tw-llm` project.
* Base model: `zh-tw-llm-ta01-pythia-70m-ta8000-v1-b_1_embeddings_and_attention-a100-t4-b56fb5`
* LoRA model: `zh-tw-llm-ta01-pythia-70m-ta8000-v1-b_2_lora_instruction_tune-a100-t004-649aad`
* Tokenizer: `zh-tw-pythia-tokenizer-a8000-v1` |
Davlan/xlm-roberta-base-finetuned-zulu | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Misomiso Dreambooth model trained by jmorthorst with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
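The concept can also be loaded directly with `diffusers`; a minimal sketch, where the repository id and the concept token are guesses for illustration and are not confirmed by this card:

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical repo id / local path for the exported DreamBooth weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "jmorthorst/misomiso", torch_dtype=torch.float16
).to("cuda")

# "misomiso" stands in for the instance token the concept was trained on.
image = pipe("photo of misomiso, detailed, studio lighting", num_inference_steps=30).images[0]
image.save("misomiso.png")
```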
Sample pictures of this concept:
|
Davlan/xlm-roberta-base-masakhaner | [
"pytorch",
"xlm-roberta",
"token-classification",
"arxiv:2103.11811",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# hc-chilloutmix_NiCkpt API Inference
![generated from stablediffusionapi.com](https://cdn.stablediffusionapi.com/generations/16386684611684491123.png)
## Get API Key
Get your API key from [Stable Diffusion API](http://stablediffusionapi.com/). No payment is needed.
Replace the key in the code below and change **model_id** to "hc-chilloutmixnickpt".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/hc-chilloutmixnickpt)
Credits: [View credits](https://civitai.com/?query=hc-chilloutmix_NiCkpt)
View all models: [View Models](https://stablediffusionapi.com/models)
import requests
import json

# Dreambooth inference endpoint of the Stable Diffusion API
url = "https://stablediffusionapi.com/api/v3/dreambooth"

# Request body: paste your API key into "key" and keep model_id set to this model
payload = json.dumps({
"key": "",
"model_id": "hc-chilloutmixnickpt",
"prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
# POST the request and print the raw JSON response
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
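If you prefer to work with the parsed response, a small hedged addition: it assumes the endpoint returns JSON with a `status` field and an `output` list of image URLs; check the linked docs for the authoritative response schema.

```python
data = response.json()
# "output" is assumed to hold the generated image URLs when status == "success".
if data.get("status") == "success":
    for image_url in data.get("output", []):
        print(image_url)
else:
    print("Generation pending or failed:", data)
```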
> Use this coupon code to get 25% off **DMGG0RBN** |
Davlan/xlm-roberta-large-masakhaner | [
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"arxiv:2103.11811",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,449 | null | ---
tags:
- mteb
model-index:
- name: multilingual-e5-base
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 78.97014925373135
- type: ap
value: 43.69351129103008
- type: f1
value: 73.38075030070492
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (de)
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 71.7237687366167
- type: ap
value: 82.22089859962671
- type: f1
value: 69.95532758884401
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en-ext)
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.65517241379312
- type: ap
value: 28.507918657094738
- type: f1
value: 66.84516013726119
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (ja)
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.32976445396146
- type: ap
value: 20.720481637566014
- type: f1
value: 59.78002763416003
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 90.63775
- type: ap
value: 87.22277903861716
- type: f1
value: 90.60378636386807
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 44.546
- type: f1
value: 44.05666638370923
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (de)
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 41.828
- type: f1
value: 41.2710255644252
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (es)
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.534
- type: f1
value: 39.820743174270326
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (fr)
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 39.684
- type: f1
value: 39.11052682815307
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (ja)
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.436
- type: f1
value: 37.07082931930871
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.226000000000006
- type: f1
value: 36.65372077739185
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.831000000000003
- type: map_at_10
value: 36.42
- type: map_at_100
value: 37.699
- type: map_at_1000
value: 37.724000000000004
- type: map_at_3
value: 32.207
- type: map_at_5
value: 34.312
- type: mrr_at_1
value: 23.257
- type: mrr_at_10
value: 36.574
- type: mrr_at_100
value: 37.854
- type: mrr_at_1000
value: 37.878
- type: mrr_at_3
value: 32.385000000000005
- type: mrr_at_5
value: 34.48
- type: ndcg_at_1
value: 22.831000000000003
- type: ndcg_at_10
value: 44.230000000000004
- type: ndcg_at_100
value: 49.974000000000004
- type: ndcg_at_1000
value: 50.522999999999996
- type: ndcg_at_3
value: 35.363
- type: ndcg_at_5
value: 39.164
- type: precision_at_1
value: 22.831000000000003
- type: precision_at_10
value: 6.935
- type: precision_at_100
value: 0.9520000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.841
- type: precision_at_5
value: 10.754
- type: recall_at_1
value: 22.831000000000003
- type: recall_at_10
value: 69.346
- type: recall_at_100
value: 95.235
- type: recall_at_1000
value: 99.36
- type: recall_at_3
value: 44.523
- type: recall_at_5
value: 53.769999999999996
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 40.27789869854063
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 35.41979463347428
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.22752045109304
- type: mrr
value: 71.51112430198303
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.71147646622866
- type: cos_sim_spearman
value: 85.059167046486
- type: euclidean_pearson
value: 75.88421613600647
- type: euclidean_spearman
value: 75.12821787150585
- type: manhattan_pearson
value: 75.22005646957604
- type: manhattan_spearman
value: 74.42880434453272
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (de-en)
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.23799582463465
- type: f1
value: 99.12665274878218
- type: precision
value: 99.07098121085595
- type: recall
value: 99.23799582463465
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (fr-en)
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 97.88685890380806
- type: f1
value: 97.59336708489249
- type: precision
value: 97.44662117543473
- type: recall
value: 97.88685890380806
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (ru-en)
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 97.47142362313821
- type: f1
value: 97.1989377670015
- type: precision
value: 97.06384944001847
- type: recall
value: 97.47142362313821
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (zh-en)
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.4728804634018
- type: f1
value: 98.2973494821836
- type: precision
value: 98.2095839915745
- type: recall
value: 98.4728804634018
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 82.74025974025975
- type: f1
value: 82.67420447730439
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.0380848063507
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 29.45956405670166
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.122
- type: map_at_10
value: 42.03
- type: map_at_100
value: 43.364000000000004
- type: map_at_1000
value: 43.474000000000004
- type: map_at_3
value: 38.804
- type: map_at_5
value: 40.585
- type: mrr_at_1
value: 39.914
- type: mrr_at_10
value: 48.227
- type: mrr_at_100
value: 49.018
- type: mrr_at_1000
value: 49.064
- type: mrr_at_3
value: 45.994
- type: mrr_at_5
value: 47.396
- type: ndcg_at_1
value: 39.914
- type: ndcg_at_10
value: 47.825
- type: ndcg_at_100
value: 52.852
- type: ndcg_at_1000
value: 54.891
- type: ndcg_at_3
value: 43.517
- type: ndcg_at_5
value: 45.493
- type: precision_at_1
value: 39.914
- type: precision_at_10
value: 8.956
- type: precision_at_100
value: 1.388
- type: precision_at_1000
value: 0.182
- type: precision_at_3
value: 20.791999999999998
- type: precision_at_5
value: 14.821000000000002
- type: recall_at_1
value: 32.122
- type: recall_at_10
value: 58.294999999999995
- type: recall_at_100
value: 79.726
- type: recall_at_1000
value: 93.099
- type: recall_at_3
value: 45.017
- type: recall_at_5
value: 51.002
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.677999999999997
- type: map_at_10
value: 38.684000000000005
- type: map_at_100
value: 39.812999999999995
- type: map_at_1000
value: 39.945
- type: map_at_3
value: 35.831
- type: map_at_5
value: 37.446
- type: mrr_at_1
value: 37.771
- type: mrr_at_10
value: 44.936
- type: mrr_at_100
value: 45.583
- type: mrr_at_1000
value: 45.634
- type: mrr_at_3
value: 42.771
- type: mrr_at_5
value: 43.994
- type: ndcg_at_1
value: 37.771
- type: ndcg_at_10
value: 44.059
- type: ndcg_at_100
value: 48.192
- type: ndcg_at_1000
value: 50.375
- type: ndcg_at_3
value: 40.172000000000004
- type: ndcg_at_5
value: 41.899
- type: precision_at_1
value: 37.771
- type: precision_at_10
value: 8.286999999999999
- type: precision_at_100
value: 1.322
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 19.406000000000002
- type: precision_at_5
value: 13.745
- type: recall_at_1
value: 29.677999999999997
- type: recall_at_10
value: 53.071
- type: recall_at_100
value: 70.812
- type: recall_at_1000
value: 84.841
- type: recall_at_3
value: 41.016000000000005
- type: recall_at_5
value: 46.22
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 42.675000000000004
- type: map_at_10
value: 53.93599999999999
- type: map_at_100
value: 54.806999999999995
- type: map_at_1000
value: 54.867
- type: map_at_3
value: 50.934000000000005
- type: map_at_5
value: 52.583
- type: mrr_at_1
value: 48.339
- type: mrr_at_10
value: 57.265
- type: mrr_at_100
value: 57.873
- type: mrr_at_1000
value: 57.906
- type: mrr_at_3
value: 55.193000000000005
- type: mrr_at_5
value: 56.303000000000004
- type: ndcg_at_1
value: 48.339
- type: ndcg_at_10
value: 59.19799999999999
- type: ndcg_at_100
value: 62.743
- type: ndcg_at_1000
value: 63.99399999999999
- type: ndcg_at_3
value: 54.367
- type: ndcg_at_5
value: 56.548
- type: precision_at_1
value: 48.339
- type: precision_at_10
value: 9.216000000000001
- type: precision_at_100
value: 1.1809999999999998
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 23.72
- type: precision_at_5
value: 16.025
- type: recall_at_1
value: 42.675000000000004
- type: recall_at_10
value: 71.437
- type: recall_at_100
value: 86.803
- type: recall_at_1000
value: 95.581
- type: recall_at_3
value: 58.434
- type: recall_at_5
value: 63.754
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.518
- type: map_at_10
value: 30.648999999999997
- type: map_at_100
value: 31.508999999999997
- type: map_at_1000
value: 31.604
- type: map_at_3
value: 28.247
- type: map_at_5
value: 29.65
- type: mrr_at_1
value: 25.650000000000002
- type: mrr_at_10
value: 32.771
- type: mrr_at_100
value: 33.554
- type: mrr_at_1000
value: 33.629999999999995
- type: mrr_at_3
value: 30.433
- type: mrr_at_5
value: 31.812
- type: ndcg_at_1
value: 25.650000000000002
- type: ndcg_at_10
value: 34.929
- type: ndcg_at_100
value: 39.382
- type: ndcg_at_1000
value: 41.913
- type: ndcg_at_3
value: 30.292
- type: ndcg_at_5
value: 32.629999999999995
- type: precision_at_1
value: 25.650000000000002
- type: precision_at_10
value: 5.311
- type: precision_at_100
value: 0.792
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 12.58
- type: precision_at_5
value: 8.994
- type: recall_at_1
value: 23.518
- type: recall_at_10
value: 46.19
- type: recall_at_100
value: 67.123
- type: recall_at_1000
value: 86.442
- type: recall_at_3
value: 33.678000000000004
- type: recall_at_5
value: 39.244
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.891
- type: map_at_10
value: 22.464000000000002
- type: map_at_100
value: 23.483
- type: map_at_1000
value: 23.613
- type: map_at_3
value: 20.080000000000002
- type: map_at_5
value: 21.526
- type: mrr_at_1
value: 20.025000000000002
- type: mrr_at_10
value: 26.712999999999997
- type: mrr_at_100
value: 27.650000000000002
- type: mrr_at_1000
value: 27.737000000000002
- type: mrr_at_3
value: 24.274
- type: mrr_at_5
value: 25.711000000000002
- type: ndcg_at_1
value: 20.025000000000002
- type: ndcg_at_10
value: 27.028999999999996
- type: ndcg_at_100
value: 32.064
- type: ndcg_at_1000
value: 35.188
- type: ndcg_at_3
value: 22.512999999999998
- type: ndcg_at_5
value: 24.89
- type: precision_at_1
value: 20.025000000000002
- type: precision_at_10
value: 4.776
- type: precision_at_100
value: 0.8500000000000001
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 10.531
- type: precision_at_5
value: 7.811
- type: recall_at_1
value: 15.891
- type: recall_at_10
value: 37.261
- type: recall_at_100
value: 59.12
- type: recall_at_1000
value: 81.356
- type: recall_at_3
value: 24.741
- type: recall_at_5
value: 30.753999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.544
- type: map_at_10
value: 36.283
- type: map_at_100
value: 37.467
- type: map_at_1000
value: 37.574000000000005
- type: map_at_3
value: 33.528999999999996
- type: map_at_5
value: 35.028999999999996
- type: mrr_at_1
value: 34.166999999999994
- type: mrr_at_10
value: 41.866
- type: mrr_at_100
value: 42.666
- type: mrr_at_1000
value: 42.716
- type: mrr_at_3
value: 39.541
- type: mrr_at_5
value: 40.768
- type: ndcg_at_1
value: 34.166999999999994
- type: ndcg_at_10
value: 41.577
- type: ndcg_at_100
value: 46.687
- type: ndcg_at_1000
value: 48.967
- type: ndcg_at_3
value: 37.177
- type: ndcg_at_5
value: 39.097
- type: precision_at_1
value: 34.166999999999994
- type: precision_at_10
value: 7.420999999999999
- type: precision_at_100
value: 1.165
- type: precision_at_1000
value: 0.154
- type: precision_at_3
value: 17.291999999999998
- type: precision_at_5
value: 12.166
- type: recall_at_1
value: 27.544
- type: recall_at_10
value: 51.99399999999999
- type: recall_at_100
value: 73.738
- type: recall_at_1000
value: 89.33
- type: recall_at_3
value: 39.179
- type: recall_at_5
value: 44.385999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.661
- type: map_at_10
value: 35.475
- type: map_at_100
value: 36.626999999999995
- type: map_at_1000
value: 36.741
- type: map_at_3
value: 32.818000000000005
- type: map_at_5
value: 34.397
- type: mrr_at_1
value: 32.647999999999996
- type: mrr_at_10
value: 40.784
- type: mrr_at_100
value: 41.602
- type: mrr_at_1000
value: 41.661
- type: mrr_at_3
value: 38.68
- type: mrr_at_5
value: 39.838
- type: ndcg_at_1
value: 32.647999999999996
- type: ndcg_at_10
value: 40.697
- type: ndcg_at_100
value: 45.799
- type: ndcg_at_1000
value: 48.235
- type: ndcg_at_3
value: 36.516
- type: ndcg_at_5
value: 38.515
- type: precision_at_1
value: 32.647999999999996
- type: precision_at_10
value: 7.202999999999999
- type: precision_at_100
value: 1.1360000000000001
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 17.314
- type: precision_at_5
value: 12.145999999999999
- type: recall_at_1
value: 26.661
- type: recall_at_10
value: 50.995000000000005
- type: recall_at_100
value: 73.065
- type: recall_at_1000
value: 89.781
- type: recall_at_3
value: 39.073
- type: recall_at_5
value: 44.395
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.946583333333333
- type: map_at_10
value: 33.79725
- type: map_at_100
value: 34.86408333333333
- type: map_at_1000
value: 34.9795
- type: map_at_3
value: 31.259999999999998
- type: map_at_5
value: 32.71541666666666
- type: mrr_at_1
value: 30.863749999999996
- type: mrr_at_10
value: 37.99183333333333
- type: mrr_at_100
value: 38.790499999999994
- type: mrr_at_1000
value: 38.85575000000001
- type: mrr_at_3
value: 35.82083333333333
- type: mrr_at_5
value: 37.07533333333333
- type: ndcg_at_1
value: 30.863749999999996
- type: ndcg_at_10
value: 38.52141666666667
- type: ndcg_at_100
value: 43.17966666666667
- type: ndcg_at_1000
value: 45.64608333333333
- type: ndcg_at_3
value: 34.333000000000006
- type: ndcg_at_5
value: 36.34975
- type: precision_at_1
value: 30.863749999999996
- type: precision_at_10
value: 6.598999999999999
- type: precision_at_100
value: 1.0502500000000001
- type: precision_at_1000
value: 0.14400000000000002
- type: precision_at_3
value: 15.557583333333334
- type: precision_at_5
value: 11.020000000000001
- type: recall_at_1
value: 25.946583333333333
- type: recall_at_10
value: 48.36991666666666
- type: recall_at_100
value: 69.02408333333334
- type: recall_at_1000
value: 86.43858333333331
- type: recall_at_3
value: 36.4965
- type: recall_at_5
value: 41.76258333333334
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.431
- type: map_at_10
value: 28.889
- type: map_at_100
value: 29.642000000000003
- type: map_at_1000
value: 29.742
- type: map_at_3
value: 26.998
- type: map_at_5
value: 28.172000000000004
- type: mrr_at_1
value: 25.307000000000002
- type: mrr_at_10
value: 31.763
- type: mrr_at_100
value: 32.443
- type: mrr_at_1000
value: 32.531
- type: mrr_at_3
value: 29.959000000000003
- type: mrr_at_5
value: 31.063000000000002
- type: ndcg_at_1
value: 25.307000000000002
- type: ndcg_at_10
value: 32.586999999999996
- type: ndcg_at_100
value: 36.5
- type: ndcg_at_1000
value: 39.133
- type: ndcg_at_3
value: 29.25
- type: ndcg_at_5
value: 31.023
- type: precision_at_1
value: 25.307000000000002
- type: precision_at_10
value: 4.954
- type: precision_at_100
value: 0.747
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 12.577
- type: precision_at_5
value: 8.741999999999999
- type: recall_at_1
value: 22.431
- type: recall_at_10
value: 41.134
- type: recall_at_100
value: 59.28600000000001
- type: recall_at_1000
value: 78.857
- type: recall_at_3
value: 31.926
- type: recall_at_5
value: 36.335
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.586
- type: map_at_10
value: 23.304
- type: map_at_100
value: 24.159
- type: map_at_1000
value: 24.281
- type: map_at_3
value: 21.316
- type: map_at_5
value: 22.383
- type: mrr_at_1
value: 21.645
- type: mrr_at_10
value: 27.365000000000002
- type: mrr_at_100
value: 28.108
- type: mrr_at_1000
value: 28.192
- type: mrr_at_3
value: 25.482
- type: mrr_at_5
value: 26.479999999999997
- type: ndcg_at_1
value: 21.645
- type: ndcg_at_10
value: 27.306
- type: ndcg_at_100
value: 31.496000000000002
- type: ndcg_at_1000
value: 34.53
- type: ndcg_at_3
value: 23.73
- type: ndcg_at_5
value: 25.294
- type: precision_at_1
value: 21.645
- type: precision_at_10
value: 4.797
- type: precision_at_100
value: 0.8059999999999999
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 10.850999999999999
- type: precision_at_5
value: 7.736
- type: recall_at_1
value: 17.586
- type: recall_at_10
value: 35.481
- type: recall_at_100
value: 54.534000000000006
- type: recall_at_1000
value: 76.456
- type: recall_at_3
value: 25.335
- type: recall_at_5
value: 29.473
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.095
- type: map_at_10
value: 32.374
- type: map_at_100
value: 33.537
- type: map_at_1000
value: 33.634
- type: map_at_3
value: 30.089
- type: map_at_5
value: 31.433
- type: mrr_at_1
value: 29.198
- type: mrr_at_10
value: 36.01
- type: mrr_at_100
value: 37.022
- type: mrr_at_1000
value: 37.083
- type: mrr_at_3
value: 33.94
- type: mrr_at_5
value: 35.148
- type: ndcg_at_1
value: 29.198
- type: ndcg_at_10
value: 36.729
- type: ndcg_at_100
value: 42.114000000000004
- type: ndcg_at_1000
value: 44.592
- type: ndcg_at_3
value: 32.644
- type: ndcg_at_5
value: 34.652
- type: precision_at_1
value: 29.198
- type: precision_at_10
value: 5.970000000000001
- type: precision_at_100
value: 0.967
- type: precision_at_1000
value: 0.129
- type: precision_at_3
value: 14.396999999999998
- type: precision_at_5
value: 10.093
- type: recall_at_1
value: 25.095
- type: recall_at_10
value: 46.392
- type: recall_at_100
value: 69.706
- type: recall_at_1000
value: 87.738
- type: recall_at_3
value: 35.303000000000004
- type: recall_at_5
value: 40.441
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.857999999999997
- type: map_at_10
value: 34.066
- type: map_at_100
value: 35.671
- type: map_at_1000
value: 35.881
- type: map_at_3
value: 31.304
- type: map_at_5
value: 32.885
- type: mrr_at_1
value: 32.411
- type: mrr_at_10
value: 38.987
- type: mrr_at_100
value: 39.894
- type: mrr_at_1000
value: 39.959
- type: mrr_at_3
value: 36.626999999999995
- type: mrr_at_5
value: 38.011
- type: ndcg_at_1
value: 32.411
- type: ndcg_at_10
value: 39.208
- type: ndcg_at_100
value: 44.626
- type: ndcg_at_1000
value: 47.43
- type: ndcg_at_3
value: 35.091
- type: ndcg_at_5
value: 37.119
- type: precision_at_1
value: 32.411
- type: precision_at_10
value: 7.51
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.234
- type: precision_at_3
value: 16.14
- type: precision_at_5
value: 11.976
- type: recall_at_1
value: 26.857999999999997
- type: recall_at_10
value: 47.407
- type: recall_at_100
value: 72.236
- type: recall_at_1000
value: 90.77
- type: recall_at_3
value: 35.125
- type: recall_at_5
value: 40.522999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.3
- type: map_at_10
value: 27.412999999999997
- type: map_at_100
value: 28.29
- type: map_at_1000
value: 28.398
- type: map_at_3
value: 25.169999999999998
- type: map_at_5
value: 26.496
- type: mrr_at_1
value: 23.29
- type: mrr_at_10
value: 29.215000000000003
- type: mrr_at_100
value: 30.073
- type: mrr_at_1000
value: 30.156
- type: mrr_at_3
value: 26.956000000000003
- type: mrr_at_5
value: 28.38
- type: ndcg_at_1
value: 23.29
- type: ndcg_at_10
value: 31.113000000000003
- type: ndcg_at_100
value: 35.701
- type: ndcg_at_1000
value: 38.505
- type: ndcg_at_3
value: 26.727
- type: ndcg_at_5
value: 29.037000000000003
- type: precision_at_1
value: 23.29
- type: precision_at_10
value: 4.787
- type: precision_at_100
value: 0.763
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 11.091
- type: precision_at_5
value: 7.985
- type: recall_at_1
value: 21.3
- type: recall_at_10
value: 40.782000000000004
- type: recall_at_100
value: 62.13999999999999
- type: recall_at_1000
value: 83.012
- type: recall_at_3
value: 29.131
- type: recall_at_5
value: 34.624
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.631
- type: map_at_10
value: 16.634999999999998
- type: map_at_100
value: 18.23
- type: map_at_1000
value: 18.419
- type: map_at_3
value: 13.66
- type: map_at_5
value: 15.173
- type: mrr_at_1
value: 21.368000000000002
- type: mrr_at_10
value: 31.56
- type: mrr_at_100
value: 32.58
- type: mrr_at_1000
value: 32.633
- type: mrr_at_3
value: 28.241
- type: mrr_at_5
value: 30.225
- type: ndcg_at_1
value: 21.368000000000002
- type: ndcg_at_10
value: 23.855999999999998
- type: ndcg_at_100
value: 30.686999999999998
- type: ndcg_at_1000
value: 34.327000000000005
- type: ndcg_at_3
value: 18.781
- type: ndcg_at_5
value: 20.73
- type: precision_at_1
value: 21.368000000000002
- type: precision_at_10
value: 7.564
- type: precision_at_100
value: 1.496
- type: precision_at_1000
value: 0.217
- type: precision_at_3
value: 13.876
- type: precision_at_5
value: 11.062
- type: recall_at_1
value: 9.631
- type: recall_at_10
value: 29.517
- type: recall_at_100
value: 53.452
- type: recall_at_1000
value: 74.115
- type: recall_at_3
value: 17.605999999999998
- type: recall_at_5
value: 22.505
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.885
- type: map_at_10
value: 18.798000000000002
- type: map_at_100
value: 26.316
- type: map_at_1000
value: 27.869
- type: map_at_3
value: 13.719000000000001
- type: map_at_5
value: 15.716
- type: mrr_at_1
value: 66
- type: mrr_at_10
value: 74.263
- type: mrr_at_100
value: 74.519
- type: mrr_at_1000
value: 74.531
- type: mrr_at_3
value: 72.458
- type: mrr_at_5
value: 73.321
- type: ndcg_at_1
value: 53.87499999999999
- type: ndcg_at_10
value: 40.355999999999995
- type: ndcg_at_100
value: 44.366
- type: ndcg_at_1000
value: 51.771
- type: ndcg_at_3
value: 45.195
- type: ndcg_at_5
value: 42.187000000000005
- type: precision_at_1
value: 66
- type: precision_at_10
value: 31.75
- type: precision_at_100
value: 10.11
- type: precision_at_1000
value: 1.9800000000000002
- type: precision_at_3
value: 48.167
- type: precision_at_5
value: 40.050000000000004
- type: recall_at_1
value: 8.885
- type: recall_at_10
value: 24.471999999999998
- type: recall_at_100
value: 49.669000000000004
- type: recall_at_1000
value: 73.383
- type: recall_at_3
value: 14.872
- type: recall_at_5
value: 18.262999999999998
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 45.18
- type: f1
value: 40.26878691789978
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 62.751999999999995
- type: map_at_10
value: 74.131
- type: map_at_100
value: 74.407
- type: map_at_1000
value: 74.423
- type: map_at_3
value: 72.329
- type: map_at_5
value: 73.555
- type: mrr_at_1
value: 67.282
- type: mrr_at_10
value: 78.292
- type: mrr_at_100
value: 78.455
- type: mrr_at_1000
value: 78.458
- type: mrr_at_3
value: 76.755
- type: mrr_at_5
value: 77.839
- type: ndcg_at_1
value: 67.282
- type: ndcg_at_10
value: 79.443
- type: ndcg_at_100
value: 80.529
- type: ndcg_at_1000
value: 80.812
- type: ndcg_at_3
value: 76.281
- type: ndcg_at_5
value: 78.235
- type: precision_at_1
value: 67.282
- type: precision_at_10
value: 10.078
- type: precision_at_100
value: 1.082
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 30.178
- type: precision_at_5
value: 19.232
- type: recall_at_1
value: 62.751999999999995
- type: recall_at_10
value: 91.521
- type: recall_at_100
value: 95.997
- type: recall_at_1000
value: 97.775
- type: recall_at_3
value: 83.131
- type: recall_at_5
value: 87.93299999999999
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.861
- type: map_at_10
value: 30.252000000000002
- type: map_at_100
value: 32.082
- type: map_at_1000
value: 32.261
- type: map_at_3
value: 25.909
- type: map_at_5
value: 28.296
- type: mrr_at_1
value: 37.346000000000004
- type: mrr_at_10
value: 45.802
- type: mrr_at_100
value: 46.611999999999995
- type: mrr_at_1000
value: 46.659
- type: mrr_at_3
value: 43.056
- type: mrr_at_5
value: 44.637
- type: ndcg_at_1
value: 37.346000000000004
- type: ndcg_at_10
value: 38.169
- type: ndcg_at_100
value: 44.864
- type: ndcg_at_1000
value: 47.974
- type: ndcg_at_3
value: 33.619
- type: ndcg_at_5
value: 35.317
- type: precision_at_1
value: 37.346000000000004
- type: precision_at_10
value: 10.693999999999999
- type: precision_at_100
value: 1.775
- type: precision_at_1000
value: 0.231
- type: precision_at_3
value: 22.325
- type: precision_at_5
value: 16.852
- type: recall_at_1
value: 18.861
- type: recall_at_10
value: 45.672000000000004
- type: recall_at_100
value: 70.60499999999999
- type: recall_at_1000
value: 89.216
- type: recall_at_3
value: 30.361
- type: recall_at_5
value: 36.998999999999995
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.852999999999994
- type: map_at_10
value: 59.961
- type: map_at_100
value: 60.78
- type: map_at_1000
value: 60.843
- type: map_at_3
value: 56.39999999999999
- type: map_at_5
value: 58.646
- type: mrr_at_1
value: 75.70599999999999
- type: mrr_at_10
value: 82.321
- type: mrr_at_100
value: 82.516
- type: mrr_at_1000
value: 82.525
- type: mrr_at_3
value: 81.317
- type: mrr_at_5
value: 81.922
- type: ndcg_at_1
value: 75.70599999999999
- type: ndcg_at_10
value: 68.557
- type: ndcg_at_100
value: 71.485
- type: ndcg_at_1000
value: 72.71600000000001
- type: ndcg_at_3
value: 63.524
- type: ndcg_at_5
value: 66.338
- type: precision_at_1
value: 75.70599999999999
- type: precision_at_10
value: 14.463000000000001
- type: precision_at_100
value: 1.677
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 40.806
- type: precision_at_5
value: 26.709
- type: recall_at_1
value: 37.852999999999994
- type: recall_at_10
value: 72.316
- type: recall_at_100
value: 83.842
- type: recall_at_1000
value: 91.999
- type: recall_at_3
value: 61.209
- type: recall_at_5
value: 66.77199999999999
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 85.46039999999999
- type: ap
value: 79.9812521351881
- type: f1
value: 85.31722909702084
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.704
- type: map_at_10
value: 35.329
- type: map_at_100
value: 36.494
- type: map_at_1000
value: 36.541000000000004
- type: map_at_3
value: 31.476
- type: map_at_5
value: 33.731
- type: mrr_at_1
value: 23.294999999999998
- type: mrr_at_10
value: 35.859
- type: mrr_at_100
value: 36.968
- type: mrr_at_1000
value: 37.008
- type: mrr_at_3
value: 32.085
- type: mrr_at_5
value: 34.299
- type: ndcg_at_1
value: 23.324
- type: ndcg_at_10
value: 42.274
- type: ndcg_at_100
value: 47.839999999999996
- type: ndcg_at_1000
value: 48.971
- type: ndcg_at_3
value: 34.454
- type: ndcg_at_5
value: 38.464
- type: precision_at_1
value: 23.324
- type: precision_at_10
value: 6.648
- type: precision_at_100
value: 0.9440000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.674999999999999
- type: precision_at_5
value: 10.850999999999999
- type: recall_at_1
value: 22.704
- type: recall_at_10
value: 63.660000000000004
- type: recall_at_100
value: 89.29899999999999
- type: recall_at_1000
value: 97.88900000000001
- type: recall_at_3
value: 42.441
- type: recall_at_5
value: 52.04
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.1326949384405
- type: f1
value: 92.89743579612082
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (de)
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.62524654832347
- type: f1
value: 88.65106082263151
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (es)
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.59039359573046
- type: f1
value: 90.31532892105662
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (fr)
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 86.21046038208581
- type: f1
value: 86.41459529813113
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (hi)
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 87.3180351380423
- type: f1
value: 86.71383078226444
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (th)
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 86.24231464737792
- type: f1
value: 86.31845567592403
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.27131782945736
- type: f1
value: 57.52079940417103
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (de)
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.2341504649197
- type: f1
value: 51.349951558039244
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (es)
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.27418278852569
- type: f1
value: 50.1714985749095
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (fr)
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.68243031631694
- type: f1
value: 50.1066160836192
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (hi)
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 69.2362854069559
- type: f1
value: 48.821279948766424
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (th)
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.71428571428571
- type: f1
value: 53.94611389496195
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (af)
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.97646267652992
- type: f1
value: 57.26797883561521
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (am)
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 53.65501008742435
- type: f1
value: 50.416258382177034
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ar)
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.45796906523201
- type: f1
value: 53.306690547422185
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (az)
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.59246805648957
- type: f1
value: 59.818381969051494
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (bn)
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.126429051782104
- type: f1
value: 58.25993593933026
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (cy)
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 50.057162071284466
- type: f1
value: 46.96095728790911
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (da)
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.64425016812375
- type: f1
value: 62.858291698755764
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (de)
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.08944182918628
- type: f1
value: 62.44639030604241
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (el)
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.68056489576328
- type: f1
value: 61.775326758789504
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.11163416274377
- type: f1
value: 69.70789096927015
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (es)
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.40282447881641
- type: f1
value: 66.38492065671895
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (fa)
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.24613315400134
- type: f1
value: 64.3348019501336
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (fi)
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.78345662407531
- type: f1
value: 62.21279452354622
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (fr)
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.9455279085407
- type: f1
value: 65.48193124964094
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (he)
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.05110961667788
- type: f1
value: 58.097856564684534
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (hi)
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.95292535305985
- type: f1
value: 62.09182174767901
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (hu)
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.97310020174848
- type: f1
value: 61.14252567730396
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (hy)
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.08069939475453
- type: f1
value: 57.044041742492034
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (id)
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.63752521856085
- type: f1
value: 63.889340907205316
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (is)
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.385339609952936
- type: f1
value: 53.449033750088304
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (it)
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.93073301950234
- type: f1
value: 65.9884357824104
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ja)
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.94418291862812
- type: f1
value: 66.48740222583132
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (jv)
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.26025554808339
- type: f1
value: 50.19562815100793
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ka)
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 48.98789509078682
- type: f1
value: 46.65788438676836
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (km)
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 44.68728984532616
- type: f1
value: 41.642419349541996
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (kn)
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.19300605245461
- type: f1
value: 55.8626492442437
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ko)
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.33826496301278
- type: f1
value: 63.89499791648792
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (lv)
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.33960995292536
- type: f1
value: 57.15242464180892
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ml)
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.09347679892402
- type: f1
value: 59.64733214063841
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (mn)
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.75924680564896
- type: f1
value: 55.96585692366827
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ms)
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.48486886348352
- type: f1
value: 59.45143559032946
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (my)
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.56422326832549
- type: f1
value: 54.96368702901926
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (nb)
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.18022864828512
- type: f1
value: 63.05369805040634
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (nl)
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.30329522528581
- type: f1
value: 64.06084612020727
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (pl)
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.36919973100201
- type: f1
value: 65.12154124788887
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (pt)
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.98117014122394
- type: f1
value: 66.41847559806962
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ro)
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.53799596503026
- type: f1
value: 62.17067330740817
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ru)
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.01815736381977
- type: f1
value: 66.24988369607843
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sl)
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.34700739744452
- type: f1
value: 59.957933424941636
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sq)
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.23402824478815
- type: f1
value: 57.98836976018471
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sv)
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.54068594485541
- type: f1
value: 65.43849680666855
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sw)
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 55.998655010087425
- type: f1
value: 52.83737515406804
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ta)
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.71217215870882
- type: f1
value: 55.051794977833026
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (te)
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.724277067921996
- type: f1
value: 56.33485571838306
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (th)
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.59515803631473
- type: f1
value: 64.96772366193588
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (tl)
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.860793544048406
- type: f1
value: 58.148845819115394
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (tr)
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.40753194351043
- type: f1
value: 63.18903778054698
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ur)
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.52320107599194
- type: f1
value: 58.356144563398516
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (vi)
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.17014122394083
- type: f1
value: 63.919964062638925
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.15601882985878
- type: f1
value: 67.01451905761371
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-TW)
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.65030262273034
- type: f1
value: 64.14420425129063
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (af)
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.08742434431743
- type: f1
value: 63.044060042311756
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (am)
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.52387357094821
- type: f1
value: 56.82398588814534
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ar)
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.239408204438476
- type: f1
value: 61.92570286170469
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (az)
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.74915938130463
- type: f1
value: 62.130740689396276
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (bn)
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.00336247478144
- type: f1
value: 63.71080635228055
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (cy)
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 52.837928715534645
- type: f1
value: 50.390741680320836
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (da)
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.42098184263618
- type: f1
value: 71.41355113538995
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (de)
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.95359784801613
- type: f1
value: 71.42699340156742
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (el)
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.18157363819772
- type: f1
value: 69.74836113037671
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.08137188971082
- type: f1
value: 76.78000685068261
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (es)
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.5030262273033
- type: f1
value: 71.71620130425673
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (fa)
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.24546065904505
- type: f1
value: 69.07638311730359
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (fi)
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.12911903160726
- type: f1
value: 68.32651736539815
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (fr)
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.89307330195025
- type: f1
value: 71.33986549860187
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (he)
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.44451916610626
- type: f1
value: 66.90192664503866
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (hi)
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.16274377942166
- type: f1
value: 68.01090953775066
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (hu)
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.75319435104237
- type: f1
value: 70.18035309201403
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (hy)
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.14391392064559
- type: f1
value: 61.48286540778145
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (id)
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.70275722932078
- type: f1
value: 70.26164779846495
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (is)
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.93813046402153
- type: f1
value: 58.8852862116525
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (it)
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.320107599193
- type: f1
value: 72.19836409602924
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ja)
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.65366509751176
- type: f1
value: 74.55188288799579
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (jv)
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.694014794889036
- type: f1
value: 58.11353311721067
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ka)
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 54.37457969065231
- type: f1
value: 52.81306134311697
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (km)
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 48.3086751849361
- type: f1
value: 45.396449765419376
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (kn)
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.151983860121064
- type: f1
value: 60.31762544281696
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ko)
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.44788164088769
- type: f1
value: 71.68150151736367
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (lv)
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.81439139206455
- type: f1
value: 62.06735559105593
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ml)
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.04303967720242
- type: f1
value: 66.68298851670133
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (mn)
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.43913920645595
- type: f1
value: 60.25605977560783
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ms)
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.90316072629456
- type: f1
value: 65.1325924692381
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (my)
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.63752521856086
- type: f1
value: 59.14284778039585
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (nb)
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.63080026899797
- type: f1
value: 70.89771864626877
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (nl)
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.10827168796234
- type: f1
value: 71.71954219691159
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (pl)
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.59515803631471
- type: f1
value: 70.05040128099003
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (pt)
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.83389374579691
- type: f1
value: 70.84877936562735
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ro)
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.18628110289173
- type: f1
value: 68.97232927921841
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ru)
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.99260255548083
- type: f1
value: 72.85139492157732
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sl)
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.26227303295225
- type: f1
value: 65.08833655469431
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sq)
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.48621385339611
- type: f1
value: 64.43483199071298
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sv)
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.14391392064559
- type: f1
value: 72.2580822579741
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sw)
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.88567585743107
- type: f1
value: 58.3073765932569
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ta)
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.38399462004034
- type: f1
value: 60.82139544252606
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (te)
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.58574310692671
- type: f1
value: 60.71443370385374
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (th)
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.61398789509079
- type: f1
value: 70.99761812049401
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (tl)
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.73705447209146
- type: f1
value: 61.680849331794796
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (tr)
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.66778749159381
- type: f1
value: 71.17320646080115
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ur)
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.640215198386
- type: f1
value: 63.301805157015444
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (vi)
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.00672494956288
- type: f1
value: 70.26005548582106
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.42030934767989
- type: f1
value: 75.2074842882598
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-TW)
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.69266980497646
- type: f1
value: 70.94103167391192
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 28.91697191169135
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.434000079573313
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.96683513343383
- type: mrr
value: 31.967364078714834
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.5280000000000005
- type: map_at_10
value: 11.793
- type: map_at_100
value: 14.496999999999998
- type: map_at_1000
value: 15.783
- type: map_at_3
value: 8.838
- type: map_at_5
value: 10.07
- type: mrr_at_1
value: 43.653
- type: mrr_at_10
value: 51.531000000000006
- type: mrr_at_100
value: 52.205
- type: mrr_at_1000
value: 52.242999999999995
- type: mrr_at_3
value: 49.431999999999995
- type: mrr_at_5
value: 50.470000000000006
- type: ndcg_at_1
value: 42.415000000000006
- type: ndcg_at_10
value: 32.464999999999996
- type: ndcg_at_100
value: 28.927999999999997
- type: ndcg_at_1000
value: 37.629000000000005
- type: ndcg_at_3
value: 37.845
- type: ndcg_at_5
value: 35.147
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 23.932000000000002
- type: precision_at_100
value: 7.17
- type: precision_at_1000
value: 1.967
- type: precision_at_3
value: 35.397
- type: precision_at_5
value: 29.907
- type: recall_at_1
value: 5.5280000000000005
- type: recall_at_10
value: 15.568000000000001
- type: recall_at_100
value: 28.54
- type: recall_at_1000
value: 59.864
- type: recall_at_3
value: 9.822000000000001
- type: recall_at_5
value: 11.726
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.041000000000004
- type: map_at_10
value: 52.664
- type: map_at_100
value: 53.477
- type: map_at_1000
value: 53.505
- type: map_at_3
value: 48.510999999999996
- type: map_at_5
value: 51.036
- type: mrr_at_1
value: 41.338
- type: mrr_at_10
value: 55.071000000000005
- type: mrr_at_100
value: 55.672
- type: mrr_at_1000
value: 55.689
- type: mrr_at_3
value: 51.82
- type: mrr_at_5
value: 53.852
- type: ndcg_at_1
value: 41.338
- type: ndcg_at_10
value: 60.01800000000001
- type: ndcg_at_100
value: 63.409000000000006
- type: ndcg_at_1000
value: 64.017
- type: ndcg_at_3
value: 52.44799999999999
- type: ndcg_at_5
value: 56.571000000000005
- type: precision_at_1
value: 41.338
- type: precision_at_10
value: 9.531
- type: precision_at_100
value: 1.145
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.416
- type: precision_at_5
value: 16.46
- type: recall_at_1
value: 37.041000000000004
- type: recall_at_10
value: 79.76299999999999
- type: recall_at_100
value: 94.39
- type: recall_at_1000
value: 98.851
- type: recall_at_3
value: 60.465
- type: recall_at_5
value: 69.906
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.952
- type: map_at_10
value: 83.758
- type: map_at_100
value: 84.406
- type: map_at_1000
value: 84.425
- type: map_at_3
value: 80.839
- type: map_at_5
value: 82.646
- type: mrr_at_1
value: 80.62
- type: mrr_at_10
value: 86.947
- type: mrr_at_100
value: 87.063
- type: mrr_at_1000
value: 87.064
- type: mrr_at_3
value: 85.96000000000001
- type: mrr_at_5
value: 86.619
- type: ndcg_at_1
value: 80.63
- type: ndcg_at_10
value: 87.64800000000001
- type: ndcg_at_100
value: 88.929
- type: ndcg_at_1000
value: 89.054
- type: ndcg_at_3
value: 84.765
- type: ndcg_at_5
value: 86.291
- type: precision_at_1
value: 80.63
- type: precision_at_10
value: 13.314
- type: precision_at_100
value: 1.525
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.1
- type: precision_at_5
value: 24.372
- type: recall_at_1
value: 69.952
- type: recall_at_10
value: 94.955
- type: recall_at_100
value: 99.38
- type: recall_at_1000
value: 99.96000000000001
- type: recall_at_3
value: 86.60600000000001
- type: recall_at_5
value: 90.997
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 42.41329517878427
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 55.171278362748666
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.213
- type: map_at_10
value: 9.895
- type: map_at_100
value: 11.776
- type: map_at_1000
value: 12.084
- type: map_at_3
value: 7.2669999999999995
- type: map_at_5
value: 8.620999999999999
- type: mrr_at_1
value: 20.8
- type: mrr_at_10
value: 31.112000000000002
- type: mrr_at_100
value: 32.274
- type: mrr_at_1000
value: 32.35
- type: mrr_at_3
value: 28.133000000000003
- type: mrr_at_5
value: 29.892999999999997
- type: ndcg_at_1
value: 20.8
- type: ndcg_at_10
value: 17.163999999999998
- type: ndcg_at_100
value: 24.738
- type: ndcg_at_1000
value: 30.316
- type: ndcg_at_3
value: 16.665
- type: ndcg_at_5
value: 14.478
- type: precision_at_1
value: 20.8
- type: precision_at_10
value: 8.74
- type: precision_at_100
value: 1.963
- type: precision_at_1000
value: 0.33
- type: precision_at_3
value: 15.467
- type: precision_at_5
value: 12.6
- type: recall_at_1
value: 4.213
- type: recall_at_10
value: 17.698
- type: recall_at_100
value: 39.838
- type: recall_at_1000
value: 66.893
- type: recall_at_3
value: 9.418
- type: recall_at_5
value: 12.773000000000001
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.90453315738294
- type: cos_sim_spearman
value: 78.51197850080254
- type: euclidean_pearson
value: 80.09647123597748
- type: euclidean_spearman
value: 78.63548011514061
- type: manhattan_pearson
value: 80.10645285675231
- type: manhattan_spearman
value: 78.57861806068901
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.2616156846401
- type: cos_sim_spearman
value: 76.69713867850156
- type: euclidean_pearson
value: 77.97948563800394
- type: euclidean_spearman
value: 74.2371211567807
- type: manhattan_pearson
value: 77.69697879669705
- type: manhattan_spearman
value: 73.86529778022278
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 77.0293269315045
- type: cos_sim_spearman
value: 78.02555120584198
- type: euclidean_pearson
value: 78.25398100379078
- type: euclidean_spearman
value: 78.66963870599464
- type: manhattan_pearson
value: 78.14314682167348
- type: manhattan_spearman
value: 78.57692322969135
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 79.16989925136942
- type: cos_sim_spearman
value: 76.5996225327091
- type: euclidean_pearson
value: 77.8319003279786
- type: euclidean_spearman
value: 76.42824009468998
- type: manhattan_pearson
value: 77.69118862737736
- type: manhattan_spearman
value: 76.25568104762812
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.42012286935325
- type: cos_sim_spearman
value: 88.15654297884122
- type: euclidean_pearson
value: 87.34082819427852
- type: euclidean_spearman
value: 88.06333589547084
- type: manhattan_pearson
value: 87.25115596784842
- type: manhattan_spearman
value: 87.9559927695203
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.88222044996712
- type: cos_sim_spearman
value: 84.28476589061077
- type: euclidean_pearson
value: 83.17399758058309
- type: euclidean_spearman
value: 83.85497357244542
- type: manhattan_pearson
value: 83.0308397703786
- type: manhattan_spearman
value: 83.71554539935046
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (ko-ko)
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.20682986257339
- type: cos_sim_spearman
value: 79.94567120362092
- type: euclidean_pearson
value: 79.43122480368902
- type: euclidean_spearman
value: 79.94802077264987
- type: manhattan_pearson
value: 79.32653021527081
- type: manhattan_spearman
value: 79.80961146709178
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (ar-ar)
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 74.46578144394383
- type: cos_sim_spearman
value: 74.52496637472179
- type: euclidean_pearson
value: 72.2903807076809
- type: euclidean_spearman
value: 73.55549359771645
- type: manhattan_pearson
value: 72.09324837709393
- type: manhattan_spearman
value: 73.36743103606581
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-ar)
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 71.37272335116
- type: cos_sim_spearman
value: 71.26702117766037
- type: euclidean_pearson
value: 67.114829954434
- type: euclidean_spearman
value: 66.37938893947761
- type: manhattan_pearson
value: 66.79688574095246
- type: manhattan_spearman
value: 66.17292828079667
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-de)
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.61016770129092
- type: cos_sim_spearman
value: 82.08515426632214
- type: euclidean_pearson
value: 80.557340361131
- type: euclidean_spearman
value: 80.37585812266175
- type: manhattan_pearson
value: 80.6782873404285
- type: manhattan_spearman
value: 80.6678073032024
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.00150745350108
- type: cos_sim_spearman
value: 87.83441972211425
- type: euclidean_pearson
value: 87.94826702308792
- type: euclidean_spearman
value: 87.46143974860725
- type: manhattan_pearson
value: 87.97560344306105
- type: manhattan_spearman
value: 87.5267102829796
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-tr)
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 64.76325252267235
- type: cos_sim_spearman
value: 63.32615095463905
- type: euclidean_pearson
value: 64.07920669155716
- type: euclidean_spearman
value: 61.21409893072176
- type: manhattan_pearson
value: 64.26308625680016
- type: manhattan_spearman
value: 61.2438185254079
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (es-en)
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 75.82644463022595
- type: cos_sim_spearman
value: 76.50381269945073
- type: euclidean_pearson
value: 75.1328548315934
- type: euclidean_spearman
value: 75.63761139408453
- type: manhattan_pearson
value: 75.18610101241407
- type: manhattan_spearman
value: 75.30669266354164
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (es-es)
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.49994164686832
- type: cos_sim_spearman
value: 86.73743986245549
- type: euclidean_pearson
value: 86.8272894387145
- type: euclidean_spearman
value: 85.97608491000507
- type: manhattan_pearson
value: 86.74960140396779
- type: manhattan_spearman
value: 85.79285984190273
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (fr-en)
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.58172210788469
- type: cos_sim_spearman
value: 80.17516468334607
- type: euclidean_pearson
value: 77.56537843470504
- type: euclidean_spearman
value: 77.57264627395521
- type: manhattan_pearson
value: 78.09703521695943
- type: manhattan_spearman
value: 78.15942760916954
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (it-en)
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.7589932931751
- type: cos_sim_spearman
value: 80.15210089028162
- type: euclidean_pearson
value: 77.54135223516057
- type: euclidean_spearman
value: 77.52697996368764
- type: manhattan_pearson
value: 77.65734439572518
- type: manhattan_spearman
value: 77.77702992016121
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (nl-en)
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.16682365511267
- type: cos_sim_spearman
value: 79.25311267628506
- type: euclidean_pearson
value: 77.54882036762244
- type: euclidean_spearman
value: 77.33212935194827
- type: manhattan_pearson
value: 77.98405516064015
- type: manhattan_spearman
value: 77.85075717865719
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.10473294775917
- type: cos_sim_spearman
value: 61.82780474476838
- type: euclidean_pearson
value: 45.885111672377256
- type: euclidean_spearman
value: 56.88306351932454
- type: manhattan_pearson
value: 46.101218127323186
- type: manhattan_spearman
value: 56.80953694186333
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de)
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 45.781923079584146
- type: cos_sim_spearman
value: 55.95098449691107
- type: euclidean_pearson
value: 25.4571031323205
- type: euclidean_spearman
value: 49.859978118078935
- type: manhattan_pearson
value: 25.624938455041384
- type: manhattan_spearman
value: 49.99546185049401
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (es)
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 60.00618133997907
- type: cos_sim_spearman
value: 66.57896677718321
- type: euclidean_pearson
value: 42.60118466388821
- type: euclidean_spearman
value: 62.8210759715209
- type: manhattan_pearson
value: 42.63446860604094
- type: manhattan_spearman
value: 62.73803068925271
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (pl)
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 28.460759121626943
- type: cos_sim_spearman
value: 34.13459007469131
- type: euclidean_pearson
value: 6.0917739325525195
- type: euclidean_spearman
value: 27.9947262664867
- type: manhattan_pearson
value: 6.16877864169911
- type: manhattan_spearman
value: 28.00664163971514
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (tr)
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.42546621771696
- type: cos_sim_spearman
value: 63.699663168970474
- type: euclidean_pearson
value: 38.12085278789738
- type: euclidean_spearman
value: 58.12329140741536
- type: manhattan_pearson
value: 37.97364549443335
- type: manhattan_spearman
value: 57.81545502318733
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (ar)
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 46.82241380954213
- type: cos_sim_spearman
value: 57.86569456006391
- type: euclidean_pearson
value: 31.80480070178813
- type: euclidean_spearman
value: 52.484000620130104
- type: manhattan_pearson
value: 31.952708554646097
- type: manhattan_spearman
value: 52.8560972356195
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (ru)
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 52.00447170498087
- type: cos_sim_spearman
value: 60.664116225735164
- type: euclidean_pearson
value: 33.87382555421702
- type: euclidean_spearman
value: 55.74649067458667
- type: manhattan_pearson
value: 33.99117246759437
- type: manhattan_spearman
value: 55.98749034923899
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 58.06497233105448
- type: cos_sim_spearman
value: 65.62968801135676
- type: euclidean_pearson
value: 47.482076613243905
- type: euclidean_spearman
value: 62.65137791498299
- type: manhattan_pearson
value: 47.57052626104093
- type: manhattan_spearman
value: 62.436916516613294
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (fr)
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 70.49397298562575
- type: cos_sim_spearman
value: 74.79604041187868
- type: euclidean_pearson
value: 49.661891561317795
- type: euclidean_spearman
value: 70.31535537621006
- type: manhattan_pearson
value: 49.553715741850006
- type: manhattan_spearman
value: 70.24779344636806
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de-en)
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.640574515348696
- type: cos_sim_spearman
value: 54.927959317689
- type: euclidean_pearson
value: 29.00139666967476
- type: euclidean_spearman
value: 41.86386566971605
- type: manhattan_pearson
value: 29.47411067730344
- type: manhattan_spearman
value: 42.337438424952786
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (es-en)
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 68.14095292259312
- type: cos_sim_spearman
value: 73.99017581234789
- type: euclidean_pearson
value: 46.46304297872084
- type: euclidean_spearman
value: 60.91834114800041
- type: manhattan_pearson
value: 47.07072666338692
- type: manhattan_spearman
value: 61.70415727977926
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (it)
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 73.27184653359575
- type: cos_sim_spearman
value: 77.76070252418626
- type: euclidean_pearson
value: 62.30586577544778
- type: euclidean_spearman
value: 75.14246629110978
- type: manhattan_pearson
value: 62.328196884927046
- type: manhattan_spearman
value: 75.1282792981433
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (pl-en)
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 71.59448528829957
- type: cos_sim_spearman
value: 70.37277734222123
- type: euclidean_pearson
value: 57.63145565721123
- type: euclidean_spearman
value: 66.10113048304427
- type: manhattan_pearson
value: 57.18897811586808
- type: manhattan_spearman
value: 66.5595511215901
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh-en)
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.37520607720838
- type: cos_sim_spearman
value: 69.92282148997948
- type: euclidean_pearson
value: 40.55768770125291
- type: euclidean_spearman
value: 55.189128944669605
- type: manhattan_pearson
value: 41.03566433468883
- type: manhattan_spearman
value: 55.61251893174558
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (es-it)
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.791929533771835
- type: cos_sim_spearman
value: 66.45819707662093
- type: euclidean_pearson
value: 39.03686018511092
- type: euclidean_spearman
value: 56.01282695640428
- type: manhattan_pearson
value: 38.91586623619632
- type: manhattan_spearman
value: 56.69394943612747
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de-fr)
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 47.82224468473866
- type: cos_sim_spearman
value: 59.467307194781164
- type: euclidean_pearson
value: 27.428459190256145
- type: euclidean_spearman
value: 60.83463107397519
- type: manhattan_pearson
value: 27.487391578496638
- type: manhattan_spearman
value: 61.281380460246496
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de-pl)
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 16.306666792752644
- type: cos_sim_spearman
value: 39.35486427252405
- type: euclidean_pearson
value: -2.7887154897955435
- type: euclidean_spearman
value: 27.1296051831719
- type: manhattan_pearson
value: -3.202291270581297
- type: manhattan_spearman
value: 26.32895849218158
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (fr-pl)
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.67006803805076
- type: cos_sim_spearman
value: 73.24670207647144
- type: euclidean_pearson
value: 46.91884681500483
- type: euclidean_spearman
value: 16.903085094570333
- type: manhattan_pearson
value: 46.88391675325812
- type: manhattan_spearman
value: 28.17180849095055
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 83.79555591223837
- type: cos_sim_spearman
value: 85.63658602085185
- type: euclidean_pearson
value: 85.22080894037671
- type: euclidean_spearman
value: 85.54113580167038
- type: manhattan_pearson
value: 85.1639505960118
- type: manhattan_spearman
value: 85.43502665436196
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 80.73900991689766
- type: mrr
value: 94.81624131133934
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 55.678000000000004
- type: map_at_10
value: 65.135
- type: map_at_100
value: 65.824
- type: map_at_1000
value: 65.852
- type: map_at_3
value: 62.736000000000004
- type: map_at_5
value: 64.411
- type: mrr_at_1
value: 58.333
- type: mrr_at_10
value: 66.5
- type: mrr_at_100
value: 67.053
- type: mrr_at_1000
value: 67.08
- type: mrr_at_3
value: 64.944
- type: mrr_at_5
value: 65.89399999999999
- type: ndcg_at_1
value: 58.333
- type: ndcg_at_10
value: 69.34700000000001
- type: ndcg_at_100
value: 72.32
- type: ndcg_at_1000
value: 73.014
- type: ndcg_at_3
value: 65.578
- type: ndcg_at_5
value: 67.738
- type: precision_at_1
value: 58.333
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.0670000000000002
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 25.444
- type: precision_at_5
value: 16.933
- type: recall_at_1
value: 55.678000000000004
- type: recall_at_10
value: 80.72200000000001
- type: recall_at_100
value: 93.93299999999999
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 70.783
- type: recall_at_5
value: 75.978
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.74653465346535
- type: cos_sim_ap
value: 93.01476369929063
- type: cos_sim_f1
value: 86.93009118541033
- type: cos_sim_precision
value: 88.09034907597535
- type: cos_sim_recall
value: 85.8
- type: dot_accuracy
value: 99.22970297029703
- type: dot_ap
value: 51.58725659485144
- type: dot_f1
value: 53.51351351351352
- type: dot_precision
value: 58.235294117647065
- type: dot_recall
value: 49.5
- type: euclidean_accuracy
value: 99.74356435643564
- type: euclidean_ap
value: 92.40332894384368
- type: euclidean_f1
value: 86.97838109602817
- type: euclidean_precision
value: 87.46208291203236
- type: euclidean_recall
value: 86.5
- type: manhattan_accuracy
value: 99.73069306930694
- type: manhattan_ap
value: 92.01320815721121
- type: manhattan_f1
value: 86.4135864135864
- type: manhattan_precision
value: 86.32734530938124
- type: manhattan_recall
value: 86.5
- type: max_accuracy
value: 99.74653465346535
- type: max_ap
value: 93.01476369929063
- type: max_f1
value: 86.97838109602817
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 55.2660514302523
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 30.4637783572547
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.41377758357637
- type: mrr
value: 50.138451213818854
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 28.887846011166594
- type: cos_sim_spearman
value: 30.10823258355903
- type: dot_pearson
value: 12.888049550236385
- type: dot_spearman
value: 12.827495903098123
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.21
- type: map_at_10
value: 1.667
- type: map_at_100
value: 9.15
- type: map_at_1000
value: 22.927
- type: map_at_3
value: 0.573
- type: map_at_5
value: 0.915
- type: mrr_at_1
value: 80
- type: mrr_at_10
value: 87.167
- type: mrr_at_100
value: 87.167
- type: mrr_at_1000
value: 87.167
- type: mrr_at_3
value: 85.667
- type: mrr_at_5
value: 87.167
- type: ndcg_at_1
value: 76
- type: ndcg_at_10
value: 69.757
- type: ndcg_at_100
value: 52.402
- type: ndcg_at_1000
value: 47.737
- type: ndcg_at_3
value: 71.866
- type: ndcg_at_5
value: 72.225
- type: precision_at_1
value: 80
- type: precision_at_10
value: 75
- type: precision_at_100
value: 53.959999999999994
- type: precision_at_1000
value: 21.568
- type: precision_at_3
value: 76.667
- type: precision_at_5
value: 78
- type: recall_at_1
value: 0.21
- type: recall_at_10
value: 1.9189999999999998
- type: recall_at_100
value: 12.589
- type: recall_at_1000
value: 45.312000000000005
- type: recall_at_3
value: 0.61
- type: recall_at_5
value: 1.019
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (sqi-eng)
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.10000000000001
- type: f1
value: 90.06
- type: precision
value: 89.17333333333333
- type: recall
value: 92.10000000000001
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (fry-eng)
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 56.06936416184971
- type: f1
value: 50.87508028259473
- type: precision
value: 48.97398843930635
- type: recall
value: 56.06936416184971
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (kur-eng)
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 57.3170731707317
- type: f1
value: 52.96080139372822
- type: precision
value: 51.67861124382864
- type: recall
value: 57.3170731707317
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tur-eng)
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.67333333333333
- type: precision
value: 91.90833333333333
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (deu-eng)
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.7
- type: f1
value: 97.07333333333332
- type: precision
value: 96.79500000000002
- type: recall
value: 97.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (nld-eng)
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.2
- type: precision
value: 92.48333333333333
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ron-eng)
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.9
- type: f1
value: 91.26666666666667
- type: precision
value: 90.59444444444445
- type: recall
value: 92.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ang-eng)
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 34.32835820895522
- type: f1
value: 29.074180380150533
- type: precision
value: 28.068207322920596
- type: recall
value: 34.32835820895522
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ido-eng)
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.5
- type: f1
value: 74.3945115995116
- type: precision
value: 72.82967843459222
- type: recall
value: 78.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (jav-eng)
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.34146341463415
- type: f1
value: 61.2469400518181
- type: precision
value: 59.63977756660683
- type: recall
value: 66.34146341463415
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (isl-eng)
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.9
- type: f1
value: 76.90349206349207
- type: precision
value: 75.32921568627451
- type: recall
value: 80.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (slv-eng)
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.93317132442284
- type: f1
value: 81.92519105034295
- type: precision
value: 80.71283920615635
- type: recall
value: 84.93317132442284
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (cym-eng)
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.1304347826087
- type: f1
value: 65.22394755003451
- type: precision
value: 62.912422360248435
- type: recall
value: 71.1304347826087
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (kaz-eng)
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.82608695652173
- type: f1
value: 75.55693581780538
- type: precision
value: 73.79420289855072
- type: recall
value: 79.82608695652173
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (est-eng)
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74
- type: f1
value: 70.51022222222223
- type: precision
value: 69.29673599347512
- type: recall
value: 74
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (heb-eng)
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.7
- type: f1
value: 74.14238095238095
- type: precision
value: 72.27214285714285
- type: recall
value: 78.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (gla-eng)
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.97466827503016
- type: f1
value: 43.080330405420874
- type: precision
value: 41.36505499593557
- type: recall
value: 48.97466827503016
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (mar-eng)
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.60000000000001
- type: f1
value: 86.62333333333333
- type: precision
value: 85.225
- type: recall
value: 89.60000000000001
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (lat-eng)
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 45.2
- type: f1
value: 39.5761253006253
- type: precision
value: 37.991358436312
- type: recall
value: 45.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (bel-eng)
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.5
- type: f1
value: 86.70333333333333
- type: precision
value: 85.53166666666667
- type: recall
value: 89.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (pms-eng)
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 50.095238095238095
- type: f1
value: 44.60650460650461
- type: precision
value: 42.774116796477045
- type: recall
value: 50.095238095238095
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (gle-eng)
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.4
- type: f1
value: 58.35967261904762
- type: precision
value: 56.54857142857143
- type: recall
value: 63.4
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (pes-eng)
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.2
- type: f1
value: 87.075
- type: precision
value: 86.12095238095239
- type: recall
value: 89.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (nob-eng)
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.8
- type: f1
value: 95.90333333333334
- type: precision
value: 95.50833333333333
- type: recall
value: 96.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (bul-eng)
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.9
- type: f1
value: 88.6288888888889
- type: precision
value: 87.61607142857142
- type: recall
value: 90.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (cbk-eng)
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.2
- type: f1
value: 60.54377630539395
- type: precision
value: 58.89434482711381
- type: recall
value: 65.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (hun-eng)
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87
- type: f1
value: 84.32412698412699
- type: precision
value: 83.25527777777778
- type: recall
value: 87
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (uig-eng)
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.7
- type: f1
value: 63.07883541295306
- type: precision
value: 61.06117424242426
- type: recall
value: 68.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (rus-eng)
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.78333333333335
- type: precision
value: 90.86666666666667
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (spa-eng)
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.7
- type: f1
value: 96.96666666666667
- type: precision
value: 96.61666666666667
- type: recall
value: 97.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (hye-eng)
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.27493261455525
- type: f1
value: 85.90745732255168
- type: precision
value: 84.91389637616052
- type: recall
value: 88.27493261455525
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tel-eng)
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.5982905982906
- type: f1
value: 88.4900284900285
- type: precision
value: 87.57122507122507
- type: recall
value: 90.5982905982906
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (afr-eng)
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.5
- type: f1
value: 86.90769841269842
- type: precision
value: 85.80178571428571
- type: recall
value: 89.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (mon-eng)
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.5
- type: f1
value: 78.36796536796538
- type: precision
value: 76.82196969696969
- type: recall
value: 82.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (arz-eng)
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.48846960167715
- type: f1
value: 66.78771089148448
- type: precision
value: 64.98302885095339
- type: recall
value: 71.48846960167715
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (hrv-eng)
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.50333333333333
- type: precision
value: 91.77499999999999
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (nov-eng)
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.20622568093385
- type: f1
value: 66.83278891450098
- type: precision
value: 65.35065777283677
- type: recall
value: 71.20622568093385
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (gsw-eng)
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.717948717948715
- type: f1
value: 43.53146853146853
- type: precision
value: 42.04721204721204
- type: recall
value: 48.717948717948715
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (nds-eng)
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 58.5
- type: f1
value: 53.8564991863928
- type: precision
value: 52.40329436122275
- type: recall
value: 58.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ukr-eng)
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.8
- type: f1
value: 88.29
- type: precision
value: 87.09166666666667
- type: recall
value: 90.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (uzb-eng)
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.28971962616822
- type: f1
value: 62.63425307817832
- type: precision
value: 60.98065939771546
- type: recall
value: 67.28971962616822
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (lit-eng)
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.7
- type: f1
value: 75.5264472455649
- type: precision
value: 74.38205086580086
- type: recall
value: 78.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ina-eng)
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.7
- type: f1
value: 86.10809523809525
- type: precision
value: 85.07602564102565
- type: recall
value: 88.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (lfn-eng)
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 56.99999999999999
- type: f1
value: 52.85487521402737
- type: precision
value: 51.53985162713104
- type: recall
value: 56.99999999999999
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (zsm-eng)
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94
- type: f1
value: 92.45333333333333
- type: precision
value: 91.79166666666667
- type: recall
value: 94
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ita-eng)
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.30000000000001
- type: f1
value: 90.61333333333333
- type: precision
value: 89.83333333333331
- type: recall
value: 92.30000000000001
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (cmn-eng)
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.34555555555555
- type: precision
value: 92.75416666666668
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (lvs-eng)
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.2
- type: f1
value: 76.6563035113035
- type: precision
value: 75.3014652014652
- type: recall
value: 80.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (glg-eng)
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.7
- type: f1
value: 82.78689263765207
- type: precision
value: 82.06705086580087
- type: recall
value: 84.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ceb-eng)
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 50.33333333333333
- type: f1
value: 45.461523661523664
- type: precision
value: 43.93545574795575
- type: recall
value: 50.33333333333333
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (bre-eng)
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.6000000000000005
- type: f1
value: 5.442121400446441
- type: precision
value: 5.146630385487529
- type: recall
value: 6.6000000000000005
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ben-eng)
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85
- type: f1
value: 81.04666666666667
- type: precision
value: 79.25
- type: recall
value: 85
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (swg-eng)
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.32142857142857
- type: f1
value: 42.333333333333336
- type: precision
value: 40.69196428571429
- type: recall
value: 47.32142857142857
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (arq-eng)
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 30.735455543358945
- type: f1
value: 26.73616790022338
- type: precision
value: 25.397823220451283
- type: recall
value: 30.735455543358945
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (kab-eng)
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 25.1
- type: f1
value: 21.975989896371022
- type: precision
value: 21.059885632257203
- type: recall
value: 25.1
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (fra-eng)
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.75666666666666
- type: precision
value: 92.06166666666665
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (por-eng)
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.74
- type: precision
value: 92.09166666666667
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tat-eng)
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.3
- type: f1
value: 66.922442002442
- type: precision
value: 65.38249567099568
- type: recall
value: 71.3
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (oci-eng)
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 40.300000000000004
- type: f1
value: 35.78682789299971
- type: precision
value: 34.66425128716588
- type: recall
value: 40.300000000000004
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (pol-eng)
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96
- type: f1
value: 94.82333333333334
- type: precision
value: 94.27833333333334
- type: recall
value: 96
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (war-eng)
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 51.1
- type: f1
value: 47.179074753133584
- type: precision
value: 46.06461044702424
- type: recall
value: 51.1
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (aze-eng)
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.7
- type: f1
value: 84.71
- type: precision
value: 83.46166666666667
- type: recall
value: 87.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (vie-eng)
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.8
- type: f1
value: 94.68333333333334
- type: precision
value: 94.13333333333334
- type: recall
value: 95.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (nno-eng)
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.39999999999999
- type: f1
value: 82.5577380952381
- type: precision
value: 81.36833333333334
- type: recall
value: 85.39999999999999
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (cha-eng)
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 21.16788321167883
- type: f1
value: 16.948865627297987
- type: precision
value: 15.971932568647897
- type: recall
value: 21.16788321167883
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (mhr-eng)
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.9
- type: f1
value: 5.515526831658907
- type: precision
value: 5.141966366966367
- type: recall
value: 6.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (dan-eng)
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.39666666666668
- type: precision
value: 90.58666666666667
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ell-eng)
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.2
- type: f1
value: 89.95666666666666
- type: precision
value: 88.92833333333333
- type: recall
value: 92.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (amh-eng)
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.76190476190477
- type: f1
value: 74.93386243386244
- type: precision
value: 73.11011904761904
- type: recall
value: 79.76190476190477
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (pam-eng)
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.799999999999999
- type: f1
value: 6.921439712248537
- type: precision
value: 6.489885109680683
- type: recall
value: 8.799999999999999
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (hsb-eng)
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 45.75569358178054
- type: f1
value: 40.34699501312631
- type: precision
value: 38.57886764719063
- type: recall
value: 45.75569358178054
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (srp-eng)
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.4
- type: f1
value: 89.08333333333333
- type: precision
value: 88.01666666666668
- type: recall
value: 91.4
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (epo-eng)
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.60000000000001
- type: f1
value: 92.06690476190477
- type: precision
value: 91.45095238095239
- type: recall
value: 93.60000000000001
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (kzj-eng)
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 7.5
- type: f1
value: 6.200363129378736
- type: precision
value: 5.89115314822466
- type: recall
value: 7.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (awa-eng)
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 73.59307359307358
- type: f1
value: 68.38933553219267
- type: precision
value: 66.62698412698413
- type: recall
value: 73.59307359307358
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (fao-eng)
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.8473282442748
- type: f1
value: 64.72373682297346
- type: precision
value: 62.82834214131924
- type: recall
value: 69.8473282442748
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (mal-eng)
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.5254730713246
- type: f1
value: 96.72489082969432
- type: precision
value: 96.33672974284326
- type: recall
value: 97.5254730713246
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ile-eng)
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.6
- type: f1
value: 72.42746031746033
- type: precision
value: 71.14036630036631
- type: recall
value: 75.6
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (bos-eng)
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.24293785310734
- type: f1
value: 88.86064030131826
- type: precision
value: 87.73540489642184
- type: recall
value: 91.24293785310734
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (cor-eng)
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.2
- type: f1
value: 4.383083659794954
- type: precision
value: 4.027861324289673
- type: recall
value: 6.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (cat-eng)
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.8
- type: f1
value: 84.09428571428572
- type: precision
value: 83.00333333333333
- type: recall
value: 86.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (eus-eng)
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 60.699999999999996
- type: f1
value: 56.1584972394755
- type: precision
value: 54.713456330903135
- type: recall
value: 60.699999999999996
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (yue-eng)
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.2
- type: f1
value: 80.66190476190475
- type: precision
value: 79.19690476190476
- type: recall
value: 84.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (swe-eng)
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.33
- type: precision
value: 90.45
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (dtp-eng)
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.3
- type: f1
value: 5.126828976748276
- type: precision
value: 4.853614328966668
- type: recall
value: 6.3
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (kat-eng)
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.76943699731903
- type: f1
value: 77.82873739308057
- type: precision
value: 76.27622452019234
- type: recall
value: 81.76943699731903
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (jpn-eng)
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.30000000000001
- type: f1
value: 90.29666666666665
- type: precision
value: 89.40333333333334
- type: recall
value: 92.30000000000001
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (csb-eng)
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 29.249011857707508
- type: f1
value: 24.561866096392947
- type: precision
value: 23.356583740215456
- type: recall
value: 29.249011857707508
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (xho-eng)
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.46478873239437
- type: f1
value: 73.23943661971832
- type: precision
value: 71.66666666666667
- type: recall
value: 77.46478873239437
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (orv-eng)
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 20.35928143712575
- type: f1
value: 15.997867865075824
- type: precision
value: 14.882104658301346
- type: recall
value: 20.35928143712575
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ind-eng)
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.2
- type: f1
value: 90.25999999999999
- type: precision
value: 89.45333333333335
- type: recall
value: 92.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tuk-eng)
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 23.15270935960591
- type: f1
value: 19.65673625772148
- type: precision
value: 18.793705293464992
- type: recall
value: 23.15270935960591
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (max-eng)
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.154929577464785
- type: f1
value: 52.3868463305083
- type: precision
value: 50.14938113529662
- type: recall
value: 59.154929577464785
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (swh-eng)
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 70.51282051282051
- type: f1
value: 66.8089133089133
- type: precision
value: 65.37645687645687
- type: recall
value: 70.51282051282051
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (hin-eng)
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93
- type: precision
value: 92.23333333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (dsb-eng)
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 38.62212943632568
- type: f1
value: 34.3278276962583
- type: precision
value: 33.07646935732408
- type: recall
value: 38.62212943632568
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ber-eng)
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 28.1
- type: f1
value: 23.579609223054604
- type: precision
value: 22.39622774921555
- type: recall
value: 28.1
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tam-eng)
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.27361563517914
- type: f1
value: 85.12486427795874
- type: precision
value: 83.71335504885994
- type: recall
value: 88.27361563517914
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (slk-eng)
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.6
- type: f1
value: 86.39928571428571
- type: precision
value: 85.4947557997558
- type: recall
value: 88.6
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tgl-eng)
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.5
- type: f1
value: 83.77952380952381
- type: precision
value: 82.67602564102565
- type: recall
value: 86.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ast-eng)
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.52755905511812
- type: f1
value: 75.3055868016498
- type: precision
value: 73.81889763779527
- type: recall
value: 79.52755905511812
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (mkd-eng)
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.9
- type: f1
value: 73.76261904761905
- type: precision
value: 72.11670995670995
- type: recall
value: 77.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (khm-eng)
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 53.8781163434903
- type: f1
value: 47.25804051288816
- type: precision
value: 45.0603482390186
- type: recall
value: 53.8781163434903
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ces-eng)
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.10000000000001
- type: f1
value: 88.88
- type: precision
value: 87.96333333333334
- type: recall
value: 91.10000000000001
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tzl-eng)
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 38.46153846153847
- type: f1
value: 34.43978243978244
- type: precision
value: 33.429487179487175
- type: recall
value: 38.46153846153847
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (urd-eng)
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.9
- type: f1
value: 86.19888888888887
- type: precision
value: 85.07440476190476
- type: recall
value: 88.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ara-eng)
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.9
- type: f1
value: 82.58857142857143
- type: precision
value: 81.15666666666667
- type: recall
value: 85.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (kor-eng)
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.8
- type: f1
value: 83.36999999999999
- type: precision
value: 81.86833333333333
- type: recall
value: 86.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (yid-eng)
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.51415094339622
- type: f1
value: 63.195000099481234
- type: precision
value: 61.394033442972116
- type: recall
value: 68.51415094339622
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (fin-eng)
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.5
- type: f1
value: 86.14603174603175
- type: precision
value: 85.1162037037037
- type: recall
value: 88.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tha-eng)
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.62043795620438
- type: f1
value: 94.40389294403892
- type: precision
value: 93.7956204379562
- type: recall
value: 95.62043795620438
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (wuu-eng)
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.8
- type: f1
value: 78.6532178932179
- type: precision
value: 77.46348795840176
- type: recall
value: 81.8
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.603
- type: map_at_10
value: 8.5
- type: map_at_100
value: 12.985
- type: map_at_1000
value: 14.466999999999999
- type: map_at_3
value: 4.859999999999999
- type: map_at_5
value: 5.817
- type: mrr_at_1
value: 28.571
- type: mrr_at_10
value: 42.331
- type: mrr_at_100
value: 43.592999999999996
- type: mrr_at_1000
value: 43.592999999999996
- type: mrr_at_3
value: 38.435
- type: mrr_at_5
value: 39.966
- type: ndcg_at_1
value: 26.531
- type: ndcg_at_10
value: 21.353
- type: ndcg_at_100
value: 31.087999999999997
- type: ndcg_at_1000
value: 43.163000000000004
- type: ndcg_at_3
value: 22.999
- type: ndcg_at_5
value: 21.451
- type: precision_at_1
value: 28.571
- type: precision_at_10
value: 19.387999999999998
- type: precision_at_100
value: 6.265
- type: precision_at_1000
value: 1.4160000000000001
- type: precision_at_3
value: 24.490000000000002
- type: precision_at_5
value: 21.224
- type: recall_at_1
value: 2.603
- type: recall_at_10
value: 14.474
- type: recall_at_100
value: 40.287
- type: recall_at_1000
value: 76.606
- type: recall_at_3
value: 5.978
- type: recall_at_5
value: 7.819
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.7848
- type: ap
value: 13.661023167088224
- type: f1
value: 53.61686134460943
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.28183361629882
- type: f1
value: 61.55481034919965
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 35.972128420092396
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.59933241938367
- type: cos_sim_ap
value: 72.20760361208136
- type: cos_sim_f1
value: 66.4447731755424
- type: cos_sim_precision
value: 62.35539102267469
- type: cos_sim_recall
value: 71.10817941952506
- type: dot_accuracy
value: 78.98313166835548
- type: dot_ap
value: 44.492521645493795
- type: dot_f1
value: 45.814889336016094
- type: dot_precision
value: 37.02439024390244
- type: dot_recall
value: 60.07915567282321
- type: euclidean_accuracy
value: 85.3907134767837
- type: euclidean_ap
value: 71.53847289080343
- type: euclidean_f1
value: 65.95952206778834
- type: euclidean_precision
value: 61.31006346328196
- type: euclidean_recall
value: 71.37203166226914
- type: manhattan_accuracy
value: 85.40859510043511
- type: manhattan_ap
value: 71.49664104395515
- type: manhattan_f1
value: 65.98569969356485
- type: manhattan_precision
value: 63.928748144482924
- type: manhattan_recall
value: 68.17941952506597
- type: max_accuracy
value: 85.59933241938367
- type: max_ap
value: 72.20760361208136
- type: max_f1
value: 66.4447731755424
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.83261536073273
- type: cos_sim_ap
value: 85.48178133644264
- type: cos_sim_f1
value: 77.87816307403935
- type: cos_sim_precision
value: 75.88953021114926
- type: cos_sim_recall
value: 79.97382198952879
- type: dot_accuracy
value: 79.76287499514883
- type: dot_ap
value: 59.17438838475084
- type: dot_f1
value: 56.34566667855996
- type: dot_precision
value: 52.50349092359864
- type: dot_recall
value: 60.794579611949494
- type: euclidean_accuracy
value: 88.76857996662397
- type: euclidean_ap
value: 85.22764834359887
- type: euclidean_f1
value: 77.65379751543554
- type: euclidean_precision
value: 75.11152683839401
- type: euclidean_recall
value: 80.37419156144134
- type: manhattan_accuracy
value: 88.6987231730508
- type: manhattan_ap
value: 85.18907981724007
- type: manhattan_f1
value: 77.51967028849757
- type: manhattan_precision
value: 75.49992701795358
- type: manhattan_recall
value: 79.65044656606098
- type: max_accuracy
value: 88.83261536073273
- type: max_ap
value: 85.48178133644264
- type: max_f1
value: 77.87816307403935
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
---
## Multilingual-E5-base
[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
This model has 12 layers and the embedding size is 768.
## Usage
Below is an example of how to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ", even for non-English texts.
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
'query: 南瓜的家常做法',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"]
tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-base')
model = AutoModel.from_pretrained('intfloat/multilingual-e5-base')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
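If you prefer the `sentence-transformers` library, the same embeddings can usually be computed more compactly. The snippet below is a minimal sketch, assuming the repository ships a sentence-transformers configuration (mean pooling plus normalization, as in the code above); the `"query: "` / `"passage: "` prefixes are still required.
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('intfloat/multilingual-e5-base')
input_texts = [
    'query: how much protein should a female eat',
    "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day.",
]
# normalize_embeddings=True mirrors the F.normalize step above
embeddings = model.encode(input_texts, normalize_embeddings=True)
scores = (embeddings[:1] @ embeddings[1:].T) * 100
print(scores.tolist())
```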
## Supported Languages
This model is initialized from [xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
and continually trained on a mixture of multilingual datasets.
It supports 100 languages from xlm-roberta,
but low-resource languages may see performance degradation.
## Training Details
Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
## Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## Citation
If you find our paper or models helpful, please consider citing as follows:
```
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
Long texts will be truncated to at most 512 tokens. |
DecafNosebleed/ScaraBot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/Apache15SetFitModel
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (a minimal training sketch is shown after this list).
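The sketch below illustrates those two steps with the `setfit` library. Note that the base Sentence Transformer, the toy dataset, and the hyperparameters are hypothetical placeholders rather than the recipe used for this particular checkpoint, and the classic `SetFitTrainer` interface is assumed to be available:
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Hypothetical few-shot dataset: a handful of labelled examples per class
train_dataset = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"],
    "label": [1, 0],
})

# Any Sentence Transformer checkpoint can serve as the model body
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the body
    num_iterations=20,                # contrastive pairs generated per example
    num_epochs=1,                     # epochs of contrastive fine-tuning
)
trainer.train()  # the classification head (step 2) is fitted at the end of train()
model.save_pretrained("my-setfit-model")
```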
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/Apache15SetFitModel")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Declan/Breitbart_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- ctranslate2
- int8
- float16
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
- OpenAssistant/oasst1
- databricks/databricks-dolly-15k
widget:
- text: "<human>: Write an email to my friends inviting them to come to my home on Friday for a dinner party, bring their own food to share.\n<bot>:"
example_title: "Email Writing"
- text: "<human>: Create a list of things to do in San Francisco\n<bot>:"
example_title: "Brainstorming"
inference:
parameters:
temperature: 0.7
top_p: 0.7
top_k: 50
max_new_tokens: 128
---
# Fast Inference with CTranslate2
Speed up inference while reducing memory use by 2x-4x using int8 inference in C++ on CPU or GPU.
This is a quantized version of [togethercomputer/RedPajama-INCITE-Chat-7B-v0.1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-7B-v0.1).
```bash
pip install hf-hub-ctranslate2>=2.0.6
```
Converted on 2023-05-19 using
```
ct2-transformers-converter --model togethercomputer/RedPajama-INCITE-Chat-7B-v0.1 --output_dir /home/michael/tmp-ct2fast-RedPajama-INCITE-Chat-7B-v0.1 --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization float16
```
Checkpoint compatible with [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2):
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-RedPajama-INCITE-Chat-7B-v0.1"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1")
)
outputs = model.generate(
text=["How do you call a fast Flan-ingo?", "User: How are you doing? Bot:"],
)
print(outputs)
```
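For a CPU-only machine, the compatibility notes above suggest switching the device and compute type. The following is a sketch of that variant (same checkpoint and tokenizer, only the loader arguments change):
```python
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
from transformers import AutoTokenizer

model = GeneratorCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-RedPajama-INCITE-Chat-7B-v0.1",
    device="cpu",
    compute_type="int8",  # int8 on CPU, per the notes above
    tokenizer=AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1"),
)
outputs = model.generate(text=["<human>: Who is Alan Turing?\n<bot>:"])
print(outputs)
```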
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to those of the original Hugging Face repo.
# Original description
# RedPajama-INCITE-Chat-7B-v0.1
RedPajama-INCITE-Chat-7B-v0.1 was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.
It is fine-tuned on OASST1 and Dolly2 to enhance chatting ability.
- Base Model: [RedPajama-INCITE-Base-7B-v0.1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1)
- Instruction-tuned Version: [RedPajama-INCITE-Instruct-7B-v0.1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1)
- Chat Version: [RedPajama-INCITE-Chat-7B-v0.1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-7B-v0.1)
## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 6.9B parameter pretrained language model.
# Quick Start
Please note that the model requires `transformers` version >= 4.25.1.
To prompt the chat model, use the following format:
```
<human>: [Instruction]
<bot>:
```
## GPU Inference
This requires a GPU with 16GB memory.
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1", torch_dtype=torch.float16)
model = model.to('cuda:0')
# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Alan Mathison Turing (23 June 1912 – 7 June 1954) was an English computer scientist, mathematician, logician, cryptanalyst, philosopher, mathematician, and theoretical biologist.
"""
```
## GPU Inference in Int8
This requires a GPU with 12GB memory.
To run inference with int8, please ensure you have installed accelerate and bitsandbytes. You can install them with the following commands:
```bash
pip install accelerate
pip install bitsandbytes
```
Then you can run inference with int8 as follows:
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)
# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Alan Mathison Turing (23 June 1912 – 7 June 1954) was an English computer scientist, mathematician, logician, cryptanalyst, philosopher, and theoretical biologist.
"""
```
## CPU Inference
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1", torch_dtype=torch.bfloat16)
# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Alan Mathison Turing, OBE, FRS, (23 June 1912 – 7 June 1954) was an English computer scientist, mathematician, logician, cryptanalyst, philosopher, and theoretical biologist.
"""
```
Please note that since `LayerNormKernelImpl` is not implemented in fp16 for CPU, we use `bfloat16` for CPU inference.
# Uses
## Direct Use
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner.
#### Out-of-Scope Use
`RedPajama-INCITE-Chat-7B-v0.1` is a language model and may not perform well for other use cases outside of its intended scope.
For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society.
It is important to consider the limitations of the model and to only use it for its intended purpose.
#### Misuse and Malicious Use
`RedPajama-INCITE-Chat-7B-v0.1` is designed for language modeling.
Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project.
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating fake news, misinformation, or propaganda
- Promoting hate speech, discrimination, or violence against individuals or groups
- Impersonating individuals or organizations without their consent
- Engaging in cyberbullying or harassment
- Defamatory content
- Spamming or scamming
- Sharing confidential or sensitive information without proper authorization
- Violating the terms of use of the model or the data used to train it
- Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming
## Limitations
`RedPajama-INCITE-Chat-7B-v0.1`, like other language models, has limitations that should be taken into consideration.
For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data.
We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.
## Training
**Training Data**
Please refer to [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
**Training Procedure**
- **Hardware:** 8 A100
- **Optimizer:** Adam
- **Gradient Accumulations**: 1
- **Num of Tokens:** 131M tokens
- **Learning rate:** 1e-5
## Community
Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4) |
Declan/ChicagoTribune_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- espnet
- audio
- text-to-speech
language: jp
datasets:
- studies_multi
license: cc-by-4.0
---
## ESPnet2 TTS model
### `fujie/fujie_studies_tts_finetune_vits_raw_phn_jaconv_pyopenjtalk_prosody_with_special_token`
This model was trained by Shinya Fujie using the studies_multi recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 2219358fbd064d79214b12540afd498feaf49596
pip install -e .
cd egs2/studies_multi/tts1
./run.sh --skip_data_prep false --skip_train true --download_model fujie/fujie_studies_tts_finetune_vits_raw_phn_jaconv_pyopenjtalk_prosody_with_special_token
```
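For quick Python-side synthesis (rather than running the full recipe), something along the following lines should work. This is a sketch, assuming the `espnet_model_zoo` package is installed and that `Text2Speech.from_pretrained` can resolve this Hugging Face model tag; the input sentence is an arbitrary Japanese placeholder.
```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Download the packed model and build the synthesizer
text2speech = Text2Speech.from_pretrained(
    "fujie/fujie_studies_tts_finetune_vits_raw_phn_jaconv_pyopenjtalk_prosody_with_special_token"
)

# Japanese input text (the g2p front-end is pyopenjtalk-based)
text = "こんにちは、今日はいい天気ですね。"  # "Hello, nice weather today."
output = text2speech(text)

# Save the generated waveform at the model's sampling rate (22.05 kHz here)
sf.write("out.wav", output["wav"].numpy(), text2speech.fs)
```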
## TTS config
<details><summary>expand</summary>
```
config: ./conf/tuning/finetune_vits.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_finetune_vits_raw_phn_jaconv_pyopenjtalk_prosody_with_special_token
ngpu: 1
seed: 777
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 38263
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- total_count
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: -1
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 50
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- downloads/models--espnet--kan-bayashi_jsut_vits_prosody/snapshots/3a859bfd2c9710846fa6244598000f0578a2d3e4/exp/tts_train_vits_raw_phn_jaconv_pyopenjtalk_prosody/train.total_count.ave_10best.pth
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 1000
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_prosody_with_special_token/train/text_shape.phn
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_prosody_with_special_token/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_prosody_with_special_token/valid/text_shape.phn
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_prosody_with_special_token/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
train_data_path_and_name_and_type:
- - dump/22k/raw/ITA_tr_no_dev/text
- text
- text
- - dump/22k/raw/ITA_tr_no_dev/wav.scp
- speech
- sound
valid_data_path_and_name_and_type:
- - dump/22k/raw/ITA_dev/text
- text
- text
- - dump/22k/raw/ITA_dev/wav.scp
- speech
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adamw
optim_conf:
lr: 0.0001
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler: exponentiallr
scheduler_conf:
gamma: 0.999875
optim2: adamw
optim2_conf:
lr: 0.0001
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler2: exponentiallr
scheduler2_conf:
gamma: 0.999875
generator_first: false
generator_only: false
token_list:
- <blank>
- <unk>
- a
- o
- i
- '['
- '#'
- u
- ']'
- e
- k
- n
- t
- r
- s
- N
- m
- _
- sh
- d
- g
- ^
- $
- w
- cl
- h
- y
- b
- j
- ts
- ch
- z
- p
- f
- ky
- ry
- gy
- hy
- ny
- by
- my
- py
- v
- dy
- '?'
- ty
- <happy>
- <angry>
- <sad>
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: jaconv
g2p: pyopenjtalk_prosody_with_special_token
feats_extract: linear_spectrogram
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
normalize: null
normalize_conf: {}
tts: vits
tts_conf:
generator_type: vits_generator
generator_params:
hidden_channels: 192
spks: -1
global_channels: -1
segment_size: 32
text_encoder_attention_heads: 2
text_encoder_ffn_expand: 4
text_encoder_blocks: 6
text_encoder_positionwise_layer_type: conv1d
text_encoder_positionwise_conv_kernel_size: 3
text_encoder_positional_encoding_layer_type: rel_pos
text_encoder_self_attention_layer_type: rel_selfattn
text_encoder_activation_type: swish
text_encoder_normalize_before: true
text_encoder_dropout_rate: 0.1
text_encoder_positional_dropout_rate: 0.0
text_encoder_attention_dropout_rate: 0.1
use_macaron_style_in_text_encoder: true
use_conformer_conv_in_text_encoder: false
text_encoder_conformer_kernel_size: -1
decoder_kernel_size: 7
decoder_channels: 512
decoder_upsample_scales:
- 8
- 8
- 2
- 2
decoder_upsample_kernel_sizes:
- 16
- 16
- 4
- 4
decoder_resblock_kernel_sizes:
- 3
- 7
- 11
decoder_resblock_dilations:
- - 1
- 3
- 5
- - 1
- 3
- 5
- - 1
- 3
- 5
use_weight_norm_in_decoder: true
posterior_encoder_kernel_size: 5
posterior_encoder_layers: 16
posterior_encoder_stacks: 1
posterior_encoder_base_dilation: 1
posterior_encoder_dropout_rate: 0.0
use_weight_norm_in_posterior_encoder: true
flow_flows: 4
flow_kernel_size: 5
flow_base_dilation: 1
flow_layers: 4
flow_dropout_rate: 0.0
use_weight_norm_in_flow: true
use_only_mean_in_flow: true
stochastic_duration_predictor_kernel_size: 3
stochastic_duration_predictor_dropout_rate: 0.5
stochastic_duration_predictor_flows: 4
stochastic_duration_predictor_dds_conv_layers: 3
vocabs: 50
aux_channels: 513
discriminator_type: hifigan_multi_scale_multi_period_discriminator
discriminator_params:
scales: 1
scale_downsample_pooling: AvgPool1d
scale_downsample_pooling_params:
kernel_size: 4
stride: 2
padding: 2
scale_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 15
- 41
- 5
- 3
channels: 128
max_downsample_channels: 1024
max_groups: 16
bias: true
downsample_scales:
- 2
- 2
- 4
- 4
- 1
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
follow_official_norm: false
periods:
- 2
- 3
- 5
- 7
- 11
period_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 5
- 3
channels: 32
downsample_scales:
- 3
- 3
- 3
- 3
- 1
max_downsample_channels: 1024
bias: true
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
generator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
discriminator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
feat_match_loss_params:
average_by_discriminators: false
average_by_layers: false
include_final_outputs: true
mel_loss_params:
fs: 22050
n_fft: 1024
hop_length: 256
win_length: null
window: hann
n_mels: 80
fmin: 0
fmax: null
log_base: null
lambda_adv: 1.0
lambda_mel: 45.0
lambda_feat_match: 2.0
lambda_dur: 1.0
lambda_kl: 1.0
sampling_rate: 22050
cache_generator_outputs: true
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: '202304'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Declan/NPR_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# hc-the-legend-of-zelda-breath-of-the-wild-style-lora API Inference
![generated from stablediffusionapi.com](https://cdn.stablediffusionapi.com/generations/2881381001684496279.png)
## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace the API key in the code below, and change **model_id** to "hc-the-legend-of-zel"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/hc-the-legend-of-zel)
Credits: [View credits](https://civitai.com/?query=hc-the-legend-of-zelda-breath-of-the-wild-style-lora)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
    "key": "",
    "model_id": "hc-the-legend-of-zel",
    "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off: **DMGG0RBN**
Declan/NPR_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | "2023-05-19T11:42:29Z" | ---
library_name: "transformers.js"
---
https://huggingface.co/openai/clip-vit-base-patch32 with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
Declan/NewYorkPost_model_v1 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5541301365636306
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8233
- Matthews Correlation: 0.5541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of how they map onto the `Trainer` API follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5229 | 1.0 | 535 | 0.5306 | 0.4277 |
| 0.3478 | 2.0 | 1070 | 0.5107 | 0.5091 |
| 0.2334 | 3.0 | 1605 | 0.5299 | 0.5472 |
| 0.1766 | 4.0 | 2140 | 0.7634 | 0.5317 |
| 0.1231 | 5.0 | 2675 | 0.8233 | 0.5541 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+rocm5.4.2
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Declan/NewYorkTimes_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: other
---
# Overview
This is a fine-tuned 13b parameter LLaMA model, using completely synthetic training data created by https://github.com/jondurbin/airoboros
### Eval (gpt4 judging)
![chart](meta-chart.png)
| model | raw score | gpt-3.5 adjusted score |
| --- | --- | --- |
| __airoboros-13b__ | __17947__ | __98.087__ |
| gpt35 | 18297 | 100.0 |
| gpt4-x-alpasta-30b | 15612 | 85.33 |
| manticore-13b | 15856 | 86.66 |
| vicuna-13b-1.1 | 16306 | 89.12 |
| wizard-vicuna-13b-uncensored | 16287 | 89.01 |
<details>
<summary>individual question scores, with shareGPT links (200 prompts generated by gpt-4)</summary>
*wv-13b-u is Wizard-Vicuna-13b-Uncensored*
| airoboros-13b | gpt35 | gpt4-x-alpasta-30b | manticore-13b | vicuna-13b-1.1 | wv-13b-u | link |
|----------------:|--------:|---------------------:|----------------:|-----------------:|-------------------------------:|:---------------------------------------|
| 80 | 95 | 70 | 90 | 85 | 60 | [eval](https://sharegpt.com/c/PIbRQD3) |
| 20 | 95 | 40 | 30 | 90 | 80 | [eval](https://sharegpt.com/c/fSzwzzd) |
| 100 | 100 | 100 | 95 | 95 | 100 | [eval](https://sharegpt.com/c/AXMzZiO) |
| 90 | 100 | 85 | 60 | 95 | 100 | [eval](https://sharegpt.com/c/7obzJm2) |
| 95 | 90 | 80 | 85 | 95 | 75 | [eval](https://sharegpt.com/c/cRpj6M1) |
| 100 | 95 | 90 | 95 | 98 | 92 | [eval](https://sharegpt.com/c/p0by1T7) |
| 50 | 100 | 80 | 95 | 60 | 55 | [eval](https://sharegpt.com/c/rowNlKx) |
| 70 | 90 | 80 | 60 | 85 | 40 | [eval](https://sharegpt.com/c/I4POj4I) |
| 100 | 95 | 50 | 85 | 40 | 60 | [eval](https://sharegpt.com/c/gUAeiRp) |
| 85 | 60 | 55 | 65 | 50 | 70 | [eval](https://sharegpt.com/c/Lgw4QQL) |
| 95 | 100 | 85 | 90 | 60 | 75 | [eval](https://sharegpt.com/c/X9tDYft) |
| 100 | 95 | 70 | 80 | 50 | 85 | [eval](https://sharegpt.com/c/9V2ElkH) |
| 100 | 95 | 80 | 70 | 60 | 90 | [eval](https://sharegpt.com/c/D5xg6qt) |
| 95 | 100 | 70 | 85 | 90 | 90 | [eval](https://sharegpt.com/c/lQnSfDs) |
| 80 | 95 | 90 | 60 | 30 | 85 | [eval](https://sharegpt.com/c/1hpHGNc) |
| 60 | 95 | 0 | 75 | 50 | 40 | [eval](https://sharegpt.com/c/an6TqE4) |
| 100 | 95 | 90 | 98 | 95 | 95 | [eval](https://sharegpt.com/c/7vr6n3F) |
| 60 | 85 | 40 | 50 | 20 | 0 | [eval](https://sharegpt.com/c/TOkMkgE) |
| 100 | 90 | 85 | 95 | 95 | 80 | [eval](https://sharegpt.com/c/Qu7ak0r) |
| 100 | 95 | 100 | 95 | 90 | 95 | [eval](https://sharegpt.com/c/hMD4gPo) |
| 95 | 90 | 96 | 80 | 92 | 88 | [eval](https://sharegpt.com/c/HTlicNh) |
| 95 | 92 | 90 | 93 | 89 | 91 | [eval](https://sharegpt.com/c/MjxHpAf) |
| 95 | 93 | 90 | 94 | 96 | 92 | [eval](https://sharegpt.com/c/4RvxOR9) |
| 95 | 90 | 93 | 88 | 92 | 85 | [eval](https://sharegpt.com/c/PcAIU9r) |
| 95 | 90 | 85 | 96 | 88 | 92 | [eval](https://sharegpt.com/c/MMqul3q) |
| 95 | 95 | 90 | 93 | 92 | 91 | [eval](https://sharegpt.com/c/YQsLyzJ) |
| 95 | 98 | 80 | 97 | 99 | 96 | [eval](https://sharegpt.com/c/UDhSTMq) |
| 95 | 93 | 90 | 87 | 92 | 89 | [eval](https://sharegpt.com/c/4gCfdCV) |
| 90 | 85 | 95 | 80 | 92 | 75 | [eval](https://sharegpt.com/c/bkQs4SP) |
| 90 | 85 | 95 | 93 | 80 | 92 | [eval](https://sharegpt.com/c/LeLCEEt) |
| 95 | 92 | 90 | 91 | 93 | 89 | [eval](https://sharegpt.com/c/DFxNzVu) |
| 100 | 95 | 90 | 85 | 80 | 95 | [eval](https://sharegpt.com/c/gnVzNML) |
| 95 | 97 | 93 | 92 | 96 | 94 | [eval](https://sharegpt.com/c/y7pxMIy) |
| 95 | 93 | 94 | 90 | 88 | 92 | [eval](https://sharegpt.com/c/5UeCvTY) |
| 90 | 95 | 98 | 85 | 96 | 92 | [eval](https://sharegpt.com/c/T4oL9I5) |
| 90 | 88 | 85 | 80 | 82 | 84 | [eval](https://sharegpt.com/c/HnGyTAG) |
| 90 | 95 | 85 | 87 | 92 | 88 | [eval](https://sharegpt.com/c/ZbRMBNj) |
| 95 | 97 | 96 | 90 | 93 | 92 | [eval](https://sharegpt.com/c/iTmFJqd) |
| 95 | 93 | 92 | 90 | 89 | 91 | [eval](https://sharegpt.com/c/VuPifET) |
| 90 | 95 | 93 | 92 | 94 | 91 | [eval](https://sharegpt.com/c/AvFAH1x) |
| 90 | 85 | 95 | 80 | 88 | 75 | [eval](https://sharegpt.com/c/4ealKGN) |
| 85 | 90 | 95 | 88 | 92 | 80 | [eval](https://sharegpt.com/c/bE1b2vX) |
| 90 | 95 | 92 | 85 | 80 | 87 | [eval](https://sharegpt.com/c/I3nMPBC) |
| 85 | 90 | 95 | 80 | 88 | 75 | [eval](https://sharegpt.com/c/as7r3bW) |
| 85 | 80 | 75 | 90 | 70 | 82 | [eval](https://sharegpt.com/c/qYceaUa) |
| 90 | 85 | 95 | 92 | 93 | 80 | [eval](https://sharegpt.com/c/g4FXchU) |
| 90 | 95 | 75 | 85 | 80 | 70 | [eval](https://sharegpt.com/c/6kGLvL5) |
| 85 | 90 | 80 | 88 | 82 | 83 | [eval](https://sharegpt.com/c/SRozqaF) |
| 85 | 90 | 95 | 92 | 88 | 80 | [eval](https://sharegpt.com/c/GoKydf6) |
| 85 | 90 | 80 | 75 | 95 | 88 | [eval](https://sharegpt.com/c/37aXkHQ) |
| 85 | 90 | 80 | 88 | 84 | 92 | [eval](https://sharegpt.com/c/nVuUaTj) |
| 80 | 90 | 75 | 85 | 70 | 95 | [eval](https://sharegpt.com/c/TkAQKLC) |
| 90 | 88 | 85 | 80 | 92 | 83 | [eval](https://sharegpt.com/c/55cO2y0) |
| 85 | 75 | 90 | 80 | 78 | 88 | [eval](https://sharegpt.com/c/tXtq5lT) |
| 85 | 90 | 80 | 82 | 75 | 88 | [eval](https://sharegpt.com/c/TfMjeJQ) |
| 90 | 85 | 40 | 95 | 80 | 88 | [eval](https://sharegpt.com/c/2jQ6K2S) |
| 85 | 95 | 90 | 75 | 88 | 80 | [eval](https://sharegpt.com/c/aQtr2ca) |
| 85 | 95 | 90 | 92 | 89 | 88 | [eval](https://sharegpt.com/c/tbWLyZ7) |
| 80 | 85 | 75 | 60 | 90 | 70 | [eval](https://sharegpt.com/c/moHC7i2) |
| 85 | 90 | 87 | 80 | 88 | 75 | [eval](https://sharegpt.com/c/GK6GShh) |
| 85 | 80 | 75 | 50 | 90 | 80 | [eval](https://sharegpt.com/c/ugcW4qG) |
| 95 | 80 | 90 | 85 | 75 | 82 | [eval](https://sharegpt.com/c/WL8iq6F) |
| 85 | 90 | 80 | 70 | 95 | 88 | [eval](https://sharegpt.com/c/TZJKnvS) |
| 90 | 95 | 70 | 85 | 80 | 75 | [eval](https://sharegpt.com/c/beNOKb5) |
| 90 | 85 | 70 | 75 | 80 | 60 | [eval](https://sharegpt.com/c/o2oRCF5) |
| 95 | 90 | 70 | 50 | 85 | 80 | [eval](https://sharegpt.com/c/TNjbK6D) |
| 80 | 85 | 40 | 60 | 90 | 95 | [eval](https://sharegpt.com/c/rJvszWJ) |
| 75 | 60 | 80 | 55 | 70 | 85 | [eval](https://sharegpt.com/c/HJwRkro) |
| 90 | 85 | 60 | 50 | 80 | 95 | [eval](https://sharegpt.com/c/AeFoSDK) |
| 45 | 85 | 60 | 20 | 65 | 75 | [eval](https://sharegpt.com/c/KA1cgOl) |
| 85 | 90 | 30 | 60 | 80 | 70 | [eval](https://sharegpt.com/c/RTy8n0y) |
| 90 | 95 | 80 | 40 | 85 | 70 | [eval](https://sharegpt.com/c/PJMJoXh) |
| 85 | 90 | 70 | 75 | 80 | 95 | [eval](https://sharegpt.com/c/Ib3jzyC) |
| 90 | 70 | 50 | 20 | 60 | 40 | [eval](https://sharegpt.com/c/oMmqqtX) |
| 90 | 95 | 75 | 60 | 85 | 80 | [eval](https://sharegpt.com/c/qRNhNTw) |
| 85 | 80 | 60 | 70 | 65 | 75 | [eval](https://sharegpt.com/c/3MAHQIy) |
| 90 | 85 | 80 | 75 | 82 | 70 | [eval](https://sharegpt.com/c/0Emc5HS) |
| 90 | 95 | 80 | 70 | 85 | 75 | [eval](https://sharegpt.com/c/UqAxRWF) |
| 85 | 75 | 30 | 80 | 90 | 70 | [eval](https://sharegpt.com/c/eywxGAw) |
| 85 | 90 | 50 | 70 | 80 | 60 | [eval](https://sharegpt.com/c/A2KSEWP) |
| 100 | 95 | 98 | 99 | 97 | 96 | [eval](https://sharegpt.com/c/C8rebQf) |
| 95 | 90 | 92 | 93 | 91 | 89 | [eval](https://sharegpt.com/c/cd9HF4V) |
| 95 | 92 | 90 | 85 | 88 | 91 | [eval](https://sharegpt.com/c/LHkjvQJ) |
| 100 | 95 | 98 | 97 | 96 | 99 | [eval](https://sharegpt.com/c/o5PdoyZ) |
| 100 | 100 | 100 | 90 | 100 | 95 | [eval](https://sharegpt.com/c/rh8pZVg) |
| 100 | 95 | 98 | 97 | 94 | 99 | [eval](https://sharegpt.com/c/T5DYL83) |
| 95 | 90 | 92 | 93 | 94 | 91 | [eval](https://sharegpt.com/c/G5Osg3X) |
| 100 | 95 | 98 | 90 | 96 | 95 | [eval](https://sharegpt.com/c/9ZqI03V) |
| 95 | 96 | 92 | 90 | 89 | 93 | [eval](https://sharegpt.com/c/4tFfwZU) |
| 100 | 95 | 93 | 90 | 92 | 88 | [eval](https://sharegpt.com/c/mG1JqPH) |
| 100 | 100 | 98 | 97 | 99 | 100 | [eval](https://sharegpt.com/c/VDdtgCu) |
| 95 | 90 | 92 | 85 | 93 | 94 | [eval](https://sharegpt.com/c/uKtGkvg) |
| 95 | 93 | 90 | 92 | 96 | 91 | [eval](https://sharegpt.com/c/9B92N6P) |
| 95 | 96 | 92 | 90 | 93 | 91 | [eval](https://sharegpt.com/c/GeIFfOu) |
| 95 | 90 | 92 | 93 | 91 | 89 | [eval](https://sharegpt.com/c/gn3E9nN) |
| 100 | 98 | 95 | 97 | 96 | 99 | [eval](https://sharegpt.com/c/Erxa46H) |
| 90 | 95 | 85 | 88 | 92 | 87 | [eval](https://sharegpt.com/c/oRHVOvK) |
| 95 | 93 | 90 | 92 | 89 | 88 | [eval](https://sharegpt.com/c/ghtKLUX) |
| 100 | 95 | 97 | 90 | 96 | 94 | [eval](https://sharegpt.com/c/ZL4KjqP) |
| 95 | 93 | 90 | 92 | 94 | 91 | [eval](https://sharegpt.com/c/YOnqIQa) |
| 95 | 92 | 90 | 93 | 94 | 88 | [eval](https://sharegpt.com/c/3BKwKho) |
| 95 | 92 | 60 | 97 | 90 | 96 | [eval](https://sharegpt.com/c/U1i31bn) |
| 95 | 90 | 92 | 93 | 91 | 89 | [eval](https://sharegpt.com/c/etfRoAE) |
| 95 | 90 | 97 | 92 | 91 | 93 | [eval](https://sharegpt.com/c/B0OpVxR) |
| 90 | 95 | 93 | 85 | 92 | 91 | [eval](https://sharegpt.com/c/MBgGJ5A) |
| 95 | 90 | 40 | 92 | 93 | 85 | [eval](https://sharegpt.com/c/eQKTYO7) |
| 100 | 100 | 95 | 90 | 95 | 90 | [eval](https://sharegpt.com/c/szKWCBt) |
| 90 | 95 | 96 | 98 | 93 | 92 | [eval](https://sharegpt.com/c/8ZhUcAv) |
| 90 | 95 | 92 | 89 | 93 | 94 | [eval](https://sharegpt.com/c/VQWdy99) |
| 100 | 95 | 100 | 98 | 96 | 99 | [eval](https://sharegpt.com/c/g1DHUSM) |
| 100 | 100 | 95 | 90 | 100 | 90 | [eval](https://sharegpt.com/c/uYgfJC3) |
| 90 | 85 | 88 | 92 | 87 | 91 | [eval](https://sharegpt.com/c/crk8BH3) |
| 95 | 97 | 90 | 92 | 93 | 94 | [eval](https://sharegpt.com/c/95F9afQ) |
| 90 | 95 | 85 | 88 | 92 | 89 | [eval](https://sharegpt.com/c/otioHUo) |
| 95 | 93 | 90 | 92 | 94 | 91 | [eval](https://sharegpt.com/c/KSiL9F6) |
| 90 | 95 | 85 | 80 | 88 | 82 | [eval](https://sharegpt.com/c/GmGq3b3) |
| 95 | 90 | 60 | 85 | 93 | 70 | [eval](https://sharegpt.com/c/VOhklyz) |
| 95 | 92 | 94 | 93 | 96 | 90 | [eval](https://sharegpt.com/c/wqy8m6k) |
| 95 | 90 | 85 | 93 | 87 | 92 | [eval](https://sharegpt.com/c/iWKrIuS) |
| 95 | 96 | 93 | 90 | 97 | 92 | [eval](https://sharegpt.com/c/o1h3w8N) |
| 100 | 0 | 0 | 100 | 0 | 0 | [eval](https://sharegpt.com/c/3UH9eed) |
| 60 | 100 | 0 | 80 | 0 | 0 | [eval](https://sharegpt.com/c/44g0FAh) |
| 0 | 100 | 60 | 0 | 0 | 90 | [eval](https://sharegpt.com/c/PaQlcrU) |
| 100 | 100 | 0 | 100 | 100 | 100 | [eval](https://sharegpt.com/c/51icV4o) |
| 100 | 100 | 100 | 100 | 95 | 100 | [eval](https://sharegpt.com/c/1VnbGAR) |
| 100 | 100 | 100 | 50 | 90 | 100 | [eval](https://sharegpt.com/c/EYGBrgw) |
| 100 | 100 | 100 | 100 | 95 | 90 | [eval](https://sharegpt.com/c/EGRduOt) |
| 100 | 100 | 100 | 95 | 0 | 100 | [eval](https://sharegpt.com/c/O3JJfnK) |
| 50 | 95 | 20 | 10 | 30 | 85 | [eval](https://sharegpt.com/c/2roVtAu) |
| 100 | 100 | 60 | 20 | 30 | 40 | [eval](https://sharegpt.com/c/sphFpfx) |
| 100 | 0 | 0 | 0 | 0 | 100 | [eval](https://sharegpt.com/c/OeWGKBo) |
| 0 | 100 | 60 | 0 | 0 | 80 | [eval](https://sharegpt.com/c/TOUsuFA) |
| 50 | 100 | 20 | 90 | 0 | 10 | [eval](https://sharegpt.com/c/Y3P6DCu) |
| 100 | 100 | 100 | 100 | 100 | 100 | [eval](https://sharegpt.com/c/hkbdeiM) |
| 100 | 100 | 100 | 100 | 100 | 100 | [eval](https://sharegpt.com/c/eubbaVC) |
| 40 | 100 | 95 | 0 | 100 | 40 | [eval](https://sharegpt.com/c/QWiF49v) |
| 100 | 100 | 100 | 100 | 80 | 100 | [eval](https://sharegpt.com/c/dKTapBu) |
| 100 | 100 | 100 | 0 | 90 | 40 | [eval](https://sharegpt.com/c/P8NGwFZ) |
| 0 | 100 | 100 | 50 | 70 | 20 | [eval](https://sharegpt.com/c/v96BtBL) |
| 100 | 100 | 50 | 90 | 0 | 95 | [eval](https://sharegpt.com/c/YRlzj1t) |
| 100 | 95 | 90 | 85 | 98 | 80 | [eval](https://sharegpt.com/c/76VX3eB) |
| 95 | 98 | 90 | 92 | 96 | 89 | [eval](https://sharegpt.com/c/JK1uNef) |
| 90 | 95 | 75 | 85 | 80 | 82 | [eval](https://sharegpt.com/c/ku6CKmx) |
| 95 | 98 | 50 | 92 | 96 | 94 | [eval](https://sharegpt.com/c/0iAFuKW) |
| 95 | 90 | 0 | 93 | 92 | 94 | [eval](https://sharegpt.com/c/6uGnKio) |
| 95 | 90 | 85 | 92 | 80 | 88 | [eval](https://sharegpt.com/c/lfpRBw8) |
| 95 | 93 | 75 | 85 | 90 | 92 | [eval](https://sharegpt.com/c/mKu70jb) |
| 90 | 95 | 88 | 85 | 92 | 89 | [eval](https://sharegpt.com/c/GkYzJHO) |
| 100 | 100 | 100 | 95 | 97 | 98 | [eval](https://sharegpt.com/c/mly2k0z) |
| 85 | 40 | 30 | 95 | 90 | 88 | [eval](https://sharegpt.com/c/5td2ob0) |
| 90 | 95 | 92 | 85 | 88 | 93 | [eval](https://sharegpt.com/c/0ISpWfy) |
| 95 | 96 | 92 | 90 | 89 | 93 | [eval](https://sharegpt.com/c/kdUDUn7) |
| 90 | 95 | 85 | 80 | 92 | 88 | [eval](https://sharegpt.com/c/fjMNYr2) |
| 95 | 98 | 65 | 90 | 85 | 93 | [eval](https://sharegpt.com/c/6xBIf2Q) |
| 95 | 92 | 96 | 97 | 90 | 89 | [eval](https://sharegpt.com/c/B9GY8Ln) |
| 95 | 90 | 92 | 91 | 89 | 93 | [eval](https://sharegpt.com/c/vn1FPU4) |
| 95 | 90 | 80 | 75 | 95 | 90 | [eval](https://sharegpt.com/c/YurEMYg) |
| 92 | 40 | 30 | 95 | 90 | 93 | [eval](https://sharegpt.com/c/D19Qeui) |
| 90 | 92 | 85 | 88 | 89 | 87 | [eval](https://sharegpt.com/c/5QRFfrt) |
| 95 | 80 | 90 | 92 | 91 | 88 | [eval](https://sharegpt.com/c/pYWPRi4) |
| 95 | 93 | 92 | 90 | 91 | 94 | [eval](https://sharegpt.com/c/wPRTntL) |
| 100 | 98 | 95 | 90 | 92 | 96 | [eval](https://sharegpt.com/c/F6PLYKE) |
| 95 | 92 | 80 | 85 | 90 | 93 | [eval](https://sharegpt.com/c/WeJnMGv) |
| 95 | 98 | 90 | 88 | 97 | 96 | [eval](https://sharegpt.com/c/zNKL49e) |
| 90 | 95 | 85 | 88 | 86 | 92 | [eval](https://sharegpt.com/c/kIKmA1b) |
| 100 | 100 | 100 | 100 | 100 | 100 | [eval](https://sharegpt.com/c/1btWd4O) |
| 90 | 95 | 85 | 96 | 92 | 88 | [eval](https://sharegpt.com/c/s9sf1Lp) |
| 100 | 98 | 95 | 99 | 97 | 96 | [eval](https://sharegpt.com/c/RWzv8py) |
| 95 | 92 | 70 | 90 | 93 | 89 | [eval](https://sharegpt.com/c/bYF7FqA) |
| 95 | 90 | 88 | 92 | 94 | 93 | [eval](https://sharegpt.com/c/SuUqjMj) |
| 95 | 90 | 93 | 92 | 85 | 94 | [eval](https://sharegpt.com/c/r0aRdYY) |
| 95 | 93 | 90 | 87 | 92 | 91 | [eval](https://sharegpt.com/c/VuMfkkd) |
| 95 | 93 | 90 | 96 | 92 | 91 | [eval](https://sharegpt.com/c/rhm6fa4) |
| 95 | 97 | 85 | 96 | 98 | 90 | [eval](https://sharegpt.com/c/DwXnyqG) |
| 95 | 92 | 90 | 85 | 93 | 94 | [eval](https://sharegpt.com/c/0ScdkGS) |
| 95 | 96 | 92 | 90 | 97 | 93 | [eval](https://sharegpt.com/c/6yIoCDU) |
| 95 | 93 | 96 | 94 | 90 | 92 | [eval](https://sharegpt.com/c/VubEvp9) |
| 95 | 94 | 93 | 92 | 90 | 89 | [eval](https://sharegpt.com/c/RHzmZWG) |
| 90 | 85 | 95 | 80 | 87 | 75 | [eval](https://sharegpt.com/c/IMiP9Zm) |
| 95 | 94 | 92 | 93 | 90 | 96 | [eval](https://sharegpt.com/c/bft4PIL) |
| 95 | 100 | 90 | 95 | 95 | 95 | [eval](https://sharegpt.com/c/iHXB34b) |
| 100 | 95 | 85 | 100 | 0 | 90 | [eval](https://sharegpt.com/c/vCGn9R7) |
| 100 | 95 | 90 | 95 | 100 | 95 | [eval](https://sharegpt.com/c/be8crZL) |
| 95 | 90 | 60 | 95 | 85 | 80 | [eval](https://sharegpt.com/c/33elmDz) |
| 100 | 95 | 90 | 98 | 97 | 99 | [eval](https://sharegpt.com/c/RWD3Zx7) |
| 95 | 90 | 85 | 95 | 80 | 92 | [eval](https://sharegpt.com/c/GiwBvM7) |
| 100 | 95 | 100 | 98 | 100 | 90 | [eval](https://sharegpt.com/c/hX2pYxk) |
| 100 | 95 | 80 | 85 | 90 | 85 | [eval](https://sharegpt.com/c/MfxdGd7) |
| 100 | 90 | 95 | 85 | 95 | 100 | [eval](https://sharegpt.com/c/28hQjmS) |
| 95 | 90 | 85 | 80 | 88 | 92 | [eval](https://sharegpt.com/c/fzy5EPe) |
| 100 | 100 | 0 | 0 | 100 | 0 | [eval](https://sharegpt.com/c/vwxPjbR) |
| 100 | 100 | 100 | 50 | 100 | 75 | [eval](https://sharegpt.com/c/FAYfFWy) |
| 100 | 100 | 0 | 0 | 100 | 0 | [eval](https://sharegpt.com/c/SoudGsQ) |
| 0 | 100 | 0 | 0 | 0 | 0 | [eval](https://sharegpt.com/c/mkwEgVn) |
| 100 | 100 | 50 | 0 | 0 | 0 | [eval](https://sharegpt.com/c/q8MQEsz) |
| 100 | 100 | 100 | 100 | 100 | 95 | [eval](https://sharegpt.com/c/tzHpsKh) |
| 100 | 100 | 50 | 0 | 0 | 0 | [eval](https://sharegpt.com/c/3ugYBtJ) |
| 100 | 100 | 0 | 0 | 100 | 0 | [eval](https://sharegpt.com/c/I6KfOJT) |
| 90 | 85 | 80 | 95 | 70 | 75 | [eval](https://sharegpt.com/c/enaV1CK) |
| 100 | 100 | 0 | 0 | 0 | 0 | [eval](https://sharegpt.com/c/JBk7oSh) |
</details>
### Training data
I used a jailbreak prompt to generate the synthetic instructions, which resulted in some training data that would likely be censored by other models, such as how-to prompts about synthesizing drugs, making homemade flamethrowers, etc. Mind you, this is all generated by ChatGPT, not me. My goal was to simply test some of the capabilities of ChatGPT when unfiltered (as much as possible), and not to intentionally produce any harmful/dangerous/etc. content.
The jailbreak prompt I used is the default prompt in the python code when using the `--uncensored` flag: https://github.com/jondurbin/airoboros/blob/main/airoboros/self_instruct.py#L39
I also did a few passes of manual cleanup to remove some bad prompts, but mostly I left the data as-is. Initially, the model was fairly bad at math/extrapolation, closed question-answering (heavy hallucination), and coding, so I did one more fine-tuning pass with additional synthetic instructions aimed at those types of problems.
Both the initial instructions and final-pass fine-tuning instructions will be published soon.
### Fine-tuning method
I used the excellent [FastChat](https://github.com/lm-sys/FastChat) module, running with:
```
source /workspace/venv/bin/activate
export NCCL_P2P_DISABLE=1
export NCCL_P2P_LEVEL=LOC
torchrun --nproc_per_node=8 --master_port=20001 /workspace/FastChat/fastchat/train/train_mem.py \
--model_name_or_path /workspace/llama-13b \
--data_path /workspace/as_conversations.json \
--bf16 True \
--output_dir /workspace/airoboros-uncensored-13b \
--num_train_epochs 3 \
--per_device_train_batch_size 20 \
--per_device_eval_batch_size 20 \
--gradient_accumulation_steps 2 \
--evaluation_strategy "steps" \
--eval_steps 500 \
--save_strategy "steps" \
--save_steps 500 \
--save_total_limit 10 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.04 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "full_shard auto_wrap offload" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--lazy_preprocess True
```
This ran on 8x NVIDIA 80GB A100s for about 40 hours.
![train/loss](IMG_0128.jpeg)
![eval/loss](IMG_0130.jpeg)
### Prompt format
The prompt should be 1:1 compatible with the FastChat/vicuna format, e.g.:
With a preamble:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: [prompt]
</s>
ASSISTANT:
```
Or just:
```
USER: [prompt]
</s>
ASSISTANT:
```
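If you're building prompts programmatically, a small helper along these lines does the job (just a sketch, not something shipped with the model):

```python
# Sketch of a helper that assembles the FastChat/vicuna-style prompt shown above.
from typing import Optional

def build_prompt(user_prompt: str, preamble: Optional[str] = None) -> str:
    # preamble is the optional system line; the separator matches the format above
    header = f"{preamble}\n" if preamble else ""
    return f"{header}USER: {user_prompt}\n</s>\nASSISTANT:"
```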
### License
The model is subject to the LLaMA license, and the dataset is subject to OpenAI's terms of use because it was generated with ChatGPT. Everything else is free.
|
Declan/Politico_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4594
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5244 | 4.3 | 1000 | 0.4794 |
| 0.5022 | 8.61 | 2000 | 0.4651 |
| 0.4941 | 12.91 | 3000 | 0.4611 |
| 0.4892 | 17.21 | 4000 | 0.4594 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DeepPavlov/rubert-base-cased-sentence | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"ru",
"arxiv:1508.05326",
"arxiv:1809.05053",
"arxiv:1908.10084",
"transformers",
"has_space"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 46,991 | null | ---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="karinjd/negformalfinal")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("karinjd/negformalfinal")
model = AutoModelForCausalLMWithValueHead.from_pretrained("karinjd/negformalfinal")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
DeividasM/wav2vec2-large-xlsr-53-lithuanian | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"lt",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Hans14/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
DeltaHub/lora_t5-base_mrpc | [
"pytorch",
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PolicyGradient
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Denny29/DialoGPT-medium-asunayuuki | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Hans14/Taxi_v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
DeskDown/MarianMixFT_en-fil | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: openrail
library_name: diffusers
pipeline_tag: text-to-image
--- |
Devmapall/paraphrase-quora | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_finetuned_newsgroups
results: []
---
# distilbert_finetuned_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [20 Newsgroups dataset](http://qwone.com/~jason/20Newsgroups/).
## Training procedure
Used 10% of the training set as the validation set.
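The split itself isn't shown here; a minimal sketch of an equivalent 90/10 split (using scikit-learn's copy of 20 Newsgroups, which may not match the exact preprocessing used for this model) would be:

```python
# Sketch: 90/10 train/validation split of 20 Newsgroups.
# Uses scikit-learn's bundled copy of the dataset; the original pipeline may differ.
from sklearn.datasets import fetch_20newsgroups
from sklearn.model_selection import train_test_split

newsgroups = fetch_20newsgroups(subset="train")
train_texts, val_texts, train_labels, val_labels = train_test_split(
    newsgroups.data, newsgroups.target, test_size=0.1, random_state=42
)
```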
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
Achieves 83.13% accuracy on the test set.
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DevsIA/Devs_IA | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
#### Current Training Steps: 68,000
This repo contains a low-rank adapter (LoRA) for LLaMA-13b
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in 52 languages.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca), and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translation API (conducted in April 2023).
3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 10
- Batch size: 128
- Cutoff length: 512
- Learning rate: 3e-4
- Lora _r_: 64
- Lora target modules: q_proj, k_proj, v_proj, o_proj
That is:
```
python finetune.py \
--base_model='decapoda-research/llama-13b-hf' \
--num_epochs=10 \
--batch_size=128 \
--cutoff_len=512 \
--group_by_length \
--output_dir='./bactrian-x-llama-13b-lora' \
--lora_target_modules='q_proj,k_proj,v_proj,o_proj' \
--lora_r=64 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
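For a quick inference check, the adapter can be attached to the base model with PEFT. The snippet below is only a sketch; the adapter path is a placeholder, and the official loading code is in the GitHub repository linked above.

```python
# Sketch: attaching the LoRA adapter to the LLaMA-13b base model with PEFT.
# "path/to/bactrian-x-llama-13b-lora" is a placeholder for this adapter's repo id or local path.
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base_model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-13b-hf", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-13b-hf")
model = PeftModel.from_pretrained(base_model, "path/to/bactrian-x-llama-13b-lora")
```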
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{bactrian,
author = {Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
title = {Bactrian-X: A Multilingual Replicable Instruction-Following Model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/MBZUAI-nlp/Bactrian-X}},
}
```
|
DevsIA/imagenes | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: hokaixian/bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hokaixian/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2760
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5545, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2760 | 0 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DewiBrynJones/wav2vec2-large-xlsr-welsh | [
"cy",
"dataset:common_voice",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- ctranslate2
- int8
- float16
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
- Muennighoff/P3
- Muennighoff/natural-instructions
widget:
- text: "Label the tweets as either 'positive', 'negative', 'mixed', or 'neutral': \n\nTweet: I can say that there isn't anything I would change.\nLabel: positive\n\nTweet: I'm not sure about this.\nLabel: neutral\n\nTweet: I liked some parts but I didn't like other parts.\nLabel: mixed\n\nTweet: I think the background image could have been better.\nLabel: negative\n\nTweet: I really like it.\nLabel:"
example_title: "Sentiment Analysis"
- text: "Please answer the following question:\n\nQuestion: What is the capital of Canada?\nAnswer: Ottawa\n\nQuestion: What is the currency of Switzerland?\nAnswer: Swiss franc\n\nQuestion: In which country is Wisconsin located?\nAnswer:"
example_title: "Question Answering"
- text: "Given a news article, classify its topic.\nPossible labels: 1. World 2. Sports 3. Business 4. Sci/Tech\n\nArticle: A nearby star thought to harbor comets and asteroids now appears to be home to planets, too.\nLabel: Sci/Tech\n\nArticle: Soaring crude prices plus worries about the economy and the outlook for earnings are expected to hang over the stock market next week during the depth of the summer doldrums.\nLabel: Business\n\nArticle: Murtagh a stickler for success Northeastern field hockey coach Cheryl Murtagh doesn't want the glare of the spotlight that shines on her to detract from a team that has been the America East champion for the past three years and has been to the NCAA tournament 13 times.\nLabel::"
example_title: "Topic Classification"
- text: "Paraphrase the given sentence into a different sentence.\n\nInput: Can you recommend some upscale restaurants in New York?\nOutput: What upscale restaurants do you recommend in New York?\n\nInput: What are the famous places we should not miss in Paris?\nOutput: Recommend some of the best places to visit in Paris?\n\nInput: Could you recommend some hotels that have cheap price in Zurich?\nOutput:"
example_title: "Paraphrasing"
- text: "Given a review from Amazon's food products, the task is to generate a short summary of the given review in the input.\n\nInput: I have bought several of the Vitality canned dog food products and have found them all to be of good quality. The product looks more like a stew than a processed meat and it smells better. My Labrador is finicky and she appreciates this product better than most.\nOutput: Good Quality Dog Food\n\nInput: Product arrived labeled as Jumbo Salted Peanuts...the peanuts were actually small sized unsalted. Not sure if this was an error or if the vendor intended to represent the product as 'Jumbo'.\nOutput: Not as Advertised\n\nInput: My toddler loves this game to a point where he asks for it. That's a big thing for me. Secondly, no glitching unlike one of their competitors (PlayShifu). Any tech I don’t have to reach out to support for help is a good tech for me. I even enjoy some of the games and activities in this. Overall, this is a product that shows that the developers took their time and made sure people would not be asking for refund. I’ve become bias regarding this product and honestly I look forward to buying more of this company’s stuff. Please keep up the great work.\nOutput:"
example_title: "Text Summarization"
- text: "Identify which sense of a word is meant in a given context.\n\nContext: The river overflowed the bank.\nWord: bank\nSense: river bank\n\nContext: A mouse takes much more room than a trackball.\nWord: mouse\nSense: computer mouse\n\nContext: The bank will not be accepting cash on Saturdays.\nWord: bank\nSense: commercial (finance) banks\n\nContext: Bill killed the project\nWord: kill\nSense:"
example_title: "Word Sense Disambiguation"
- text: "Given a pair of sentences, choose whether the two sentences agree (entailment)/disagree (contradiction) with each other.\nPossible labels: 1. entailment 2. contradiction\n\nSentence 1: The skier was on the edge of the ramp. Sentence 2: The skier was dressed in winter clothes.\nLabel: entailment\n\nSentence 1: The boy skated down the staircase railing. Sentence 2: The boy is a newbie skater.\nLabel: contradiction\n\nSentence 1: Two middle-aged people stand by a golf hole. Sentence 2: A couple riding in a golf cart.\nLabel:"
example_title: "Natural Language Inference"
inference:
parameters:
temperature: 0.7
top_p: 0.7
top_k: 50
max_new_tokens: 128
---
# Fast-Inference with Ctranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
Quantized version of [togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1)
```bash
pip install hf-hub-ctranslate2>=2.0.6
```
Converted on 2023-05-19 using
```
ct2-transformers-converter --model togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1 --output_dir /home/michael/tmp-ct2fast-RedPajama-INCITE-Instruct-7B-v0.1 --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization float16
```
Checkpoint compatible to [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-RedPajama-INCITE-Instruct-7B-v0.1"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1")
)
outputs = model.generate(
text=["How do you call a fast Flan-ingo?", "User: How are you doing? Bot:"],
)
print(outputs)
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original huggingface repo.
# Original description
# RedPajama-INCITE-Instruct-7B-v0.1
RedPajama-INCITE-Instruct-7B-v0.1 was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.
The model was fine-tuned for few-shot applications on the data of [GPT-JT](https://huggingface.co/togethercomputer/GPT-JT-6B-v1), with exclusion of tasks that overlap with the HELM core scenarios.
- Base Model: [RedPajama-INCITE-Base-7B-v0.1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1)
- Instruction-tuned Version: [RedPajama-INCITE-Instruct-7B-v0.1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1)
- Chat Version: [RedPajama-INCITE-Chat-7B-v0.1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-7B-v0.1)
## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 6.9B parameter pretrained language model.
# Quick Start
Please note that the model requires `transformers` version >= 4.25.1.
## GPU Inference
This requires a GPU with 16GB memory.
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1", torch_dtype=torch.float16)
model = model.to('cuda:0')
# infer
prompt = "Q: The capital of France is?\nA:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Paris
"""
```
## GPU Inference in Int8
This requires a GPU with 12GB memory.
To run inference with int8, please ensure you have installed accelerate and bitsandbytes. You can install them with the following command:
```bash
pip install accelerate
pip install bitsandbytes
```
Then you can run inference with int8 as follows:
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)
# infer
prompt = "Q: The capital of France is?\nA:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Paris
"""
```
## CPU Inference
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1", torch_dtype=torch.bfloat16)
# infer
prompt = "Q: The capital of France is?\nA:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Paris
"""
```
Please note that since `LayerNormKernelImpl` is not implemented in fp16 for CPU, we use `bfloat16` for CPU inference.
# Uses
## Direct Use
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner.
#### Out-of-Scope Use
RedPajama-INCITE-Instruct-7B-v0.1 is a language model and may not perform well for other use cases outside of its intended scope.
For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society.
It is important to consider the limitations of the model and to only use it for its intended purpose.
#### Misuse and Malicious Use
RedPajama-INCITE-Instruct-7B-v0.1 is designed for language modeling.
Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project.
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating fake news, misinformation, or propaganda
- Promoting hate speech, discrimination, or violence against individuals or groups
- Impersonating individuals or organizations without their consent
- Engaging in cyberbullying or harassment
- Defamatory content
- Spamming or scamming
- Sharing confidential or sensitive information without proper authorization
- Violating the terms of use of the model or the data used to train it
- Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming
## Limitations
RedPajama-INCITE-Instruct-7B-v0.1, like other language models, has limitations that should be taken into consideration.
For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data.
We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.
## Training
**Training Data**
Please refer to [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
**Training Procedure**
- **Hardware:** 8 A100
- **Optimizer:** Adam
- **Gradient Accumulations**: 1
- **Num of Tokens:** 131M tokens
- **Learning rate:** 1e-5
## Community
Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4) |
Dhito/am | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: cc
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- medical
inference: false
---
# medalpaca-13B-GGML
These are GGML format quantised 4-bit, 5-bit and 8-bit models of [Medalpaca 13B](https://huggingface.co/medalpaca/medalpaca-13b).
This repo is the result of quantising to 4-bit, 5-bit and 8-bit GGML for CPU (+CUDA) inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/medalpaca-13B-GPTQ-4bit).
* [4-bit, 5-bit 8-bit GGML models for llama.cpp CPU (+CUDA) inference](https://huggingface.co/TheBloke/medalpaca-13B-GGML).
* [medalpaca's float32 HF format repo for GPU inference and further conversions](https://huggingface.co/medalpaca/medalpaca-13b).
## THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508
I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.
For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.
## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `medalpaca-13B.ggmlv3.q4_0.bin` | q4_0 | 4bit | 8.14GB | 10.5GB | 4-bit. |
| `medalpaca-13B.ggmlv3.q4_1.bin` | q4_1 | 4bit | 8.14GB | 10.5GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| `medalpaca-13B.ggmlv3.q5_0.bin` | q5_0 | 5bit | 8.95GB | 11.0GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| `medalpaca-13B.ggmlv3.q5_1.bin` | q5_1 | 5bit | 9.76GB | 12.25GB | 5-bit. Even higher accuracy, and higher resource usage and slower inference. |
| `medalpaca-13B.ggmlv3.q8_0.bin` | q8_0 | 8bit | 14.6GB | 17GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 8 -m medalpaca-13B.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"
```
Change `-t 8` to the number of physical CPU cores you have.
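If you'd rather call the model from Python, the `llama-cpp-python` bindings expose similar options. Treat this as a sketch only (not tested with this exact file; make sure your llama-cpp-python build matches the new quantisation format):

```python
# Sketch: running the q5_0 file through the llama-cpp-python bindings.
from llama_cpp import Llama

llm = Llama(model_path="./medalpaca-13B.ggmlv3.q5_0.bin", n_ctx=2048, n_threads=8)
out = llm(
    "### Instruction: write a story about llamas ### Response:",
    max_tokens=256, temperature=0.7, repeat_penalty=1.1,
)
print(out["choices"][0]["text"])
```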
## How to run in `text-generation-webui`
GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.
# Original model card: MedAlpaca 13b
## Table of Contents
[Model Description](#model-description)
- [Architecture](#architecture)
- [Training Data](#training-data)
[Model Usage](#model-usage)
[Limitations](#limitations)
## Model Description
### Architecture
`medalpaca-13b` is a large language model specifically fine-tuned for medical domain tasks.
It is based on LLaMA (Large Language Model Meta AI) and contains 13 billion parameters.
The primary goal of this model is to improve question-answering and medical dialogue tasks.
### Training Data
The training data for this project was sourced from various resources.
Firstly, we used Anki flashcards to automatically generate questions
from the front of the cards and answers from the back of the cards.
Secondly, we generated medical question-answer pairs from [Wikidoc](https://www.wikidoc.org/index.php/Main_Page).
We extracted paragraphs with relevant headings, and used ChatGPT 3.5
to generate questions from the headings, using the corresponding paragraphs
as answers. This dataset is still under development, and we believe
that approximately 70% of these question-answer pairs are factually correct.
Thirdly, we used StackExchange to extract question-answer pairs, taking the
top-rated question from five categories: Academia, Bioinformatics, Biology,
Fitness, and Health. Additionally, we used a dataset from [ChatDoctor](https://arxiv.org/abs/2303.14070)
consisting of 200,000 question-answer pairs, available at https://github.com/Kent0n-Li/ChatDoctor.
| Source | n items |
|------------------------------|--------|
| ChatDoc large | 200000 |
| wikidoc | 67704 |
| Stackexchange academia | 40865 |
| Anki flashcards | 33955 |
| Stackexchange biology | 27887 |
| Stackexchange fitness | 9833 |
| Stackexchange health | 7721 |
| Wikidoc patient information | 5942 |
| Stackexchange bioinformatics | 5407 |
## Model Usage
To evaluate the performance of the model on a specific dataset, you can use the Hugging Face Transformers library's built-in evaluation scripts. Please refer to the evaluation guide for more information.
### Inference
You can use the model for inference tasks like question-answering and medical dialogues using the Hugging Face Transformers library. Here's an example of how to use the model for a question-answering task:
```python
from transformers import pipeline

# Note: the example from the original card uses the 7b checkpoint; substitute
# "medalpaca/medalpaca-13b" to run the model described in this card.
qa_pipeline = pipeline("question-answering", model="medalpaca/medalpaca-7b", tokenizer="medalpaca/medalpaca-7b")
question = "What are the symptoms of diabetes?"
context = "Diabetes is a metabolic disease that causes high blood sugar. The symptoms include increased thirst, frequent urination, and unexplained weight loss."
answer = qa_pipeline({"question": question, "context": context})
print(answer)
```
## Limitations
The model may not perform effectively outside the scope of the medical domain.
The training data primarily targets the knowledge level of medical students,
which may result in limitations when addressing the needs of board-certified physicians.
The model has not been tested in real-world applications, so its efficacy and accuracy are currently unknown.
It should never be used as a substitute for a doctor's opinion and must be treated as a research tool only.
|
Dibyaranjan/nl_image_search | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | "2023-05-19T13:43:21Z" | ---
language:
- en
- zh
tags:
- translation
license: apache-2.0
datasets:
- DDDSSS/en-zh-dataset
metrics:
- bleu
- sacrebleu
---
The main training data for this model is the English text from opus100 and CodeAlpaca_20K, translated into Chinese using ChatGLM as the translator; after filtering out dirty data, this produced the DDDSSS/en-zh-dataset dataset.
Inference:
```python
import argparse

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

parser = argparse.ArgumentParser()
parser.add_argument('--model', type=str, help='path or Hub id of the translation model')
parser.add_argument('--device', default="cpu", type=str, help='"cuda:1", "cuda:2", ...')
opt = parser.parse_args()

model_name = opt.model
device = opt.device
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
translation = pipeline("translation_en_to_zh", model=model, tokenizer=tokenizer,
                       torch_dtype="float", device=device)

x = ["If nothing is detected and there is a config.json file, it’s assumed the library is transformers.",
     "By looking into the presence of files such as *.nemo or *saved_model.pb*, the Hub can determine if a model is from NeMo or Keras."]
results = translation(x, max_length=450)
print('Translation:', results)
```
Fine-tuning:
```python
import numpy as np
import torch
import evaluate
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

# Load the translation pairs (JSON with a "translation": {"en": ..., "zh": ...} field)
# books = load_from_disk("")
books = load_dataset("json", data_files=".json")
books = books["train"].train_test_split(test_size=0.2)

checkpoint = "./opus-mt-en-zh"
# checkpoint = "./model/checkpoint-19304"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

source_lang = "en"
target_lang = "zh"

def preprocess_function(examples):
    # Tokenize the English source and the Chinese target in one call
    inputs = [example[source_lang] for example in examples["translation"]]
    targets = [example[target_lang] for example in examples["translation"]]
    model_inputs = tokenizer(inputs, text_target=targets, max_length=512, truncation=True)
    return model_inputs

tokenized_books = books.map(preprocess_function, batched=True)
data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint)

metric = evaluate.load("sacrebleu")

def postprocess_text(preds, labels):
    preds = [pred.strip() for pred in preds]
    labels = [[label.strip()] for label in labels]
    return preds, labels

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    if isinstance(preds, tuple):
        preds = preds[0]
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
    result = metric.compute(predictions=decoded_preds, references=decoded_labels)
    result = {"bleu": result["score"]}
    prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
    result["gen_len"] = np.mean(prediction_lens)
    result = {k: round(v, 4) for k, v in result.items()}
    return result

model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
batchsize = 4
training_args = Seq2SeqTrainingArguments(
    output_dir="./my_awesome_opus_books_model",
    evaluation_strategy="epoch",
    learning_rate=2e-4,
    per_device_train_batch_size=batchsize,
    per_device_eval_batch_size=batchsize,
    weight_decay=0.01,
    # save_total_limit=3,
    num_train_epochs=4,
    predict_with_generate=True,
    fp16=True,
    push_to_hub=False,
    save_strategy="epoch",
    jit_mode_eval=True,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_books["train"],
    eval_dataset=tokenized_books["test"],
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)
trainer.train()
```
|
DicoTiar/wisdomfiy | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | "2023-05-19T13:44:06Z" | ---
language:
- th
---
## Example: 20 sequences
```
num_beams=200, num_return_sequences=20, max_length=10
input = 'อินเดียเป็น <extra_id_0> ของโลก </s>'
output = ['อินเดียเป็น ศูนย์กลาง ของโลก </s>',
'อินเดียเป็น ประเทศที่ใหญ่ที่สุด ของโลก </s>',
'อินเดียเป็น ศูนย์กลาง ของโลก </s>',
'อินเดียเป็น ประเทศ ที่มีขนาดใหญ่ที่สุด ของโลก </s>',
'อินเดียเป็น ศูนย์กลาง ของโลก </s>',
'อินเดียเป็น ประเทศที่ใหญ่ที่สุด ของโลก </s>',
'อินเดียเป็น ประเทศที่มีขนาดใหญ่ที่สุด ของโลก </s>',
'อินเดียเป็น ศูนย์กลาง ของโลก </s>',
'อินเดียเป็น ดินแดน ของโลก </s>',
'อินเดียเป็น อันดับ 1 ของโลก </s>',
'อินเดียเป็น ประเทศทางประวัติศาสตร์ ของโลก </s>',
'อินเดียเป็น อันดับ 1 ของโลก </s>',
'อินเดียเป็น ศูนย์กลางทางประวัติศาสตร์ ของโลก </s>',
'อินเดียเป็น ดินแดน ของโลก </s>',
'อินเดียเป็น ประเทศ ใหญ่ที่สุด ของโลก </s>',
'อินเดียเป็น ประเทศที่มีขนาดใหญ่ที่สุด ของโลก </s>',
'อินเดียเป็น ประเทศ อันดับ 1 ของโลก </s>',
'อินเดียเป็น ประเทศ ที่ใหญ่ที่สุด ของโลก </s>',
'อินเดียเป็น ศูนย์กลาง ของโลก </s>',
'อินเดียเป็น ประเทศ อันดับ 1 ของโลก </s>']
``` |
DiegoAlysson/opus-mt-en-ro-finetuned-en-to-ro | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your environment version

# load_from_hub is the small helper from the Hugging Face Deep RL course that
# downloads and unpickles the saved Q-table dictionary from the Hub.
model = load_from_hub(repo_id="Yazidoo/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
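A hedged follow-up sketch: Q-table models pushed with the Deep RL course utilities typically store the table alongside the environment settings in the same dictionary. The key name `qtable` below is an assumption based on that convention.
```python
import numpy as np

# Roll out one episode acting greedily with respect to the learned Q-table.
env = gym.make(model["env_id"], is_slippery=False)
state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action for this state
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```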
|
DongHyoungLee/distilbert-base-uncased-finetuned-cola | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pokemon-classification
metrics:
- accuracy
model-index:
- name: my_awesome_pokemon_model_resnet18
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: pokemon-classification
type: pokemon-classification
config: full
split: validation
args: full
metrics:
- name: Accuracy
type: accuracy
value: 0.01079136690647482
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_pokemon_model_resnet18
This model is a fine-tuned version of [microsoft/resnet-18](https://huggingface.co/microsoft/resnet-18) on the pokemon-classification dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8019
- Accuracy: 0.0108
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
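For reference, the hyperparameters above correspond roughly to a `TrainingArguments` configuration like the following (a hedged sketch, not the exact script used to produce this checkpoint):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my_awesome_pokemon_model_resnet18",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size of 64
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
    evaluation_strategy="epoch",     # assumption: matches the per-epoch results below
)
```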
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.275 | 1.0 | 76 | 6.1680 | 0.0014 |
| 3.3896 | 1.99 | 152 | 6.6421 | 0.0115 |
| 3.0563 | 2.99 | 228 | 6.8019 | 0.0108 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Doquey/DialoGPT-small-Luisbot1 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.90 +/- 25.60
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
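Since the block above is left as a TODO in the original card, here is a minimal, hedged sketch of loading and running the checkpoint; the repo id and filename below are placeholders for this model's actual values, and the environment API follows recent Gymnasium conventions (older `gym` versions use a slightly different `reset`/`step` signature).
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholders: substitute the actual repo id and zip filename of this checkpoint.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```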
|