Dataset schema:
- pipeline_tag: stringclasses (48 values)
- library_name: stringclasses (205 values)
- text: stringlengths (0 to 18.3M)
- metadata: stringlengths (2 to 1.07B)
- id: stringlengths (5 to 122)
- last_modified: null
- tags: sequencelengths (1 to 1.84k)
- sha: null
- created_at: stringlengths (25 to 25)
null
null
{}
AhmedHassan19/model
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/mariancg-a-code-generation-transformer-model/code-generation-on-conala)](https://paperswithcode.com/sota/code-generation-on-conala?p=mariancg-a-code-generation-transformer-model) # MarianCG: a code generation transformer model inspired by machine translation This model addresses the code generation problem with a transformer model that achieves highly accurate results. We implemented the MarianCG transformer, a code generation model that generates code from natural language. This work demonstrates the impact of using the Marian machine translation model for solving the code generation problem. In our implementation, we show that a machine translation model can operate as a code generation model. Finally, we set a new state of the art on CoNaLa, reaching a BLEU score of 30.92 and an exact match accuracy of 6.2 on the code generation problem with the CoNaLa dataset. The MarianCG model and its implementation, together with the training code and the generated output, are available in this repository: https://github.com/AhmedSSoliman/MarianCG-NL-to-Code The CoNaLa dataset for code generation is available at https://huggingface.co/datasets/AhmedSSoliman/CoNaLa The model is available on the Hugging Face Hub at https://huggingface.co/AhmedSSoliman/MarianCG-CoNaLa ```python # Model and Tokenizer from transformers import AutoTokenizer, AutoModelForSeq2SeqLM # model_name = "AhmedSSoliman/MarianCG-NL-to-Code" model = AutoModelForSeq2SeqLM.from_pretrained("AhmedSSoliman/MarianCG-CoNaLa") tokenizer = AutoTokenizer.from_pretrained("AhmedSSoliman/MarianCG-CoNaLa") # Input (Natural Language) and Output (Python Code) NL_input = "create array containing the maximum value of respective elements of array `[2, 3, 4]` and array `[1, 5, 2]`" output = model.generate(**tokenizer(NL_input, padding="max_length", truncation=True, max_length=512, return_tensors="pt")) output_code = tokenizer.decode(output[0], skip_special_tokens=True) ``` This model is available as a Gradio demo in Spaces at: https://huggingface.co/spaces/AhmedSSoliman/MarianCG-CoNaLa --- Tasks: - Translation - Code Generation - Text2Text Generation - Text Generation --- # Citation We now have a [paper](https://doi.org/10.1186/s44147-022-00159-4) for this work and you can cite: ``` @article{soliman2022mariancg, title={MarianCG: a code generation transformer model inspired by machine translation}, author={Soliman, Ahmed S and Hadhoud, Mayada M and Shaheen, Samir I}, journal={Journal of Engineering and Applied Science}, volume={69}, number={1}, pages={1--23}, year={2022}, publisher={SpringerOpen}, url={https://doi.org/10.1186/s44147-022-00159-4} } ```
{"widget": [{"text": "create array containing the maximum value of respective elements of array `[2, 3, 4]` and array `[1, 5, 2]"}, {"text": "check if all elements in list `mylist` are identical"}, {"text": "enable debug mode on flask application `app`"}, {"text": "getting the length of `my_tuple`"}, {"text": "find all files in directory \"/mydir\" with extension \".txt\""}]}
AhmedSSoliman/MarianCG-CoNaLa
null
[ "transformers", "pytorch", "marian", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Ahmedahmed/Wewe
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Ahren09/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Back to the Future DialoGPT Model
{"tags": ["conversational"]}
AiPorter/DialoGPT-small-Back_to_the_future
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Rick DialoGPT Model
{"tags": ["conversational"]}
Aibox/DialoGPT-small-rick
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
Trained on Stephen King's top 50 books as .txt files.
{}
Aidan8756/stephenKingModel
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
{}
AidenGO/KDXF_Bert4MaskedLM
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AigizK/wav2vec2-large-xls-r-300m-bashkir-cv7_no_lm
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-bashkir-cv7_opt This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BA dataset. It achieves the following results on the evaluation set: - Training Loss: 0.268400 - Validation Loss: 0.088252 - WER without LM: 0.085588 - WER with LM: 0.04440795062008041 - CER with LM: 0.010491234992390509 ## Model description Trained with this [Jupyter notebook](https://drive.google.com/file/d/1KohDXZtKBWXVPZYlsLtqfxJGBzKmTtSh/view?usp=sharing) ## Intended uses & limitations In order to reduce the number of characters, the following letters have been replaced or removed: - 'я' -> 'йа' - 'ю' -> 'йу' - 'ё' -> 'йо' - 'е' -> 'йэ' for first letter - 'е' -> 'э' for other cases - 'ъ' -> deleted - 'ь' -> deleted Therefore, in order to get the correct text, you need to do the reverse transformation and use the language model. ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 300 - num_epochs: 50 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.1 - Pytorch 1.10.0+cu113 - Datasets 1.18.2 - Tokenizers 0.10.3
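The character substitutions listed in the card above can be undone with a small post-processing step. The sketch below is an illustration only (it is not part of the original training code): it reverses the digraphs, but 'ъ' and 'ь' cannot be recovered and the mapping is lossy, which is why the card recommends also applying the language model.

```python
# Rough sketch of the reverse mapping described above (illustrative only).
def restore_bashkir(text: str) -> str:
    digraphs = {"йа": "я", "йу": "ю", "йо": "ё", "йэ": "е"}
    for src, dst in digraphs.items():
        text = text.replace(src, dst)
    # Remaining 'э' (used for non-initial 'е' in the forward mapping) is mapped back blindly;
    # a language model is still needed to resolve ambiguous cases.
    return text.replace("э", "е")

decoded = "..."  # placeholder for the raw CTC decoder output
print(restore_bashkir(decoded))
```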
{"language": ["ba"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-bashkir-cv7_opt", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "ba"}, "metrics": [{"type": "wer", "value": 0.04440795062008041, "name": "Test WER"}, {"type": "cer", "value": 0.010491234992390509, "name": "Test CER"}]}]}]}
AigizK/wav2vec2-large-xls-r-300m-bashkir-cv7_opt
null
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "ba", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AimB/konlpy_berttokenizer_helsinki
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AimB/mT5-en-kr-aihub-netflix
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
You can use this model with simpletransformers. ``` !pip install simpletransformers from simpletransformers.t5 import T5Model model = T5Model("mt5", "AimB/mT5-en-kr-natural") print(model.predict(["I feel good today"])) print(model.predict(["우리집 고양이는 세상에서 제일 귀엽습니다"])) ```
{}
AimB/mT5-en-kr-natural
null
[ "transformers", "pytorch", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AimB/mT5-en-kr-opus
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Aimendo/Triage
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 35248482 - CO2 Emissions (in grams): 7.989144645413398 ## Validation Metrics - Loss: 0.13783401250839233 - Accuracy: 0.9728654124457308 - Macro F1: 0.949537871674076 - Micro F1: 0.9728654124457308 - Weighted F1: 0.9732422812610365 - Macro Precision: 0.9380372699332605 - Micro Precision: 0.9728654124457308 - Weighted Precision: 0.974548513256663 - Macro Recall: 0.9689346153591594 - Micro Recall: 0.9728654124457308 - Weighted Recall: 0.9728654124457308 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' /static-proxy?url=https%3A%2F%2Fapi-inference.huggingface.co%2Fmodels%2FAimendo%2Fautonlp-triage-35248482 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Aimendo/autonlp-triage-35248482", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Aimendo/autonlp-triage-35248482", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
{"language": "en", "tags": "autonlp", "datasets": ["Aimendo/autonlp-data-triage"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 7.989144645413398}
Aimendo/autonlp-triage-35248482
null
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:Aimendo/autonlp-data-triage", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 530014983 - CO2 Emissions (in grams): 55.10196329868386 ## Validation Metrics - Loss: 0.23171618580818176 - Accuracy: 0.9298837645294338 - Precision: 0.9314414866901055 - Recall: 0.9279459594696022 - AUC: 0.979447403984557 - F1: 0.9296904373981703 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' /static-proxy?url=https%3A%2F%2Fapi-inference.huggingface.co%2Fmodels%2FAjay191191%2Fautonlp-Test-530014983 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ajay191191/autonlp-Test-530014983", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ajay191191/autonlp-Test-530014983", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
{"language": "en", "tags": "autonlp", "datasets": ["Ajay191191/autonlp-data-Test"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 55.10196329868386}
Ajay191191/autonlp-Test-530014983
null
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:Ajay191191/autonlp-data-Test", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 16122692 ## Validation Metrics - Loss: 1.1877621412277222 - Rouge1: 42.0713 - Rouge2: 23.3043 - RougeL: 37.3755 - RougeLsum: 37.8961 - Gen Len: 60.7117 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' /static-proxy?url=https%3A%2F%2Fapi-inference.huggingface.co%2FAjaykannan6%2Fautonlp-manthan-16122692 ```
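The same endpoint can also be called locally through the transformers library. The snippet below is a sketch that mirrors the Python examples on the other AutoNLP model cards (the `use_auth_token=True` flag assumes the repository requires authentication):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("Ajaykannan6/autonlp-manthan-16122692", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ajaykannan6/autonlp-manthan-16122692", use_auth_token=True)

# Summarize a piece of text (the input below is just a placeholder).
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```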
{"language": "unk", "tags": "autonlp", "datasets": ["Ajaykannan6/autonlp-data-manthan"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
Ajaykannan6/autonlp-manthan-16122692
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autonlp", "unk", "dataset:Ajaykannan6/autonlp-data-manthan", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Ajteks/Chatbot
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AkaiSnow/Rick_bot
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Akame/Vi
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Akaramhuggingface/News
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-finetuned-squad This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.9492 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.8695 | 1.0 | 8248 | 0.8813 | | 0.6333 | 2.0 | 16496 | 0.8042 | | 0.4372 | 3.0 | 24744 | 0.9492 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.7.1 - Datasets 1.15.1 - Tokenizers 0.10.3
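As a usage sketch (an addition, not part of the auto-generated card), the fine-tuned checkpoint can be queried with the standard question-answering pipeline; the question and context below are made up:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Akari/albert-base-v2-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This ALBERT model was fine-tuned on the SQuAD v2 dataset.",
)
print(result["answer"], result["score"])
```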
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2"], "model-index": [{"name": "albert-base-v2-finetuned-squad", "results": []}]}
Akari/albert-base-v2-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "albert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-wikitext2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.8544 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.0915 | 1.0 | 2346 | 7.0517 | | 6.905 | 2.0 | 4692 | 6.8735 | | 6.8565 | 3.0 | 7038 | 6.8924 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
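As a usage sketch (an addition, not part of the auto-generated card), the checkpoint can be queried through the fill-mask pipeline:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Akash7897/bert-base-cased-wikitext2")
# bert-base-cased uses the [MASK] token.
for prediction in fill("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```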
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-cased-wikitext2", "results": []}]}
Akash7897/bert-base-cased-wikitext2
null
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.0789 - Matthews Correlation: 0.5222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.1472 | 1.0 | 535 | 0.8407 | 0.4915 | | 0.1365 | 2.0 | 1070 | 0.9236 | 0.4990 | | 0.1194 | 3.0 | 1605 | 0.8753 | 0.4953 | | 0.1313 | 4.0 | 2140 | 0.9684 | 0.5013 | | 0.0895 | 5.0 | 2675 | 1.0789 | 0.5222 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
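As a usage sketch (an addition, not part of the auto-generated card), the checkpoint can be used for acceptability judgments via the text-classification pipeline; note that the label names (e.g. LABEL_0/LABEL_1) depend on the checkpoint's config:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Akash7897/distilbert-base-uncased-finetuned-cola")
print(classifier("The book was read by the student."))  # grammatical sentence
print(classifier("Book the student read the by."))      # scrambled sentence
```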
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.522211073949747, "name": "Matthews Correlation"}]}]}]}
Akash7897/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sst2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.3010 - Accuracy: 0.9037 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1793 | 1.0 | 4210 | 0.3010 | 0.9037 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
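A usage sketch (an addition, not from the auto-generated card) that scores a sentence with the raw model instead of the pipeline:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "Akash7897/distilbert-base-uncased-finetuned-sst2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("This movie was a pleasant surprise.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 4) for i, p in enumerate(probs)})
```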
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-sst2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "sst2"}, "metrics": [{"type": "accuracy", "value": 0.9036697247706422, "name": "Accuracy"}]}]}]}
Akash7897/distilbert-base-uncased-finetuned-sst2
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Akash7897/fill_mask_model
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-wikitext2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.1079 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.558 | 1.0 | 2249 | 6.4672 | | 6.1918 | 2.0 | 4498 | 6.1970 | | 6.0019 | 3.0 | 6747 | 6.1079 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
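As a usage sketch (an addition, not part of the auto-generated card), text can be sampled from the fine-tuned checkpoint with the text-generation pipeline:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Akash7897/gpt2-wikitext2")
print(generator("The history of the city begins", max_length=40, do_sample=True)[0]["generated_text"])
```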
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "gpt2-wikitext2", "results": []}]}
Akash7897/gpt2-wikitext2
null
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Akash7897/my-newtokenizer
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Akash7897/test-clm
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Akashamba/distilbert-base-uncased-finetuned-ner
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
# Akashpb13/Central_kurdish_xlsr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - hu dataset. It achieves the following results on evaluation set (which is 10 percent of train data set merged with invalidated data, reported, other and dev datasets): - Loss: 0.348580 - Wer: 0.401147 ## Model description "facebook/wav2vec2-xls-r-300m" was finetuned. ## Intended uses & limitations More information needed ## Training and evaluation data Training data - Common voice Central Kurdish train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0 ## Training procedure For creating the train dataset, all possible datasets were appended and 90-10 split was used. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000095637994662983496 - train_batch_size: 16 - eval_batch_size: 16 - seed: 13 - gradient_accumulation_steps: 2 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 200 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Step | Training Loss | Validation Loss | Wer | |-------|---------------|-----------------|----------| | 500 | 5.097800 | 2.190326 | 1.001207 | | 1000 | 0.797500 | 0.331392 | 0.576819 | | 1500 | 0.405100 | 0.262009 | 0.549049 | | 2000 | 0.322100 | 0.248178 | 0.479626 | | 2500 | 0.264600 | 0.258866 | 0.488983 | | 3000 | 0.228300 | 0.261523 | 0.469665 | | 3500 | 0.201000 | 0.270135 | 0.451856 | | 4000 | 0.180900 | 0.279302 | 0.448536 | | 4500 | 0.163800 | 0.280921 | 0.459704 | | 5000 | 0.147300 | 0.319249 | 0.471778 | | 5500 | 0.137600 | 0.289546 | 0.449140 | | 6000 | 0.132000 | 0.311350 | 0.458195 | | 6500 | 0.117100 | 0.316726 | 0.432840 | | 7000 | 0.109200 | 0.302210 | 0.439481 | | 7500 | 0.104900 | 0.325913 | 0.439481 | | 8000 | 0.097500 | 0.329446 | 0.431935 | | 8500 | 0.088600 | 0.345259 | 0.425898 | | 9000 | 0.084900 | 0.342891 | 0.428313 | | 9500 | 0.080900 | 0.353081 | 0.424389 | | 10000 | 0.075600 | 0.347063 | 0.424992 | | 10500 | 0.072800 | 0.330086 | 0.424691 | | 11000 | 0.068100 | 0.350658 | 0.421974 | | 11500 | 0.064700 | 0.342949 | 0.413522 | | 12000 | 0.061500 | 0.341704 | 0.415334 | | 12500 | 0.059500 | 0.346279 | 0.411410 | | 13000 | 0.057400 | 0.349901 | 0.407184 | | 13500 | 0.056400 | 0.347733 | 0.402656 | | 14000 | 0.053300 | 0.344899 | 0.405976 | | 14500 | 0.052900 | 0.346708 | 0.402656 | | 15000 | 0.050600 | 0.344118 | 0.400845 | | 15500 | 0.050200 | 0.348396 | 0.402958 | | 16000 | 0.049800 | 0.348312 | 0.401751 | | 16500 | 0.051900 | 0.348372 | 0.401147 | | 17000 | 0.049800 | 0.348580 | 0.401147 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.18.1 - Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id Akashpb13/Central_kurdish_xlsr --dataset mozilla-foundation/common_voice_8_0 --config ckb --split test ```
{"language": ["ckb"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ckb", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Central_kurdish_xlsr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ckb"}, "metrics": [{"type": "wer", "value": 0.36754389884276845, "name": "Test WER"}, {"type": "cer", "value": 0.07827896768334217, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ckb"}, "metrics": [{"type": "wer", "value": 0.36754389884276845, "name": "Test WER"}, {"type": "cer", "value": 0.07827896768334217, "name": "Test CER"}]}]}]}
Akashpb13/Central_kurdish_xlsr
null
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ckb", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
# Akashpb13/Galician_xlsr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - gl dataset. It achieves the following results on the evaluation set (which is 10 percent of the train dataset merged with the invalidated, reported, other, and dev datasets): - Loss: 0.137096 - Wer: 0.196230 ## Model description "facebook/wav2vec2-xls-r-300m" was fine-tuned. ## Intended uses & limitations More information needed ## Training and evaluation data Training data - Common voice Galician train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv Only those points were considered where upvotes were greater than downvotes, and duplicates were removed after concatenation of all the datasets given in Common Voice 7.0. ## Training procedure For creating the training dataset, all possible datasets were appended and a 90-10 split was used. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000096 - train_batch_size: 16 - eval_batch_size: 16 - seed: 13 - gradient_accumulation_steps: 2 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 500 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Step | Training Loss | Validation Loss | Wer | |------|---------------|-----------------|----------| | 500 | 5.038100 | 3.035432 | 1.000000 | | 1000 | 2.180000 | 0.406300 | 0.557964 | | 1500 | 0.331700 | 0.153797 | 0.262394 | | 2000 | 0.171600 | 0.145268 | 0.235627 | | 2500 | 0.125900 | 0.136622 | 0.228087 | | 3000 | 0.105400 | 0.131650 | 0.224128 | | 3500 | 0.087600 | 0.141032 | 0.217531 | | 4000 | 0.078300 | 0.143675 | 0.214515 | | 4500 | 0.070000 | 0.144607 | 0.208106 | | 5000 | 0.061500 | 0.135259 | 0.202828 | | 5500 | 0.055600 | 0.130638 | 0.203959 | | 6000 | 0.050500 | 0.137416 | 0.202451 | | 6500 | 0.046600 | 0.140379 | 0.200000 | | 7000 | 0.040800 | 0.140179 | 0.200377 | | 7500 | 0.041000 | 0.138089 | 0.196795 | | 8000 | 0.038400 | 0.136927 | 0.197172 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id Akashpb13/Galician_xlsr --dataset mozilla-foundation/common_voice_8_0 --config gl --split test ```
{"language": ["gl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "gl", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Galician_xlsr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "kmr"}, "metrics": [{"type": "wer", "value": 0.11308483789555426, "name": "Test WER"}, {"type": "cer", "value": 0.023982371794871796, "name": "Test CER"}, {"type": "wer", "value": 11.31, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "gl"}, "metrics": [{"type": "wer", "value": 0.11308483789555426, "name": "Test WER"}, {"type": "cer", "value": 0.023982371794871796, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "gl"}, "metrics": [{"type": "wer", "value": 39.05, "name": "Test WER"}]}]}]}
Akashpb13/Galician_xlsr
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "gl", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
# Akashpb13/Hausa_xlsr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) It achieves the following results on the evaluation set (which is 10 percent of train data set merged with invalidated data, reported, other, and dev datasets): - Loss: 0.275118 - Wer: 0.329955 ## Model description "facebook/wav2vec2-xls-r-300m" was finetuned. ## Intended uses & limitations More information needed ## Training and evaluation data Training data - Common voice Hausa train.tsv, dev.tsv, invalidated.tsv, reported.tsv and other.tsv Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0 ## Training procedure For creating the training dataset, all possible datasets were appended and 90-10 split was used. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000096 - train_batch_size: 16 - eval_batch_size: 16 - seed: 13 - gradient_accumulation_steps: 2 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 500 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Step | Training Loss | Validation Loss | Wer | |------|---------------|-----------------|----------| | 500 | 5.175900 | 2.750914 | 1.000000 | | 1000 | 1.028700 | 0.338649 | 0.497999 | | 1500 | 0.332200 | 0.246896 | 0.402241 | | 2000 | 0.227300 | 0.239640 | 0.395839 | | 2500 | 0.175000 | 0.239577 | 0.373966 | | 3000 | 0.140400 | 0.243272 | 0.356095 | | 3500 | 0.119200 | 0.263761 | 0.365164 | | 4000 | 0.099300 | 0.265954 | 0.353428 | | 4500 | 0.084400 | 0.276367 | 0.349693 | | 5000 | 0.073700 | 0.282631 | 0.343825 | | 5500 | 0.068000 | 0.282344 | 0.341158 | | 6000 | 0.064500 | 0.281591 | 0.342491 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id Akashpb13/Hausa_xlsr --dataset mozilla-foundation/common_voice_8_0 --config ha --split test ```
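The evaluation command above relies on an external eval.py script. For plain transcription, a sketch (assuming a local 16 kHz mono recording; sample.wav is a hypothetical file) could use the ASR pipeline:

```python
from transformers import pipeline

# Illustrative only: transcribe a local audio file with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="Akashpb13/Hausa_xlsr")
print(asr("sample.wav"))  # hypothetical path to a 16 kHz mono recording
```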
{"language": ["ha"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "ha", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Hausa_xlsr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ha"}, "metrics": [{"type": "wer", "value": 0.20614541257934219, "name": "Test WER"}, {"type": "cer", "value": 0.04358048053214061, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ha"}, "metrics": [{"type": "wer", "value": 0.20614541257934219, "name": "Test WER"}, {"type": "cer", "value": 0.04358048053214061, "name": "Test CER"}]}]}]}
Akashpb13/Hausa_xlsr
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "ha", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
# Akashpb13/Kabyle_xlsr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - hu dataset. It achieves the following results on the evaluation set (which is 10 percent of train data set merged with dev datasets): - Loss: 0.159032 - Wer: 0.187934 ## Model description "facebook/wav2vec2-xls-r-300m" was finetuned. ## Intended uses & limitations More information needed ## Training and evaluation data Training data - Common voice Kabyle train.tsv. Only 50,000 records were sampled randomly and trained due to huge size of dataset. Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0 ## Training procedure For creating the training dataset, all possible datasets were appended and 90-10 split was used. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000096 - train_batch_size: 8 - seed: 13 - gradient_accumulation_steps: 4 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Step | Training Loss | Validation Loss | Wer | |-------|---------------|-----------------|----------| | 500 | 7.199800 | 3.130564 | 1.000000 | | 1000 | 1.570200 | 0.718097 | 0.734682 | | 1500 | 0.850800 | 0.524227 | 0.640532 | | 2000 | 0.712200 | 0.468694 | 0.603454 | | 2500 | 0.651200 | 0.413833 | 0.573025 | | 3000 | 0.603100 | 0.403680 | 0.552847 | | 3500 | 0.553300 | 0.372638 | 0.541719 | | 4000 | 0.537200 | 0.353759 | 0.531191 | | 4500 | 0.506300 | 0.359109 | 0.519601 | | 5000 | 0.479600 | 0.343937 | 0.511336 | | 5500 | 0.479800 | 0.338214 | 0.503948 | | 6000 | 0.449500 | 0.332600 | 0.495221 | | 6500 | 0.439200 | 0.323905 | 0.492635 | | 7000 | 0.434900 | 0.310417 | 0.484555 | | 7500 | 0.403200 | 0.311247 | 0.483262 | | 8000 | 0.401500 | 0.295637 | 0.476566 | | 8500 | 0.397000 | 0.301321 | 0.471672 | | 9000 | 0.371600 | 0.295639 | 0.468440 | | 9500 | 0.370700 | 0.294039 | 0.468902 | | 10000 | 0.364900 | 0.291195 | 0.468440 | | 10500 | 0.348300 | 0.284898 | 0.461098 | | 11000 | 0.350100 | 0.281764 | 0.459805 | | 11500 | 0.336900 | 0.291022 | 0.461606 | | 12000 | 0.330700 | 0.280467 | 0.455234 | | 12500 | 0.322500 | 0.271714 | 0.452694 | | 13000 | 0.307400 | 0.289519 | 0.455465 | | 13500 | 0.309300 | 0.281922 | 0.451217 | | 14000 | 0.304800 | 0.271514 | 0.452186 | | 14500 | 0.288100 | 0.286801 | 0.446830 | | 15000 | 0.293200 | 0.276309 | 0.445399 | | 15500 | 0.289800 | 0.287188 | 0.446230 | | 16000 | 0.274800 | 0.286406 | 0.441243 | | 16500 | 0.271700 | 0.284754 | 0.441520 | | 17000 | 0.262500 | 0.275431 | 0.442167 | | 17500 | 0.255500 | 0.276575 | 0.439858 | | 18000 | 0.260200 | 0.269911 | 0.435425 | | 18500 | 0.250600 | 0.270519 | 0.434686 | | 19000 | 0.243300 | 0.267655 | 0.437826 | | 19500 | 0.240600 | 0.277109 | 0.431731 | | 20000 | 0.237200 | 0.266622 | 0.433994 | | 20500 | 0.231300 | 0.273015 | 0.428868 | | 21000 | 0.227200 | 0.263024 | 0.430161 | | 21500 | 0.220400 | 0.272880 | 0.429607 | | 22000 | 0.218600 | 0.272340 | 0.426883 | | 22500 | 0.213100 | 0.277066 | 0.428407 | | 23000 | 0.205000 | 0.278404 | 0.424020 | | 23500 | 0.200900 | 0.270877 | 0.418987 | | 24000 | 0.199000 | 0.289120 | 0.425821 | | 24500 | 0.196100 | 0.275831 | 0.424066 | | 25000 | 0.191100 | 0.282822 | 0.421850 | | 25500 | 0.190100 | 0.275820 | 0.418248 | | 26000 | 
0.178800 | 0.279208 | 0.419125 | | 26500 | 0.183100 | 0.271464 | 0.419218 | | 27000 | 0.177400 | 0.280869 | 0.419680 | | 27500 | 0.171800 | 0.279593 | 0.414924 | | 28000 | 0.172900 | 0.276949 | 0.417648 | | 28500 | 0.164900 | 0.283491 | 0.417786 | | 29000 | 0.164800 | 0.283122 | 0.416078 | | 29500 | 0.165500 | 0.281969 | 0.415801 | | 30000 | 0.163800 | 0.283319 | 0.412753 | | 30500 | 0.153500 | 0.285702 | 0.414046 | | 31000 | 0.156500 | 0.285041 | 0.412615 | | 31500 | 0.150900 | 0.284336 | 0.413723 | | 32000 | 0.151800 | 0.285922 | 0.412292 | | 32500 | 0.149200 | 0.289461 | 0.412153 | | 33000 | 0.145400 | 0.291322 | 0.409567 | | 33500 | 0.145600 | 0.294361 | 0.409614 | | 34000 | 0.144200 | 0.290686 | 0.409059 | | 34500 | 0.143400 | 0.289474 | 0.409844 | | 35000 | 0.143500 | 0.290340 | 0.408367 | | 35500 | 0.143200 | 0.289581 | 0.407351 | | 36000 | 0.138400 | 0.292782 | 0.408736 | | 36500 | 0.137900 | 0.289108 | 0.408044 | | 37000 | 0.138200 | 0.292127 | 0.407166 | | 37500 | 0.134600 | 0.291797 | 0.408413 | | 38000 | 0.139800 | 0.290056 | 0.408090 | | 38500 | 0.136500 | 0.291198 | 0.408090 | | 39000 | 0.137700 | 0.289696 | 0.408044 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id Akashpb13/Kabyle_xlsr --dataset mozilla-foundation/common_voice_8_0 --config kab --split test ```
{"language": ["kab"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "sw", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Kabyle_xlsr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "kab"}, "metrics": [{"type": "wer", "value": 0.3188425282720088, "name": "Test WER"}, {"type": "cer", "value": 0.09443079928558358, "name": "Test CER"}]}]}]}
Akashpb13/Kabyle_xlsr
null
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "sw", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "kab", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
# Akashpb13/Swahili_xlsr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - sw dataset. It achieves the following results on the evaluation set (which is 10 percent of the train dataset merged with the dev dataset): - Loss: 0.159032 - Wer: 0.187934 ## Model description "facebook/wav2vec2-xls-r-300m" was fine-tuned. ## Intended uses & limitations More information needed ## Training and evaluation data Training data - Common voice Swahili train.tsv and dev.tsv Only those points were considered where upvotes were greater than downvotes, and duplicates were removed after concatenation of all the datasets given in Common Voice 7.0. ## Training procedure For creating the training dataset, all possible datasets were appended and a 90-10 split was used. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000096 - train_batch_size: 16 - eval_batch_size: 16 - seed: 13 - gradient_accumulation_steps: 2 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 500 - num_epochs: 80 - mixed_precision_training: Native AMP ### Training results | Step | Training Loss | Validation Loss | Wer | |------|---------------|-----------------|----------| | 500 | 4.810000 | 2.168847 | 0.995747 | | 1000 | 0.564200 | 0.209411 | 0.303485 | | 1500 | 0.217700 | 0.153959 | 0.239534 | | 2000 | 0.150700 | 0.139901 | 0.216327 | | 2500 | 0.119400 | 0.137543 | 0.208828 | | 3000 | 0.099500 | 0.140921 | 0.203045 | | 3500 | 0.087100 | 0.138835 | 0.199649 | | 4000 | 0.074600 | 0.141297 | 0.195844 | | 4500 | 0.066600 | 0.148560 | 0.194127 | | 5000 | 0.060400 | 0.151214 | 0.194388 | | 5500 | 0.054400 | 0.156072 | 0.192187 | | 6000 | 0.051100 | 0.154726 | 0.190322 | | 6500 | 0.048200 | 0.159847 | 0.189538 | | 7000 | 0.046400 | 0.158727 | 0.188307 | | 7500 | 0.046500 | 0.159032 | 0.187934 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id Akashpb13/Swahili_xlsr --dataset mozilla-foundation/common_voice_8_0 --config sw --split test ```
{"language": ["sw"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "sw"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Swahili_xlsr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "sw"}, "metrics": [{"type": "wer", "value": 0.11763625454589981, "name": "Test WER"}, {"type": "cer", "value": 0.02884228669922436, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "kmr"}, "metrics": [{"type": "wer", "value": 0.11763625454589981, "name": "Test WER"}, {"type": "cer", "value": 0.02884228669922436, "name": "Test CER"}]}]}]}
Akashpb13/Swahili_xlsr
null
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "sw", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
# Akashpb13/xlsr_hungarian_new This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - hu dataset. It achieves the following results on evaluation set (which is 10 percent of train data set merged with invalidated data, reported, other and dev datasets): - Loss: 0.197464 - Wer: 0.330094 ## Model description "facebook/wav2vec2-xls-r-300m" was finetuned. ## Intended uses & limitations More information needed ## Training and evaluation data Training data - Common voice hungarian train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0 ## Training procedure For creating the train dataset, all possible datasets were appended and 90-10 split was used. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000095637994662983496 - train_batch_size: 16 - eval_batch_size: 16 - seed: 13 - gradient_accumulation_steps: 16 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 500 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Step | Training Loss | Validation Loss | Wer | |------|---------------|-----------------|----------| | 500 | 4.785300 | 0.952295 | 0.796236 | | 1000 | 0.535800 | 0.217474 | 0.381613 | | 1500 | 0.258400 | 0.205524 | 0.345056 | | 2000 | 0.202800 | 0.198680 | 0.336264 | | 2500 | 0.182700 | 0.197464 | 0.330094 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id Akashpb13/xlsr_hungarian_new --dataset mozilla-foundation/common_voice_8_0 --config hu --split test ```
{"language": ["hu"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "hu", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/xlsr_hungarian_new", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "hu"}, "metrics": [{"type": "wer", "value": 0.2851621517163838, "name": "Test WER"}, {"type": "cer", "value": 0.06112982522287432, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "hu"}, "metrics": [{"type": "wer", "value": 0.2851621517163838, "name": "Test WER"}, {"type": "cer", "value": 0.06112982522287432, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "hu"}, "metrics": [{"type": "wer", "value": 47.15, "name": "Test WER"}]}]}]}
Akashpb13/xlsr_hungarian_new
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "hu", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
# Akashpb13/xlsr_kurmanji_kurdish This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - kmr dataset. It achieves the following results on the evaluation set (which is 10 percent of the train dataset merged with the invalidated, reported, other, and dev datasets): - Loss: 0.292389 - Wer: 0.388585 ## Model description "facebook/wav2vec2-xls-r-300m" was fine-tuned. ## Intended uses & limitations More information needed ## Training and evaluation data Training data - Common voice Kurmanji Kurdish train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv Only those points were considered where upvotes were greater than downvotes, and duplicates were removed after concatenation of all the datasets given in Common Voice 7.0. ## Training procedure For creating the training dataset, all possible datasets were appended and a 90-10 split was used. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000096 - train_batch_size: 16 - eval_batch_size: 16 - seed: 13 - gradient_accumulation_steps: 16 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 200 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Step | Training Loss | Validation Loss | Wer | |------|---------------|-----------------|----------| | 200 | 4.382500 | 3.183725 | 1.000000 | | 400 | 2.870200 | 0.996664 | 0.781117 | | 600 | 0.609900 | 0.333755 | 0.445052 | | 800 | 0.326800 | 0.305729 | 0.403157 | | 1000 | 0.255000 | 0.290734 | 0.391621 | | 1200 | 0.226300 | 0.292389 | 0.388585 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.18.1 - Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id Akashpb13/xlsr_kurmanji_kurdish --dataset mozilla-foundation/common_voice_8_0 --config kmr --split test ```
{"language": ["kmr", "ku"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "kmr", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/xlsr_kurmanji_kurdish", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "kmr"}, "metrics": [{"type": "wer", "value": 0.33073206986250464, "name": "Test WER"}, {"type": "cer", "value": 0.08035244447163924, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "kmr"}, "metrics": [{"type": "wer", "value": 0.33073206986250464, "name": "Test WER"}, {"type": "cer", "value": 0.08035244447163924, "name": "Test CER"}]}]}]}
Akashpb13/xlsr_kurmanji_kurdish
null
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "kmr", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "ku", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Maltese Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Maltese using the [Common Voice](https://huggingface.co/datasets/common_voice) When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys model_name = "Akashpb13/xlsr_maltese_wav2vec2" device = "cuda" chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\)\\(\\*)]' model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(model_name) ds = load_dataset("common_voice", "mt", split="test", data_dir="./cv-corpus-6.1-2020-12-11") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " " return batch ds = ds.map(map_to_array) def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids) batch["target"] = batch["sentence"] return batch result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys())) wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` **Test Result**: 29.42 %
{"language": "mt", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Maltese by Akash PB", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice mt", "type": "common_voice", "args": {}}, "metrics": [{"type": "wer", "value": 29.42, "name": "Test WER"}]}]}]}
Akashpb13/xlsr_maltese_wav2vec2
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "mt", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Akbarariza/Anjar
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Akira-Yanagi/distilbert-base-uncased-finetuned-cola
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Akiva/Joke
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Akjder/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Aklily/Lilys
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
image-classification
transformers
# BEiT for Face Mask Detection BEiT model pre-trained and fine-tuned on the self-curated custom Face-Mask18K dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper BEIT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei. ## Model description The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that. ## Training Metrics epoch = 0.55 total_flos = 576468516GF train_loss = 0.151 train_runtime = 0:58:16.56 train_samples_per_second = 16.505 train_steps_per_second = 1.032 --- ## Evaluation Metrics epoch = 0.55 eval_accuracy = 0.975 eval_loss = 0.0803 eval_runtime = 0:03:13.02 eval_samples_per_second = 18.629 eval_steps_per_second = 2.331
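As a usage sketch (an addition, not part of the original card), the fine-tuned checkpoint can be applied to a local image through the image-classification pipeline; mask.jpg is a hypothetical file and the label names come from the model's own config:

```python
from transformers import pipeline

# Illustrative only: classify a local image with the fine-tuned checkpoint.
classifier = pipeline("image-classification", model="AkshatSurolia/BEiT-FaceMask-Finetuned")
print(classifier("mask.jpg"))  # hypothetical local image
```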
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["Face-Mask18K"]}
AkshatSurolia/BEiT-FaceMask-Finetuned
null
[ "transformers", "pytorch", "beit", "image-classification", "dataset:Face-Mask18K", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
image-classification
transformers
# ConvNeXt for Face Mask Detection

ConvNeXt model pre-trained and fine-tuned on a self-curated custom Face-Mask18K dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao et al.

## Training Metrics

- epoch = 3.54
- total_flos = 1195651761GF
- train_loss = 0.0079
- train_runtime = 1:08:20.25
- train_samples_per_second = 14.075
- train_steps_per_second = 0.22

---

## Evaluation Metrics

- epoch = 3.54
- eval_accuracy = 0.9961
- eval_loss = 0.0151
- eval_runtime = 0:01:23.47
- eval_samples_per_second = 43.079
- eval_steps_per_second = 5.391
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["Face-Mask18K"]}
AkshatSurolia/ConvNeXt-FaceMask-Finetuned
null
[ "transformers", "pytorch", "safetensors", "convnext", "image-classification", "dataset:Face-Mask18K", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
image-classification
transformers
# Distilled Data-efficient Image Transformer for Face Mask Detection

Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on a self-curated custom Face-Mask18K dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al.

## Model description

This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded.

## Training Metrics

- epoch = 2.0
- total_flos = 2078245655GF
- train_loss = 0.0438
- train_runtime = 1:37:16.87
- train_samples_per_second = 9.887
- train_steps_per_second = 0.309

---

## Evaluation Metrics

- epoch = 2.0
- eval_accuracy = 0.9922
- eval_loss = 0.0271
- eval_runtime = 0:03:17.36
- eval_samples_per_second = 18.22
- eval_steps_per_second = 2.28
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["Face-Mask18K"]}
AkshatSurolia/DeiT-FaceMask-Finetuned
null
[ "transformers", "pytorch", "deit", "image-classification", "dataset:Face-Mask18K", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# Clinical BERT for ICD-10 Prediction

The Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries.

---

## How to use the model

Load the model via the transformers library:

```python
from transformers import AutoTokenizer, BertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("AkshatSurolia/ICD-10-Code-Prediction")
model = BertForSequenceClassification.from_pretrained("AkshatSurolia/ICD-10-Code-Prediction")
config = model.config
```

Run the model with clinical diagnosis text:

```python
text = "subarachnoid hemorrhage scalp laceration service: surgery major surgical or invasive"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

Return the top-5 predicted ICD-10 codes:

```python
results = output.logits.detach().cpu().numpy()[0].argsort()[::-1][:5]
top_5_codes = [config.id2label[ids] for ids in results]  # map class indices to ICD-10 code labels
print(top_5_codes)
```
{"license": "apache-2.0", "tags": ["text-classification"]}
AkshatSurolia/ICD-10-Code-Prediction
null
[ "transformers", "pytorch", "bert", "text-classification", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
image-classification
transformers
# Vision Transformer (ViT) for Face Mask Detection

Vision Transformer (ViT) model pre-trained and fine-tuned on a self-curated custom Face-Mask18K dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Dosovitskiy et al.

## Model description

The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification). By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images, for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.

## Training Metrics

- epoch = 0.89
- total_flos = 923776502GF
- train_loss = 0.057
- train_runtime = 0:40:10.40
- train_samples_per_second = 23.943
- train_steps_per_second = 1.497

---

## Evaluation Metrics

- epoch = 0.89
- eval_accuracy = 0.9894
- eval_loss = 0.0395
- eval_runtime = 0:00:36.81
- eval_samples_per_second = 97.685
- eval_steps_per_second = 12.224
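## How to use (illustrative)

No inference snippet is included in the original card. As a sketch, assuming a reasonably recent `transformers` release with the image-classification pipeline is installed, the checkpoint can be exercised like this; the image path is a placeholder and the label names come from the checkpoint's config:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="AkshatSurolia/ViT-FaceMask-Finetuned")

# "face.jpg" is a placeholder path; a URL or a PIL.Image also works
predictions = classifier("face.jpg")
for p in predictions:
    print(p["label"], round(p["score"], 4))
```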
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["Face-Mask18K"]}
AkshatSurolia/ViT-FaceMask-Finetuned
null
[ "transformers", "pytorch", "safetensors", "vit", "image-classification", "dataset:Face-Mask18K", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AkshayDev/BERT_Fine_Tuning
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AkshaySg/GrammarCorrection
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
# Spoken Language Identification Model

## Model description

The model classifies a speech utterance according to the language spoken. It covers the following languages: English, Indonesian, Japanese, Korean, Thai, Vietnamese, and Mandarin Chinese.
{"language": "multilingual", "license": "apache-2.0", "tags": ["LID", "spoken language recognition"], "datasets": ["VoxLingua107"], "metrics": ["ER"], "inference": false}
AkshaySg/LanguageIdentification
null
[ "LID", "spoken language recognition", "multilingual", "dataset:VoxLingua107", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
AkshaySg/gramCorrection
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
audio-classification
speechbrain
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model ## Model description This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain. The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition. The model can classify a speech utterance according to the language spoken. It covers 107 different languages ( Abkhazian, Afrikaans, Amharic, Arabic, Assamese, Azerbaijani, Bashkir, Belarusian, Bulgarian, Bengali, Tibetan, Breton, Bosnian, Catalan, Cebuano, Czech, Welsh, Danish, German, Greek, English, Esperanto, Spanish, Estonian, Basque, Persian, Finnish, Faroese, French, Galician, Guarani, Gujarati, Manx, Hausa, Hawaiian, Hindi, Croatian, Haitian, Hungarian, Armenian, Interlingua, Indonesian, Icelandic, Italian, Hebrew, Japanese, Javanese, Georgian, Kazakh, Central Khmer, Kannada, Korean, Latin, Luxembourgish, Lingala, Lao, Lithuanian, Latvian, Malagasy, Maori, Macedonian, Malayalam, Mongolian, Marathi, Malay, Maltese, Burmese, Nepali, Dutch, Norwegian Nynorsk, Norwegian, Occitan, Panjabi, Polish, Pushto, Portuguese, Romanian, Russian, Sanskrit, Scots, Sindhi, Sinhala, Slovak, Slovenian, Shona, Somali, Albanian, Serbian, Sundanese, Swedish, Swahili, Tamil, Telugu, Tajik, Thai, Turkmen, Tagalog, Turkish, Tatar, Ukrainian, Urdu, Uzbek, Vietnamese, Waray, Yiddish, Yoruba, Mandarin Chinese). ## Intended uses & limitations The model has two uses: - use 'as is' for spoken language recognition - use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data The model is trained on automatically collected YouTube data. For more information about the dataset, see [here](http://bark.phon.ioc.ee/voxlingua107/). #### How to use ```python import torchaudio from speechbrain.pretrained import EncoderClassifier language_id = EncoderClassifier.from_hparams(source="TalTechNLP/voxlingua107-epaca-tdnn", savedir="tmp") # Download Thai language sample from Omniglot and cvert to suitable form signal = language_id.load_audio("https://omniglot.com/soundfiles/udhr/udhr_th.mp3") prediction = language_id.classify_batch(signal) print(prediction) (tensor([[0.3210, 0.3751, 0.3680, 0.3939, 0.4026, 0.3644, 0.3689, 0.3597, 0.3508, 0.3666, 0.3895, 0.3978, 0.3848, 0.3957, 0.3949, 0.3586, 0.4360, 0.3997, 0.4106, 0.3886, 0.4177, 0.3870, 0.3764, 0.3763, 0.3672, 0.4000, 0.4256, 0.4091, 0.3563, 0.3695, 0.3320, 0.3838, 0.3850, 0.3867, 0.3878, 0.3944, 0.3924, 0.4063, 0.3803, 0.3830, 0.2996, 0.4187, 0.3976, 0.3651, 0.3950, 0.3744, 0.4295, 0.3807, 0.3613, 0.4710, 0.3530, 0.4156, 0.3651, 0.3777, 0.3813, 0.6063, 0.3708, 0.3886, 0.3766, 0.4023, 0.3785, 0.3612, 0.4193, 0.3720, 0.4406, 0.3243, 0.3866, 0.3866, 0.4104, 0.4294, 0.4175, 0.3364, 0.3595, 0.3443, 0.3565, 0.3776, 0.3985, 0.3778, 0.2382, 0.4115, 0.4017, 0.4070, 0.3266, 0.3648, 0.3888, 0.3907, 0.3755, 0.3631, 0.4460, 0.3464, 0.3898, 0.3661, 0.3883, 0.3772, 0.9289, 0.3687, 0.4298, 0.4211, 0.3838, 0.3521, 0.3515, 0.3465, 0.4772, 0.4043, 0.3844, 0.3973, 0.4343]]), tensor([0.9289]), tensor([94]), ['th']) # The scores in the prediction[0] tensor can be interpreted as cosine scores between # the languages and the given utterance (i.e., the larger the better) # The identified language ISO code is given in prediction[3] print(prediction[3]) ['th'] # Alternatively, use the utterance embedding extractor: emb = language_id.encode_batch(signal) print(emb.shape) torch.Size([1, 1, 256]) ``` #### Limitations and bias Since the model is trained on VoxLingua107, it 
has many limitations and biases, some of which are: - Its accuracy on smaller languages is probably quite limited - It probably works worse on female speech than on male speech (because YouTube data includes much more male speech) - Based on subjective experiments, it doesn't work well on speech with a foreign accent - It probably doesn't work well on children's speech or on speech from persons with speech disorders ## Training data The model is trained on [VoxLingua107](http://bark.phon.ioc.ee/voxlingua107/). VoxLingua107 is a speech dataset for training spoken language identification models. The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives. VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours. The average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a separate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language. ## Training procedure We used [SpeechBrain](https://github.com/speechbrain/speechbrain) to train the model. The training recipe will be published soon. ## Evaluation results Error rate: 7% on the development dataset ### BibTeX entry and citation info ```bibtex @inproceedings{valk2021slt, title={{VoxLingua107}: a Dataset for Spoken Language Recognition}, author={J{\"o}rgen Valk and Tanel Alum{\"a}e}, booktitle={Proc. IEEE SLT Workshop}, year={2021}, } ```
{"language": "multilingual", "license": "apache-2.0", "tags": ["audio-classification", "speechbrain", "embeddings", "Language", "Identification", "pytorch", "ECAPA-TDNN", "TDNN", "VoxLingua107"], "datasets": ["VoxLingua107"], "metrics": ["Accuracy"], "widget": [{"example_title": "English Sample", "src": "/static-proxy?url=https%3A%2F%2Fcdn-media.huggingface.co%2Fspeech_samples%2FLibriSpeech_61-70968-0000.flac"}]}
AkshaySg/langid
null
[ "speechbrain", "audio-classification", "embeddings", "Language", "Identification", "pytorch", "ECAPA-TDNN", "TDNN", "VoxLingua107", "multilingual", "dataset:VoxLingua107", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Akuva2001/SocialGraph
null
[ "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Al/mymodel
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AlErysvi/Erys
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
{}
Alaeddin/convbert-base-turkish-ner-cased
null
[ "transformers", "pytorch", "convbert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AlanDev/DallEMiniButBetter
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AlanDev/dall-e-better
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AlanDev/test
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
AlbertHSU/BertTEST
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
{}
AlbertHSU/ChineseFoodBert
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Alberto15Romero/GptNeo
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AlchemistDude/DialoGPT-medium-Gon
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Ale/Alen
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Aleenbo/Arcane
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-srb-base-cased-oscar This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0 - Datasets 1.11.0 - Tokenizers 0.10.1
{"tags": ["generated_from_trainer"], "model_index": [{"name": "bert-srb-base-cased-oscar", "results": [{"task": {"name": "Masked Language Modeling", "type": "fill-mask"}}]}]}
Aleksandar/bert-srb-base-cased-oscar
null
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Aleksandar/bert-srb-ner-setimes-lr
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-srb-ner-setimes This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1955 - Precision: 0.8229 - Recall: 0.8465 - F1: 0.8345 - Accuracy: 0.9645 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 104 | 0.2281 | 0.6589 | 0.7001 | 0.6789 | 0.9350 | | No log | 2.0 | 208 | 0.1833 | 0.7105 | 0.7694 | 0.7388 | 0.9470 | | No log | 3.0 | 312 | 0.1573 | 0.7461 | 0.7778 | 0.7616 | 0.9525 | | No log | 4.0 | 416 | 0.1489 | 0.7665 | 0.8091 | 0.7872 | 0.9557 | | 0.1898 | 5.0 | 520 | 0.1445 | 0.7881 | 0.8327 | 0.8098 | 0.9587 | | 0.1898 | 6.0 | 624 | 0.1473 | 0.7913 | 0.8316 | 0.8109 | 0.9601 | | 0.1898 | 7.0 | 728 | 0.1558 | 0.8101 | 0.8347 | 0.8222 | 0.9620 | | 0.1898 | 8.0 | 832 | 0.1616 | 0.8026 | 0.8302 | 0.8162 | 0.9612 | | 0.1898 | 9.0 | 936 | 0.1716 | 0.8127 | 0.8409 | 0.8266 | 0.9631 | | 0.0393 | 10.0 | 1040 | 0.1751 | 0.8140 | 0.8369 | 0.8253 | 0.9628 | | 0.0393 | 11.0 | 1144 | 0.1775 | 0.8096 | 0.8420 | 0.8255 | 0.9626 | | 0.0393 | 12.0 | 1248 | 0.1763 | 0.8161 | 0.8386 | 0.8272 | 0.9636 | | 0.0393 | 13.0 | 1352 | 0.1949 | 0.8259 | 0.8400 | 0.8329 | 0.9634 | | 0.0393 | 14.0 | 1456 | 0.1842 | 0.8205 | 0.8420 | 0.8311 | 0.9642 | | 0.0111 | 15.0 | 1560 | 0.1862 | 0.8160 | 0.8493 | 0.8323 | 0.9646 | | 0.0111 | 16.0 | 1664 | 0.1989 | 0.8176 | 0.8367 | 0.8270 | 0.9627 | | 0.0111 | 17.0 | 1768 | 0.1945 | 0.8246 | 0.8409 | 0.8327 | 0.9638 | | 0.0111 | 18.0 | 1872 | 0.1997 | 0.8270 | 0.8426 | 0.8347 | 0.9634 | | 0.0111 | 19.0 | 1976 | 0.1917 | 0.8258 | 0.8491 | 0.8373 | 0.9651 | | 0.0051 | 20.0 | 2080 | 0.1955 | 0.8229 | 0.8465 | 0.8345 | 0.9645 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0 - Datasets 1.11.0 - Tokenizers 0.10.1
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bert-srb-ner-setimes", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9645112274185379}}]}]}
Aleksandar/bert-srb-ner-setimes
null
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-srb-ner This model was trained from scratch on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.3561 - Precision: 0.8909 - Recall: 0.9082 - F1: 0.8995 - Accuracy: 0.9547 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3907 | 1.0 | 625 | 0.2316 | 0.8255 | 0.8314 | 0.8285 | 0.9259 | | 0.2091 | 2.0 | 1250 | 0.1920 | 0.8598 | 0.8731 | 0.8664 | 0.9420 | | 0.1562 | 3.0 | 1875 | 0.1833 | 0.8608 | 0.8820 | 0.8713 | 0.9441 | | 0.0919 | 4.0 | 2500 | 0.1985 | 0.8712 | 0.8886 | 0.8798 | 0.9476 | | 0.0625 | 5.0 | 3125 | 0.2195 | 0.8762 | 0.8923 | 0.8842 | 0.9485 | | 0.0545 | 6.0 | 3750 | 0.2320 | 0.8706 | 0.9004 | 0.8852 | 0.9495 | | 0.0403 | 7.0 | 4375 | 0.2459 | 0.8817 | 0.8957 | 0.8887 | 0.9505 | | 0.0269 | 8.0 | 5000 | 0.2603 | 0.8813 | 0.9021 | 0.8916 | 0.9516 | | 0.0193 | 9.0 | 5625 | 0.2916 | 0.8812 | 0.8949 | 0.8880 | 0.9500 | | 0.0162 | 10.0 | 6250 | 0.2938 | 0.8814 | 0.9025 | 0.8918 | 0.9520 | | 0.0134 | 11.0 | 6875 | 0.3330 | 0.8809 | 0.8961 | 0.8885 | 0.9497 | | 0.0076 | 12.0 | 7500 | 0.3141 | 0.8840 | 0.9025 | 0.8932 | 0.9524 | | 0.0069 | 13.0 | 8125 | 0.3292 | 0.8819 | 0.9065 | 0.8940 | 0.9535 | | 0.0053 | 14.0 | 8750 | 0.3454 | 0.8844 | 0.9018 | 0.8930 | 0.9523 | | 0.0038 | 15.0 | 9375 | 0.3519 | 0.8912 | 0.9061 | 0.8986 | 0.9539 | | 0.0034 | 16.0 | 10000 | 0.3437 | 0.8894 | 0.9038 | 0.8965 | 0.9539 | | 0.0024 | 17.0 | 10625 | 0.3518 | 0.8896 | 0.9072 | 0.8983 | 0.9543 | | 0.0018 | 18.0 | 11250 | 0.3572 | 0.8877 | 0.9072 | 0.8973 | 0.9543 | | 0.0015 | 19.0 | 11875 | 0.3554 | 0.8910 | 0.9081 | 0.8994 | 0.9549 | | 0.0011 | 20.0 | 12500 | 0.3561 | 0.8909 | 0.9082 | 0.8995 | 0.9547 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0 - Datasets 1.11.0 - Tokenizers 0.10.1
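The generated card lists hyperparameters and metrics but no usage example. As a sketch, assuming a `transformers` version that supports `aggregation_strategy` (which the 4.9.x series used for training does), the checkpoint can be tried with the token-classification pipeline; the Serbian example sentence is invented, and the entity labels follow whatever scheme is stored in the checkpoint's config:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Aleksandar/bert-srb-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Новак Ђоковић је рођен у Београду."))
```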
{"tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bert-srb-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "sr"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9546696220907545}}]}]}
Aleksandar/bert-srb-ner
null
[ "transformers", "pytorch", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:wikiann", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-srb-base-cased-oscar This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0 - Datasets 1.11.0 - Tokenizers 0.10.1
{"tags": ["generated_from_trainer"], "model_index": [{"name": "distilbert-srb-base-cased-oscar", "results": [{"task": {"name": "Masked Language Modeling", "type": "fill-mask"}}]}]}
Aleksandar/distilbert-srb-base-cased-oscar
null
[ "transformers", "pytorch", "distilbert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Aleksandar/distilbert-srb-ner-setimes-lr
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-srb-ner-setimes This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1838 - Precision: 0.8370 - Recall: 0.8617 - F1: 0.8492 - Accuracy: 0.9665 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 104 | 0.2319 | 0.6668 | 0.7029 | 0.6844 | 0.9358 | | No log | 2.0 | 208 | 0.1850 | 0.7265 | 0.7508 | 0.7385 | 0.9469 | | No log | 3.0 | 312 | 0.1584 | 0.7555 | 0.7937 | 0.7741 | 0.9538 | | No log | 4.0 | 416 | 0.1484 | 0.7644 | 0.8128 | 0.7879 | 0.9571 | | 0.1939 | 5.0 | 520 | 0.1383 | 0.7850 | 0.8131 | 0.7988 | 0.9604 | | 0.1939 | 6.0 | 624 | 0.1409 | 0.7914 | 0.8359 | 0.8130 | 0.9632 | | 0.1939 | 7.0 | 728 | 0.1526 | 0.8176 | 0.8392 | 0.8283 | 0.9637 | | 0.1939 | 8.0 | 832 | 0.1536 | 0.8195 | 0.8409 | 0.8301 | 0.9641 | | 0.1939 | 9.0 | 936 | 0.1538 | 0.8242 | 0.8523 | 0.8380 | 0.9661 | | 0.0364 | 10.0 | 1040 | 0.1612 | 0.8228 | 0.8413 | 0.8319 | 0.9652 | | 0.0364 | 11.0 | 1144 | 0.1721 | 0.8289 | 0.8503 | 0.8395 | 0.9656 | | 0.0364 | 12.0 | 1248 | 0.1645 | 0.8301 | 0.8590 | 0.8443 | 0.9663 | | 0.0364 | 13.0 | 1352 | 0.1747 | 0.8352 | 0.8540 | 0.8445 | 0.9665 | | 0.0364 | 14.0 | 1456 | 0.1703 | 0.8277 | 0.8573 | 0.8422 | 0.9663 | | 0.011 | 15.0 | 1560 | 0.1770 | 0.8314 | 0.8624 | 0.8466 | 0.9665 | | 0.011 | 16.0 | 1664 | 0.1903 | 0.8399 | 0.8537 | 0.8467 | 0.9661 | | 0.011 | 17.0 | 1768 | 0.1837 | 0.8363 | 0.8590 | 0.8475 | 0.9665 | | 0.011 | 18.0 | 1872 | 0.1820 | 0.8338 | 0.8570 | 0.8453 | 0.9667 | | 0.011 | 19.0 | 1976 | 0.1855 | 0.8382 | 0.8620 | 0.8499 | 0.9666 | | 0.0053 | 20.0 | 2080 | 0.1838 | 0.8370 | 0.8617 | 0.8492 | 0.9665 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0 - Datasets 1.11.0 - Tokenizers 0.10.1
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-srb-ner-setimes", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9665376552169005}}]}]}
Aleksandar/distilbert-srb-ner-setimes
null
[ "transformers", "pytorch", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-srb-ner This model was trained from scratch on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.2972 - Precision: 0.8871 - Recall: 0.9100 - F1: 0.8984 - Accuracy: 0.9577 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3818 | 1.0 | 625 | 0.2175 | 0.8175 | 0.8370 | 0.8272 | 0.9306 | | 0.198 | 2.0 | 1250 | 0.1766 | 0.8551 | 0.8732 | 0.8640 | 0.9458 | | 0.1423 | 3.0 | 1875 | 0.1702 | 0.8597 | 0.8763 | 0.8679 | 0.9473 | | 0.079 | 4.0 | 2500 | 0.1774 | 0.8674 | 0.8875 | 0.8773 | 0.9515 | | 0.0531 | 5.0 | 3125 | 0.2011 | 0.8688 | 0.8965 | 0.8825 | 0.9522 | | 0.0429 | 6.0 | 3750 | 0.2082 | 0.8769 | 0.8970 | 0.8868 | 0.9538 | | 0.032 | 7.0 | 4375 | 0.2268 | 0.8764 | 0.8916 | 0.8839 | 0.9528 | | 0.0204 | 8.0 | 5000 | 0.2423 | 0.8726 | 0.8959 | 0.8841 | 0.9529 | | 0.0148 | 9.0 | 5625 | 0.2522 | 0.8774 | 0.8991 | 0.8881 | 0.9538 | | 0.0125 | 10.0 | 6250 | 0.2544 | 0.8823 | 0.9024 | 0.8922 | 0.9559 | | 0.0108 | 11.0 | 6875 | 0.2592 | 0.8780 | 0.9041 | 0.8909 | 0.9553 | | 0.007 | 12.0 | 7500 | 0.2672 | 0.8877 | 0.9056 | 0.8965 | 0.9571 | | 0.0048 | 13.0 | 8125 | 0.2714 | 0.8879 | 0.9089 | 0.8982 | 0.9583 | | 0.0049 | 14.0 | 8750 | 0.2872 | 0.8873 | 0.9068 | 0.8970 | 0.9573 | | 0.0034 | 15.0 | 9375 | 0.2915 | 0.8883 | 0.9114 | 0.8997 | 0.9577 | | 0.0027 | 16.0 | 10000 | 0.2890 | 0.8865 | 0.9103 | 0.8983 | 0.9581 | | 0.0028 | 17.0 | 10625 | 0.2885 | 0.8877 | 0.9085 | 0.8980 | 0.9576 | | 0.0014 | 18.0 | 11250 | 0.2928 | 0.8860 | 0.9073 | 0.8965 | 0.9577 | | 0.0013 | 19.0 | 11875 | 0.2963 | 0.8856 | 0.9099 | 0.8976 | 0.9576 | | 0.001 | 20.0 | 12500 | 0.2972 | 0.8871 | 0.9100 | 0.8984 | 0.9577 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0 - Datasets 1.11.0 - Tokenizers 0.10.1
{"language": ["sr"], "tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-srb-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "sr"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9576561462374611}}]}]}
Aleksandar/distilbert-srb-ner
null
[ "transformers", "pytorch", "distilbert", "token-classification", "generated_from_trainer", "sr", "dataset:wikiann", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Aleksandar/electra-srb-ner-setimes-lr
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-srb-ner-setimes This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2804 - Precision: 0.8286 - Recall: 0.8081 - F1: 0.8182 - Accuracy: 0.9547 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 104 | 0.2981 | 0.6737 | 0.6113 | 0.6410 | 0.9174 | | No log | 2.0 | 208 | 0.2355 | 0.7279 | 0.6701 | 0.6978 | 0.9307 | | No log | 3.0 | 312 | 0.2079 | 0.7707 | 0.7062 | 0.7371 | 0.9402 | | No log | 4.0 | 416 | 0.2078 | 0.7689 | 0.7479 | 0.7582 | 0.9449 | | 0.2391 | 5.0 | 520 | 0.2089 | 0.8083 | 0.7476 | 0.7767 | 0.9484 | | 0.2391 | 6.0 | 624 | 0.2199 | 0.7981 | 0.7726 | 0.7851 | 0.9487 | | 0.2391 | 7.0 | 728 | 0.2528 | 0.8205 | 0.7749 | 0.7971 | 0.9511 | | 0.2391 | 8.0 | 832 | 0.2265 | 0.8074 | 0.8003 | 0.8038 | 0.9524 | | 0.2391 | 9.0 | 936 | 0.2843 | 0.8265 | 0.7716 | 0.7981 | 0.9504 | | 0.0378 | 10.0 | 1040 | 0.2450 | 0.8024 | 0.8019 | 0.8021 | 0.9520 | | 0.0378 | 11.0 | 1144 | 0.2550 | 0.8116 | 0.7986 | 0.8051 | 0.9519 | | 0.0378 | 12.0 | 1248 | 0.2706 | 0.8208 | 0.7957 | 0.8081 | 0.9532 | | 0.0378 | 13.0 | 1352 | 0.2664 | 0.8040 | 0.8035 | 0.8038 | 0.9530 | | 0.0378 | 14.0 | 1456 | 0.2571 | 0.8011 | 0.8110 | 0.8060 | 0.9529 | | 0.0099 | 15.0 | 1560 | 0.2673 | 0.8051 | 0.8129 | 0.8090 | 0.9534 | | 0.0099 | 16.0 | 1664 | 0.2733 | 0.8074 | 0.8087 | 0.8081 | 0.9529 | | 0.0099 | 17.0 | 1768 | 0.2835 | 0.8254 | 0.8074 | 0.8163 | 0.9543 | | 0.0099 | 18.0 | 1872 | 0.2771 | 0.8222 | 0.8081 | 0.8151 | 0.9545 | | 0.0099 | 19.0 | 1976 | 0.2776 | 0.8237 | 0.8084 | 0.8160 | 0.9546 | | 0.0044 | 20.0 | 2080 | 0.2804 | 0.8286 | 0.8081 | 0.8182 | 0.9547 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0 - Datasets 1.11.0 - Tokenizers 0.10.1
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "electra-srb-ner-setimes", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9546789604788638}}]}]}
Aleksandar/electra-srb-ner-setimes
null
[ "transformers", "pytorch", "safetensors", "electra", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-srb-ner This model was trained from scratch on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.3406 - Precision: 0.8934 - Recall: 0.9087 - F1: 0.9010 - Accuracy: 0.9568 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3686 | 1.0 | 625 | 0.2108 | 0.8326 | 0.8494 | 0.8409 | 0.9335 | | 0.1886 | 2.0 | 1250 | 0.1784 | 0.8737 | 0.8713 | 0.8725 | 0.9456 | | 0.1323 | 3.0 | 1875 | 0.1805 | 0.8654 | 0.8870 | 0.8760 | 0.9468 | | 0.0675 | 4.0 | 2500 | 0.2018 | 0.8736 | 0.8880 | 0.8807 | 0.9502 | | 0.0425 | 5.0 | 3125 | 0.2162 | 0.8818 | 0.8945 | 0.8881 | 0.9512 | | 0.0343 | 6.0 | 3750 | 0.2492 | 0.8790 | 0.8928 | 0.8859 | 0.9513 | | 0.0253 | 7.0 | 4375 | 0.2562 | 0.8821 | 0.9006 | 0.8912 | 0.9525 | | 0.0142 | 8.0 | 5000 | 0.2788 | 0.8807 | 0.9013 | 0.8909 | 0.9524 | | 0.0114 | 9.0 | 5625 | 0.2793 | 0.8861 | 0.9002 | 0.8931 | 0.9534 | | 0.0095 | 10.0 | 6250 | 0.2967 | 0.8887 | 0.9034 | 0.8960 | 0.9550 | | 0.008 | 11.0 | 6875 | 0.2993 | 0.8899 | 0.9067 | 0.8982 | 0.9556 | | 0.0048 | 12.0 | 7500 | 0.3215 | 0.8887 | 0.9038 | 0.8962 | 0.9545 | | 0.0034 | 13.0 | 8125 | 0.3242 | 0.8897 | 0.9068 | 0.8982 | 0.9554 | | 0.003 | 14.0 | 8750 | 0.3311 | 0.8884 | 0.9085 | 0.8983 | 0.9559 | | 0.0025 | 15.0 | 9375 | 0.3383 | 0.8943 | 0.9062 | 0.9002 | 0.9562 | | 0.0011 | 16.0 | 10000 | 0.3346 | 0.8941 | 0.9112 | 0.9026 | 0.9574 | | 0.0015 | 17.0 | 10625 | 0.3362 | 0.8944 | 0.9081 | 0.9012 | 0.9567 | | 0.001 | 18.0 | 11250 | 0.3464 | 0.8877 | 0.9100 | 0.8987 | 0.9559 | | 0.0012 | 19.0 | 11875 | 0.3415 | 0.8944 | 0.9089 | 0.9016 | 0.9568 | | 0.0005 | 20.0 | 12500 | 0.3406 | 0.8934 | 0.9087 | 0.9010 | 0.9568 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0 - Datasets 1.11.0 - Tokenizers 0.10.1
{"tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "electra-srb-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "sr"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9568394937134688}}]}]}
Aleksandar/electra-srb-ner
null
[ "transformers", "pytorch", "safetensors", "electra", "token-classification", "generated_from_trainer", "dataset:wikiann", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-srb-oscar This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0 - Datasets 1.11.0 - Tokenizers 0.10.1
{"tags": ["generated_from_trainer"], "model_index": [{"name": "electra-srb-oscar", "results": [{"task": {"name": "Masked Language Modeling", "type": "fill-mask"}}]}]}
Aleksandar/electra-srb-oscar
null
[ "transformers", "pytorch", "electra", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
Aleksandar1932/distilgpt2-rock
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
Aleksandar1932/gpt2-country
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
Aleksandar1932/gpt2-hip-hop
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
Aleksandar1932/gpt2-pop
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
Aleksandar1932/gpt2-rock-124439808
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
Aleksandar1932/gpt2-soul
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
Aleksandar1932/gpt2-spanish-classics
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Aleksandra/distilbert-base-uncased-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # herbert-base-cased-finetuned-squad This model is a fine-tuned version of [allegro/herbert-base-cased](https://huggingface.co/allegro/herbert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2071 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 233 | 1.2474 | | No log | 2.0 | 466 | 1.1951 | | 1.3459 | 3.0 | 699 | 1.2071 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "herbert-base-cased-finetuned-squad", "results": []}]}
Aleksandra/herbert-base-cased-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# xlm-roberta-en-ru-emoji

- Problem type: Multi-class Classification
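## How to use (illustrative)

The card itself only states the problem type, so the following is an assumed usage sketch rather than an official example: the model should be loadable through the `transformers` text-classification pipeline, with the emoji labels coming from the checkpoint's config.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="adorkin/xlm-roberta-en-ru-emoji")

# Works for both English and Russian inputs (the widget examples above use both)
print(clf("Awesome!"))
print(clf("Отлично!"))
```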
{"language": ["en", "ru"], "datasets": ["tweet_eval"], "model_index": [{"name": "xlm-roberta-en-ru-emoji", "results": [{"task": {"name": "Sentiment Analysis", "type": "sentiment-analysis"}, "dataset": {"name": "Tweet Eval", "type": "tweet_eval", "args": "emoji"}}]}], "widget": [{"text": "\u041e\u0442\u043b\u0438\u0447\u043d\u043e!"}, {"text": "Awesome!"}, {"text": "lol"}]}
adorkin/xlm-roberta-en-ru-emoji
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "text-classification", "en", "ru", "dataset:tweet_eval", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5316 - Accuracy: 0.2936 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.5355 | 1.0 | 6195 | 1.5339 | 0.2923 | | 1.5248 | 2.0 | 12390 | 1.5316 | 0.2936 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.10.1+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "bert", "results": []}]}
AlekseyKorshuk/bert
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
AlekseyKorshuk/comedy-scripts
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
AlekseyKorshuk/horror-scripts
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
**Usage: HuggingFace Transformers for the header generation task**

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("AlekseyKulnevich/Pegasus-HeaderGeneration")
tokenizer = AutoTokenizer.from_pretrained('google/pegasus-large')

input_text = "..."  # your text

input_ = tokenizer.batch_encode_plus([input_text], max_length=1024, truncation=True,
                                     padding='longest', return_tensors='pt')
input_ids = input_['input_ids']
input_mask = input_['attention_mask']

headers = model.generate(input_ids=input_ids,
                         attention_mask=input_mask,
                         num_beams=32,
                         no_repeat_ngram_size=2,
                         early_stopping=True,
                         num_return_sequences=10)
headers = tokenizer.batch_decode(headers, skip_special_tokens=True)
```

**Decoder configuration examples:**

[**Input text you can see here**](https://www.bbc.com/news/science-environment-59775105)

```python
headers = model.generate(input_ids=input_ids,
                         attention_mask=input_mask,
                         num_beams=32,
                         no_repeat_ngram_size=2,
                         early_stopping=True,
                         num_return_sequences=20)
tokenizer.batch_decode(headers, skip_special_tokens=True)
```

output:

1. *the impact of climate change on tropical cyclones*
2. *the impact of human induced climate change on tropical cyclones*
3. *the impact of climate change on tropical cyclone formation in the midlatitudes*
4. *how climate change will expand the range of tropical cyclones?*
5. *the impact of climate change on tropical cyclones in the midlatitudes*
6. *global warming will expand the range of tropical cyclones*
7. *climate change will expand the range of tropical cyclones*
8. *the impact of climate change on tropical cyclone formation*
9. *the impact of human induced climate change on tropical cyclone formation*
10. *tropical cyclones in the mid-latitudes*
11. *climate change will expand the range of tropical cyclones in the middle latitudes*
12. *global warming will expand the range of tropical cyclones, a new study says*
13. *the impacts of climate change on tropical cyclones*
14. *the impact of global warming on tropical cyclones*
15. *climate change will expand the range of tropical cyclones, a new study says*
16. *global warming will expand the range of tropical cyclones in the middle latitudes*
17. *the effects of climate change on tropical cyclones*
18. *how climate change will expand the range of tropical cyclones*
19. *climate change will expand the range of tropical cyclones over the equator*
20. *the impact of human induced climate change on tropical cyclones.*

You can also play with the following parameters of the `generate` method:

- top_k
- top_p

[**Meaning of parameters to generate text you can see here**](https://huggingface.co/blog/how-to-generate)
{}
AlekseyKulnevich/Pegasus-HeaderGeneration
null
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
**Usage: HuggingFace Transformers for the question generation task**

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("AlekseyKulnevich/Pegasus-QuestionGeneration")
tokenizer = AutoTokenizer.from_pretrained('google/pegasus-large')

input_text = "..."  # your text

input_ = tokenizer.batch_encode_plus([input_text], max_length=1024, truncation=True,
                                     padding='longest', return_tensors='pt')
input_ids = input_['input_ids']
input_mask = input_['attention_mask']

questions = model.generate(input_ids=input_ids,
                           attention_mask=input_mask,
                           num_beams=32,
                           no_repeat_ngram_size=2,
                           early_stopping=True,
                           num_return_sequences=10)
questions = tokenizer.batch_decode(questions, skip_special_tokens=True)
```

**Decoder configuration examples:**

[**Input text you can see here**](https://www.bbc.com/news/science-environment-59775105)

```python
questions = model.generate(input_ids=input_ids,
                           attention_mask=input_mask,
                           num_beams=32,
                           no_repeat_ngram_size=2,
                           early_stopping=True,
                           num_return_sequences=10)
tokenizer.batch_decode(questions, skip_special_tokens=True)
```

output:

1. *What is the impact of human induced climate change on tropical cyclones?*
2. *What is the impact of climate change on tropical cyclones?*
3. *What is the impact of human induced climate change on tropical cyclone formation?*
4. *How many tropical cyclones will occur in the mid-latitudes?*
5. *What is the impact of climate change on the formation of tropical cyclones?*
6. *Is it possible for a tropical cyclone to form in the middle latitudes?*
7. *How many tropical cyclones will be formed in the mid-latitudes?*
8. *How many tropical cyclones will there be in the mid-latitudes?*
9. *How many tropical cyclones will form in the mid-latitudes?*
10. *What is the impact of global warming on tropical cyclones?*
11. *How long does it take for a tropical cyclone to form?*
12. *What are the impacts of climate change on tropical cyclones?*
13. *What are the effects of climate change on tropical cyclones?*
14. *How many tropical cyclones will be able to form in the middle latitudes?*
15. *What is the impact of climate change on tropical cyclone formation?*
16. *What is the effect of climate change on tropical cyclones?*
17. *How long does it take for a tropical cyclone to form in the middle latitude?*
18. *How many tropical cyclones will occur in the middle latitudes?*
19. *How many tropical cyclones are likely to form in the midlatitudes?*
20. *How many tropical cyclones are likely to form in the middle latitudes?*
21. *How many tropical cyclones are expected to form in the midlatitudes?*
22. *How many tropical cyclones will be formed in the middle latitudes?*
23. *How many tropical cyclones will there be in the middle latitudes?*
24. *How long will it take for a tropical cyclone to form in the middle latitude?*
25. *What is the impact of global warming on tropical cyclone formation?*
26. *How many tropical cyclones will form in the middle latitudes?*
27. *How many tropical cyclones can we expect to form in the middle latitudes?*
28. *Is it possible for a tropical cyclone to form in the middle latitude?*
29. *What is the effect of climate change on tropical cyclone formation?*
30. *What are the effects of climate change on tropical cyclone formation?*

You can also play with the following parameters of the `generate` method:

- top_k
- top_p

[**Meaning of parameters to generate text you can see here**](https://huggingface.co/blog/how-to-generate)
{}
AlekseyKulnevich/Pegasus-QuestionGeneration
null
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
**Usage: HuggingFace Transformers for the summarization task**

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("AlekseyKulnevich/Pegasus-Summarization")
tokenizer = AutoTokenizer.from_pretrained('google/pegasus-large')

input_text = "..."  # your text

input_ = tokenizer.batch_encode_plus([input_text], max_length=1024, truncation=True,
                                     padding='longest', return_tensors='pt')
input_ids = input_['input_ids']
input_mask = input_['attention_mask']

summary = model.generate(input_ids=input_ids,
                         attention_mask=input_mask,
                         num_beams=32,
                         min_length=100,
                         no_repeat_ngram_size=2,
                         early_stopping=True,
                         num_return_sequences=10)
summary_text = tokenizer.batch_decode(summary, skip_special_tokens=True)
```

**Decoder configuration examples:**

[**Input text you can see here**](https://www.bbc.com/news/science-environment-59775105)

```python
summary = model.generate(input_ids=input_ids,
                         attention_mask=input_mask,
                         num_beams=32,
                         min_length=100,
                         no_repeat_ngram_size=2,
                         early_stopping=True,
                         num_return_sequences=1)
tokenizer.batch_decode(summary, skip_special_tokens=True)
```

output:

1. *global warming will expand the range of tropical cyclones in the mid-latitudes of the world, according to a new study published by the Intergovernmental Panel on Climate change (IPCC) and the US National Oceanic and Atmospheric Administration (NOAA) The study shows that a warming climate will allow more of these types of storms to form over a wider range than they have been able to do over the past three million years. "As the climate warms, it's likely that these storms will become more frequent and more intense," said the authors of this study.*

```python
summary = model.generate(input_ids=input_ids,
                         attention_mask=input_mask,
                         top_k=30,
                         no_repeat_ngram_size=2,
                         early_stopping=True,
                         min_length=100,
                         num_return_sequences=1)
tokenizer.batch_decode(summary, skip_special_tokens=True)
```

output:

1. *tropical cyclones in the mid-latitudes of the world will likely form more of these types of storms, according to a new study published by the Intergovernmental Panel on Climate change (IPCC) on the impact of human induced climate change on these storms. The study shows that a warming climate will increase the likelihood of a subtropical cyclone forming over a wider range of latitudes, including the equator, than it has been for the past three million years, and that it will be more likely to form over the tropics.*

You can also play with the following parameters of the `generate` method:

- top_k
- top_p

[**Meaning of parameters to generate text you can see here**](https://huggingface.co/blog/how-to-generate)
{}
AlekseyKulnevich/Pegasus-Summarization
null
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
This is a fine-tuned version of GPT-2, trained on the entire corpus of Plato's works. By generating text samples, you should be able to produce ancient Greek philosophy on the fly!
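The card does not include a generation example. As an illustrative sketch, assuming the uploaded checkpoint can be loaded with a causal language-modeling head (which the text-generation tag suggests), it could be used like this, with arbitrary, untuned sampling settings:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Alerosae/SocratesGPT-2")

# Prompt and sampling parameters are illustrative choices
out = generator("What is justice", max_length=60, do_sample=True, top_p=0.95, num_return_sequences=1)
print(out[0]["generated_text"])
```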
{"language": "en", "tags": ["text-generation"], "pipeline_tag": "text-generation", "widget": [{"text": "The Gods"}, {"text": "What is"}]}
Alerosae/SocratesGPT-2
null
[ "transformers", "pytorch", "gpt2", "feature-extraction", "text-generation", "en", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Alessandro/model_name
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AlexDemon/Alex
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
# XLM-RoBERTa large model whole word masking finetuned on SQuAD

Pretrained model using a masked language modeling (MLM) objective. Fine-tuned on English and Russian QA datasets.

## Used QA Datasets

SQuAD + SberQuAD

The [SberQuAD original paper](https://arxiv.org/pdf/1912.09723.pdf) is here! Recommended reading!

## Evaluation results

The results obtained are the following (SberQuAD):

```
f1 = 84.3
exact_match = 65.3
```
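The card does not show an inference snippet; a minimal sketch using the `transformers` question-answering pipeline is given below. The question/context pair is an invented example, not taken from SQuAD or SberQuAD.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru")

# Russian example; English works the same way since the model is multilingual
result = qa(question="Где живут пингвины?", context="Пингвины живут в Антарктиде и питаются рыбой.")
print(result["answer"], round(result["score"], 3))
```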
{"language": ["en", "ru", "multilingual"], "license": "apache-2.0"}
AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru
null
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "en", "ru", "multilingual", "arxiv:1912.09723", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentence-compression-roberta This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3465 - Accuracy: 0.8473 - F1: 0.6835 - Precision: 0.6835 - Recall: 0.6835 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.5312 | 1.0 | 50 | 0.5251 | 0.7591 | 0.0040 | 0.75 | 0.0020 | | 0.4 | 2.0 | 100 | 0.4003 | 0.8200 | 0.5341 | 0.7113 | 0.4275 | | 0.3355 | 3.0 | 150 | 0.3465 | 0.8473 | 0.6835 | 0.6835 | 0.6835 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
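The card describes a token-classification fine-tune for sentence compression but gives no usage snippet. As an assumed sketch, each token can be tagged with the raw pipeline output; the meaning of the label names (for example, which one marks a token to keep) is defined by the checkpoint's config and is not stated in this card:

```python
from transformers import pipeline

tagger = pipeline("token-classification", model="AlexMaclean/sentence-compression-roberta")

sentence = "The quick brown fox jumps over the lazy dog near the old river bank."
for tok in tagger(sentence):
    print(tok["word"], tok["entity"], round(tok["score"], 3))
```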
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "model-index": [{"name": "sentence-compression-roberta", "results": []}]}
AlexMaclean/sentence-compression-roberta
null
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00