modelId (stringlengths 4–81) | tags (sequence) | pipeline_tag (stringclasses, 17 values) | config (dict) | downloads (int64, 0–59.7M) | first_commit (unknown) | card (stringlengths 51–438k)
---|---|---|---|---|---|---|
DicoTiar/wisdomfiy | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: mit
tags:
- feature-extraction
library_name: generic
datasets:
- ubertext2.0
widget:
- text: доброго вечора ми з україни
language:
- uk
--- |
DiegoAlysson/opus-mt-en-ro-finetuned-en-to-ro | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
language:
- en
---
[![Build Status](https://profootballtalk.nbcsports.com/wp-content/uploads/sites/25/2023/04/GettyImages-1245157318-e1682543585770.jpg)]()
read the full article here : https://controlc.com/11077954
Source : https://paste.feed-the-beast.com/view/a8704590
Flash News : https://pasteio.com/xMjbg1NDBo0m
Biden last Talk : https://tech.io/snippet/BJZcDrO
Russian Ukrain Breaking News : https://etextpad.com/aqkxoqv7kx
ONE of the foundations of good health is proper hydration. This is especially true in tropical countries like ours. Water is the default choice, but another go-to refresher that hydrates, tastes great, and gives you a little bit something extra is buko juice or coconut water.
Buko juice does wonders for your body, as it is rich in minerals and nutrients. It's a drink that reduces stress and fatigue. For those with an active lifestyle, drinking it helps boost energy levels, improve physical performance, and quickly replenish the fluids your body has lost.
Fortunately, this "wonder drink" is readily available in the Philippines. You no longer have to wait for your suki buko vendor as Fruitas offers freshly bottled buko juice in all their branches. Every bottle is clean, convenient, and chilled, with only the freshest juice because it comes straight from the fruit. The best part? It's 100 percent pure, all-natural, with no added sugar. It's available all-year round and can be purchased from any of the Fruitas stalls or community stores nationwide. It's now even available online.
In line with this push for readily available and fresh buko juice, House of Fruitas (HoF) recently launched the Fruitas Always campaign. Fruitas believes that its products can always be a part of our daily lives, whether as a quick pick-me-up or a reward after a long day. In short, House of Fruitas offerings and their respective products can be enjoyed anytime, anywhere, and on any occasion.
Other brands under the House of Fruitas (HoF) umbrella like Jamaican Pattie, Balai Pandesal, Soy and Bean and Balai Mart also have the 100 percent pure and fresh Fruitas Buko Juice, ready for you to enjoy.
You can also be the one selling these refreshing drinks by becoming one of the frentrepreneurs in the HoF Fruitful Franchise Family. Become the owner and operator of any of HoF's popular and profitable brands. There are over 20 brands that you can choose to start your frentrepreneur journey with.
For more information on how to franchise, you may email their Franchise Officers at [email protected]. You can also visit www.fruitasholdings.com.... |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-12 | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
language:
- en
---
[![Build Status](https://www.news10.com/wp-content/uploads/sites/64/2023/04/bidenjoe_042023gn09_w.jpg?strip=1&w=640)]()
read the full article here : https://paste.toolforge.org/view/fdb8bc4d
Source : https://tech.io/snippet/5kYhjo9
Flash News : https://etextpad.com/bj7w9k6cuu
Biden last Talk : https://notes.io/qCeHv
Russian Ukrain Breaking News : https://justpaste.me/0m69
For the first time in over 30 years, the Green Bay Packers won't have Brett Favre or Aaron Rodgers under center come Week 1, but even in a post-Rodgers world, the franchise is in decent shape when it comes to the ever-important quarterback position.
This day was coming. Favre left during a drama-filled summer in 2008, and Rodgers, who turns 40 in December, wasn't going to play forever. He was officially traded to the New York Jets on Wednesday. While change of this magnitude at quarterback is hard and provides a difficult path forward, the Packers have put in place safeguards and provided themselves options.
In Jordan Love, the Packers have a hand-picked first-rounder with undeniable athletic ability and arm talent who has developed for three seasons behind Rodgers (who actually provided mentorship) and within Matt LaFleur's diverse, quarterback-friendly scheme. The team saw glimpses of high-level play during brief appearances last season and is confident in his development. His footwork and mechanics improved under Tom Clements' tutelage in Year 3. And he should now be an expert in the offense, a key element to playing fast and confidently at the position. In an ideal world, Love enters his fourth season ready to play at a starter's level and then shows the Packers enough to buy into him as the long-term answer.
But even if Love isn't a competent starter, it's not necessarily the end of the world in Green Bay.
In trading Rodgers to the Jets, the Packers added value to a pair of important contingency plans. Green Bay improved its spot in the first round and added a second-round pick in 2023, providing opportunities to move around the board and potentially draft another talented developmental quarterback during the first two days, à la Brian Brohm in 2008. Brohm didn't work out, but replicating the process can't be dismissed. Quarterbacks are too important not to take big swings in uncertain situations, and Gutekunst has already proved he's got the boldness as a decision-maker to take a quarterback in an uncomfortable spot to safeguard the long-term stability of the franchise. If a quarterback the Packers think can be a franchise-level player falls to one of their picks, it would be malpractice not to seriously consider taking him. Love isn't a sure thing, no matter how confident the team is in his potential. However, it's unclear how likely such a scenario is for the Packers in this year's draft, given the likelihood of the top four quarterbacks coming off the board in the top 10. Could Hendon Hooker (who is 25 and coming off an ACL injury) be in play if he falls? Maybe.
So, what if the Packers pass on a quarterback early in the 2023 draft and Love falls on his face as a first-year starter? Well, there's a clear path forward in that scenario, too.
Not only did Gutekunst acquire a second-rounder in the 2023 draft, but he got the Jets to deal him a conditional second-rounder in 2024 that can become a first-rounder if Rodgers plays 65 percent of the snaps. As long as Rodgers stays healthy, the pick is a guaranteed first-rounder. Even if he doesn't, the Jets likely wouldn't be good and the pick would be a high second-rounder. Add in a likely high first-rounder if Love fails, and the Packers would have all the draft capital necessary in 2024 to target one of the top quarterbacks in the class (Caleb Williams?). This is a far bumpier path, and no one in Green Bay wants to see Love fail, but the Packers won't be stuck if Love isn't the one. Gutekunst is loaded with the kind of draft capital in 2024 that would allow an immediate detour to a different quarterback.
The Packers decided to transition to Love but shouldn't feel boxed in.
The paths forward:
1. Jordan Love is good and the Packers are set at quarterback
2. The 2023 draft provides a developmental option as insurance
3. The Packers aggressively move up for a top 2024 quarterback
Any of the three paths would give the Packers a promising present or future at the position.
The first path fixes the team's quarterback uncertainty immediately. The second provides more than one option. The third is short-term pain for potential long-term gain.
The worst time to look for a quarterback is when you need one. The Packers, in drafting Love in 2020 and developing him over three years, provided one layer of safeguard. A pick in 2023 could add another layer. And if both fail, the Packers will be in a position to get a franchise-changer at the top of the 2024 draft.
Love is the preferred path in a post-Rodgers world, but the Packers have options. And options are nothing if not valuable during a transition away from a future Hall of Famer at the game's most important position.... |
albert-large-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 687 | "2023-05-22T07:25:56Z" | ---
language:
- en
---
[![Build Status](https://a57.foxnews.com/static.foxnews.com/foxnews.com/content/uploads/2023/04/1024/512/33b032f7-Anthony-Edwards.jpg?ve=1&tl=1)]()
read the full article here : https://etextpad.com/8dvgl0yess
Source : https://controlc.com/ef487968
Flash News : https://tech.io/snippet/mhC2L6q
Biden last Talk : https://paste.toolforge.org/
Russian Ukrain Breaking News : https://notes.io/qCeSg
Watch it here. Credits: Video - Newshub; Image - Getty Images.
Prime Minister Chris Hipkins is delivering a pre-Budget speech in Auckland.
It's expected that he will discuss the Government's intent to restrain spending and how it plans to pay for the damage caused by Cyclone Gabrielle.
Watch it above. It should begin at 12:40pm.
App users click here.... |
albert-large-v2 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26,792 | "2023-05-22T07:26:12Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### ymskra Dreambooth model trained by badmonk with TheLastBen's fast-DreamBooth notebook
|
albert-xlarge-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 341 | "2023-05-22T07:27:45Z" | ---
language:
- en
---
[![Build Status](https://resources.arcamax.com/newspics/245/24507/2450785.gif)]()
read the full article here : https://jsbin.com/vujozawufa/edit?html,output
Source : https://justpaste.me/0mL0
Flash News : https://searchtech.fogbugz.com/default.asp
Biden last Talk : https://pastebin.com/hpujssbB
Russian Ukrain Breaking News : https://yamcode.com/
Only 4.5 per cent of people without the right voter ID have registered for alternative documents as it was revealed that Conservatives had incorrectly told voters they did not need to prove who they were to take part in next month's local elections.
For the first time photo ID will be compulsory to take part in elections held for councils across England in May.
But Tories in Norwich, Norfolk, told voters they did not need to prove who they were to have their say at the ballot box, risking them being turned away.
* Local elections 2023: the key battlegrounds in England
On a leaflet delivered in parts of the city considered Labour strongholds, voters were told: "You don't need to take any ID in... |
albert-xlarge-v2 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,973 | "2023-05-22T07:28:24Z" | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: sl-law-roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sl-law-roberta
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
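For readers who want to reproduce this setup, the list above maps directly onto Hugging Face `TrainingArguments`. The sketch below is illustrative only: the output directory is a placeholder, it assumes the card's "Native AMP" note corresponds to `fp16=True`, and the Adam betas/epsilon and linear schedule listed above are already the library defaults.
```python
from transformers import TrainingArguments

# Sketch mirroring the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="sl-law-roberta",       # placeholder path, not taken from the card
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,                         # "Native AMP" mixed precision
)
```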
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
albert-xxlarge-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7,091 | "2023-05-22T07:29:31Z" | ---
language:
- en
---
[![Build Status](https://cdn.newsday.com/ace/c:MTkwNjg2ZjYtMjdhNy00:NmY0OTMz/landscape/1280)]()
read the full article here : https://jsfiddle.net/h78y60aj/
Source : https://jsitor.com/D2ztLoZqu0
Flash News : https://pastelink.net/wu4yjrww
Biden last Talk : https://paste.ee/p/3idBA
Russian Ukrain Breaking News : https://paste.feed-the-beast.com/view/31dab564
The Jets have moved on from one of their running backs.
New York has released Ty Johnson with a non-football injury designation, per the transaction wire.
Johnson appeared in all 17 games for the Jets last year, recording 248 yards from scrimmage. He took 30 carries for 160 yards with a touchdown and caught 12 passes for 88 yards.
In all, Johnson was on the field for 16 percent of New York's offensive snaps and 42 percent of special teams snaps.
A Lions sixth-round pick in 2019, Johnson has appeared in 62 games with six starts for Detroit and New York. He's recorded 925 yards rushing and 86 catches for 668 yards with seven total TDs.... |
albert-xxlarge-v2 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 42,640 | "2023-05-22T07:30:25Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` is a notebook helper (not a library function); define or copy it before running.
model = load_from_hub(repo_id="istinetz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
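Once the model dictionary and environment are loaded, evaluating the policy is just repeated greedy action selection. Below is a minimal sketch that continues from the snippet above; it assumes the pickled dictionary stores the Q-table under a `qtable` key (an assumption, not stated in this card) and uses the classic gym reset/step API.
```python
import numpy as np

# Greedy rollout with the loaded Q-table (continues from the snippet above).
# Assumes model["qtable"] holds the table; newer gym/gymnasium returns
# (obs, info) from reset and a 5-tuple from step, so adjust if needed.
state = env.reset()
total_reward, done, steps = 0.0, False, 0
while not done and steps < 100:      # cap steps so the sketch always terminates
    action = int(np.argmax(model["qtable"][state]))  # act greedily
    state, reward, done, info = env.step(action)
    total_reward += reward
    steps += 1
print("episode return:", total_reward)
```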
|
bert-base-cased-finetuned-mrpc | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11,644 | "2023-05-22T07:30:52Z" | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-EthioLLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-EthioLLM
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1098
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4629 | 1.0 | 1397 | 1.2511 |
| 1.2625 | 2.0 | 2794 | 1.1387 |
| 1.206 | 3.0 | 4191 | 1.1098 |
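As a quick sanity check on the numbers above: if the reported loss is the mean token cross-entropy (the usual Trainer behaviour for causal language models), the final validation loss corresponds to a perplexity of roughly exp(1.1098) ≈ 3.03.
```python
import math

# Perplexity from mean token cross-entropy (assumes that is what the Trainer logged).
val_loss = 1.1098
print(f"validation perplexity ~ {math.exp(val_loss):.2f}")  # ~ 3.03
```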
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bert-base-chinese | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"zh",
"arxiv:1810.04805",
"transformers",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3,377,486 | "2023-05-22T07:33:56Z" | ---
language:
- en
---
[![Build Status](https://s.yimg.com/ny/api/res/1.2/Op_bnGxZrCy3.8Kg2sPLmQ--/YXBwaWQ9aGlnaGxhbmRlcjt3PTEyMDA7aD04MDA7Y2Y9d2VicA--/https://media.zenfs.com/en/nbc_news_122/4f2d117a9cbf1ce0a0dd4346ce8a0143)]()
read the full article here : https://tech.io/snippet
Source : https://pasteio.com/xgvaoa82QWIB
Flash News : https://controlc.com/4951dc17
Biden last Talk : https://etextpad.com/zomas48xui
Russian Ukrain Breaking News : https://paste.toolforge.org/
By Madeline Holcombe of CNN
One key to fighting addiction may be exercise, according to a new study.
Researchers undertook a review of the existing literature around physical activity and its relationship to substance use, and they found that regular exercise was associated with lowered use in about 75 percent of the studies investigating that question, according to the analysis.
The review, published Wednesday in the journal PLOS ONE, looked at 43 studies with more than 3000 total participants. In addition to a reduction or cessation in substance use, the studies also found improved markers of physical health and decreased depressive symptoms, the study said.
"People think that during treatment people should only do psychotherapeutic treatments ... but that's not what we've seen in our study," said lead study author Florence Piché, a doctoral student and researcher at Université de Montréal in Canada. "It's very beneficial to do physical activity in addition to the treatments."
There are limitations to the findings. The review found that most of the studies the researchers examined had a high risk of bias, meaning more research is needed to confirm their findings, said Dr Aaron Kandola, a research fellow at the Medical Research Council Unit for Lifelong Health and Ageing at University College London.
The studies were also not directly comparable enough to build a comprehensive and generalizable understanding of the relationship, Kandola said in an email. Kandola was not part of the research.
However, the findings were still significant and useful, he added.
"Substance use disorders are a major public health problem lacking low-cost, evidence-based solutions," he said, adding that substance use disorders are worsening in many high-income countries, including the United States.
Finding more accessible solutions to this disorder is especially important because it often occurs with other mental health problems such as depression and anxiety, which disproportionately affect people with fewer socioeconomic resources and areas with higher deprivation, he said.
Physical activity may be a useful and accessible part of a treatment plan for substance use disorder, said Dr Mark Smith, professor of psychology at Davidson College in North Carolina. Smith was not part of the research.
"I think there's now a sufficient amount of data to indicate that various forms of physical activity and exercise are generally effective at reducing substance use in individuals seeking treatment," he said.
What exercise does
Most people can benefit from engaging in physical activity, Kandola said.
One benefit the studies found is improvements in physical health such as cardiovascular endurance or muscle strength, Smith said. And although that may not be the primary goal of the research, he said this finding is important because it shows physical activity is doing its job to promote physical health.
The research also showed physical activity to be linked with increased self-efficacy, self-esteem and self-confidence, which are known to be protective against substance use, Smith added.
And there's more: Physical activity has been shown to reduce anxiety and depression, which are major risk factors for substance use, Kandola said.
Why might a little sweat go such a long way? Exercise produces dramatic changes throughout the brain, Smith said.
When you exercise, you are engaging neural pathways that are also affected by substance use. There is a lot of evidence that exercise can help to normalise the changes that occur to those pathways when using substances, Smith added.
How to get started
Although the recent study highlighted the benefits of exercise, it did not find an amount or intensity at which a person needs to exercise to see the benefits, Smith said.
The current Physical Activity Guidelines for Americans do recommend that adults get 150 minutes of moderate-intensity physical activity and two days of muscle-strengthening activity each week.
Another question that needs to be asked: Does more exercise mean more benefit?
Even without solid answers to those questions, it is a good idea for people with all kinds of health concerns to start, Kandola said.
If you don't have an exercise habit already, start with light activities like brief walks around the block, he said.
"Small amounts of physical activity are still beneficial and help you to build your fitness by gradually increasing duration and intensity over time," Kandola said. "The biggest health benefits are seen in people moving from low to medium levels of physical fitness."
Your exercise should also be fun, Smith said. Liking what you are doing is a great way to reduce substance use.
"If you give individuals an alternative activity that they enjoy, then by default, substance use will decrease. They have something else to do with their time," Smith said.
"Now, this doesn't necessarily have to be, you know, running for hours on a treadmill. You know, it could be going outside and playing basketball or tennis or pickleball, or whatever your favourite sport is," he added.
"It may require some trial and error, but finding the right type of physical activity (or activities) for you will increase the chances of sticking with it for longer," Kandola said in an email. "It can also be a good way to meet new people or explore new areas."
You can take CNN's workout quiz here.
CNN... |
bert-base-german-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"exbert",
"license:mit",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 175,983 | null | Access to model laiviet/vi.moose.bloom-7b1.reward is restricted and you are not in the authorized list. Visit https://huggingface.co/laiviet/vi.moose.bloom-7b1.reward to ask for access. |
bert-base-german-dbmdz-uncased | [
"pytorch",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 68,305 | "2023-05-22T07:35:52Z" | ---
language:
- en
---
[![Build Status](https://img-s-msn-com.akamaized.net/tenant/amp/entityid/AA1ao0qz.img?h=630&w=1200&m=6&q=60&o=t&l=f&f=jpg&x=519&y=176)]()
read the full article here : https://pastebin.com/15UbnxcB
Source : https://notes.io/qCeYF
Flash News : https://jsbin.com/ludapaduxa/edit?html,output
Biden last Talk : https://yamcode.com/breaking-news-update-1-05212023-191849
Russian Ukrain Breaking News : https://jsitor.com/SRqvFIsLxQ
Sen. Jon Tester (D-Mont.) speaks to reporters as he leaves an all-senators briefing with Biden administration officials on Wednesday, February 15, 2023 to discuss unidentified objects recently shot down over the past week.
Senate Republicans on Wednesday defeated a bill calling on the Department of Veterans Affairs (VA) to research marijuana as a remedy for post-traumatic stress disorder and chronic pain.
Senators voted 57 to 42 to invoke cloture on the motion to proceed to the bill, falling short of the 60 votes necessary for it to advance.
Eight Republicans -- Sens. Bill Cassidy (La.), Susan Collins (Maine), Josh Hawley (Mo.), Jerry Moran (Kan.), Lisa Murkowski (Alaska), Mike Rounds (S.D.), Eric Schmitt (Mo.) and Dan Sullivan (Alaska) -- voted alongside every Democrat to advance the bill.
Senate Majority Leader Charles Schumer (D-N.Y.) switched his vote from "aye" to "nay" in order to have the ability to bring the legislation to the floor again in the future. He lamented that the bill was not able to move forward despite the support of numerous veterans groups and marijuana advocates.
"It's regrettable that this bill, which so much helps our veterans, went down," Schumer said. "I hope that some of our members on the other side of the aisle who didn't vote for it will reconsider."
Some Senate Republicans indicated that their main concern with the proposal was indeed the marijuana-related provisions and argued it was unnecessary.
"When the conversation about how to serve our veterans after all they sacrificed is to give them marijuana -- we have failed our veterans," Sen. James Lankford (R-Okla.) tweeted earlier on Wednesday.
Sen. Jon Tester (D-Mont.) and Sullivan are the leading sponsors of the proposal, which was voted out of the Senate Veterans Affairs Committee in February.
"In Montana, we respect and fight for the men and women who have defended our country and freedoms," Tester said in a statement. "Today's failed vote tells them that their government doesn't value their sacrifices. By blocking consideration of a bill that passed unanimously out of Committee two months ago, a group of Republicans today prioritized partisan politics over providing our nation's veterans their hard-earned benefits and care."
The blueprint pushes the VA to move ahead on a "large scale" study and a potential clinical trial to determine whether marijuana should be used to treat veterans.
Tester, the committee chairman, had acknowledged on the floor ahead of the vote that the legislation was deemed "controversial" among some Republican senators. But he said it was important to have "a better understanding of the role" medical cannabis could have for veterans.
"Today, it's time to put political differences aside and do what's right for our veterans," he said.
Use of cannabis has increased across the country, with 20 states having legalized it for recreational use. However, it remains illegal federally, meaning the VA cannot recommend its use to veterans in any way.
According to Tester and Sullivan's proposal, the VA's observational study would look into the positives and negatives of veterans using marijuana and their overall health as a result. The senators also noted that it would also look into improvements to mood and social functioning, changes to overall quality of life and impacts on other substance use, including alcohol and opioids.... |
bert-base-uncased | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 59,663,489 | "2023-05-22T07:40:04Z" | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: en-id-parallel-sentences-mpnet-dot-v1
results: []
datasets:
- carlesoctav/en-id-parallel-sentences-embedding
language:
- en
- id
metrics:
- accuracy
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# en-id-parallel-sentences-mpnet-dot-v1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3 |
bert-large-cased-whole-word-masking-finetuned-squad | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8,214 | "2023-05-22T07:40:09Z" | ---
language:
- en
---
[![Build Status](https://s.hdnux.com/photos/01/32/37/72/23723674/3/rawImage.jpg)]()
read the full article here : https://paste.ee/p/xfO07
Source : https://pastelink.net/setmhj4g
Flash News : https://jsfiddle.net/0e7fy9q4/
Biden last Talk : https://paste.feed-the-beast.com/view/e8b5ec44
Russian Ukrain Breaking News : https://pasteio.com/xOOnguuJPnob
PHOENIX (AP) -- Four-time All-Star Madison Bumgarner was released by the Arizona Diamondbacks on Wednesday after clearing waivers.
The veteran left-hander was designated for assignment on April 20, giving the team seven days to trade the 2014 World Series MVP or place him on waivers. Bumgarner wasn't claimed and can sign with any team for a prorated share of the $720,000 major league minimum.
The 33-year-old allowed at least five runs in three of his four starts this season and dropped to 1-3 with a 10.26 ERA after his latest outing against the St. Louis Cardinals.
The big left-hander never lived up to expectations in the desert after signing an $85 million, five-year deal in 2020. A postseason hero for San Francisco, he was 15-32 with a 5.23 ERA in 69 starts over four seasons with the Diamondbacks, who were responsible for $34.4 million in remaining salary at the time he was cut.
Bumgarner had been one of baseball's best pitchers during 11 seasons with the Giants, helping them win three World Series titles. He was a workhorse for San Francisco during that time, going over 200 innings seven times in addition to 16 postseason appearances, including a memorable five-inning save in Game 7 of the '14 Series.... |
bert-large-cased-whole-word-masking | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,316 | "2023-05-22T07:41:29Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### ren Dreambooth model trained by potetofry with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
bert-large-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 388,769 | "2023-05-22T07:42:08Z" | ---
language:
- en
---
[![Build Status](https://media.nbcmiami.com/2023/04/GettyImages-1485531999-e1682549191800.jpg?quality=85&strip=all&resize=1200%2C675)]()
read the full article here : https://etextpad.com/u0bggvgtje
Source : https://controlc.com/e09ec880
Flash News : https://tech.io/snippet/PDTIyoI
Biden last Talk : https://jsbin.com/repuluxiku/edit?html,output
Russian Ukrain Breaking News : https://pastebin.com/W1DGnG8u
Just before the Jets' press conference introducing Aaron Rodgers began, both New York and Green Bay released separate announcements confirming the trade was done.
Several members of the Packers' brass issued statements, with team president Mark Murphy noting that the quarterback's number will be retired.
"Aaron had an incredible career with the Packers," Murphy said. "During a team-record 18-year career, he brought great joy to our fans through a Super Bowl championship, countless thrilling victories and breathtaking quarterback plays. He made playing quarterback look easy. As great a player as he is, what stands out most for me is his toughness -- his willingness to play through pain. He will undoubtedly be a first-ballot Hall of Famer. We were proud to have had him as the leader of our team through his impact on the field, in the locker room and in the community.
"We wish Aaron well in New York and look forward to welcoming him back to Green Bay to retire his No. 12, celebrate his induction into the Packers Hall of Fame and unveil his name on the Lambeau Field façade."
General Manager Brian Gutekunst said the Packers are "eternally grateful" for what Rodgers gave the organization for the last 18 years.
"While he undoubtedly will be remembered as one of the best players in our franchise's storied history for all his accomplishments on the field, it is his competitive greatness, leadership and toughness that make him such a special player and person," Gutekunst said. "The daily expectations he placed on himself and his teammates were instrumental in all that we accomplished during a special era of Packers football. We wish Aaron nothing but success and look forward to welcoming him back to Green Bay in the future and celebrating his induction into the Pro Football Hall of Fame."
Matt LaFleur, who was hired as the Packers head coach in 2019, said Rodgers is the best player he's worked with.
"I will always be grateful for our time together, both on and off the field," LaFleur said. "The mark he left on our organization, players and coaches cannot be overstated. His drive for competitive greatness and the standards he set for everyone, including himself, made our team better. Ultimately, he made me a better coach. I will never forget his post-practice interactions with our families. His ability to connect with kids, including my own, was a great example for our locker room. He was and will always be a great representative of the 'G' and what it means to be a Green Bay Packer."... |
bert-large-uncased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,058,496 | "2023-05-22T07:45:08Z" | The AWS Certified Solutions Architect - Professional (SAP-C02) exam is a certification exam for cloud architects who have experience designing and deploying solutions on AWS. The exam covers a wide range of topics, including:
• Designing highly available and scalable solutions
• Using AWS services to build secure and compliant solutions
• Automating deployments and operations
• Monitoring and optimizing AWS resources
Study Material: https://www.pass4surexams.com/amazon/sap-c02-dumps.html
There are a number of resources available to help candidates prepare for the exam, including:
• The AWS Certified Solutions Architect - Professional Study Guide
• The AWS Certified Solutions Architect - Professional Practice Exams
• The AWS Certified Solutions Architect - Professional Video Training
Passing tips
• Study the exam objectives: The exam objectives are a list of the topics that will be covered on the exam. Make sure you understand all of the objectives before you start studying.
• Use a variety of study materials: There are a number of different study materials available for the SAP-C02 exam. Use a variety of materials to help you learn the material.
• Practice answering questions: There are a number of practice exams available online. Practice answering questions to help you get familiar with the format of the exam and the types of questions that will be asked.
• Get feedback on your answers: Once you have practiced answering questions, get feedback on your answers from a qualified AWS professional. This can help you identify areas where you need to improve.
• Take the exam when you are ready: Don't rush into taking the exam. Make sure you are ready to pass before you schedule your exam.
|
camembert-base | [
"pytorch",
"tf",
"safetensors",
"camembert",
"fill-mask",
"fr",
"dataset:oscar",
"arxiv:1911.03894",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"CamembertForMaskedLM"
],
"model_type": "camembert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,440,898 | "2023-05-22T07:45:46Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` is a notebook helper (not a library function); define or copy it before running.
model = load_from_hub(repo_id="photel/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
distilbert-base-cased-distilled-squad | [
"pytorch",
"tf",
"rust",
"safetensors",
"openvino",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"has_space"
] | question-answering | {
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 257,745 | "2023-05-22T07:46:54Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Unit2_QLearning_Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` is a notebook helper (not a library function); define or copy it before running.
model = load_from_hub(repo_id="chithanhdang74/Unit2_QLearning_Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
distilbert-base-cased | [
"pytorch",
"tf",
"onnx",
"distilbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1910.01108",
"transformers",
"license:apache-2.0",
"has_space"
] | null | {
"architectures": null,
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 574,859 | "2023-05-22T07:47:34Z" | ---
language:
- en
---
[![Build Status](https://a57.foxnews.com/static.foxnews.com/foxnews.com/content/uploads/2023/03/1024/512/DeSantis3.jpg?ve=1&tl=1)]()
read the full article here : https://justpaste.me/0n3T
Source : https://paste.toolforge.org/view/c9bcfae0
Flash News : https://notes.io/qCrtp
Biden last Talk : https://searchtech.fogbugz.com/default.asp?Suggestions.1.130242.0
Russian Ukrain Breaking News : https://jsbin.com/lucasihiqo/edit?html,output
Error:Invalid or unexpected token... |
distilbert-base-multilingual-cased | [
"pytorch",
"tf",
"onnx",
"safetensors",
"distilbert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"mn",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"th",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8,339,633 | "2023-05-22T07:49:33Z" | ---
language:
- en
---
[![Build Status](https://media.lasvegassun.com/media/img/photos/2023/04/26/AP23116579422365_t600.jpg?42b0fb247f69dabe2ae440581a34634cbc5420f3)]()
read the full article here : https://jsitor.com/xbpHfoCdm9
Source : https://pastebin.com/BCQCP9ki
Flash News : https://yamcode.com/breaking-news-update-1-05212023-194209
Biden last Talk : https://pastelink.net/poljrstj
Russian Ukrain Breaking News : https://jsfiddle.net/en5Ltvxu/
Bill 85 adopts the United Nations Declaration on the Rights of Indigenous People (UNDRIP) and outlines how the declaration would be implemented in the territory but several Indigenous governments are not on board, including Akaitcho Territory Government, Dehcho First Nations, Sahtu Secretariat Incorporated, Salt River First Nation and the Nahɂą Dehé Dene Band.
While the bill has only made it through first reading so far, the issue was brought up during a standing committee meeting with questions from MLAs in the Legislative Assembly on Tuesday afternoon.
"It is confusing for the government to push this forward when not everyone is comfortable," Ronald Bonnetrouge, Deh Cho MLA, said.
UNDRIP was introduced in 2007 and consists of 46 articles ratified by the United Nations, recognizing the basic human rights of Indigenous people along with their rights to self-determination. While Canada originally rejected the declaration, the country later endorsed it in 2019.
The territorial government has been working on its own UNDRIP implementation bill for the last three years, announcing the bill at the end of March.
During the committee meeting, MLAs were discussing a Memorandum of Understanding (MOU) related to the bill. When it was brought up, not all Indigenous governments were on board.
Bonnetrouge suggested Bill 85 was being pushed forward as a "legacy" bill for the current cabinet as there is a territorial election set for this fall.
When questioned about why the bill and MOU was moving forward without full support, N.W.T. Premier Caroline Cochrane said the territorial legislation related to implementing UNDRIP was co-drafted with Indigenous governments at the table.
"I would like to challenge the assumption that this government is trying to push this through, I've actually done the opposite in the house," Cochrane said.
The premier went on to say it was the decision of Indigenous governments within the Council of Leaders, that said the MOU could move forward with majority support -- not requiring unanimous support.
"It's open to all ... some may come on if they can deal with their internal stuff and at any time, they are welcome to come on. Other ones I'm afraid may never come on. Some Indigenous governments do not see the GNWT as a valid government," Cochrane said.
For two hours, other concerns of conflict resolution and public input were also brought up by MLAs. But at the end of the meeting, it was highlighted that Bill 85 received first reading so it can be taken on the road.
"I don't know if we want to fix anything before it is even done," Cochrane said, noting once the bill is passed there will be a review process after five years.
For the next steps, the standing committee on government operations will be touring communities to discuss UNDRIP and the bill.
May 2nd - Salt River Conference Centre, Fort Smith, 7 p.m.
Dates are still pending for Tulita and the Tłı̨chǫ region.... |
distilbert-base-uncased-distilled-squad | [
"pytorch",
"tf",
"tflite",
"coreml",
"safetensors",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | question-answering | {
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 100,097 | "2023-05-22T07:51:34Z" | ---
language:
- en
---
[![Build Status](https://www.orlandosentinel.com/resizer/_kyi4GaNPwiipQUWwRfG4rYYsQE=/1200x630/filters:format(jpg):quality(70)/cloudfront-us-east-1.images.arcpublishing.com/tronc/5BSJ3NDF25DSVC6DZSXL4LFAQA.jpg)]()
read the full article here : https://pasteio.com/x4rb1eG1h8wa
Source : https://paste.ee/p/rb8dd
Flash News : https://paste.feed-the-beast.com/view/3727dbef
Biden last Talk : https://controlc.com/b4714f7f
Russian Ukrain Breaking News : https://tech.io/snippet/rDbrcod
President Biden makes opening remarks during a virtual meeting of the Major Economies Forum to discuss energy and climate change in the South Court Auditorium on the White House campus in Washington, D.C., on Thursday, April 20, 2023. Read Less
The White House on Wednesday blasted House Republicans after a vote to pass legislation pairing a debt limit increase with broader government spending cuts, calling the bill dead on arrival and urging Congress to pass a clean bill to avoid default.
"House Republicans have passed a bill that cuts veterans' health care, education, Meals on Wheels, and public safety, takes away health care from millions of Americans, and sends manufacturing jobs overseas while they fight to extend the Trump tax cuts for the wealthiest and profitable corporations," White House press secretary Karine Jean-Pierre said in a statement.
"President Biden will never force middle class and working families to bear the burden of tax cuts for the wealthiest, as this bill does," she continued. "The President has made clear this bill has no chance of becoming law."
Jean-Pierre cited an old quote from former President Ronald Reagan about the importance of the U.S. meeting its obligations to argue Congressional Republicans have a responsibility to raise the debt limit.
"In our history, we have never defaulted on our debt or failed to pay our bills," she said. "Congressional Republicans must act immediately and without conditions to avoid default and ensure that the full faith and credit of the United States is not put at risk. That is their job."
The House on Wednesday voted to pass the Limit, Save, Grow Act, with 217 Republicans backing the bill and 215 lawmakers opposing it. Republican Reps. Ken Buck (Colo.), Matt Gaetz (Fla.), Andy Biggs (Ariz.) and Tim Burchett (Tenn.) joined every voting Democrat in opposition.
The legislation would cap government funding hashed out by lawmakers annually as part of the appropriations process at fiscal year 2022 levels, a move Democrats warn could amount to steep cuts to popular programs.
The measure would also limit spending growth to 1 percent annually over the next decade with a slew of other proposals aimed at curbing spending, including rolling back several Biden administration actions on student loans and beefing up work requirements for government assistance programs.
The bill is unlikely to go anywhere in the Democratic-controlled Senate, but the White House has said Biden will veto it in the event it reaches his desk.
Treasury Department officials have estimated that the government has until roughly June to raise the debt ceiling or risk a default, which could have catastrophic consequences for the economy.
President Biden and White House officials have been adamant that Congress must pass a bill to raise the debt ceiling without conditions, pointing to decades of precedent under Democratic and Republican administrations. Biden has signaled he is willing to sit down with Speaker Kevin McCarthy (R-Calif.) for a separate conversation about government spending.
"I'm happy to meet with McCarthy, but not on whether or not the debt limit gets extended. That's not negotiable," Biden told reporters Wednesday at the end of a press conference with the South Korean president in the White House Rose Garden.... |
distilbert-base-uncased | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"distilbert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1910.01108",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10,887,471 | "2023-05-22T07:52:00Z" | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Middelz2/roberta-large-aphasia-picture-description
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Middelz2/roberta-large-aphasia-picture-description
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4438
- Validation Loss: 0.3741
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.9590 | 0.6524 | 0 |
| 0.6495 | 0.5143 | 1 |
| 0.5382 | 0.4321 | 2 |
| 0.4981 | 0.4054 | 3 |
| 0.4438 | 0.3741 | 4 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
distilgpt2 | [
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"coreml",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:openwebtext",
"arxiv:1910.01108",
"arxiv:2201.08542",
"arxiv:2203.12574",
"arxiv:1910.09700",
"arxiv:1503.02531",
"transformers",
"exbert",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"has_space"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,611,668 | "2023-05-22T07:52:59Z" | ---
language: hi
#datasets:
#- Interspeech 2021
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
license: mit
model-index:
- name: Wav2Vec2 Hindi Model by Aditi sharma
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice hi
type: common_voice
args: hi
metrics:
- name: Test WER
type: wer
value: 33.17
---
## Dataset
This model was trained on 4200 hours of Hindi Labelled Data. The labelled data is not present in public domain as of now.
## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import argparse
def parse_transcription(wav_file):
# load pretrained model
processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")
model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")
# load audio
audio_input, sample_rate = sf.read(wav_file)
# pad input values and return pt tensor
input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
# INFERENCE
# retrieve logits & take argmax
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe
transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
print(transcription)
```
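For example, with a hypothetical 16 kHz mono recording saved as `hindi_sample.wav`:
```python
parse_transcription("hindi_sample.wav")  # prints the predicted transcription
```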
## Evaluation
The model can be evaluated as follows on the hindi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")
model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference on the test set
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids, skip_special_tokens=True)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 33.17 %
## Credits
Thanks to Deepmindz Innovations for making this possible. |
distilroberta-base | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"roberta",
"fill-mask",
"en",
"dataset:openwebtext",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3,342,240 | "2023-05-22T07:53:29Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- jmhessel/newyorker_caption_contest
model-index:
- name: test-bridgetower-gaudi2-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-bridgetower-gaudi2-4
This model is a fine-tuned version of [BridgeTower/bridgetower-large-itm-mlm-itc](https://huggingface.co/BridgeTower/bridgetower-large-itm-mlm-itc) on the jmhessel/newyorker_caption_contest matching dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1558
- Memory Allocated (gb): 28.24
- Max Memory Allocated (gb): 44.47
- Total Memory Available (gb): 93.03
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Memory Allocated (gb) | Max Memory Allocated (gb) | Total Memory Available (gb) |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:---------------------:|:---------------------:|
| 0.0131 | 1.0 | 612 | 0.1390 | 27.46 | 44.36 | 93.03 |
| 0.0183 | 2.0 | 1224 | 0.1587 | 27.46 | 44.36 | 93.03 |
| 0.0159 | 3.0 | 1836 | 0.1588 | 27.46 | 44.36 | 93.03 |
| 0.0443 | 4.0 | 2448 | 0.1571 | 27.46 | 44.36 | 93.03 |
| 0.0664 | 5.0 | 3060 | 0.1511 | 27.46 | 44.47 | 93.03 |
| 0.0148 | 6.0 | 3672 | 0.1559 | 27.46 | 44.47 | 93.03 |
| 0.0212 | 7.0 | 4284 | 0.1470 | 27.46 | 44.47 | 93.03 |
| 0.0146 | 8.0 | 4896 | 0.1541 | 27.46 | 44.47 | 93.03 |
| 0.0303 | 9.0 | 5508 | 0.1562 | 27.46 | 44.47 | 93.03 |
| 0.0073 | 10.0 | 6120 | 0.1530 | 27.46 | 44.47 | 93.03 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.1a0+gita64770b
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AT/distilgpt2-finetuned-wikitext2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | "2023-05-22T10:53:28Z" | ---
license: apache-2.0
datasets:
- glue
---
# Model Details
`bert-large-uncased` finetuned on `MRPC`.
## Parameter settings
batch size is 16, learning rate is 3e-5.
## Metrics
acc: 0.8922, f1: 0.9225
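## Example usage
A minimal sketch of paraphrase classification with the Transformers library; the checkpoint id below is a placeholder, since this card does not name the repository:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "<this-repo-id>"  # placeholder — substitute this repository's id on the Hub
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# MRPC is a sentence-pair task: are the two sentences paraphrases of each other?
inputs = tokenizer(
    "The cat sat on the mat.",
    "A cat was sitting on the mat.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```
|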
Abab/Test_Albert | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: M_gpt_v1.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M_gpt_v1.5
This model is a fine-tuned version of [ai-forever/mGPT](https://huggingface.co/ai-forever/mGPT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5589
- Precision: 0.4836
- Recall: 0.2252
- F1: 0.3073
- Accuracy: 0.8959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.643 | 1.0 | 1532 | 0.4457 | 0.4 | 0.1450 | 0.2129 | 0.8911 |
| 0.4563 | 2.0 | 3065 | 0.5391 | 0.4667 | 0.1870 | 0.2670 | 0.8963 |
| 0.3724 | 3.0 | 4596 | 0.5589 | 0.4836 | 0.2252 | 0.3073 | 0.8959 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AbderrahimRezki/HarryPotterBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: gender_detectiton_tr
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gender_detectiton_tr
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3.915628e-08, 'decay': 0.0, 'beta_1': 0.95, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.9.1
- Datasets 2.10.1
- Tokenizers 0.12.1
|
Akaramhuggingface/News | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 8 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 8,
"warmup_steps": 1,
"weight_decay": 0.01
}
```
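A hedged sketch of how those settings map onto a `fit()` call — the training pairs below are placeholders, not the data this model was actually trained on:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('{MODEL_NAME}')

# Placeholder sentence pairs with similarity labels in [0, 1]
train_examples = [
    InputExample(texts=["A plane is taking off.", "An air plane is taking off."], label=0.95),
    InputExample(texts=["A man is playing a flute.", "A man is reading a book."], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

# Mirrors the parameters listed above (1 epoch, 1 warmup step, AdamW lr 2e-05, weight decay 0.01)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=1,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```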
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Alireza1044/albert-base-v2-sst2 | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 52 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Aliyyu/Keren | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1279.97 +/- 41.23
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
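As a stopgap for the TODO above, a minimal sketch of loading and running the agent — the repo id and filename are assumptions, not values taken from this card:
```python
import gym
import pybullet_envs  # noqa: F401 — registers AntBulletEnv-v0
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical repo id and filename — substitute this repository's actual values
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```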
|
Amalq/distilroberta-base-finetuned-MentalHealth | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- de
- en
pipeline_tag: translation
datasets:
- wmt14
---
# byt5-large-wmt14-deen
This model is released as part of the work from [Are Character-level Translations Worth the Wait? Comparing Character- and Subword-level Models for Machine Translation](https://arxiv.org/abs/2302.14220).
It is a ByT5 model finetuned on German-->English translation using the WMT14 dataset.
To use the model correctly, you must prepend the prompt with "translate X to Y: ", where X and Y are your source and target languages (e.g. German, English).
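A minimal sketch of that prompt format with the Transformers library — the repo id is a placeholder and the generation settings are illustrative (see the note on `decoder_start_token_id` just below):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "<this-repo-id>"  # placeholder — substitute this repository's id on the Hub
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

inputs = tokenizer("translate German to English: Der Himmel ist heute blau.", return_tensors="pt")

# ByT5 checkpoints use decoder_start_token_id=259 (mT5 checkpoints use 250099)
outputs = model.generate(**inputs, decoder_start_token_id=259, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```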
NOTE: The decoder_start_token_id is 259 for byt5 models and 250099 for mt5 models, which is different from the default token from google's byt5 and mt5 models (which is 0). |
AmazonScience/qanlu | [
"pytorch",
"roberta",
"question-answering",
"en",
"dataset:atis",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible",
"has_space"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 494 | null | ---
license: other
---
GGML f32 version of [airoboros-7b](https://huggingface.co/jondurbin/airoboros-7b)
Run with llama.cpp (example):
```
./main -m ./airoboros-7b-ggml-f32.bin -ngl 40 -c 2048 -r 'USER: ' --in-suffix 'ASSISTANT: ' --interactive-first
```
Or for one-off prompts:
```
./main -m ./airoboros-7b-ggml-f32.bin -ngl 289 -c 2048 -p "USER: What do you call a fish with no eyes?"
main: build = 583 (7e4ea5b)
main: seed = 1684774390
llama.cpp: loading model from /data/airoboros-7b/airoboros-7b-ggml-f32.bin
llama_model_load_internal: format = ggjt v1 (pre #1405)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 0 (all F32)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 0.07 MB
llama_model_load_internal: mem required = 27497.09 MB (+ 1026.00 MB per state)
.
llama_init_from_file: kv self size = 1024.00 MB
system_info: n_threads = 6 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 0 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.800000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 2048, n_batch = 512, n_predict = -1, n_keep = 0
USER: What do you call a fish with no eyes?
A fsh. [end of text]
llama_print_timings: load time = 3342.82 ms
llama_print_timings: sample time = 4.73 ms / 7 runs ( 0.68 ms per token)
llama_print_timings: prompt eval time = 2382.27 ms / 14 tokens ( 170.16 ms per token)
llama_print_timings: eval time = 2904.94 ms / 6 runs ( 484.16 ms per token)
llama_print_timings: total time = 6254.72 ms
``` |
Amba/wav2vec2-large-xls-r-300m-turkish-colab | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- image-classification
- timm
library_tag: timm
datasets:
- rozzman/autotrain-data-wood-identification
metrics:
- accuracy
library_name: timm
pipeline_tag: image-classification
---
# Model card for rozzman/mobileNetV3FromTimm
This model recognizes 11 commercially valuable types of wood from Brazil.
The dataset is from https://www.facom.ufu.br/~backes/wood_dataset.php
The names of these 11 types of wood are shown in the table below
![graph1](wood_dataset_1.png)
The cross-sections of these 11 types of wood are shown in the following image
![graph2](wood_dataset_2.png)
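A minimal sketch of loading the checkpoint with `timm` for inference — the preprocessing and label handling below are assumptions, not the authors' exact pipeline:
```python
import timm
import torch
from PIL import Image
from timm.data import resolve_data_config, create_transform

# Load the fine-tuned checkpoint from the Hub via timm's hf-hub integration
model = timm.create_model("hf-hub:rozzman/mobileNetV3FromTimm", pretrained=True)
model.eval()

# Build the input transforms the model expects
config = resolve_data_config({}, model=model)
transform = create_transform(**config)

img = Image.open("cross_section.jpg").convert("RGB")  # hypothetical input image
with torch.no_grad():
    probs = model(transform(img).unsqueeze(0)).softmax(dim=-1)
print(probs.argmax(dim=-1).item())  # index of the predicted wood species
```
|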
Amir99/toxic | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: juanfkurucz/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AmirBialer/amirbialer-Classifier | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- de
- en
pipeline_tag: translation
datasets:
- wmt14
---
# mt5-large-wmt14-deen
This model is released as part of the work from [Are Character-level Translations Worth the Wait? Comparing Character- and Subword-level Models for Machine Translation](https://arxiv.org/abs/2302.14220).
It is an mT5 model finetuned on German-->English translation using the WMT14 dataset.
To use the model correctly, you must prepend the prompt with "translate X to Y: ", where X and Y are your source and target languages (e.g. German, English).
NOTE: The decoder_start_token_id is 259 for byt5 models and 250099 for mt5 models, which is different from the default token from google's byt5 and mt5 models (which is 0). |
Amirosein/distilbert_v1 | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: other
---
GGML f16 version of [airoboros-7b](https://huggingface.co/jondurbin/airoboros-7b)
Run with llama.cpp (example):
```
./main -m ./airoboros-7b-ggml-f16.bin -ngl 40 -c 2048 -r 'USER: ' --in-suffix 'ASSISTANT: ' --interactive-first
```
Or for one-off prompts:
```
./main -m ./airoboros-7b-ggml-f16.bin -ngl 40 -c 2048 -p "USER: If the sky is green, what color is the sky?"
main: build = 583 (7e4ea5b)
main: seed = 1684774592
llama.cpp: loading model from /data/airoboros-7b/airoboros-7b-ggml-f16.bin
llama_model_load_internal: format = ggjt v1 (pre #1405)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 1 (mostly F16)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 0.07 MB
llama_model_load_internal: mem required = 14645.09 MB (+ 1026.00 MB per state)
.
llama_init_from_file: kv self size = 1024.00 MB
system_info: n_threads = 6 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 0 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.800000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 2048, n_batch = 512, n_predict = -1, n_keep = 0
USER: If the sky is green, what color is the sky?
The color of the sky would not change if it were green. [end of text]
llama_print_timings: load time = 2497.62 ms
llama_print_timings: sample time = 11.55 ms / 16 runs ( 0.72 ms per token)
llama_print_timings: prompt eval time = 2011.22 ms / 16 tokens ( 125.70 ms per token)
llama_print_timings: eval time = 4135.80 ms / 15 runs ( 275.72 ms per token)
llama_print_timings: total time = 6650.07 ms
``` |
Amrrs/wav2vec2-large-xlsr-53-tamil | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ta",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index",
"has_space"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
license: creativeml-openrail-m
datasets:
- mozilla-foundation/common_voice_13_0
language:
- fa
- en
metrics:
- wer
- accuracy
pipeline_tag: automatic-speech-recognition
--- |
Ana1315/A | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: other
---
GGML q4_0 version of [airoboros-7b](https://huggingface.co/jondurbin/airoboros-7b)
Run with llama.cpp (example):
```
./main -m ./airoboros-7b-ggml-q4_0.bin -ngl 40 -c 2048 -r 'USER: ' --in-suffix 'ASSISTANT: ' --interactive-first
```
Or for one-off prompts:
```
./main -m ./airoboros-7b-ggml-q4_0.bin -ngl 40 -c 2048 -p "USER: Write a news headline about a llama kicking an alpaca in the face at the zoo."
main: build = 583 (7e4ea5b)
main: seed = 1684775260
llama.cpp: loading model from /data/airoboros-7b/ggml-model-q4_0.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 0.07 MB
llama_model_load_internal: mem required = 5407.71 MB (+ 1026.00 MB per state)
.
llama_init_from_file: kv self size = 1024.00 MB
system_info: n_threads = 6 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 0 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.800000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 2048, n_batch = 512, n_predict = -1, n_keep = 0
USER: Write a news headline about a llama kicking an alpaca in the face at the zoo.
"Zoo Worker Hospitalized After Llama Kicks Alpaca In Face" [end of text]
llama_print_timings: load time = 2983.53 ms
llama_print_timings: sample time = 14.89 ms / 22 runs ( 0.68 ms per token)
llama_print_timings: prompt eval time = 2827.72 ms / 26 tokens ( 108.76 ms per token)
llama_print_timings: eval time = 3423.47 ms / 21 runs ( 163.02 ms per token)
llama_print_timings: total time = 6428.42 ms
``` |
Analufm/Ana | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- de
- en
pipeline_tag: translation
---
# mt5-small-nc16-250k-deen
This model is released as part of the work from [Are Character-level Translations Worth the Wait? Comparing Character- and Subword-level Models for Machine Translation](https://arxiv.org/abs/2302.14220).
It is an mT5 model finetuned on German-->English translation using 250k sentence pairs from the WMT NewsCommentary v16 dataset.
To use the model correctly, you must prepend the prompt with "translate X to Y: ", where X and Y are your source and target languages (e.g. German, English).
NOTE: The decoder_start_token_id is 259 for byt5 models and 250099 for mt5 models, which is different from the default token from google's byt5 and mt5 models (which is 0). |
Anamika/autonlp-fa-473312409 | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:Anamika/autonlp-data-fa",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 35 | "2023-05-22T17:01:53Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 451.00 +/- 192.71
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sadra-barikbin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sadra-barikbin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sadra-barikbin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 16),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.001),
('learning_starts', 50000),
('n_timesteps', 2000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Andranik/TestPytorchClassification | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 36 | "2023-05-22T17:04:05Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: wasimar/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Andranik/TestQaV1 | [
"pytorch",
"rust",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
language:
- de
- en
pipeline_tag: translation
---
# byt5-small-nc16-deen
This model is released as part of the work from [Are Character-level Translations Worth the Wait? Comparing Character- and Subword-level Models for Machine Translation](https://arxiv.org/abs/2302.14220).
It is a ByT5 model finetuned on German-->English translation using 250k sentence pairs from the WMT NewsCommentary v16 dataset.
To use the model correctly, you must prepend the prompt with "translate X to Y: ", where X and Y are your source and target languages (e.g. German, English).
NOTE: The decoder_start_token_id is 259 for byt5 models and 250099 for mt5 models, which is different from the default token from google's byt5 and mt5 models (which is 0). |
AndrewMcDowell/wav2vec2-xls-r-300m-arabic | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ar",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of <rickmann>
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - patrickvonplaten/papa_out_2
This is a DreamBooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of <rickmann> using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)
DreamBooth for the text encoder was enabled: True.
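A hedged inference sketch with the diffusers library (precision, prompt, and step count are arbitrary choices, not settings from this training run):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "patrickvonplaten/papa_out_2", torch_dtype=torch.float16
).to("cuda")

# Use the instance prompt the weights were trained on.
image = pipe("a photo of <rickmann> hiking in the Alps", num_inference_steps=30).images[0]
image.save("rickmann.png")
```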
|
AndyyyCai/bert-base-uncased-finetuned-copa | [
"pytorch",
"bert",
"multiple-choice",
"transformers"
] | multiple-choice | {
"architectures": [
"BertForMultipleChoice"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
title: Stable Diffusion Inpainting
emoji: ⚡
colorFrom: gray
colorTo: yellow
sdk: gradio
sdk_version: 3.11
app_file: app.py
pinned: false
license: mit
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
Anji/roberta-base-squad2-finetuned-squad | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-imdb-model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.96
- name: F1
type: f1
value: 0.9602780536246276
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-imdb-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1230
- Accuracy: 0.96
- F1: 0.9603
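A hedged inference sketch (the repo id below is a placeholder, since this auto-generated card does not state where the checkpoint is hosted):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="<user>/finetuning-imdb-model")  # placeholder repo id
print(classifier("A surprisingly touching film with a terrific lead performance."))
```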
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
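As a hedged reconstruction (an approximation, not the original training script), the hyperparameters above roughly correspond to the following `TrainingArguments`; Adam's betas and epsilon match the library defaults, and the output directory is hypothetical:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuning-imdb-model",  # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```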
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Ankit-11/distilbert-base-uncased-finetuned-toxic | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- mc4
- c4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack
- allenai/s2orc
inference: false
---
### Attribution
This model is derived from [MosaicML's MPT-7B model](https://huggingface.co/mosaicml/mpt-7b/tree/main), with changes from
[cekal/mpt-7b-peft-compatible](https://huggingface.co/cekal/mpt-7b-peft-compatible) applied; each licensed under the
Apache License, version 2.0.
# MPT-7B
MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
This model was trained by [MosaicML](https://www.mosaicml.com).
MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing
positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)).
Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence.
MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).
This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.
### How is this model different?
MPT-7B is
* **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)).
* **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409) (we finetuned [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter) on up to 65k inputs and can handle up to 84k vs. 2k-4k for other open source models).
* **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer))
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry)
### Models finetuned off MPT-7B:
The following models are finetuned on MPT-7B:
* [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter): a model designed to read and write fictional stories with super long context lengths.
Built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in our [blogpost](https://www.mosaicml.com/blog/mpt-7b).
* License: Apache 2.0
* [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following.
Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
* [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat): a chatbot-like model for dialogue generation.
Built by finetuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3),
[Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets.
* License: _CC-By-NC-SA-4.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat)
## Model Date
May 5, 2023
## Model License
Apache-2.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://join.slack.com/t/mosaicml-community/shared_invite/zt-1btms90mc-GipE2ufuPkKY0QBrmF3LSA)!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-7b',
  trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`:
```python
import torch  # needed for torch.bfloat16 below

config = transformers.AutoConfig.from_pretrained(
  'mosaicml/mpt-7b',
  trust_remote_code=True
)
config.attn_config['attn_impl'] = 'triton'

model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-7b',
  config=config,
  torch_dtype=torch.bfloat16,
  trust_remote_code=True
)
model.to(device='cuda:0')
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
config = transformers.AutoConfig.from_pretrained(
  'mosaicml/mpt-7b',
  trust_remote_code=True
)
config.update({"max_seq_len": 4096})

model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-7b',
  config=config,
  trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
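With the model and tokenizer loaded as in the snippets above, a minimal generation sketch looks like this (the prompt and sampling settings are arbitrary, not recommended defaults):
```python
import torch

prompt = "Here is a recipe for vegan banana bread:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.8)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```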
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings (see the simplified sketch after this list)
* It does not use biases
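For intuition, here is a simplified sketch of the ALiBi bias added to the attention logits. This is an illustration only, not MPT's actual implementation; it assumes the number of heads is a power of two and that causal masking is applied separately:
```python
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # Geometric head-specific slopes, as in the ALiBi paper.
    start = 2 ** (-8 / n_heads)
    slopes = torch.tensor([start ** (i + 1) for i in range(n_heads)])
    # Relative distance of each key position from each query position (<= 0 in the past).
    distances = torch.arange(seq_len)[None, :] - torch.arange(seq_len)[:, None]
    distances = distances.clamp(max=0)  # future positions are handled by the causal mask
    return slopes[:, None, None] * distances[None, :, :]  # shape: (n_heads, seq_len, seq_len)

print(alibi_bias(n_heads=4, seq_len=5)[0])
```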
| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## Training Data
### Streaming Datasets
Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
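A hedged sketch of what consuming such a dataset looks like with the open-source `streaming` package; the argument names are assumptions based on the library's public documentation, and the object-storage path is hypothetical:
```python
from torch.utils.data import DataLoader
from streaming import StreamingDataset

dataset = StreamingDataset(
    remote="s3://my-bucket/mds-shards",  # hypothetical remote location of the shards
    local="/tmp/streaming-cache",        # local cache; shards are fetched on demand
    shuffle=True,
)
loader = DataLoader(dataset, batch_size=8)
for batch in loader:
    ...  # training step
```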
### Data Mix
The model was trained for 1T tokens (with batch size 1760 and sequence length 2048). It was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 |
| C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 |
| The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 |
| RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 |
| S2ORC | 48.85 B | 0.033 | 33 B | 0.68 |
| RedPajama - Books | 26.02 B | 0.03 | 30B | 1.15 |
| RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 0.014 | 14 B | 0.68 |
Samples for each batch were selected from one of the datasets with the probability specified above.
The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.
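The toy sketch below illustrates that mixing-and-packing scheme. It is an illustration only, not MosaicML's actual dataloader: the streams are dummy token generators and only a few sources from the table are shown.
```python
import random

def dummy_stream(vocab_size: int = 50432, doc_len: int = 512):
    # Stands in for a real tokenized shard; yields one "document" of token ids at a time.
    while True:
        yield [random.randrange(vocab_size) for _ in range(doc_len)]

proportions = {"mc4_en": 0.33, "c4_en": 0.299, "rp_commoncrawl": 0.10}  # truncated example of the mix
streams = {name: dummy_stream() for name in proportions}

def pack_example(max_len: int = 2048):
    # 1) pick a source dataset according to its mixing proportion
    name = random.choices(list(proportions), weights=list(proportions.values()))[0]
    tokens = []
    # 2) concatenate documents from that source until the context window is full
    while len(tokens) < max_len:
        tokens.extend(next(streams[name]))
    return tokens[:max_len]

batch = [pack_example() for _ in range(4)]
```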
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics,
most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile)
(2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)), which increased model flop utilization (MFU) by up to four percentage points.
### Training Configuration
This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
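For readers unfamiliar with sharded data parallelism, the minimal PyTorch FSDP sketch below shows the core idea. It is an illustration only: the actual run used MosaicML's Composer/llm-foundry stack, and AdamW stands in for LION here since LION is not part of core PyTorch.
```python
# Launch with: torchrun --nproc_per_node=<num_gpus> fsdp_sketch.py
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# Toy module standing in for the 6.7B-parameter transformer.
model = torch.nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True).cuda()
model = FSDP(model)  # parameters, gradients, and optimizer state are sharded across ranks

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(2, 16, 512, device="cuda")
loss = model(x).sum()
loss.backward()
optimizer.step()
```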
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B (Base) is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.
MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
Ann2020/distilbert-base-uncased-finetuned-ner | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
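A hedged usage sketch (the repo id is a placeholder, since this auto-generated card does not state where the checkpoint is hosted):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="<user>/bert-finetuned-squad")  # placeholder repo id
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="bert-finetuned-squad is a BERT model fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```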
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AnonymousSub/SR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | "2023-05-22T19:45:39Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the serialized optimizer config is reconstructed in the sketch after this list):
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
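As a hedged reconstruction (an approximation, not the original training script), the serialized optimizer above corresponds roughly to:
```python
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=1908,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```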
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1_squad2.0 | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
pipeline_tag: sentence-similarity
language: fr
license: apache-2.0
datasets:
- unicamp-dl/mmarco
metrics:
- recall
- posicube/mean_reciprocal_rank
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# biencoder-bert-tiny-mmarcoFR
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 128 dimensional dense vector space and can be used for tasks like clustering or semantic search. The model was trained on the **French** portion of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset.
## Usage
***
#### Sentence-Transformers
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('antoinelouis/biencoder-bert-tiny-mmarcoFR')
embeddings = model.encode(sentences)
print(embeddings)
```
#### 🤗 Transformers
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('antoinelouis/biencoder-bert-tiny-mmarcoFR')
model = AutoModel.from_pretrained('antoinelouis/biencoder-bert-tiny-mmarcoFR')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation
***
We evaluated our model on the smaller development set of mMARCO-fr, which consists of 6,980 queries for a corpus of 8.8M candidate passages.
| MRR@10 | NDCG@10 | MAP@10 | Recall@10 | Recall@100 | Recall@500 |
|---------:|----------:|---------:|------------:|-------------:|-------------:|
| 14.94 | 18.22 | 14.59 | 29.46 | 51.94 | 66.3 |
Below, we compared its results with other biencoder models fine-tuned on the same dataset:
| | model | MRR@10 | NDCG@10 | MAP@10 | Recall@10 | Recall@100 (↑) | Recall@500 |
|---:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------:|----------:|---------:|------------:|-------------:|-------------:|
| 0 | [biencoder-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-camembert-base-mmarcoFR) | 28.53 | 33.72 | 27.93 | 51.46 | 77.82 | 89.13 |
| 1 | [biencoder-all-mpnet-base-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-all-mpnet-base-v2-mmarcoFR) | 28.04 | 33.28 | 27.5 | 51.07 | 77.68 | 88.67 |
| 2 | [biencoder-multi-qa-mpnet-base-cos-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-multi-qa-mpnet-base-cos-v1-mmarcoFR) | 27.6 | 32.92 | 27.09 | 50.97 | 77.41 | 87.79 |
| 3 | [biencoder-sentence-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-sentence-camembert-base-mmarcoFR) | 27.63 | 32.7 | 27.01 | 50.1 | 76.85 | 88.73 |
| 4 | [biencoder-distilcamembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilcamembert-base-mmarcoFR) | 26.8 | 31.87 | 26.23 | 49.2 | 76.44 | 87.87 |
| 5 | [biencoder-mpnet-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mpnet-base-mmarcoFR) | 27.2 | 32.22 | 26.63 | 49.41 | 75.71 | 86.88 |
| 6 | [biencoder-multi-qa-distilbert-cos-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-multi-qa-distilbert-cos-v1-mmarcoFR) | 26.36 | 31.26 | 25.82 | 47.93 | 75.42 | 86.78 |
| 7 | [biencoder-bert-base-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-base-uncased-mmarcoFR) | 26.3 | 31.14 | 25.74 | 47.67 | 74.57 | 86.33 |
| 8 | [biencoder-msmarco-distilbert-cos-v5-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-msmarco-distilbert-cos-v5-mmarcoFR) | 25.75 | 30.63 | 25.24 | 47.22 | 73.96 | 85.64 |
| 9 | [biencoder-all-distilroberta-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-all-distilroberta-v1-mmarcoFR) | 26.17 | 30.91 | 25.67 | 47.06 | 73.5 | 85.69 |
| 10 | [biencoder-all-MiniLM-L6-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-all-MiniLM-L6-v2-mmarcoFR) | 25.49 | 30.39 | 24.99 | 47.1 | 73.48 | 86.09 |
| 11 | [biencoder-distilbert-base-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilbert-base-uncased-mmarcoFR) | 25.18 | 29.83 | 24.64 | 45.77 | 73.16 | 85.13 |
| 12 | [biencoder-msmarco-MiniLM-L12-cos-v5-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-msmarco-MiniLM-L12-cos-v5-mmarcoFR) | 26.22 | 30.99 | 25.69 | 47.29 | 73.09 | 84.95 |
| 13 | [biencoder-roberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-roberta-base-mmarcoFR) | 25.94 | 30.72 | 25.43 | 46.98 | 73.07 | 84.76 |
| 14 | [biencoder-distiluse-base-multilingual-cased-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distiluse-base-multilingual-cased-v1-mmarcoFR) | 24.57 | 29.08 | 24.04 | 44.51 | 72.54 | 85.13 |
| 15 | [biencoder-multi-qa-MiniLM-L6-cos-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-multi-qa-MiniLM-L6-cos-v1-mmarcoFR) | 24.72 | 29.58 | 24.25 | 46.05 | 72.19 | 84.6 |
| 16 | [biencoder-MiniLM-L12-H384-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-MiniLM-L12-H384-uncased-mmarcoFR) | 25.43 | 30.1 | 24.88 | 46.13 | 72.16 | 83.84 |
| 17 | [biencoder-mMiniLMv2-L12-H384-distilled-from-XLMR-Large-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L12-H384-distilled-from-XLMR-Large-mmarcoFR) | 24.74 | 29.41 | 24.23 | 45.4 | 71.52 | 84.42 |
| 18 | [biencoder-electra-base-discriminator-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-electra-base-discriminator-mmarcoFR) | 24.77 | 29.37 | 24.21 | 45.2 | 70.84 | 83.25 |
| 19 | [biencoder-bert-medium-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-medium-mmarcoFR) | 23.86 | 28.56 | 23.39 | 44.47 | 70.57 | 83.58 |
| 20 | [biencoder-msmarco-MiniLM-L6-cos-v5-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-msmarco-MiniLM-L6-cos-v5-mmarcoFR) | 24.39 | 28.96 | 23.91 | 44.58 | 70.36 | 82.88 |
| 21 | [biencoder-distilroberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilroberta-base-mmarcoFR) | 23.94 | 28.44 | 23.46 | 43.77 | 70.08 | 82.86 |
| 22 | [biencoder-camemberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-camemberta-base-mmarcoFR) | 24.78 | 29.24 | 24.23 | 44.58 | 69.59 | 82.18 |
| 23 | [biencoder-electra-base-french-europeana-cased-discriminator-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-electra-base-french-europeana-cased-discriminator-mmarcoFR) | 23.38 | 27.97 | 22.91 | 43.5 | 68.96 | 81.61 |
| 24 | [biencoder-bert-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-small-mmarcoFR) | 22.4 | 26.84 | 21.95 | 41.96 | 68.88 | 82.14 |
| 25 | [biencoder-mMiniLM-L6-v2-mmarcoFR-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLM-L6-v2-mmarcoFR-v2-mmarcoFR) | 22.87 | 27.26 | 22.37 | 42.3 | 68.78 | 81.39 |
| 26 | [biencoder-MiniLM-L6-H384-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-MiniLM-L6-H384-uncased-mmarcoFR) | 22.86 | 27.34 | 22.41 | 42.62 | 68.4 | 81.54 |
| 27 | [biencoder-deberta-v3-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-deberta-v3-small-mmarcoFR) | 22.44 | 26.84 | 21.97 | 41.84 | 68.17 | 80.9 |
| 28 | [biencoder-mMiniLMv2-L6-H384-distilled-from-XLMR-Large-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L6-H384-distilled-from-XLMR-Large-mmarcoFR) | 22.29 | 26.57 | 21.8 | 41.25 | 66.78 | 79.83 |
| 29 | [biencoder-bert-mini-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-mini-mmarcoFR) | 20.06 | 24.09 | 19.66 | 37.78 | 64.27 | 77.39 |
| 30 | [biencoder-electra-small-discriminator-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-electra-small-discriminator-mmarcoFR) | 20.32 | 24.36 | 19.9 | 38.16 | 63.98 | 77.23 |
| 31 | [biencoder-deberta-v3-xsmall-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-deberta-v3-xsmall-mmarcoFR) | 17.7 | 21.29 | 17.31 | 33.59 | 58.76 | 73.45 |
| 32 | **biencoder-bert-tiny-mmarcoFR** | 14.94 | 18.22 | 14.59 | 29.46 | 51.94 | 66.3 |
| 33 | [biencoder-t5-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-t5-small-mmarcoFR) | 12.44 | 15.1 | 12.14 | 24.28 | 47.82 | 63.37 |
| 34 | [biencoder-bert-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-small-mmarcoFR) | 0.22 | 0.28 | 0.21 | 0.5 | 1.25 | 2.34 |
## Training
***
#### Background
We used the [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) model and fine-tuned it on a dataset of 500K sentence pairs in French. We used a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences in the batch, was actually paired with it in our dataset. Formally, we compute the cosine similarity for every possible sentence pair in the batch, then apply a cross-entropy loss with a temperature of 0.05, treating the true pairs as the positive classes.
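For illustration, here is a minimal sketch of this in-batch objective (an approximation of the described loss, not the exact training code):
```python
import torch
import torch.nn.functional as F

def contrastive_loss(query_emb: torch.Tensor, passage_emb: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    # Cosine similarities between every query and every passage in the batch.
    query_emb = F.normalize(query_emb, dim=-1)
    passage_emb = F.normalize(passage_emb, dim=-1)
    scores = query_emb @ passage_emb.T / temperature  # (batch, batch)
    # The true pair for row i sits on the diagonal; every other column is an in-batch negative.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

loss = contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
```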
#### Hyperparameters
We trained the model on a single Tesla V100 GPU with 32GB of memory for 20 epochs (i.e., 62.4k steps) using a batch size of 160. We used the AdamW optimizer with an initial learning rate of 2e-05, weight decay of 0.01, learning-rate warmup over the first 500 steps, and linear decay of the learning rate. The sequence length was limited to 128 tokens.
#### Data
We used the French version of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset to fine-tune our model. mMARCO is a multi-lingual machine-translated version of the MS MARCO dataset, a large-scale IR dataset comprising:
- a corpus of 8.8M passages;
- a training set of ~533k queries (with at least one relevant passage);
- a development set of ~101k queries;
- a smaller dev set of 6,980 queries (which is actually used for evaluation in most published works).
Link: [https://ir-datasets.com/mmarco.html#mmarco/v2/fr/](https://ir-datasets.com/mmarco.html#mmarco/v2/fr/)
## Citation
```bibtex
@online{louis2023,
author = 'Antoine Louis',
title = 'biencoder-bert-tiny-mmarcoFR: A Biencoder Model Trained on French mMARCO',
publisher = 'Hugging Face',
month = 'may',
year = '2023',
url = 'https://huggingface.co/antoinelouis/biencoder-bert-tiny-mmarcoFR',
}
``` |
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | "2023-05-22T20:56:35Z" | ---
pipeline_tag: sentence-similarity
language: fr
license: apache-2.0
datasets:
- unicamp-dl/mmarco
metrics:
- recall
- posicube/mean_reciprocal_rank
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# biencoder-distiluse-base-multilingual-cased-v1-mmarcoFR
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. The model was trained on the **French** portion of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset.
## Usage
***
#### Sentence-Transformers
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('antoinelouis/biencoder-distiluse-base-multilingual-cased-v1-mmarcoFR')
embeddings = model.encode(sentences)
print(embeddings)
```
#### 🤗 Transformers
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('antoinelouis/biencoder-distiluse-base-multilingual-cased-v1-mmarcoFR')
model = AutoModel.from_pretrained('antoinelouis/biencoder-distiluse-base-multilingual-cased-v1-mmarcoFR')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation
***
We evaluated our model on the smaller development set of mMARCO-fr, which consists of 6,980 queries for a corpus of 8.8M candidate passages.
| MRR@10 | NDCG@10 | MAP@10 | Recall@10 | Recall@100 | Recall@500 |
|---------:|----------:|---------:|------------:|-------------:|-------------:|
| 24.57 | 29.08 | 24.04 | 44.51 | 72.54 | 85.13 |
Below, we compared its results with other biencoder models fine-tuned on the same dataset:
| | model | MRR@10 | NDCG@10 | MAP@10 | Recall@10 | Recall@100 (↑) | Recall@500 |
|---:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------:|----------:|---------:|------------:|-------------:|-------------:|
| 0 | [biencoder-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-camembert-base-mmarcoFR) | 28.53 | 33.72 | 27.93 | 51.46 | 77.82 | 89.13 |
| 1 | [biencoder-all-mpnet-base-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-all-mpnet-base-v2-mmarcoFR) | 28.04 | 33.28 | 27.5 | 51.07 | 77.68 | 88.67 |
| 2 | [biencoder-multi-qa-mpnet-base-cos-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-multi-qa-mpnet-base-cos-v1-mmarcoFR) | 27.6 | 32.92 | 27.09 | 50.97 | 77.41 | 87.79 |
| 3 | [biencoder-sentence-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-sentence-camembert-base-mmarcoFR) | 27.63 | 32.7 | 27.01 | 50.1 | 76.85 | 88.73 |
| 4 | [biencoder-distilcamembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilcamembert-base-mmarcoFR) | 26.8 | 31.87 | 26.23 | 49.2 | 76.44 | 87.87 |
| 5 | [biencoder-mpnet-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mpnet-base-mmarcoFR) | 27.2 | 32.22 | 26.63 | 49.41 | 75.71 | 86.88 |
| 6 | [biencoder-multi-qa-distilbert-cos-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-multi-qa-distilbert-cos-v1-mmarcoFR) | 26.36 | 31.26 | 25.82 | 47.93 | 75.42 | 86.78 |
| 7 | [biencoder-bert-base-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-base-uncased-mmarcoFR) | 26.3 | 31.14 | 25.74 | 47.67 | 74.57 | 86.33 |
| 8 | [biencoder-msmarco-distilbert-cos-v5-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-msmarco-distilbert-cos-v5-mmarcoFR) | 25.75 | 30.63 | 25.24 | 47.22 | 73.96 | 85.64 |
| 9 | [biencoder-all-distilroberta-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-all-distilroberta-v1-mmarcoFR) | 26.17 | 30.91 | 25.67 | 47.06 | 73.5 | 85.69 |
| 10 | [biencoder-all-MiniLM-L6-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-all-MiniLM-L6-v2-mmarcoFR) | 25.49 | 30.39 | 24.99 | 47.1 | 73.48 | 86.09 |
| 11 | [biencoder-distilbert-base-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilbert-base-uncased-mmarcoFR) | 25.18 | 29.83 | 24.64 | 45.77 | 73.16 | 85.13 |
| 12 | [biencoder-msmarco-MiniLM-L12-cos-v5-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-msmarco-MiniLM-L12-cos-v5-mmarcoFR) | 26.22 | 30.99 | 25.69 | 47.29 | 73.09 | 84.95 |
| 13 | [biencoder-roberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-roberta-base-mmarcoFR) | 25.94 | 30.72 | 25.43 | 46.98 | 73.07 | 84.76 |
| 14 | **biencoder-distiluse-base-multilingual-cased-v1-mmarcoFR** | 24.57 | 29.08 | 24.04 | 44.51 | 72.54 | 85.13 |
| 15 | [biencoder-multi-qa-MiniLM-L6-cos-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-multi-qa-MiniLM-L6-cos-v1-mmarcoFR) | 24.72 | 29.58 | 24.25 | 46.05 | 72.19 | 84.6 |
| 16 | [biencoder-MiniLM-L12-H384-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-MiniLM-L12-H384-uncased-mmarcoFR) | 25.43 | 30.1 | 24.88 | 46.13 | 72.16 | 83.84 |
| 17 | [biencoder-mMiniLMv2-L12-H384-distilled-from-XLMR-Large-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L12-H384-distilled-from-XLMR-Large-mmarcoFR) | 24.74 | 29.41 | 24.23 | 45.4 | 71.52 | 84.42 |
| 18 | [biencoder-electra-base-discriminator-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-electra-base-discriminator-mmarcoFR) | 24.77 | 29.37 | 24.21 | 45.2 | 70.84 | 83.25 |
| 19 | [biencoder-bert-medium-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-medium-mmarcoFR) | 23.86 | 28.56 | 23.39 | 44.47 | 70.57 | 83.58 |
| 20 | [biencoder-msmarco-MiniLM-L6-cos-v5-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-msmarco-MiniLM-L6-cos-v5-mmarcoFR) | 24.39 | 28.96 | 23.91 | 44.58 | 70.36 | 82.88 |
| 21 | [biencoder-distilroberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilroberta-base-mmarcoFR) | 23.94 | 28.44 | 23.46 | 43.77 | 70.08 | 82.86 |
| 22 | [biencoder-camemberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-camemberta-base-mmarcoFR) | 24.78 | 29.24 | 24.23 | 44.58 | 69.59 | 82.18 |
| 23 | [biencoder-electra-base-french-europeana-cased-discriminator-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-electra-base-french-europeana-cased-discriminator-mmarcoFR) | 23.38 | 27.97 | 22.91 | 43.5 | 68.96 | 81.61 |
| 24 | [biencoder-bert-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-small-mmarcoFR) | 22.4 | 26.84 | 21.95 | 41.96 | 68.88 | 82.14 |
| 25 | [biencoder-mMiniLM-L6-v2-mmarcoFR-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLM-L6-v2-mmarcoFR-v2-mmarcoFR) | 22.87 | 27.26 | 22.37 | 42.3 | 68.78 | 81.39 |
| 26 | [biencoder-MiniLM-L6-H384-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-MiniLM-L6-H384-uncased-mmarcoFR) | 22.86 | 27.34 | 22.41 | 42.62 | 68.4 | 81.54 |
| 27 | [biencoder-deberta-v3-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-deberta-v3-small-mmarcoFR) | 22.44 | 26.84 | 21.97 | 41.84 | 68.17 | 80.9 |
| 28 | [biencoder-mMiniLMv2-L6-H384-distilled-from-XLMR-Large-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L6-H384-distilled-from-XLMR-Large-mmarcoFR) | 22.29 | 26.57 | 21.8 | 41.25 | 66.78 | 79.83 |
| 29 | [biencoder-bert-mini-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-mini-mmarcoFR) | 20.06 | 24.09 | 19.66 | 37.78 | 64.27 | 77.39 |
| 30 | [biencoder-electra-small-discriminator-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-electra-small-discriminator-mmarcoFR) | 20.32 | 24.36 | 19.9 | 38.16 | 63.98 | 77.23 |
| 31 | [biencoder-deberta-v3-xsmall-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-deberta-v3-xsmall-mmarcoFR) | 17.7 | 21.29 | 17.31 | 33.59 | 58.76 | 73.45 |
| 32 | [biencoder-bert-tiny-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-tiny-mmarcoFR) | 14.94 | 18.22 | 14.59 | 29.46 | 51.94 | 66.3 |
| 33 | [biencoder-t5-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-t5-small-mmarcoFR) | 12.44 | 15.1 | 12.14 | 24.28 | 47.82 | 63.37 |
| 34 | [biencoder-bert-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-small-mmarcoFR) | 0.22 | 0.28 | 0.21 | 0.5 | 1.25 | 2.34 |
## Training
***
#### Background
We used the [sentence-transformers/distiluse-base-multilingual-cased-v1](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1) model and fine-tuned it on a dataset of 500K sentence pairs in French. We used a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences in the batch, was actually paired with it in our dataset. Formally, we compute the cosine similarity for every possible sentence pair in the batch, then apply a cross-entropy loss with a temperature of 0.05, treating the true pairs as the positive classes.
#### Hyperparameters
We trained the model on a single Tesla V100 GPU with 32GB of memory for 20 epochs (i.e., 65.7k steps) using a batch size of 152. We used the AdamW optimizer with an initial learning rate of 2e-05, weight decay of 0.01, learning-rate warmup over the first 500 steps, and linear decay of the learning rate. The sequence length was limited to 128 tokens.
#### Data
We used the French version of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset to fine-tune our model. mMARCO is a multi-lingual machine-translated version of the MS MARCO dataset, a large-scale IR dataset comprising:
- a corpus of 8.8M passages;
- a training set of ~533k queries (with at least one relevant passage);
- a development set of ~101k queries;
- a smaller dev set of 6,980 queries (which is actually used for evaluation in most published works).
Link: [https://ir-datasets.com/mmarco.html#mmarco/v2/fr/](https://ir-datasets.com/mmarco.html#mmarco/v2/fr/)
## Citation
```bibtex
@online{louis2023,
author = 'Antoine Louis',
title = 'biencoder-distiluse-base-multilingual-cased-v1-mmarcoFR: A Biencoder Model Trained on French mMARCO',
publisher = 'Hugging Face',
month = 'may',
year = '2023',
url = 'https://huggingface.co/antoinelouis/biencoder-distiluse-base-multilingual-cased-v1-mmarcoFR',
}
``` |
AnonymousSub/rule_based_hier_triplet_0.1_epochs_1_shard_1_squad2.0 | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | "2023-05-22T20:57:08Z" | ---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-300m-swa-r22-2k-ft-ft-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-300m-swa-r22-2k-ft-ft-v1
This model is a fine-tuned version of [mutisya/wav2vec2-300m-swa-tz_3_22-6.5k-ft](https://huggingface.co/mutisya/wav2vec2-300m-swa-tz_3_22-6.5k-ft) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0858
- Wer: 0.0515
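A hedged transcription sketch (the repo id and audio path are placeholders; this card does not state where the checkpoint is hosted):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="<user>/wav2vec2-300m-swa-r22-2k-ft-ft-v1")  # placeholder repo id
print(asr("path/to/swahili_clip.wav")["text"])
```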
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.3689 | 0.29 | 400 | 2.6587 | 1.0 |
| 0.9121 | 0.58 | 800 | 0.1588 | 0.1934 |
| 0.2273 | 0.87 | 1200 | 0.1018 | 0.1343 |
| 0.1602 | 1.16 | 1600 | 0.0883 | 0.1223 |
| 0.1576 | 1.44 | 2000 | 0.0836 | 0.1100 |
| 0.1393 | 1.73 | 2400 | 0.0740 | 0.0996 |
| 0.1235 | 2.02 | 2800 | 0.0741 | 0.0954 |
| 0.1168 | 2.31 | 3200 | 0.0747 | 0.0914 |
| 0.122 | 2.6 | 3600 | 0.0709 | 0.0857 |
| 0.1251 | 2.89 | 4000 | 0.0699 | 0.0948 |
| 0.1283 | 3.18 | 4400 | 0.0754 | 0.0921 |
| 0.1181 | 3.47 | 4800 | 0.0730 | 0.0909 |
| 0.1304 | 3.75 | 5200 | 0.0725 | 0.0892 |
| 0.1012 | 4.04 | 5600 | 0.0761 | 0.0865 |
| 0.103 | 4.33 | 6000 | 0.0789 | 0.0897 |
| 0.1184 | 4.62 | 6400 | 0.0740 | 0.0800 |
| 0.1033 | 4.91 | 6800 | 0.0881 | 0.0838 |
| 0.0986 | 5.2 | 7200 | 0.0695 | 0.0768 |
| 0.0953 | 5.49 | 7600 | 0.0689 | 0.0811 |
| 0.0867 | 5.78 | 8000 | 0.0683 | 0.0778 |
| 0.0962 | 6.06 | 8400 | 0.0685 | 0.0723 |
| 0.0871 | 6.35 | 8800 | 0.0698 | 0.0786 |
| 0.0927 | 6.64 | 9200 | 0.0692 | 0.0742 |
| 0.0776 | 6.93 | 9600 | 0.0689 | 0.0764 |
| 0.0744 | 7.22 | 10000 | 0.0704 | 0.0727 |
| 0.0774 | 7.51 | 10400 | 0.0713 | 0.0700 |
| 0.0805 | 7.8 | 10800 | 0.0664 | 0.0705 |
| 0.069 | 8.09 | 11200 | 0.0678 | 0.0839 |
| 0.0637 | 8.38 | 11600 | 0.0693 | 0.0674 |
| 0.0683 | 8.66 | 12000 | 0.0715 | 0.0725 |
| 0.0681 | 8.95 | 12400 | 0.0751 | 0.0739 |
| 0.0576 | 9.24 | 12800 | 0.0706 | 0.0768 |
| 0.0553 | 9.53 | 13200 | 0.0715 | 0.0678 |
| 0.0588 | 9.82 | 13600 | 0.0733 | 0.0680 |
| 0.0528 | 10.11 | 14000 | 0.0783 | 0.0610 |
| 0.0505 | 10.4 | 14400 | 0.0781 | 0.0782 |
| 0.0591 | 10.69 | 14800 | 0.0806 | 0.0645 |
| 0.0519 | 10.97 | 15200 | 0.0755 | 0.0658 |
| 0.0531 | 11.26 | 15600 | 0.0731 | 0.0605 |
| 0.0492 | 11.55 | 16000 | 0.0751 | 0.0621 |
| 0.0491 | 11.84 | 16400 | 0.0813 | 0.0654 |
| 0.0466 | 12.13 | 16800 | 0.0792 | 0.0612 |
| 0.0442 | 12.42 | 17200 | 0.0793 | 0.0605 |
| 0.0447 | 12.71 | 17600 | 0.0766 | 0.0634 |
| 0.0439 | 13.0 | 18000 | 0.0811 | 0.0590 |
| 0.0413 | 13.29 | 18400 | 0.0806 | 0.0603 |
| 0.0413 | 13.57 | 18800 | 0.0830 | 0.0615 |
| 0.0389 | 13.86 | 19200 | 0.0797 | 0.0568 |
| 0.036 | 14.15 | 19600 | 0.0792 | 0.0552 |
| 0.0403 | 14.44 | 20000 | 0.0807 | 0.0593 |
| 0.0412 | 14.73 | 20400 | 0.0838 | 0.0570 |
| 0.036 | 15.02 | 20800 | 0.0873 | 0.0575 |
| 0.0336 | 15.31 | 21200 | 0.0815 | 0.0580 |
| 0.0341 | 15.6 | 21600 | 0.0789 | 0.0586 |
| 0.0356 | 15.88 | 22000 | 0.0869 | 0.0563 |
| 0.0314 | 16.17 | 22400 | 0.0844 | 0.0552 |
| 0.0345 | 16.46 | 22800 | 0.0850 | 0.0532 |
| 0.0311 | 16.75 | 23200 | 0.0846 | 0.0532 |
| 0.0302 | 17.04 | 23600 | 0.0860 | 0.0549 |
| 0.0345 | 17.33 | 24000 | 0.0875 | 0.0530 |
| 0.0299 | 17.62 | 24400 | 0.0865 | 0.0531 |
| 0.03 | 17.91 | 24800 | 0.0870 | 0.0519 |
| 0.0301 | 18.19 | 25200 | 0.0869 | 0.0528 |
| 0.03 | 18.48 | 25600 | 0.0862 | 0.0531 |
| 0.0294 | 18.77 | 26000 | 0.0846 | 0.0521 |
| 0.0269 | 19.06 | 26400 | 0.0851 | 0.0527 |
| 0.027 | 19.35 | 26800 | 0.0861 | 0.0517 |
| 0.0293 | 19.64 | 27200 | 0.0856 | 0.0515 |
| 0.0275 | 19.93 | 27600 | 0.0858 | 0.0515 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
pipeline_tag: sentence-similarity
language: fr
license: apache-2.0
datasets:
- unicamp-dl/mmarco
metrics:
- recall
- posicube/mean_reciprocal_rank
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# biencoder-msmarco-distilbert-cos-v5-mmarcoFR
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. The model was trained on the **French** portion of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset.
## Usage
***
#### Sentence-Transformers
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('antoinelouis/biencoder-msmarco-distilbert-cos-v5-mmarcoFR')
embeddings = model.encode(sentences)
print(embeddings)
```
#### 🤗 Transformers
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('antoinelouis/biencoder-msmarco-distilbert-cos-v5-mmarcoFR')
model = AutoModel.from_pretrained('antoinelouis/biencoder-msmarco-distilbert-cos-v5-mmarcoFR')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation
***
We evaluated our model on the smaller development set of mMARCO-fr, which consists of 6,980 queries for a corpus of 8.8M candidate passages.
| MRR@10 | NDCG@10 | MAP@10 | Recall@10 | Recall@100 | Recall@500 |
|---------:|----------:|---------:|------------:|-------------:|-------------:|
| 25.75 | 30.63 | 25.24 | 47.22 | 73.96 | 85.64 |
Below, we compared its results with other biencoder models fine-tuned on the same dataset:
| | model | MRR@10 | NDCG@10 | MAP@10 | Recall@10 | Recall@100 (↑) | Recall@500 |
|---:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------:|----------:|---------:|------------:|-------------:|-------------:|
| 0 | [biencoder-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-camembert-base-mmarcoFR) | 28.53 | 33.72 | 27.93 | 51.46 | 77.82 | 89.13 |
| 1 | [biencoder-all-mpnet-base-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-all-mpnet-base-v2-mmarcoFR) | 28.04 | 33.28 | 27.5 | 51.07 | 77.68 | 88.67 |
| 2 | [biencoder-multi-qa-mpnet-base-cos-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-multi-qa-mpnet-base-cos-v1-mmarcoFR) | 27.6 | 32.92 | 27.09 | 50.97 | 77.41 | 87.79 |
| 3 | [biencoder-sentence-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-sentence-camembert-base-mmarcoFR) | 27.63 | 32.7 | 27.01 | 50.1 | 76.85 | 88.73 |
| 4 | [biencoder-distilcamembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilcamembert-base-mmarcoFR) | 26.8 | 31.87 | 26.23 | 49.2 | 76.44 | 87.87 |
| 5 | [biencoder-mpnet-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mpnet-base-mmarcoFR) | 27.2 | 32.22 | 26.63 | 49.41 | 75.71 | 86.88 |
| 6 | [biencoder-multi-qa-distilbert-cos-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-multi-qa-distilbert-cos-v1-mmarcoFR) | 26.36 | 31.26 | 25.82 | 47.93 | 75.42 | 86.78 |
| 7 | [biencoder-bert-base-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-base-uncased-mmarcoFR) | 26.3 | 31.14 | 25.74 | 47.67 | 74.57 | 86.33 |
| 8 | **biencoder-msmarco-distilbert-cos-v5-mmarcoFR** | 25.75 | 30.63 | 25.24 | 47.22 | 73.96 | 85.64 |
| 9 | [biencoder-all-distilroberta-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-all-distilroberta-v1-mmarcoFR) | 26.17 | 30.91 | 25.67 | 47.06 | 73.5 | 85.69 |
| 10 | [biencoder-all-MiniLM-L6-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-all-MiniLM-L6-v2-mmarcoFR) | 25.49 | 30.39 | 24.99 | 47.1 | 73.48 | 86.09 |
| 11 | [biencoder-distilbert-base-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilbert-base-uncased-mmarcoFR) | 25.18 | 29.83 | 24.64 | 45.77 | 73.16 | 85.13 |
| 12 | [biencoder-msmarco-MiniLM-L12-cos-v5-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-msmarco-MiniLM-L12-cos-v5-mmarcoFR) | 26.22 | 30.99 | 25.69 | 47.29 | 73.09 | 84.95 |
| 13 | [biencoder-roberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-roberta-base-mmarcoFR) | 25.94 | 30.72 | 25.43 | 46.98 | 73.07 | 84.76 |
| 14 | [biencoder-distiluse-base-multilingual-cased-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distiluse-base-multilingual-cased-v1-mmarcoFR) | 24.57 | 29.08 | 24.04 | 44.51 | 72.54 | 85.13 |
| 15 | [biencoder-multi-qa-MiniLM-L6-cos-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-multi-qa-MiniLM-L6-cos-v1-mmarcoFR) | 24.72 | 29.58 | 24.25 | 46.05 | 72.19 | 84.6 |
| 16 | [biencoder-MiniLM-L12-H384-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-MiniLM-L12-H384-uncased-mmarcoFR) | 25.43 | 30.1 | 24.88 | 46.13 | 72.16 | 83.84 |
| 17 | [biencoder-mMiniLMv2-L12-H384-distilled-from-XLMR-Large-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L12-H384-distilled-from-XLMR-Large-mmarcoFR) | 24.74 | 29.41 | 24.23 | 45.4 | 71.52 | 84.42 |
| 18 | [biencoder-electra-base-discriminator-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-electra-base-discriminator-mmarcoFR) | 24.77 | 29.37 | 24.21 | 45.2 | 70.84 | 83.25 |
| 19 | [biencoder-bert-medium-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-medium-mmarcoFR) | 23.86 | 28.56 | 23.39 | 44.47 | 70.57 | 83.58 |
| 20 | [biencoder-msmarco-MiniLM-L6-cos-v5-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-msmarco-MiniLM-L6-cos-v5-mmarcoFR) | 24.39 | 28.96 | 23.91 | 44.58 | 70.36 | 82.88 |
| 21 | [biencoder-distilroberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilroberta-base-mmarcoFR) | 23.94 | 28.44 | 23.46 | 43.77 | 70.08 | 82.86 |
| 22 | [biencoder-camemberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-camemberta-base-mmarcoFR) | 24.78 | 29.24 | 24.23 | 44.58 | 69.59 | 82.18 |
| 23 | [biencoder-electra-base-french-europeana-cased-discriminator-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-electra-base-french-europeana-cased-discriminator-mmarcoFR) | 23.38 | 27.97 | 22.91 | 43.5 | 68.96 | 81.61 |
| 24 | [biencoder-bert-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-small-mmarcoFR) | 22.4 | 26.84 | 21.95 | 41.96 | 68.88 | 82.14 |
| 25 | [biencoder-mMiniLM-L6-v2-mmarcoFR-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLM-L6-v2-mmarcoFR-v2-mmarcoFR) | 22.87 | 27.26 | 22.37 | 42.3 | 68.78 | 81.39 |
| 26 | [biencoder-MiniLM-L6-H384-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-MiniLM-L6-H384-uncased-mmarcoFR) | 22.86 | 27.34 | 22.41 | 42.62 | 68.4 | 81.54 |
| 27 | [biencoder-deberta-v3-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-deberta-v3-small-mmarcoFR) | 22.44 | 26.84 | 21.97 | 41.84 | 68.17 | 80.9 |
| 28 | [biencoder-mMiniLMv2-L6-H384-distilled-from-XLMR-Large-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L6-H384-distilled-from-XLMR-Large-mmarcoFR) | 22.29 | 26.57 | 21.8 | 41.25 | 66.78 | 79.83 |
| 29 | [biencoder-bert-mini-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-mini-mmarcoFR) | 20.06 | 24.09 | 19.66 | 37.78 | 64.27 | 77.39 |
| 30 | [biencoder-electra-small-discriminator-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-electra-small-discriminator-mmarcoFR) | 20.32 | 24.36 | 19.9 | 38.16 | 63.98 | 77.23 |
| 31 | [biencoder-deberta-v3-xsmall-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-deberta-v3-xsmall-mmarcoFR) | 17.7 | 21.29 | 17.31 | 33.59 | 58.76 | 73.45 |
| 32 | [biencoder-bert-tiny-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-tiny-mmarcoFR) | 14.94 | 18.22 | 14.59 | 29.46 | 51.94 | 66.3 |
| 33 | [biencoder-t5-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-t5-small-mmarcoFR) | 12.44 | 15.1 | 12.14 | 24.28 | 47.82 | 63.37 |
| 34 | [biencoder-bert-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-small-mmarcoFR) | 0.22 | 0.28 | 0.21 | 0.5 | 1.25 | 2.34 |
## Training
***
#### Background
We used the [sentence-transformers/msmarco-distilbert-cos-v5](https://huggingface.co/sentence-transformers/msmarco-distilbert-cos-v5) model and fine-tuned it on a dataset of 500K sentence pairs in French. We used a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset. Formally, we compute the cosine similarity between all possible sentence pairs in the batch, then apply a cross-entropy loss with a temperature of 0.05, using the true pairs as targets.
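For illustration, here is a minimal sketch of this in-batch contrastive objective in plain PyTorch, assuming query and passage embeddings of shape `(batch_size, dim)`; the function and variable names are placeholders, not the actual training code.
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb, passage_emb, temperature=0.05):
    # Cosine similarity between every query and every passage in the batch
    query_emb = F.normalize(query_emb, dim=-1)
    passage_emb = F.normalize(passage_emb, dim=-1)
    scores = query_emb @ passage_emb.T / temperature  # shape: (batch, batch)
    # The true passage for query i is at column i, so the diagonal holds the positives
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```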
#### Hyperparameters
We trained the model on a single Tesla V100 GPU with 32GB of memory for 20 epochs (i.e., 65.7k steps) using a batch size of 152. We used the AdamW optimizer with an initial learning rate of 2e-05, weight decay of 0.01, learning rate warmup over the first 500 steps, and linear decay of the learning rate. The sequence length was limited to 128 tokens.
#### Data
We used the French version of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset to fine-tune our model. mMARCO is a multi-lingual machine-translated version of the MS MARCO dataset, a large-scale IR dataset comprising:
- a corpus of 8.8M passages;
- a training set of ~533k queries (with at least one relevant passage);
- a development set of ~101k queries;
- a smaller dev set of 6,980 queries (which is actually used for evaluation in most published works).
Link: [https://ir-datasets.com/mmarco.html#mmarco/v2/fr/](https://ir-datasets.com/mmarco.html#mmarco/v2/fr/)
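As a convenience, the French split referenced above can typically be loaded with the `ir_datasets` package; the dataset keys below follow the linked page but should be double-checked against it.
```python
import itertools
import ir_datasets

# Training queries for French mMARCO (dataset key assumed from ir-datasets.com)
train = ir_datasets.load("mmarco/v2/fr/train")
for query in itertools.islice(train.queries_iter(), 3):
    print(query.query_id, query.text)

# Smaller dev set (6,980 queries) used for evaluation
dev = ir_datasets.load("mmarco/v2/fr/dev/small")
for qrel in itertools.islice(dev.qrels_iter(), 3):
    print(qrel.query_id, qrel.doc_id, qrel.relevance)
```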
## Citation
```bibtex
@online{louis2023,
   author    = {Antoine Louis},
   title     = {biencoder-msmarco-distilbert-cos-v5-mmarcoFR: A Biencoder Model Trained on French mMARCO},
   publisher = {Hugging Face},
   month     = may,
   year      = {2023},
   url       = {https://huggingface.co/antoinelouis/biencoder-msmarco-distilbert-cos-v5-mmarcoFR},
}
``` |
AnonymousSub/rule_based_hier_triplet_epochs_1_shard_10 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | "2023-05-22T20:57:31Z" | ---
pipeline_tag: sentence-similarity
language: fr
license: apache-2.0
datasets:
- unicamp-dl/mmarco
metrics:
- recall
- posicube/mean_reciprocal_rank
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# biencoder-all-distilroberta-v1-mmarcoFR
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. The model was trained on the **French** portion of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset.
## Usage
***
#### Sentence-Transformers
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('antoinelouis/biencoder-all-distilroberta-v1-mmarcoFR')
embeddings = model.encode(sentences)
print(embeddings)
```
#### 🤗 Transformers
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('antoinelouis/biencoder-all-distilroberta-v1-mmarcoFR')
model = AutoModel.from_pretrained('antoinelouis/biencoder-all-distilroberta-v1-mmarcoFR')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
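Once you have embeddings, a typical use is semantic search over French passages. The snippet below is a small illustrative sketch (the query and passages are made up) that ranks passages by cosine similarity, using the sentence-transformers route shown earlier for brevity.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('antoinelouis/biencoder-all-distilroberta-v1-mmarcoFR')

query = "Quels sont les effets du café sur la santé ?"  # made-up query
passages = [
    "Le café contient de la caféine, un stimulant du système nerveux.",
    "La tour Eiffel a été construite pour l'Exposition universelle de 1889.",
    "Une consommation modérée de café est associée à certains bénéfices pour la santé.",
]  # made-up passages

query_emb = model.encode(query, convert_to_tensor=True)
passage_emb = model.encode(passages, convert_to_tensor=True)

# Rank passages by cosine similarity to the query
scores = util.cos_sim(query_emb, passage_emb)[0]
for idx in scores.argsort(descending=True).tolist():
    print(f"{scores[idx]:.3f}  {passages[idx]}")
```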
## Evaluation
***
We evaluated our model on the smaller development set of mMARCO-fr, which consists of 6,980 queries for a corpus of 8.8M candidate passages.
| MRR@10 | NDCG@10 | MAP@10 | Recall@10 | Recall@100 | Recall@500 |
|---------:|----------:|---------:|------------:|-------------:|-------------:|
| 26.17 | 30.91 | 25.67 | 47.06 | 73.5 | 85.69 |
Below, we compared its results with other biencoder models fine-tuned on the same dataset:
| | model | MRR@10 | NDCG@10 | MAP@10 | Recall@10 | Recall@100 (↑) | Recall@500 |
|---:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------:|----------:|---------:|------------:|-------------:|-------------:|
| 0 | [biencoder-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-camembert-base-mmarcoFR) | 28.53 | 33.72 | 27.93 | 51.46 | 77.82 | 89.13 |
| 1 | [biencoder-all-mpnet-base-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-all-mpnet-base-v2-mmarcoFR) | 28.04 | 33.28 | 27.5 | 51.07 | 77.68 | 88.67 |
| 2 | [biencoder-multi-qa-mpnet-base-cos-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-multi-qa-mpnet-base-cos-v1-mmarcoFR) | 27.6 | 32.92 | 27.09 | 50.97 | 77.41 | 87.79 |
| 3 | [biencoder-sentence-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-sentence-camembert-base-mmarcoFR) | 27.63 | 32.7 | 27.01 | 50.1 | 76.85 | 88.73 |
| 4 | [biencoder-distilcamembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilcamembert-base-mmarcoFR) | 26.8 | 31.87 | 26.23 | 49.2 | 76.44 | 87.87 |
| 5 | [biencoder-mpnet-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mpnet-base-mmarcoFR) | 27.2 | 32.22 | 26.63 | 49.41 | 75.71 | 86.88 |
| 6 | [biencoder-multi-qa-distilbert-cos-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-multi-qa-distilbert-cos-v1-mmarcoFR) | 26.36 | 31.26 | 25.82 | 47.93 | 75.42 | 86.78 |
| 7 | [biencoder-bert-base-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-base-uncased-mmarcoFR) | 26.3 | 31.14 | 25.74 | 47.67 | 74.57 | 86.33 |
| 8 | [biencoder-msmarco-distilbert-cos-v5-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-msmarco-distilbert-cos-v5-mmarcoFR) | 25.75 | 30.63 | 25.24 | 47.22 | 73.96 | 85.64 |
| 9 | **biencoder-all-distilroberta-v1-mmarcoFR** | 26.17 | 30.91 | 25.67 | 47.06 | 73.5 | 85.69 |
| 10 | [biencoder-all-MiniLM-L6-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-all-MiniLM-L6-v2-mmarcoFR) | 25.49 | 30.39 | 24.99 | 47.1 | 73.48 | 86.09 |
| 11 | [biencoder-distilbert-base-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilbert-base-uncased-mmarcoFR) | 25.18 | 29.83 | 24.64 | 45.77 | 73.16 | 85.13 |
| 12 | [biencoder-msmarco-MiniLM-L12-cos-v5-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-msmarco-MiniLM-L12-cos-v5-mmarcoFR) | 26.22 | 30.99 | 25.69 | 47.29 | 73.09 | 84.95 |
| 13 | [biencoder-roberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-roberta-base-mmarcoFR) | 25.94 | 30.72 | 25.43 | 46.98 | 73.07 | 84.76 |
| 14 | [biencoder-distiluse-base-multilingual-cased-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distiluse-base-multilingual-cased-v1-mmarcoFR) | 24.57 | 29.08 | 24.04 | 44.51 | 72.54 | 85.13 |
| 15 | [biencoder-multi-qa-MiniLM-L6-cos-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-multi-qa-MiniLM-L6-cos-v1-mmarcoFR) | 24.72 | 29.58 | 24.25 | 46.05 | 72.19 | 84.6 |
| 16 | [biencoder-MiniLM-L12-H384-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-MiniLM-L12-H384-uncased-mmarcoFR) | 25.43 | 30.1 | 24.88 | 46.13 | 72.16 | 83.84 |
| 17 | [biencoder-mMiniLMv2-L12-H384-distilled-from-XLMR-Large-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L12-H384-distilled-from-XLMR-Large-mmarcoFR) | 24.74 | 29.41 | 24.23 | 45.4 | 71.52 | 84.42 |
| 18 | [biencoder-electra-base-discriminator-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-electra-base-discriminator-mmarcoFR) | 24.77 | 29.37 | 24.21 | 45.2 | 70.84 | 83.25 |
| 19 | [biencoder-bert-medium-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-medium-mmarcoFR) | 23.86 | 28.56 | 23.39 | 44.47 | 70.57 | 83.58 |
| 20 | [biencoder-msmarco-MiniLM-L6-cos-v5-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-msmarco-MiniLM-L6-cos-v5-mmarcoFR) | 24.39 | 28.96 | 23.91 | 44.58 | 70.36 | 82.88 |
| 21 | [biencoder-distilroberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilroberta-base-mmarcoFR) | 23.94 | 28.44 | 23.46 | 43.77 | 70.08 | 82.86 |
| 22 | [biencoder-camemberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-camemberta-base-mmarcoFR) | 24.78 | 29.24 | 24.23 | 44.58 | 69.59 | 82.18 |
| 23 | [biencoder-electra-base-french-europeana-cased-discriminator-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-electra-base-french-europeana-cased-discriminator-mmarcoFR) | 23.38 | 27.97 | 22.91 | 43.5 | 68.96 | 81.61 |
| 24 | [biencoder-bert-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-small-mmarcoFR) | 22.4 | 26.84 | 21.95 | 41.96 | 68.88 | 82.14 |
| 25 | [biencoder-mMiniLM-L6-v2-mmarcoFR-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLM-L6-v2-mmarcoFR-v2-mmarcoFR) | 22.87 | 27.26 | 22.37 | 42.3 | 68.78 | 81.39 |
| 26 | [biencoder-MiniLM-L6-H384-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-MiniLM-L6-H384-uncased-mmarcoFR) | 22.86 | 27.34 | 22.41 | 42.62 | 68.4 | 81.54 |
| 27 | [biencoder-deberta-v3-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-deberta-v3-small-mmarcoFR) | 22.44 | 26.84 | 21.97 | 41.84 | 68.17 | 80.9 |
| 28 | [biencoder-mMiniLMv2-L6-H384-distilled-from-XLMR-Large-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L6-H384-distilled-from-XLMR-Large-mmarcoFR) | 22.29 | 26.57 | 21.8 | 41.25 | 66.78 | 79.83 |
| 29 | [biencoder-bert-mini-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-mini-mmarcoFR) | 20.06 | 24.09 | 19.66 | 37.78 | 64.27 | 77.39 |
| 30 | [biencoder-electra-small-discriminator-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-electra-small-discriminator-mmarcoFR) | 20.32 | 24.36 | 19.9 | 38.16 | 63.98 | 77.23 |
| 31 | [biencoder-deberta-v3-xsmall-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-deberta-v3-xsmall-mmarcoFR) | 17.7 | 21.29 | 17.31 | 33.59 | 58.76 | 73.45 |
| 32 | [biencoder-bert-tiny-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-tiny-mmarcoFR) | 14.94 | 18.22 | 14.59 | 29.46 | 51.94 | 66.3 |
| 33 | [biencoder-t5-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-t5-small-mmarcoFR) | 12.44 | 15.1 | 12.14 | 24.28 | 47.82 | 63.37 |
| 34 | [biencoder-bert-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-small-mmarcoFR) | 0.22 | 0.28 | 0.21 | 0.5 | 1.25 | 2.34 |
## Training
***
#### Background
We used the [sentence-transformers/all-distilroberta-v1](https://huggingface.co/sentence-transformers/all-distilroberta-v1) model and fine-tuned it on a dataset of 500K sentence pairs in French. We used a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset. Formally, we compute the cosine similarity between all possible sentence pairs in the batch, then apply a cross-entropy loss with a temperature of 0.05, using the true pairs as targets.
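A rough sketch of how such a fine-tuning run could be set up with the [sentence-transformers](https://www.SBERT.net) library is shown below (not the authors' actual training script): `MultipleNegativesRankingLoss` implements this kind of in-batch cross-entropy over scaled cosine similarities, and a temperature of 0.05 corresponds to `scale=20`. The example pairs are placeholders.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('sentence-transformers/all-distilroberta-v1')
model.max_seq_length = 128

# Placeholder (query, relevant passage) pairs; the real run used 500K French pairs from mMARCO
train_examples = [
    InputExample(texts=["quelle est la capitale de la France ?", "Paris est la capitale de la France."]),
    InputExample(texts=["effets du café sur la santé", "La caféine est un stimulant présent dans le café."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# In-batch negatives with temperature 0.05 (scale = 1 / 0.05 = 20)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=500,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
)
```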
#### Hyperparameters
We trained the model on a single Tesla V100 GPU with 32GB of memory for 20 epochs (i.e., 65.7k steps) using a batch size of 152. We used the AdamW optimizer with an initial learning rate of 2e-05, weight decay of 0.01, learning rate warmup over the first 500 steps, and linear decay of the learning rate. The sequence length was limited to 128 tokens.
#### Data
We used the French version of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset to fine-tune our model. mMARCO is a multi-lingual machine-translated version of the MS MARCO dataset, a large-scale IR dataset comprising:
- a corpus of 8.8M passages;
- a training set of ~533k queries (with at least one relevant passage);
- a development set of ~101k queries;
- a smaller dev set of 6,980 queries (which is actually used for evaluation in most published works).
Link: [https://ir-datasets.com/mmarco.html#mmarco/v2/fr/](https://ir-datasets.com/mmarco.html#mmarco/v2/fr/)
## Citation
```bibtex
@online{louis2023,
   author    = {Antoine Louis},
   title     = {biencoder-all-distilroberta-v1-mmarcoFR: A Biencoder Model Trained on French mMARCO},
   publisher = {Hugging Face},
   month     = may,
   year      = {2023},
   url       = {https://huggingface.co/antoinelouis/biencoder-all-distilroberta-v1-mmarcoFR},
}
``` |
AnonymousSub/rule_based_only_classfn_epochs_1_shard_1_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 32 | "2023-05-22T20:59:25Z" | ---
pipeline_tag: sentence-similarity
language: fr
license: apache-2.0
datasets:
- unicamp-dl/mmarco
metrics:
- recall
- posicube/mean_reciprocal_rank
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# biencoder-mMiniLMv2-L12-H384-distilled-from-XLMR-Large-mmarcoFR
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. The model was trained on the **French** portion of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset.
## Usage
***
#### Sentence-Transformers
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('antoinelouis/biencoder-mMiniLMv2-L12-H384-distilled-from-XLMR-Large-mmarcoFR')
embeddings = model.encode(sentences)
print(embeddings)
```
#### 🤗 Transformers
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('antoinelouis/biencoder-mMiniLMv2-L12-H384-distilled-from-XLMR-Large-mmarcoFR')
model = AutoModel.from_pretrained('antoinelouis/biencoder-mMiniLMv2-L12-H384-distilled-from-XLMR-Large-mmarcoFR')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation
***
We evaluated our model on the smaller development set of mMARCO-fr, which consists of 6,980 queries for a corpus of 8.8M candidate passages.
| MRR@10 | NDCG@10 | MAP@10 | Recall@10 | Recall@100 | Recall@500 |
|---------:|----------:|---------:|------------:|-------------:|-------------:|
| 24.74 | 29.41 | 24.23 | 45.4 | 71.52 | 84.42 |
Below, we compared its results with other biencoder models fine-tuned on the same dataset:
| | model | MRR@10 | NDCG@10 | MAP@10 | Recall@10 | Recall@100 (↑) | Recall@500 |
|---:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------:|----------:|---------:|------------:|-------------:|-------------:|
| 0 | [biencoder-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-camembert-base-mmarcoFR) | 28.53 | 33.72 | 27.93 | 51.46 | 77.82 | 89.13 |
| 1 | [biencoder-all-mpnet-base-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-all-mpnet-base-v2-mmarcoFR) | 28.04 | 33.28 | 27.5 | 51.07 | 77.68 | 88.67 |
| 2 | [biencoder-multi-qa-mpnet-base-cos-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-multi-qa-mpnet-base-cos-v1-mmarcoFR) | 27.6 | 32.92 | 27.09 | 50.97 | 77.41 | 87.79 |
| 3 | [biencoder-sentence-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-sentence-camembert-base-mmarcoFR) | 27.63 | 32.7 | 27.01 | 50.1 | 76.85 | 88.73 |
| 4 | [biencoder-distilcamembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilcamembert-base-mmarcoFR) | 26.8 | 31.87 | 26.23 | 49.2 | 76.44 | 87.87 |
| 5 | [biencoder-mpnet-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mpnet-base-mmarcoFR) | 27.2 | 32.22 | 26.63 | 49.41 | 75.71 | 86.88 |
| 6 | [biencoder-multi-qa-distilbert-cos-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-multi-qa-distilbert-cos-v1-mmarcoFR) | 26.36 | 31.26 | 25.82 | 47.93 | 75.42 | 86.78 |
| 7 | [biencoder-bert-base-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-base-uncased-mmarcoFR) | 26.3 | 31.14 | 25.74 | 47.67 | 74.57 | 86.33 |
| 8 | [biencoder-msmarco-distilbert-cos-v5-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-msmarco-distilbert-cos-v5-mmarcoFR) | 25.75 | 30.63 | 25.24 | 47.22 | 73.96 | 85.64 |
| 9 | [biencoder-all-distilroberta-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-all-distilroberta-v1-mmarcoFR) | 26.17 | 30.91 | 25.67 | 47.06 | 73.5 | 85.69 |
| 10 | [biencoder-all-MiniLM-L6-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-all-MiniLM-L6-v2-mmarcoFR) | 25.49 | 30.39 | 24.99 | 47.1 | 73.48 | 86.09 |
| 11 | [biencoder-distilbert-base-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilbert-base-uncased-mmarcoFR) | 25.18 | 29.83 | 24.64 | 45.77 | 73.16 | 85.13 |
| 12 | [biencoder-msmarco-MiniLM-L12-cos-v5-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-msmarco-MiniLM-L12-cos-v5-mmarcoFR) | 26.22 | 30.99 | 25.69 | 47.29 | 73.09 | 84.95 |
| 13 | [biencoder-roberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-roberta-base-mmarcoFR) | 25.94 | 30.72 | 25.43 | 46.98 | 73.07 | 84.76 |
| 14 | [biencoder-distiluse-base-multilingual-cased-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distiluse-base-multilingual-cased-v1-mmarcoFR) | 24.57 | 29.08 | 24.04 | 44.51 | 72.54 | 85.13 |
| 15 | [biencoder-multi-qa-MiniLM-L6-cos-v1-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-multi-qa-MiniLM-L6-cos-v1-mmarcoFR) | 24.72 | 29.58 | 24.25 | 46.05 | 72.19 | 84.6 |
| 16 | [biencoder-MiniLM-L12-H384-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-MiniLM-L12-H384-uncased-mmarcoFR) | 25.43 | 30.1 | 24.88 | 46.13 | 72.16 | 83.84 |
| 17 | **biencoder-mMiniLMv2-L12-H384-distilled-from-XLMR-Large-mmarcoFR** | 24.74 | 29.41 | 24.23 | 45.4 | 71.52 | 84.42 |
| 18 | [biencoder-electra-base-discriminator-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-electra-base-discriminator-mmarcoFR) | 24.77 | 29.37 | 24.21 | 45.2 | 70.84 | 83.25 |
| 19 | [biencoder-bert-medium-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-medium-mmarcoFR) | 23.86 | 28.56 | 23.39 | 44.47 | 70.57 | 83.58 |
| 20 | [biencoder-msmarco-MiniLM-L6-cos-v5-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-msmarco-MiniLM-L6-cos-v5-mmarcoFR) | 24.39 | 28.96 | 23.91 | 44.58 | 70.36 | 82.88 |
| 21 | [biencoder-distilroberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilroberta-base-mmarcoFR) | 23.94 | 28.44 | 23.46 | 43.77 | 70.08 | 82.86 |
| 22 | [biencoder-camemberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-camemberta-base-mmarcoFR) | 24.78 | 29.24 | 24.23 | 44.58 | 69.59 | 82.18 |
| 23 | [biencoder-electra-base-french-europeana-cased-discriminator-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-electra-base-french-europeana-cased-discriminator-mmarcoFR) | 23.38 | 27.97 | 22.91 | 43.5 | 68.96 | 81.61 |
| 24 | [biencoder-bert-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-small-mmarcoFR) | 22.4 | 26.84 | 21.95 | 41.96 | 68.88 | 82.14 |
| 25 | [biencoder-mMiniLM-L6-v2-mmarcoFR-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLM-L6-v2-mmarcoFR-v2-mmarcoFR) | 22.87 | 27.26 | 22.37 | 42.3 | 68.78 | 81.39 |
| 26 | [biencoder-MiniLM-L6-H384-uncased-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-MiniLM-L6-H384-uncased-mmarcoFR) | 22.86 | 27.34 | 22.41 | 42.62 | 68.4 | 81.54 |
| 27 | [biencoder-deberta-v3-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-deberta-v3-small-mmarcoFR) | 22.44 | 26.84 | 21.97 | 41.84 | 68.17 | 80.9 |
| 28 | [biencoder-mMiniLMv2-L6-H384-distilled-from-XLMR-Large-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L6-H384-distilled-from-XLMR-Large-mmarcoFR) | 22.29 | 26.57 | 21.8 | 41.25 | 66.78 | 79.83 |
| 29 | [biencoder-bert-mini-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-mini-mmarcoFR) | 20.06 | 24.09 | 19.66 | 37.78 | 64.27 | 77.39 |
| 30 | [biencoder-electra-small-discriminator-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-electra-small-discriminator-mmarcoFR) | 20.32 | 24.36 | 19.9 | 38.16 | 63.98 | 77.23 |
| 31 | [biencoder-deberta-v3-xsmall-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-deberta-v3-xsmall-mmarcoFR) | 17.7 | 21.29 | 17.31 | 33.59 | 58.76 | 73.45 |
| 32 | [biencoder-bert-tiny-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-tiny-mmarcoFR) | 14.94 | 18.22 | 14.59 | 29.46 | 51.94 | 66.3 |
| 33 | [biencoder-t5-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-t5-small-mmarcoFR) | 12.44 | 15.1 | 12.14 | 24.28 | 47.82 | 63.37 |
| 34 | [biencoder-bert-small-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-bert-small-mmarcoFR) | 0.22 | 0.28 | 0.21 | 0.5 | 1.25 | 2.34 |
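For reference, MRR@10 (the main metric reported above) can be computed in a few lines; the sketch below assumes you already have, for each query, a ranked list of passage ids and the set of relevant ids.
```python
def mrr_at_k(rankings, relevant, k=10):
    """rankings: query_id -> list of passage ids sorted by score (best first);
    relevant: query_id -> set of relevant passage ids."""
    total = 0.0
    for qid, ranked in rankings.items():
        for rank, pid in enumerate(ranked[:k], start=1):
            if pid in relevant.get(qid, set()):
                total += 1.0 / rank
                break
    return total / len(rankings)

# Toy example
rankings = {"q1": ["d3", "d7", "d1"], "q2": ["d2", "d9"]}
relevant = {"q1": {"d7"}, "q2": {"d4"}}
print(mrr_at_k(rankings, relevant))  # (1/2 + 0) / 2 = 0.25
```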
## Training
***
#### Background
We used the [nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large](https://huggingface.co/nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large) model and fine-tuned it on a dataset of 500K sentence pairs in French. We used a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset. Formally, we compute the cosine similarity between all possible sentence pairs in the batch, then apply a cross-entropy loss with a temperature of 0.05, using the true pairs as targets.
#### Hyperparameters
We trained the model on a single Tesla V100 GPU with 32GB of memory for 20 epochs (i.e., 65.7k steps) using a batch size of 152. We used the AdamW optimizer with an initial learning rate of 2e-05, weight decay of 0.01, learning rate warmup over the first 500 steps, and linear decay of the learning rate. The sequence length was limited to 128 tokens.
#### Data
We used the French version of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset to fine-tune our model. mMARCO is a multi-lingual machine-translated version of the MS MARCO dataset, a large-scale IR dataset comprising:
- a corpus of 8.8M passages;
- a training set of ~533k queries (with at least one relevant passage);
- a development set of ~101k queries;
- a smaller dev set of 6,980 queries (which is actually used for evaluation in most published works).
Link: [https://ir-datasets.com/mmarco.html#mmarco/v2/fr/](https://ir-datasets.com/mmarco.html#mmarco/v2/fr/)
## Citation
```bibtex
@online{louis2023,
   author    = {Antoine Louis},
   title     = {biencoder-mMiniLMv2-L12-H384-distilled-from-XLMR-Large-mmarcoFR: A Biencoder Model Trained on French mMARCO},
   publisher = {Hugging Face},
   month     = may,
   year      = {2023},
   url       = {https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L12-H384-distilled-from-XLMR-Large-mmarcoFR},
}
``` |
AnonymousSub/rule_based_only_classfn_twostage_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | "2023-05-22T20:59:39Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: newsgroups_clasifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# newsgroups_clasifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
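The serialized Keras optimizer above corresponds roughly to the following setup (shown only as an illustrative sketch):
```python
import tensorflow as tf

# Linear decay from 2e-5 to 0 over 1908 steps, matching the serialized config above
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,
    decay_steps=1908,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-8
)
```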
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1_wikiqa | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | null | Access to model kmacmillan/test is restricted and you are not in the authorized list. Visit https://huggingface.co/kmacmillan/test to ask for access. |
AnonymousSub/rule_based_roberta_twostage_quadruplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
# Model Card for LXPX
## Model Description
- **Developed by:** BADMONK
- **Model type:** Dreambooth Model + Extracted LoRA
- **Language(s) (NLP):** EN
- **License:** Creativeml-Openrail-M
- **Parent Model:** ChillRealHard
# How to Get Started with the Model
Use the code below to get started with the model.
### LXPX ### |
AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_1_wikiqa | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 24 | "2023-05-22T21:39:54Z" | ---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
---
This repo hosts the 600B-token preview of OpenLLaMA 3B. Please refer to the
[project homepage on GitHub](https://github.com/openlm-research/open_llama) for
model information and usage. |
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
---
This repo hosts the 700B-token preview of OpenLLaMA 7B. Please refer to the
[project homepage on GitHub](https://github.com/openlm-research/open_llama) for
model information and usage. |
AnonymousSub/rule_based_twostage_quadruplet_epochs_1_shard_1_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### PixelPerfect100WithnoCpations Dreambooth model trained by OmarAhmed1 with TheLastBen's fast-DreamBooth notebook
|
AnonymousSub/rule_based_twostagequadruplet_hier_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | Access to model xWorriedhobbiton/Jacee is restricted and you are not in the authorized list. Visit https://huggingface.co/xWorriedhobbiton/Jacee to ask for access. |
Anthos23/test_trainer | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- llama
- alpaca
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/digitous/Alpacino13b |
Antony/mint_model | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
---
# Open Source + Copy Paste = Forked
---
# GreyMix
Merged model and VAE by Yuno779
civitai.com/user/Yuno779/models
---
# Be Careful!
These models are not intended for commercial use.
If you use them commercially, you might be infringing copyright and breaking the law.
Please use them responsibly.
---
civitai.com/user/Powidl43 |
Anubhav23/model_name | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: Xensword-MT5-Base-Summarizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Xensword-MT5-Base-Summarizer
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.0159
- Rouge2: 0.0046
- Rougel: 0.0149
- Rougelsum: 0.015
- Gen Len: 9.6688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 314 | nan | 0.0159 | 0.0046 | 0.0149 | 0.015 | 9.6688 |
| 0.0 | 2.0 | 628 | nan | 0.0159 | 0.0046 | 0.0149 | 0.015 | 9.6688 |
| 0.0 | 3.0 | 942 | nan | 0.0159 | 0.0046 | 0.0149 | 0.015 | 9.6688 |
| 0.0 | 4.0 | 1256 | nan | 0.0159 | 0.0046 | 0.0149 | 0.015 | 9.6688 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Anupam/QuestionClassifier | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | "2023-05-22T22:55:25Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- jmhessel/newyorker_caption_contest
model-index:
- name: test-bridgetower-gaudi2-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-bridgetower-gaudi2-7
This model is a fine-tuned version of [BridgeTower/bridgetower-large-itm-mlm-itc](https://huggingface.co/BridgeTower/bridgetower-large-itm-mlm-itc) on the jmhessel/newyorker_caption_contest matching dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1170
- Memory Allocated (gb): 28.24
- Max Memory Allocated (gb): 41.75
- Total Memory Available (gb): 93.03
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Allocated (gb) | Memory Allocated (gb) | Memory Available (gb) |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:---------------------:|:---------------------:|
| 0.1094 | 0.08 | 50 | 0.1501 | 27.46 | 41.75 | 93.03 |
| 0.0859 | 0.16 | 100 | 0.1328 | 27.46 | 41.75 | 93.03 |
| 0.0781 | 0.25 | 150 | 0.1301 | 27.46 | 41.75 | 93.03 |
| 0.0602 | 0.33 | 200 | 0.1248 | 27.46 | 41.75 | 93.03 |
| 0.0613 | 0.41 | 250 | 0.1255 | 27.46 | 41.75 | 93.03 |
| 0.0344 | 0.49 | 300 | 0.1265 | 27.46 | 41.75 | 93.03 |
| 0.0479 | 0.57 | 350 | 0.1239 | 27.46 | 41.75 | 93.03 |
| 0.0355 | 0.65 | 400 | 0.1233 | 27.46 | 41.75 | 93.03 |
| 0.0398 | 0.74 | 450 | 0.1214 | 27.46 | 41.75 | 93.03 |
| 0.0434 | 0.82 | 500 | 0.1186 | 27.46 | 41.75 | 93.03 |
| 0.0314 | 0.9 | 550 | 0.1171 | 27.46 | 41.75 | 93.03 |
| 0.0293 | 0.98 | 600 | 0.1169 | 27.46 | 41.75 | 93.03 |
| 0.0146 | 1.06 | 650 | 0.1172 | 27.46 | 41.75 | 93.03 |
| 0.0273 | 1.14 | 700 | 0.1168 | 27.46 | 41.75 | 93.03 |
| 0.0183 | 1.23 | 750 | 0.1184 | 27.46 | 41.75 | 93.03 |
| 0.0271 | 1.31 | 800 | 0.1152 | 27.46 | 41.75 | 93.03 |
| 0.0307 | 1.39 | 850 | 0.1149 | 27.46 | 41.75 | 93.03 |
| 0.0146 | 1.47 | 900 | 0.1134 | 27.46 | 41.75 | 93.03 |
| 0.0129 | 1.55 | 950 | 0.1133 | 27.46 | 41.75 | 93.03 |
| 0.0252 | 1.63 | 1000 | 0.1136 | 27.46 | 41.75 | 93.03 |
| 0.0049 | 1.72 | 1050 | 0.1120 | 27.46 | 41.75 | 93.03 |
| 0.0275 | 1.8 | 1100 | 0.1132 | 27.46 | 41.75 | 93.03 |
| 0.0226 | 1.88 | 1150 | 0.1132 | 27.46 | 41.75 | 93.03 |
| 0.0434 | 1.96 | 1200 | 0.1128 | 27.46 | 41.75 | 93.03 |
| 0.0113 | 2.04 | 1250 | 0.1140 | 27.46 | 41.75 | 93.03 |
| 0.0293 | 2.12 | 1300 | 0.1134 | 27.46 | 41.75 | 93.03 |
| 0.0389 | 2.21 | 1350 | 0.1124 | 27.46 | 41.75 | 93.03 |
| 0.0113 | 2.29 | 1400 | 0.1115 | 27.46 | 41.75 | 93.03 |
| 0.0165 | 2.37 | 1450 | 0.1123 | 27.46 | 41.75 | 93.03 |
| 0.0183 | 2.45 | 1500 | 0.1132 | 27.46 | 41.75 | 93.03 |
| 0.0249 | 2.53 | 1550 | 0.1131 | 27.46 | 41.75 | 93.03 |
| 0.0383 | 2.61 | 1600 | 0.1150 | 27.46 | 41.75 | 93.03 |
| 0.0164 | 2.7 | 1650 | 0.1148 | 27.46 | 41.75 | 93.03 |
| 0.0316 | 2.78 | 1700 | 0.1156 | 27.46 | 41.75 | 93.03 |
| 0.0318 | 2.86 | 1750 | 0.1147 | 27.46 | 41.75 | 93.03 |
| 0.0295 | 2.94 | 1800 | 0.1145 | 27.46 | 41.75 | 93.03 |
| 0.0102 | 3.02 | 1850 | 0.1129 | 27.46 | 41.75 | 93.03 |
| 0.0273 | 3.1 | 1900 | 0.1134 | 27.46 | 41.75 | 93.03 |
| 0.0248 | 3.19 | 1950 | 0.1137 | 27.46 | 41.75 | 93.03 |
| 0.0156 | 3.27 | 2000 | 0.1144 | 27.46 | 41.75 | 93.03 |
| 0.0316 | 3.35 | 2050 | 0.1135 | 27.46 | 41.75 | 93.03 |
| 0.0151 | 3.43 | 2100 | 0.1139 | 27.46 | 41.75 | 93.03 |
| 0.0105 | 3.51 | 2150 | 0.1140 | 27.46 | 41.75 | 93.03 |
| 0.0268 | 3.59 | 2200 | 0.1140 | 27.46 | 41.75 | 93.03 |
| 0.0434 | 3.68 | 2250 | 0.1161 | 27.46 | 41.75 | 93.03 |
| 0.0105 | 3.76 | 2300 | 0.1153 | 27.46 | 41.75 | 93.03 |
| 0.0089 | 3.84 | 2350 | 0.1143 | 27.46 | 41.75 | 93.03 |
| 0.0396 | 3.92 | 2400 | 0.1138 | 27.46 | 41.75 | 93.03 |
| 0.0223 | 4.0 | 2450 | 0.1128 | 27.46 | 41.75 | 93.03 |
| 0.0151 | 4.08 | 2500 | 0.1136 | 27.46 | 41.75 | 93.03 |
| 0.0346 | 4.17 | 2550 | 0.1148 | 27.46 | 41.75 | 93.03 |
| 0.05 | 4.25 | 2600 | 0.1155 | 27.46 | 41.75 | 93.03 |
| 0.013 | 4.33 | 2650 | 0.1142 | 27.46 | 41.75 | 93.03 |
| 0.017 | 4.41 | 2700 | 0.1149 | 27.46 | 41.75 | 93.03 |
| 0.0295 | 4.49 | 2750 | 0.1146 | 27.46 | 41.75 | 93.03 |
| 0.0236 | 4.58 | 2800 | 0.1150 | 27.46 | 41.75 | 93.03 |
| 0.0404 | 4.66 | 2850 | 0.1144 | 27.46 | 41.75 | 93.03 |
| 0.0243 | 4.74 | 2900 | 0.1159 | 27.46 | 41.75 | 93.03 |
| 0.0287 | 4.82 | 2950 | 0.1157 | 27.46 | 41.75 | 93.03 |
| 0.0096 | 4.9 | 3000 | 0.1155 | 27.46 | 41.75 | 93.03 |
| 0.0236 | 4.98 | 3050 | 0.1168 | 27.46 | 41.75 | 93.03 |
| 0.0021 | 5.07 | 3100 | 0.1167 | 27.46 | 41.75 | 93.03 |
| 0.0277 | 5.15 | 3150 | 0.1164 | 27.46 | 41.75 | 93.03 |
| 0.0316 | 5.23 | 3200 | 0.1173 | 27.46 | 41.75 | 93.03 |
| 0.0285 | 5.31 | 3250 | 0.1161 | 27.46 | 41.75 | 93.03 |
| 0.0357 | 5.39 | 3300 | 0.1164 | 27.46 | 41.75 | 93.03 |
| 0.0148 | 5.47 | 3350 | 0.1177 | 27.46 | 41.75 | 93.03 |
| 0.0354 | 5.56 | 3400 | 0.1186 | 27.46 | 41.75 | 93.03 |
| 0.008 | 5.64 | 3450 | 0.1176 | 27.46 | 41.75 | 93.03 |
| 0.0213 | 5.72 | 3500 | 0.1169 | 27.46 | 41.75 | 93.03 |
| 0.034 | 5.8 | 3550 | 0.1179 | 27.46 | 41.75 | 93.03 |
| 0.0346 | 5.88 | 3600 | 0.1179 | 27.46 | 41.75 | 93.03 |
| 0.0149 | 5.96 | 3650 | 0.1169 | 27.46 | 41.75 | 93.03 |
| 0.022 | 6.05 | 3700 | 0.1172 | 27.46 | 41.75 | 93.03 |
| 0.0092 | 6.13 | 3750 | 0.1166 | 27.46 | 41.75 | 93.03 |
| 0.0336 | 6.21 | 3800 | 0.1162 | 27.46 | 41.75 | 93.03 |
| 0.0214 | 6.29 | 3850 | 0.1172 | 27.46 | 41.75 | 93.03 |
| 0.0348 | 6.37 | 3900 | 0.1185 | 27.46 | 41.75 | 93.03 |
| 0.0149 | 6.45 | 3950 | 0.1179 | 27.46 | 41.75 | 93.03 |
| 0.0441 | 6.54 | 4000 | 0.1184 | 27.46 | 41.75 | 93.03 |
| 0.0171 | 6.62 | 4050 | 0.1179 | 27.46 | 41.75 | 93.03 |
| 0.0015 | 6.7 | 4100 | 0.1171 | 27.46 | 41.75 | 93.03 |
| 0.0218 | 6.78 | 4150 | 0.1180 | 27.46 | 41.75 | 93.03 |
| 0.0311 | 6.86 | 4200 | 0.1190 | 27.46 | 41.75 | 93.03 |
| 0.0279 | 6.94 | 4250 | 0.1180 | 27.46 | 41.75 | 93.03 |
| 0.0157 | 7.03 | 4300 | 0.1196 | 27.46 | 41.75 | 93.03 |
| 0.0153 | 7.11 | 4350 | 0.1201 | 27.46 | 41.75 | 93.03 |
| 0.021 | 7.19 | 4400 | 0.1197 | 27.46 | 41.75 | 93.03 |
| 0.0198 | 7.27 | 4450 | 0.1201 | 27.46 | 41.75 | 93.03 |
| 0.0234 | 7.35 | 4500 | 0.1178 | 27.46 | 41.75 | 93.03 |
| 0.0171 | 7.43 | 4550 | 0.1189 | 27.46 | 41.75 | 93.03 |
| 0.0207 | 7.52 | 4600 | 0.1187 | 27.46 | 41.75 | 93.03 |
| 0.0273 | 7.6 | 4650 | 0.1186 | 27.46 | 41.75 | 93.03 |
| 0.0354 | 7.68 | 4700 | 0.1186 | 27.46 | 41.75 | 93.03 |
| 0.0336 | 7.76 | 4750 | 0.1184 | 27.46 | 41.75 | 93.03 |
| 0.016 | 7.84 | 4800 | 0.1184 | 27.46 | 41.75 | 93.03 |
| 0.0281 | 7.92 | 4850 | 0.1183 | 27.46 | 41.75 | 93.03 |
| 0.0139 | 8.01 | 4900 | 0.1174 | 27.46 | 41.75 | 93.03 |
| 0.0089 | 8.09 | 4950 | 0.1179 | 27.46 | 41.75 | 93.03 |
| 0.0022 | 8.17 | 5000 | 0.1181 | 27.46 | 41.75 | 93.03 |
| 0.0406 | 8.25 | 5050 | 0.1189 | 27.46 | 41.75 | 93.03 |
| 0.0277 | 8.33 | 5100 | 0.1179 | 27.46 | 41.75 | 93.03 |
| 0.0106 | 8.42 | 5150 | 0.1187 | 27.46 | 41.75 | 93.03 |
| 0.0145 | 8.5 | 5200 | 0.1185 | 27.46 | 41.75 | 93.03 |
| 0.0158 | 8.58 | 5250 | 0.1195 | 27.46 | 41.75 | 93.03 |
| 0.0357 | 8.66 | 5300 | 0.1205 | 27.46 | 41.75 | 93.03 |
| 0.0091 | 8.74 | 5350 | 0.1188 | 27.46 | 41.75 | 93.03 |
| 0.0144 | 8.82 | 5400 | 0.1184 | 27.46 | 41.75 | 93.03 |
| 0.0223 | 8.91 | 5450 | 0.1191 | 27.46 | 41.75 | 93.03 |
| 0.0312 | 8.99 | 5500 | 0.1189 | 27.46 | 41.75 | 93.03 |
| 0.0342 | 9.07 | 5550 | 0.1193 | 27.46 | 41.75 | 93.03 |
| 0.0207 | 9.15 | 5600 | 0.1192 | 27.46 | 41.75 | 93.03 |
| 0.0268 | 9.23 | 5650 | 0.1198 | 27.46 | 41.75 | 93.03 |
| 0.0078 | 9.31 | 5700 | 0.1189 | 27.46 | 41.75 | 93.03 |
| 0.0163 | 9.4 | 5750 | 0.1193 | 27.46 | 41.75 | 93.03 |
| 0.0016 | 9.48 | 5800 | 0.1193 | 27.46 | 41.75 | 93.03 |
| 0.0077 | 9.56 | 5850 | 0.1186 | 27.46 | 41.75 | 93.03 |
| 0.0226 | 9.64 | 5900 | 0.1189 | 27.46 | 41.75 | 93.03 |
| 0.0336 | 9.72 | 5950 | 0.1197 | 27.46 | 41.75 | 93.03 |
| 0.0309 | 9.8 | 6000 | 0.1184 | 27.46 | 41.75 | 93.03 |
| 0.0225 | 9.89 | 6050 | 0.1183 | 27.46 | 41.75 | 93.03 |
| 0.0086 | 9.97 | 6100 | 0.1187 | 27.46 | 41.75 | 93.03 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.1a0+gita64770b
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gaurishhs/API | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
inference: false
---
# ScarletPajama
Introducing ScarletPajama: a language model fine-tuned on the ShareGPT dataset and built upon the robust RedPajama-INCITE-Chat-3b architecture.
The original ShareGPT dataset consisted of 53k pairs of conversational exchanges. In order to optimize the training process, the dataset was converted to the appropriate format and filtered to remove long texts. The resulting filtered version of ShareGPT contains 22k pairs, ensuring a more focused and efficient training process.
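A sketch of the kind of length filtering described above is shown below; the file name, field name, model id, and token threshold are placeholders for illustration, not the exact preprocessing used.
```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Model id assumed; adjust to the actual base checkpoint if it differs
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1")

# Placeholder path and field name: adapt to the actual ShareGPT JSONL layout
dataset = load_dataset("json", data_files="sharegpt.jsonl", split="train")

def short_enough(example, max_tokens=2048):
    # Keep only conversations that fit the context window after tokenization
    return len(tokenizer(example["text"])["input_ids"]) <= max_tokens

filtered = dataset.filter(short_enough)
print(len(dataset), "->", len(filtered))
```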
## Model Details
- **Model Name**: ScarletPajama
- **Base Model**: RedPajama-INCITE-Chat-3b
- **Dataset**: <a href="https://huggingface.co/datasets/Fredithefish/ShareGPT-Unfiltered-RedPajama-Chat-format/blob/main/ShareGPT-22k.jsonl">ShareGPT-22K</a>
- **Fine-tuning Epochs**: 2 |
Apisate/Discord-Ai-Bot | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
tags:
- gpt_neox
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/Pirr/pythia-13b-deduped-green_devil |
Appolo/TestModel | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.915483870967742
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7798
- Accuracy: 0.9155
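A minimal inference sketch is shown below; the repository id is a placeholder, since this card does not state where the checkpoint is hosted, and the sample output is illustrative only.

```python
from transformers import pipeline

# Placeholder repo id; point it at wherever this fine-tuned checkpoint is hosted.
classifier = pipeline("text-classification", model="your-username/distilbert-base-uncased-finetuned-clinc")

# Returns the predicted clinc_oos intent label with its score,
# e.g. [{'label': 'transfer', 'score': 0.97}] (illustrative output).
print(classifier("How do I transfer money to my savings account?"))
```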
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2786 | 0.7365 |
| 3.7784 | 2.0 | 636 | 1.8736 | 0.8365 |
| 3.7784 | 3.0 | 954 | 1.1615 | 0.8919 |
| 1.6922 | 4.0 | 1272 | 0.8645 | 0.9103 |
| 0.9103 | 5.0 | 1590 | 0.7798 | 0.9155 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.11.0
|
ArBert/albert-base-v2-finetuned-ner-gmm | [
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: Xensword-T5-Base-Summarizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Xensword-T5-Base-Summarizer
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0029
- Rouge1: 0.1594
- Rouge2: 0.0664
- Rougel: 0.1405
- Rougelsum: 0.14
- Gen Len: 19.0
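To try the summarizer, a short pipeline sketch follows; the repository id is a placeholder, since this card does not state where the checkpoint is hosted, and the generation settings are illustrative.

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual location of this checkpoint.
summarizer = pipeline("summarization", model="your-username/Xensword-T5-Base-Summarizer")

text = (
    "The city council met on Tuesday to discuss the new transit plan. "
    "Members debated funding sources for over three hours before voting "
    "to approve a pilot program that will add two bus routes next spring."
)
print(summarizer(text, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```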
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 157 | 2.1287 | 0.157 | 0.0654 | 0.1388 | 0.1382 | 19.0 |
| No log | 2.0 | 314 | 2.0431 | 0.1613 | 0.0672 | 0.1419 | 0.1415 | 19.0 |
| No log | 3.0 | 471 | 2.0179 | 0.1593 | 0.0665 | 0.1406 | 0.1401 | 19.0 |
| 2.2552 | 4.0 | 628 | 2.0029 | 0.1594 | 0.0664 | 0.1405 | 0.14 | 19.0 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ArBert/bert-base-uncased-finetuned-ner-gmm | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | "2023-05-22T23:29:14Z" | ---
tags:
- gpt_neox
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1 |
ArBert/bert-base-uncased-finetuned-ner | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | "2023-05-23T00:16:53Z" | ---
license: bsd-3-clause-clear
datasets:
- fgheorghe/terrain-generator
library_name: diffusers
--- |
ArBert/roberta-base-finetuned-ner-gmm | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: naive-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 45.90 +/- 30.25
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ArBert/roberta-base-finetuned-ner-kmeans-twitter | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
tags:
- llama
- alpaca
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/chavinlo/gpt4-x-alpaca/ |
Araf/Ummah | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: cmpatino/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Aran/DialoGPT-medium-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- gpt_neox
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1 |
Aran/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: shaquillehinds
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - shaquillehinds
These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "shaquillehinds" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
Test prompt: a photo of shaquillehinds
![image_0](test_images/image_0.png)
![image_1](test_images/image_1.png)
![image_2](test_images/image_2.png)
![image_3](test_images/image_3.png)
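A minimal inference sketch with 🤗 Diffusers is shown below; the path to the LoRA weights is a placeholder, and the sampler settings are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model the LoRA was trained against.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

# Placeholder: point this at the repo or local folder containing these LoRA weights.
pipe.unet.load_attn_procs("path/to/shaquillehinds-lora")

image = pipe("a photo of shaquillehinds", num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("shaquillehinds.png")
```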
|
ArashEsk95/bert-base-uncased-finetuned-cola | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: ko
license: apache-2.0
tags:
- korean
--- |
ArashEsk95/bert-base-uncased-finetuned-stsb | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: Eldund/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Archie/myProject | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- gpt_neox
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-7B-v0.1 |
ArenaGrenade/char-cnn | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.06 +/- 19.91
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
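Since the usage section above is left as a TODO, here is a minimal loading-and-evaluation sketch. The repo id and filename are placeholders (the card does not state where this checkpoint is hosted), and the Gymnasium import assumes stable-baselines3 ≥ 2.0.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id and filename; substitute the actual Hub location of this checkpoint.
checkpoint = load_from_hub(repo_id="your-username/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# SB3 >= 2.0 expects Gymnasium; older releases use classic gym instead.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```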
|
Arghyad/Loki_small | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- recall
- precision
- accuracy
- f1
model-index:
- name: kematangan-pisang-vit-b-16-100eph-224-in1k-v2.8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kematangan-pisang-vit-b-16-100eph-224-in1k-v2.8
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0205
- Recall: 0.9972
- Specificity: 0.9992
- Precision: 0.9958
- Npv: 0.9991
- Accuracy: 0.9973
- F1: 0.9965
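An image-classification sketch follows; the repository id and the input photo are placeholders, as this card does not state where the checkpoint is hosted.

```python
from PIL import Image
from transformers import pipeline

# Placeholder repo id; substitute the actual location of this checkpoint.
classifier = pipeline(
    "image-classification",
    model="your-username/kematangan-pisang-vit-b-16-100eph-224-in1k-v2.8",
)

image = Image.open("banana.jpg")  # hypothetical input photo
print(classifier(image, top_k=3))  # top ripeness classes with scores
```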
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Recall | Specificity | Precision | Npv | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-----------:|:---------:|:------:|:--------:|:------:|
| No log | 1.0 | 140 | 0.0862 | 0.9570 | 0.9890 | 0.9554 | 0.9889 | 0.9651 | 0.9560 |
| No log | 2.0 | 280 | 0.1454 | 0.9345 | 0.9807 | 0.9278 | 0.9800 | 0.9357 | 0.9215 |
| No log | 3.0 | 420 | 0.0809 | 0.9633 | 0.9921 | 0.9746 | 0.9929 | 0.9759 | 0.9677 |
| 0.1496 | 4.0 | 560 | 0.0610 | 0.9702 | 0.9912 | 0.9607 | 0.9904 | 0.9705 | 0.9631 |
| 0.1496 | 5.0 | 700 | 0.1214 | 0.9534 | 0.9903 | 0.9728 | 0.9915 | 0.9705 | 0.9599 |
| 0.1496 | 6.0 | 840 | 0.0269 | 0.9902 | 0.9975 | 0.9889 | 0.9974 | 0.9920 | 0.9895 |
| 0.1496 | 7.0 | 980 | 0.0316 | 0.9889 | 0.9968 | 0.9841 | 0.9965 | 0.9893 | 0.9861 |
| 0.064 | 8.0 | 1120 | 0.0297 | 0.9917 | 0.9976 | 0.9879 | 0.9974 | 0.9920 | 0.9896 |
| 0.064 | 9.0 | 1260 | 0.0636 | 0.9763 | 0.9935 | 0.9707 | 0.9932 | 0.9786 | 0.9726 |
| 0.064 | 10.0 | 1400 | 0.0611 | 0.9676 | 0.9930 | 0.9771 | 0.9936 | 0.9786 | 0.9714 |
| 0.0549 | 11.0 | 1540 | 0.0224 | 0.9944 | 0.9984 | 0.9918 | 0.9982 | 0.9946 | 0.9930 |
| 0.0549 | 12.0 | 1680 | 0.0225 | 0.9930 | 0.9983 | 0.9930 | 0.9983 | 0.9946 | 0.9930 |
| 0.0549 | 13.0 | 1820 | 0.0543 | 0.9760 | 0.9948 | 0.9822 | 0.9952 | 0.9839 | 0.9787 |
| 0.0549 | 14.0 | 1960 | 0.0904 | 0.9506 | 0.9895 | 0.9674 | 0.9906 | 0.9678 | 0.9564 |
| 0.035 | 15.0 | 2100 | 0.0190 | 0.9948 | 0.9982 | 0.9938 | 0.9982 | 0.9946 | 0.9943 |
| 0.035 | 16.0 | 2240 | 0.0290 | 0.9902 | 0.9975 | 0.9889 | 0.9974 | 0.9920 | 0.9895 |
| 0.035 | 17.0 | 2380 | 0.0394 | 0.9803 | 0.9957 | 0.9848 | 0.9960 | 0.9866 | 0.9823 |
| 0.0324 | 18.0 | 2520 | 0.0912 | 0.9548 | 0.9904 | 0.9697 | 0.9913 | 0.9705 | 0.9602 |
| 0.0324 | 19.0 | 2660 | 0.0867 | 0.9690 | 0.9931 | 0.9750 | 0.9935 | 0.9786 | 0.9716 |
| 0.0324 | 20.0 | 2800 | 0.0206 | 0.9972 | 0.9992 | 0.9958 | 0.9991 | 0.9973 | 0.9965 |
| 0.0324 | 21.0 | 2940 | 0.0188 | 0.9972 | 0.9992 | 0.9958 | 0.9991 | 0.9973 | 0.9965 |
| 0.0229 | 22.0 | 3080 | 0.0262 | 0.9864 | 0.9965 | 0.9881 | 0.9966 | 0.9893 | 0.9872 |
| 0.0229 | 23.0 | 3220 | 0.1963 | 0.9252 | 0.9842 | 0.9541 | 0.9862 | 0.9517 | 0.9329 |
| 0.0229 | 24.0 | 3360 | 0.0457 | 0.9831 | 0.9965 | 0.9894 | 0.9969 | 0.9893 | 0.9858 |
| 0.0259 | 25.0 | 3500 | 0.0187 | 0.9930 | 0.9983 | 0.9930 | 0.9983 | 0.9946 | 0.9930 |
| 0.0259 | 26.0 | 3640 | 0.0408 | 0.9774 | 0.9936 | 0.9813 | 0.9940 | 0.9812 | 0.9791 |
| 0.0259 | 27.0 | 3780 | 0.0191 | 0.9915 | 0.9982 | 0.9946 | 0.9984 | 0.9946 | 0.9929 |
| 0.0259 | 28.0 | 3920 | 0.0105 | 0.9972 | 0.9992 | 0.9958 | 0.9991 | 0.9973 | 0.9965 |
| 0.0255 | 29.0 | 4060 | 0.0422 | 0.9917 | 0.9976 | 0.9879 | 0.9974 | 0.9920 | 0.9896 |
| 0.0255 | 30.0 | 4200 | 0.0133 | 0.9972 | 0.9992 | 0.9958 | 0.9991 | 0.9973 | 0.9965 |
| 0.0255 | 31.0 | 4340 | 0.0140 | 0.9972 | 0.9992 | 0.9958 | 0.9991 | 0.9973 | 0.9965 |
| 0.0255 | 32.0 | 4480 | 0.0192 | 0.9944 | 0.9984 | 0.9918 | 0.9982 | 0.9946 | 0.9930 |
| 0.0151 | 33.0 | 4620 | 0.0236 | 0.9944 | 0.9984 | 0.9918 | 0.9982 | 0.9946 | 0.9930 |
| 0.0151 | 34.0 | 4760 | 0.0180 | 0.9972 | 0.9992 | 0.9958 | 0.9991 | 0.9973 | 0.9965 |
| 0.0151 | 35.0 | 4900 | 0.0185 | 0.9972 | 0.9992 | 0.9958 | 0.9991 | 0.9973 | 0.9965 |
| 0.0078 | 36.0 | 5040 | 0.0453 | 0.9917 | 0.9976 | 0.9879 | 0.9974 | 0.9920 | 0.9896 |
| 0.0078 | 37.0 | 5180 | 0.0933 | 0.9833 | 0.9952 | 0.9769 | 0.9948 | 0.9839 | 0.9793 |
| 0.0078 | 38.0 | 5320 | 0.1109 | 0.9548 | 0.9904 | 0.9697 | 0.9913 | 0.9705 | 0.9602 |
| 0.0078 | 39.0 | 5460 | 0.0297 | 0.9887 | 0.9974 | 0.9902 | 0.9975 | 0.9920 | 0.9894 |
| 0.0204 | 40.0 | 5600 | 0.0364 | 0.9925 | 0.9972 | 0.9917 | 0.9973 | 0.9920 | 0.9920 |
| 0.0204 | 41.0 | 5740 | 0.0892 | 0.9548 | 0.9904 | 0.9697 | 0.9913 | 0.9705 | 0.9602 |
| 0.0204 | 42.0 | 5880 | 0.3811 | 0.8998 | 0.9789 | 0.9419 | 0.9819 | 0.9357 | 0.9078 |
| 0.0144 | 43.0 | 6020 | 0.0609 | 0.9845 | 0.9966 | 0.9875 | 0.9967 | 0.9893 | 0.9859 |
| 0.0144 | 44.0 | 6160 | 0.0466 | 0.9874 | 0.9967 | 0.9849 | 0.9966 | 0.9893 | 0.9860 |
| 0.0144 | 45.0 | 6300 | 0.0688 | 0.9760 | 0.9948 | 0.9822 | 0.9952 | 0.9839 | 0.9787 |
| 0.0144 | 46.0 | 6440 | 0.0686 | 0.9676 | 0.9930 | 0.9771 | 0.9936 | 0.9786 | 0.9714 |
| 0.0135 | 47.0 | 6580 | 0.1157 | 0.9806 | 0.9944 | 0.9735 | 0.9940 | 0.9812 | 0.9759 |
| 0.0135 | 48.0 | 6720 | 0.0932 | 0.9833 | 0.9952 | 0.9769 | 0.9948 | 0.9839 | 0.9793 |
| 0.0135 | 49.0 | 6860 | 0.0933 | 0.9833 | 0.9952 | 0.9769 | 0.9948 | 0.9839 | 0.9793 |
| 0.013 | 50.0 | 7000 | 0.0300 | 0.9930 | 0.9983 | 0.9930 | 0.9983 | 0.9946 | 0.9930 |
| 0.013 | 51.0 | 7140 | 0.0212 | 0.9944 | 0.9984 | 0.9918 | 0.9982 | 0.9946 | 0.9930 |
| 0.013 | 52.0 | 7280 | 0.0241 | 0.9917 | 0.9976 | 0.9879 | 0.9974 | 0.9920 | 0.9896 |
| 0.013 | 53.0 | 7420 | 0.0463 | 0.9760 | 0.9948 | 0.9822 | 0.9952 | 0.9839 | 0.9787 |
| 0.0181 | 54.0 | 7560 | 0.0247 | 0.9930 | 0.9983 | 0.9930 | 0.9983 | 0.9946 | 0.9930 |
| 0.0181 | 55.0 | 7700 | 0.0278 | 0.9930 | 0.9983 | 0.9930 | 0.9983 | 0.9946 | 0.9930 |
| 0.0181 | 56.0 | 7840 | 0.0226 | 0.9972 | 0.9992 | 0.9958 | 0.9991 | 0.9973 | 0.9965 |
| 0.0181 | 57.0 | 7980 | 0.0575 | 0.9718 | 0.9939 | 0.9796 | 0.9944 | 0.9812 | 0.9751 |
| 0.0096 | 58.0 | 8120 | 0.0647 | 0.9718 | 0.9939 | 0.9796 | 0.9944 | 0.9812 | 0.9751 |
| 0.0096 | 59.0 | 8260 | 0.0238 | 0.9930 | 0.9983 | 0.9930 | 0.9983 | 0.9946 | 0.9930 |
| 0.0096 | 60.0 | 8400 | 0.0332 | 0.9887 | 0.9974 | 0.9902 | 0.9975 | 0.9920 | 0.9894 |
| 0.0079 | 61.0 | 8540 | 0.0586 | 0.9705 | 0.9932 | 0.9733 | 0.9934 | 0.9786 | 0.9718 |
| 0.0079 | 62.0 | 8680 | 0.0177 | 0.9972 | 0.9992 | 0.9958 | 0.9991 | 0.9973 | 0.9965 |
| 0.0079 | 63.0 | 8820 | 0.0392 | 0.9917 | 0.9976 | 0.9879 | 0.9974 | 0.9920 | 0.9896 |
| 0.0079 | 64.0 | 8960 | 0.0319 | 0.9930 | 0.9983 | 0.9930 | 0.9983 | 0.9946 | 0.9930 |
| 0.004 | 65.0 | 9100 | 0.1097 | 0.9633 | 0.9921 | 0.9746 | 0.9929 | 0.9759 | 0.9677 |
| 0.004 | 66.0 | 9240 | 0.0891 | 0.9676 | 0.9930 | 0.9771 | 0.9936 | 0.9786 | 0.9714 |
| 0.004 | 67.0 | 9380 | 0.0836 | 0.9718 | 0.9939 | 0.9796 | 0.9944 | 0.9812 | 0.9751 |
| 0.0032 | 68.0 | 9520 | 0.1032 | 0.9676 | 0.9930 | 0.9771 | 0.9936 | 0.9786 | 0.9714 |
| 0.0032 | 69.0 | 9660 | 0.0450 | 0.9845 | 0.9966 | 0.9875 | 0.9967 | 0.9893 | 0.9859 |
| 0.0032 | 70.0 | 9800 | 0.1123 | 0.9676 | 0.9930 | 0.9771 | 0.9936 | 0.9786 | 0.9714 |
| 0.0032 | 71.0 | 9940 | 0.0838 | 0.9718 | 0.9939 | 0.9796 | 0.9944 | 0.9812 | 0.9751 |
| 0.0006 | 72.0 | 10080 | 0.1040 | 0.9718 | 0.9939 | 0.9796 | 0.9944 | 0.9812 | 0.9751 |
| 0.0006 | 73.0 | 10220 | 0.0563 | 0.9817 | 0.9958 | 0.9831 | 0.9958 | 0.9866 | 0.9824 |
| 0.0006 | 74.0 | 10360 | 0.0556 | 0.9889 | 0.9968 | 0.9841 | 0.9965 | 0.9893 | 0.9861 |
| 0.0033 | 75.0 | 10500 | 0.1480 | 0.9506 | 0.9895 | 0.9674 | 0.9906 | 0.9678 | 0.9564 |
| 0.0033 | 76.0 | 10640 | 0.3012 | 0.9110 | 0.9814 | 0.9527 | 0.9843 | 0.9437 | 0.9198 |
| 0.0033 | 77.0 | 10780 | 0.1784 | 0.9421 | 0.9877 | 0.9628 | 0.9891 | 0.9625 | 0.9488 |
| 0.0033 | 78.0 | 10920 | 0.0614 | 0.9803 | 0.9957 | 0.9848 | 0.9960 | 0.9866 | 0.9823 |
| 0.0054 | 79.0 | 11060 | 0.1881 | 0.9337 | 0.9860 | 0.9584 | 0.9876 | 0.9571 | 0.9409 |
| 0.0054 | 80.0 | 11200 | 0.0831 | 0.9760 | 0.9948 | 0.9822 | 0.9952 | 0.9839 | 0.9787 |
| 0.0054 | 81.0 | 11340 | 0.0382 | 0.9887 | 0.9974 | 0.9902 | 0.9975 | 0.9920 | 0.9894 |
| 0.0054 | 82.0 | 11480 | 0.0250 | 0.9917 | 0.9976 | 0.9879 | 0.9974 | 0.9920 | 0.9896 |
| 0.0047 | 83.0 | 11620 | 0.0244 | 0.9944 | 0.9984 | 0.9918 | 0.9982 | 0.9946 | 0.9930 |
| 0.0047 | 84.0 | 11760 | 0.0221 | 0.9972 | 0.9992 | 0.9958 | 0.9991 | 0.9973 | 0.9965 |
| 0.0047 | 85.0 | 11900 | 0.0249 | 0.9887 | 0.9974 | 0.9902 | 0.9975 | 0.9920 | 0.9894 |
| 0.0011 | 86.0 | 12040 | 0.0248 | 0.9887 | 0.9974 | 0.9902 | 0.9975 | 0.9920 | 0.9894 |
| 0.0011 | 87.0 | 12180 | 0.0258 | 0.9887 | 0.9974 | 0.9902 | 0.9975 | 0.9920 | 0.9894 |
| 0.0011 | 88.0 | 12320 | 0.0472 | 0.9845 | 0.9966 | 0.9875 | 0.9967 | 0.9893 | 0.9859 |
| 0.0011 | 89.0 | 12460 | 0.0363 | 0.9845 | 0.9966 | 0.9875 | 0.9967 | 0.9893 | 0.9859 |
| 0.0002 | 90.0 | 12600 | 0.0547 | 0.9889 | 0.9968 | 0.9841 | 0.9965 | 0.9893 | 0.9861 |
| 0.0002 | 91.0 | 12740 | 0.0270 | 0.9887 | 0.9974 | 0.9902 | 0.9975 | 0.9920 | 0.9894 |
| 0.0002 | 92.0 | 12880 | 0.0262 | 0.9887 | 0.9974 | 0.9902 | 0.9975 | 0.9920 | 0.9894 |
| 0.0022 | 93.0 | 13020 | 0.0194 | 0.9972 | 0.9992 | 0.9958 | 0.9991 | 0.9973 | 0.9965 |
| 0.0022 | 94.0 | 13160 | 0.0198 | 0.9972 | 0.9992 | 0.9958 | 0.9991 | 0.9973 | 0.9965 |
| 0.0022 | 95.0 | 13300 | 0.0211 | 0.9930 | 0.9983 | 0.9930 | 0.9983 | 0.9946 | 0.9930 |
| 0.0022 | 96.0 | 13440 | 0.0275 | 0.9930 | 0.9983 | 0.9930 | 0.9983 | 0.9946 | 0.9930 |
| 0.0011 | 97.0 | 13580 | 0.0201 | 0.9972 | 0.9992 | 0.9958 | 0.9991 | 0.9973 | 0.9965 |
| 0.0011 | 98.0 | 13720 | 0.0204 | 0.9972 | 0.9992 | 0.9958 | 0.9991 | 0.9973 | 0.9965 |
| 0.0011 | 99.0 | 13860 | 0.0205 | 0.9972 | 0.9992 | 0.9958 | 0.9991 | 0.9973 | 0.9965 |
| 0.0 | 100.0 | 14000 | 0.0205 | 0.9972 | 0.9992 | 0.9958 | 0.9991 | 0.9973 | 0.9965 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AriakimTaiyo/DialoGPT-revised-Kumiko | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IGustavsen/bart-base-finetuned-english-wikilingua_epoch-1-1e-4
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IGustavsen/bart-base-finetuned-english-wikilingua_epoch-1-1e-4
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.6603
- Validation Loss: 2.4052
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.6603 | 2.4052 | 0 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AriakimTaiyo/DialoGPT-small-Rikka | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- gptj
- gpt-j
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/PygmalionAI/pygmalion-6b |
AriakimTaiyo/kumiko | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/Qt5SetFitModel
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/Qt5SetFitModel")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
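The two training steps listed above can be reproduced with a short script such as the sketch below. The base Sentence Transformer, the toy dataset, and the hyperparameters are assumptions for illustration and are not taken from this card; it uses the pre-1.0 `SetFitTrainer` API.

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Tiny illustrative dataset; the data this model was actually trained on is not described here.
train_ds = Dataset.from_dict({
    "text": ["crash when opening settings", "great release, thanks!", "button label typo", "love the new UI"],
    "label": [1, 0, 1, 0],
})

# Assumed base model; the card does not state which Sentence Transformer was used.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the Sentence Transformer
    num_iterations=20,                # contrastive pairs generated per example
    num_epochs=1,                     # step 2: the classification head is fitted afterwards
    batch_size=16,
)
trainer.train()

preds = trainer.model(["crash on startup"])
```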
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Aries/T5_question_answering | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Arnold/wav2vec2-hausa2-demo-colab | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: IGustavsen/mbart-50-finetuned-english-german-wikilingua_epoch-1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IGustavsen/mbart-50-finetuned-english-german-wikilingua_epoch-1
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.7358
- Validation Loss: 2.3730
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.7358 | 2.3730 | 0 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Arnold/wav2vec2-large-xlsr-hausa2-demo-colab | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
datasets:
- timdettmers/guanaco-13b
- JosephusCheung/GuanacoDataset
---
<center><h1><b>Guanaco</b> - Generative Universal Assistant for Natural-language Adaptive Context-aware Omnilingual outputs</h1></center>
<p><strong><font size="5">Information</font></strong></p>
Guanaco 13B LoRA from timdettmers/guanaco-13b, merged into LLaMA 13B and compatible with transformers 4.28.0.
<br>This was made using https://huggingface.co/timdettmers/guanaco-13b and https://huggingface.co/datasets/JosephusCheung/GuanacoDataset
The details of the Guanaco dataset and the parameters of the LoRA that Tim Dettmers released are not available at this time.
<html>
<head>
<style>
table {
border:1px solid #b3adad;
border-collapse:collapse;
padding:5px;
}
table th {
border:1px solid #b3adad;
padding:5px;
background: #f0f0f0;
color: #313030;
}
table td {
border:1px solid #b3adad;
text-align:center;
padding:5px;
background: #ffffff;
color: #313030;
}
</style>
</head>
<body>
<table>
<thead>
<tr>
<th>Model:</th>
<th>Wikitext2</th>
<th>Ptb-New</th>
<th>C4-New</th>
</tr>
</thead>
<tbody>
<tr>
<td>Guanaco 13b 8bit</td>
<td>5.771384239196777</td>
<td>10.377276420593262</td>
<td></td>
</tr>
</tbody>
</table>
</body>
</html>
More information can be found at the dataset page and in the description below: https://huggingface.co/datasets/JosephusCheung/GuanacoDataset
Below is a description of Guanaco from https://guanaco-model.github.io/:
Guanaco is an advanced instruction-following language model built on Meta's LLaMA 13B model. Expanding upon the initial 52K dataset from the Alpaca model, an additional 534,530 entries have been incorporated, covering English, Simplified Chinese, Traditional Chinese (Taiwan), Traditional Chinese (Hong Kong), Japanese, Deutsch, and various linguistic and grammatical tasks. This wealth of data enables Guanaco to perform exceptionally well in multilingual environments.
In an effort to foster openness and replicability in research, we have made the [Guanaco Dataset](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset) publicly accessible and released the [model weights](https://huggingface.co/JosephusCheung/Guanaco). By providing these resources, we aim to inspire more researchers to pursue related research and collectively advance the development of instruction-following language models.
When utilizing the Guanaco model, please bear in mind the following points:
* The Guanaco model has not been filtered for harmful, biased, or explicit content. As a result, outputs that do not adhere to ethical norms may be generated during use. Please exercise caution when using the model in research or practical applications.
1\. Improved context and prompt role support:
---------------------------------------------
The new format is designed to be similar to ChatGPT, allowing for better integration with the Alpaca format and enhancing the overall user experience.
Instruction is utilized as a few-shot context to support diverse inputs and responses, making it easier for the model to understand and provide accurate responses to user queries.
The format is as follows:
### Instruction:
User: History User Input
Assistant: History Assistant Answer
### Input:
System: Knowledge
User: New User Input
### Response:
New Assistant Answer
This structured format allows for easier tracking of the conversation history and maintaining context throughout a multi-turn dialogue.
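As an illustration only (this helper is not part of the Guanaco release), the documented format can be assembled programmatically like this:

```python
def build_guanaco_prompt(history, knowledge, user_input):
    """Illustrative helper that assembles the prompt format described above.

    `history` is a list of (user, assistant) turns; `knowledge` is optional system context.
    """
    instruction = "\n".join(f"User: {u}\nAssistant: {a}" for u, a in history)
    input_block = (f"System: {knowledge}\n" if knowledge else "") + f"User: {user_input}"
    return (
        "### Instruction:\n" + instruction + "\n\n"
        "### Input:\n" + input_block + "\n\n"
        "### Response:\n"
    )

prompt = build_guanaco_prompt(
    history=[("Who wrote Faust?", "Johann Wolfgang von Goethe.")],
    knowledge="Faust is a tragic play in two parts.",
    user_input="When was the first part published?",
)
print(prompt)
```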
2\. Role-playing support:
-------------------------
Guanaco now offers advanced role-playing support, similar to Character.AI, in English, Simplified Chinese, Traditional Chinese, Japanese, and Deutsch, making it more versatile for users from different linguistic backgrounds.
Users can instruct the model to assume specific roles, historical figures, or fictional characters, as well as personalities based on their input. This allows for more engaging and immersive conversations.
The model can use various sources of information to provide knowledge and context for the character's background and behavior, such as encyclopedic entries, first-person narrations, or a list of personality traits.
The model will consistently output responses in the format "Character Name: Reply" to maintain the chosen role throughout the conversation, enhancing the user's experience.
3\. Rejection of answers and avoidance of erroneous responses:
--------------------------------------------------------------
The model has been updated to handle situations where it lacks sufficient knowledge or is unable to provide a valid response more effectively.
Reserved keywords have been introduced to indicate different scenarios and provide clearer communication with the user:
* NO IDEA: Indicates that the model lacks the necessary knowledge to provide an accurate answer, and will explain this to the user, encouraging them to seek alternative sources.
* FORBIDDEN: Indicates that the model refuses to answer due to specific reasons (e.g., legal, ethical, or safety concerns), which will be inferred based on the context of the query.
* SFW: Indicates that the model refuses to answer a question because it has been filtered for NSFW content, ensuring a safer and more appropriate user experience.
4\. Continuation of responses for ongoing topics:
-------------------------------------------------
The Guanaco model can now continue answering questions or discussing topics upon the user's request, making it more adaptable and better suited for extended conversations.
The contextual structure consisting of System, Assistant, and User roles allows the model to engage in multi-turn dialogues, maintain context-aware conversations, and provide more coherent responses.
The model can now accommodate role specification and character settings, providing a more immersive and tailored conversational experience based on the user's preferences.
It is important to remember that Guanaco is a 7B-parameter model, and any knowledge-based content should be considered potentially inaccurate. We strongly recommend providing verifiable sources, such as Wikipedia, for knowledge-based answers. In the absence of sources, it is crucial to inform users of this limitation to prevent the dissemination of false information and to maintain transparency.
5\. Multimodal Visual Question Answering (VQA) Support:
-------------------------------------------------------
Guanaco expands its capabilities into the realm of multimodal interactions, now offering support for Visual Question Answering (VQA). The model achieves this by integrating data from the blip2-flan-t5-xxl for multilingual VQA tasks, marking a significant milestone in the development of multimodal chatbots.
This new feature allows the model to interpret and respond to queries that involve both text and visual inputs, providing a richer, more interactive, and comprehensive user experience. Users can now ask questions about an image, and the model will analyze the visual content in conjunction with the textual query to provide a response.
A noteworthy addition is the [Guanaco VQA Dataset](https://huggingface.co/datasets/JosephusCheung/GuanacoVQADataset), which is now publicly accessible.
Now as a multimodal chatbot, Guanaco can bridge the gap between visual and linguistic understanding, making it an incredibly versatile tool for a wide array of applications.
However, as always, we encourage responsible and ethical use of this model. Please note that while Guanaco strives to provide accurate and helpful responses, it is still crucial to cross-verify the information from reliable sources for knowledge-based queries.
|
BME-TMIT/foszt2oszt | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"hu",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
BatuhanYilmaz/distilbert-base-uncased-finetuned-squad-d5716d28 | [
"pytorch",
"distilbert",
"fill-mask",
"en",
"dataset:squad",
"arxiv:1910.01108",
"transformers",
"question-answering",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 18 | null | ---
tags:
- llama
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/openlm-research/open_llama_7b_preview_300bt |
Bharathdamu/wav2vec2-large-xls-r-300m-hindi2-colab | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
language:
- de
metrics:
- accuracy
- f1
- precision
- recall
library_name: transformers
pipeline_tag: text-classification
tags:
- distilbert
- job ads
- apprenticeship classification
---
further information follows |
Bhumika/roberta-base-finetuned-sst2 | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 85 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.75 +/- 47.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
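As the code block above is a placeholder, the following sketch shows how an agent like this one could be trained and saved with stable-baselines3; the hyperparameters are illustrative defaults, not the settings used for this checkpoint.

```python
import gymnasium as gym
from stable_baselines3 import PPO

# Illustrative hyperparameters; tune for your own run.
env = gym.make("LunarLander-v2")
model = PPO("MlpPolicy", env, n_steps=1024, batch_size=64, gamma=0.999, verbose=1)
model.learn(total_timesteps=1_000_000)
model.save("ppo-LunarLander-v2")  # the resulting .zip can then be pushed to the Hub
```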
|
Biasface/DDDC | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
tags:
- llama
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/openlm-research/open_llama_7b_preview_200bt |
BigBoy/model | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_model2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train[:1%]
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_model2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0394
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 16 | 0.0935 | 1.0 |
| No log | 2.0 | 32 | 0.0394 | 1.0 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1+cpu
- Datasets 2.12.0
- Tokenizers 0.13.3
|
BigSalmon/BertaMyWorda | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null |
The Cisco Certified Network Associate (CCNA) 200-301 exam comprehensively tests your knowledge of Cisco networking technologies. It covers a wide range of topics, including routing and switching, security, wireless networking, and even some programming concepts.
If you are planning to take the CCNA 200-301 exam, a range of study materials can help you prepare. Here are a few of the best:
• Official Cisco CCNA 200-301 Study Guide: The official study guide from Cisco; it covers every topic on the exam in detail.
• CCNA 200-301 Practice Exams: Practice exams, available online and in print, help you assess your knowledge and get ready for the real thing.
• CCNA 200-301 Video Training: Online video courses present the material in a visual way.
• CCNA 200-301 Boot Camps: Boot camps can get you exam-ready in a short period of time.
Beyond these study materials, many online resources can help you prepare for the CCNA 200-301 exam, including Cisco's own website and a number of third-party sites.
Prepare for the exam with Pass4surexams: https://www.pass4surexams.com/cisco/200-301-dumps.html
|
BigSalmon/BlankSlots | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 4 | null | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 21.30 +/- 9.20
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
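The card contains no code. The sketch below is a generic REINFORCE (Monte-Carlo policy-gradient) update in PyTorch, meant only to illustrate the algorithm covered in Unit 4; it is an assumption, not the author's actual implementation, and the network sizes are made up.

```python
import torch
from torch import nn
from torch.distributions import Categorical

class Policy(nn.Module):
    """Small MLP policy for a discrete-action environment such as Pixelcopter."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):
        return Categorical(logits=self.net(obs))

def reinforce_update(optimizer, log_probs, rewards, gamma=0.99):
    """One policy-gradient step on a single episode.

    `log_probs` are the log pi(a_t|s_t) values collected while acting (dist.log_prob(action));
    `rewards` are the per-step rewards of the same episode.
    """
    returns, g = [], 0.0
    for r in reversed(rewards):                 # discounted return, computed backwards
        g = r + gamma * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # simple variance reduction
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```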
|
BigSalmon/DaBlank | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 4 | null | ---
license: apache-2.0
---
# Introduction
The models in this repo were converted from
https://huggingface.co/luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2
which was trained using
https://github.com/k2-fsa/icefall/pull/349
|
BigSalmon/Flowberta | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
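No usage example is given. Assuming the fine-tuned checkpoint is available locally or on the Hub as `codeparrot-ds` (a hypothetical identifier), generation might look like this:

```python
from transformers import pipeline

# Hypothetical model id/path -- the card does not state where the checkpoint lives.
generator = pipeline("text-generation", model="codeparrot-ds")

prompt = "def fibonacci(n):"
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```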
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
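As a rough illustration only (not the authors' actual training script, and the `output_dir` is an assumption), the hyperparameters listed above map onto `TrainingArguments` roughly as follows:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="codeparrot-ds",          # assumed; not given in the card
    learning_rate=5e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=8,       # 16 x 8 = 128 effective train batch size
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=1,
    fp16=True,                           # "Native AMP" mixed precision
)
# The reported Adam betas/epsilon match the Trainer defaults, so they need no explicit setting.
```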
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
BigSalmon/GPT2HardArticleEasyArticle | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9461290322580646
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2500
- Accuracy: 0.9461
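The card does not say how the distillation itself was set up. A common knowledge-distillation objective (shown purely as an assumption, not the authors' method) mixes a temperature-softened KL term against a teacher model with the usual cross-entropy on the hard labels:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Hinton-style KD loss; T and alpha here are illustrative values."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                          # rescale to keep gradient magnitudes comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```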
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.247 | 1.0 | 318 | 3.1740 | 0.7555 |
| 2.4149 | 2.0 | 636 | 1.5652 | 0.8639 |
| 1.1633 | 3.0 | 954 | 0.7781 | 0.9061 |
| 0.5688 | 4.0 | 1272 | 0.4624 | 0.9342 |
| 0.3005 | 5.0 | 1590 | 0.3368 | 0.9429 |
| 0.1785 | 6.0 | 1908 | 0.2871 | 0.9429 |
| 0.1174 | 7.0 | 2226 | 0.2673 | 0.9458 |
| 0.0877 | 8.0 | 2544 | 0.2525 | 0.9465 |
| 0.0728 | 9.0 | 2862 | 0.2521 | 0.9465 |
| 0.0661 | 10.0 | 3180 | 0.2500 | 0.9461 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|