# Update README.md
It is also compatible with NVIDIA Riva for [production-grade server deployments](https://developer.nvidia.com/riva).

The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

To train, fine-tune, or play with the model, you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.

```
pip install nemo_toolkit['all']
```
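Once NeMo is installed, inference can be run with the toolkit's example tooling. The sketch below assumes a local clone of the NeMo repository (for `examples/asr/transcribe_speech.py`) and uses placeholder values; the exact pretrained model name for this checkpoint is not given in this excerpt.

```
# Rough sketch only: transcribe a folder of WAV files with a NeMo ASR checkpoint
# using the transcribe_speech.py example script, run from a NeMo repository checkout.
# The model name and paths are placeholders, not values taken from this model card.
python examples/asr/transcribe_speech.py \
    pretrained_name="<pretrained_model_name>" \
    audio_dir="/path/to/wav_files" \
    output_filename="transcriptions.json"
```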
This model provides transcribed speech as a string for a given audio sample.

## Model Architecture

The Streaming Citrinet-1024 model is a non-autoregressive, streaming variant of the Citrinet model [1] for Automatic Speech Recognition that uses CTC loss/decoding instead of a Transducer. You can find more information on this model here: [Citrinet Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#citrinet).
## Training

The NeMo toolkit [3] was used to train the model for several hundred epochs. The model was trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_ctc/speech_to_text_ctc_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/citrinet/citrinet_1024.yaml).
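As a rough illustration of how such a run can be launched (the manifest paths and tokenizer directory below are placeholders, not the values used for this checkpoint):

```
# Illustrative sketch: launch the CTC/BPE training script with the Citrinet-1024
# base config via Hydra overrides, from a NeMo repository checkout.
# All paths are placeholders.
python examples/asr/asr_ctc/speech_to_text_ctc_bpe.py \
    --config-path=../conf/citrinet/ \
    --config-name=citrinet_1024 \
    model.train_ds.manifest_filepath=/path/to/train_manifest.json \
    model.validation_ds.manifest_filepath=/path/to/val_manifest.json \
    model.tokenizer.dir=/path/to/tokenizer_dir \
    model.tokenizer.type=bpe
```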
The tokenizer for this model was built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
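A hedged sketch of such a tokenizer build is shown below; the manifest path, output directory, and vocabulary size are placeholders rather than the exact settings used for this model.

```
# Illustrative sketch: build a SentencePiece tokenizer from the training transcripts
# with NeMo's tokenizer script, run from a NeMo repository checkout.
# Paths and vocabulary size are placeholders.
python scripts/tokenizers/process_asr_text_tokenizer.py \
    --manifest=/path/to/train_manifest.json \
    --data_root=/path/to/tokenizer_out \
    --vocab_size=1024 \
    --tokenizer=spe \
    --spe_type=unigram \
    --log
```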
### Datasets

While deploying with [NVIDIA Riva](https://developer.nvidia.com/riva), you can combine this model with external language models to further improve WER. The WER (%) of the latest model with different language modeling techniques is reported in the following table.
## Limitations

Since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
## Deployment with NVIDIA Riva

For the best real-time accuracy, latency, and throughput, deploy the model with [NVIDIA Riva](https://developer.nvidia.com/riva), an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.