bofenghuang committed on
Commit 702248d · 1 Parent(s): f8b1523
Files changed (1): README.md (+4 -4)
README.md CHANGED
@@ -268,7 +268,7 @@ pip install faster-whisper
 Then, download the model converted to the CTranslate2 format:
 
 ```bash
-python -c "from huggingface_hub import snapshot_download; snapshot_download(repo_id='bofenghuang/whisper-large-v3-distil-fr-v0.2', local_dir='./models/whisper-large-v3-distil-fr-v0.2', allow_patterns='ctranslate2/*')"
+huggingface-cli download --include ctranslate2/* --local-dir ./models/whisper-large-v3-distil-fr-v0.2 bofenghuang/whisper-large-v3-distil-fr-v0.2
 ```
 
 Now, you can transcribe audio files by following the usage instructions provided in the repository:
@@ -278,7 +278,7 @@ from datasets import load_dataset
 from faster_whisper import WhisperModel
 
 # Load model
-model_name_or_path = "./models/whisper-large-v3-distil-fr-v0.2/ctranslate2
+model_name_or_path = "./models/whisper-large-v3-distil-fr-v0.2/ctranslate2"
 model = WhisperModel(model_name_or_path, device="cuda", compute_type="float16")  # Run on GPU with FP16
 
 # Example audio
@@ -311,7 +311,7 @@ Next, download the converted ggml weights from the Hugging Face Hub:
 
 ```bash
 # Download model quantized with Q5_0 method
-python -c "from huggingface_hub import hf_hub_download; hf_hub_download(repo_id='bofenghuang/whisper-large-v3-distil-fr-v0.2', filename='ggml-model-q5_0.bin', local_dir='./models/whisper-large-v3-distil-fr-v0.2')"
+huggingface-cli download --include ggml-model* --local-dir ./models/whisper-large-v3-distil-fr-v0.2 bofenghuang/whisper-large-v3-distil-fr-v0.2
 ```
 
 Now, you can transcribe an audio file using the following command:
@@ -364,7 +364,7 @@ Download the pytorch checkpoint in the original OpenAI format and convert it int
 
 ```bash
 # Download
-python -c "from huggingface_hub import hf_hub_download; hf_hub_download(repo_id='bofenghuang/whisper-large-v3-distil-fr-v0.2', filename='original_model.pt', local_dir='./models/whisper-large-v3-distil-fr-v0.2')"
+huggingface-cli download --include original_model.pt --local-dir ./models/whisper-large-v3-distil-fr-v0.2 bofenghuang/whisper-large-v3-distil-fr-v0.2
 # Convert into .npz
 python convert.py --torch-name-or-path ./models/whisper-large-v3-distil-fr-v0.2/original_model.pt --mlx-path ./mlx_models/whisper-large-v3-distil-fr-v0.2
 ```
 
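This commit swaps the `snapshot_download(..., allow_patterns='ctranslate2/*')` one-liner for `huggingface-cli download --include ctranslate2/*`; both filter the repository's file list with shell-style glob patterns before downloading. A minimal sketch of that filtering step using Python's stdlib `fnmatch` (the file names here are illustrative, not the repo's actual contents):

```python
from fnmatch import fnmatch

# Files as they might appear in the model repo (illustrative names)
repo_files = [
    "README.md",
    "ctranslate2/model.bin",
    "ctranslate2/config.json",
    "ggml-model-q5_0.bin",
    "original_model.pt",
]

# Keep only files matching the glob, as --include / allow_patterns do
selected = [f for f in repo_files if fnmatch(f, "ctranslate2/*")]
print(selected)  # ['ctranslate2/model.bin', 'ctranslate2/config.json']
```

The same pattern is why `--include ggml-model*` in the later hunk pulls every ggml variant rather than a single file.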
 
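The README snippet above stops after loading the model; with faster-whisper, `model.transcribe(audio_path)` returns a lazy generator of segments (each carrying `start`, `end`, and `text`) whose texts are joined to form the transcript. A sketch of that assembly step, using a stub segment list in place of real model output so it runs without a GPU or weights:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """Stub mirroring the start/end/text fields of faster-whisper segments."""
    start: float
    end: float
    text: str

def join_segments(segments):
    """Concatenate segment texts into one transcript string."""
    return "".join(s.text for s in segments).strip()

# Stub data standing in for the generator returned by model.transcribe(...)
segments = [
    Segment(0.0, 2.5, " Bonjour,"),
    Segment(2.5, 5.0, " comment allez-vous ?"),
]
print(join_segments(segments))  # Bonjour, comment allez-vous ?
```

Because the real generator is lazy, transcription only happens as you iterate over it.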
 
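The `convert.py` step above writes the checkpoint out as an `.npz` archive, which is simply NumPy's zip-of-arrays format keyed by tensor name. A toy sketch of that format with made-up tensor names (the real converted checkpoint has different names and shapes):

```python
import os
import tempfile
import numpy as np

# Toy tensors standing in for converted model weights (names are made up)
weights = {
    "encoder.conv1.weight": np.zeros((4, 3), dtype=np.float16),
    "decoder.token_embedding.weight": np.ones((10, 4), dtype=np.float16),
}

# .npz is a zip archive of .npy files, one per named array
path = os.path.join(tempfile.mkdtemp(), "weights.npz")
np.savez(path, **weights)

# Loading gives lazy, name-keyed access to each array
loaded = np.load(path)
print(sorted(loaded.files))
```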