Aetherarchio committed on
Commit 8f6becc · verified · 1 Parent(s): 8f6e704

match script naming

Files changed (1)
  1. README.md +3 -4
README.md CHANGED
@@ -9,7 +9,7 @@ tags:
 ---
 
 > [!TIP]
-> **Model moved to the AetherArchitectural Community:** <br>
+> **AetherArchitectural Community:** <br>
 > Now at [**AetherArchitectural/GGUF-Quantization-Script**](https://huggingface.co/AetherArchitectural/GGUF-Quantization-Script).
 >
 > **Credits:** <br>
@@ -54,7 +54,7 @@ Your `imatrix.txt` is expected to be located inside the `imatrix` folder. I have
 Adjust `quantization_options` in [**line 138**](https://huggingface.co/FantasiaFoundry/GGUF-Quantization-Script/blob/main/gguf-imat.py#L138).
 
 > [!NOTE]
-> Models downloaded to be used for quantization are cached at `C:\Users\{{User}}\.cache\huggingface\hub`. You can delete these files manually as needed after you're done with your quantizations, you can do it directly from your Terminal if you prefer with the `rmdir "C:\Users\{{User}}\.cache\huggingface\hub"` command. You can put it into another script or alias it to a convenient command if you prefer.
+> Models downloaded to be used for quantization might stay cached at `C:\Users\{{User}}\.cache\huggingface\hub`. You can delete these files manually if needed after you're done with your quantizations, you can do it directly from your Terminal if you prefer with the `rmdir "C:\Users\{{User}}\.cache\huggingface\hub"` command. You can put it into another script or alias it to a convenient command if you prefer.
 
 
 **Hardware:**
@@ -70,7 +70,6 @@ Adjust `quantization_options` in [**line 138**](https://huggingface.co/FantasiaF
 
 **Usage:**
 ```
-python .\gguf-imat.py
+python .\gguf-imat-lossless-for-BF16.py
 ```
 Quantizations will be output into the created `models\{model-name}-GGUF` folder.
-<br><br>
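
The README shown in this diff tells users to adjust `quantization_options` on line 138 of the script, but the list itself is not reproduced in the commit. The snippet below is only a hypothetical Python sketch of what such a list of llama.cpp quantization presets could look like; the variable name comes from the README, while the preset values are standard llama.cpp types and may differ from the actual defaults in `gguf-imat-lossless-for-BF16.py`.

```python
# Hypothetical sketch -- not copied from gguf-imat-lossless-for-BF16.py.
# The README directs users to edit `quantization_options` (line 138);
# a list of standard llama.cpp preset names like these is the kind of
# value it takes.
quantization_options = [
    "Q4_K_M",  # common size/quality trade-off
    "Q5_K_M",
    "Q6_K",
    "Q8_0",    # near-lossless, largest output files
]
```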