GGUF
mav23 committed (verified)
Commit 228c91c · 1 Parent(s): 65877a2

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +44 -0
  3. vicuna-33b-v1.3.Q4_0.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ vicuna-33b-v1.3.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,44 @@
---
inference: false
---

# Vicuna Model Card

## Model Details

Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.

- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971)

### Model Sources

- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/

## Uses

The primary use of Vicuna is research on large language models and chatbots. Its intended users are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## How to Get Started with the Model

- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights
- APIs (OpenAI API, Hugging Face API): https://github.com/lm-sys/FastChat/tree/main#api
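The FastChat tooling linked above formats prompts with Vicuna's conversation template. As a hedged sketch (the authoritative template lives in FastChat's `conversation.py`; the system prompt, `USER`/`ASSISTANT` role names, and separators below follow the vicuna_v1.1 style and should be checked against the repository), a prompt builder might look like:

```python
# Sketch of the Vicuna v1.x conversation template (vicuna_v1.1 style):
# a fixed system prompt, alternating USER/ASSISTANT turns, a space after
# the system prompt and each user turn, and "</s>" after each assistant
# reply. Verify the exact strings against FastChat before relying on them.

SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the "
    "user's questions."
)

def build_prompt(turns):
    """turns: list of (role, message); message is None for the turn to generate."""
    seps = [" ", "</s>"]  # separator after user turns / after assistant turns
    ret = SYSTEM + seps[0]
    for i, (role, message) in enumerate(turns):
        if message is None:
            ret += f"{role}:"  # open turn for the model to complete
        else:
            ret += f"{role}: {message}" + seps[i % 2]
    return ret

prompt = build_prompt([("USER", "What is GGUF?"), ("ASSISTANT", None)])
```

Generation is then run on `prompt` and stopped at the `</s>` token, after which the reply can be appended as an `ASSISTANT` turn for the next round.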
## Training Details

Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning. The training data is around 125K conversations collected from ShareGPT.com. See the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf) for more details.

## Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).

## Differences between Vicuna versions

See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).
vicuna-33b-v1.3.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ab15f6f9d1f8d549d32cb95be6d0a5ba375644249df17f5b16c2add8a9ca2b09
size 18355967648
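The `.gguf` entry above is a git-lfs pointer, not the model itself: three `key value` lines giving the spec version, the SHA-256 of the real file, and its byte size. The declared size can be sanity-checked against the Q4_0 quantization arithmetic. This is a rough estimate, not a derivation of the exact file size: the ~32.5B parameter count for a 33B LLaMA is approximate, and real GGUF files add metadata and keep some tensors at higher precision.

```python
# Parse a git-lfs pointer and sanity-check the Q4_0 size arithmetic.
# Q4_0 packs weights in blocks of 32: sixteen bytes of 4-bit quants plus a
# 2-byte fp16 scale, i.e. 18 bytes per 32 weights (4.5 bits per weight).

POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:ab15f6f9d1f8d549d32cb95be6d0a5ba375644249df17f5b16c2add8a9ca2b09
size 18355967648
"""

def parse_lfs_pointer(text):
    """Return the key/value fields of a git-lfs pointer file as a dict."""
    fields = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

fields = parse_lfs_pointer(POINTER)
declared = int(fields["size"])   # bytes on disk, per the pointer
params = 32.5e9                  # approximate parameter count of a 33B LLaMA
estimate = params * 18 / 32      # Q4_0: 18 bytes per block of 32 weights

print(f"declared {declared / 2**30:.1f} GiB, Q4_0 estimate {estimate / 2**30:.1f} GiB")
```

The estimate lands within about half a percent of the declared 18,355,967,648 bytes, which is consistent with the file being a Q4_0 quantization of a 33B model.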