---
license: cc
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- medical
inference: false
---
<!-- header start -->
<div style="width: 100%;">
    <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<!-- header end -->

# medalpaca-13B-GGML

These are 4-bit, 5-bit and 8-bit GGML format model files for [Medalpaca 13B](https://huggingface.co/medalpaca/medalpaca-13b).

This repo is the result of quantising to 4-bit, 5-bit and 8-bit GGML for CPU (+CUDA) inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/medalpaca-13B-GPTQ-4bit).
* [4-bit, 5-bit and 8-bit GGML models for llama.cpp CPU (+CUDA) inference](https://huggingface.co/TheBloke/medalpaca-13B-GGML).
* [medalpaca's float32 HF format repo for GPU inference and further conversions](https://huggingface.co/medalpaca/medalpaca-13b).

## THE FILES IN MAIN BRANCH REQUIRE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!

llama.cpp recently made another breaking change to its quantisation methods: https://github.com/ggerganov/llama.cpp/pull/1508

I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.

For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.

## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `medalpaca-13B.ggmlv3.q4_0.bin` | q4_0 | 4-bit | 8.14GB | 10.5GB | 4-bit. |
| `medalpaca-13B.ggmlv3.q4_1.bin` | q4_1 | 4-bit | 8.14GB | 10.5GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| `medalpaca-13B.ggmlv3.q5_0.bin` | q5_0 | 5-bit | 8.95GB | 11.0GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| `medalpaca-13B.ggmlv3.q5_1.bin` | q5_1 | 5-bit | 9.76GB | 12.25GB | 5-bit. Even higher accuracy, higher resource usage and slower inference. |
| `medalpaca-13B.ggmlv3.q8_0.bin` | q8_0 | 8-bit | 14.6GB | 17GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |

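If you want a single quantised file rather than a clone of the whole repo, you can fetch it programmatically with `huggingface_hub`. A minimal sketch; the filename chosen here is just an example, so substitute any file from the table above:

```python
# Minimal sketch: download one quantised file from this repo with huggingface_hub.
# The filename is an example; pick whichever quant from the table above suits you.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/medalpaca-13B-GGML",
    filename="medalpaca-13B.ggmlv3.q5_0.bin",
    # revision="previous_llama_ggmlv2",  # for files matching the older llama.cpp
)
print(model_path)  # local path you can pass to llama.cpp via -m
```
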
## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 8 -m medalpaca-13B.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"
```

Change `-t 8` to the number of physical CPU cores you have.

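You can also drive these files from Python via the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings. A minimal sketch, assuming a llama-cpp-python release from the GGML era (later versions only load GGUF files); the prompt mirrors the instruction format above:

```python
# Minimal sketch using llama-cpp-python; assumes a version contemporary with
# GGMLv3 support (newer releases of the library expect GGUF files instead).
from llama_cpp import Llama

llm = Llama(model_path="medalpaca-13B.ggmlv3.q5_0.bin", n_ctx=2048, n_threads=8)
output = llm(
    "### Instruction: What are the symptoms of diabetes? ### Response:",
    max_tokens=256,
    temperature=0.7,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```
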
## How to run in `text-generation-webui`

GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.

Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: MedAlpaca 13b

## Table of Contents

[Model Description](#model-description)
- [Architecture](#architecture)
- [Training Data](#training-data)

[Model Usage](#model-usage)
[Limitations](#limitations)

## Model Description
### Architecture
`medalpaca-13b` is a large language model specifically fine-tuned for medical domain tasks.
It is based on LLaMA (Large Language Model Meta AI) and contains 13 billion parameters.
The primary goal of this model is to improve performance on question-answering and medical dialogue tasks.

### Training Data
The training data for this project was sourced from various resources.
Firstly, we used Anki flashcards to automatically generate questions
from the front of the cards and answers from the back of the cards.
Secondly, we generated medical question-answer pairs from [Wikidoc](https://www.wikidoc.org/index.php/Main_Page).
We extracted paragraphs with relevant headings, and used ChatGPT 3.5
to generate questions from the headings, using the corresponding paragraphs
as answers. This dataset is still under development and we believe
that approximately 70% of these question-answer pairs are factually correct.
Thirdly, we used StackExchange to extract question-answer pairs, taking the
top-rated questions from five categories: Academia, Bioinformatics, Biology,
Fitness, and Health. Additionally, we used a dataset from [ChatDoctor](https://arxiv.org/abs/2303.14070)
consisting of 200,000 question-answer pairs, available at https://github.com/Kent0n-Li/ChatDoctor.

| Source                       | n items |
|------------------------------|---------|
| ChatDoc large                | 200000  |
| wikidoc                      | 67704   |
| Stackexchange academia       | 40865   |
| Anki flashcards              | 33955   |
| Stackexchange biology        | 27887   |
| Stackexchange fitness        | 9833    |
| Stackexchange health         | 7721    |
| Wikidoc patient information  | 5942    |
| Stackexchange bioinformatics | 5407    |

## Model Usage
To evaluate the performance of the model on a specific dataset, you can use the Hugging Face Transformers library's built-in evaluation scripts. Please refer to the evaluation guide for more information.

### Inference

You can use the model for inference tasks like question-answering and medical dialogues using the Hugging Face Transformers library. Here's an example of how to use the model for a question-answering task:

```python
# medalpaca is a causal language model, so we use the text-generation
# pipeline and supply the context and question as a single prompt.
from transformers import pipeline

qa_pipeline = pipeline("text-generation", model="medalpaca/medalpaca-13b", tokenizer="medalpaca/medalpaca-13b")
question = "What are the symptoms of diabetes?"
context = "Diabetes is a metabolic disease that causes high blood sugar. The symptoms include increased thirst, frequent urination, and unexplained weight loss."
answer = qa_pipeline(f"Context: {context}\n\nQuestion: {question}\n\nAnswer: ")
print(answer)
```

## Limitations
The model may not perform effectively outside the scope of the medical domain.
The training data primarily targets the knowledge level of medical students,
which may result in limitations when addressing the needs of board-certified physicians.
The model has not been tested in real-world applications, so its efficacy and accuracy are currently unknown.
It should never be used as a substitute for a doctor's opinion and must be treated as a research tool only.