Update README.md
README.md
CHANGED
@@ -24,12 +24,17 @@ Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://
 
 For test run results (and a good indicator of target use cases), please see the files "core_rag_test" and "answer_sheet" in this repo.
 
+The inference accuracy tests were performed on this model (GGUF 4_K_M), not the original PyTorch version. It is possible that the original PyTorch model would score higher, but we have chosen to use the quantized version, as it is most representative of how the model is likely to be used for inference.
+
+Please compare with [dragon-llama2](https://www.huggingface.co/llmware/dragon-llama-v0) or the most recent [dragon-mistral-0.3](https://www.huggingface.co/llmware/dragon-mistral-0.3-gguf).
+
+
 ### Model Description
 
 <!-- Provide a longer summary of what this model is. -->
 
 - **Developed by:** llmware
-- **Model type:**
+- **Model type:** Llama-8b-3.1-Base
 - **Language(s) (NLP):** English
 - **License:** Llama-3.1 Community License
 - **Finetuned from model:** Llama-3.1-Base
@@ -45,28 +50,20 @@ Any model can provide inaccurate or incomplete information, and should be used i
 
 ## How to Get Started with the Model
 
-
-
-from transformers import AutoTokenizer, AutoModelForCausalLM
-tokenizer = AutoTokenizer.from_pretrained("dragon-llama-3.1-gguf", trust_remote_code=True)
-model = AutoModelForCausalLM.from_pretrained("dragon-llama-3.1-gguf", trust_remote_code=True)
-
-Please refer to the generation_test.py files in the Files repository, which include 200 samples and a script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval, so that the test set can be swapped out for a RAG workflow over business documents.
-
-The dRAGon model was fine-tuned with a simple "\<human>" and "\<bot>" wrapper, so to get the best results, wrap inference entries as:
-
-full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
-
-The dRAGon model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:
-
-1. Text Passage Context, and
-2. Specific question or instruction based on the text passage
-
-To get the best results, package "my_prompt" as follows:
-
-my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
-
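For clarity, here is a minimal sketch of how the two conventions above fit together; the passage and question values are hypothetical placeholders, not from the model card:

    # hypothetical example values, not from the model card
    text_passage = "The total amount of the invoice is $4,500, due by July 1, 2024."
    question = "What is the total amount of the invoice?"

    # package the context and question, then apply the <human>/<bot> wrapper
    my_prompt = text_passage + "\n" + question
    full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"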
+To pull the model via API:
+
+from huggingface_hub import snapshot_download
+snapshot_download("llmware/dragon-llama-3.1-gguf", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
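As a quick sanity check after the download (a sketch assuming the same local_dir as above), the quantized weights arrive as a .gguf file in that directory:

    import os

    # list the downloaded GGUF file(s); the exact filename may vary
    local_dir = "/path/on/your/machine/"
    print([f for f in os.listdir(local_dir) if f.endswith(".gguf")])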
+
+Load in your favorite GGUF inference engine, or try with llmware as follows:
+
+from llmware.models import ModelCatalog
+
+# to load the model and make a basic inference
+model = ModelCatalog().load_model("llmware/dragon-llama-3.1-gguf", temperature=0.0, sample=False)
+response = model.inference(query, add_context=text_sample)
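For a generic GGUF engine instead of llmware, here is a minimal sketch with llama-cpp-python; the model filename is an assumption (check local_dir for the actual .gguf name), and the prompt uses the <human>/<bot> wrapper convention from this card:

    from llama_cpp import Llama

    # hypothetical filename; use the actual .gguf file pulled by snapshot_download
    llm = Llama(model_path="/path/on/your/machine/dragon-llama-3.1.gguf", n_ctx=4096)

    text_sample = "The lease term is 36 months, beginning January 1, 2024."
    query = "What is the length of the lease?"
    prompt = "<human>: " + text_sample + "\n" + query + "\n" + "<bot>:"

    # greedy decoding to mirror temperature=0.0, sample=False in the llmware example
    output = llm(prompt, max_tokens=100, temperature=0.0)
    print(output["choices"][0]["text"])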
+
+Details on the prompt wrapper and other configurations are in the config.json file in the files repository.
 
 ## Model Card Contact