# Puma-3B
---
license: apache-2.0
datasets:
  - totally-not-an-llm/sharegpt-hyperfiltered-3k
language:
  - en
library_name: transformers
pipeline_tag: text-generation
---

This is OpenLLaMA 3B V2 finetuned on ShareGPT Hyperfiltered for 1 epoch.

Prompt template:

```
### HUMAN:
{prompt}

### RESPONSE:
```

Leave a newline after `### RESPONSE:` for the model to answer.
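The template above can be applied with a small helper before passing text to the model. This is only a sketch; the `format_prompt` function is a hypothetical convenience, not part of the model or library:

```python
def format_prompt(prompt: str) -> str:
    """Wrap a user prompt in the HUMAN/RESPONSE template.

    The trailing newline after "### RESPONSE:" leaves room for the
    model to generate its answer, as the template requires.
    """
    return f"### HUMAN:\n{prompt}\n\n### RESPONSE:\n"


# The resulting string is what you feed to the tokenizer or a
# text-generation pipeline.
text = format_prompt("What is the capital of France?")
```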

A GGML quant (q4_1) is available here.

Note: Don't expect this model to be strong; it was one of my first finetuning attempts. So please don't roast me!