Quantization made by Richard Erkhov.
NorLlama-3B - GGUF
- Model creator: https://huggingface.co/NorGLM/
- Original model: https://huggingface.co/NorGLM/NorLlama-3B/
Name | Quant method | Size |
---|---|---|
NorLlama-3B.Q2_K.gguf | Q2_K | 2.51GB |
NorLlama-3B.IQ3_XS.gguf | IQ3_XS | 2.51GB |
NorLlama-3B.IQ3_S.gguf | IQ3_S | 2.51GB |
NorLlama-3B.Q3_K_S.gguf | Q3_K_S | 2.51GB |
NorLlama-3B.IQ3_M.gguf | IQ3_M | 2.56GB |
NorLlama-3B.Q3_K.gguf | Q3_K | 2.56GB |
NorLlama-3B.Q3_K_M.gguf | Q3_K_M | 2.56GB |
NorLlama-3B.Q3_K_L.gguf | Q3_K_L | 2.59GB |
NorLlama-3B.IQ4_XS.gguf | IQ4_XS | 2.51GB |
NorLlama-3B.Q4_0.gguf | Q4_0 | 0.2GB |
NorLlama-3B.IQ4_NL.gguf | IQ4_NL | 0.49GB |
NorLlama-3B.Q4_K_S.gguf | Q4_K_S | 2.78GB |
NorLlama-3B.Q4_K.gguf | Q4_K | 2.82GB |
NorLlama-3B.Q4_K_M.gguf | Q4_K_M | 2.82GB |
NorLlama-3B.Q4_1.gguf | Q4_1 | 0.21GB |
NorLlama-3B.Q5_0.gguf | Q5_0 | 0.22GB |
NorLlama-3B.Q5_K_S.gguf | Q5_K_S | 2.91GB |
NorLlama-3B.Q5_K.gguf | Q5_K | 2.94GB |
NorLlama-3B.Q5_K_M.gguf | Q5_K_M | 2.94GB |
NorLlama-3B.Q5_1.gguf | Q5_1 | 0.23GB |
NorLlama-3B.Q6_K.gguf | Q6_K | 3.58GB |
NorLlama-3B.Q8_0.gguf | Q8_0 | 0.27GB |
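The GGUF files above are intended for llama.cpp-compatible runtimes rather than the transformers example further down. As a minimal sketch, assuming one of the files (here the Q4_K_M variant, chosen only for illustration) has been downloaded locally and the llama-cpp-python package is installed, it can be loaded like this:

```python
# Minimal sketch: run a GGUF quant with llama-cpp-python (assumed installed).
# The file path and generation settings are illustrative, not prescribed by this card.
from llama_cpp import Llama

llm = Llama(
    model_path="NorLlama-3B.Q4_K_M.gguf",  # any quant from the table above
    n_ctx=2048,                            # context window size
)

output = llm(
    "Tom ønsket å gå på barene med venner",
    max_tokens=20,
)
print(output["choices"][0]["text"])
```

The same files can also be run directly with the llama.cpp command-line tools. In general, smaller quants (e.g. Q2_K, Q3_K_S) trade output quality for lower memory use, while Q6_K and Q8_0 stay closest to the original weights.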
Original model description:
license: cc-by-nc-sa-4.0
language: 'no' (Norwegian)
Generative Pretrained Transformer with 3 billion parameters for Norwegian. NorLlama-3B is based on the Llama architecture and was pretrained with the Tencent Pre-training Framework.
It belongs to NorGLM, a suite of pretrained Norwegian Generative Language Models. NorGLM can be used for non-commercial purposes.
Datasets
All models in NorGLM are trained on a 200 GB dataset of nearly 25B tokens, covering Norwegian, Danish, Swedish, German and English.
Run the Model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "NorGLM/NorLlama-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map='auto',
    torch_dtype=torch.bfloat16
)

text = "Tom ønsket å gå på barene med venner"
# Move the inputs to the same device as the model before generating.
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Citation Information
If you find our work helpful, please cite our paper:
@article{liu2023nlebench+,
title={NLEBench+ NorGLM: A Comprehensive Empirical Analysis and Benchmark Dataset for Generative Language Models in Norwegian},
author={Liu, Peng and Zhang, Lemei and Farup, Terje Nissen and Lauvrak, Even W and Ingvaldsen, Jon Espen and Eide, Simen and Gulla, Jon Atle and Yang, Zhirong},
journal={arXiv preprint arXiv:2312.01314},
year={2023}
}