---
license: apache-2.0
---

# GATEAU-LLaMA-7B-1K-10K

A simple demo for deploying the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the tokenizer and model; trust_remote_code is required because the
# repository ships custom modeling code (including the chat interface below).
tokenizer = AutoTokenizer.from_pretrained("ssz1111/GATEAU-1k-10k", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "ssz1111/GATEAU-1k-10k",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
model = model.eval()

# Run a single-turn chat; `chat` is provided by the model's remote code.
query = "\n\n Hello."
response, history = model.chat(tokenizer, query, history=[], max_new_tokens=512, temperature=1)
print(response)
```