Llama model using Hugging Face: Getting no access

I'm trying to get a Llama model (meta-llama/Llama-2-7b-chat-hf) running in a Docker container via Hugging Face. I'm following the instructions from this blog post: LLM Everywhere: Docker for Local and Hugging Face Hosting | Docker

Docker runs under Windows Subsystem for Linux (WSL) on Windows 11, with Ubuntu 24.04. I have Hugging Face tokens of both the fine-grained and read types, and I have also registered with Meta. With both tokens I get an error message saying I have no access:

"Cannot access gated repo for url https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/resolve/main/tokenizer_config.json. Access to model meta-llama/Llama-2-7b-chat-hf is restricted and you are not in the authorized list. Visit https://huggingface.co/meta-llama/Llama-2-7b-chat-hf to ask for access."

What am I doing wrong? I should add that I am an absolute beginner with both Docker and Llama.
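One common cause of this error is that the token never reaches the library at download time. A minimal sketch of checking for the token up front and passing it explicitly, rather than relying on an implicit login, might look like this (the helper function name is mine; the commented-out download still requires network access and a token that Meta has actually approved for this repo):

```python
import os

# Hypothetical helper: read the Hugging Face token from the environment
# and fail fast with a clear message instead of the opaque
# "Cannot access gated repo" error later on.
def get_hf_token():
    token = os.environ.get("HF_TOKEN") or os.environ.get("HUGGING_FACE_HUB_TOKEN")
    if not token or not token.startswith("hf_"):
        raise RuntimeError(
            "Set HF_TOKEN (or HUGGING_FACE_HUB_TOKEN) to your Hugging Face access token"
        )
    return token

# Usage (needs network access and an approved token, so it is commented out):
# from transformers import AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained(
#     "meta-llama/Llama-2-7b-chat-hf",
#     token=get_hf_token(),
# )
```

If this raises immediately, the problem is token delivery (for example, the variable was set on the Windows/WSL host but not inside the container); if the token is found but the download still fails, the problem is on the access-approval side.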


The procedure itself looks correct. The most common cause of this error is not having obtained permission from Meta, but you say you already have.
One thing to keep in mind: the guide on that page is quite old, so the usage may have changed slightly since it was written. Try looking for a newer manual.
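Since this runs in Docker, it is also worth double-checking that the token actually makes it into the container's environment. A quick sanity check, assuming you pass the token with `-e` (the token value below is a placeholder and the image name in the comment is illustrative):

```shell
# Placeholder token; use your real hf_... token in practice.
HF_TOKEN="hf_placeholder_token"

# Pass it into the container when you run your image, e.g.:
# docker run -e HF_TOKEN="$HF_TOKEN" <your-image>

# Inside the container (or here on the host), confirm the variable is set:
printf '%s\n' "$HF_TOKEN" | cut -c1-3   # prints "hf_" if the token is present
```

If the prefix does not print inside the container, the libraries there never see your token, regardless of how valid it is.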