Doesn't work. Cannot access gated repo.

#11
by KLL1111 - opened

The repository is gated; I take it there is no solution to this?
What is Llama-3.1-8B used for, and can I do without it?
I only need what's in the Image and Caption window: the Florence-2, Qwen2-VL, and JoyCaption models.
I don't understand what the right column is for.

OSError: You are trying to access a gated repo.
Make sure to have access to it at https://huggingface.co/meta-llama/Meta-Llama-3.1-8B.
401 Client Error. (Request ID: Root=)
Cannot access gated repo for url https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/resolve/main/config.json.
Access to model meta-llama/Llama-3.1-8B is restricted. You must have access to it and be authenticated to access it. Please log in.

If you are trying to use this Space locally: JoyCaption uses Llama 3.1, so you need to apply for access to that model. You can request access at https://huggingface.co/meta-llama/Meta-Llama-3.1-8B.
You also need to log in with your Hugging Face account on your local PC.
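A minimal sketch of the local side of that setup, assuming access was already granted on the model page. The helper name and the HF_TOKEN environment variable are my own conventions here, not something from the Space's code:

```python
import os

# Hypothetical helper: read a Hugging Face access token from the environment
# (e.g. set after `huggingface-cli login` or in your shell profile) and build
# the keyword arguments a gated from_pretrained call needs.
def gated_model_kwargs(env_var: str = "HF_TOKEN") -> dict:
    token = os.environ.get(env_var)
    if not token:
        raise RuntimeError(
            f"Set {env_var} to a Hugging Face token that has access to "
            "meta-llama/Meta-Llama-3.1-8B (request access on the model page first)."
        )
    return {"token": token}

# Usage (assumes transformers is installed and access was granted):
# tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, use_fast=False, **gated_model_kwargs())
```

Passing the token explicitly like this avoids the 401 error as long as the account behind the token was actually approved for the gated repo.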


I can't do it. Is there any way to switch the program to an OPEN model?
Maybe:

#tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, use_fast=False, token=HF_TOKEN)
tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-reranker-v2-m3")

#text_model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto", torch_dtype=torch.bfloat16, token=HF_TOKEN)
text_model = AutoModelForSequenceClassification.from_pretrained("BAAI/bge-reranker-v2-m3")

But it doesn't work.

My recommendation would be disabling JoyCaption. You cannot swap in a random model; JoyCaption is trained against Llama 3.1's configuration.

https://huggingface.co/spaces/gokaygokay/FLUX-Prompt-Generator/blob/main/ui_components.py

You need to delete anything related to JoyCaption inside this script.
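A hypothetical sketch of what "delete anything related to JoyCaption" can look like in practice: rather than removing code, gate the JoyCaption pieces behind a flag so the rest of the Space still builds. The flag and function names below are illustrative, not taken from ui_components.py:

```python
# Illustrative only: gate JoyCaption behind a flag instead of deleting code.
ENABLE_JOYCAPTION = False

def build_caption_models() -> list:
    # Florence-2 and Qwen2-VL are ungated; JoyCaption pulls the gated Llama 3.1,
    # so it is only offered when explicitly enabled.
    models = ["Florence-2", "Qwen2-VL"]
    if ENABLE_JOYCAPTION:
        models.append("JoyCaption")
    return models
```

With the flag off, the Image and Caption window keeps the two ungated models and never touches the gated repo.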


And won't it work with these models? They don't have any of these restrictions.
unsloth/Meta-Llama-3.1-8B-bnb-4bit
unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2

They might work if their torch dtype matches. You can try it; it might not work perfectly.
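A sketch (untested) of that swap: point the script's MODEL_PATH at an ungated Llama 3.1 mirror instead of the gated meta-llama repo. Whether JoyCaption's trained adapter still lines up is an assumption here; the unsloth bnb-4bit repos store quantized weights and would additionally need bitsandbytes plus matching load settings:

```python
# Assumption: this ungated checkpoint keeps Meta's Llama 3.1 architecture,
# which is what JoyCaption was trained against.
MODEL_PATH = "Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2"  # was "meta-llama/Meta-Llama-3.1-8B"

# The original loading calls then run unchanged, minus the token argument:
# tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, use_fast=False)
# text_model = AutoModelForCausalLM.from_pretrained(
#     MODEL_PATH, device_map="auto", torch_dtype=torch.bfloat16)
```

If loading fails with shape or dtype errors, that is the mismatch warned about above, and disabling JoyCaption remains the safer route.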
