fdschmidt93 committed
Commit 74cdef3 · Parent: 767b6d1

fix(README): minor clean-ups

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -41,10 +41,9 @@ This model has only been trained on self-supervised data and not yet been fine-t
 ```python
 import torch
 import torch.nn.functional as F
-from transformers import AutoTokenizer, AutoModel, AutoConfig
+from transformers import AutoTokenizer, AutoModel
 
 tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
-
 model = AutoModel.from_pretrained(
     "fdschmidt93/NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse",
     trust_remote_code=True,
@@ -94,6 +93,7 @@ from peft.tuners.lora.config import LoraConfig
 
 # Only attach LoRAs to the linear layers of LLM2Vec inside NLLB-LLM2Vec
 lora_config = LoraConfig(
+    r = 16,
     lora_alpha = 32,
     target_modules = r".*llm2vec.*(self_attn\.(q|k|v|o)_proj|mlp\.(gate|up|down)_proj).*",
     bias = "none",
@@ -126,4 +126,4 @@ If you are using `NLLB-LLM2Vec` in your work, please cite
 }
 ```
 
-The work has been accepted to EMNLP findings. The Bibtex will therefore be updated when the paper will be released on ACLAnthology.
+The work has been accepted to Findings of EMNLP. The BibTeX will be updated once the paper is released on the ACL Anthology.
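
The `target_modules` regex in the LoRA config above can be sanity-checked offline with only the standard library; a minimal sketch, where the module paths are hypothetical names shaped like a Llama-style decoder inside NLLB-LLM2Vec, not read from the actual checkpoint:

```python
import re

# Regex from the README's LoraConfig: target only the attention and MLP
# linear layers of the llm2vec submodule, not the NLLB encoder.
pattern = re.compile(
    r".*llm2vec.*(self_attn\.(q|k|v|o)_proj|mlp\.(gate|up|down)_proj).*"
)

# Hypothetical module paths for illustration.
candidates = [
    "model.llm2vec.layers.0.self_attn.q_proj",       # targeted linear layer
    "model.llm2vec.layers.0.mlp.gate_proj",          # targeted linear layer
    "model.nllb.encoder.layers.0.self_attn.q_proj",  # outside llm2vec
    "model.llm2vec.layers.0.input_layernorm",        # not a targeted linear
]

matched = [name for name in candidates if pattern.search(name)]
print(matched)
```

Only the first two names should match, which is the intended behavior: LoRA adapters are attached inside LLM2Vec while the NLLB encoder and all normalization layers are left untouched.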