antoinelouis committed
Commit fdd76a2 · Parent(s): 473d20c

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -57,8 +57,8 @@ def mean_pooling(model_output, attention_mask):
 sentences = ['This is an example sentence', 'Each sentence is converted']
 
 # Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained('antoinelouis/biencoder-antoinelouis-biencoder-camembert-base-mmarcoFRFR-bsard')
-model = AutoModel.from_pretrained('antoinelouis/biencoder-antoinelouis-biencoder-camembert-base-mmarcoFRFR-bsard')
+tokenizer = AutoTokenizer.from_pretrained('maastrichtlawtech/camembert-base-lleqa')
+model = AutoModel.from_pretrained('maastrichtlawtech/camembert-base-lleqa')
 
 # Tokenize sentences
 encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
@@ -79,7 +79,7 @@ We evaluate the model on the test set of LLeQA, which consists of 195 legal ques
 
 | MRR@10 | NDCG@10 | MAP@10 | R@10 | R@100 | R@500 |
 |---------:|----------:|---------:|-------:|--------:|--------:|
-| 19.03 | 14.36 | 10.77 | 15.95 | 34.12 | 52.26 |
+| 36.55 | 39.27 | 30.64 | 58.27 | 82.43 | 92.41 |
 
 ## Training
 ***
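The first hunk header references the README's `mean_pooling` function, which averages token embeddings while ignoring padding positions via the attention mask. A minimal sketch of that pooling step, using plain Python lists with hypothetical toy values rather than the torch tensors the README's snippet actually operates on:

```python
# Sketch of mean pooling over token embeddings with an attention mask.
# Toy values for illustration; the README's version works on torch tensors.
def mean_pooling(token_embeddings, attention_mask):
    """Average each sequence's token vectors, skipping positions where mask == 0."""
    pooled = []
    for seq, mask in zip(token_embeddings, attention_mask):
        kept = [vec for vec, m in zip(seq, mask) if m == 1]
        n = max(len(kept), 1)  # avoid division by zero, like torch clamp(min=1e-9)
        dim = len(seq[0])
        pooled.append([sum(vec[d] for vec in kept) / n for d in range(dim)])
    return pooled

# One sequence of three tokens; the third is padding and is masked out:
emb = [[[1.0, 3.0], [3.0, 5.0], [9.0, 9.0]]]
mask = [[1, 1, 0]]
print(mean_pooling(emb, mask))  # [[2.0, 4.0]]
```

Only the two unmasked tokens contribute, so each embedding dimension is the mean of the kept vectors — which is exactly why sentence embeddings from this model ignore padded positions.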