crazyjeannot committed
Commit 0cf3b2a · verified · 1 Parent(s): b7c0d36

Update README.md

Files changed (1):
  1. README.md +60 -79

README.md CHANGED
@@ -1,6 +1,8 @@
  ---
- datasets: []
- language: []
  library_name: sentence-transformers
  pipeline_tag: sentence-similarity
  tags:
@@ -8,35 +10,43 @@ tags:
  - sentence-similarity
  - feature-extraction
  widget: []
  ---

- # SentenceTransformer

- This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

  ## Model Details

  ### Model Description
  - **Model Type:** Sentence Transformer
- <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- - **Maximum Sequence Length:** 8192 tokens
- - **Output Dimensionality:** 1024 tokens
  - **Similarity Function:** Cosine Similarity
- <!-- - **Training Dataset:** Unknown -->
- <!-- - **Language:** Unknown -->
- <!-- - **License:** Unknown -->

  ### Model Sources

  - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

  ### Full Model Architecture

  ```
  SentenceTransformer(
- (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
  )
@@ -44,71 +54,47 @@ SentenceTransformer(

  ## Usage

- ### Direct Usage (Sentence Transformers)
-
- First install the Sentence Transformers library:
-
- ```bash
- pip install -U sentence-transformers
- ```

  Then you can load this model and run inference.
  ```python
- from sentence_transformers import SentenceTransformer

  # Download from the 🤗 Hub
- model = SentenceTransformer("sentence_transformers_model_id")

  # Run inference
  sentences = [
-     'The weather is lovely today.',
-     "It's so sunny outside!",
-     'He drove to the stadium.',
  ]
  embeddings = model.encode(sentences)
  print(embeddings.shape)
  # [3, 1024]
-
- # Get the similarity scores for the embeddings
- similarities = model.similarity(embeddings, embeddings)
- print(similarities.shape)
- # [3, 3]
  ```

- <!--
- ### Direct Usage (Transformers)
-
- <details><summary>Click to see the direct usage in Transformers</summary>
-
- </details>
- -->
-
- <!--
- ### Downstream Usage (Sentence Transformers)
-
- You can finetune this model on your own dataset.
-
- <details><summary>Click to expand</summary>
-
- </details>
- -->
-
- <!--
- ### Out-of-Scope Use
-
- *List how the model may foreseeably be misused and address what users ought not to do with the model.*
- -->

- <!--
- ## Bias, Risks and Limitations
-
- *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
- -->

- <!--
- ### Recommendations
-
- *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
- -->

  ## Training Details

@@ -123,22 +109,17 @@ You can finetune this model on your own dataset.

  ## Citation

- ### BibTeX
-
- <!--
- ## Glossary
-
- *Clearly define terms in order to be accessible across audiences.*
- -->

- <!--
- ## Model Card Authors
-
- *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
- -->
-
- <!--
- ## Model Card Contact
-
- *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
- -->

  ---
+ datasets:
+ - crazyjeannot/fr_literary_dataset_large
+ language:
+ - fr
  library_name: sentence-transformers
  pipeline_tag: sentence-similarity
  tags:
  - sentence-similarity
  - feature-extraction
  widget: []
+ license: apache-2.0
+ base_model:
+ - BAAI/bge-m3
  ---

+ # Literary Encoder
+
+ This is an encoder model fine-tuned from the FlagOpen/FlagEmbedding family of models.
+
+ The model is specialized for the study of French literary fiction; its training corpus consists of 40,000 passages from public-domain French novels.
+
+ It maps paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
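+
+ A minimal sketch of the clustering use named above, assuming scikit-learn is installed; the short passages are invented examples and the repository id is the one given in the Usage section below:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+ from sklearn.cluster import KMeans
+
+ # Hypothetical mini-corpus of French passages; any list of strings works here
+ passages = [
+     "Je montai me coucher, le coeur lourd de souvenirs.",
+     "Elle relisait la lettre, cherchant un sens caché.",
+     "Le navire quitta le port sous un ciel d'orage.",
+     "La tempête jeta l'équipage contre les bastingages.",
+ ]
+
+ model = SentenceTransformer("crazyjeannot/literary_bge_base")
+ embeddings = model.encode(passages)
+
+ # Group the passages into two clusters based on their embeddings
+ labels = KMeans(n_clusters=2, n_init=10).fit_predict(embeddings)
+ print(labels)  # e.g. [0 0 1 1]
+ ```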

  ## Model Details

  ### Model Description
  - **Model Type:** Sentence Transformer
+ - **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3)
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 1024 dimensions
  - **Similarity Function:** Cosine Similarity
+ - **Training Dataset:** [crazyjeannot/fr_literary_dataset_large](https://huggingface.co/datasets/crazyjeannot/fr_literary_dataset_large)
+ - **Language:** French
+ - **License:** cc-by-2.5
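+
+ These values can also be checked at runtime; a minimal sketch, assuming the repository id from the Usage section below:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ model = SentenceTransformer("crazyjeannot/literary_bge_base")
+ print(model.max_seq_length)                      # 512
+ print(model.get_sentence_embedding_dimension())  # 1024
+ ```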

  ### Model Sources

  - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [FlagEmbedding on GitHub](https://github.com/FlagOpen/FlagEmbedding)
+ - **Hugging Face:** [BGE dense model on Hugging Face](https://huggingface.co/BAAI/bge-m3)

  ### Full Model Architecture

  ```
  SentenceTransformer(
+ (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
  )
  ```

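+ In other words, the pipeline tokenizes the text, runs the encoder, keeps the [CLS] token vector, and L2-normalizes it. A rough equivalent in plain `transformers` (an untested sketch; it assumes the checkpoint loads as a standard Hugging Face encoder):
+
+ ```python
+ import torch
+ from transformers import AutoModel, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("crazyjeannot/literary_bge_base")
+ encoder = AutoModel.from_pretrained("crazyjeannot/literary_bge_base")
+
+ batch = tokenizer(["Un passage littéraire."], padding=True, truncation=True,
+                   max_length=512, return_tensors="pt")
+ with torch.no_grad():
+     out = encoder(**batch)
+
+ # CLS pooling followed by L2 normalization, matching the module list above
+ cls = out.last_hidden_state[:, 0]
+ embedding = torch.nn.functional.normalize(cls, p=2, dim=1)
+ print(embedding.shape)  # torch.Size([1, 1024])
+ ```
+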

  ## Usage

+ ### Direct Usage (FlagEmbedding)

  You can load this model and run inference as follows.
  ```python
+ from FlagEmbedding import FlagModel

  # Download from the 🤗 Hub
+ model = FlagModel('crazyjeannot/literary_bge_base',
+                   query_instruction_for_retrieval="",
+                   use_fp16=True)
+
  # Run inference
  sentences = [
+     'Il y avait, du reste, cette chose assez triste, c’est que si M. de Marsantes, à l’esprit fort ouvert, eût apprécié un fils si différent de lui, Robert de Saint-Loup, parce qu’il était de ceux qui croient que le mérite est attaché à certaines formes de la vie, avait un souvenir affectueux mais un peu méprisant d’un père qui s’était occupé toute sa vie de chasse et de course, avait bâillé à Wagner et raffolé d’Offenbach.',
+     "D’ailleurs, les opinions tranchantes abondent dans un siècle où l’on ne doute de rien, hors de l’existence de Dieu ; mais comme les jugements généraux que l’on porte sur les peuples sont assez souvent démentis par l’expérience, je n’aurai garde de prononcer.",
+     'Il était chargé de remettre l’objet, quel qu’il fût, au commodore, et d’en prendre un reçu, comme preuve que lui et son camarade s’étaient acquittés de leur commission.',
  ]
  embeddings = model.encode(sentences)
  print(embeddings.shape)
  # [3, 1024]
  ```
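
+ Since the architecture ends with a `Normalize()` module, the returned embeddings are unit-length and cosine similarity reduces to a plain dot product. A minimal follow-up to the snippet above (plain NumPy, not a FlagEmbedding API):
+
+ ```python
+ import numpy as np
+
+ # Pairwise cosine similarities between the passages encoded above
+ similarities = embeddings @ embeddings.T
+ print(np.round(similarities, 2))  # 3x3 matrix with ones on the diagonal
+ ```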

+ ### Direct Usage (Sentence Transformers)
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("crazyjeannot/literary_bge_base")
+
+ # Run inference
+ sentences = [
+     'Il y avait, du reste, cette chose assez triste, c’est que si M. de Marsantes, à l’esprit fort ouvert, eût apprécié un fils si différent de lui, Robert de Saint-Loup, parce qu’il était de ceux qui croient que le mérite est attaché à certaines formes de la vie, avait un souvenir affectueux mais un peu méprisant d’un père qui s’était occupé toute sa vie de chasse et de course, avait bâillé à Wagner et raffolé d’Offenbach.',
+     "D’ailleurs, les opinions tranchantes abondent dans un siècle où l’on ne doute de rien, hors de l’existence de Dieu ; mais comme les jugements généraux que l’on porte sur les peuples sont assez souvent démentis par l’expérience, je n’aurai garde de prononcer.",
+     'Il était chargé de remettre l’objet, quel qu’il fût, au commodore, et d’en prendre un reçu, comme preuve que lui et son camarade s’étaient acquittés de leur commission.',
+ ]
+
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 1024]
+ ```
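
+ With recent Sentence Transformers releases (v3 and later), the similarity scores can also be computed directly on the model, as in the snippet the previous version of this card used:
+
+ ```python
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```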

  ## Training Details

  ## Citation

+ If you find this repository useful, please consider giving it a like and citing the following work:

+ ```bibtex
+ @inproceedings{barre_latent_2024,
+   title = {Latent {Structures} of {Intertextuality} in {French} {Fiction}},
+   author = {Barré, Jean},
+   address = {Aarhus, Denmark},
+   series = {{CEUR} {Workshop} {Proceedings}},
+   booktitle = {Proceedings of the {Conference} on {Computational} {Humanities} {Research} CHR2024},
+   publisher = {CEUR},
+   editor = {Haverals, Wouter and Koolen, Marijn and Thompson, Laure},
+   year = {2024},
+ }
+ ```