julianrisch committed on
Commit 3f1283a · verified · 1 Parent(s): 32d17e5

Update README.md

Files changed (1):
  1. README.md +31 -17

README.md CHANGED
@@ -8,7 +8,7 @@ tags:
  - exbert
  ---
 
- # deepset/xlm-roberta-base-squad2-distilled
+ # Multilingual XLM-RoBERTa base distilled for Extractive QA on various languages
  - haystack's distillation feature was used for training. deepset/xlm-roberta-large-squad2 was used as the teacher model.
 
  ## Overview
@@ -17,7 +17,7 @@ tags:
  **Downstream-task:** Extractive QA
  **Training data:** SQuAD 2.0
  **Eval data:** SQuAD 2.0
- **Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
+ **Code:** See [an example extractive QA pipeline built with Haystack](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline)
  **Infrastructure**: 1x Tesla v100
 
  ## Hyperparameters
@@ -36,19 +36,33 @@ distillation_loss_weight = 0.75
  ## Usage
 
  ### In Haystack
- Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
+ Haystack is an AI orchestration framework to build customizable, production-ready LLM applications. You can use this model in Haystack to do extractive question answering on documents.
+ To load and run the model with [Haystack](https://github.com/deepset-ai/haystack/):
  ```python
- reader = FARMReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled")
- # or
- reader = TransformersReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled",tokenizer="deepset/xlm-roberta-base-squad2-distilled")
+ # After running pip install haystack-ai "transformers[torch,sentencepiece]"
+
+ from haystack import Document
+ from haystack.components.readers import ExtractiveReader
+
+ docs = [
+     Document(content="Python is a popular programming language"),
+     Document(content="python ist eine beliebte Programmiersprache"),
+ ]
+
+ reader = ExtractiveReader(model="deepset/roberta-base-squad2")
+ reader.warm_up()
+
+ question = "What is a popular programming language?"
+ result = reader.run(query=question, documents=docs)
+ # {'answers': [ExtractedAnswer(query='What is a popular programming language?', score=0.5740374326705933, data='python', document=Document(id=..., content: '...'), context=None, document_offset=ExtractedAnswer.Span(start=0, end=6),...)]}
  ```
- For a complete example of ``deepset/xlm-roberta-base-squad2-distilled`` being used for [question answering], check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system)
+ For a complete example with an extractive question answering pipeline that scales over many documents, check out the [corresponding Haystack tutorial](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline).
 
  ### In Transformers
  ```python
  from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
 
- model_name = "deepset/xlm-roberta-base-squad2-distilled"
+ model_name = "deepset/roberta-base-squad2"
 
  # a) Get predictions
  nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
@@ -77,8 +91,9 @@ Evaluated on the SQuAD 2.0 dev set
  **Michel Bartels:** [email protected]
 
  ## About us
+
  <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
- <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
+ <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
  <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
  </div>
  <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
@@ -86,20 +101,19 @@ Evaluated on the SQuAD 2.0 dev set
  </div>
  </div>
 
- [deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
-
+ [deepset](http://deepset.ai/) is the company behind the production-ready open-source AI framework [Haystack](https://haystack.deepset.ai/).
 
  Some of our other work:
- - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")]([https://huggingface.co/deepset/tinyroberta-squad2)
- - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
+ - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
+ - [German BERT](https://deepset.ai/german-bert), [GermanQuAD and GermanDPR](https://deepset.ai/germanquad), [German embedding model](https://huggingface.co/mixedbread-ai/deepset-mxbai-embed-de-large-v1)
+ - [deepset Cloud](https://www.deepset.ai/deepset-cloud-product), [deepset Studio](https://www.deepset.ai/deepset-studio)
 
  ## Get in touch and join the Haystack community
 
- <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>.
+ <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
 
- We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join">Discord community open to everyone!</a></strong></p>
+ We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
 
- [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
+ [Twitter](https://twitter.com/Haystack_AI) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://haystack.deepset.ai/) | [YouTube](https://www.youtube.com/@deepset_ai)
 
  By the way: [we're hiring!](http://www.deepset.ai/jobs)
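Editor's note: the hyperparameters hunk in the diff above references `distillation_loss_weight = 0.75`. As background, here is a minimal pure-Python sketch of the general technique such a weight controls: blending a distillation term (match the teacher's output distribution) with the supervised term (match the ground-truth label). The function names and the exact formula are illustrative assumptions, not Haystack's actual implementation, which operates on the model's prediction layers during training.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of floats.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, target_index):
    # Negative log-likelihood of the ground-truth class.
    return -math.log(softmax(logits)[target_index])

def kl_divergence(p, q):
    # KL(p || q) for two discrete distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def distillation_loss(student_logits, teacher_logits, target_index,
                      distillation_loss_weight=0.75):
    # Weighted blend: weight * "match the teacher" + (1 - weight) * "match the label".
    distill = kl_divergence(softmax(teacher_logits), softmax(student_logits))
    task = cross_entropy(student_logits, target_index)
    return (distillation_loss_weight * distill
            + (1 - distillation_loss_weight) * task)
```

With a weight of 0.75, three quarters of the training signal comes from imitating the teacher's softened predictions rather than from the hard SQuAD 2.0 labels.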
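Editor's note: the README's "Extractive QA" downstream task means the reader selects an answer *span* from the documents rather than generating text. As a rough illustration of the idea (not Haystack's or transformers' actual decoding code, and with made-up scores), a reader scores each token as a possible answer start and end, then returns the highest-scoring valid span:

```python
def best_span(start_scores, end_scores, max_len=8):
    # Find the (start, end) pair with the highest combined score,
    # subject to start <= end and a maximum span length.
    best = (float("-inf"), 0, 0)
    for s, s_score in enumerate(start_scores):
        for e in range(s, min(s + max_len, len(end_scores))):
            score = s_score + end_scores[e]
            if score > best[0]:
                best = (score, s, e)
    return best  # (score, start_index, end_index)

tokens = ["python", "is", "a", "popular", "programming", "language"]
start = [5.0, 0.1, 0.2, 0.3, 0.1, 0.2]  # illustrative per-token start scores
end = [4.0, 0.5, 0.1, 0.2, 0.3, 0.4]    # illustrative per-token end scores
score, s, e = best_span(start, end)
print(" ".join(tokens[s:e + 1]))  # -> python
```

This is why the `ExtractedAnswer` in the diff above carries a `document_offset` span: the answer is a literal substring of one of the input documents.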