Dataset: mteb/
Modalities: Tabular, Text
Formats: json
Libraries: Datasets, Dask

Scheduled Commit a16fcdf (verified), committed by Muennighoff · 1 parent: ef1a4f0

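Each line in the JSONL files below is one standalone JSON object (a vote or chat record). A minimal sketch of parsing such lines and tallying vote types with Python's stdlib `json` module — the field names (`task_type`, `type`) come from the records in this commit, but the sample lines here are abbreviated, not full records:

```python
import json
from collections import Counter

# Abbreviated sample lines mirroring the battle records in this commit
# (real records also carry prompts, conv ids, and per-model settings).
sample_jsonl = """\
{"tstamp": 1723134893.9338, "task_type": "clustering", "type": "tievote"}
{"tstamp": 1723136681.2304, "task_type": "clustering", "type": "leftvote"}
{"tstamp": 1723214321.8007, "task_type": "retrieval", "type": "rightvote"}
"""

# Standard JSONL parsing: one json.loads call per non-empty line.
records = [json.loads(line) for line in sample_jsonl.splitlines() if line.strip()]
votes = Counter(r["type"] for r in records)
print(votes["tievote"])   # 1
print(sorted(votes))      # ['leftvote', 'rightvote', 'tievote']
```

The same pattern applies to the full files, e.g. iterating over `open("data/clustering_battle-….jsonl")` line by line.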
data/clustering_battle-5b78c89f-244c-4bc4-9693-2c3c7acb6977.jsonl CHANGED
@@ -3,3 +3,4 @@
  {"tstamp": 1723134893.9338, "task_type": "clustering", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "3e3c8125c1a74295b1f2003dfcd3e96b", "0_model_name": "voyage-multilingual-2", "0_prompt": ["Pikachu", "Darth Vader", "Yoda", "Squirtle", "Gandalf", "Legolas", "Mickey Mouse", "Donald Duck", "Charizard"], "0_ncluster": 4, "0_output": "", "0_ndim": "3D (press for 2D)", "0_dim_method": "PCA", "0_clustering_method": "KMeans", "1_conv_id": "a3d7637cfbcc4a82a6cb2152046b5196", "1_model_name": "text-embedding-3-large", "1_prompt": ["Pikachu", "Darth Vader", "Yoda", "Squirtle", "Gandalf", "Legolas", "Mickey Mouse", "Donald Duck", "Charizard"], "1_ncluster": 4, "1_output": "", "1_ndim": "3D (press for 2D)", "1_dim_method": "PCA", "1_clustering_method": "KMeans"}
  {"tstamp": 1723136681.2304, "task_type": "clustering", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "1b178d077fdc430684b789a84ebacd0a", "0_model_name": "text-embedding-3-large", "0_prompt": ["Indian", "Pacific", "Southern", "Arctic", "Atlantic", "rooibos", "pu-erh", "chalk", "fountain pen", "cirrus", "nimbus", "altostratus", "cumulus", "stratus", "flute", "drums"], "0_ncluster": 5, "0_output": "", "0_ndim": "3D (press for 2D)", "0_dim_method": "PCA", "0_clustering_method": "KMeans", "1_conv_id": "d185a0b148bd4a59b40c2774e66ec18e", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": ["Indian", "Pacific", "Southern", "Arctic", "Atlantic", "rooibos", "pu-erh", "chalk", "fountain pen", "cirrus", "nimbus", "altostratus", "cumulus", "stratus", "flute", "drums"], "1_ncluster": 5, "1_output": "", "1_ndim": "3D (press for 2D)", "1_dim_method": "PCA", "1_clustering_method": "KMeans"}
  {"tstamp": 1723136723.7632, "task_type": "clustering", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "4feaf1ee0a274ac8a85968a2361f8e54", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": ["ciabatta", "brioche", "baguette", "literature", "biology", "chemistry", "history", "physics"], "0_ncluster": 2, "0_output": "", "0_ndim": "3D (press for 2D)", "0_dim_method": "PCA", "0_clustering_method": "KMeans", "1_conv_id": "0bd549df73be4ad68095e81f687bf038", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": ["ciabatta", "brioche", "baguette", "literature", "biology", "chemistry", "history", "physics"], "1_ncluster": 2, "1_output": "", "1_ndim": "3D (press for 2D)", "1_dim_method": "PCA", "1_clustering_method": "KMeans"}
+ {"tstamp": 1723214645.1831, "task_type": "clustering", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "fb881f138f0b43cba2ae08f7e3c4f4a8", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": ["werewolf", "phoenix", "mermaid", "centaur", "unicorn", "liberalism", "anarchism", "fascism", "Japanese", "Mexican", "Indian"], "0_ncluster": 3, "0_output": "", "0_ndim": "3D (press for 2D)", "0_dim_method": "PCA", "0_clustering_method": "KMeans", "1_conv_id": "0f26f47f54dd40819a948e28ece4a83e", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": ["werewolf", "phoenix", "mermaid", "centaur", "unicorn", "liberalism", "anarchism", "fascism", "Japanese", "Mexican", "Indian"], "1_ncluster": 3, "1_output": "", "1_ndim": "3D (press for 2D)", "1_dim_method": "PCA", "1_clustering_method": "KMeans"}
data/clustering_individual-5b78c89f-244c-4bc4-9693-2c3c7acb6977.jsonl CHANGED
@@ -34,3 +34,5 @@
  {"tstamp": 1723167740.1635, "task_type": "clustering", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1723167740.0784, "finish": 1723167740.1635, "ip": "", "conv_id": "ccda2003cecb4e508491c5b1e76aced5", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": ["Female", "Male", "Dustin Streeck"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
  {"tstamp": 1723167770.6188, "task_type": "clustering", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1723167770.5315, "finish": 1723167770.6188, "ip": "", "conv_id": "9362a37be5c14bf9a3e8bed406342357", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": ["Female", "Male", "Dustin Streeck", "is the following name Male or Female: Sonja Heckmann"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
  {"tstamp": 1723167770.6188, "task_type": "clustering", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1723167770.5315, "finish": 1723167770.6188, "ip": "", "conv_id": "ccda2003cecb4e508491c5b1e76aced5", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": ["Female", "Male", "Dustin Streeck", "is the following name Male or Female: Sonja Heckmann"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1723214590.2294, "task_type": "clustering", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1723214590.1403, "finish": 1723214590.2294, "ip": "", "conv_id": "fb881f138f0b43cba2ae08f7e3c4f4a8", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": ["werewolf", "phoenix", "mermaid", "centaur", "unicorn", "liberalism", "anarchism", "fascism", "Japanese", "Mexican", "Indian"], "ncluster": 3, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1723214590.2294, "task_type": "clustering", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1723214590.1403, "finish": 1723214590.2294, "ip": "", "conv_id": "0f26f47f54dd40819a948e28ece4a83e", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": ["werewolf", "phoenix", "mermaid", "centaur", "unicorn", "liberalism", "anarchism", "fascism", "Japanese", "Mexican", "Indian"], "ncluster": 3, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
data/retrieval_battle-5b78c89f-244c-4bc4-9693-2c3c7acb6977.jsonl CHANGED
@@ -89,3 +89,4 @@
  {"tstamp": 1723214250.5246, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "0eae60be6a9a4aba93f2db3e8c73b3ac", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "is pyjamas an american word?", "0_output": [["is pyjamas an american word?", "English Stackexchange\n\nQ: Difference between \"pajama\", \"pyjama\" and \"paijama\" Which is the correct word: pajama, pyjama or paijama? \nIs there a difference between these three?\n\nA: Pajama is the American spelling while pyjama is the British spelling. Both mean the same.\n\nPajama/Pyjama: loose-fitting clothes worn for sleep\n\nPaijama is the word from which pajama/pyjama has been derived. Also, the meaning (of pajama/pyjama) has been altered a bit.\n\nPaijama: loose, lightweight trousers fitted with drawstring waistbands\n\nPajama/Pyjama was incorporated into the English language during the British Rule in India.\n(Bonus: Most people in India call it either \"paijama\" or \"pajama\")\n"]], "0_corpus": "stackexchange", "1_conv_id": "49f040f2e8bd479ba79bd8da5e7a97ed", "1_model_name": "BM25", "1_prompt": "is pyjamas an american word?", "1_output": [["is pyjamas an american word?", "English Stackexchange\n\nQ: Difference between \"pajama\", \"pyjama\" and \"paijama\" Which is the correct word: pajama, pyjama or paijama? \nIs there a difference between these three?\n\nA: Pajama is the American spelling while pyjama is the British spelling. Both mean the same.\n\nPajama/Pyjama: loose-fitting clothes worn for sleep\n\nPaijama is the word from which pajama/pyjama has been derived. Also, the meaning (of pajama/pyjama) has been altered a bit.\n\nPaijama: loose, lightweight trousers fitted with drawstring waistbands\n\nPajama/Pyjama was incorporated into the English language during the British Rule in India.\n(Bonus: Most people in India call it either \"paijama\" or \"pajama\")\n"]], "1_corpus": "stackexchange"}
  {"tstamp": 1723214321.8007, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "5d900500bce240c5bbf2aef59147a9bc", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "will wout van aert ever will Paris roubaix", "0_output": [["will wout van aert ever will Paris roubaix", "Roubaix\n\nToponymy The current city's name is most likely derived from Frankish rausa \"reed\" and baki \"brook\". Therefore, the meaning of Roubaix can, in all likelihood, find its origin on the banks of its three historical brooks: Espierre, Trichon and Favreuil. The place was mentioned for the first time in a Latinised form in the 9th century: Villa Rusbaci. Thereafter, the following names were in use: 1047 and 1106 Rubais, 1122 Rosbays, 1166 Rusbais, 1156 and 1202 Robais, 1223 Roubais. Over the span of centuries, the name evolved to Roubaix as shown on Mercator's map of Flanders published at Leuven in 1540.\nParallel to the official and usual name Roubaix, some translations are worth a mention. Firstly, though the city has never belonged to the Flemish-speaking area, the seldom-heard renderings Robeke and Roodebeeke are documented for Roubaix. Furthermore, the Dutch Language Union established Robaais as the city's proper Dutch name. Lastly, one can cite Rosbacum as the definite Latin transcription of Roubaix which has been in use since the 19th century, as recorded on dedication statements sealed in the first stones of the foundations of the City Hall laid in 1840 and the Church of Notre Dame laid in 1842.\nHistory\nHeraldry\nPeople\nInhabitants of Roubaix are known in English as \"Roubaisians\" and in French as Roubaisiens () or in the feminine form Roubaisiennes (), also natively called Roubaignots () or in the feminine form Roubaignotes ()."]], "0_corpus": "wikipedia", "1_conv_id": "d406de720188454babb25d33b24c7f74", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "will wout van aert ever will Paris roubaix", "1_output": [["will wout van aert ever will Paris roubaix", "Wout van Aert\n\nWout van Aert (born 15 September 1994) is a Belgian professional road and cyclo-cross racer who rides for UCI WorldTeam . Van Aert won three consecutive men's races at the UCI Cyclo-cross World Championships in 2016, 2017 and 2018.\nHaving initially prioritised competing in cyclo-cross, Van Aert terminated his contract with in 2018 and joined in March 2019, on a three-year deal. Van Aert has since taken more than thirty professional road wins, including nine stage victories at the Tour de France between 2019 and 2022 (also winning the points classification in 2022), and won Milan–San Remo – one of the five Cycling monuments – in 2020."]], "1_corpus": "wikipedia"}
  {"tstamp": 1723214376.7533, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "b9286918cc114d8bb1a337f3424394b8", "0_model_name": "voyage-multilingual-2", "0_prompt": "Quel est le modèle de text embedding le plus performant pour travailler sur des documents en français et en anglais et qui utilise au maximum 8 GB de mémoire vive en FP32 ?", "0_output": [["Quel est le modèle de text embedding le plus performant pour travailler sur des documents en français et en anglais et qui utilise au maximum 8 GB de mémoire vive en FP32 ?", "Title: MTEB-French: Resources for French Sentence Embedding Evaluation and Analysis\n\nAbstract: Recently, numerous embedding models have been made available and widely used for various NLP tasks. The Massive Text Embedding Benchmark (MTEB) has primarily simplified the process of choosing a model that performs well for several tasks in English, but extensions to other languages remain challenging. This is why we expand MTEB to propose the first massive benchmark of sentence embeddings for French. We gather 15 existing datasets in an easy-to-use interface and create three new French datasets for a global evaluation of 8 task categories. We compare 51 carefully selected embedding models on a large scale, conduct comprehensive statistical tests, and analyze the correlation between model performance and many of their characteristics. We find out that even if no model is the best on all tasks, large multilingual models pre-trained on sentence similarity perform exceptionally well. Our work comes with open-source code, new datasets and a public leaderboard."]], "0_corpus": "arxiv", "1_conv_id": "52edf7a1ae264368a62d5b66e4d9b210", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "Quel est le modèle de text embedding le plus performant pour travailler sur des documents en français et en anglais et qui utilise au maximum 8 GB de mémoire vive en FP32 ?", "1_output": [["Quel est le modèle de text embedding le plus performant pour travailler sur des documents en français et en anglais et qui utilise au maximum 8 GB de mémoire vive en FP32 ?", "Title: FlauBERT: Unsupervised Language Model Pre-training for French\n\nAbstract: Language models have become a key step to achieve state-of-the art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their contextualization at the sentence level. This has been widely demonstrated for English using contextualized representations (Dai and Le, 2015; Peters et al., 2018; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2019; Yang et al., 2019b). In this paper, we introduce and share FlauBERT, a model learned on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We apply our French language models to diverse NLP tasks (text classification, paraphrasing, natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pre-training approaches. Different versions of FlauBERT as well as a unified evaluation protocol for the downstream tasks, called FLUE (French Language Understanding Evaluation), are shared to the research community for further reproducible experiments in French NLP."]], "1_corpus": "arxiv"}
+ {"tstamp": 1723214816.8531, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "63c0772dd1b1480fab134c0d0fabb1aa", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "What's the purpose of `if __name__ == '__main__'`?", "0_output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: What does `if name == \"__main__\"` mean in Python? \nPossible Duplicate:\nWhat does <if name==“main”:> do? \n\nI have wrote scripts in Python for quite a while now and I study more of Python as I need it. When reading other people's code I meet if name == \"__main__\": construct quite often. \nWhat is it good for?\n\nA: Basically, \nThere's a distinction between the \"main\" script file and external files which were imported or referenced in another way. If the script is the \"main\" script then the special variable __name__ will equal \"__main__\".\nYou can use this to protect the \"execution\" code from the classes and variables the script has. This enables you to import a script and use classes and variables without actually running that script's main code if it has any.\nSee also: What does if name == “main”: do?\n"]], "0_corpus": "stackexchange", "1_conv_id": "267b03dfb7a24b7e9f7ab7aa3f65fdda", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "What's the purpose of `if __name__ == '__main__'`?", "1_output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: What does `if name == \"__main__\"` mean in Python? \nPossible Duplicate:\nWhat does <if name==“main”:> do? \n\nI have wrote scripts in Python for quite a while now and I study more of Python as I need it. When reading other people's code I meet if name == \"__main__\": construct quite often. \nWhat is it good for?\n\nA: Basically, \nThere's a distinction between the \"main\" script file and external files which were imported or referenced in another way. If the script is the \"main\" script then the special variable __name__ will equal \"__main__\".\nYou can use this to protect the \"execution\" code from the classes and variables the script has. This enables you to import a script and use classes and variables without actually running that script's main code if it has any.\nSee also: What does if name == “main”: do?\n"]], "1_corpus": "stackexchange"}
data/retrieval_individual-5b78c89f-244c-4bc4-9693-2c3c7acb6977.jsonl CHANGED
@@ -360,3 +360,5 @@
  {"tstamp": 1723214295.5454, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1723214293.2511, "finish": 1723214295.5454, "ip": "", "conv_id": "d406de720188454babb25d33b24c7f74", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "will wout van aert ever will Paris roubaix", "output": [["will wout van aert ever will Paris roubaix", "Wout van Aert\n\nWout van Aert (born 15 September 1994) is a Belgian professional road and cyclo-cross racer who rides for UCI WorldTeam . Van Aert won three consecutive men's races at the UCI Cyclo-cross World Championships in 2016, 2017 and 2018.\nHaving initially prioritised competing in cyclo-cross, Van Aert terminated his contract with in 2018 and joined in March 2019, on a three-year deal. Van Aert has since taken more than thirty professional road wins, including nine stage victories at the Tour de France between 2019 and 2022 (also winning the points classification in 2022), and won Milan–San Remo – one of the five Cycling monuments – in 2020."]], "corpus": "wikipedia"}
  {"tstamp": 1723214298.1618, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1723214295.8043, "finish": 1723214298.1618, "ip": "", "conv_id": "b9286918cc114d8bb1a337f3424394b8", "model_name": "voyage-multilingual-2", "prompt": "Quel est le modèle de text embedding le plus performant pour travailler sur des documents en français et en anglais et qui utilise au maximum 8 GB de mémoire vive en FP32 ?", "output": [["Quel est le modèle de text embedding le plus performant pour travailler sur des documents en français et en anglais et qui utilise au maximum 8 GB de mémoire vive en FP32 ?", "Title: MTEB-French: Resources for French Sentence Embedding Evaluation and Analysis\n\nAbstract: Recently, numerous embedding models have been made available and widely used for various NLP tasks. The Massive Text Embedding Benchmark (MTEB) has primarily simplified the process of choosing a model that performs well for several tasks in English, but extensions to other languages remain challenging. This is why we expand MTEB to propose the first massive benchmark of sentence embeddings for French. We gather 15 existing datasets in an easy-to-use interface and create three new French datasets for a global evaluation of 8 task categories. We compare 51 carefully selected embedding models on a large scale, conduct comprehensive statistical tests, and analyze the correlation between model performance and many of their characteristics. We find out that even if no model is the best on all tasks, large multilingual models pre-trained on sentence similarity perform exceptionally well. Our work comes with open-source code, new datasets and a public leaderboard."]], "corpus": "arxiv"}
  {"tstamp": 1723214298.1618, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1723214295.8043, "finish": 1723214298.1618, "ip": "", "conv_id": "52edf7a1ae264368a62d5b66e4d9b210", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "Quel est le modèle de text embedding le plus performant pour travailler sur des documents en français et en anglais et qui utilise au maximum 8 GB de mémoire vive en FP32 ?", "output": [["Quel est le modèle de text embedding le plus performant pour travailler sur des documents en français et en anglais et qui utilise au maximum 8 GB de mémoire vive en FP32 ?", "Title: FlauBERT: Unsupervised Language Model Pre-training for French\n\nAbstract: Language models have become a key step to achieve state-of-the art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their contextualization at the sentence level. This has been widely demonstrated for English using contextualized representations (Dai and Le, 2015; Peters et al., 2018; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2019; Yang et al., 2019b). In this paper, we introduce and share FlauBERT, a model learned on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We apply our French language models to diverse NLP tasks (text classification, paraphrasing, natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pre-training approaches. \nDifferent versions of FlauBERT as well as a unified evaluation protocol for the downstream tasks, called FLUE (French Language Understanding Evaluation), are shared to the research community for further reproducible experiments in French NLP."]], "corpus": "arxiv"}
 
 
 
+ {"tstamp": 1723214767.1032, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1723214764.7911, "finish": 1723214767.1032, "ip": "", "conv_id": "63c0772dd1b1480fab134c0d0fabb1aa", "model_name": "GritLM/GritLM-7B", "prompt": "What's the purpose of `if __name__ == '__main__'`?", "output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: What does `if name == \"__main__\"` mean in Python? \nPossible Duplicate:\nWhat does <if name==“main”:> do? \n\nI have wrote scripts in Python for quite a while now and I study more of Python as I need it. When reading other people's code I meet if name == \"__main__\": construct quite often. \nWhat is it good for?\n\nA: Basically, \nThere's a distinction between the \"main\" script file and external files which were imported or referenced in another way. If the script is the \"main\" script then the special variable __name__ will equal \"__main__\".\nYou can use this to protect the \"execution\" code from the classes and variables the script has. This enables you to import a script and use classes and variables without actually running that script's main code if it has any.\nSee also: What does if name == “main”: do?\n"]], "corpus": "stackexchange"}
+ {"tstamp": 1723214767.1032, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1723214764.7911, "finish": 1723214767.1032, "ip": "", "conv_id": "267b03dfb7a24b7e9f7ab7aa3f65fdda", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "What's the purpose of `if __name__ == '__main__'`?", "output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: What does `if name == \"__main__\"` mean in Python? \nPossible Duplicate:\nWhat does <if name==“main”:> do? \n\nI have wrote scripts in Python for quite a while now and I study more of Python as I need it. When reading other people's code I meet if name == \"__main__\": construct quite often. \nWhat is it good for?\n\nA: Basically, \nThere's a distinction between the \"main\" script file and external files which were imported or referenced in another way. If the script is the \"main\" script then the special variable __name__ will equal \"__main__\".\nYou can use this to protect the \"execution\" code from the classes and variables the script has. This enables you to import a script and use classes and variables without actually running that script's main code if it has any.\nSee also: What does if name == “main”: do?\n"]], "corpus": "stackexchange"}