Muennighoff committed
Scheduled Commit
data/clustering_individual-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl
CHANGED
@@ -10,3 +10,5 @@
10 |
{"tstamp": 1722363294.4266, "task_type": "clustering", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722363294.3361, "finish": 1722363294.4266, "ip": "", "conv_id": "fb2cf7f9ac6f41eba82993886cfe0176", "model_name": "BAAI/bge-large-en-v1.5", "prompt": ["If someone online buys something off of my Amazon wish list, do they get my full name and address?", "Package \"In Transit\" over a week. No scheduled delivery date, no locations. What's up?", "Can Amazon gift cards replace a debit card?", "Homesick GWS star Cameron McCarthy on road to recovery", "Accidently ordered 2 of an item, how do I only return 1? For free?", "Need help ASAP, someone ordering in my account", "So who's everyone tipping for Round 1?", "octagon", "rectangle", "Temple of Artemis", "Colossus of Rhodes", "Statue of Zeus", "Lighthouse of Alexandria", "Hanging Gardens of Babylon", "Pyramids of Giza", "brunette", "black", "blonde", "redhead", "gray", "auburn", "white", "soccer", "basketball", "tennis", "baseball", "cricket", "ruby", "topaz", "diamond"], "ncluster": 5, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
11 |
{"tstamp": 1722363303.4905, "task_type": "clustering", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722363303.3937, "finish": 1722363303.4905, "ip": "", "conv_id": "b76eafd60a354f2f9f8b9bfe3fa6de9e", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": ["If someone online buys something off of my Amazon wish list, do they get my full name and address?", "Package \"In Transit\" over a week. No scheduled delivery date, no locations. What's up?", "Can Amazon gift cards replace a debit card?", "Homesick GWS star Cameron McCarthy on road to recovery", "Accidently ordered 2 of an item, how do I only return 1? For free?", "Need help ASAP, someone ordering in my account", "So who's everyone tipping for Round 1?", "octagon", "rectangle", "Temple of Artemis", "Colossus of Rhodes", "Statue of Zeus", "Lighthouse of Alexandria", "Hanging Gardens of Babylon", "Pyramids of Giza", "brunette", "black", "blonde", "redhead", "gray", "auburn", "white", "soccer", "basketball", "tennis", "baseball", "cricket", "ruby", "topaz", "diamond"], "ncluster": 5, "output": "", "ndim": "2D (press for 3D)", "dim_method": "PCA", "clustering_method": "KMeans"}
12 |
{"tstamp": 1722363303.4905, "task_type": "clustering", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722363303.3937, "finish": 1722363303.4905, "ip": "", "conv_id": "fb2cf7f9ac6f41eba82993886cfe0176", "model_name": "BAAI/bge-large-en-v1.5", "prompt": ["If someone online buys something off of my Amazon wish list, do they get my full name and address?", "Package \"In Transit\" over a week. No scheduled delivery date, no locations. What's up?", "Can Amazon gift cards replace a debit card?", "Homesick GWS star Cameron McCarthy on road to recovery", "Accidently ordered 2 of an item, how do I only return 1? For free?", "Need help ASAP, someone ordering in my account", "So who's everyone tipping for Round 1?", "octagon", "rectangle", "Temple of Artemis", "Colossus of Rhodes", "Statue of Zeus", "Lighthouse of Alexandria", "Hanging Gardens of Babylon", "Pyramids of Giza", "brunette", "black", "blonde", "redhead", "gray", "auburn", "white", "soccer", "basketball", "tennis", "baseball", "cricket", "ruby", "topaz", "diamond"], "ncluster": 5, "output": "", "ndim": "2D (press for 3D)", "dim_method": "PCA", "clustering_method": "KMeans"}
13 | +
{"tstamp": 1722364429.5484, "task_type": "clustering", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722364429.4395, "finish": 1722364429.5484, "ip": "", "conv_id": "0a845050502a431ba6824e99541d19af", "model_name": "GritLM/GritLM-7B", "prompt": ["Accountants don't have responsibility responsibility for stemming inflation, but we do have the job of reporting it,\" says Connor, Connor, who speaks passionately about the subject.", "Achieving a 4 percent rate of inflation inflation by the end of 1980 is viewed by almost all economists as economically economically Impossible,\" the GOP book says.", "Although Governmental resistance to inflation has stiffened significantly of late, it is difficult to avoid the conclusion that the Government is not yet prepared to act as decisively to check inflation as it is to check recession.", "And that forces businesses to lay off workers, which slows down inflation by creating a recession.", "At a time when inflation is the main concern of every American, the Federal Government has a special obligation to take those actions which begin to stop inflation,\" he said.", "\"Back to basics\" is not only the se- cret of success in learning but it is also one step to help solve runaway inflation. "], "ncluster": 1, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
14 | +
{"tstamp": 1722364429.5484, "task_type": "clustering", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722364429.4395, "finish": 1722364429.5484, "ip": "", "conv_id": "85f76146430a4965941e0cefda8b555e", "model_name": "BAAI/bge-large-en-v1.5", "prompt": ["Accountants don't have responsibility responsibility for stemming inflation, but we do have the job of reporting it,\" says Connor, Connor, who speaks passionately about the subject.", "Achieving a 4 percent rate of inflation inflation by the end of 1980 is viewed by almost all economists as economically economically Impossible,\" the GOP book says.", "Although Governmental resistance to inflation has stiffened significantly of late, it is difficult to avoid the conclusion that the Government is not yet prepared to act as decisively to check inflation as it is to check recession.", "And that forces businesses to lay off workers, which slows down inflation by creating a recession.", "At a time when inflation is the main concern of every American, the Federal Government has a special obligation to take those actions which begin to stop inflation,\" he said.", "\"Back to basics\" is not only the se- cret of success in learning but it is also one step to help solve runaway inflation. "], "ncluster": 1, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
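Each line in this file is a self-contained JSON object describing one clustering request: the embedding model, the texts to cluster, and the UI settings (ncluster, ndim, dim_method, clustering_method) plus start/finish timestamps. A minimal sketch of replaying the log, assuming the file sits at the relative path shown above:

import json

path = "data/clustering_individual-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl"
with open(path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # One clustering request: which model embedded which texts, and how.
        print(
            record["model_name"],
            len(record["prompt"]),                         # number of texts clustered
            record["ncluster"],
            record["clustering_method"],
            round(record["finish"] - record["start"], 3),  # latency in seconds
        )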
data/retrieval_battle-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl
CHANGED
@@ -5,3 +5,6 @@
5 |
{"tstamp": 1722363327.3402, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "55efd4d6039e4ba194eb41c38a3c6ea7", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "prompt optimization using a reward model", "0_output": [["prompt optimization using a reward model", "Title: Invariants of moduli spaces of abelian surfaces\n\nAbstract: Compactifications of moduli spaces of (1,p)-polarized abelian surfaces with level structures of canonical type have been described in great detail by Hulek, Kahn and Weintraub. The aim of this paper is to determine some invariants of smooth models of these moduli spaces. In particular, a geometric description of their canonical divisors is given and their Chern numbers are computed."]], "0_corpus": "arxiv", "1_conv_id": "720dd7663692404dbe4e7978fb23acb5", "1_model_name": "voyage-multilingual-2", "1_prompt": "prompt optimization using a reward model", "1_output": [["prompt optimization using a reward model", "Title: Reward Collapse in Aligning Large Language Models\n\nAbstract: The extraordinary capabilities of large language models (LLMs) such as ChatGPT and GPT-4 are in part unleashed by aligning them with reward models that are trained on human preferences, which are often represented as rankings of responses to prompts. In this paper, we document the phenomenon of \\textit{reward collapse}, an empirical observation where the prevailing ranking-based approach results in an \\textit{identical} reward distribution \\textit{regardless} of the prompts during the terminal phase of training. This outcome is undesirable as open-ended prompts like ``write a short story about your best friend'' should yield a continuous range of rewards for their completions, while specific prompts like ``what is the capital of New Zealand'' should generate either high or low rewards. Our theoretical investigation reveals that reward collapse is primarily due to the insufficiency of the ranking-based objective function to incorporate prompt-related information during optimization. This insight allows us to derive closed-form expressions for the reward distribution associated with a set of utility functions in an asymptotic regime. To overcome reward collapse, we introduce a prompt-aware optimization scheme that provably admits a prompt-dependent reward distribution within the interpolating regime. Our experimental results suggest that our proposed prompt-aware utility functions significantly alleviate reward collapse during the training of reward models."]], "1_corpus": "arxiv"}
6 |
{"tstamp": 1722363435.7841, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "4cb5f6a122cb452f867ed2739011cd9b", "0_model_name": "BM25", "0_prompt": "distilling the reranking ability of large language models to cross encoders", "0_output": [["distilling the reranking ability of large language models to cross encoders", "Title: ReFIT: Relevance Feedback from a Reranker during Inference\n\nAbstract: Retrieve-and-rerank is a prevalent framework in neural information retrieval, wherein a bi-encoder network initially retrieves a pre-defined number of candidates (e.g., K=100), which are then reranked by a more powerful cross-encoder model. While the reranker often yields improved candidate scores compared to the retriever, its scope is confined to only the top K retrieved candidates. As a result, the reranker cannot improve retrieval performance in terms of Recall@K. In this work, we propose to leverage the reranker to improve recall by making it provide relevance feedback to the retriever at inference time. Specifically, given a test instance during inference, we distill the reranker's predictions for that instance into the retriever's query representation using a lightweight update mechanism. The aim of the distillation loss is to align the retriever's candidate scores more closely with those produced by the reranker. The algorithm then proceeds by executing a second retrieval step using the updated query vector. We empirically demonstrate that this method, applicable to various retrieve-and-rerank frameworks, substantially enhances retrieval recall across multiple domains, languages, and modalities."]], "0_corpus": "arxiv", "1_conv_id": "44d3ff6d268a4decb6abdb3d5eb97904", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "distilling the reranking ability of large language models to cross encoders", "1_output": [["distilling the reranking ability of large language models to cross encoders", "Title: A Systematic Investigation of Distilling Large Language Models into Cross-Encoders for Passage Re-ranking\n\nAbstract: Cross-encoders distilled from large language models (LLMs) are often more effective re-rankers than cross-encoders fine-tuned on manually labeled data. However, the distilled models usually do not reach their teacher LLM's effectiveness. To investigate whether best practices for fine-tuning cross-encoders on manually labeled data (e.g., hard-negative sampling, deep sampling, and listwise loss functions) can help to improve LLM ranker distillation, we construct and release a new distillation dataset: Rank-DistiLLM. In our experiments, cross-encoders trained on Rank-DistiLLM reach the effectiveness of LLMs while being orders of magnitude more efficient. Our code and data is available at https://github.com/webis-de/msmarco-llm-distillation."]], "1_corpus": "arxiv"}
7 |
{"tstamp": 1722363538.7216, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "435649f6492e485c93e938c5af958974", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "How can I use an LLM to train an embedding model?", "0_output": [["How can I use an LLM to train an embedding model?", "Title: Language Models are Universal Embedders\n\nAbstract: In the large language model (LLM) revolution, embedding is a key component of various systems. For example, it is used to retrieve knowledge or memories for LLMs, to build content moderation filters, etc. As such cases span from English to other natural or programming languages, from retrieval to classification and beyond, it is desirable to build a unified embedding model rather than dedicated ones for each scenario. In this work, we make an initial step towards this goal, demonstrating that multiple languages (both natural and programming) pre-trained transformer decoders can embed universally when finetuned on limited English data. We provide a comprehensive practice with thorough evaluations. On English MTEB, our models achieve competitive performance on different embedding tasks by minimal training data. On other benchmarks, such as multilingual classification and code search, our models (without any supervision) perform comparably to, or even surpass heavily supervised baselines and/or APIs. These results provide evidence of a promising path towards building powerful unified embedders that can be applied across tasks and languages."]], "0_corpus": "arxiv", "1_conv_id": "8eee464cbf244db6b3e8f8a8cc8561d8", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "How can I use an LLM to train an embedding model?", "1_output": [["How can I use an LLM to train an embedding model?", "Title: Embedding-Aligned Language Models\n\nAbstract: We propose a novel approach for training large language models (LLMs) to adhere to objectives defined within a latent embedding space. Our method leverages reinforcement learning (RL), treating a pre-trained LLM as an environment. Our embedding-aligned guided language (EAGLE) agent is trained to iteratively steer the LLM's generation towards optimal regions of the latent embedding space, w.r.t. some predefined criterion. We demonstrate the effectiveness of the EAGLE agent using the MovieLens 25M dataset to surface content gaps that satisfy latent user demand. We also demonstrate the benefit of using an optimal design of a state-dependent action set to improve EAGLE's efficiency. Our work paves the way for controlled and grounded text generation using LLMs, ensuring consistency with domain-specific knowledge and data representations."]], "1_corpus": "arxiv"}
8 | +
{"tstamp": 1722364270.4879, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "6e94a2b6dc134ea99ef8d30eb1530d85", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "0_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Like Sparrows on a Clothes Line: The Self-Organization of Random Number Sequences\n\nAbstract: We study sequences of random numbers {Z[1],Z[2],Z[3],...,Z[n]} -- which can be considered random walks with reflecting barriers -- and define their \"types\" according to whether Z[i] > Z[i+1], (a down-movement), or Z[i] < Z[i+1] (up-movement). This paper examines the means, xi, to which the Zi converge, when a large number of sequences of the same type is considered. It is shown that these means organize themselves in such a way that, between two turning points of the sequence, they are equidistant from one another. We also show that m steps in one direction tend to offset one step in the other direction, as m -> infinity. Key words:random number sequence, self-organization, random walk, reflecting barriers."]], "0_corpus": "arxiv", "1_conv_id": "3de8ea25d77340518d8527eed178d6ab", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "1_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Data Mixing Made Efficient: A Bivariate Scaling Law for Language Model Pretraining\n\nAbstract: Large language models exhibit exceptional generalization capabilities, primarily attributed to the utilization of diversely sourced data. However, conventional practices in integrating this diverse data heavily rely on heuristic schemes, lacking theoretical guidance. This research tackles these limitations by investigating strategies based on low-cost proxies for data mixtures, with the aim of streamlining data curation to enhance training efficiency. Specifically, we propose a unified scaling law, termed BiMix, which accurately models the bivariate scaling behaviors of both data quantity and mixing proportions. We conduct systematic experiments and provide empirical evidence for the predictive power and fundamental principles of BiMix. Notably, our findings reveal that entropy-driven training-free data mixtures can achieve comparable or even better performance than more resource-intensive methods. We hope that our quantitative insights can shed light on further judicious research and development in cost-effective language modeling."]], "1_corpus": "arxiv"}
9 | +
{"tstamp": 1722364274.046, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "8a308b6e542942c5a5144ec9d52b95f3", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "Can we submit to other venues after review at ARR without withdrawing the paper?", "0_output": [["Can we submit to other venues after review at ARR without withdrawing the paper?", "Tex Stackexchange\n\nQ: Online Overleaf Editor: Clicked the Submit Button by mistake By mistake I clicked on the \"Submit\" button provided in the overleaf editor. Now my document shows:\n\"Manuscript Submitted to ACM\"\nI have following questions:\n\n*\n\n*What I have to do with my document, it was an incomplete document?\n\n*Will my document be leaked out now?\n\n*How can I remove the line \"Manuscript Submitted to ACM\" from my document?\n\nSome body please guide me.\nZulfi.\n\nA: (Tom from Overleaf Support here.)\nIf you accidentally submitted a paper via the Overleaf's Submit modal, there are three options:\n\n*\n\n*Accidental submissions to the Overleaf Gallery: Simply contact us via [email protected] and as for the submission to be rejected. We're happy to assist with this.\n\n\n*Accidental submissions to journals with direct submission: In these cases, the files are not with us anymore. You'll need to get in touch with the journal's Editorial Office and ask for the submission to be disregarded. It shouldn't be an issue.\n\n\n*Accidental submissions when a ZIP file was provided to you: Nothing is needed. You were given the ZIP file yourself, it hasn't been sent to any 3rd party.\n"]], "0_corpus": "stackexchange", "1_conv_id": "89bc9091aa614bccae37263f84d98252", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "Can we submit to other venues after review at ARR without withdrawing the paper?", "1_output": [["Can we submit to other venues after review at ARR without withdrawing the paper?", "Tex Stackexchange\n\nQ: Online Overleaf Editor: Clicked the Submit Button by mistake By mistake I clicked on the \"Submit\" button provided in the overleaf editor. Now my document shows:\n\"Manuscript Submitted to ACM\"\nI have following questions:\n\n*\n\n*What I have to do with my document, it was an incomplete document?\n\n*Will my document be leaked out now?\n\n*How can I remove the line \"Manuscript Submitted to ACM\" from my document?\n\nSome body please guide me.\nZulfi.\n\nA: (Tom from Overleaf Support here.)\nIf you accidentally submitted a paper via the Overleaf's Submit modal, there are three options:\n\n*\n\n*Accidental submissions to the Overleaf Gallery: Simply contact us via [email protected] and as for the submission to be rejected. We're happy to assist with this.\n\n\n*Accidental submissions to journals with direct submission: In these cases, the files are not with us anymore. You'll need to get in touch with the journal's Editorial Office and ask for the submission to be disregarded. It shouldn't be an issue.\n\n\n*Accidental submissions when a ZIP file was provided to you: Nothing is needed. You were given the ZIP file yourself, it hasn't been sent to any 3rd party.\n"]], "1_corpus": "stackexchange"}
10 | +
{"tstamp": 1722364308.3175, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "863d8484113b41f3ad486bce94296929", "0_model_name": "text-embedding-004", "0_prompt": "do you have to have a gun permit to shoot at a range", "0_output": [["do you have to have a gun permit to shoot at a range", "Shooting range\n\nA shooting range, firing range, gun range or shooting ground is a specialized facility, venue, or field designed specifically for firearm usage qualifications, training, practice, or competitions. Some shooting ranges are operated by military or law enforcement agencies, though the majority of ranges are privately owned by civilians and sporting clubs and cater mostly to recreational shooters. Each facility is typically overseen by one or more supervisory personnel, known as a Range Officer (RO), or sometimes a range master in the United States. Supervisory personnel are responsible for ensuring that all safety rules and relevant laws are followed at all times."]], "0_corpus": "wikipedia", "1_conv_id": "2cadf86189454fcc929249ad0f97a89a", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "do you have to have a gun permit to shoot at a range", "1_output": [["do you have to have a gun permit to shoot at a range", "Overview of gun laws by nation\n\nPoland Gun ownership in Poland is regulated by the Weapons and Munitions Act of 1999 as further amended. A permit is required to own or possess firearms. Permits are granted for specific purposes, such as self-defense, sport, hunting, collecting etc. and are except the ones for self-defense, which are . Institutional permits allow for firearm ownership by private security companies, shooting ranges, sport clubs etc.\nPermits are issued to permanent residents over 21, not addicted to alcohol or other substances, having no criminal record, being in good physical and mental health, who passed an exam before sporting association, hunting association or the Police. Permits specify the purposes, categories and quantities of allowed guns; categories depend on permit purpose while quantities are assigned at the discretion of the Police, based on documentation provided. Quantities can be increased when one uses up all available slots.\nCarrying loaded guns is allowed on all types of permits except permits for collecting and memorial purposes. Hunters can carry loaded only on hunting grounds. Guns shall be carried concealed in a holster close to the body. Carrying in public transport is limited only to self-defense permit holders. It is prohibited to carry while intoxicated, and when attending public gatherings or mass events.\nEveryone, including foreigners, is allowed to use legal guns at registered shooting ranges, regardless of actual gun's ownership, e.g. rented from the range or shared by a legal owner present. Discharging a firearm for training or leisure out of a registered shooting range (even on a large private property) is prohibited."]], "1_corpus": "wikipedia"}
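The battle records pair two anonymized models per prompt and log the user's verdict in the "type" field (leftvote, rightvote, or tievote in this hunk). A rough sketch of tallying per-model wins from this log; the half-credit for ties is an illustrative choice, not something the logging format prescribes:

import json
from collections import Counter

path = "data/retrieval_battle-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl"
wins = Counter()
with open(path, encoding="utf-8") as f:
    for line in f:
        battle = json.loads(line)
        left, right = battle["0_model_name"], battle["1_model_name"]
        if battle["type"] == "leftvote":
            wins[left] += 1
        elif battle["type"] == "rightvote":
            wins[right] += 1
        elif battle["type"] == "tievote":
            wins[left] += 0.5   # split credit on a tie (an assumption of
            wins[right] += 0.5  # this sketch, not of the log format)

for model, score in wins.most_common():
    print(f"{score:>5}  {model}")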
data/retrieval_individual-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl
CHANGED
@@ -50,3 +50,9 @@
50 |
{"tstamp": 1722363747.5485, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1722363745.3093, "finish": 1722363747.5485, "ip": "", "conv_id": "bb784d4759d840059823dfe465d8781d", "model_name": "voyage-multilingual-2", "prompt": "When was Barak Obama born?", "output": [["When was Barak Obama born?", "Barack Obama\n\nEarly life and career Obama was born on August 4, 1961, at Kapiolani Medical Center for Women and Children in Honolulu, Hawaii. He is the only president born outside the contiguous 48 states. He was born to an 18-year-old American mother and a 27-year-old Kenyan father. His mother, Ann Dunham (1942–1995), was born in Wichita, Kansas, and was of English, Welsh, German, Swiss, and Irish descent. In 2007 it was discovered her great-great-grandfather Falmouth Kearney emigrated from the village of Moneygall, Ireland to the US in 1850. In July 2012, Ancestry.com found a strong likelihood that Dunham was descended from John Punch, an enslaved African man who lived in the Colony of Virginia during the seventeenth century. Obama's father, Barack Obama Sr. (1934–1982), was a married Luo Kenyan from Nyang'oma Kogelo. His last name, Obama, was derived from his Luo descent. Obama's parents met in 1960 in a Russian language class at the University of Hawaiʻi at Mānoa, where his father was a foreign student on a scholarship. The couple married in Wailuku, Hawaii, on February 2, 1961, six months before Obama was born.\nIn late August 1961, a few weeks after he was born, Barack and his mother moved to the University of Washington in Seattle, where they lived for a year. During that time, Barack's father completed his undergraduate degree in economics in Hawaii, graduating in June 1962. He left to attend graduate school on a scholarship at Harvard University, where he earned an M.A. in economics. Obama's parents divorced in March 1964. Obama Sr. returned to Kenya in 1964, where he married for a third time and worked for the Kenyan government as the Senior Economic Analyst in the Ministry of Finance. He visited his son in Hawaii only once, at Christmas 1971, before he was killed in an automobile accident in 1982, when Obama was 21 years old. Recalling his early childhood, Obama said: \"That my father looked nothing like the people around me—that he was black as pitch, my mother white as milk—barely registered in my mind.\" He described his struggles as a young adult to reconcile social perceptions of his multiracial heritage."]], "corpus": "wikipedia"}
51 |
{"tstamp": 1722363788.9924, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722363786.7978, "finish": 1722363788.9924, "ip": "", "conv_id": "24d2c06987a441288423ee3f2375229b", "model_name": "text-embedding-004", "prompt": "How tall is the empire state building?", "output": [["How tall is the empire state building?", "Empire State Building\n\nThe Empire State Building is a 102-story Art Deco skyscraper in the Midtown South neighborhood of Manhattan in New York City. The building was designed by Shreve, Lamb & Harmon and built from 1930 to 1931. Its name is derived from \"Empire State\", the nickname of the state of New York. The building has a roof height of and stands a total of tall, including its antenna. The Empire State Building was the world's tallest building until the first tower of the World Trade Center was topped out in 1970; following the September 11 attacks in 2001, the Empire State Building was New York City's tallest building until it was surpassed in 2012 by One World Trade Center. , the building is the seventh-tallest building in New York City, the ninth-tallest completed skyscraper in the United States, and the 54th-tallest in the world.\nThe site of the Empire State Building, on the west side of Fifth Avenue between West 33rd and 34th Streets, was developed in 1893 as the Waldorf–Astoria Hotel. In 1929, Empire State Inc. acquired the site and devised plans for a skyscraper there. The design for the Empire State Building was changed fifteen times until it was ensured to be the world's tallest building. Construction started on March 17, 1930, and the building opened thirteen and a half months afterward on May 1, 1931. Despite favorable publicity related to the building's construction, because of the Great Depression and World War II, its owners did not make a profit until the early 1950s."]], "corpus": "wikipedia"}
52 |
{"tstamp": 1722363788.9924, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722363786.7978, "finish": 1722363788.9924, "ip": "", "conv_id": "0f22a0122fec4df88023658934c96ebe", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "How tall is the empire state building?", "output": [["How tall is the empire state building?", "Empire State Building\n\nThe September 11 attacks destroyed the World Trade Center and the broadcast centers atop it, leaving most of the city's stations without a transmitter for ten days until the Armstrong Tower in Alpine, New Jersey, was re-activated temporarily. By October 2001, nearly all of the city's commercial broadcast stations (both television and FM radio) were again transmitting from the top of the Empire State Building. In a report that Congress commissioned about the transition from analog television to digital television, it was stated that the placement of broadcast stations in the Empire State Building was considered \"problematic\" due to interference from nearby buildings. In comparison, the congressional report stated that the former Twin Towers had very few buildings of comparable height nearby thus signals suffered little interference. In 2003, a few FM stations were relocated to the nearby Condé Nast Building to reduce the number of broadcast stations using the Empire State Building. Eleven television stations and twenty-two FM stations had signed 15-year leases in the building by May 2003. It was expected that a taller broadcast tower in Bayonne, New Jersey, or Governors Island, would be built in the meantime with the Empire State Building being used as a \"backup\" since signal transmissions from the building were generally of poorer quality. Following the construction of One World Trade Center in the late 2000s and early 2010s, some TV stations began moving their transmitting facilities there.\n, the Empire State Building is home to the following stations:\nTelevision: WABC-7, WPIX-11, WXTV-41 Paterson, and WFUT-68 Newark\nFM: WINS-92.3, WPAT-93.1 Paterson, WNYC-93.9, WPLJ-95.5, WXNY-96.3, WQHT-97.1, WSKQ-97.9, WEPN-98.7, WHTZ-100.3 Newark, WCBS-101.1, WFAN-101.9, WNEW-FM-102.7, WKTU-103.5 Lake Success, WAXQ-104.3, WWPR-105.1, WQXR-105.9 Newark, WLTW-106.7, and WBLS-107.5\nNOAA Weather Radio station KWO35 broadcasts at a frequency of 162.550 MHz from the National Weather Service in Upton, New York."]], "corpus": "wikipedia"}
53 | +
{"tstamp": 1722364246.1658, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722364245.9388, "finish": 1722364246.1658, "ip": "", "conv_id": "6e94a2b6dc134ea99ef8d30eb1530d85", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Like Sparrows on a Clothes Line: The Self-Organization of Random Number Sequences\n\nAbstract: We study sequences of random numbers {Z[1],Z[2],Z[3],...,Z[n]} -- which can be considered random walks with reflecting barriers -- and define their \"types\" according to whether Z[i] > Z[i+1], (a down-movement), or Z[i] < Z[i+1] (up-movement). This paper examines the means, xi, to which the Zi converge, when a large number of sequences of the same type is considered. It is shown that these means organize themselves in such a way that, between two turning points of the sequence, they are equidistant from one another. We also show that m steps in one direction tend to offset one step in the other direction, as m -> infinity. Key words:random number sequence, self-organization, random walk, reflecting barriers."]], "corpus": "arxiv"}
54 | +
{"tstamp": 1722364246.1658, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722364245.9388, "finish": 1722364246.1658, "ip": "", "conv_id": "3de8ea25d77340518d8527eed178d6ab", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Data Mixing Made Efficient: A Bivariate Scaling Law for Language Model Pretraining\n\nAbstract: Large language models exhibit exceptional generalization capabilities, primarily attributed to the utilization of diversely sourced data. However, conventional practices in integrating this diverse data heavily rely on heuristic schemes, lacking theoretical guidance. This research tackles these limitations by investigating strategies based on low-cost proxies for data mixtures, with the aim of streamlining data curation to enhance training efficiency. Specifically, we propose a unified scaling law, termed BiMix, which accurately models the bivariate scaling behaviors of both data quantity and mixing proportions. We conduct systematic experiments and provide empirical evidence for the predictive power and fundamental principles of BiMix. Notably, our findings reveal that entropy-driven training-free data mixtures can achieve comparable or even better performance than more resource-intensive methods. We hope that our quantitative insights can shed light on further judicious research and development in cost-effective language modeling."]], "corpus": "arxiv"}
55 | +
{"tstamp": 1722364257.2347, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722364257.0595, "finish": 1722364257.2347, "ip": "", "conv_id": "8a308b6e542942c5a5144ec9d52b95f3", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "Can we submit to other venues after review at ARR without withdrawing the paper?", "output": [["Can we submit to other venues after review at ARR without withdrawing the paper?", "Tex Stackexchange\n\nQ: Online Overleaf Editor: Clicked the Submit Button by mistake By mistake I clicked on the \"Submit\" button provided in the overleaf editor. Now my document shows:\n\"Manuscript Submitted to ACM\"\nI have following questions:\n\n*\n\n*What I have to do with my document, it was an incomplete document?\n\n*Will my document be leaked out now?\n\n*How can I remove the line \"Manuscript Submitted to ACM\" from my document?\n\nSome body please guide me.\nZulfi.\n\nA: (Tom from Overleaf Support here.)\nIf you accidentally submitted a paper via the Overleaf's Submit modal, there are three options:\n\n*\n\n*Accidental submissions to the Overleaf Gallery: Simply contact us via [email protected] and as for the submission to be rejected. We're happy to assist with this.\n\n\n*Accidental submissions to journals with direct submission: In these cases, the files are not with us anymore. You'll need to get in touch with the journal's Editorial Office and ask for the submission to be disregarded. It shouldn't be an issue.\n\n\n*Accidental submissions when a ZIP file was provided to you: Nothing is needed. You were given the ZIP file yourself, it hasn't been sent to any 3rd party.\n"]], "corpus": "stackexchange"}
56 | +
{"tstamp": 1722364257.2347, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722364257.0595, "finish": 1722364257.2347, "ip": "", "conv_id": "89bc9091aa614bccae37263f84d98252", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "Can we submit to other venues after review at ARR without withdrawing the paper?", "output": [["Can we submit to other venues after review at ARR without withdrawing the paper?", "Tex Stackexchange\n\nQ: Online Overleaf Editor: Clicked the Submit Button by mistake By mistake I clicked on the \"Submit\" button provided in the overleaf editor. Now my document shows:\n\"Manuscript Submitted to ACM\"\nI have following questions:\n\n*\n\n*What I have to do with my document, it was an incomplete document?\n\n*Will my document be leaked out now?\n\n*How can I remove the line \"Manuscript Submitted to ACM\" from my document?\n\nSome body please guide me.\nZulfi.\n\nA: (Tom from Overleaf Support here.)\nIf you accidentally submitted a paper via the Overleaf's Submit modal, there are three options:\n\n*\n\n*Accidental submissions to the Overleaf Gallery: Simply contact us via [email protected] and as for the submission to be rejected. We're happy to assist with this.\n\n\n*Accidental submissions to journals with direct submission: In these cases, the files are not with us anymore. You'll need to get in touch with the journal's Editorial Office and ask for the submission to be disregarded. It shouldn't be an issue.\n\n\n*Accidental submissions when a ZIP file was provided to you: Nothing is needed. You were given the ZIP file yourself, it hasn't been sent to any 3rd party.\n"]], "corpus": "stackexchange"}
57 | +
{"tstamp": 1722364292.975, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722364290.8355, "finish": 1722364292.975, "ip": "", "conv_id": "863d8484113b41f3ad486bce94296929", "model_name": "text-embedding-004", "prompt": "do you have to have a gun permit to shoot at a range", "output": [["do you have to have a gun permit to shoot at a range", "Shooting range\n\nA shooting range, firing range, gun range or shooting ground is a specialized facility, venue, or field designed specifically for firearm usage qualifications, training, practice, or competitions. Some shooting ranges are operated by military or law enforcement agencies, though the majority of ranges are privately owned by civilians and sporting clubs and cater mostly to recreational shooters. Each facility is typically overseen by one or more supervisory personnel, known as a Range Officer (RO), or sometimes a range master in the United States. Supervisory personnel are responsible for ensuring that all safety rules and relevant laws are followed at all times."]], "corpus": "wikipedia"}
58 | +
{"tstamp": 1722364292.975, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722364290.8355, "finish": 1722364292.975, "ip": "", "conv_id": "2cadf86189454fcc929249ad0f97a89a", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "do you have to have a gun permit to shoot at a range", "output": [["do you have to have a gun permit to shoot at a range", "Overview of gun laws by nation\n\nPoland Gun ownership in Poland is regulated by the Weapons and Munitions Act of 1999 as further amended. A permit is required to own or possess firearms. Permits are granted for specific purposes, such as self-defense, sport, hunting, collecting etc. and are except the ones for self-defense, which are . Institutional permits allow for firearm ownership by private security companies, shooting ranges, sport clubs etc.\nPermits are issued to permanent residents over 21, not addicted to alcohol or other substances, having no criminal record, being in good physical and mental health, who passed an exam before sporting association, hunting association or the Police. Permits specify the purposes, categories and quantities of allowed guns; categories depend on permit purpose while quantities are assigned at the discretion of the Police, based on documentation provided. Quantities can be increased when one uses up all available slots.\nCarrying loaded guns is allowed on all types of permits except permits for collecting and memorial purposes. Hunters can carry loaded only on hunting grounds. Guns shall be carried concealed in a holster close to the body. Carrying in public transport is limited only to self-defense permit holders. It is prohibited to carry while intoxicated, and when attending public gatherings or mass events.\nEveryone, including foreigners, is allowed to use legal guns at registered shooting ranges, regardless of actual gun's ownership, e.g. rented from the range or shared by a legal owner present. Discharging a firearm for training or leisure out of a registered shooting range (even on a large private property) is prohibited."]], "corpus": "wikipedia"}
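The two retrieval logs are linked: each side of a battle record carries a 0_conv_id/1_conv_id that matches the conv_id of an individual record added in the same commit (e.g., "6e94a2b6dc134ea99ef8d30eb1530d85" appears in both files above). A sketch of joining them, assuming both files are available locally:

import json

individual_path = "data/retrieval_individual-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl"
battle_path = "data/retrieval_battle-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl"

# Index the per-model retrieval results by conversation id.
by_conv_id = {}
with open(individual_path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        by_conv_id[record["conv_id"]] = record

# For every battle, look up both sides' full retrieval outputs.
with open(battle_path, encoding="utf-8") as f:
    for line in f:
        battle = json.loads(line)
        for side in ("0", "1"):
            match = by_conv_id.get(battle[f"{side}_conv_id"])
            if match is not None:
                print(battle["type"], match["model_name"], match["prompt"][:60])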