Datasets: mteb /
Modalities: Tabular, Text
Formats: json
Libraries: Datasets, Dask
Commit 260c989 by Muennighoff · verified · 1 Parent(s): 5a12922

Scheduled Commit

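The metadata above lists json as the on-disk format, with the Datasets library as a supported loader, so the two JSONL files touched by this commit can be read directly. A minimal sketch, assuming a local clone of the repository so the data/ paths below resolve (the paths are taken from the diff headers that follow):

from datasets import load_dataset

# Each file in this commit is flat JSONL, so the generic "json" builder
# can load it directly; paths are relative to a local clone of the repo.
battles = load_dataset(
    "json",
    data_files="data/retrieval_battle-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl",
    split="train",
)
individual = load_dataset(
    "json",
    data_files="data/retrieval_individual-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl",
    split="train",
)
print(battles[0]["type"])  # e.g. "leftvote"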
data/retrieval_battle-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl CHANGED
@@ -22,3 +22,4 @@
  {"tstamp": 1722368885.4442, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "5101c949a71c4d10aec9c4ac9b9e5ba9", "0_model_name": "BM25", "0_prompt": "who created gcsl?", "0_output": [["who created gcsl?", "Title: Goal-Conditioned Supervised Learning with Sub-Goal Prediction\n\nAbstract: Recently, a simple yet effective algorithm -- goal-conditioned supervised-learning (GCSL) -- was proposed to tackle goal-conditioned reinforcement-learning. GCSL is based on the principle of hindsight learning: by observing states visited in previously executed trajectories and treating them as attained goals, GCSL learns the corresponding actions via supervised learning. However, GCSL only learns a goal-conditioned policy, discarding other information in the process. Our insight is that the same hindsight principle can be used to learn to predict goal-conditioned sub-goals from the same trajectory. Based on this idea, we propose Trajectory Iterative Learner (TraIL), an extension of GCSL that further exploits the information in a trajectory, and uses it for learning to predict both actions and sub-goals. We investigate the settings in which TraIL can make better use of the data, and discover that for several popular problem settings, replacing real goals in GCSL with predicted TraIL sub-goals allows the agent to reach a greater set of goal states using the exact same data as GCSL, thereby improving its overall performance."]], "0_corpus": "arxiv", "1_conv_id": "7cffcfc7754c4aa781e6dff22cb8b716", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "who created gcsl?", "1_output": [["who created gcsl?", "Title: Supporting the GLAST User Community\n\nAbstract: The Gamma-ray Large Area Space Telescope (GLAST) Science Support Center (GSSC) is the scientific community's interface with GLAST. The GSSC will provide data, analysis software and documentation. In addition, the GSSC will administer the guest investigator program for NASA HQ. Consequently, the GSSC will provide proposal preparation tools to assist proposers in assessing the feasibility of observing sources of interest."]], "1_corpus": "arxiv"}
  {"tstamp": 1722368932.7183, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "ea5e400b4db942b7adbea39fd7263a77", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "What's the purpose of `if __name__ == '__main__'`?", "0_output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: Why does it do this ? if - __name__ == '__main__' \nDuplicate of:\nWhat does if __name__== \"__main__\" do? \n\nConsider this code:\nif __name__ == '__main__':\n import pdb\n pdb.run(\"interact()\\n\")\n\nWhat does the following line mean?\nif(__name__=='__main__')\n\nI fainted.\n\nA: That is a check to see if you are directly running the script or if it is included in a library.\nWhen you run a python script like this:\npython myScript.py\n\nIt sends a parameter, telling you to run the programs first method, which is widely called \"main\", so when __name__ is __main__ you know that the program was executed from a command line or double clicked.\n"]], "0_corpus": "stackexchange", "1_conv_id": "ae5db96d81e84d8b8daa0d3142568b97", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "What's the purpose of `if __name__ == '__main__'`?", "1_output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: Reading h264 stream in Python I am trying to read a video stream from a Parrot Bebop 2 drone. \nThe video stream is written as \"socket\" to a file as H264 stream.\n$ ffmpeg -i [STREAM]\nInput #0, h264, from 'stream_h264':\n Duration: N/A, bitrate: N/A\n Stream #0:0: Video: h264 (Constrained Baseline), 1 reference frame, yuv420p(progressive, left), 1280x720, 23.98 fps, 23.98 tbr, 1200k tbn, 47.95 tbc\n\nReading the video stream in MPlayer is not a problem using the parameters below. Playing it using VLC or ffmpeg should also not be too hard. For MPlayer the following works:\nmplayer -fs -demuxer h264es -benchmark stream_h264\n\nThis plays the stream in high-res. However my goal is to perform image processing on the frames using Python (mostly OpenCV). Therefore, I would like to read the frames into NumPy arrays. I have already considered using cv2.VideoCapture but this does not seem to work for my stream. Other (somewhat easy) to use options I am not aware of, therefore my question is whether someone recommend me how to read the video frames in Python? \nAll recommendations are more than welcome!\n"]], "1_corpus": "stackexchange"}
  {"tstamp": 1722369409.8742, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "2fd990202605484187eda3a456338321", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "what is inputs?", "0_output": [["what is inputs?", "Input/output\n\nIn computing, input/output (I/O, i/o, or informally io or IO) is the communication between an information processing system, such as a computer, and the outside world, such as another computer system, peripherals, or a human operator. Inputs are the signals or data received by the system and outputs are the signals or data sent from it. The term can also be used as part of an action; to \"perform I/O\" is to perform an input or output operation.\nare the pieces of hardware used by a human (or other system) to communicate with a computer. For instance, a keyboard or computer mouse is an input device for a computer, while monitors and printers are output devices. Devices for communication between computers, such as modems and network cards, typically perform both input and output operations. Any interaction with the system by an interactor is an input and the reaction the system responds is called the output."]], "0_corpus": "wikipedia", "1_conv_id": "65e3bf41b47045d3b59a334b4eed1be8", "1_model_name": "embed-english-v3.0", "1_prompt": "what is inputs?", "1_output": [["what is inputs?", "Input/output\n\nIn computing, input/output (I/O, i/o, or informally io or IO) is the communication between an information processing system, such as a computer, and the outside world, such as another computer system, peripherals, or a human operator. Inputs are the signals or data received by the system and outputs are the signals or data sent from it. The term can also be used as part of an action; to \"perform I/O\" is to perform an input or output operation.\nare the pieces of hardware used by a human (or other system) to communicate with a computer. For instance, a keyboard or computer mouse is an input device for a computer, while monitors and printers are output devices. Devices for communication between computers, such as modems and network cards, typically perform both input and output operations. Any interaction with the system by an interactor is an input and the reaction the system responds is called the output."]], "1_corpus": "wikipedia"}
+ {"tstamp": 1722369669.9495, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "8e512b9a9fd44aa18dd3e5eb0ee834f7", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "I am looking for recent work on generating text with personalities and style", "0_output": [["I am looking for recent work on generating text with personalities and style", "Title: Cloning Ideology and Style using Deep Learning\n\nAbstract: Text generation tasks have gotten the attention of researchers in the last few years because of their applications on a large scale.In the past, many researchers focused on task-based text generations.Our research focuses on text generation based on the ideology and style of a specific author, and text generation on a topic that was not written by the same author in the past.Our trained model requires an input prompt containing initial few words of text to produce a few paragraphs of text based on the ideology and style of the author on which the model is trained.Our methodology to accomplish this task is based on Bi-LSTM.The Bi-LSTM model is used to make predictions at the character level, during the training corpus of a specific author is used along with the ground truth corpus.A pre-trained model is used to identify the sentences of ground truth having contradiction with the author's corpus to make our language model inclined.During training, we have achieved a perplexity score of 2.23 at the character level. The experiments show a perplexity score of around 3 over the test dataset."]], "0_corpus": "arxiv", "1_conv_id": "c97244ec7bbd46318ccc6f76171f91fd", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "I am looking for recent work on generating text with personalities and style", "1_output": [["I am looking for recent work on generating text with personalities and style", "Title: Text-to-Image Synthesis for Any Artistic Styles: Advancements in Personalized Artistic Image Generation via Subdivision and Dual Binding\n\nAbstract: Recent advancements in text-to-image models, such as Stable Diffusion, have demonstrated their ability to synthesize visual images through natural language prompts. One approach of personalizing text-to-image models, exemplified by DreamBooth, fine-tunes the pre-trained model by binding unique text identifiers with a few images of a specific subject. Although existing fine-tuning methods have demonstrated competence in rendering images according to the styles of famous painters, it is still challenging to learn to produce images encapsulating distinct art styles due to abstract and broad visual perceptions of stylistic attributes such as lines, shapes, textures, and colors. In this paper, we introduce a new method, Single-StyleForge, for personalization. It fine-tunes pre-trained text-to-image diffusion models to generate diverse images in specified styles from text prompts. By using around 15-20 images of the target style, the approach establishes a foundational binding of a unique token identifier with a broad range of the target style. It also utilizes auxiliary images to strengthen this binding, resulting in offering specific guidance on representing elements such as persons in a target style-consistent manner. In addition, we present ways to improve the quality of style and text-image alignment through a method called Multi-StyleForge, which inherits the strategy used in StyleForge and learns tokens in multiple. 
Experimental evaluation conducted on six distinct artistic styles demonstrates substantial improvements in both the quality of generated images and the perceptual fidelity metrics, such as FID, KID, and CLIP scores."]], "1_corpus": "arxiv"}
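
The added record above follows the same schema as the surrounding context lines: each battle row stores the verdict in "type" (the rows in this hunk show leftvote, tievote, and bothbadvote) and the two competitors in "0_model_name" and "1_model_name". A minimal sketch of tallying wins per model with only the standard library, assuming a local copy of the file; "rightvote" is an assumed symmetric counterpart of leftvote and does not appear in this hunk:

import json
from collections import Counter

wins = Counter()
with open("data/retrieval_battle-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl") as f:
    for line in f:
        row = json.loads(line)
        # Credit the winning side; "rightvote" is assumed by symmetry with
        # "leftvote". Ties and bothbadvote rows credit no one.
        if row["type"] == "leftvote":
            wins[row["0_model_name"]] += 1
        elif row["type"] == "rightvote":
            wins[row["1_model_name"]] += 1

print(wins.most_common())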
data/retrieval_individual-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl CHANGED
@@ -96,3 +96,9 @@
  {"tstamp": 1722369064.2119, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722369062.051, "finish": 1722369064.2119, "ip": "", "conv_id": "5fbd2684062f435bad280bb2050c59e0", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "where do the sirens live in the odyssey", "output": [["where do the sirens live in the odyssey", "Siren (mythology)\n\nIn Greek mythology, sirens () are humanlike beings with alluring voices; they appear in a scene in the Odyssey in which Odysseus saves his crew's lives. Roman poets place them on some small islands called Sirenum scopuli. In some later, rationalized traditions, the literal geography of the \"flowery\" island of Anthemoessa, or Anthemusa, is fixed: sometimes on Cape Pelorum and at others in the islands known as the Sirenuse, near Paestum, or in Capreae. All such locations were surrounded by cliffs and rocks.\nSirens continued to be used as a symbol for the dangerous temptation embodied by women regularly throughout Christian art of the medieval era. \"Siren\" can also be used as a slang term for a woman considered both very attractive and dangerous.\nNomenclature\nThe etymology of the name is contested. Robert S. P. Beekes has suggested a Pre-Greek origin. Others connect the name to σειρά (seirá, \"rope, cord\") and εἴρω (eírō, \"to tie, join, fasten\"), resulting in the meaning \"binder, entangler\", i.e. one who binds or entangles through magic song. This could be connected to the famous scene of Odysseus being bound to the mast of his ship, in order to resist their song.\nSirens were later often used as a synonym for mermaids, and portrayed with upper human bodies and fish tails. This combination became iconic in the medieval period. The circumstances leading to the commingling involve the treatment of sirens in the medieval Physiologus and bestiaries, both iconographically, as well as textually in translations from Latin to vulgar languages, as described below.\nIconography"]], "corpus": "wikipedia"}
  {"tstamp": 1722369396.3581, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722369393.9883, "finish": 1722369396.3581, "ip": "", "conv_id": "2fd990202605484187eda3a456338321", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "what is inputs?", "output": [["what is inputs?", "Input/output\n\nIn computing, input/output (I/O, i/o, or informally io or IO) is the communication between an information processing system, such as a computer, and the outside world, such as another computer system, peripherals, or a human operator. Inputs are the signals or data received by the system and outputs are the signals or data sent from it. The term can also be used as part of an action; to \"perform I/O\" is to perform an input or output operation.\nare the pieces of hardware used by a human (or other system) to communicate with a computer. For instance, a keyboard or computer mouse is an input device for a computer, while monitors and printers are output devices. Devices for communication between computers, such as modems and network cards, typically perform both input and output operations. Any interaction with the system by an interactor is an input and the reaction the system responds is called the output."]], "corpus": "wikipedia"}
  {"tstamp": 1722369396.3581, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722369393.9883, "finish": 1722369396.3581, "ip": "", "conv_id": "65e3bf41b47045d3b59a334b4eed1be8", "model_name": "embed-english-v3.0", "prompt": "what is inputs?", "output": [["what is inputs?", "Input/output\n\nIn computing, input/output (I/O, i/o, or informally io or IO) is the communication between an information processing system, such as a computer, and the outside world, such as another computer system, peripherals, or a human operator. Inputs are the signals or data received by the system and outputs are the signals or data sent from it. The term can also be used as part of an action; to \"perform I/O\" is to perform an input or output operation.\nare the pieces of hardware used by a human (or other system) to communicate with a computer. For instance, a keyboard or computer mouse is an input device for a computer, while monitors and printers are output devices. Devices for communication between computers, such as modems and network cards, typically perform both input and output operations. Any interaction with the system by an interactor is an input and the reaction the system responds is called the output."]], "corpus": "wikipedia"}
+ {"tstamp": 1722369543.0628, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722369542.8541, "finish": 1722369543.0628, "ip": "", "conv_id": "8e512b9a9fd44aa18dd3e5eb0ee834f7", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "I am looking for recent work on generating text with personalities and style", "output": [["I am looking for recent work on generating text with personalities and style", "Title: Cloning Ideology and Style using Deep Learning\n\nAbstract: Text generation tasks have gotten the attention of researchers in the last few years because of their applications on a large scale.In the past, many researchers focused on task-based text generations.Our research focuses on text generation based on the ideology and style of a specific author, and text generation on a topic that was not written by the same author in the past.Our trained model requires an input prompt containing initial few words of text to produce a few paragraphs of text based on the ideology and style of the author on which the model is trained.Our methodology to accomplish this task is based on Bi-LSTM.The Bi-LSTM model is used to make predictions at the character level, during the training corpus of a specific author is used along with the ground truth corpus.A pre-trained model is used to identify the sentences of ground truth having contradiction with the author's corpus to make our language model inclined.During training, we have achieved a perplexity score of 2.23 at the character level. The experiments show a perplexity score of around 3 over the test dataset."]], "corpus": "arxiv"}
+ {"tstamp": 1722369543.0628, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722369542.8541, "finish": 1722369543.0628, "ip": "", "conv_id": "c97244ec7bbd46318ccc6f76171f91fd", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "I am looking for recent work on generating text with personalities and style", "output": [["I am looking for recent work on generating text with personalities and style", "Title: Text-to-Image Synthesis for Any Artistic Styles: Advancements in Personalized Artistic Image Generation via Subdivision and Dual Binding\n\nAbstract: Recent advancements in text-to-image models, such as Stable Diffusion, have demonstrated their ability to synthesize visual images through natural language prompts. One approach of personalizing text-to-image models, exemplified by DreamBooth, fine-tunes the pre-trained model by binding unique text identifiers with a few images of a specific subject. Although existing fine-tuning methods have demonstrated competence in rendering images according to the styles of famous painters, it is still challenging to learn to produce images encapsulating distinct art styles due to abstract and broad visual perceptions of stylistic attributes such as lines, shapes, textures, and colors. In this paper, we introduce a new method, Single-StyleForge, for personalization. It fine-tunes pre-trained text-to-image diffusion models to generate diverse images in specified styles from text prompts. By using around 15-20 images of the target style, the approach establishes a foundational binding of a unique token identifier with a broad range of the target style. It also utilizes auxiliary images to strengthen this binding, resulting in offering specific guidance on representing elements such as persons in a target style-consistent manner. In addition, we present ways to improve the quality of style and text-image alignment through a method called Multi-StyleForge, which inherits the strategy used in StyleForge and learns tokens in multiple. Experimental evaluation conducted on six distinct artistic styles demonstrates substantial improvements in both the quality of generated images and the perceptual fidelity metrics, such as FID, KID, and CLIP scores."]], "corpus": "arxiv"}
+ {"tstamp": 1722369688.8663, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1722369688.4367, "finish": 1722369688.8663, "ip": "", "conv_id": "4e96fd3903de4469b641e3f2c6673ae4", "model_name": "voyage-multilingual-2", "prompt": "Donut", "output": [["Donut", "Doughnut\n\nA doughnut or donut () is a type of pastry made from leavened fried dough. It is popular in many countries and is prepared in various forms as a sweet snack that can be homemade or purchased in bakeries, supermarkets, food stalls, and franchised specialty vendors. Doughnut is the traditional spelling, while donut is the simplified version; the terms are used interchangeably.\nDoughnuts are usually deep fried from a flour dough, but other types of batters can also be used. Various toppings and flavors are used for different types, such as sugar, chocolate or maple glazing. Doughnuts may also include water, leavening, eggs, milk, sugar, oil, shortening, and natural or artificial flavors."]], "corpus": "wikipedia"}
+ {"tstamp": 1722369688.8663, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722369688.4367, "finish": 1722369688.8663, "ip": "", "conv_id": "e15bbba2976d4d3e8c193c49e5b6170e", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "Donut", "output": [["Donut", "Doughnut\n\nA doughnut or donut () is a type of pastry made from leavened fried dough. It is popular in many countries and is prepared in various forms as a sweet snack that can be homemade or purchased in bakeries, supermarkets, food stalls, and franchised specialty vendors. Doughnut is the traditional spelling, while donut is the simplified version; the terms are used interchangeably.\nDoughnuts are usually deep fried from a flour dough, but other types of batters can also be used. Various toppings and flavors are used for different types, such as sugar, chocolate or maple glazing. Doughnuts may also include water, leavening, eggs, milk, sugar, oil, shortening, and natural or artificial flavors."]], "corpus": "wikipedia"}
+ {"tstamp": 1722369712.2347, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722369711.5356, "finish": 1722369712.2347, "ip": "", "conv_id": "5419d1689a664f7aaa8419ac2c4cbdc6", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "What are the landmark work on generating image captions with sentiments?", "output": [["What are the landmark work on generating image captions with sentiments?", "Title: SentiCap: Generating Image Descriptions with Sentiments\n\nAbstract: The recent progress on image recognition and language modeling is making automatic description of image content a reality. However, stylized, non-factual aspects of the written description are missing from the current systems. One such style is descriptions with emotions, which is commonplace in everyday communication, and influences decision-making and interpersonal relationships. We design a system to describe an image with emotions, and present a model that automatically generates captions with positive or negative sentiments. We propose a novel switching recurrent neural network with word-level regularization, which is able to produce emotional image captions using only 2000+ training sentences containing sentiments. We evaluate the captions with different automatic and crowd-sourcing metrics. Our model compares favourably in common quality metrics for image captioning. In 84.6% of cases the generated positive captions were judged as being at least as descriptive as the factual captions. Of these positive captions 88% were confirmed by the crowd-sourced workers as having the appropriate sentiment."]], "corpus": "arxiv"}
+ {"tstamp": 1722369712.2347, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722369711.5356, "finish": 1722369712.2347, "ip": "", "conv_id": "057320225ef24795adb273a944b2cc91", "model_name": "text-embedding-004", "prompt": "What are the landmark work on generating image captions with sentiments?", "output": [["What are the landmark work on generating image captions with sentiments?", "Title: Image Captioning at Will: A Versatile Scheme for Effectively Injecting Sentiments into Image Descriptions\n\nAbstract: Automatic image captioning has recently approached human-level performance due to the latest advances in computer vision and natural language understanding. However, most of the current models can only generate plain factual descriptions about the content of a given image. However, for human beings, image caption writing is quite flexible and diverse, where additional language dimensions, such as emotion, humor and language styles, are often incorporated to produce diverse, emotional, or appealing captions. In particular, we are interested in generating sentiment-conveying image descriptions, which has received little attention. The main challenge is how to effectively inject sentiments into the generated captions without altering the semantic matching between the visual content and the generated descriptions. In this work, we propose two different models, which employ different schemes for injecting sentiments into image captions. Compared with the few existing approaches, the proposed models are much simpler and yet more effective. The experimental results show that our model outperform the state-of-the-art models in generating sentimental (i.e., sentiment-bearing) image captions. In addition, we can also easily manipulate the model by assigning different sentiments to the testing image to generate captions with the corresponding sentiments."]], "corpus": "arxiv"}