---
annotations_creators: []
language: en
license: cc-by-4.0
size_categories:
- 1K<n<10K
---

# Dataset Card for Emojis

![image/png](dataset_preview.gif)

This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 1816 samples.

## Installation

If you haven't already, install FiftyOne:

```bash
pip install -U fiftyone
```

## Usage

```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh

# Load the dataset
# Note: other available arguments include 'max_samples', etc.
dataset = fouh.load_from_hub("jamarks/emojis")

# Launch the App
session = fo.launch_app(dataset)
```

## Dataset Details

### Dataset Description

- **Curated by:** Jacob Marks
- **Language(s) (NLP):** en
- **License:** cc-by-4.0

### Dataset Sources

- **Demo:** https://try.fiftyone.ai/datasets/emojis/samples

## Dataset Creation

### Curation Rationale

Emojis sit at the intersection of the textual and the visual, providing a fascinating test bed for exploring multimodal search and reranking techniques. This dataset was constructed to facilitate those experiments.

For related projects, check out:

- [Emoji Search CLI Library](https://github.com/jacobmarks/emoji_search)
- [Semantic Emoji Search Plugin for FiftyOne](https://github.com/jacobmarks/emoji-search-plugin)

### Source Data

Samples in this dataset were constructed from rows in the Kaggle [Full Emoji Image Dataset](https://www.kaggle.com/datasets/subinium/emojiimage-dataset).

#### Data Collection and Processing

The base64-encoded images in the original CSV were upscaled 10x using [Real-ESRGAN](https://replicate.com/nightmareai/real-esrgan). OpenAI's CLIP-ViT-B/32 model was used to embed the images (vision encoder), the emoji names (text encoder), and the Unicode sequences (text encoder). These embeddings were used to construct [Brain Runs](https://docs.voxel51.com/user_guide/brain.html) for performing similarity and semantic searches, as well as for visualizing the structure of the dataset using UMAP dimensionality reduction. A sketch of recreating these runs appears at the end of this card.

## Dataset Card Authors

[Jacob Marks](https://huggingface.co/jamarks)
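
## Example: Recreating the Brain Runs

As a companion to the processing notes above, here is a minimal sketch of how CLIP-based similarity and UMAP visualization runs could be computed locally with the [FiftyOne Brain](https://docs.voxel51.com/user_guide/brain.html). The brain keys (`img_sim`, `img_viz`) and the query string are illustrative placeholders, not the keys shipped with this dataset, and the UMAP step assumes `umap-learn` is installed (`pip install umap-learn`).

```python
import fiftyone as fo
import fiftyone.brain as fob
import fiftyone.utils.huggingface as fouh

# Load the dataset from the Hugging Face Hub
dataset = fouh.load_from_hub("jamarks/emojis")

# Build a similarity index over the emoji images using CLIP-ViT-B/32
# ("clip-vit-base32-torch" is the FiftyOne Model Zoo name for this model).
# Because CLIP can also embed text prompts, the index supports semantic
# (text-to-image) search in addition to image-to-image similarity.
fob.compute_similarity(
    dataset,
    model="clip-vit-base32-torch",
    brain_key="img_sim",  # illustrative key, not the one shipped with the dataset
)

# Visualize the structure of the embedding space with UMAP
# (requires the `umap-learn` package)
fob.compute_visualization(
    dataset,
    model="clip-vit-base32-torch",
    method="umap",
    brain_key="img_viz",  # illustrative key
)

# Semantic search: rank the emojis against a natural-language query
view = dataset.sort_by_similarity("a smiling face", brain_key="img_sim", k=25)

# Explore the results and the UMAP plot in the App
session = fo.launch_app(view)
```

Since the CLIP text and vision encoders share an embedding space, a single similarity index covers both search modes; separate text-encoder runs over the emoji names and Unicode sequences, as described above, would follow the same `compute_similarity` pattern with precomputed embeddings.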