---
annotations_creators: []
language: en
license: cc-by-4.0
size_categories:
- 1K<n<10K
task_categories:
- image-feature-extraction
task_ids: []
pretty_name: Emojis
tags:
- fiftyone
- image
dataset_summary: >



  ![image/png](dataset_preview.gif)



  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 1816
  samples.


  ## Installation


  If you haven't already, install FiftyOne:


  ```bash

  pip install -U fiftyone

  ```


  ## Usage


  ```python

  import fiftyone as fo

  import fiftyone.utils.huggingface as fouh


  # Load the dataset

  # Note: other available arguments include 'max_samples', etc

  dataset = fouh.load_from_hub("jamarks/emojis")


  # Launch the App

  session = fo.launch_app(dataset)

  ```
---

# Dataset Card for Emojis

<!-- Provide a quick summary of the dataset. -->




![image/png](dataset_preview.gif)


This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 1816 samples.

## Installation

If you haven't already, install FiftyOne:

```bash
pip install -U fiftyone
```

## Usage

```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh

# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = fouh.load_from_hub("jamarks/emojis")

# Launch the App
session = fo.launch_app(dataset)
```
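
As the comment in the snippet above notes, `load_from_hub()` accepts additional arguments such as `max_samples`. A minimal sketch for pulling just a small subset of the dataset (the sample count of 50 is arbitrary):

```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh

# Load only the first 50 samples for a quick look
subset = fouh.load_from_hub("jamarks/emojis", max_samples=50)

session = fo.launch_app(subset)
```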


## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->



- **Curated by:** Jacob Marks
- **Language(s) (NLP):** en
- **License:** cc-by-4.0

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Demo:** https://try.fiftyone.ai/datasets/emojis/samples


## Dataset Creation

### Curation Rationale

Emojis sit at the intersection of text and imagery, providing a fascinating test bed for exploring multimodal search and reranking techniques. This dataset was constructed to facilitate such experiments. For related projects, check out:

- [Emoji Search CLI Library](https://github.com/jacobmarks/emoji_search)
- [Semantic Emoji Search Plugin for FiftyOne](https://github.com/jacobmarks/emoji-search-plugin)

### Source Data

Samples in this dataset were constructed from rows of the Kaggle [Full Emoji Image Dataset](https://www.kaggle.com/datasets/subinium/emojiimage-dataset).

#### Data Collection and Processing

The base64-encoded images in the original CSV were upscaled 10x using [Real-ESRGAN](https://replicate.com/nightmareai/real-esrgan).
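
For reference, this step can be approximated with the Replicate Python client. The sketch below is illustrative and not the exact script used to build this dataset: the `upscale_emoji` helper is hypothetical, the `image`/`scale` input names are assumptions based on the public model page, and you may need to pin a specific model version.

```python
import base64

import replicate  # pip install replicate; requires REPLICATE_API_TOKEN in the environment


def upscale_emoji(b64_png: str, out_path: str = "emoji.png"):
    """Decode a base64-encoded emoji image from the CSV and upscale it 10x."""
    # Write the decoded PNG bytes to disk
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(b64_png))

    # Call the hosted Real-ESRGAN model; input names ("image", "scale") are
    # assumptions, and a version pin ("nightmareai/real-esrgan:<version>")
    # may be required depending on your client version
    with open(out_path, "rb") as image:
        output = replicate.run(
            "nightmareai/real-esrgan",
            input={"image": image, "scale": 10},
        )
    return output  # URL of the upscaled image
```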

OpenAI's CLIP ViT-B/32 model was used to embed the images (vision encoder), the emoji names (text encoder), and the Unicode character sequences (text encoder). These embeddings were used to construct [Brain Runs](https://docs.voxel51.com/user_guide/brain.html) for similarity and semantic search, as well as for visualizing the structure of the dataset with UMAP dimensionality reduction.
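
Comparable Brain runs can be created along the following lines. This is a minimal sketch rather than the exact configuration shipped with the dataset: the zoo model name `clip-vit-base32-torch` and the brain keys `img_sim`/`img_umap` are assumptions, the text-encoder embeddings of emoji names and Unicode sequences are omitted for brevity, and running it requires `torch` and `umap-learn`.

```python
import fiftyone as fo
import fiftyone.brain as fob
import fiftyone.utils.huggingface as fouh

dataset = fouh.load_from_hub("jamarks/emojis")

# Similarity index over CLIP image embeddings; because CLIP also has a text
# encoder, this index supports natural-language queries in the App
fob.compute_similarity(
    dataset,
    model="clip-vit-base32-torch",
    brain_key="img_sim",
)

# UMAP projection of the same embedding space for the Embeddings panel
fob.compute_visualization(
    dataset,
    model="clip-vit-base32-torch",
    method="umap",
    brain_key="img_umap",
)

session = fo.launch_app(dataset)
```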

## Dataset Card Authors

[Jacob Marks](https://huggingface.co/jamarks)