Dataset Description
"ImageNet Unique Label" (imagenet-ul) contains 5942 classes, which contains about 1 million images. The data undergoes a multi-step filtering process:
- To ensure that no class was encountered during the pretraining of the vision model,
- To prevent any two image classes from sharing a label,
- To exclude hyponyms from the label set,
- To ensure that each class contains at least 100 images (a rough sketch of the last two filters appears below).
It is a subset of the ImageNet dataset (Russakovsky et al., 2015).
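The exact filtering code is not published on this card; the sketch below only illustrates the last two criteria (hyponym exclusion and the 100-image minimum), assuming ImageNet-style WordNet IDs such as n00120010 and the NLTK WordNet interface. The checks for pretraining overlap and label sharing are omitted, and all names and thresholds here are illustrative assumptions rather than the authors' pipeline.

```python
# Illustrative sketch only (not the authors' pipeline): keep classes with at
# least 100 images and drop classes whose synset is a hyponym of another
# candidate class. WordNet access via NLTK is an assumption.
from collections import Counter
from nltk.corpus import wordnet as wn


def synset_from_wnid(wnid: str):
    """Map an ImageNet WordNet ID such as 'n00120010' to an NLTK synset."""
    return wn.synset_from_pos_and_offset(wnid[0], int(wnid[1:]))


def filter_classes(image_wnids):
    """image_wnids: one WordNet ID per image (hypothetical input format)."""
    counts = Counter(image_wnids)
    # Keep only classes with at least 100 images.
    candidates = {w for w, c in counts.items() if c >= 100}
    synsets = {w: synset_from_wnid(w) for w in candidates}
    kept = set()
    for wnid, syn in synsets.items():
        # Ancestors of this synset along all hypernym paths (excluding itself).
        ancestors = {a for path in syn.hypernym_paths() for a in path} - {syn}
        # Exclude the class if it is a hyponym of another candidate class.
        if not any(other in ancestors for other in synsets.values()):
            kept.add(wnid)
    return kept
```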
How to Use
```python
from datasets import load_dataset

# Load the dataset
common_words = load_dataset("jaagli/imagenet-ul", split="train")
```
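Each example pairs an image with an integer class label. Assuming the default `ClassLabel` feature of the `datasets` library (and label names that are likely WordNet-style IDs such as "n00120010"), one example can be inspected and its label decoded like this:

```python
# Inspect one example and map its integer label back to the class name;
# assumes the standard datasets ClassLabel API.
example = common_words[0]
print(example["image"].size)  # (width, height) of the PIL image

label_feature = common_words.features["label"]
print(label_feature.int2str(example["label"]))  # class name string
```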
Citation
```bibtex
@article{10.1162/tacl_a_00698,
author = {Li, Jiaang and Kementchedjhieva, Yova and Fierro, Constanza and Søgaard, Anders},
title = {Do Vision and Language Models Share Concepts? A Vector Space Alignment Study},
journal = {Transactions of the Association for Computational Linguistics},
volume = {12},
pages = {1232-1249},
year = {2024},
month = {09},
abstract = {Large-scale pretrained language models (LMs) are said to “lack the ability to connect utterances to the world” (Bender and Koller, 2020), because they do not have “mental models of the world” (Mitchell and Krakauer, 2023). If so, one would expect LM representations to be unrelated to representations induced by vision models. We present an empirical evaluation across four families of LMs (BERT, GPT-2, OPT, and LLaMA-2) and three vision model architectures (ResNet, SegFormer, and MAE). Our experiments show that LMs partially converge towards representations isomorphic to those of vision models, subject to dispersion, polysemy, and frequency. This has important implications for both multi-modal processing and the LM understanding debate (Mitchell and Krakauer, 2023).},
issn = {2307-387X},
doi = {10.1162/tacl_a_00698},
url = {https://doi.org/10.1162/tacl\_a\_00698},
eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00698/2473674/tacl\_a\_00698.pdf},
}
```