tags:
- STEM-AI-mtl
datasets:
- STEM-AI-mtl/City_map
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
metrics:
- accuracy
---

# The fine-tuned ViT model that beats [Google's base model](https://huggingface.co/google/vit-base-patch16-224) and OpenAI's GPT-4

An image-classification model, fine-tuned to identify which city's map is illustrated in an input image.

## Model description

The Vision Transformer (ViT) base model is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. It was then fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at a resolution of 224x224.

Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. A [CLS] token is added to the beginning of the sequence for classification tasks, and absolute position embeddings are added before the sequence is fed to the layers of the Transformer encoder.
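
To make the patch arithmetic concrete, here is a minimal sketch (illustrative only) that runs one of the widget images through the model's processor: a 224x224 input yields (224/16)^2 = 196 patches, plus the [CLS] token, for 197 encoder positions.

```python
from transformers import ViTImageProcessor
from PIL import Image
import requests

# One of the sample images used in the widget above
url = 'https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = ViTImageProcessor.from_pretrained('STEM-AI-mtl/City_map-vit-base-patch16-224')
pixel_values = processor(images=image, return_tensors="pt")["pixel_values"]

print(pixel_values.shape)    # torch.Size([1, 3, 224, 224])
print((224 // 16) ** 2 + 1)  # 197 = 196 patches + 1 [CLS] token
```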

### How to use

[Inference script](https://github.com/STEM-ai/Vision/raw/7d92c8daa388eb74e8c336f2d0d3942722fec3c6/ViT_inference.py)

```python
from transformers import ViTImageProcessor, ViTForImageClassification
from PIL import Image
import requests

# Example input: a city-map image fetched from a URL
url = 'https://assets.wfcdn.com/im/16661612/compr-r85/4172/41722749/new-york-city-map-on-paper-print.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# Load the fine-tuned processor and model
processor = ViTImageProcessor.from_pretrained('STEM-AI-mtl/City_map-vit-base-patch16-224')
model = ViTForImageClassification.from_pretrained('STEM-AI-mtl/City_map-vit-base-patch16-224')

# Preprocess the image and run a forward pass
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# The predicted class is the label with the highest logit
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
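
To rank several candidate cities instead of taking only the arg-max, the logits can be passed through a softmax. A minimal follow-on sketch, continuing from the variables above and assuming the model has at least three classes:

```python
import torch

# Convert logits to probabilities and print the three most likely cities
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=3, dim=-1)
for p, idx in zip(top.values[0].tolist(), top.indices[0].tolist()):
    print(f"{model.config.id2label[idx]}: {p:.3f}")
```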

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/vit.html#).

## Training procedure

The [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) base model was fine-tuned on a 4 GB Nvidia GTX 1650 GPU.

[Training notebook](https://github.com/STEM-ai/Vision/raw/7d92c8daa388eb74e8c336f2d0d3942722fec3c6/Trainer_ViT.ipynb)
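
The linked notebook contains the actual training code. Purely as an illustration of what such a `Trainer` setup typically looks like, the sketch below assumes the [STEM-AI-mtl/City_map](https://huggingface.co/datasets/STEM-AI-mtl/City_map) dataset exposes `image` and `label` columns; the batch size and epoch count are placeholders, and only the 1e-3 learning rate is taken from the results reported below.

```python
import torch
from datasets import load_dataset
from transformers import (Trainer, TrainingArguments,
                          ViTForImageClassification, ViTImageProcessor)

# Assumption: 'image' and 'label' columns, with 'label' being a ClassLabel feature
dataset = load_dataset("STEM-AI-mtl/City_map", split="train")
labels = dataset.features["label"].names

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224",
    num_labels=len(labels),
    id2label={i: name for i, name in enumerate(labels)},
    label2id={name: i for i, name in enumerate(labels)},
    ignore_mismatched_sizes=True,  # replace the 1,000-class ImageNet head
)

def transform(batch):
    # Resize/normalize the images on the fly and keep the labels
    inputs = processor(images=batch["image"], return_tensors="pt")
    inputs["labels"] = batch["label"]
    return inputs

def collate_fn(examples):
    return {
        "pixel_values": torch.stack([ex["pixel_values"] for ex in examples]),
        "labels": torch.tensor([ex["labels"] for ex in examples]),
    }

args = TrainingArguments(
    output_dir="City_map-vit-base-patch16-224",
    remove_unused_columns=False,    # keep the raw 'image' column for the transform
    learning_rate=1e-3,             # the rate reported as most accurate below
    per_device_train_batch_size=8,  # placeholder, chosen small for a 4 GB GPU
    num_train_epochs=5,             # placeholder
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset.with_transform(transform),
    data_collator=collate_fn,
)
trainer.train()
```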

## Training evaluation results

The most accurate model was obtained with a learning rate of 1e-3. Training quality was evaluated on the training dataset, yielding the following metrics:\

{'eval_loss': 1.3691096305847168,\
'eval_accuracy': 0.6666666666666666,\
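
A dict of this shape is what `Trainer.evaluate()` returns when an accuracy metric is supplied through `compute_metrics`. A minimal sketch, assuming the `evaluate` library is used for the accuracy figure:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred.predictions holds the logits; take the arg-max per image
    predictions = np.argmax(eval_pred.predictions, axis=1)
    return accuracy.compute(predictions=predictions, references=eval_pred.label_ids)

# Passing compute_metrics=compute_metrics to the Trainer makes
# trainer.evaluate() report eval_loss, eval_accuracy and related fields.
```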

## Model Card Authors

STEM.AI: [email protected]\
[William Harbec](https://www.linkedin.com/in/william-harbec-56a262248/)