|
--- |
|
license: mit |
|
library_name: keras |
|
tags: |
|
- dragon-detection |
|
- Keras |
|
- dragon |
|
- image-classification |
|
--- |
|
|
|
## Dragon detector with TensorFlow
|
This is a simple `tensorflow` model to detect dragons in images.
|
If you just want to test the trained model, make sure you have the following packages installed:
|
|
|
``` |
|
tensorflow keras sklearn-deap datasets transformers[torch] sentencepiece |
|
``` |
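Assuming a standard Python environment, the packages above can be installed with `pip` (the quotes around `transformers[torch]` keep shells like zsh from expanding the brackets):

```shell
pip install tensorflow keras sklearn-deap datasets "transformers[torch]" sentencepiece
```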
|
|
|
## Predict |
|
|
|
To run a prediction, use the following code:
|
|
|
```python |
|
import numpy as np
import keras
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("hadilq/dragon-notdragon")

# Load and preprocess the image the same way the VGG16 backbone expects.
filename = "dragon.jpg"  # path to the image you want to classify
img = keras.preprocessing.image.load_img(filename, target_size=(224, 224))
x = keras.preprocessing.image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = keras.applications.vgg16.preprocess_input(x)

prediction = model.predict(x)
print("model:", filename, "dragon" if prediction[0][0] >= 0.99 else "notdragon")
|
``` |
|
|
|
Additionally, you can try the model interactively at https://replicate.com/hadilq/dragon-notdragon.
|
|
|
## Training procedure |
|
I trained it in Google Colab; you can find the original code in the `training` directory.
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training: |
|
|
|
| Hyperparameter | Value |
|
| :-- | :-- | |
|
| name | Adam | |
|
| weight_decay | None | |
|
| clipnorm | None | |
|
| global_clipnorm | None | |
|
| clipvalue | None | |
|
| use_ema | False | |
|
| ema_momentum | 0.99 | |
|
| ema_overwrite_frequency | None | |
|
| jit_compile | True | |
|
| is_legacy_optimizer | False | |
|
| learning_rate | 9.999999747378752e-05 | |
|
| beta_1 | 0.9 | |
|
| beta_2 | 0.999 | |
|
| epsilon | 1e-07 | |
|
| amsgrad | False | |
|
| training_precision | float32 | |
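As a minimal sketch, the Adam settings from the table can be collected into a plain config dict; the commented-out line shows how the optimizer could be rebuilt with Keras installed (that `keras.optimizers.Adam` call is an assumption based on the table values, not taken from the training code):

```python
# Adam optimizer configuration, transcribed from the hyperparameter table above.
# The logged learning rate is the float32 representation of 1e-4.
adam_config = {
    "name": "Adam",
    "learning_rate": 9.999999747378752e-05,
    "beta_1": 0.9,
    "beta_2": 0.999,
    "epsilon": 1e-07,
    "amsgrad": False,
    "use_ema": False,
    "ema_momentum": 0.99,
    "jit_compile": True,
}

# With Keras installed, the optimizer could be rebuilt like this (assumed, not
# taken from the original notebook):
# optimizer = keras.optimizers.Adam(
#     **{k: v for k, v in adam_config.items() if k != "jit_compile"}
# )
```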
|
|
|
|
|
## Model Plot |
|
|
|
<details> |
|
<summary>View Model Plot</summary> |
|
|
|
![Model Image](./model.png) |
|
|
|
</details> |
|
|