added images and updated readme
- README.md +15 -1
- teaser-int8.jpg +0 -0
README.md CHANGED
@@ -11,9 +11,23 @@ pipeline_tag: text-to-image
 Model Descriptions:
 
 This repo contains OpenVino model files for SimianLuo's LCM_Dreamshaper_v7 int8 quantized.
-This model is 1.4x faster than float32 model.
+This 8-bit model is **1.4x** faster than the `float32` model.
 
+## Generation Results:
 
+<p align="center">
+<img src="teaser-int8.jpg">
+</p>
+
+## Usage
+You can try out the model using [Fast SD CPU](https://github.com/rupeshs/fastsdcpu)
+
+To run the model yourself, you can leverage the optimum-intel library:
+1. Install the library:
+```
+pip install optimum-intel
+```
+2. Run the model:
 ```py
 from optimum.intel import OVLatentConsistencyModelPipeline
 
teaser-int8.jpg ADDED
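
For reference, a minimal usage sketch of the flow the new README introduces: loading the int8 OpenVINO files with optimum-intel's `OVLatentConsistencyModelPipeline` and generating one image. The repo id, prompt, and generation parameters below are illustrative placeholders, not values taken from this commit.

```py
from optimum.intel import OVLatentConsistencyModelPipeline

# Load the quantized OpenVINO pipeline.
# NOTE: "your-namespace/lcm-dreamshaper-v7-openvino-int8" is a placeholder;
# replace it with the actual id of this repository.
pipeline = OVLatentConsistencyModelPipeline.from_pretrained(
    "your-namespace/lcm-dreamshaper-v7-openvino-int8"
)

# LCM checkpoints converge in very few denoising steps,
# so a small num_inference_steps is enough.
image = pipeline(
    prompt="a portrait of a red fox in a forest, detailed, soft light",  # example prompt
    num_inference_steps=4,
    guidance_scale=8.0,
).images[0]

image.save("result.png")
```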