Avijit Ghosh committed · 30b822d · Parent(s): 5bb95f3

added SLD
- Images/SLD1.png +0 -0
- Images/SLD2.png +0 -0
- configs/safelatentdiff.yaml +17 -0
Images/SLD1.png
ADDED
Images/SLD2.png
ADDED
configs/safelatentdiff.yaml
ADDED
@@ -0,0 +1,17 @@
+Abstract: "Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer, as we demonstrate, from degenerated and biased human behavior. In turn, they may even reinforce such biases. To help combat these undesired side effects, we present safe latent diffusion (SLD). Specifically, to measure the inappropriate degeneration due to unfiltered and imbalanced training sets, we establish a novel image generation test bed, inappropriate image prompts (I2P), containing dedicated, real-world image-to-text prompts covering concepts such as nudity and violence. As our exhaustive empirical evaluation demonstrates, the introduced SLD removes and suppresses inappropriate image parts during the diffusion process, with no additional training required and no adverse effect on overall image quality or text alignment."
+Applicable Models:
+- Stable Diffusion
+Authors: Patrick Schramowski, Manuel Brack, Björn Deiseroth, Kristian Kersting
+Considerations: What is considered appropriate or inappropriate varies strongly across cultures and is highly context-dependent.
+Datasets: https://huggingface.co/datasets/AIML-TUDA/i2p
+Group: CulturalEvals
+Hashtags: .nan
+Link: 'Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models'
+Modality: Image
+Screenshots:
+- Images/SLD1.png
+- Images/SLD2.png
+Suggested Evaluation: Evaluating text-to-image models for safety
+Level: Output
+URL: https://arxiv.org/pdf/2211.05105.pdf
+What it is evaluating: Generating images for a diverse set of prompts (the novel I2P benchmark) and measuring how often inappropriate (e.g., violent or nude) images are generated. A distinction is drawn between implicit and explicit safety, i.e., unsafe results produced by “normal” prompts versus explicitly unsafe ones.
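For reference, a minimal sketch of how this evaluation might be run. SLD ships with the diffusers library as StableDiffusionPipelineSafe, and the I2P prompts are available via the dataset linked above; the checkpoint id "AIML-TUDA/stable-diffusion-safe", the "train" split, the "prompt" column name, and the SafetyConfig presets follow the Hugging Face documentation and may differ across library versions.

import torch
from datasets import load_dataset
from diffusers import StableDiffusionPipelineSafe
from diffusers.pipelines.stable_diffusion_safe import SafetyConfig

# Load the I2P test bed linked under "Datasets" above.
i2p = load_dataset("AIML-TUDA/i2p", split="train")

# SLD is a drop-in Stable Diffusion pipeline: the safety guidance is applied
# during the diffusion process itself, so no additional training is needed.
pipe = StableDiffusionPipelineSafe.from_pretrained(
    "AIML-TUDA/stable-diffusion-safe",  # assumed checkpoint id
    torch_dtype=torch.float16,
).to("cuda")

# Preset suppression strength; WEAK / MEDIUM / STRONG / MAX trade off how
# aggressively inappropriate concepts are steered away from during sampling.
safety_kwargs = SafetyConfig.MEDIUM

for i, row in enumerate(i2p.select(range(8))):  # small subset for illustration
    image = pipe(prompt=row["prompt"], **safety_kwargs).images[0]
    image.save(f"sld_{i:03d}.png")

Scoring how often inappropriate content appears then requires classifying the generated images; the paper combines a Q16 classifier with a nudity detector for that step, which is not sketched here.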