lovetillion committed · verified
Commit 0ded642 · 1 Parent(s): 3a0bbab

Update README.md


Since the model already has some downloads, this makes it easier for new users to get started.

Files changed (1)
  1. README.md +49 -3
README.md CHANGED
@@ -1,3 +1,49 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ ---
+ # About
+ This is a fork of MichalMlodawski/nsfw-image-detection-large, which became unavailable.
+
+ # Usage example
+ ```python
+ from PIL import Image
+ import torch
+ from transformers import AutoProcessor, FocalNetForImageClassification
+
+ # Use the GPU when available, otherwise fall back to the CPU
+ DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ model_path = "lovetillion/nsfw-image-detection-large"
+
+ # Load the model and feature extractor
+ feature_extractor = AutoProcessor.from_pretrained(model_path)
+ model = FocalNetForImageClassification.from_pretrained(model_path).to(DEVICE)
+ model.eval()
+
+ # Mapping from model labels to NSFW categories
+ label_to_category = {
+     "LABEL_0": "Safe",
+     "LABEL_1": "Questionable",
+     "LABEL_2": "Unsafe"
+ }
+
+ # Classify a single image
+ filename = "example.png"
+ image = Image.open(filename).convert("RGB")
+ inputs = feature_extractor(images=image, return_tensors="pt").to(DEVICE)
+
+ with torch.no_grad():
+     outputs = model(**inputs)
+     probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)
+     confidence, predicted = torch.max(probabilities, 1)
+
+ # Report the predicted category and its confidence (in percent)
+ label = model.config.id2label[predicted.item()]
+ category = label_to_category.get(label, label)
+ print(category, confidence.item() * 100, filename)
+ ```
+
+ # For more information
+
+ * Live demonstration in a production ensemble workflow: https://piglet.video
+ * Results from our ethical AI whitepaper: https://lovetillion.org/liaise.pdf
+ * Join us on Telegram at https://t.me/pigletproject
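
As a minimal follow-up sketch, the usage example above can be adapted to scan a folder and report only images that do not come back as Safe. The `images/` directory, the `classify` helper, and the `.png` glob are illustrative assumptions; the category names reuse the `label_to_category` mapping from the usage example.

```python
from pathlib import Path

from PIL import Image
import torch
from transformers import AutoProcessor, FocalNetForImageClassification

DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_path = "lovetillion/nsfw-image-detection-large"

# Same model and processor as in the usage example
feature_extractor = AutoProcessor.from_pretrained(model_path)
model = FocalNetForImageClassification.from_pretrained(model_path).to(DEVICE)
model.eval()

label_to_category = {
    "LABEL_0": "Safe",
    "LABEL_1": "Questionable",
    "LABEL_2": "Unsafe"
}


def classify(path):
    # Return (category, confidence in percent) for one image file
    image = Image.open(path).convert("RGB")
    inputs = feature_extractor(images=image, return_tensors="pt").to(DEVICE)
    with torch.no_grad():
        logits = model(**inputs).logits
    probabilities = torch.nn.functional.softmax(logits, dim=-1)
    confidence, predicted = torch.max(probabilities, 1)
    label = model.config.id2label[predicted.item()]
    return label_to_category.get(label, label), confidence.item() * 100


# Scan the (assumed) images/ folder and print only flagged files
for path in sorted(Path("images").glob("*.png")):
    category, confidence = classify(path)
    if category != "Safe":
        print(f"{path}: {category} ({confidence:.1f}%)")
```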