---
license: mit
language:
- en
widget:
- text: "A nervous passenger is about to book a flight ticket, and he asks the airlines' ticket seller, 'I hope your planes are safe. Do they have a good track record for safety?' The airline agent replies, 'Sir, I can guarantee you, we've never had a plane that has crashed more than once.'"
  example_title: "A joke"
- text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book."
  example_title: "Not a joke"
---

### What is this?
This model detects "narrative-style" jokes, stories, and anecdotes (i.e., humor told as a story) in speeches, conversations, and similar settings. It is based on Facebook's [MUPPET RoBERTa-base](https://huggingface.co/facebook/muppet-roberta-base).

This model has not been trained or tested on one-liners, puns, or Reddit-style language-manipulation jokes such as knock-knock or Q&A jokes.

See the example in the inference widget or in the How to use section below for what constitutes a narrative-style joke.

### Install these first
You'll need to install `transformers` and `torch`, and possibly `sentencepiece` depending on your environment.
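
A typical install (exact package needs may vary with your setup):

```bash
pip install transformers torch sentencepiece
```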

### How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = '/path/to/model'
max_seq_len = 510  # leaves room for the two special tokens within RoBERTa's 512-token limit

tokenizer = AutoTokenizer.from_pretrained(model_name, model_max_length=max_seq_len)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = """A nervous passenger is about to book a flight ticket, and he asks the airlines' ticket seller, "I hope your planes are safe. Do they have a good track record for safety?" The airline agent replies, "Sir, I can guarantee you, we've never had a plane that has crashed more than once." """
hypothesis = ""  # the second sequence is left empty; the model scores the premise alone

# Tokenize, move all tensors to the chosen device, and run the classifier
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
inputs = {key: value.to(device) for key, value in inputs.items()}
output = model(**inputs)

# Index 0 = not a joke, index 1 = joke
prediction = torch.softmax(output["logits"][0], -1).tolist()
is_joke = prediction[1] > prediction[0]

print(is_joke)
```
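
To classify several texts at once, the same model call works on a batched, padded encoding. The snippet below is a minimal sketch reusing the `tokenizer`, `model`, and `device` from the block above and the two widget examples; the `classify` helper name is illustrative, not part of the model's API.

```python
def classify(texts):
    # Batch-encode with padding so all sequences fit in one tensor
    enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
    enc = {key: value.to(device) for key, value in enc.items()}
    with torch.no_grad():
        logits = model(**enc)["logits"]
    probs = torch.softmax(logits, dim=-1)
    return [bool(p[1] > p[0]) for p in probs]  # index 1 = joke

examples = [
    premise,  # the narrative joke from the previous block
    "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.",
]
print(classify(examples))  # expected: [True, False]
```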