import streamlit as st
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

st.title('Question-Answering NLU')
st.sidebar.title('Navigation')
menu = st.sidebar.radio(
    "",
    options=["Introduction", "Parsing NLU data into SQuAD 2.0", "Generating Questions", "Training", "Evaluation"],
    index=0,
)
if menu == "Introduction":
    st.markdown('''
Question Answering NLU (QANLU) is an approach that maps the NLU task into question answering,
leveraging pre-trained question-answering models to perform well in few-shot settings. Instead of
training an intent classifier or a slot tagger, for example, we can ask the model intent- and
slot-related questions in natural language:
```
Context : I'm looking for a cheap flight to Boston.
Question: Is the user looking to book a flight?
Answer  : Yes
Question: Is the user asking about departure time?
Answer  : No
Question: What price is the user looking for?
Answer  : cheap
Question: Where is the user flying from?
Answer  : (empty)
```
Thus, by asking natural-language questions for each intent and slot, we can effectively construct an NLU hypothesis. For more details,
please read the paper:
[Language model is all you need: Natural language understanding as question answering](https://assets.amazon.science/33/ea/800419b24a09876601d8ab99bfb9/language-model-is-all-you-need-natural-language-understanding-as-question-answering.pdf).

In this Space, we will see how to transform [MultiATIS++](https://github.com/amazon-research/multiatis)
NLU data (e.g. utterances and intent / slot annotations) into [SQuAD 2.0 format](https://rajpurkar.github.io/SQuAD-explorer/explore/v2.0/dev/)
question-answering data that can be used by QANLU. MultiATIS++ includes
the original English version of ATIS and translations into eight languages: German, Spanish, French,
Japanese, Hindi, Portuguese, Turkish, and Chinese.
''')
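
    # Illustrative sketch (an addition, not part of the original Space): roughly how the
    # price slot question above could be expressed as a SQuAD 2.0-style record. The field
    # names follow the SQuAD 2.0 schema; the ids and the 'Yes. No. ' context prefix are
    # assumptions about how QANLU prepares its data, not the exact output of its converter.
    st.markdown('For example, the price slot question above could become a SQuAD 2.0-style record like this:')
    st.json({
        "context": "Yes. No. I'm looking for a cheap flight to Boston.",
        "qas": [
            {
                "id": "example-price-slot",
                "question": "What price is the user looking for?",
                "answers": [{"text": "cheap", "answer_start": 27}],
                "is_impossible": False,
            },
            {
                "id": "example-origin-slot",
                "question": "Where is the user flying from?",
                "answers": [],
                "is_impossible": True,
            },
        ],
    })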
elif menu == "Evaluation":
    st.header('QANLU Evaluation')

    # Load the QANLU model and build an extractive question-answering pipeline.
    tokenizer = AutoTokenizer.from_pretrained("AmazonScience/qanlu", use_auth_token=True)
    model = AutoModelForQuestionAnswering.from_pretrained("AmazonScience/qanlu", use_auth_token=True)
    qa_pipeline = pipeline('question-answering', model=model, tokenizer=tokenizer)
    context = st.text_input(
        'Please enter the context:',
        value="I want a cheap flight to Boston."
    )
    question = st.text_input(
        'Please enter the question:',
        value="What is the destination?"
    )
    # 'Yes. No. ' is prepended so the extractive QA model can answer yes/no (intent)
    # questions by selecting the 'Yes' or 'No' span from the context.
    qa_input = {
        'context': 'Yes. No. ' + context,
        'question': question
    }
    if st.button('Ask QANLU'):
        answer = qa_pipeline(qa_input)
        st.write(answer)
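
        # Hypothetical extension (a sketch, not part of the original Space): ask one question
        # per slot against the same context and collect the answers into a simple NLU
        # hypothesis. The slot questions, the 0.5 score threshold and the ''/'No' filtering
        # are illustrative assumptions, not the settings used in the paper.
        slot_questions = {
            'price': 'What price is the user looking for?',
            'destination': 'What is the destination?',
            'origin': 'Where is the user flying from?',
        }
        hypothesis = {}
        for slot, slot_question in slot_questions.items():
            slot_answer = qa_pipeline({'context': 'Yes. No. ' + context, 'question': slot_question})
            # Keep a span only if the model is reasonably confident and did not answer 'No' or nothing.
            if slot_answer['score'] > 0.5 and slot_answer['answer'].strip() not in ('', 'No'):
                hypothesis[slot] = slot_answer['answer']
        st.write('Slot hypothesis (illustrative):', hypothesis)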