BERT Model - OSError

When running this BERT model, it outputs an OSError. The model is "nlptown/bert-base-multilingual-uncased-sentiment".

Looking at the two recommended solutions, I'm not 100% sure whether either applies. For the second one, the file I see missing from the uploaded model on Hugging Face is "model.ckpt", but I'm not sure if that matters.

Below are the code and error message:

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import requests
from bs4 import BeautifulSoup
import re

## Instantiate Model
# The pre-trained BERT model used here is from Hugging Face:
# https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment

tokenizer = AutoTokenizer.from_pretrained('nlptown/bert-base-multilingual-uncased-sentiment')  
model = AutoModelForSequenceClassification.from_pretrained('nlptown/bert-base-multilingual-uncased-sentiment')


## Encode and Calculate Sentiment (testing with a sample string)

tokens = tokenizer.encode('I hated this, absolutely the worst', return_tensors='pt')
result = model(tokens)

print(result)

Error message below:
" OSError: Can't load the model for 'nlptown/bert-base-multilingual-uncased-sentiment'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'nlptown/bert-base-multilingual-uncased-sentiment' is the correct path to a directory containing a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack. "

Was a solution ever found for this issue? I'm facing the same problem.

Hey folks :wave:

I can’t reproduce the issue you described (see this colab).

My suggestions include:

  1. Make sure you don’t have a local folder with the same name
  2. Create a fresh python environment and install a recent version of transformers
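
For suggestion 1, a quick sanity check is to look for a local directory that shadows the Hub id before calling from_pretrained. This is only a sketch (the helper name `shadows_hub_id` is mine, not part of transformers): from_pretrained prefers an existing local path over the Hub, so a relative directory named like the model id but missing the weight files produces exactly the OSError above.

```python
import os

def shadows_hub_id(model_id: str) -> bool:
    # from_pretrained() treats an existing local path as a directory of
    # weights, so a folder whose relative path equals the Hub id (here
    # "nlptown/bert-base-multilingual-uncased-sentiment") is loaded
    # instead of the Hub repo and fails if the weight files are absent.
    return os.path.isdir(model_id)

model_id = "nlptown/bert-base-multilingual-uncased-sentiment"
if shadows_hub_id(model_id):
    print(f"Local directory '{model_id}' shadows the Hub repo; rename or remove it.")
else:
    print("No shadowing directory; from_pretrained() will fetch from the Hub.")
```

Running this from the same working directory as the failing script tells you whether suggestion 1 applies before you rebuild anything.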

I was having a similar error and managed to fix it by guarding the entry point of my script with if __name__ == "__main__":. If your error is the same as mine, you should see

RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.

further above in your stack trace. Without the if __name__ == "__main__": guard, each child process re-runs the module-level code that loads the model, spawning further children and causing an infinite loop. This also explains why @joaogante could not reproduce the issue in a Jupyter notebook.
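
To make the fix concrete, here is a minimal, runnable sketch of the pattern. The model loading is replaced by a `load_model` stand-in so the example has no dependencies; in the real script, the tokenizer/model `from_pretrained` calls go where `load_model` is called, inside the guarded `main()`.

```python
import multiprocessing as mp

def load_model():
    # Stand-in for the expensive AutoModelForSequenceClassification
    # .from_pretrained(...) call; this is the code that must not be
    # re-run by child processes.
    return "model"

def child_task(q):
    q.put("child done")

def main():
    model = load_model()  # parent-only work
    ctx = mp.get_context("spawn")
    q = ctx.Queue()
    p = ctx.Process(target=child_task, args=(q,))
    p.start()
    result = q.get()
    p.join()
    print(model, "/", result)

if __name__ == "__main__":
    # Under the "spawn" start method each child re-imports this module,
    # re-running all module-level code. Without this guard the child
    # would call main() again, load the model again, and spawn yet
    # another child -- the infinite loop from the RuntimeError above.
    main()
```

Because the child re-imports the module with __name__ set to something other than "__main__", only the parent ever calls main(), so the model is loaded exactly once.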

Hope this helps someone in the future.
