Llama 2 Inference Endpoint Stopped Working

I have been using the Inference API for Llama 2 70B successfully for several months, but today (11/13) it suddenly stopped working. I keep getting 'Model meta-llama/Llama-2-70b-chat-hf is currently loading', but the model never finishes loading. It has been like this since about 4 PM EST, and it is now 10 PM EST.

Any idea what the problem is?

Update: the service is working again as of 11/14, 7:40 AM EST.

This is the full error response I am receiving:

{'error': 'Model meta-llama/Llama-2-70b-chat-hf is currently loading',
 'estimated_time': 5518.13232421875}
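For anyone hitting the same response: while a model is spinning up, the Inference API returns an error body with an `estimated_time` field like the one above. A minimal sketch of client-side handling, assuming a hypothetical helper name (`parse_loading_error`) and an arbitrary sleep cap of 60 seconds (the `estimated_time` here is ~92 minutes, so sleeping the full estimate would stall the client):

```python
import time


def parse_loading_error(payload, cap=60.0):
    """Return (is_loading, wait_seconds) from an Inference API error payload.

    While the model is loading, the API responds with a JSON body such as
    {'error': 'Model ... is currently loading', 'estimated_time': 5518.13}.
    `cap` bounds how long we are willing to sleep before retrying.
    """
    loading = "currently loading" in payload.get("error", "")
    wait = min(float(payload.get("estimated_time", cap)), cap)
    return loading, wait


# Example using the exact payload from the post:
resp = {
    "error": "Model meta-llama/Llama-2-70b-chat-hf is currently loading",
    "estimated_time": 5518.13232421875,
}
loading, wait = parse_loading_error(resp)
print(loading, wait)  # True 60.0 (estimate capped at 60 s)
```

Alternatively, the Inference API accepts `"options": {"wait_for_model": true}` in the request payload to block until the model is ready, though with an estimate this large a capped retry loop is usually kinder to the client. Note that none of this helps when the model genuinely never loads, as happened here; that was a service-side issue.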