Server Error

#4
by Darkclown1 - opened

A server error is displayed when waiting for the result, both in Colab and in the Hugging Face demo. How can I resolve it?

AIRI - Artificial Intelligence Research Institute org

@Darkclown1 , hello! At the moment the Colab and Hugging Face demos are working correctly. Could you provide more information on what exactly is wrong?

Sharing an image of the error:
[attachments: E1.png, Error in the demo app.png]

How can I resolve this error?

I get the same error. Please share a solution for this.

Running the Docker image locally requires setting the SERVER environment variable, which needs to point to a gRPC server running the main HairFastGAN code from GitHub.
I was able to get it working by:

  1. Cloning https://github.com/AIRI-Institute/HairFastGAN and following the instructions there to get main.py working.
    Make sure main.py works with the instructions in the README first!
  2. Copying inference_pb2_grpc.py, inference_pb2.py, and inference_pb2.pyi from this repo into a new folder in HairFastGAN called grpc_interface.
    You will need to change the import on line 6 of inference_pb2_grpc.py to from . import inference_pb2 as inference__pb2 (see the snippet after this list).
  3. Running the script below from the HairFastGAN folder; I saved it as server.py. (It needs to import from hair_swap.py, which only works when your script is in the same folder.)
  4. Running the Docker image in a separate terminal, replacing <YOUR_LOCAL_IP_ADDRESS> with e.g. 192.168.0.2 or whatever your PC's local IP address is.
    docker run -it --rm -p 7860:7860 --platform=linux/amd64 -e SERVER="<YOUR_LOCAL_IP_ADDRESS>:50051" registry.hf.space/airi-institute-hairfastgan:latest python app.py
  5. Opening the Gradio app at http://localhost:7860.
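
For reference, the import change from step 2 should look like the snippet below. The original line shown is how protoc-generated code usually imports the messages module; it may differ slightly depending on the protoc version that generated the file.

# grpc_interface/inference_pb2_grpc.py
# Generated code typically contains something like:
#     import inference_pb2 as inference__pb2
# Change it to a relative import so it resolves inside the grpc_interface package:
from . import inference_pb2 as inference__pb2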

Note that my script only gets the basics working; it ignores all the "Advanced Options". @WideMax - it would be great for AIRI to publish an official script to serve the model, rather than the hacky thing I've pulled together.

import grpc
import logging
from concurrent import futures
from io import BytesIO

from torchvision.utils import save_image
from PIL import Image

from grpc_interface import inference_pb2
from grpc_interface import inference_pb2_grpc
from hair_swap import HairFast, get_parser


def bytes_to_image(image: bytes) -> Image.Image:
    # Decode raw image bytes from the gRPC request into a PIL image
    return Image.open(BytesIO(image))


class SwapServer(inference_pb2_grpc.HairSwapServiceServicer):
    def __init__(self):
        # Load the HairFast model with its default arguments
        self.hair_fast = HairFast(get_parser().parse_args([]))

    def swap(self, request, context):
        # Load the data
        face = bytes_to_image(request.face)
        # The sentinel b'face' means "reuse the face image as the shape reference"
        if request.shape == b'face':
            shape = face
        else:
            shape = bytes_to_image(request.shape)
        # The sentinel b'shape' means "reuse the shape image as the color reference"
        if request.color == b'shape':
            color = shape
        else:
            color = bytes_to_image(request.color)

        # Create image
        final_image = self.hair_fast.swap(face, shape, color)

        # Encode the result tensor as PNG bytes
        buffer = BytesIO()
        save_image(final_image, buffer, format='png')
        buffer.seek(0)

        return inference_pb2.HairSwapResponse(image=buffer.read())


def serve():
    port = "50051"
    # max_workers=1 so requests are handled one at a time
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=1))
    inference_pb2_grpc.add_HairSwapServiceServicer_to_server(SwapServer(), server)
    server.add_insecure_port("[::]:" + port)
    server.start()

    print("Server started, listening on " + port)
    server.wait_for_termination()


if __name__ == '__main__':
    logging.basicConfig()
    serve()
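
To sanity-check the server before launching the Docker app, a minimal client sketch along these lines should work. I'm assuming here that the request message in inference_pb2 is called HairSwapRequest and that a face.png exists in the working directory; check inference_pb2.pyi for the actual message name if this fails.

import grpc

from grpc_interface import inference_pb2
from grpc_interface import inference_pb2_grpc


def main():
    # Read the source photo as raw bytes
    with open('face.png', 'rb') as f:
        face_bytes = f.read()

    channel = grpc.insecure_channel('localhost:50051')
    stub = inference_pb2_grpc.HairSwapServiceStub(channel)

    # b'face' and b'shape' trigger the reuse fallbacks in SwapServer.swap above.
    # HairSwapRequest is my assumption - check inference_pb2.pyi for the real name.
    request = inference_pb2.HairSwapRequest(face=face_bytes, shape=b'face', color=b'shape')
    response = stub.swap(request)

    # The response carries the PNG-encoded result image
    with open('result.png', 'wb') as f:
        f.write(response.image)


if __name__ == '__main__':
    main()

Run python server.py in the HairFastGAN folder first, then run this client from the same folder in another terminal; if result.png comes back looking right, the Docker app should connect fine too.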
