
I am new to AWS SageMaker. I have a custom CV PyTorch model locally and have deployed it to a SageMaker endpoint. I used a custom inference.py to define the model_fn, input_fn, predict_fn and output_fn methods, so I am able to generate predictions on JSON input containing a URL to the image. The code is quite straightforward:

import json
import logging

import requests
from PIL import Image
from torchvision import transforms


def input_fn(request_body, content_type='application/json'):

    logging.info('Deserializing the input data...')

    image_transform = transforms.Compose([
        transforms.Resize(size=(224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])

    if content_type == 'application/json':
        input_data = json.loads(request_body)
        url = input_data['url']
        logging.info(f'Image url: {url}')
        # Stream the image from the URL and decode it with Pillow
        image_data = Image.open(requests.get(url, stream=True).raw)
        return image_transform(image_data)

    raise ValueError(f'Unsupported ContentType: {content_type}')

Then I can invoke the endpoint with:

client = boto3.client('runtime.sagemaker')

response = client.invoke_endpoint(EndpointName='ENDPOINT_NAME',
                                  Body=json.dumps({'url': url}),
                                  ContentType='application/json')

The problem is that, locally, the URL request returns a slightly different image array than it does on SageMaker, which is why the same URL yields slightly different predictions. To check that at least the model weights are the same, I want to generate predictions on the image itself, downloaded both locally and on SageMaker. But I fail when trying to pass the image as input to the endpoint. E.g.:

from io import BytesIO


def input_fn(request_body, content_type='application/json'):

    logging.info('Deserializing the input data...')

    image_transform = transforms.Compose([
        transforms.Resize(size=(224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])

    if content_type == 'application/x-image':
        # request_body arrives as raw bytes, so decode it with Pillow
        image_data = Image.open(BytesIO(request_body))
        return image_transform(image_data)

    raise ValueError(f'Unsupported ContentType: {content_type}')

When I invoke the endpoint, I get the following error:

ParamValidationError: Parameter validation failed: Invalid type for parameter Body, value: {'img': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=630x326 at 0x7F78A61461D0>}, type: <class 'dict'>, valid types: <class 'bytes'>, <class 'bytearray'>, file-like object

Does anybody know how to generate SageMaker predictions on raw images with a PyTorch model?
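For context, one way to check whether the two environments even receive identical bytes would be a hash comparison. The digest helper below is just an illustrative sketch, not part of my actual inference.py:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest of raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Locally:
#   import requests
#   print(digest(requests.get(url).content))
# In input_fn, log the same digest of the downloaded bytes and compare
# the two values in CloudWatch: identical digests would mean the bytes
# match, and any drift comes from decoding or the transforms instead.
```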

1 Answer

As always, shortly after asking I found a solution. As the error suggested, I had to convert the input to bytes or a bytearray. For those who may need it:

from io import BytesIO

import boto3
from PIL import Image

img = Image.open(PATH)
img_byte_arr = BytesIO()
img.save(img_byte_arr, format=img.format)  # re-encode in the original format
img_byte_arr = img_byte_arr.getvalue()

client = boto3.client('runtime.sagemaker')

response = client.invoke_endpoint(EndpointName='ENDPOINT_NAME',
                                  Body=img_byte_arr,
                                  ContentType='application/x-image')
print(response['Body'].read())
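The body comes back as a raw byte stream. Assuming your output_fn serializes the prediction as JSON (this depends on your inference.py), a small helper like this sketch can decode it, falling back to the raw bytes otherwise:

```python
import json

def parse_prediction(raw: bytes):
    """Decode a JSON-serialized prediction body; fall back to raw bytes."""
    try:
        return json.loads(raw)
    except ValueError:
        return raw

# Usage: parse_prediction(response['Body'].read())
```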
