Analyzing images using AWS Rekognition

In my last blog you could read about setting up a serverless microservice, and in this blog I would like to touch on something serverless again. Image analysis is a complex and tedious process that requires a lot of machine learning to get right, but fortunately Amazon has a solution for this called Rekognition. It's super easy and works by taking an image from an S3 bucket that you have already created and running it through Amazon's own engine, trained on what is probably millions upon millions of photographs. In the next examples I will show you how to do an analysis using Boto3 (the Python SDK for AWS).

First off, set up your AWS credentials; you can do this by following some of the steps in the previous blog here. After setting this up, we will need an S3 bucket to put the images in. In my example the S3 bucket will be called "rekog-image".

Let's create a virtualenv, activate it, and install boto3 inside it.

# Create a folder for our test
mkdir image-rekognition
cd image-rekognition
# Create the virtualenv
virtualenv env
# Activate the virtualenv
source env/bin/activate
# Install boto3 for the communication with AWS
pip install boto3

After the steps above, you can upload a picture to your S3 bucket and start analyzing it. I will be using the image of this dog:

(Image: a supercute dog)

Great! Let's make the script. I will call it rekog.py, but you can name it whatever you like. The content of the file should be the following:

import boto3

# Create a Rekognition client; credentials and region come from your AWS config
client = boto3.client('rekognition')

response = client.detect_labels(
    Image={
        'S3Object': {
            'Bucket': 'rekog-image',
            'Name': 'dog.jpg',
        }
    },
    MaxLabels=123,    # upper bound on the number of labels returned
    MinConfidence=90  # only return labels with at least 90% confidence
)

print(response)
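
The raw response is a nested dictionary. Per the documented detect_labels response shape, it contains a Labels list where each entry has a Name and a Confidence; a small helper like this (hypothetical, not part of the script above) pulls out just the label names:

```python
def label_names(response, min_confidence=90):
    """Return the names of detected labels at or above min_confidence."""
    return [
        label['Name']
        for label in response.get('Labels', [])
        if label['Confidence'] >= min_confidence
    ]

# Example with a trimmed-down response of the documented shape:
sample = {
    'Labels': [
        {'Name': 'Dog', 'Confidence': 99.1},
        {'Name': 'Beagle', 'Confidence': 95.4},
        {'Name': 'Furniture', 'Confidence': 62.0},
    ]
}
print(label_names(sample))  # ['Dog', 'Beagle']
```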

For the script to work, make sure your user has the correct access rights set up, otherwise it will not work. For how to do so, check the AWS IAM user guide.
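
As a rough sketch, an identity policy for this walkthrough could look like the following (the bucket name matches the one used in this post; adjust it to your own):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rekognition:DetectLabels",
                "rekognition:DetectFaces"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::rekog-image/*"
        }
    ]
}
```

The Rekognition detect actions do not support resource-level permissions, so their Resource is "*"; the S3 statement only grants read access to objects in the bucket Rekognition pulls from.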

Run the script with the following command, which will call Rekognition:

python rekog.py

This will give you a response with the labels that are found within the image (in my case it will even tell me that it's a Beagle!). Another cool thing is that you can detect faces. To do this, upload an image with a face to S3 and change the contents of your script to the following:

import boto3

client = boto3.client('rekognition')

response = client.detect_faces(
    Image={
        'S3Object': {
            'Bucket': 'rekog-image',
            'Name': '',  # fill in the key of the face image you uploaded
        }
    },
    Attributes=[
        'ALL',  # return all facial attributes, not just the default subset
    ]
)

print(response)

This way you will get back the coordinates of the facial landmarks, and even an age range for the person in the picture.
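
Using the documented detect_faces response shape (a FaceDetails list with an AgeRange and Landmarks per face), a hypothetical helper like this one summarizes each detected face:

```python
def summarize_faces(response):
    """Return the age range and landmark count for each detected face."""
    summaries = []
    for face in response.get('FaceDetails', []):
        age = face.get('AgeRange', {})
        summaries.append({
            'age_low': age.get('Low'),
            'age_high': age.get('High'),
            'landmarks': len(face.get('Landmarks', [])),
        })
    return summaries

# Example with a trimmed-down response of the documented shape:
sample = {
    'FaceDetails': [{
        'AgeRange': {'Low': 26, 'High': 38},
        'Landmarks': [
            {'Type': 'eyeLeft', 'X': 0.42, 'Y': 0.31},
            {'Type': 'eyeRight', 'X': 0.58, 'Y': 0.30},
        ],
    }]
}
print(summarize_faces(sample))
# [{'age_low': 26, 'age_high': 38, 'landmarks': 2}]
```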

I hope this helps you further when you are looking for simple image analysis! If you have any questions about this blog post, please reach out in the comments below or at amer@livebyt.es.

Have you checked out the Bits vs Bytes podcast yet? Click here to listen to it.

By Amer Grgic

Amer Grgic is the founder of Livebytes and hosts the Bits vs Bytes Podcast. Interested in all things technology, leadership, and business.
