AWS Partner Network (APN) Blog

How to Deploy AI Inference on the Edge with the LG AIoT Board and AWS IoT Greengrass

By Minsung Kim, Sr. IoT Architect at AWS
By Vanitha Ramaswami, IoT Partner Solutions Architect at AWS
By Kyuil Kim, Sr. Research Engineer at LG Electronics
By Eunjung Lim, Sr. Research Engineer at LG Electronics

With so many cloud applications infused with artificial intelligence (AI) and machine learning (ML) capabilities, AI/ML is being democratized by cloud services. The growth of AI in a wide range of applications demands more purpose-built processors to provide scalable levels of performance, flexibility, and efficiency.

The LG AIoT board helps customers accelerate their computer vision and ML journey using Amazon Web Services (AWS). OEMs can now easily incorporate visual intelligence, voice intelligence, and control intelligence into their products.

The LG Neural Engine (LNE) in the LG AIoT board offloads the compute requirements of deep learning algorithms to a specially designed processor that delivers 1 TFLOPS of compute performance. The LG AIoT board implements on-device AI to learn and perform real-time inference, which is essential for performance-critical applications.

In this post, we will show you how to build a simple AI-enabled application with AWS IoT Greengrass that takes advantage of the hardware AI acceleration on the LG AIoT board. AWS IoT Greengrass extends AWS on your device and offers the cloud programming model and tools at the edge.

We’ll also demonstrate how different ML models can be controlled from the cloud with AWS edge computing services and AWS IoT services.

Why Embed AI into Hardware

Embedding artificial intelligence into hardware such as cameras, home appliances, and other sensor-based devices is leading to the development of a new breed of applications that can make intelligent decisions at the edge.

Figure 1 – The LG8111 AI chip.

Figure 2 – Reference LG AIoT board embedding the LG8111 AI chip.

For example, an AI-enabled camera with built-in facial recognition has the potential to solve a wide range of problems across several application domains:

  • In-home appliances like robotic vacuum cleaners that use object detection can detect lost items during cleaning and generate alerts when items are misplaced. With facial recognition, more personalized services can be designed into connected home appliances, such as playing a family member’s preferred video or music, or setting a customized room temperature based on individual preferences.
  • AI object classification can enable smart farming, reducing labor costs and increasing yield through crop monitoring systems such as weed detection and crop quality checks.
  • Applications for real-time inventory management and time-of-day staffing are transforming the customer experience in retail stores and quality assurance on production lines.

Solution Overview

In the demo application, we’ll show you how to set up edge actions that do ML inferencing locally and send the inference results to AWS IoT for real-time monitoring and analysis in the cloud. AWS IoT also offers the capability to remotely select the relevant ML model depending on the application’s needs.

Figure 3 – Solution overview.

The demo application is configured to work with three ML network models:

  • Face detection with Mtcnn (Figure 4)
  • Object classification with MobileNet (Figure 5)
  • Object detection with Tiny-Yolo (Figure 6)

Figure 4 – Face detection with Mtcnn.

Figure 5 – Object classification with MobileNet.

Figure 6 – Object detection with Tiny-Yolo.

The application logic runs as an AWS Lambda function on AWS IoT Greengrass. Local users can see which objects are detected in real time by using a web browser that connects to the Lambda function on AWS IoT Greengrass.

An external user or an application can switch the ML network models by publishing a simple control message, and also subscribe to inference results. Improved ML models can be remotely deployed anytime by sending a command to AWS IoT Greengrass.

We’ll show you how to use all three ML models in this demo application. Later, you can tailor our application further to suit your needs.

How it Works

AWS IoT Greengrass runs on the LG AIoT board, and all operations are handled by the local Lambda function. The local inference is done by the LG Neural Engine (LNE), and the application logic runs as an event-driven service.

With AWS Lambda, you can easily develop, update, or delete an application remotely at any time. In this demo application, we will demonstrate how to:

  • Change the network model.
  • Send inference results to the cloud.
  • Run a simple Flask-based web server for displaying inference results.

When the LG AIoT board starts, it first initializes the machine learning network models. It then continuously captures images, performs near real-time inferencing, and reports the results back to the cloud.

The Lambda function has access to the camera and the LNE, the key piece of hardware that accelerates inferencing.
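
Figures 8 and 9 later in this post show how the camera and the ML model are attached to the AWS IoT Greengrass group as local and machine learning resources in the console. If you prefer to script this step, a minimal sketch using the Greengrass V1 API in boto3 might look like the following; the resource names, device path, and S3 URI are placeholder assumptions, and the LNE device node would be attached the same way as the camera, using the device path for your board.

import boto3

gg = boto3.client('greengrass')

# Minimal resource definition sketch (Greengrass V1 API).
# SourcePath and S3Uri below are placeholders; substitute the camera
# device node and model location used in your own setup.
gg.create_resource_definition(InitialVersion={
    'Resources': [
        {
            'Id': 'camera',
            'Name': 'Camera',
            'ResourceDataContainer': {
                'LocalDeviceResourceData': {
                    'SourcePath': '/dev/video0',
                    'GroupOwnerSetting': {'AutoAddGroupOwner': True}
                }
            }
        },
        {
            'Id': 'tiny-yolo-model',
            'Name': 'TinyYoloModel',
            'ResourceDataContainer': {
                'S3MachineLearningModelResourceData': {
                    'S3Uri': 's3://my-model-bucket/models/tiny-yolo.lne',
                    'DestinationPath': '/home/ubuntu/models'
                }
            }
        }
    ]
})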

Figure 7 – Sequence diagram of the AWS Lambda function.

How to Build It

These are the steps to build our demo application.

Prerequisites

To follow along, you need an AWS account, an LG AIoT board with an attached camera, and a local network connection to the board.

Steps

  • Download the application from GitHub. We have provided both English and Korean versions, plus a README file.
  • Prepare a pre-trained network model, or train your own model using Amazon SageMaker. Use the ML framework of your choice, such as TensorFlow or Caffe.
  • Convert the network model to the format required by the LG Neural Engine; the converted file ends in .lne.
  • Create an AWS IoT Greengrass group, and download the AWS IoT Greengrass core security resources and configuration files to the LG AIoT board.
  • Upload the .lne file to a bucket in Amazon Simple Storage Service (Amazon S3), and the application code to AWS Lambda.
  • Configure the Lambda function, the AWS IoT Greengrass subscriptions, and the ML and local resources on the AWS IoT Greengrass group.

Figure 8 – Local resource settings.

Figure 9 – Machine learning resource settings.

  • Next, deploy the AWS IoT Greengrass group to the LG AIoT board (a scripted sketch of this step follows this list).
  • Run the AI application.
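
If you would rather script the upload and deployment steps than use the console, a minimal boto3 sketch might look like this; the bucket name and group ID are placeholders.

import boto3

# Upload the converted model (bucket name and key are placeholders).
s3 = boto3.client('s3')
s3.upload_file('tiny-yolo.lne', 'my-model-bucket', 'models/tiny-yolo.lne')

# Deploy the latest version of the Greengrass group (V1 API).
gg = boto3.client('greengrass')
group_id = 'my-group-id'  # placeholder
latest_version = gg.get_group(GroupId=group_id)['LatestVersion']
gg.create_deployment(
    GroupId=group_id,
    GroupVersionId=latest_version,
    DeploymentType='NewDeployment'
)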

Once you deploy the model with its corresponding AI resource (LNE), you can see the results with a web browser or by subscribing to messages.

Following is a sample of the inferencing and visualization part of the Lambda function. You can adapt this code to meet your application requirements.

…
#Init network models
from network.mtcnn import Mtcnn
from network.mobilenet import MobileNet
from network.tinyyolo_lite import TinyYoloNet
#Init network models
…
def run_lne():
    try:
        global select_network
        global answer
        #Init network models
        lne_mtcnn = Mtcnn('./lib/libdq1face.so')
        lne_mobile = MobileNet('/home/ubuntu/models/mobilenet.lne', './labels/labels.txt')
        lne_tinyyolo = TinyYoloNet('/home/ubuntu/models/tiny-yolo.lne')
        #Init network models

        cap = cv2.VideoCapture(0)
        while cap.isOpened():
            ret, frame = cap.read()
            frame = camera_convert(frame)
            if select_network == '1':
                #Execute inference 
                input_img = lne_mtcnn.resize_img(frame)
                answer_mtcnn = lne_mtcnn.inference(input_img)
                answer = str(answer_mtcnn)
                network_img = lne_mtcnn.post_draw(input_img, answer_mtcnn)
                #Execute inference
            …
            #Visualize results with Flask 
            yield (b'--frame\r\n'
                b'Content-Type: image/jpeg\r\n\r\n' + network_img + b'\r\n')
            #Visualize results with Flask
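
The generator above yields JPEG frames separated by the --frame boundary. A minimal sketch of how such a generator can be served with Flask follows; the route name is an assumption, and the port matches the one used to view the results later in this post.

from flask import Flask, Response

app = Flask(__name__)

@app.route('/')
def video_feed():
    # Stream the frames yielded by run_lne() as a motion-JPEG response;
    # the boundary name must match the one used in the generator.
    return Response(run_lne(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

app.run(host='0.0.0.0', port=1234)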

Configuring Network Models

In some scenarios, you may need to deploy your own model and its corresponding libraries. We will explain how the network models are configured and set up.

In our demo application, we’re using three pre-compiled models. You can use your own network models trained with Caffe or TensorFlow Lite.

Figure 10 – Network model conversion.

You can convert a custom ML model into the format accepted by the LNE.

Contact LG Electronics for further information regarding the LG AI Software Developer Kit (SDK).

Once you convert the model to LNE format, you can push it to AWS IoT Greengrass to perform the ML inference.

Once the ML network model is installed on the LG AIoT board, you can perform ML inferencing. In this demo application, you can find the modules (Python files) in the Network directory. Use them to customize and prepare your own ML library tailored to your specific application needs.

There are two ways to load the underlying libraries required by the network model. The primary way is similar to the standard TensorFlow Lite. In our application, we implement MobileNet and Tiny-Yolo in this way:

#Load Binary
from lne_tflite import interpreter as lt
..
interpreter = lt.Interpreter(args.model_name)
#Init Tensor
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
interpreter.set_tensor(input_details[0]['index'], input_data)
#Run inference
interpreter.invoke()
#Get inference result
output_details = interpreter.get_output_details()
output_data = interpreter.get_tensor(output_details[0]['index'])

The other way is using a shared library running on top of the C++ based native APIs for LNE:

from ctypes import cdll, c_int, c_ubyte, POINTER
import numpy as np

#Load Shared Library
def __init__(self, binary_path):
    self.mtcnn_lib = cdll.LoadLibrary(binary_path)
    self.mtcnn_lib.mtcnn_init()
#Pre/Post Processing and Run inference
def inference(self, img):
    self.mtcnn_num = (c_int*1)()
    self.pos = (c_int*1000)()
    self.lne_input = np.expand_dims(img, axis=0)
    self.mtcnn_lib.mtcnn_run(self.lne_input.ctypes.data_as(POINTER(c_ubyte)), self.mtcnn_num, self.pos)
    return self.mtcnn_num[0]

Artificial Intelligence Network Model Selection

Though multiple ML models are loaded into our application, only one can be active at a time. You can select the active ML model by sending a control message through an AWS IoT Core MQTT topic.

From the AWS IoT Greengrass edge device, configure a subscription to receive messages from the AWS Cloud. In our subscription setting, we configure the source as IoT Cloud and the target as an AWS Lambda function named gg_blog:ggblog_3. The control message is published through the topic lge/select_network.

Figure 11 – Subscription configuration for a control message.
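
The same subscriptions can be scripted with the Greengrass V1 API. A minimal boto3 sketch follows, where the Lambda ARN is a placeholder for the alias or version ARN of your ggblog_3 function; the second entry is the result subscription described in the next section.

import boto3

gg = boto3.client('greengrass')

# Placeholder; use the alias or version ARN of your ggblog_3 function.
LAMBDA_ARN = 'arn:aws:lambda:us-east-1:123456789012:function:ggblog_3:gg'

gg.create_subscription_definition(InitialVersion={
    'Subscriptions': [
        {   # IoT Cloud -> Lambda: model selection control messages
            'Id': 'select-network',
            'Source': 'cloud',
            'Subject': 'lge/select_network',
            'Target': LAMBDA_ARN
        },
        {   # Lambda -> IoT Cloud: inference results
            'Id': 'answer',
            'Source': LAMBDA_ARN,
            'Subject': 'lge/answer_topic',
            'Target': 'cloud'
        }
    ]
})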

Which ML network model the Lambda function loads depends on the incoming message from the AWS IoT Core topic lge/select_network.

For example, if the message payload is {"network": "1"}, the Lambda function loads the Mtcnn network model. The numbers 1, 2, and 3 indicate the Mtcnn, MobileNet, and Tiny-Yolo models, respectively.
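
You can publish the control message from the MQTT test client in the AWS IoT console, or programmatically; a minimal boto3 sketch might look like this.

import json

import boto3

# Publish a control message that selects the MobileNet model ("2").
iot = boto3.client('iot-data')
iot.publish(
    topic='lge/select_network',
    qos=0,
    payload=json.dumps({'network': '2'})
)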

Reporting Inferencing Results in Near Real-Time

Once everything is set up properly, you can subscribe to lge/answer_topic to see the results. This requires you to configure one more MQTT subscription: the source is your Lambda function gg_blog:ggblog_3, and the target is IoT Cloud.
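
You can subscribe from the MQTT test client in the AWS IoT console, or programmatically; a minimal sketch using the AWS IoT Device SDK for Python follows, where the endpoint and credential paths are placeholders for your own values.

import time

from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

# Endpoint and credential paths are placeholders.
client = AWSIoTMQTTClient('result-viewer')
client.configureEndpoint('your-endpoint-ats.iot.us-east-1.amazonaws.com', 8883)
client.configureCredentials('root-ca.pem', 'private.key', 'certificate.pem')

def on_answer(client, userdata, message):
    # Print each inference result as it arrives.
    print(message.topic, message.payload)

client.connect()
client.subscribe('lge/answer_topic', 1, on_answer)

while True:
    time.sleep(1)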

The demo application includes a simple Flask-based web server for viewing the inference results locally in real time using a web browser. To use it:

  1. Connect your PC to the same network as the LG AIoT board.
  2. Open a web browser and go to http://[LG AIoT Board IP]:1234

Here are the results of our ML inference performance tests on the LG AIoT board:

Type                    Network     FPS
Object classification   MobileNet   134.86
Face detection          Mtcnn       41.81
Object detection        Tiny-Yolo   10.7

Conclusion

The combination of the LG AIoT board and AWS IoT Greengrass goes beyond the local execution of artificial intelligence to provide a complete environment to build modern edge-compute applications.

You can build a simple AI-based application using AWS IoT Greengrass, and you can manage multiple ML network models remotely, controlling them all from the AWS Cloud using AWS IoT Core.

You can learn more about the efficiency and near real-time inferencing capability of the LG AIoT board in these videos.

For more information, check out the following resources: