The Internet of Things on AWS – Official Blog
How to access and display files from Amazon S3 on IoT devices with AWS IoT EduKit
AWS IoT EduKit is designed to help students, experienced engineers, and professionals get hands-on experience with IoT and AWS technologies by building end-to-end IoT applications. The AWS IoT EduKit reference hardware is sold by our manufacturing partner M5Stack. Self-paced guides are available online. The code and tutorial content are open to the community to contribute via their respective GitHub repositories. In this blog post, I walk you through how to access files from Amazon S3 and display them on IoT devices. You’ll learn how to download and display PNG images on an M5Stack Core2 for AWS IoT EduKit. I use the AWS IoT EduKit tutorial, “Cloud Connected Blinky,” as my starting point.
At the time of writing, there are five easy-to-follow tutorials with sample code that make it simple to get started with AWS IoT EduKit. The first tutorial walks you through the process of setting up your environment and uploading a connected home application to the device that can be controlled remotely via an app on a mobile phone. The second tutorial takes you through the process of creating a “Cloud Connected Blinky”. You can build a smart thermostat that controls a fictitious Heating, Ventilation, and Air Conditioning (HVAC) system by going through the third tutorial. The fourth tutorial uses AWS AI/ML services to build a smart thermostat that derives predictions from raw data for a room occupancy use case. And finally, the fifth tutorial includes the steps to create your own Amazon Alexa voice assistant that controls the onboard hardware.
Demo Overview
In this demo, I first walk you through the basic structure of the Blinky project using the “Cloud Connected Blinky” tutorial. Then, I extend the project by adding the code that displays PNG formatted images on the device. The device listens for incoming messages that contain a URL pointing to an image hosted in Amazon S3. Then, it downloads the image, stores it in RAM, decodes it into raw RGB (red, green, and blue) data, and finally displays it on the screen.
Here is a brief description of how I extend the project:
- The `iot_subscribe_callback_handler` is triggered every time a new MQTT message is received. This function calls the function `iot_subscribe_callback_handler_pngdemo`, which stores the content of the message in RAM and a pointer in the queue `xQueueMsgPtrs`.
- A separate process monitors the queue `xQueueMsgPtrs` and triggers the `processJSON` function. This function’s job is to read the message, download the image, and decode it. The image bitmap is stored in RAM and a pointer to the image is stored in `xQueuePngPtrs`.
- Finally, a process that monitors this queue displays the image.
Figure 1 – Messages and downloaded files are stored in RAM. Queues and tasks used to process data.
Prerequisites
- The M5Stack Core2 for AWS IoT EduKit reference hardware
- Completed the “Cloud Connected Blinky” guide
- Basic familiarity with the C programming language and FreeRTOS
- A link to a 320×240 pixel PNG image hosted in Amazon S3
The Cloud Connected Blinky – How does it work?
The program starts by setting up all the necessary hardware components, such as the LEDs, the touchscreen interface, and the LCD screen. Then, it starts the Wi-Fi components, connects to a Wi-Fi network, and starts two tasks in parallel: `blink_task` and `aws_iot_task`.

The `aws_iot_task` waits until the Wi-Fi is ready, connects to AWS IoT Core, and subscribes to a topic named after a hardware-based unique identifier using the Message Queuing Telemetry Transport (MQTT) protocol. The task sends two messages that contain the text “hello from SDK” every 3 seconds to AWS IoT Core. The task also downloads incoming messages as they become available.

The `blink_task` starts in a suspended state but is configured to blink the LEDs every 200 milliseconds once it is resumed. The function `iot_subscribe_callback_handler` is triggered whenever a message is received. It prints the contents of the message it received in the local terminal window and resumes `blink_task` if it is suspended, or suspends it if it is running.
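The toggle logic can be sketched in C. This is a simplified illustration rather than the verbatim tutorial source: the task-handle name `xBlinkHandle` is an assumption, and the callback signature follows the AWS IoT Device SDK for Embedded C used by the EduKit samples.

```c
// Simplified sketch of the Blinky subscribe callback (assumed shape).
// Assumes blink_task was created with xTaskCreate() and its handle
// was stored in xBlinkHandle (hypothetical name).
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "aws_iot_mqtt_client_interface.h"

extern TaskHandle_t xBlinkHandle;  // handle of blink_task

void iot_subscribe_callback_handler(AWS_IoT_Client *pClient, char *topicName,
                                    uint16_t topicNameLen,
                                    IoT_Publish_Message_Params *params,
                                    void *pData) {
    // Print the topic and payload that arrived on the subscribed topic.
    printf("Subscribe callback: %.*s\t%.*s\n", topicNameLen, topicName,
           (int)params->payloadLen, (char *)params->payload);

    // Toggle the blinking task: resume it if suspended, suspend it otherwise.
    if (eTaskGetState(xBlinkHandle) == eSuspended) {
        vTaskResume(xBlinkHandle);
    } else {
        vTaskSuspend(xBlinkHandle);
    }
}
```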
Now, I walk through the following procedures required to access files from Amazon S3 and display them on IoT devices.
You’ll learn how to:
- Add support to decode PNG images
- Store the contents of incoming message in a queue
- Retrieve messages from a queue and process them
- Build, flash, and test the device
Step 1 – Add support to decode PNG Images
The “Cloud Connected Blinky” example code comes with the Light and Versatile Graphics Library (LVGL), a library that makes it easier to create graphical user interfaces on embedded devices. The library has PNG support, but the functionality is not included in the default package.
To add PNG support to your project:
- Open the “Cloud Connected Blinky” using the PlatformIO development platform.
- Clone the `lv_lib_png` repository as a subdirectory of the `components` directory.
- The project uses CMake to build. Create a new `CMakeLists.txt` under `components/lv_lib_png`. This tells the CMake system to add the source and include files under this directory to the project should it be required. It also specifies that the component depends on the `core2forAWS` component.
- Update the existing `CMakeLists.txt` file available in the `main` directory by adding the `lv_lib_png` component to the `COMPONENT_REQUIRES` list.
- Create a new file called `pngdemo.c`, save it in the `main` folder, and add the following code:

The AWS IoT EduKit reference hardware comes with a 320×240 LCD configured to use 16-bit color depth (BGR565). Images converted from PNG to raw bitmaps by LVGL use 24 bits by default (RGB888). A function that converts images to 16-bit color depth and swaps the blue and red color information is required to display images converted by LVGL.

- Create a function called `convert_color_depth` in `pngdemo.c`.
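The conversion can be sketched in plain C. This is a minimal illustration of the RGB888-to-BGR565 conversion described above, assuming the decoded image is a byte array of R, G, B triplets converted in place; the real demo implementation may differ in details.

```c
#include <stdint.h>

// Convert a decoded RGB888 image (3 bytes per pixel) to 16-bit 5-6-5
// format with the red and blue channels swapped (BGR565), in place.
// After conversion, the image occupies 2 bytes per pixel at the start
// of `img`.
void convert_color_depth(uint8_t *img, uint32_t px_cnt) {
    uint8_t *in = img;                // RGB888 source
    uint16_t *out = (uint16_t *)img;  // BGR565 destination
    for (uint32_t i = 0; i < px_cnt; i++) {
        uint8_t r = in[0], g = in[1], b = in[2];
        // Swap red and blue, then pack into 5-6-5 bits.
        *out = ((uint16_t)(b >> 3) << 11) |
               ((uint16_t)(g >> 2) << 5)  |
               (uint16_t)(r >> 3);
        in += 3;
        out++;
    }
}
```

Because each 16-bit output pixel is written behind the 24-bit input pixel being read, the conversion never overwrites data it has not yet consumed, so no second buffer is needed.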
You have completed step 1. The device can now handle PNG images and convert them to a format compatible with the LCD screen.
Step 2 – Store the contents of incoming message in a queue
The “Cloud Connected Blinky” program configures the `aws_iot_task` to receive messages coming from AWS IoT Core and print the contents of each message in the local terminal window. You can use a queue to send fixed- or variable-sized messages between tasks. The content of variable-sized messages is not stored in the queue itself. Instead, the queue holds fixed-size structures that contain pointers.
I modify `aws_iot_task` so that it stores the data of an incoming message in the queue `xQueueMsgPtrs`. The data is accessed later by the task `check_messages` created in Step 3. Since the message size is not known in advance, space for the message payload is dynamically allocated in RAM and a pointer to it is stored in the queue.
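The pointer-queue idea can be sketched as follows. This is a minimal illustration, not the verbatim demo source; the depth macro `MAX_QUEUE_DEPTH` is an assumed name standing in for the definitions the demo places in `pngdemo.h`.

```c
// Sketch: a queue that carries pointers, not the messages themselves.
#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"

#define MAX_QUEUE_DEPTH 4  // assumed value; the demo defines its own in pngdemo.h

QueueHandle_t xQueueMsgPtrs;  // global declaration at the top of main.c

void app_main(void) {
    // Each queue slot holds one pointer to a heap-allocated message buffer,
    // so the queue item size is the size of a pointer, not of a message.
    xQueueMsgPtrs = xQueueCreate(MAX_QUEUE_DEPTH, sizeof(char *));
    // ... rest of app_main ...
}
```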
To store the contents of incoming messages in a queue:
- Create a new file called `pngdemo.h`, save it inside the `main/includes` folder, and add the following code. This file contains definitions that determine the size of the queues and the maximum amount of memory that can be used per incoming message and downloaded file.
- Open `main.c` and include the header file `pngdemo.h`.
- Create a queue handler in the global declaration section of the `main.c` file; this is at the top of the program, outside of any function.
- Create a queue inside the `app_main` function, which is implemented in the `main.c` file. Name this queue `xQueueMsgPtrs`. The depth of the queue is defined by a macro that will be defined later, and the size of each item is the size of a pointer.

Remember that `aws_iot_task` is designed to call the function `iot_subscribe_callback_handler` every time a new message comes in. This function needs to be modified to pass its parameters to a new function, which is designed to store the message in RAM and store the pointer in a queue. The new function needs to be able to create a buffer for the incoming message and to send the pointer to the queue using `xQueueSend`.

- Update `pngdemo.c` by adding the headers described below and link `xQueueMsgPtrs`.
- Create the function `iot_subscribe_callback_handler_pngdemo` in `pngdemo.c`.
- Open `main.c` and modify the function `iot_subscribe_callback_handler` to pass the payload to `iot_subscribe_callback_handler_pngdemo` if the MQTT topic name contains “/png”.

Extend the existing `CMakeLists.txt` to include the new source file created in step 1, procedure step 5, so that it is compiled and linked into the executable that gets flashed to the microcontroller.

- Open the `CMakeLists.txt` file and modify `set(COMPONENT_SRCS)` by adding the source file `pngdemo.c` to the list.
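The new handler described above can be sketched like this. This is an assumed shape rather than the verbatim demo source: the parameter list and the `MAX_MESSAGE_SIZE` macro name are illustrative, standing in for the definitions the demo keeps in `pngdemo.h`.

```c
// Sketch of iot_subscribe_callback_handler_pngdemo in pngdemo.c.
// It copies the MQTT payload to a heap buffer and queues a pointer to it
// for the check_messages task (created in Step 3) to pick up later.
#include <stdlib.h>
#include <string.h>
#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"

#define MAX_MESSAGE_SIZE 4096  // assumed limit; the demo defines its own in pngdemo.h

extern QueueHandle_t xQueueMsgPtrs;  // created in app_main (main.c)

void iot_subscribe_callback_handler_pngdemo(const char *payload, size_t len) {
    if (len == 0 || len > MAX_MESSAGE_SIZE) {
        return;  // ignore empty or oversized messages
    }
    // Allocate space for the message plus a NUL terminator.
    char *msg = malloc(len + 1);
    if (msg == NULL) {
        return;  // out of memory; drop the message
    }
    memcpy(msg, payload, len);
    msg[len] = '\0';
    // Queue the pointer; free the buffer if the queue is full.
    if (xQueueSend(xQueueMsgPtrs, &msg, 0) != pdTRUE) {
        free(msg);
    }
}
```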
You have completed step 2. Incoming messages are now being stored in memory as they come in and there is a queue that contains a pointer to each message.
Step 3 – Retrieve messages from a queue and process them
A new task, `check_messages`, is created to access the data in the queue. The task’s job is to monitor the queue `xQueueMsgPtrs` and process the data using a new function called `processJSON`. The function `processJSON` parses a message and retrieves the contents of the key `img_url`. Then, it downloads the image and stores it temporarily in RAM. The code is designed to process messages that use the JSON format, and images are retrieved via HTTP. The cJSON library is used to decode the messages and the `esp_http_client` library is used to download files. Incoming images are converted from PNG to raw format. Then, the color depth is converted from 24 to 16 bits, and the resulting data is stored in RAM for later use. Finally, a pointer to the 16-bit image buffer is sent to a queue called `xQueuePngPtrs`.
To retrieve messages from a queue and process them:
- Open `pngdemo.c` and add the `freertos/semphr.h` header, making sure it is added after `freertos/FreeRTOS.h`.
- Add the `esp_http_client.h` and `cJSON.h` headers.
- Link the file to `xQueuePngPtrs` and `xGuiSemaphore`.
- Open `pngdemo.c` and create a new function called `processJSON`. The function converts the raw message to a cJSON object using `cJSON_Parse`. Then it stores the contents of the key `img_url` inside a new buffer called `url_buffer`.
- Allocate a buffer large enough to hold the image and use `esp_http_client` to download it.
- Now that the image is stored in RAM, decode it from PNG to a raw bitmap and convert it to 16-bit color depth.
- The image is ready. Send its pointer to the `xQueuePngPtrs` queue.
- Create a queue handler in the global declaration section of `main.c`.
- Create a new queue inside the `app_main` function.
- Open `pngdemo.c` and create a new function called `check_messages`. This function continuously checks whether messages are available in the queues `xQueueMsgPtrs` and `xQueuePngPtrs`. The function processes available messages as they arrive by calling `xQueueReceive` and passing a pointer to another function.
- Update `CMakeLists.txt` by adding the `esp_http_client` and `json` components to the `COMPONENT_REQUIRES` list.
- Create a new task called `check_messages` in `main.c`. Make sure the task is created after the queue created in step 3, procedure step 9. This is important because the task monitors the queue contents as soon as it starts.
The code is now ready! Your device will listen for messages and show an image on the display.
Step 4 – Build, flash, and test the device
You are now ready to build (compile) and upload the firmware to the microcontroller. The process is the same as in the other tutorials for building, flashing, and monitoring the serial output:
- Run the following command from the terminal window:
- Send a message to
<<CLIENT_ID>>/png
using the AWS IoT MQTT test client. This is almost identical to how you send the command to blink an LED in the “Cloud Connected Blinky” tutorial.
Here is a sample of the message payload the device is designed to receive:

Figure 2 – Sending a message using the AWS IoT MQTT test client
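A minimal payload of the shape the device expects might look like the following; the bucket URL is a placeholder, so substitute the link to your own 320×240 PNG hosted in Amazon S3:

```json
{
  "img_url": "https://DOC-EXAMPLE-BUCKET.s3.amazonaws.com/image.png"
}
```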
Alternatively, use this script to test your code:
Clean Up
No additional resources have been created in your AWS account. However, the following command can be used to clear the contents of the AWS IoT EduKit reference hardware’s flash memory:
Conclusion
AWS IoT EduKit makes it easy for developers—from students to experienced professionals—to get hands-on experience building end-to-end IoT applications by combining a reference hardware kit with a set of easy-to-follow educational tutorials and example code. In this blog post, I used the “Cloud Connected Blinky” tutorial as a starting point to create a more advanced application. I then walked through the code that creates queues and exchanges data between two tasks. Finally, I demonstrated how a PNG formatted image is converted to a format that is compatible with the LCD screen. I hope that my demonstration of the AWS IoT EduKit reference hardware proves valuable to anyone reading. To learn more about AWS IoT EduKit and get started with the tutorials, visit the AWS IoT EduKit webpage.