The Internet of Things on AWS – Official Blog
How to access and display files from Amazon S3 on IoT devices with AWS IoT EduKit
AWS IoT EduKit is designed to help students, experienced engineers, and professionals get hands-on experience with IoT and AWS technologies by building end-to-end IoT applications. The AWS IoT EduKit reference hardware is sold by our manufacturing partner M5Stack, and self-paced guides are available online. The code and tutorial content are open for the community to contribute to via their respective GitHub repositories. In this blog post, I walk you through how to access files from Amazon S3 and display them on IoT devices. You'll learn how to download and display PNG images on an M5Stack Core2 for AWS IoT EduKit. I use the AWS IoT EduKit tutorial "Cloud Connected Blinky" as my starting point.
At the time of writing, there are five easy-to-follow tutorials with sample code that make it simple to get started with AWS IoT EduKit. The first tutorial walks you through the process of setting up your environment and uploading a connected home application to the device that can be controlled remotely via an app on a mobile phone. The second tutorial takes you through the process of creating a "Cloud Connected Blinky". In the third tutorial, you build a smart thermostat that controls a fictitious Heating, Ventilation, and Air Conditioning (HVAC) system. The fourth tutorial uses AWS AI/ML services to build a smart thermostat that derives predictions from raw data for a room occupancy use case. And finally, the fifth tutorial includes the steps to create your own Amazon Alexa voice assistant that controls the onboard hardware.
Demo Overview
In this demo, I first walk you through the basic structure of the Blinky project using the "Cloud Connected Blinky" tutorial. Then, I extend the project by adding code that displays PNG-formatted images on the device. The device listens for incoming messages that contain a URL pointing to an image hosted in Amazon S3. Then, it downloads the image, stores it in RAM, decodes it into raw RGB (red, green, and blue) data, and finally displays it on the screen.
Here is a brief description of how I extend the project:
- The `iot_subscribe_callback_handler` is triggered every time a new MQTT message is received. This function calls the function `iot_subscribe_callback_handler_pngdemo`, which stores the content of the message in RAM and a pointer in the queue `xQueueMsgPtrs`.
- A separate process monitors the queue `xQueueMsgPtrs` and triggers the `processJSON` function. This function's job is to read the message, download the image, and decode it. The image bitmap is stored in RAM and a pointer to the image is stored in `xQueuePngPtrs`.
- Finally, a process that monitors this queue displays the image.
Prerequisites
- The M5Stack Core2 for AWS IoT EduKit reference hardware
- Completed the “Cloud Connected Blinky” guide
- Basic familiarity with the C programming language and FreeRTOS
- A link to a 320×240 pixel PNG image hosted in Amazon S3
The Cloud Connected Blinky – How does it work?
The program starts by first setting up all the necessary hardware components, like the LEDs, touchscreen interface, and LCD screen. Then, it starts the Wi-Fi components, connects to a Wi-Fi network, and starts two tasks in parallel: `blink_task` and `aws_iot_task`.

The `aws_iot_task` waits until the Wi-Fi is ready, connects to AWS IoT Core, and subscribes to a topic named after a hardware-based unique identifier using the Message Queuing Telemetry Transport (MQTT) protocol. The task sends two messages that contain the text "hello from SDK" every 3 seconds to AWS IoT Core. The task also downloads incoming messages as they become available.
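For context, here is a minimal sketch of what such a publish loop can look like, based on the AWS IoT Device SDK for Embedded C that the project uses. The function name and the `client` and `topic` parameters are illustrative, not identifiers from the tutorial code:

```c
#include <string.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "aws_iot_mqtt_client_interface.h"

// Sketch only: publish "hello from SDK" twice every 3 seconds.
// "client" must already be connected and subscribed elsewhere.
static void publish_hello_loop(AWS_IoT_Client *client, const char *topic)
{
    const char *payload = "hello from SDK";
    IoT_Publish_Message_Params params = {
        .qos = QOS0,
        .isRetained = 0,
        .payload = (void *)payload,
        .payloadLen = strlen(payload),
    };

    while (1) {
        // Let the MQTT client process keep-alives and incoming messages
        aws_iot_mqtt_yield(client, 100);
        // Publish two messages, then sleep for 3 seconds
        aws_iot_mqtt_publish(client, topic, strlen(topic), &params);
        aws_iot_mqtt_publish(client, topic, strlen(topic), &params);
        vTaskDelay(pdMS_TO_TICKS(3000));
    }
}
```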
The `blink_task` starts in a suspended state but is configured to blink the LEDs every 200 milliseconds when it is resumed. The function `iot_subscribe_callback_handler` is triggered whenever a message is received. It is programmed to print the contents of the message it received to the local terminal window, to resume `blink_task` if it is suspended, and to suspend it otherwise.
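A minimal sketch of that toggle, assuming a hypothetical `TaskHandle_t` named `xBlink` that was saved when `blink_task` was created (the tutorial's actual identifier may differ):

```c
#include <stdbool.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

// Hypothetical handle saved when blink_task was created
extern TaskHandle_t xBlink;

// Toggle blink_task: resume it if it was suspended, suspend it otherwise
static void toggle_blink(void)
{
    static bool blinking = false;
    if (blinking) {
        vTaskSuspend(xBlink);
    } else {
        vTaskResume(xBlink);
    }
    blinking = !blinking;
}
```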
Now, I walk through the following procedures required to access files from Amazon S3 and display them on IoT devices.
You’ll learn how to:
- Add support to decode PNG images
- Store the contents of incoming messages in a queue
- Retrieve messages from a queue and process them
- Build, flash, and test the device
Step 1 – Add support to decode PNG Images
The "Cloud Connected Blinky" example code comes with the Light and Versatile Graphics Library (LVGL), a library that makes it easier to create a graphical user interface on embedded devices. The library has PNG support, but the functionality is not included in the default package.
To add PNG support to your project:
- Open the "Cloud Connected Blinky" project using the PlatformIO development platform.
- Clone the lv_lib_png repository as a subdirectory of the components directory:

```
cd components && git clone https://github.com/lvgl/lv_lib_png.git
```
- The project uses CMake to make it easy to build your project. Create a new `CMakeLists.txt` under `components/lv_lib_png`. This tells the CMake system to add the source and include files under this directory to the project should they be required. It also specifies that the component depends on the `core2forAWS` component. A sketch of this file follows this step.
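Here is a minimal sketch of what that file can look like, assuming the legacy ESP-IDF component style used elsewhere in this project; the source file names are taken from the lv_lib_png repository:

```cmake
# File: components/lv_lib_png/CMakeLists.txt
# Sketch only: source file names assumed from the lv_lib_png repository.
set(COMPONENT_SRCS "lv_png.c" "lodepng.c")
set(COMPONENT_ADD_INCLUDEDIRS ".")
set(COMPONENT_REQUIRES "core2forAWS")
register_component()
```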
- Update the existing `CMakeLists.txt` file available in the `main` directory by adding the `lv_lib_png` component to the `COMPONENT_REQUIRES` list.
- Create a new file called `pngdemo.c`, save it in the `main` folder, and add the following code:

```c
// File: main/pngdemo.c
#include "lvgl/lvgl.h"
#include "lodepng.h"
```
The AWS IoT EduKit reference hardware comes with a 320×240 LCD configured to use 16-bit color depth (BGR565). Images decoded from PNG by the LodePNG decoder bundled with LVGL use 32-bit color by default (RGBA8888). A function that converts the decoded images to 16-bit color depth and swaps the blue and red color information is therefore required before they can be displayed.
- Create a function called `convert_color_depth` in `pngdemo.c`:

```c
// File: main/pngdemo.c
void convert_color_depth(uint8_t * img, uint32_t px_cnt)
{
    lv_color32_t * img_argb = (lv_color32_t *)img;
    lv_color_t c;
    uint32_t i;
    for (i = 0; i < px_cnt; i++) {
        // Swap red and blue while packing each pixel into 16 bits
        c = LV_COLOR_MAKE(img_argb[i].ch.blue,
                          img_argb[i].ch.green,
                          img_argb[i].ch.red);
        img[i * 2 + 1] = c.full >> 8;
        img[i * 2 + 0] = c.full & 0xFF;
    }
}
```
You have completed step 1. The device can now handle PNG images and convert them to a format compatible with the LCD screen.
Step 2 – Store the contents of incoming messages in a queue
The "Cloud Connected Blinky" program configures the `aws_iot_task` to receive messages coming from AWS IoT Core and print the contents of each message to the local terminal window. You can use a queue to send fixed- or variable-sized messages between tasks. The content of variable-sized messages is not stored in the queue itself. Instead, the queue holds fixed-size structures that contain pointers.

I modify `aws_iot_task` so that it stores the data of an incoming message in the queue `xQueueMsgPtrs`. The data will be accessed later by the task `check_messages` created in Step 3. Since the message size is not known in advance, space for the message payload is dynamically allocated in RAM and a pointer to it is stored in the queue.
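As a generic illustration of this pointer-passing pattern (hypothetical names, not the tutorial code):

```c
#include <stdlib.h>
#include <string.h>
#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"

// Hypothetical queue created elsewhere with
// xQueue = xQueueCreate(8, sizeof(char *));
static QueueHandle_t xQueue;

static void producer(const char *msg)
{
    // Allocate a private copy of the variable-sized message...
    char *copy = strdup(msg);
    // ...and enqueue only the pointer; every queue item stays fixed-size
    xQueueSend(xQueue, &copy, portMAX_DELAY);
}

static void consumer(void)
{
    char *msg;
    if (xQueueReceive(xQueue, &msg, portMAX_DELAY)) {
        // Use the message, then release the heap buffer
        free(msg);
    }
}
```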
To store the contents of incoming messages in a queue:
- Create a new file called `pngdemo.h`, save it inside the `main/includes` folder, and add the following code. This file contains definitions that determine the depth of the queues and the maximum amount of memory that can be used per incoming message and downloaded file.

```c
// File: main/includes/pngdemo.h
#pragma once

#include <stdint.h> // for uint8_t/uint32_t used in the prototypes below

// Max number of messages to store
#define MSG_QUEUE_DEPTH 128
// Max number of images to store in the buffer
#define PNG_QUEUE_DEPTH 1
// Max size of the incoming message containing a URL
#define MAX_URL_BUFF_SIZE 1024
// Theoretical maximum size of an incoming PNG.
// It includes the PNG file data, signature, chunks, and CRC checksum.
#define MAX_PNG_BUFF_SIZE ((320*240*4)+8+4+4+4)

// Function prototypes
void convert_color_depth(uint8_t * img, uint32_t px_cnt);
void processJSON(char * json);
void iot_subscribe_callback_handler_pngdemo(char * payload, int len);
void check_messages(void * param);
```
- Open `main.c` and include the header file `pngdemo.h`.
- Create a queue handler in the global declaration section of the `main.c` file; this is at the top of the program and outside of any function.

```c
// File: main/main.c
#include "pngdemo.h"

QueueHandle_t xQueueMsgPtrs;
```
- Create a queue inside the `app_main` function, which is implemented in the `main.c` file. Name this queue `xQueueMsgPtrs`. The depth of the queue is set by the `MSG_QUEUE_DEPTH` macro defined in `pngdemo.h`, and the size of each item is the size of a pointer.

```c
// File: main/main.c
xQueueMsgPtrs = xQueueCreate(MSG_QUEUE_DEPTH, sizeof(char *));
```
Remember that `aws_iot_task` is designed to call the function `iot_subscribe_callback_handler` every time a new message comes in. This function needs to be modified to pass its parameters to a new function that stores the message in RAM and stores the pointer in a queue. The new function needs to be able to create a buffer for the incoming message and to send the pointer to the queue using `xQueueSend`.

- Update `pngdemo.c` by adding the headers described below and link `xQueueMsgPtrs`.

```c
// File: main/pngdemo.c
#include <string.h>
#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"
#include "freertos/task.h"
#include "esp_log.h"
#include "esp_heap_caps.h" // for heap_caps_malloc, used below
#include "pngdemo.h"

static const char *TAG = "PNGDEMO";

// Link to the queue that stores pointers to the incoming messages
extern QueueHandle_t xQueueMsgPtrs;
```
- Create the function `iot_subscribe_callback_handler_pngdemo` in `pngdemo.c`:

```c
// File: main/pngdemo.c
void iot_subscribe_callback_handler_pngdemo(char * payload, int len)
{
    // Create a buffer to store the incoming message. One extra byte is
    // allocated so the payload can be NUL-terminated for cJSON_Parse.
    char * myItem = heap_caps_malloc(len + 1, MALLOC_CAP_SPIRAM);
    // Copy the incoming data into the buffer and terminate it
    memcpy(myItem, payload, len);
    myItem[len] = '\0';
    // Send the pointer to the incoming message to the queue
    xQueueSend(xQueueMsgPtrs, &myItem, portMAX_DELAY);
}
```
- Open `main.c` and modify the function `iot_subscribe_callback_handler` to pass the payload to `iot_subscribe_callback_handler_pngdemo` if the MQTT topic name contains "/png":

```c
// File: main/main.c
void iot_subscribe_callback_handler(AWS_IoT_Client *pClient, char *topicName,
                                    uint16_t topicNameLen,
                                    IoT_Publish_Message_Params *params,
                                    void *pData)
{
    ...
    if (strstr(topicName, "/png") != NULL) {
        iot_subscribe_callback_handler_pngdemo(
            (char *)params->payload,
            (int)params->payloadLen
        );
    }
    ...
}
```
- Extend the existing `CMakeLists.txt` to include the new source file created in step 1, procedure step 5, so that it is compiled and linked into the executable that gets flashed to the microcontroller.
- Open the `CMakeLists.txt` file and modify `set(COMPONENT_SRCS)` by adding the source file `pngdemo.c` to the list, as sketched below.
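A minimal sketch of the change; `main.c` stands in for the entries already present, since the tutorial names only `pngdemo.c`:

```cmake
# File: main/CMakeLists.txt
# Sketch: "main.c" is an assumed pre-existing entry in the list.
set(COMPONENT_SRCS "main.c" "pngdemo.c")
```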
You have completed step 2. Incoming messages are now being stored in memory as they come in and there is a queue that contains a pointer to each message.
Step 3 – Retrieve messages from a queue and process them
A new task `check_messages` is created to access the data in the queue. The task's job is to monitor the queue `xQueueMsgPtrs` and process the data using a new function called `processJSON`. The new function `processJSON` parses a message and retrieves the contents of the key `img_url`. Then, it downloads the image and stores it temporarily in RAM. The code is designed to process messages that use the JSON format; images are retrieved via HTTP. The cJSON library is used to decode the messages and the `esp_http_client` library is used to download files. Incoming images are decoded from PNG to raw format. Then, the color depth is converted from 32 to 16 bits and the resulting data is stored in RAM for later use. Finally, a pointer to the 16-bit image buffer is sent to a queue called `xQueuePngPtrs`.
To retrieve messages from a queue and process them:
- Open `pngdemo.c` and add the `freertos/semphr.h` header, making sure it is added after `freertos/FreeRTOS.h`.
- Add the `esp_http_client.h` and `cJSON.h` headers.
- Link the file to `xQueuePngPtrs` and `xGuiSemaphore`:

```c
// File: main/pngdemo.c
#include "freertos/semphr.h"
#include "esp_http_client.h"
#include "cJSON.h"

// Queue to store the pointers to the PNG buffers
extern QueueHandle_t xQueuePngPtrs;
// Handle to the semaphore that makes the guiTask yield
extern SemaphoreHandle_t xGuiSemaphore;
```
- Open `pngdemo.c` and create a new function called `processJSON`:

```c
// File: main/pngdemo.c
void processJSON(char * json)
{
}
```
The function converts the raw message to a cJSON object using `cJSON_Parse`. Then it stores the contents of the key `img_url` inside a new buffer called `url_buffer`.

```c
// File: main/pngdemo.c
void processJSON(char * json)
{
    ...
    // Parse the JSON object
    cJSON * root = cJSON_Parse(json);
    // Find out how long the URL string is
    int len = strlen(cJSON_GetObjectItem(root, "img_url")->valuestring);
    // Allocate memory in the SPIRAM
    char * url_buffer = heap_caps_malloc(MAX_URL_BUFF_SIZE, MALLOC_CAP_SPIRAM);
    // Copy the parsed string, including its NUL terminator
    memcpy(url_buffer, cJSON_GetObjectItem(root, "img_url")->valuestring, len + 1);
    // Make sure the string is terminated (defensive; memcpy copied the NUL)
    url_buffer[len] = '\0';
    // The parsed object is no longer needed, free its memory
    cJSON_Delete(root);
    ...
}
```
- Allocate a buffer large enough to hold the image and use `esp_http_client` to download it:

```c
// File: main/pngdemo.c
void processJSON(char * json)
{
    ...
    // Allocate a large buffer in the SPIRAM
    unsigned char * buffer = heap_caps_malloc(MAX_PNG_BUFF_SIZE, MALLOC_CAP_SPIRAM);
    if (buffer == NULL) {
        ESP_LOGE(TAG, "Cannot malloc http receive buffer");
        return;
    }
    esp_err_t err;
    int content_length;
    int read_len;
    // Initialize the HTTP client
    esp_http_client_config_t config = {.url = url_buffer};
    esp_http_client_handle_t http_client = esp_http_client_init(&config);
    // Establish a connection with the HTTP(S) server and send headers
    if ((err = esp_http_client_open(http_client, 0)) != ESP_OK) {
        ESP_LOGE(TAG, "Failed to open HTTP connection: %s", esp_err_to_name(err));
        free(buffer);
        return;
    }
    // Immediately start retrieving headers from the stream
    content_length = esp_http_client_fetch_headers(http_client);
    // Retrieve data from the stream and store it in the SPI RAM
    read_len = esp_http_client_read(http_client, (char *) buffer, content_length);
    // Validate that we actually read something
    if (read_len <= 0) {
        ESP_LOGE(TAG, "Error reading data");
    }
    ESP_LOGI(TAG, "HTTP Stream reader Status = %d, content_length = %d",
             esp_http_client_get_status_code(http_client),
             esp_http_client_get_content_length(http_client));
    // Tear down the HTTP session
    esp_http_client_cleanup(http_client);
    ...
}
```
- Now that the image is stored in RAM, decode it from PNG to a raw bitmap and convert it to 16-bit color depth:

```c
// File: main/pngdemo.c
void processJSON(char * json)
{
    ...
    // Pointer that will point to the decoded PNG data
    unsigned char * png_decoded = 0;
    uint32_t error;
    uint32_t png_width;
    uint32_t png_height;
    // Use LodePNG to convert the PNG image to 32-bit RGBA and store it
    // in a new buffer.
    error = lodepng_decode32(&png_decoded, &png_width, &png_height, buffer, read_len);
    if (error) {
        ESP_LOGE(TAG, "error %u: %s\n", error, lodepng_error_text(error));
        // Don't leak the buffers on the error path
        free(url_buffer);
        free(buffer);
        return;
    }
    // Clean up
    free(url_buffer);
    free(buffer);
    // Convert the 32-bit RGBA image to 16-bit and swap blue and red data.
    convert_color_depth(png_decoded, png_width * png_height);
    ...
}
```
- The image is ready. Send its pointer to the `xQueuePngPtrs` queue:

```c
// File: main/pngdemo.c
void processJSON(char * json)
{
    ...
    // All done, send the pointer that points to the PNG data to the queue.
    xQueueSend(xQueuePngPtrs, &png_decoded, portMAX_DELAY);
    ...
}
```
- Create a queue handler in the global declaration section of `main.c`:

```c
// File: main/main.c
QueueHandle_t xQueuePngPtrs;
```
- Create a new queue and add it inside the `app_main` function:

```c
// File: main/main.c
xQueuePngPtrs = xQueueCreate(PNG_QUEUE_DEPTH, sizeof(char *));
```
- Open `pngdemo.c` and create a new function called `check_messages`. This function continuously checks whether messages are available in the queues `xQueueMsgPtrs` and `xQueuePngPtrs`. The function processes available messages as they arrive by calling `xQueueReceive` and passing the received pointer to another function.

```c
// File: main/pngdemo.c
void check_messages(void *param)
{
    char * pngPtr;
    char * msgPtr;
    while (1) {
        // Yield for 500ms to let other tasks do work
        vTaskDelay(500 / portTICK_RATE_MS);
        if (xQueuePngPtrs != 0) {
            if (xQueueReceive(xQueuePngPtrs, &pngPtr, (TickType_t)10)) {
                ESP_LOGI(TAG, "Got a PNG pointer, free heap: %d\n",
                         esp_get_free_heap_size());
                // Make sure the guiTask will yield
                xSemaphoreTake(xGuiSemaphore, portMAX_DELAY);
                // Object that will contain the LVGL image
                // in raw BGR565 format
                lv_obj_t * image_background;
                // Clean the screen
                lv_obj_clean(lv_scr_act());
                // Create a new object using the active screen and no parent
                image_background = lv_img_create(lv_scr_act(), NULL);
                lv_img_dsc_t img = {
                    .header.always_zero = 0,
                    .header.w = 320,
                    .header.h = 240,
                    .data_size = 320 * 240 * 2,
                    .header.cf = LV_IMG_CF_TRUE_COLOR,
                    .data = (unsigned char *)pngPtr
                };
                // Force LVGL to invalidate the cache
                lv_img_cache_invalidate_src(&img);
                // Tell LVGL to load the data that the pointer points to
                lv_img_set_src(image_background, &img);
                // Free the PNG data
                free(pngPtr);
                // Let the guiTask continue so that the screen gets refreshed
                xSemaphoreGive(xGuiSemaphore);
            }
        }
        if (xQueueMsgPtrs != 0) {
            if (xQueueReceive(xQueueMsgPtrs, &msgPtr, (TickType_t)10)) {
                // Send the pointer that points to the string to process it
                processJSON(msgPtr);
                // Free the message data
                free(msgPtr);
            }
        }
    }
}
```
- Update `CMakeLists.txt` by adding the `esp_http_client` and `json` components to the `COMPONENT_REQUIRES` list:

```cmake
# File: main/CMakeLists.txt
set(COMPONENT_REQUIRES
    "nvs_flash"
    "esp-aws-iot"
    "esp-cryptoauthlib"
    "core2forAWS"
    "lv_lib_png"
    "esp_http_client"
    "json"
)
```
- Create a new task called `check_messages` in `main.c`. Make sure the task is created after the queue created in step 3, procedure step 9. This is important because the task monitors the queue contents as soon as it starts.

```c
// File: main/main.c
xTaskCreatePinnedToCore(&check_messages, "check_messages", 4096, NULL, 4, NULL, 1);
```
The code is now ready! Your device will listen for messages and show each received image on the display.
Step 4 – Build, flash, and test the device
You are now ready to build (compile) and upload the firmware to the microcontroller. The process for building, flashing, and monitoring the serial output is the same as in the other tutorials:

- Run the following command from the terminal window:

```
pio run --environment core2foraws --target upload --target monitor
```
- Send a message to `<<CLIENT_ID>>/png` using the AWS IoT MQTT test client. This is almost identical to how you send the command to blink an LED in the "Cloud Connected Blinky" tutorial. Here is a sample of the message payload the device is designed to receive:

```json
{
  "img_url": "https://edukit.workshop.aws/en/AWS_IoT_EduKIt_Logo-320px_240px.png"
}
```
Alternatively, use this script to test your code:

```python
# File: docs/test.py
import boto3
import json

ENDPOINT = 'https://<<IOT_ENDPOINT>>'

client = boto3.client('iot-data', endpoint_url=ENDPOINT)

data = {'img_url': '<<URL_TO_PNG_FILE>>'}

r = client.publish(
    topic='<<CLIENT_ID>>/png',
    qos=0,
    payload=json.dumps(data)
)
```
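To run the script, assuming Python 3 with boto3 installed and AWS credentials configured, replace the placeholders and run `python docs/test.py`. You can look up your account's `<<IOT_ENDPOINT>>` with the AWS CLI command `aws iot describe-endpoint --endpoint-type iot:Data-ATS`.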
Clean Up
No additional resources have been created in your AWS account. However, the following command can be used to erase the contents of the AWS IoT EduKit reference hardware's flash memory:

```
pio run --environment core2foraws --target erase
```
Conclusion
AWS IoT EduKit makes it easy for developers, from students to experienced professionals, to get hands-on experience building end-to-end IoT applications by combining a reference hardware kit with a set of easy-to-follow educational tutorials and example code. In this blog post, I used the "Cloud Connected Blinky" tutorial as a starting point to create a more advanced application. I then walked through the code that creates queues and exchanges data between two tasks. Finally, I demonstrated how a PNG-formatted image is converted to a format that is compatible with the LCD screen. I hope that my demonstration of the AWS IoT EduKit reference hardware proves valuable to anyone reading. To learn more about AWS IoT EduKit and get started with the tutorials, visit the AWS IoT EduKit webpage.