AWS Partner Network (APN) Blog
How AWS, STMicroelectronics, and LACROIX Are Making Cities Smarter and Safer with Edge AI
By Ahmed Elsenousi, EMEA Sr. Solutions Architect – AWS
By Franck Martins, Global Sr. IoT Partner Development Specialist – AWS
By Paul Lasserre, EMEA Partner Leader – Solutions – AWS
The rise of the Internet of Things (IoT) has brought a proliferation of connected devices, delivering greater convenience and efficiency for enterprises and end users alike.
As connected devices gain onboard processing power, the next evolution is true edge computing, which enables real-time decisions without continuous server communication. This shift reduces latency, power consumption, and costs while allowing operation in environments with limited connectivity.
Edge computing helps transform how users interact with technology as complex systems and tasks become more automated, responsive, and integrated into daily life. Bringing artificial intelligence (AI) to the edge, known as Edge AI, on cost-effective hardware is opening a new era of smart device development augmented by machine learning (ML). Devices can operate faster, more autonomously, and more securely, because sensitive data never has to leave the device.
Microcontrollers are pivotal to deploying and running AI at the edge. In this post, we’ll showcase how STMicroelectronics, a leading microcontroller manufacturer, is collaborating with Amazon Web Services (AWS) and LACROIX, a leader in smart city solutions, to simplify Edge AI adoption.
This collaboration demonstrates the critical role of each partner in making edge computing commercially viable across use cases and industries. While edge computing faces hurdles, partnerships between cloud providers and chipmakers pave the way for smarter and more efficient edge devices.
Smarter and Safer Cities with LACROIX
Smart cities leverage technology to improve quality of life for citizens while optimizing operations and infrastructure. LACROIX supplies complete solutions for optimizing and digitalizing cities and regions with expertise on road safety, traffic management, and smart lighting.
The collaboration with STMicroelectronics and AWS allows LACROIX to equip its next generation of solutions with Edge AI capabilities: digital audio sensors running ML model inference on STM32 microcontrollers. This enables audio classification at the edge, so cities can monitor and act upon the sounds identified around them, such as car alarms and other loud noises that are cause for concern.
Together, the cloud computing power of AWS, smart connected devices from STMicroelectronics, and electronics expertise of LACROIX supply the fundamental components for cities to become more responsive, efficient, and safer.
Let’s take a look at how the AWS STM32 ML Accelerator Kit deploys an audio classification ML model to the STM32U5 chipset at the edge (via the STM32 X-CUBE-AWS reference integration) using a ready-to-deploy AWS Cloud Development Kit (AWS CDK) stack.
The ML Accelerator Kit leverages Amazon SageMaker, STMicroelectronics’ STM32Cube.AI Developer Cloud (STM32CubeAI-DC), and FreeRTOS to enable efficient MLOps infrastructure, model training, and edge deployment with over-the-air (OTA) updates. Additionally, the system employs AWS IoT Core and Amazon Managed Grafana for device monitoring and data visualization.
Partner Solution Building Enablement from AWS
This joint solution leverages Partner Solution Building Enablement from AWS, which offers AWS Partners a consistent experience for developing industry-transformative solutions for customers.
Partner Solution Building Enablement provides a solution-build framework where AWS Partners have an opportunity to engage with AWS subject matter experts and builders; in this case, STMicroelectronics and LACROIX collaborated with AWS IoT specialists and the AWS EMEA Prototyping team throughout the engagement.
“The AWS, LACROIX, and STMicroelectronics collaboration empowers developers by bridging the worlds of machine learning, IoT, and embedded devices,” says Marc Dupaquier, Managing Director AI Solutions at Microcontroller & Digital IC Group at STMicroelectronics. “This solution will advance the development of Edge AI on microcontrollers and widen adoption of these technologies. Edge AI is at the heart of ST’s strategy for the future of embedded devices, and the creation of a robust and reliable ecosystem is an asset long awaited by the ML community.”
About STMicroelectronics and LACROIX
STMicroelectronics works with more than 200,000 customers and thousands of partners to design and build products, solutions, and ecosystems that address the need to support a more sustainable world. Its technologies enable smarter mobility, more efficient power and energy management, and the wide-scale deployment of IoT and connectivity.
LACROIX supports customers in the construction and management of intelligent living ecosystems, thanks to connected equipment and technologies. It combines the essential agility required to innovate with the ability to industrialize robust and secure equipment, industry-leading know-how in industrial IoT (IIoT) solutions, and cutting-edge electronic equipment for critical applications.
Convinced that technology should contribute to simple, sustainable, and safer environments, LACROIX designs and manufactures connected and secure equipment to optimize the management of critical infrastructures such as smart roads (street lighting, traffic management, vehicle-to-everything, traffic signs) and the remote control of water and energy infrastructures.
“We are thrilled to work with AWS and STMicroelectronics to harness the power of Edge AI technologies for our smart city initiatives and especially our IoT offering,” says Stephane Gervais, EVP Innovation at LACROIX. “This advancement is not just timely but long overdue, since we believe Edge AI is paramount for a resilient AI at scale while unleashing new use cases tackling sustainable cities’ needs.”
Challenges of Edge Computing
Edge computing offers numerous advantages, making it a compelling choice for various applications. By processing data locally, edge devices eliminate the need for cloud round trips, reducing power consumption and delivering real-time, low-latency inference.
Moreover, edge computing enhances data privacy and security, reducing the risk of data breaches and unauthorized access, and provides offline functionality to ensure continuous operation even in disconnected environments.
Deploying ML models on edge devices does present challenges. Limited computational resources can lead to slower inference times and reduced model accuracy, for example, and constrained memory and storage capacity require model optimization and compression. Intermittent network connectivity may also hinder access to cloud resources or model updates. Addressing these challenges is essential.
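Model optimization is commonly tackled with techniques such as post-training quantization. The Python sketch below is a rough illustration (not the kit’s actual workflow) of shrinking a trained Keras audio classifier into an 8-bit TensorFlow Lite model of the kind that fits on a microcontroller; the model file, input shape, and calibration data are placeholder assumptions.

```python
import numpy as np
import tensorflow as tf

# Placeholder: a trained Keras audio classifier saved earlier (assumed filename).
model = tf.keras.models.load_model("audio_classifier.h5")

def representative_audio_samples():
    """Yield a few calibration inputs so the converter can estimate int8 ranges."""
    for _ in range(100):
        # Assumed input shape (1 x 64 x 96 x 1 spectrogram patch); replace with real clips.
        yield [np.random.randn(1, 64, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]           # enable quantization
converter.representative_dataset = representative_audio_samples
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8                       # full-integer model
converter.inference_output_type = tf.int8

with open("audio_classifier_int8.tflite", "wb") as f:
    f.write(converter.convert())
```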
STM32 Microcontroller: Unleashing the Power of Edge Computing
For this solution, we use the STM32 B-U585I-IOT02A Discovery kit. The code running on the device comprises the following components:
Operating System – FreeRTOS
The foundation of any IoT solution lies in a robust operating system (OS) that efficiently manages the device’s resources and provides seamless updates. For this solution, we chose FreeRTOS, a real-time OS specifically tailored for microcontrollers. Its feature set includes the ability to perform OTA firmware updates, a key advantage for our deployment process.
With OTA updates, we can eliminate the need for manual device flashing, simplifying maintenance and ensuring devices are always running the latest version of the firmware.
AWS IoT Core Connectivity – X-CUBE-AWS
To seamlessly connect the STM32 development board to AWS IoT services, we rely on X-CUBE-AWS, an extension pack designed to integrate FreeRTOS with AWS IoT Core. By using X-CUBE-AWS, we can offload the complex task of handling connections and MQTT message publishing, allowing developers to focus on building high-level IoT applications.
This integration streamlines communication between devices and the cloud, opening up a world of possibilities for remote monitoring, control, and data analysis.
Audio Event Detection Model – STM32 Model Zoo
Our ML journey begins with the STM32 Model Zoo, a collection of pre-trained models and optimized training scripts tailored for STM32 boards. For this audio classification task, we use its audio event detection model, Yamnet-256, a scaled-down version of the popular Yamnet audio classification model that has been optimized for embedded devices.
Application Code – Bridging the Gap
With the OS and AWS IoT Core connectivity components in place, it’s time to bring everything together in the application code. Leveraging FreeRTOS and X-CUBE-AWS, the application code interacts with the STM32 board’s microphone, capturing audio data. The captured audio is then fed into the audio classification model, and the application processes the model’s output and publishes the results to AWS IoT Core.
All of this code is then compiled into a binary and flashed to the device, as seen in the figure below. Finally, the device continuously monitors and classifies the sounds it detects in its environment; a rough Python prototype of this loop follows the figure.
Figure 1 – Code compilation stage.
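The firmware itself is C running on FreeRTOS, but the same capture, classify, and publish flow can be prototyped on a workstation before flashing. The sketch below is such a host-side approximation in Python; the model file, class labels, topic name, and the stand-in audio frames are all assumptions for illustration, not the device code.

```python
import json
import time

import boto3
import numpy as np
import tensorflow as tf

# Load the quantized audio model (assumed filename) with the TensorFlow Lite interpreter.
interpreter = tf.lite.Interpreter(model_path="audio_classifier_int8.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

iot_data = boto3.client("iot-data")                      # publishes via AWS IoT Core
CLASSES = ["car_alarm", "glass_breaking", "siren", "background"]  # illustrative labels

def classify(frame: np.ndarray) -> str:
    """Run a single inference on one audio frame shaped to the model's input."""
    interpreter.set_tensor(inp["index"], frame.reshape(inp["shape"]).astype(inp["dtype"]))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return CLASSES[int(np.argmax(scores)) % len(CLASSES)]  # modulo guards the short label list

for _ in range(10):                                      # stand-in for the firmware's main loop
    frame = np.random.randn(*inp["shape"])               # stand-in for a microphone capture
    label = classify(frame)
    iot_data.publish(topic="smartcity/sound-events",     # assumed topic name
                     payload=json.dumps({"class": label, "timestamp": time.time()}))
    time.sleep(1)
```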
Solution Architecture
To overcome these challenges and unleash the full potential of edge intelligence, our solution adopts a well-structured architecture built with the AWS CDK, an infrastructure as code (IaC) framework.
Figure 2 – Edge AI cloud system architecture block diagram.
The architecture consists of three interconnected stacks, outlined below and sketched in code after the list: ML, IoT, and Pipeline.
- ML stack: Manages the MLOps process, including dataset processing and model training. Leverages STM32 Model Zoo and STM32Cube.AI to optimize the audio classification model’s performance.
- IoT stack: Facilitates edge deployment, OTA updates, and AWS IoT Core connectivity. Relies on FreeRTOS and X-CUBE-AWS for efficient management and seamless application code deployment.
- Pipeline stack: Orchestrates CI/CD workflow, automating the deployment of ML and IoT stacks. Streamlines development and management of the IoT solution.
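As a rough illustration of that layout (not the kit’s actual source), a minimal AWS CDK app in Python could wire the three stacks together as follows; the stack names and their contents are assumptions.

```python
import aws_cdk as cdk
from constructs import Construct

class MlStack(cdk.Stack):
    """Hosts the SageMaker pipeline that trains and packages the audio model."""
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # ... SageMaker pipeline, S3 bucket for the generated C code, etc.

class IotStack(cdk.Stack):
    """Builds, signs, and deploys firmware to the STM32 fleet for one environment."""
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # ... CodeBuild project, signing profile, OTA update resources, etc.

class PipelineStack(cdk.Stack):
    """CI/CD pipeline that redeploys the other stacks on every code change."""
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # ... CodePipeline wiring the repository to the ML and IoT stacks.

app = cdk.App()
MlStack(app, "MlStack")
for env_name in ["dev", "staging", "prod"]:   # one IoT stack per environment
    IotStack(app, f"IotStack-{env_name}")
PipelineStack(app, "PipelineStack")
app.synth()
```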
ML Stack
The ML stack is a robust Amazon SageMaker pipeline that streamlines the entire development process, enabling efficient training, optimization, and deployment of the audio classification model; a condensed code sketch of the pipeline follows the steps below.
- Preprocessing: Data is the backbone of any AI model, and we use a subset of publicly available audio datasets to train the audio classification model. These datasets contain diverse audio samples, allowing the model to generalize effectively across various sound sources and environments.
- Training: With the datasets prepared, our pipeline orchestrates the training process. During this phase, a TensorFlow Lite model is generated, optimized for efficiency on resource-limited IoT devices. This lightweight model keeps audio classification performant despite the device’s constrained hardware.
- Evaluation: The pipeline connects to the STM32Cube.AI Developer Cloud, where the model is benchmarked for latency, accuracy, and resource utilization to ensure it meets the stringent requirements of resource-constrained IoT devices. The STM32Cube.AI Developer Cloud employs STM32Cube.AI, a tool that optimizes and converts the TensorFlow Lite model into efficient C code tailored for STM32 microcontrollers. It then executes the generated code on a farm of STM32 boards in the cloud to benchmark the model’s performance on various STM32 platforms and select the one best suited for this ML workload.
- Deployment: Having completed the development and evaluation stages, we take the final steps to deploy the audio classification model. Our pipeline registers the trained model in the SageMaker Model Registry, enabling straightforward versioning and management. The optimized C code is published to an Amazon Simple Storage Service (Amazon S3) bucket, making it readily available for integration into the IoT stacks for real-world applications.
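To make the flow concrete, here is a condensed and assumed SageMaker Pipelines definition, not the kit’s actual code, covering preprocessing, training, and model registration; the evaluation step against the STM32Cube.AI Developer Cloud is only noted as a comment. Script names, instance types, and the model package group are placeholders.

```python
import sagemaker
from sagemaker.inputs import TrainingInput
from sagemaker.processing import ProcessingOutput, ScriptProcessor
from sagemaker.tensorflow import TensorFlow
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.step_collections import RegisterModel
from sagemaker.workflow.steps import ProcessingStep, TrainingStep

session = sagemaker.Session()
role = sagemaker.get_execution_role()

# 1. Preprocessing: slice public audio datasets into training data (assumed script name).
processor = ScriptProcessor(
    image_uri=sagemaker.image_uris.retrieve("sklearn", session.boto_region_name, version="1.2-1"),
    command=["python3"], role=role, instance_type="ml.m5.xlarge", instance_count=1)
preprocess = ProcessingStep(
    name="PreprocessAudio", processor=processor, code="preprocess.py",
    outputs=[ProcessingOutput(output_name="train", source="/opt/ml/processing/train")])

# 2. Training: produce the TensorFlow Lite audio classifier (assumed script name).
estimator = TensorFlow(
    entry_point="train.py", role=role, instance_type="ml.g4dn.xlarge", instance_count=1,
    framework_version="2.11", py_version="py39")
train = TrainingStep(
    name="TrainAudioModel", estimator=estimator,
    inputs={"train": TrainingInput(
        s3_data=preprocess.properties.ProcessingOutputConfig.Outputs["train"].S3Output.S3Uri)})

# 3. Evaluation: a further step would benchmark the model via the STM32Cube.AI
#    Developer Cloud and publish the generated C code to S3 (omitted here).

# 4. Deployment: register the trained model for versioning in the Model Registry.
register = RegisterModel(
    name="RegisterAudioModel", estimator=estimator,
    model_data=train.properties.ModelArtifacts.S3ModelArtifacts,
    content_types=["application/x-npy"], response_types=["application/json"],
    inference_instances=["ml.m5.large"], transform_instances=["ml.m5.large"],
    model_package_group_name="audio-event-detection")  # assumed group name

pipeline = Pipeline(name="AudioEdgeAiPipeline", steps=[preprocess, train, register],
                    sagemaker_session=session)
pipeline.upsert(role_arn=role)   # create or update, then pipeline.start() to run
```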
IoT Stack
The IoT stack seamlessly integrates the following functionalities for efficient edge deployment.
- AWS CodeBuild integration: The IoT stack leverages AWS CodeBuild to bring together the application code and generated audio model code. Using STM32CubeIDE for compilation, it creates binary code that’s ready to run on the STM32 boards.
- Digital signing for OTA updates: The IoT stack digitally signs the compiled binary code, ensuring its integrity and authenticity during OTA updates. This prevents unauthorized code from running on the devices and safeguards the entire ecosystem.
- Seamless OTA deployment: Once the binary code is compiled and digitally signed, the IoT stack triggers an AWS IoT job that deploys the latest binary onto the STM32 boards over the air. OTA updates eliminate the need for manual intervention and allow devices to stay up to date with the latest features and bug fixes; a short code sketch of this step follows the list.
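For a sense of what that last step involves, the boto3 call below (an illustrative sketch, not the stack’s generated code) asks AWS IoT to run a code-signing job on a compiled binary in S3 and roll it out to a thing group as an OTA update; every name and ARN is a placeholder.

```python
import boto3

iot = boto3.client("iot")

iot.create_ota_update(
    otaUpdateId="audio-fw-1-0-3",                      # assumed version naming
    targets=["arn:aws:iot:eu-west-1:123456789012:thinggroup/stm32-audio-fleet"],
    targetSelection="SNAPSHOT",
    files=[{
        "fileName": "firmware.bin",
        "fileLocation": {"s3Location": {
            "bucket": "audio-firmware-artifacts",       # assumed bucket
            "key": "builds/firmware-1.0.3.bin"}},
        "codeSigning": {"startSigningJobParameter": {
            "signingProfileName": "Stm32SigningProfile",  # assumed signing profile
            "destination": {"s3Destination": {"bucket": "audio-firmware-artifacts"}}}},
    }],
    roleArn="arn:aws:iam::123456789012:role/OtaUpdateRole",
)
```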
Pipeline Stack
The backbone of our comprehensive CI/CD environment is the Pipeline stack, which plays a pivotal role in streamlining the development and deployment process for our solution.
With a one-time deployment, this stack lays the groundwork for efficiently managing future changes through code repository integration. It handles the independent deployment of both the ML and IoT stacks, ensuring each component cooperates smoothly within the larger ecosystem.
By automating the entire CI/CD workflow, the Pipeline stack is configured to deploy a single ML stack and multiple IoT stacks for different environments (dev, staging, and prod). This approach guarantees the audio event detection model created by the ML stack is deployed and used across all environments; a compact code sketch of this pipeline follows the figure below.
Figure 3 – CI/CD pipeline workflow.
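A compact sketch of how such a pipeline might be expressed with CDK Pipelines in Python is shown below; the repository, connection ARN, and stage composition are assumptions rather than the kit’s actual definition.

```python
import aws_cdk as cdk
from aws_cdk import pipelines
from constructs import Construct

class MlStage(cdk.Stage):
    """Wraps the single ML stack so the pipeline deploys it once."""
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        cdk.Stack(self, "MlStack")   # stand-in for the ML stack from the earlier sketch

class IotStage(cdk.Stage):
    """Wraps one IoT stack per target environment (dev, staging, prod)."""
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        cdk.Stack(self, "IotStack")  # stand-in for the IoT stack from the earlier sketch

class PipelineStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        pipeline = pipelines.CodePipeline(
            self, "CiCdPipeline",
            synth=pipelines.ShellStep(
                "Synth",
                input=pipelines.CodePipelineSource.connection(
                    "example-org/edge-ai-audio", "main",   # assumed repository and branch
                    connection_arn="arn:aws:codestar-connections:eu-west-1:123456789012:connection/example"),
                commands=["pip install -r requirements.txt", "npx cdk synth"]))

        pipeline.add_stage(MlStage(self, "Ml"))            # single shared ML stack
        for env_name in ["dev", "staging", "prod"]:        # one IoT stack per environment
            pipeline.add_stage(IotStage(self, f"Iot-{env_name}"))
```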
Visualizing Data to Empower Actionable Insights
Data visualization is crucial for deriving actionable insights. Amazon Timestream, a fast, scalable, serverless time-series database, collects and stores the sound event data from the IoT devices. The solution also uses Amazon Managed Grafana to create a dynamic, interactive dashboard for real-time monitoring and analysis; a short query example follows the figure below.
Figure 4 – Data visualization with Amazon Managed Grafana.
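As a small illustration of working with that data outside Grafana, the snippet below queries Timestream for the last hour of sound-event counts with boto3. It is an assumed sketch: the database, table, and record layout depend on how the AWS IoT rule writes the classification results.

```python
import boto3

tsq = boto3.client("timestream-query")

# Assumed database/table and record layout (one record per published JSON field,
# written by an AWS IoT rule with a Timestream action).
QUERY = """
    SELECT measure_value::varchar AS detected_class, COUNT(*) AS events
    FROM "smart_city"."sound_events"
    WHERE measure_name = 'class' AND time > ago(1h)
    GROUP BY measure_value::varchar
    ORDER BY events DESC
"""

response = tsq.query(QueryString=QUERY)   # pagination via NextToken omitted for brevity
for row in response["Rows"]:
    detected_class, events = (col.get("ScalarValue") for col in row["Data"])
    print(f"{detected_class}: {events} detections in the last hour")
```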
Conclusion
The AWS STM32 ML Accelerator Kit demonstrates the transformative potential of AI and IoT in edge computing. By leveraging AWS services, STM32 technology, and a comprehensive infrastructure, the kit empowers businesses to deploy efficient and accurate audio classification models on edge devices.
This solution unlocks real-time processing, enhanced privacy, and continuous operation, driving innovation and redefining possibilities in diverse industries. Organizations can embrace the power of edge intelligence with the AWS STM32 ML Accelerator Kit and witness a new era of connected intelligence.