AWS Startups Blog

Enabling AI and Machine Learning Model Training with Teraki

Guest post by Bilal Khaddaj & Jayanth Suresh, Product Management, Teraki

The Teraki platform, built by AI startup Teraki, automates intelligent sensor processing for telematics, video, and 3D point cloud data. The platform was developed with a single goal: deliver the scalability needed to handle high volumes of sensor data from vehicles and devices. In addition to cloud services, the Teraki platform includes device SDKs that conform to automotive safety standards and are AUTOSAR and MISRA compliant. These SDKs enable customers to effortlessly develop scalable, embedded applications at the edge to capture and process high volumes of data.

Teraki product and platform offerings can be deployed in a wide range of industry applications, including L2+ functionality, driver monitoring, remote operations, autonomous driving, remote delivery, object detection, predictive maintenance, and many more. When building the platform, we chose versatile AWS services to ensure high availability and scalability of the infrastructure in the cloud. For fully automated sensor data ingestion, we relied on AWS IoT Device Management. Here’s how we did it.

Platform Services

Teraki’s intelligent AI algorithms tackle the extraction and selection of relevant information at the sensor level. The REST APIs enable developers to easily create, improve, and implement their own AI models. The Teraki platform currently consists of the following services:

•      Data upload service: Supports import and export of large amounts of sensor signals as well as video data.

•      Model training service: Includes the training and testing of intelligent data processing models, allowing customers to develop, evaluate, and select the best model for their use case.

•      Model service: Supports the management of Teraki models. It also enables developers to compile and export models along with the Teraki encoder to be installed on embedded devices.

•      Decoder service: This service is responsible for reconstructing encoded binary payloads generated at the edge by Teraki’s encoder.

These services allow customers to efficiently and swiftly develop the best applications for any targeted use case. For more information, check out our Platform Launch blog post.
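To give a flavor of how these services are consumed programmatically, here is a minimal sketch of starting a model training job over HTTP. The base URL, endpoint path, payload fields, and authentication header are hypothetical placeholders rather than the actual Teraki API; the real interface is documented in the DevCenter.

```python
import requests

# Hypothetical base URL and API key -- placeholders, not the real Teraki API.
BASE_URL = "https://api.example-teraki-platform.com/v1"
API_KEY = "YOUR_API_KEY"

headers = {"Authorization": f"Bearer {API_KEY}"}

# Illustrative request: create a training job for a data processing model.
payload = {
    "model_type": "roi_detector",   # hypothetical model type name
    "dataset_id": "dataset-1234",   # ID of previously uploaded sensor/video data
    "epochs": 20,
}

response = requests.post(f"{BASE_URL}/models/train", json=payload, headers=headers)
response.raise_for_status()
print("Training job started:", response.json())
```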

AWS in the Architecture

The architecture diagram below illustrates the underlying dependencies on AWS. Teraki platform services rely on various technologies offered by AWS for compute, container, storage, and IoT management.

Figure 1: Teraki’s services with AWS dependencies

Main Challenges

We at Teraki face complex challenges when developing platform services, including data storage, security, recovery, scalability, and availability. Leveraging our partnership with AWS, we tackled these challenges in the most efficient and reliable way, making our platform scalable while guaranteeing high availability to support high volumes of sensor data.

AWS helped us achieve this by offering a secure and scalable cloud infrastructure. Moreover, AWS provides several on-demand resources, such as compute instances (for example, Amazon EC2), databases (Amazon RDS, Amazon ElastiCache), storage (Amazon S3, Amazon EBS, Amazon EFS), container orchestration services, and IoT device management services, which enable Teraki to build a highly available and secure platform offering.

Data Upload Service

To train machine learning models such as the Teraki Region of Interest (RoI) detector, huge video files (> 100 GB) with their corresponding labels have to be uploaded via the Data Upload Service. This poses a challenge: large uploads take a long time, and any error, network issue, or timeout in the video processor forces the process to start over from the beginning. To address this and reduce the number of steps, Teraki uploads files directly to Amazon S3 and uses an AWS Lambda function as a bridge between S3 and the Teraki video processor service. When a file upload completes, S3 triggers a real-time notification that invokes the Lambda function, which in turn informs the video processor of the change along with all the necessary metadata.
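As an illustration of the direct-to-S3 upload path, the following sketch uses boto3’s managed transfer, which automatically switches to multipart uploads and retries individual parts for very large files. The bucket and object key names are placeholders, and the actual upload mechanism used by the Data Upload Service (for example, presigned URLs) may differ.

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Placeholder bucket/key -- the real upload target is managed by the Data Upload Service.
BUCKET = "example-teraki-uploads"
KEY = "videos/drive-session-001.mp4"

s3 = boto3.client("s3")

# Multipart settings: files above ~100 MB are split into 100 MB parts and
# uploaded in parallel; a failed part is retried without restarting the whole file.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
    max_concurrency=8,
)

s3.upload_file("drive-session-001.mp4", BUCKET, KEY, Config=config)
print(f"Uploaded to s3://{BUCKET}/{KEY}")
```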

Figure 2: AWS Lambda function and S3
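A minimal sketch of the Lambda side of this flow is shown below, assuming the function is subscribed to S3 ObjectCreated events and forwards the object metadata to a video processor endpoint. The endpoint URL and payload fields are hypothetical; Teraki’s actual bridge may pass different metadata.

```python
import json
import urllib.request

# Hypothetical internal endpoint of the video processor service.
VIDEO_PROCESSOR_URL = "https://video-processor.internal.example.com/notify"

def lambda_handler(event, context):
    """Triggered by S3 ObjectCreated notifications when a video upload completes."""
    for record in event.get("Records", []):
        s3_info = record["s3"]
        payload = {
            "bucket": s3_info["bucket"]["name"],
            "key": s3_info["object"]["key"],
            "size_bytes": s3_info["object"].get("size"),
            "event_time": record.get("eventTime"),
        }

        # Forward the upload metadata to the video processor service.
        request = urllib.request.Request(
            VIDEO_PROCESSOR_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            print(f"Notified video processor: {response.status} for {payload['key']}")

    return {"statusCode": 200}
```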

Conclusion

Teraki enables customers to detect events and objects in the data with a high degree of accuracy. Our intelligent AI models pre-process the data and enable customers to implement data selection at the edge to derive high-quality data.

With an Edge SDK conforming to automotive safety standards, the Teraki platform delivers an industry-wide solution to cope with exploding data volumes at the edge and turn them into accurate and efficient algorithms. The platform is accessible to customers through a visual interface (DevCenter) and REST APIs for creating and implementing custom AI models, and it is built to handle high volumes of data. Its scalable and automated data ingestion bridges the gap between the planning and execution of new AI models without compromising the safety of the application.

With the help of AWS services, our services can handle huge volumes of data and enable the training of machine learning models. Backed by AWS, we have a stable and reliable cloud solution that keeps our AI models available across the world at any time. For more information about the Teraki platform, please refer to Teraki’s DevCenter or visit www.teraki.com.