Intello Labs Cuts Down on Fresh Produce Wastage Using AI and Computer Vision
Guest post by Ramakrishnan M., Vice President – Sales, Intello Labs
Around the world, $500 billion worth of food is wasted or lost every year. If food losses could be halved, one billion more people could be fed. We all know technology can do wonders, and its adoption in agriculture is picking up pace. With new food safety regulations and the increasing need for transparency, businesses are exploring the potential of technologies such as AI and computer vision in the food supply chain. As techies, we were experimenting with different use cases when we came across the challenges faced by the fresh food industry: changing customer buying preferences and quality expectations on the one hand, and increasing food losses on the other. From there, the software concepts behind Intello Labs were born.
After zeroing in on our focal point, we created an app that captures an image of a fresh produce sample and returns its quality metrics – such as color, size, and visual defects – in real time. Anyone can assess the quality of their produce and decide whether to accept or reject the sample, reducing value risk and wastage in agriculture supply chains and ensuring the highest-quality food reaches consumers.
Getting the tech solution in place was not easy. As a deep-tech company that deals with huge amounts of image-based data, our choice of infrastructure has a direct impact on our business, cost, and efficiency.
There were multiple problems to solve:
- The quality grading of fruits and vegetables (F&V) happens at specific times of the year (‘seasons’), for specific clients, and at different locations. Scaling the infrastructure up and down as needed was our biggest problem.
- Because we work with huge volumes of image data across different F&V commodities, model training is very dynamic, so managing the training infrastructure was a challenge. Managing servers manually also consumed significant engineering bandwidth.
- Storing huge volumes of data, accessing it on demand, and optimizing the cost of doing so.
- Securing image, video, and other graphical data with the right checks and balances.
- Our engineers were spending a lot of time manually managing the different permutations of model training. We were looking for a service where we could define hyper-parameter ranges and have it automatically run experiments across those values and pick the best ones.
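The manual bookkeeping described above looks roughly like the loop below – a minimal illustrative sketch, not our production code. The `train_and_score` stub and parameter names are hypothetical stand-ins for a real training run:

```python
from itertools import product

def train_and_score(learning_rate, batch_size):
    """Stand-in for a real training run; returns a mock validation score."""
    # Hypothetical scoring surface that favors a mid-range learning rate.
    return 1.0 - abs(learning_rate - 0.01) * 10 - batch_size / 10000

def grid_search(lr_values, batch_values):
    """Try every combination of hyper-parameters and keep the best score."""
    best = None
    for lr, bs in product(lr_values, batch_values):
        score = train_and_score(lr, bs)
        if best is None or score > best["score"]:
            best = {"learning_rate": lr, "batch_size": bs, "score": score}
    return best

best = grid_search([0.001, 0.01, 0.1], [32, 64, 128])
```

Every extra hyper-parameter multiplies the number of runs to launch, monitor, and compare – exactly the toil a managed tuning service removes.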
AWS came to the rescue:
We adopted several AWS services, and together they solved these problems seamlessly.
- Amazon EC2: Easy-to-understand instance options and detailed performance metrics let us monitor both the cost and the performance of our system seamlessly.
- AWS Auto Scaling: For companies like us, where system usage varies significantly day to day and across locations, AWS Auto Scaling manages capacity based on usage and schedules. With it, we were able to optimize both cost and infrastructure.
- Amazon S3: With Amazon S3, we can store huge and growing amounts of data, including images and videos. The multi-tier storage options help us greatly in optimizing cost; S3 Standard, S3 Intelligent-Tiering, and Amazon S3 Glacier are among the storage classes we use heavily.
- Amazon SageMaker: Amazon SageMaker solves our model training and optimization problems, handling hyper-parameter optimization and training runs well. Since we have proprietary models, the option to bring our own customized models was important. We use SageMaker to tune hyper-parameters, run different experiments, and compare model performance.
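A multi-tier S3 setup like the one above can be expressed as a lifecycle configuration. The sketch below builds the payload that boto3's `put_bucket_lifecycle_configuration` accepts; the rule name, key prefix, and day thresholds are illustrative assumptions, not our production values:

```python
def build_lifecycle_config(ia_days=30, glacier_days=180):
    """Lifecycle rules that move aging produce images to cheaper S3 tiers."""
    return {
        "Rules": [
            {
                "ID": "tier-produce-images",      # hypothetical rule name
                "Status": "Enabled",
                "Filter": {"Prefix": "images/"},  # hypothetical key prefix
                "Transitions": [
                    {"Days": ia_days, "StorageClass": "INTELLIGENT_TIERING"},
                    {"Days": glacier_days, "StorageClass": "GLACIER"},
                ],
            }
        ]
    }

config = build_lifecycle_config()
# Applied with boto3 (bucket name is a placeholder):
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-produce-images", LifecycleConfiguration=config)
```

Once the rule is in place, S3 tiers objects automatically – no application code has to move data between storage classes.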
Following is a high-level architectural diagram of our system:
The architecture consists of the following five components:
● End-user flow
● Network layer powered by AWS
● Client server powered by AWS
● AWS services and components
● Autovalidator server
End users can send images either from a mobile phone or from another designated system, such as a sorter. The application captures a picture of fruits, vegetables, or grains using the mobile camera; this image is then sent to the server for processing to produce the quality results.
Internal users also use the “Autovalidator” system to perform various activities: image annotation, dataset creation, reviewing, model training, client dashboard preparation, live call-to-action work, model testing, and result validation. Admin users control all Autovalidator functionality and AWS components. All these systems are deployed on AWS.
Because the system is very dynamic and a surge of requests can arrive at any time, we use an Application Load Balancer to distribute the load. SSL is enabled through AWS Certificate Manager. Using the domain name, Amazon Route 53 routes traffic based on various routing policies (such as failover, latency, and weighted). After DNS resolution by Route 53, user requests from the mobile app or web application reach the Application Load Balancer, which forwards them to the web server.
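A weighted routing policy of this kind can be submitted to Route 53 as a change batch for `change_resource_record_sets`. This is a hedged sketch: the domain, IP addresses, set identifiers, and 80/20 split are all hypothetical examples, not our real records:

```python
def weighted_record(name, target_ip, identifier, weight):
    """One weighted A record; Route 53 splits traffic by relative weight."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "A",
            "SetIdentifier": identifier,  # distinguishes records with the same name
            "Weight": weight,             # relative share of traffic
            "TTL": 60,
            "ResourceRecords": [{"Value": target_ip}],
        },
    }

change_batch = {
    "Changes": [
        weighted_record("api.example.com.", "203.0.113.10", "primary", 80),
        weighted_record("api.example.com.", "203.0.113.20", "secondary", 20),
    ]
}
# Submitted with boto3 (hosted zone ID is a placeholder):
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="ZEXAMPLE", ChangeBatch=change_batch)
```

Weights are relative, so 80/20 here means roughly four out of five requests resolve to the primary endpoint.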
We use multiple types of Amazon EC2 instances based on workload, with databases on separate instances. Amazon CloudWatch continuously monitors instance performance, and an AMI of each EC2 instance is created monthly as a backup. Each client has a separate Amazon S3 bucket for image storage. Images from the load balancer reach the application server (Rails) through the web server (Nginx) and are processed for AI model inference; all relevant data is stored in an Amazon S3 bucket during this process.
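Instance monitoring of the kind described can be wired up as CloudWatch alarms. The sketch below builds the parameters for boto3's `put_metric_alarm`; the alarm naming scheme, instance ID, and 80% threshold are assumptions for illustration:

```python
def cpu_alarm_params(instance_id, threshold=80.0):
    """Parameters for CloudWatch put_metric_alarm: alert on sustained high CPU."""
    return {
        "AlarmName": f"high-cpu-{instance_id}",  # hypothetical naming scheme
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,            # evaluate in 5-minute windows
        "EvaluationPeriods": 2,   # require two consecutive breaches
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }

params = cpu_alarm_params("i-0123456789abcdef0")
# boto3.client("cloudwatch").put_metric_alarm(**params)
```

Requiring two consecutive five-minute breaches avoids paging on brief inference spikes while still catching sustained overload.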
Our internal tool, the Autovalidator server, manages the machine learning pipeline for us: data collection, data annotation, model KPI management, model training, and model optimization. The backend runs on AWS, with Amazon EC2 GPU instances handling model training and Amazon SageMaker handling model optimization in parallel.
We use several AWS components for the machine learning pipeline. Amazon EC2 instances host our servers (application servers such as Autovalidator and the client servers) alongside powerful GPU training instances. Amazon S3 is our primary storage service, with access granted through IAM policies based on access requirements within the organization. Amazon CloudWatch alarms alert us to performance problems affecting external users.
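An IAM policy scoping access to a single client's bucket can be sketched as below. The bucket name and the exact action list are illustrative assumptions; a real policy would be tailored to each role's requirements:

```python
import json

def client_bucket_policy(bucket_name):
    """IAM policy document granting read/write access to one client's bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Object-level access: read and write images in this bucket only.
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            },
            {
                # Bucket-level access: listing applies to the bucket ARN itself.
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": f"arn:aws:s3:::{bucket_name}",
            },
        ],
    }

policy_json = json.dumps(client_bucket_policy("example-client-a-images"), indent=2)
```

Because each client's images live in a separate bucket, one policy per client keeps access strictly partitioned.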
Without AWS, we could not have reached where we are today, and we are very excited about the future. Can you imagine a simple mobile photograph reducing food loss and waste? We look forward to AWS supporting us along our journey.
Interested in ML on AWS? Contact us today!