This Guidance demonstrates how to use AGCO's solutions on AWS to help farmers predict healthy livestock feed requirements. By deploying this Guidance, farmers can collect information about feed levels and consumption rates, and receive alerts when feed nears dangerously low levels. Data is collected in near real-time through automated systems and reported through remote connections to help farmers better understand the health, well-being, and contamination status of their livestock. Machine learning and advanced analytics then provide insights that help farmers maximize yields, forecast feed needs, identify when animals are ready for transport, and determine when farm houses are empty. Together with AGCO's solutions, this Guidance gives farmers tools to advance toward precision farming and optimize their resources.

Please note: [Disclaimer]

Architecture Diagram

[text]

Download the architecture diagram PDF 

Well-Architected Pillars

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.

The architecture diagram above is an example of a solution created with Well-Architected best practices in mind. To be fully Well-Architected, follow as many of these best practices as possible.

  • This Guidance recommends using Amazon CloudWatch for each AWS service and configuring alarms and event notifications through Amazon Simple Notification Service (Amazon SNS) to increase operational efficiency. You can also establish AWS IoT rules that report devices experiencing issues to CloudWatch. By using CloudWatch Logs, you can understand system performance and observe whether end-user content consumption is meeting business goals. You can script this reference architecture using AWS CloudFormation, adding it to your own development pipeline and deploying it in your cloud environment. This Guidance also uses AWS CodePipeline to deploy changes to Amazon Elastic Container Service (Amazon ECS) and AWS Lambda.

    Read the Operational Excellence whitepaper 
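The alarm-and-notify pattern described above can be sketched as the parameters for a CloudWatch `PutMetricAlarm` call that routes to an SNS topic. The `Farm/Feed` namespace, `FeedLevelPercent` metric, topic ARN, and thresholds are illustrative assumptions, not values defined by this Guidance:

```python
def low_feed_alarm(topic_arn, threshold=10.0):
    """Return put_metric_alarm parameters for a low-feed-level alert.

    Assumes an AWS IoT rule publishes a custom "FeedLevelPercent"
    metric to the "Farm/Feed" namespace (both hypothetical names).
    """
    return {
        "AlarmName": "feed-level-low",
        "Namespace": "Farm/Feed",            # assumed custom namespace
        "MetricName": "FeedLevelPercent",    # assumed metric from the IoT rule
        "Statistic": "Average",
        "Period": 300,                       # evaluate 5-minute averages
        "EvaluationPeriods": 2,              # require two consecutive breaches
        "Threshold": threshold,
        "ComparisonOperator": "LessThanThreshold",
        "AlarmActions": [topic_arn],         # notify operators through SNS
        "TreatMissingData": "breaching",     # a silent sensor is also a problem
    }

params = low_feed_alarm("arn:aws:sns:us-east-1:123456789012:feed-alerts")
# boto3.client("cloudwatch").put_metric_alarm(**params)  # deploy when ready
```

Treating missing data as breaching is one possible choice here: a feed sensor that stops reporting warrants the same attention as one reporting a low level.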
  • This Guidance uses only serverless and managed services to reduce your security maintenance tasks. For example, it uses AWS Identity and Access Management (IAM) policies to manage permissions and authorization for AWS IoT Core devices, and it authenticates Message Queuing Telemetry Transport (MQTT) messages sent to AWS IoT Core. The AWS IoT message broker encrypts all communications in transit. AWS IoT Core also lets you manage device security and certificates and publish alerts if a device exhibits certain behaviors. You should follow best practices when setting access requirements using IAM, including least-privilege access, password and key rotation, service control policies, and automated alerting. You should also implement OAuth or similar authentication for the dashboard services, for example by using Amazon Cognito.

    This Guidance uses network isolation of managed services and offers firewall options to control network access. Each specific AWS service encrypts its data, and AWS encrypts all data in transit between services. An AWS Certificate Manager (ACM) certificate encrypts all traffic in transit into AWS, and Application Load Balancer uses TLS 1.2 for communication. This Guidance also protects data in data lakes using SSE-S3 encryption and uses dashboards and data APIs instead of providing direct data access to users.

    Read the Security whitepaper 
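One way to express the least-privilege principle above is an AWS IoT Core policy that uses the `${iot:Connection.Thing.ThingName}` policy variable, so each device can connect only as its registered thing and publish only to its own topic. The `farm/feed/...` topic naming is an assumption for illustration:

```python
def device_policy(account_id, region="us-east-1"):
    """Sketch of a least-privilege AWS IoT policy document.

    The ${iot:Connection.Thing.ThingName} policy variable resolves to
    the connecting device's thing name, scoping each device to its own
    client ID and telemetry topic (topic prefix is hypothetical).
    """
    arn = f"arn:aws:iot:{region}:{account_id}"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "iot:Connect",
                "Resource": f"{arn}:client/${{iot:Connection.Thing.ThingName}}",
            },
            {
                "Effect": "Allow",
                "Action": "iot:Publish",
                "Resource": f"{arn}:topic/farm/feed/${{iot:Connection.Thing.ThingName}}",
            },
        ],
    }
```

A policy like this prevents a compromised device from impersonating other devices or publishing to topics outside its own telemetry stream.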
  • This Guidance incorporates managed services with availability design goals of at least 99.9 percent. AWS IoT Core and the MQTT protocol were built for resilience, and the AWS IoT Device Software Development Kits (SDKs) have built-in resilience features: they automatically reconnect after non-client-initiated disconnects and queue MQTT operations during a network failure. AWS IoT Core stores information about IoT devices, CA certificates, device certificates, and device shadow data and automatically replicates it across Availability Zones so it remains available during a hardware or network failure. The AWS IoT Device Shadow service works with AWS IoT Greengrass to sync local device shadow states with AWS IoT Core, so an application can still communicate with a device's shadow even if the device goes offline. The AWS IoT Greengrass stream manager batches data feeds during a network failure and automatically forwards the information when connectivity is restored. Additionally, all compute in this Guidance is stateless and relies on data storage that is purpose-built to persist system state.

    Read the Reliability whitepaper 
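The buffer-and-forward behavior of the stream manager described above can be illustrated with a minimal sketch: messages queue locally during an outage and flush when connectivity returns. This is a conceptual model, not the stream manager implementation, and the bounded queue size is an assumed design choice:

```python
from collections import deque


class OfflineBuffer:
    """Minimal sketch of buffer-and-forward during a network failure.

    Messages queue locally while disconnected; on reconnect, the backlog
    is forwarded in order. A bounded deque drops the oldest readings if
    an outage outlasts local capacity (an assumed policy).
    """

    def __init__(self, maxlen=1000):
        self.queue = deque(maxlen=maxlen)
        self.connected = False
        self.sent = []  # stand-in for messages delivered upstream

    def publish(self, msg):
        if self.connected:
            self.sent.append(msg)   # stand-in for an MQTT publish
        else:
            self.queue.append(msg)  # buffer locally during the outage

    def reconnect(self):
        self.connected = True
        while self.queue:           # forward the backlog in arrival order
            self.sent.append(self.queue.popleft())
```

In practice the SDKs and stream manager handle this for you; the sketch only shows why stateless compute plus durable buffering lets the system ride out connectivity loss without losing feed readings.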
  • This Guidance uses services chosen for low latency, high availability, resilience, removal of undifferentiated heavy lifting, and efficiency. For data ingestion, it uses Amazon Kinesis, which can easily scale to hundreds of thousands of devices and millions of messages per month. For processing compute, it uses Lambda, which scales alongside serverless ingestion and data services. Amazon ECS provides steady-state capacity, high availability, and quick responsiveness. This Guidance scales its use of serverless and managed services and components up and down as needed. It can handle 100,000 messages per minute from devices and over one billion messages stored in DynamoDB.

    You can configure this Guidance to meet your needs. For example, you can set up Lambda functions and AWS IoT rules, as well as CloudWatch alerts, alarm thresholds, configurations, and logs. You can also experiment to pick the right data store for your needs, and AWS CodePipeline lets you make changes to the Amazon ECS and Lambda parts of the architecture. 

    Read the Performance Efficiency whitepaper 
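To make the 100,000-messages-per-minute figure concrete, a rough Kinesis Data Streams sizing estimate follows from the published per-shard ingest limits of 1,000 records per second and 1 MiB per second. The ~1 KB average message size is an assumption for typical feed-sensor telemetry:

```python
import math


def shards_needed(msgs_per_minute, avg_msg_kb):
    """Rough Kinesis Data Streams shard estimate for an ingest rate.

    Uses the per-shard ingest limits of 1,000 records/s and 1 MiB/s;
    the binding constraint (record count or bytes) determines the count.
    """
    per_sec = msgs_per_minute / 60
    by_records = per_sec / 1000               # shards needed by record rate
    by_bytes = per_sec * avg_msg_kb / 1024    # shards needed by byte rate
    return max(1, math.ceil(max(by_records, by_bytes)))


# 100,000 messages/minute of ~1 KB telemetry is about 1,667 records/s,
# which fits in 2 shards with headroom to spare.
```

With on-demand capacity mode, Kinesis handles this scaling automatically; the arithmetic is still useful for provisioned mode or for sanity-checking costs.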
  • This Guidance uses a serverless infrastructure to avoid overprovisioning resources, and it uses managed services to relieve your management burden, helping you save on operational costs. Serverless architectures provide a pay-as-you-go pricing model and scale based on demand. You can also optimize costs by service. For example, you can cache dashboard responses in Amazon CloudFront or move data between storage tiers in Amazon S3 and DynamoDB based on access patterns. Kinesis Data Streams and DynamoDB let you choose between on-demand and provisioned (auto scaling) capacity modes, and you can implement throttling using AWS WAF. This Guidance is not expected to incur any inter-Region data transfer charges. For OpenSearch Service, you can purchase the optimal instance type for your needs and manage storage to reduce costs. You can also use Compute Savings Plans to optimize compute costs.

    You can select from existing IoT device partners that fit your technical and financial needs within the AWS Marketplace, or if you manufacture your own IoT hardware, you can directly control the connectivity costs using AWS IoT. AWS IoT Core lets you filter important equipment data and use the MQTT protocol to efficiently transfer data to AWS, minimizing repetitive data. This Guidance also recommends that you use AWS Budgets and Amazon Data Lifecycle Manager policies to reduce unnecessary costs and Cloud Intelligence Dashboards for comprehensive cost management.

    Read the Cost Optimization whitepaper 
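Moving S3 data based on access patterns, as suggested above, is typically expressed as a lifecycle configuration. The sketch below tiers aged telemetry down to cheaper storage classes; the `telemetry/` prefix and the 30/90/365-day retention periods are illustrative assumptions, not values prescribed by this Guidance:

```python
def telemetry_lifecycle(prefix="telemetry/"):
    """Sketch of an S3 lifecycle configuration for raw telemetry.

    Recent data stays in Standard for dashboards; older data moves to
    Standard-IA, then Glacier, and finally expires. Prefix and day
    counts are hypothetical and should match your access patterns.
    """
    return {
        "Rules": [
            {
                "ID": "tier-down-telemetry",
                "Filter": {"Prefix": prefix},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    }
```

A configuration like this can be applied with `put_bucket_lifecycle_configuration` or embedded in a CloudFormation template alongside the rest of the architecture.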
  • This Guidance uses managed services that are serverless where possible and that you can easily scale up and down based on demand, minimizing the environmental impact of backend services. This Guidance also minimizes redundant data sent from IoT devices to AWS IoT Core and stores data once in DynamoDB, reducing data movement across the network.

    Read the Sustainability whitepaper 
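Minimizing redundant data from devices often comes down to delta reporting: publish only the fields that changed since the last reading, the same idea the Device Shadow service uses for reported-state deltas. A minimal device-side sketch, with hypothetical sensor field names:

```python
def delta(previous, current):
    """Return only the fields that changed since the last reading.

    Publishing this delta instead of the full reading reduces payload
    size and network transfer from devices to AWS IoT Core. Field names
    ("feed", "temp") are hypothetical sensor keys.
    """
    return {k: v for k, v in current.items() if previous.get(k) != v}
```

If nothing changed, the delta is empty and the device can skip the publish entirely, which is where most of the data-movement savings come from.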

Implementation Resources

A detailed guide is provided for you to experiment with and use within your AWS account. It walks through each stage of the Guidance, including deployment, usage, and cleanup.

The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.

AWS for Industries
Blog

Livestock Health and Quality of Life Monitoring on AWS with Agriculture Leader AGCO

This post demonstrates how to use AGCO's livestock Health and Quality of Life Monitoring system to obtain deeper data, collected through automated systems and reported through remote connections, for understanding animal health, well-being, and contamination.

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.

References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.
