Use Decision Support System for Agrotechnology Transfer for more efficient crop management
This Guidance demonstrates how to use the Decision Support System for Agrotechnology Transfer (DSSAT) on AWS to run crop simulation models more efficiently. DSSAT is a software application that comprises crop simulation models for over 42 crops (as of Version 4.8), along with tools that facilitate effective use of those models.
By deploying DSSAT on AWS and using the environmental datasets available there, you can collect data on plant characteristics and a crop’s vegetative and reproductive development stages. This helps you compare simulated and multi-year outcomes to make better decisions for crop management, grain marketing, logistics and supply chain, and risk management.
Please note: [Disclaimer]
Architecture Diagram
[text]
Step 1
Load farm data formatted for DSSAT version 4.8 into Amazon Simple Storage Service (Amazon S3).
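A minimal sketch of this step using boto3, assuming a hypothetical bucket name and an example DSSAT experiment (FileX) file; substitute your own farm data files and S3 prefix.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and object names; the key prefix is whatever your
# S3 event notification (Step 2) is configured to watch.
BUCKET = "my-dssat-farm-data"
s3.upload_file(
    Filename="UFGA8201.MZX",       # example DSSAT experiment file name
    Bucket=BUCKET,
    Key="incoming/UFGA8201.MZX",
)
```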
Step 2
Amazon S3 invokes an AWS Lambda function, which loads the farm data into the DSSAT application directory on the Amazon Elastic Compute Cloud (Amazon EC2) for Microsoft Windows Server instance using a virtual private cloud (VPC) endpoint.
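The Guidance does not prescribe how the Lambda function places the file on the instance; the sketch below assumes the Windows instance is managed by AWS Systems Manager (reachable through the VPC endpoint) and has the AWS Tools for PowerShell installed. The instance ID and DSSAT directory are hypothetical placeholders.

```python
import urllib.parse

import boto3

ssm = boto3.client("ssm")

INSTANCE_ID = "i-0123456789abcdef0"   # hypothetical EC2 instance ID
DSSAT_DIR = r"C:\DSSAT48\FarmData"    # hypothetical DSSAT data directory

def handler(event, context):
    # S3 event notifications can batch records; handle each uploaded object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        filename = key.split("/")[-1]
        # Ask the instance to pull the object itself via Run Command.
        ssm.send_command(
            InstanceIds=[INSTANCE_ID],
            DocumentName="AWS-RunPowerShellScript",
            Parameters={
                "commands": [
                    f"Copy-S3Object -BucketName {bucket} -Key {key} "
                    f"-LocalFile {DSSAT_DIR}\\{filename}"
                ]
            },
        )
```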
Step 3
Gather field-centric weather data from Internet of Things (IoT) weather stations that use AWS IoT libraries. Send the data to AWS IoT Core through Message Queuing Telemetry Transport (MQTT).
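A sketch of a station publishing one observation with the AWS IoT Device SDK for Python v2 (awsiotsdk); the endpoint, certificate paths, topic, and payload fields are hypothetical placeholders.

```python
import json
import time

from awscrt import mqtt
from awsiot import mqtt_connection_builder

# Hypothetical AWS IoT endpoint and device credential paths.
connection = mqtt_connection_builder.mtls_from_path(
    endpoint="abc123-ats.iot.us-east-1.amazonaws.com",
    cert_filepath="device.pem.crt",
    pri_key_filepath="private.pem.key",
    ca_filepath="AmazonRootCA1.pem",
    client_id="weather-station-01",
)
connection.connect().result()

# One field-centric weather observation (example fields).
reading = {
    "station_id": "weather-station-01",
    "timestamp_ms": int(time.time() * 1000),
    "temp_c": 27.4,
    "rainfall_mm": 0.2,
    "solar_rad_mj_m2": 18.6,
}
publish_future, _ = connection.publish(
    topic="farm/weather",          # topic the IoT rule (Step 4) watches
    payload=json.dumps(reading),
    qos=mqtt.QoS.AT_LEAST_ONCE,    # at-least-once delivery
)
publish_future.result()            # wait for the broker's acknowledgment
connection.disconnect().result()
```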
Step 4
Configure an AWS IoT rule to insert the weather station data into Amazon Timestream.
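The rule can be created in the AWS IoT console; a boto3 sketch is below. The rule name, SQL, role ARN, and Timestream database and table are hypothetical, and each non-dimension field selected by the SQL is written as a Timestream measure.

```python
import boto3

iot = boto3.client("iot")

iot.create_topic_rule(
    ruleName="weather_to_timestream",
    topicRulePayload={
        "sql": "SELECT temp_c, rainfall_mm, solar_rad_mj_m2 FROM 'farm/weather'",
        "awsIotSqlVersion": "2016-03-23",
        "actions": [
            {
                "timestream": {
                    "roleArn": "arn:aws:iam::123456789012:role/iot-timestream-role",
                    "databaseName": "farm",
                    "tableName": "weather",
                    "dimensions": [
                        # Substitution template pulls the station ID from the payload.
                        {"name": "station_id", "value": "${station_id}"}
                    ],
                }
            }
        ],
    },
)
```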
Step 5
Amazon EventBridge schedules a Lambda function to query, format, and save Timestream data into Amazon S3 and into Amazon EC2 for use in the crop simulations.
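A sketch of the scheduled function, reusing the hypothetical database, table, and bucket names from earlier steps; pushing the formatted file onward to the EC2 instance could reuse the Run Command approach from Step 2 and is omitted here.

```python
import datetime

import boto3

query_client = boto3.client("timestream-query")
s3 = boto3.client("s3")

def handler(event, context):
    # Pull the last day of weather observations.
    sql = (
        "SELECT time, measure_name, measure_value::double "
        'FROM "farm"."weather" WHERE time > ago(1d) ORDER BY time'
    )
    rows = []
    paginator = query_client.get_paginator("query")
    for page in paginator.paginate(QueryString=sql):
        for row in page["Rows"]:
            rows.append(",".join(col.get("ScalarValue", "") for col in row["Data"]))

    # Saved here as CSV; a real implementation would reformat the data into
    # the DSSAT weather (*.WTH) file format that the simulations expect.
    key = f"weather/{datetime.date.today().isoformat()}.csv"
    s3.put_object(Bucket="my-dssat-farm-data", Key=key, Body="\n".join(rows).encode())
```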
Step 6
Invoke a separate, scheduled Lambda function to run the crop simulations through DSSAT and move the output prediction files into Amazon S3 for long-term storage.
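A hypothetical sketch of this function, reusing the Step 2 Run Command assumption; the DSSAT install path, batch-mode invocation, and bucket are placeholders to adjust for your installation.

```python
import boto3

ssm = boto3.client("ssm")

INSTANCE_ID = "i-0123456789abcdef0"   # hypothetical EC2 instance ID

def handler(event, context):
    ssm.send_command(
        InstanceIds=[INSTANCE_ID],
        DocumentName="AWS-RunPowerShellScript",
        Parameters={
            "commands": [
                # Hypothetical batch-mode run; check your DSSAT 4.8 install
                # for the exact executable and batch file names.
                r"cd C:\DSSAT48\FarmData",
                r"..\DSCSM048.EXE B DSSBatch.v48",
                # Copy output prediction files to S3 for long-term storage.
                "Write-S3Object -BucketName my-dssat-farm-data "
                "-KeyPrefix predictions/ -Folder . -SearchPattern *.OUT",
            ]
        },
    )
```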
Step 7
The scheduled Lambda function also parses and inserts the crop prediction data into Timestream tables.
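A sketch of writing parsed prediction values into Timestream with boto3; the database, table, and measure names are hypothetical, and the values would come from parsing the DSSAT output files.

```python
import time

import boto3

write_client = boto3.client("timestream-write")

def store_prediction(field_id: str, predicted_yield_kg_ha: float, stage: str):
    now_ms = str(int(time.time() * 1000))
    dimensions = [{"Name": "field_id", "Value": field_id}]
    write_client.write_records(
        DatabaseName="farm",
        TableName="predictions",
        Records=[
            {
                "Dimensions": dimensions,
                "MeasureName": "predicted_yield_kg_ha",
                "MeasureValue": str(predicted_yield_kg_ha),
                "MeasureValueType": "DOUBLE",
                "Time": now_ms,
            },
            {
                "Dimensions": dimensions,
                "MeasureName": "growth_stage",
                "MeasureValue": stage,
                "MeasureValueType": "VARCHAR",
                "Time": now_ms,
            },
        ],
    )
```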
Step 8
Use Amazon API Gateway to invoke a Lambda function with logic to return crop prediction data and load it into your application for display and use.
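A sketch of the handler behind an API Gateway Lambda proxy integration; the query parameter, table, and measure name are hypothetical.

```python
import json

import boto3

query_client = boto3.client("timestream-query")

def handler(event, context):
    field_id = (event.get("queryStringParameters") or {}).get("field_id", "")
    # Note: validate field_id in production; interpolating raw user input
    # into a query string is shown here only for brevity.
    sql = (
        "SELECT time, measure_value::double AS predicted_yield_kg_ha "
        'FROM "farm"."predictions" '
        f"WHERE field_id = '{field_id}' "
        "AND measure_name = 'predicted_yield_kg_ha' "
        "ORDER BY time DESC LIMIT 100"
    )
    result = query_client.query(QueryString=sql)
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result["Rows"]),
    }
```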
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
This Guidance helps you improve operational efficiency by using Amazon CloudWatch to monitor each AWS service and by establishing AWS IoT rules to report device issues. You can configure alarms and event notifications in CloudWatch, which helps you understand system performance and track progress toward your business outcomes.
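As one example, a sketch of an alarm on failed invocations of a hypothetical simulation Lambda function; the function name and SNS topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="dssat-simulation-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "run-dssat-simulation"}],
    Statistic="Sum",
    Period=300,                 # evaluate over five-minute windows
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```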
Security
This Guidance uses serverless and managed services to reduce your security maintenance tasks. It isolates services by Region or VPC and offers various firewall options to control network access. The AWS services used encrypt data at rest, and data is encrypted in transit between services. Data in the Amazon S3 data lake is protected using SSE-S3 encryption.
You can use AWS Identity and Access Management (IAM) to set access requirements, service control policies, and automated alerting. You can use AWS IoT Core to manage device security and certificates and to publish alerts. The AWS IoT message broker encrypts all communications in transit. Additionally, you can implement API authentication for your applications using Amazon Cognito user pools in conjunction with API Gateway, and you can use Amazon Cognito and API Gateway together to govern data access rules.
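For example, a sketch of attaching a Cognito user pool authorizer to a REST API with boto3; the API ID and user pool ARN are placeholders.

```python
import boto3

apigw = boto3.client("apigateway")

apigw.create_authorizer(
    restApiId="a1b2c3d4e5",
    name="crop-api-cognito-auth",
    type="COGNITO_USER_POOLS",
    providerARNs=[
        "arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE"
    ],
    # API Gateway validates the JWT passed in this header against the pool.
    identitySource="method.request.header.Authorization",
)
```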
Reliability
This Guidance incorporates managed services with availability design goals of at least 99.9% for receiving, processing, and storing data. It uses only stateless compute and relies on purpose-built data storage to persist system states. This Guidance does not require high availability of the Amazon EC2 for Microsoft Windows Server instance, but you can use CloudWatch to monitor and report on the instance’s status and the status of all Lambda function invocations.
AWS IoT Core stores an IoT device registry, certificate authority (CA) certificates, device certificates, and device shadow data. The service automatically replicates this data across Availability Zones so that it remains available in the event of hardware or network failures. Additionally, the AWS IoT Device Software Development Kits (SDKs) support automatic reconnection after non-client-initiated disconnects and can queue MQTT operations during network failures.
Performance Efficiency
This Guidance uses serverless and managed service components as needed. For example, AWS IoT Core can easily scale to hundreds of thousands of devices and millions of messages a month, and Lambda scales alongside serverless ingestion and data services.
You can configure this Guidance to meet your needs by adjusting the Lambda functions, CloudWatch alarm thresholds and notification configurations, AWS IoT rules, and CloudWatch Logs settings. You can also easily update Timestream tables to experiment with ingesting, storing, and processing the right data for your needs.
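For instance, a sketch of adjusting a Timestream table’s retention tiers; the names and retention periods are examples to tune for your ingestion and query patterns.

```python
import boto3

ts_write = boto3.client("timestream-write")

ts_write.update_table(
    DatabaseName="farm",
    TableName="weather",
    RetentionProperties={
        "MemoryStoreRetentionPeriodInHours": 24,    # hot tier for fast recent queries
        "MagneticStoreRetentionPeriodInDays": 365,  # lower-cost tier for history
    },
)
```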
Cost Optimization
This Guidance uses a customer-managed Amazon EC2 for Microsoft Windows Server instance and a VPC endpoint, both of which incur costs for as long as they are provisioned. However, the Guidance also uses serverless infrastructure, which helps you avoid unnecessary costs by provisioning resources only as needed.
This Guidance minimizes repetitive data in messages to reduce payload size. With Amazon S3 and Timestream, you can move data to different storage classes and tiers to reduce costs. API Gateway incurs charges only for the API calls it receives and the data transferred out, and you can use Timestream scheduled queries and derived summary tables to reduce the amount of data returned in API responses.
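For example, a sketch of an S3 lifecycle rule that transitions aged prediction files to lower-cost storage classes; the bucket, prefix, and transition days are examples.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-dssat-farm-data",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-predictions",
                "Status": "Enabled",
                "Filter": {"Prefix": "predictions/"},
                "Transitions": [
                    {"Days": 90, "StorageClass": "STANDARD_IA"},
                    {"Days": 365, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```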
Sustainability
This Guidance uses managed services and, where possible, incorporates serverless technologies with dynamic scaling based on demand, thereby minimizing the environmental impact of the backend services.
Implementation Resources
A detailed guide is provided to experiment with and use within your AWS account. Each stage of building the Guidance, including deployment, usage, and cleanup, is examined to prepare it for deployment.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Related Content
[Title]
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.