Generate near real-time pricing offers for individual or packaged airline content
This Guidance demonstrates how airlines can use data analysis and artificial intelligence and machine learning (AI/ML) capabilities to dynamically adjust pricing and personalize product offerings for their customers. Organizations in the travel and hospitality industry are particularly keen to optimize their pricing strategies, and in a constantly shifting business climate, dynamic pricing helps them personalize offers, adjust prices, and package products using AI/ML. With dynamic pricing, airlines can increase sales and profit margins while maintaining customer satisfaction, and the AI/ML services in this Guidance can be trained to generate incremental revenue and prevent revenue dilution.
Architecture Diagram
Step 1
Booking and search data are ingested from booking engines and search systems using Amazon Kinesis Data Firehose.
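As a rough illustration of this step, the sketch below pushes individual booking or search events onto a Firehose delivery stream with boto3. The stream name "booking-events" and the event fields are hypothetical placeholders for your own ingestion setup.

```python
# Minimal sketch, assuming a Firehose delivery stream named "booking-events"
# (hypothetical) already exists and delivers to the Amazon S3 bucket used in later steps.
import json
import boto3

firehose = boto3.client("firehose")

def publish_event(event: dict) -> None:
    """Push a single booking or search event onto the ingestion stream."""
    firehose.put_record(
        DeliveryStreamName="booking-events",  # hypothetical stream name
        Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
    )

publish_event({
    "event_type": "booking",
    "origin": "LHR",
    "destination": "JFK",
    "fare_class": "Y",
    "price": 412.50,
    "timestamp": "2024-05-01T10:15:00Z",
})
```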
Step 2
Booking data is stored in Amazon Simple Storage Service (Amazon S3) and queried in batches with Amazon Athena, triggered by an AWS Lambda event, to determine booking rates for the current period. For low-latency requirements, Amazon Managed Service for Apache Flink runs queries on the ingestion stream.
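A minimal sketch of the scheduled Lambda for this step is shown below, assuming booking data already lands in S3 behind an Athena table. The database "pricing", the table "bookings", and the results bucket are hypothetical.

```python
# Minimal sketch of the Lambda that queries booking data with Athena.
# Database, table, and output locations are hypothetical placeholders.
import boto3

athena = boto3.client("athena")

def lambda_handler(event, context):
    """Compute hourly booking counts for the current period from data stored in S3."""
    response = athena.start_query_execution(
        QueryString="""
            SELECT date_trunc('hour', booking_time) AS period,
                   count(*) AS bookings
            FROM bookings                     -- hypothetical table over the S3 data
            WHERE booking_time >= current_timestamp - interval '1' day
            GROUP BY 1
        """,
        QueryExecutionContext={"Database": "pricing"},  # hypothetical database
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    return {"query_execution_id": response["QueryExecutionId"]}
```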
Step 3
Search data is stored in an Amazon S3 bucket. For low-latency requirements, Managed Service for Apache Flink runs rolling queries.
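For the low-latency path, a rolling aggregation could look like the PyFlink sketch below, which computes search counts per route over one-minute tumbling windows. The source table definition and its Kinesis connector options are assumptions about your stream configuration, not part of this Guidance.

```python
# Illustrative sketch only: a rolling (tumbling-window) aggregation over the search
# stream expressed with the PyFlink Table API.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE TABLE searches (
        origin STRING,
        destination STRING,
        search_time TIMESTAMP(3),
        WATERMARK FOR search_time AS search_time - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kinesis',              -- hypothetical connector options
        'stream' = 'search-events',
        'aws.region' = 'us-east-1',
        'format' = 'json'
    )
""")

# Search rate per origin/destination pair over one-minute windows.
t_env.execute_sql("""
    SELECT origin, destination,
           TUMBLE_START(search_time, INTERVAL '1' MINUTE) AS window_start,
           COUNT(*) AS searches
    FROM searches
    GROUP BY origin, destination, TUMBLE(search_time, INTERVAL '1' MINUTE)
""").print()
```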
Step 4
Historical booking data is used to train and build demand forecast models. These models are updated on a regular basis to help compare current booking demand with projected bookings.
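The Guidance does not prescribe a specific forecasting model; the sketch below is only a naive baseline showing how historical bookings could be turned into a daily demand forecast that later steps compare against observed demand. The column names are hypothetical.

```python
# Minimal illustration, not the Guidance's prescribed model: build a simple daily
# demand forecast from historical bookings (columns are hypothetical).
import pandas as pd

def forecast_daily_demand(history: pd.DataFrame, horizon_days: int = 7) -> pd.Series:
    """Forecast daily bookings as the trailing 28-day mean (naive baseline)."""
    daily = history.set_index("booking_date")["bookings"].asfreq("D", fill_value=0)
    baseline = daily.rolling(window=28, min_periods=7).mean().iloc[-1]
    index = pd.date_range(daily.index[-1] + pd.Timedelta(days=1),
                          periods=horizon_days, freq="D")
    return pd.Series(baseline, index=index, name="forecast_bookings")

# Comparing observed demand against this forecast feeds the environment state in Step 5.
```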
Step 5
Environment state variables, such as booking rate, search rate, total margin, available capacity, and booking forecast, are stored in Amazon DynamoDB as the current versioned environment state.
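A minimal sketch of how the versioned state write could look with boto3, assuming a hypothetical DynamoDB table named "environment-state" keyed by a product identifier and using a version attribute for optimistic locking.

```python
# Minimal sketch, assuming a hypothetical "environment-state" table keyed by product_id.
from decimal import Decimal
import boto3

table = boto3.resource("dynamodb").Table("environment-state")

def put_environment_state(product_id: str, state: dict, expected_version: int) -> None:
    """Write a new versioned environment state only if no newer version exists."""
    table.put_item(
        Item={
            "product_id": product_id,
            "version": expected_version + 1,
            "booking_rate": Decimal(str(state["booking_rate"])),
            "search_rate": Decimal(str(state["search_rate"])),
            "total_margin": Decimal(str(state["total_margin"])),
            "available_capacity": state["available_capacity"],
            "booking_forecast": Decimal(str(state["booking_forecast"])),
        },
        ConditionExpression="attribute_not_exists(version) OR version = :v",
        ExpressionAttributeValues={":v": expected_version},
    )
```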
Step 6
The pricing agent algorithm, run by Lambda, assesses the environment state and recommends an appropriate price adjustment. Adjustment recommendations update the DynamoDB pricing agent store.
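The sketch below shows how the pricing agent Lambda could read the environment state and write a recommendation. The threshold rule is a deliberately simple stand-in for whatever pricing policy or trained agent you use, and the table names are the hypothetical ones introduced above.

```python
# Minimal sketch of the pricing agent Lambda; the adjustment rule is illustrative only.
import boto3

dynamodb = boto3.resource("dynamodb")
state_table = dynamodb.Table("environment-state")  # hypothetical table names
agent_table = dynamodb.Table("pricing-agent")

def lambda_handler(event, context):
    product_id = event["product_id"]
    state = state_table.get_item(Key={"product_id": product_id})["Item"]

    # Recommend a price increase when demand runs ahead of the forecast,
    # a decrease when it lags, and no change otherwise.
    demand_ratio = float(state["booking_rate"]) / max(float(state["booking_forecast"]), 1.0)
    if demand_ratio > 1.1:
        adjustment_pct = 5
    elif demand_ratio < 0.9:
        adjustment_pct = -5
    else:
        adjustment_pct = 0

    agent_table.put_item(Item={
        "product_id": product_id,
        "state_version": state["version"],
        "recommended_adjustment_pct": adjustment_pct,
        "status": "PENDING_REVIEW",
    })
    return {"product_id": product_id, "recommended_adjustment_pct": adjustment_pct}
```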
Step 7
The trader evaluates the recommendation from the DynamoDB feed and approves or rejects it, using dashboards available in Amazon QuickSight.
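Recording the trader's decision could be as simple as the update below against the hypothetical pricing-agent table; in practice the approval would typically be driven from a dashboard action or an internal tool rather than a direct API call.

```python
# Minimal sketch of recording the trader's decision on a recommendation,
# assuming the hypothetical "pricing-agent" table from the previous step.
from datetime import datetime, timezone
import boto3

agent_table = boto3.resource("dynamodb").Table("pricing-agent")

def record_decision(product_id: str, approved: bool) -> None:
    agent_table.update_item(
        Key={"product_id": product_id},
        UpdateExpression="SET #s = :status, decided_at = :ts",
        ExpressionAttributeNames={"#s": "status"},  # "status" is a DynamoDB reserved word
        ExpressionAttributeValues={
            ":status": "APPROVED" if approved else "REJECTED",
            ":ts": datetime.now(timezone.utc).isoformat(),
        },
    )
```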
Step 8
Approved and timestamped price adjustments are available for the booking engine to search and use.
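Finally, a sketch of how the booking engine might look up the latest approved, timestamped adjustment, again assuming the hypothetical pricing-agent table.

```python
# Minimal sketch, assuming the hypothetical "pricing-agent" table keyed by product_id.
import boto3

agent_table = boto3.resource("dynamodb").Table("pricing-agent")

def approved_adjustment(product_id: str):
    """Return the approved price adjustment (with its decided_at timestamp), if any."""
    item = agent_table.get_item(Key={"product_id": product_id}).get("Item")
    if item and item.get("status") == "APPROVED":
        return item
    return None
```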
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
-
Operational Excellence
This Guidance can be scripted using an AWS CloudFormation template. You can then add the CloudFormation templates to your own development pipeline and deploy them in your cloud environment. Using Amazon CloudWatch, you gain observability through service-level metrics, personalized dashboards, and logs.
-
Security
All data is fully encrypted in transit and at rest, with least-privilege AWS Identity and Access Management (IAM) roles in place for all service interactions with the environment state and pricing agent environments. APIs are exposed only to authorized users and integrate with existing on-premises or cloud-hosted solutions through clearly defined network paths with no public internet exposure.
-
Reliability
The use of serverless services throughout this Guidance provides high availability across the deployed AWS Region. All components scale automatically, and account limits should be clearly defined for the supported product range.
-
Performance Efficiency
Serverless architectures help provision exactly the resources the workload needs. Strategies are in place for storage lifecycle management and for automatic capacity scaling that matches ingestion and read/write access patterns.
-
Cost Optimization
This Guidance is designed to be fully optimized for cost, using resources only where necessary and accessing data only through the services appropriate for the business need. Costs should align with the defined pricing goals and with clearly defined KPIs that balance batch against real-time requirements to deliver optimum value.
-
Sustainability
The extensive use of managed services and dynamic scaling minimizes the environmental impact of the backend services. Monitor the workload to ensure that assets such as data are stored in the most appropriate service for their read and write access patterns, and keep the scaling of compute resources closely aligned with booking demand.
Implementation Resources
A detailed guide is provided for you to experiment with and use within your AWS account. Each stage of the Guidance, from deployment through usage to cleanup, is examined to prepare you to deploy it in your own environment.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.