AWS for SAP

Extend RISE with SAP on AWS with Analytics Fabric for SAP Accelerators

This post was jointly authored by the following individuals from AWS:

Pavol Masarovic, EMEA Principal SAP Innovation Architect 

Allison Quinn, Senior Analytics Solutions Architect Australia and New Zealand

Peter Perbellini, Head of APJ Solutions Architecture, ERP on AWS

Introduction

At AWS, we commonly hear from customers that they are looking not only to move their SAP systems to the cloud, but also to transform the way they use data and analytics within their organization. Aligned to our Joint Reference Architecture (JRA), AWS has integrated native AWS services to bring additional value to customers who run RISE with SAP on AWS. The AWS and SAP BTP Joint Reference Architecture was developed to address common questions from joint customers and partners on how to use SAP BTP or AWS services for different business solution scenarios.

SAP customers are already leveraging data lakes on AWS to combine SAP and non-SAP data and take advantage of AWS’s industry-leading storage and analytics capabilities. Some, such as Bizzy, Invista, Zalando, Burberry, Visy, Delivery Hero, and Engie, have been doing this for a number of years. Customers are also seeking ways to build enterprise dashboards, ML models, and ‘what-if’ scenarios that combine SAP and non-SAP data (for example, leads from a CRM system, customer satisfaction scores, housing statistics, or weather data). Using a data-driven approach, all SAP on AWS customers can turn SAP data into actionable insights within hours by using the AWS Analytics Fabric for SAP solution, eliminating the burden of identifying AWS services and reducing build efforts by up to 90%.

With AWS Analytics Fabric for SAP, a user can now select functional areas such as products, customers, sales orders, deliveries, and invoices, and the solution intelligently identifies the required tables and relationships.

AWS Analytics Fabric for SAP delivers:

  • Generated data models that carry business context and offer business-friendly names
  • Near real-time data pipelines that are simple and adaptable, with comprehensive capabilities including change data capture (CDC), conversion rules, and automatic inclusion of custom fields
  • Data stored in a highly scalable AWS data lake on Amazon S3, using cost-effective storage tiers with S3 Intelligent-Tiering
  • High-quality, contextual data that lets users build reports and perform advanced analytics across SAP and non-SAP data at speed

What are the key benefits of using AWS Analytics Fabric for SAP?

AWS Analytics Fabric for SAP has been built to help accelerate RISE customer journeys and help customers derive additional value from AWS analytics and AI/ML services. It is built around the framework of a modern data architecture, allowing all SAP on AWS customers to leverage their data and scale to the other purpose-built services that their business use cases require. These accelerators can be used as a baseline for development, and can easily be modified and customized to fit your specific SAP implementation.

Data ingestion is based on standard-delivered SAP Business Content extractors. These can be modified to extract from SAP HANA CDS views or source tables, depending on your requirements. Ingestion can be scheduled as often as every minute and can be based on changed data only. This method ingests all data, including any customer modifications made to the extractors.
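As an illustration of how a single ingestion flow can be run and monitored once it has been created, the boto3 sketch below triggers an on-demand run and lists recent executions. The flow name is hypothetical; scheduled flows created by the accelerator run automatically at the interval you select.

```python
import boto3

appflow = boto3.client("appflow")

# Trigger an on-demand run of an existing flow ("sap-sales-order-header" is a
# hypothetical flow name used purely for illustration).
appflow.start_flow(flowName="sap-sales-order-header")

# Check recent executions, for example to confirm the incremental (changed-data)
# pulls succeeded.
records = appflow.describe_flow_execution_records(
    flowName="sap-sales-order-header", maxResults=5
)
for run in records["flowExecutions"]:
    print(run["executionId"], run["executionStatus"])
```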

The data is then loaded into Amazon Redshift, with AWS Step Functions orchestrating the loading and ensuring the appropriate ordering for referential integrity. Sample full data models that join the various SAP sources are also provided for Redshift, allowing them to be quickly consumed in Amazon QuickSight for immediate insights. Customers can easily enhance these pre-defined data models, or combine their SAP source data with other data available in Redshift or their data lake environment.
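As a rough sketch of the load step that Step Functions orchestrates, the snippet below issues a Redshift COPY command through the Redshift Data API. The cluster, database, schema, table, S3 prefix, IAM role, and file format are all hypothetical placeholders; the delivered scripts define the actual tables and load logic.

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Hypothetical names: adjust the cluster, database, schema, table, bucket, IAM role,
# and file format to match your own deployment.
copy_sql = """
    COPY sap_raw.sales_order_header
    FROM 's3://my-sap-data-lake/appflow/sales_order_header/'
    IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftCopyRole'
    FORMAT AS PARQUET;
"""

redshift_data.execute_statement(
    ClusterIdentifier="analytics-fabric-sap",
    Database="dev",
    DbUser="awsuser",
    Sql=copy_sql,
)
```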

Cloud Data Warehouse on AWS with SAP

The architecture diagram shows the steps for building a cloud data warehouse using SAP data on AWS, extracting data from SAP via the OData protocol.

The following prerequisites are explained in more detail in the next section.

  1. PRE-REQ – Install and activate the standard SAP Business Content extractors. Configure ODP for extraction in the SAP Gateway of your SAP system.
  2. PRE-REQ – Create the OData system connection from Amazon AppFlow to your SAP source system. This can be over AWS PrivateLink for SAP systems running on AWS or connected to AWS via VPN/AWS Direct Connect, or over the internet.
  3. PRE-REQ – Create an S3 bucket to store the data.
  4. In Amazon AppFlow, create the flow using the SAP connection created in step 2. Run the flow to extract data from SAP and save it to the S3 bucket.
  5. Create a data catalog entry with metadata for the extracted SAP data in your S3 bucket.
  6. Load the data into Amazon Redshift through ‘COPY’ commands. Model the data appropriately, enabling visibility into the historical movement of data.
  7. Create the dataset in Amazon QuickSight with Amazon Redshift as your data source. Create a dashboard to visualize the business data as per your requirements.
  8. Orchestrate the end-to-end process with AWS Step Functions (a minimal sketch follows this list).
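As a minimal sketch of step 8, assuming the orchestration state machine has already been deployed (its ARN and input shape here are hypothetical), an end-to-end run could be started like this:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Hypothetical state machine ARN deployed by the accelerator's templates.
state_machine_arn = (
    "arn:aws:states:us-east-1:111122223333:stateMachine:sap-order-to-cash-pipeline"
)

# Kick off one end-to-end run: extract via AppFlow, load into Redshift, refresh models.
sfn.start_execution(
    stateMachineArn=state_machine_arn,
    input=json.dumps({"flowName": "sap-sales-order-header"}),
)
```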

With the data extracted from your SAP source system, customers can immediately derive key performance indicators (KPIs) such as Number of Orders, Total Orders Value, Open Sales Orders Number & Value, Number of Returned Items, Average Order Value (AOV), Delivery In Full On Time (DIFOT), Invoiced Orders, Order Fulfilment Rate, and other KPIs relevant to Order to Cash. In addition, by leveraging full change data capture (CDC), customers can analyze each order’s journey, including all updates over time, such as order quantity changes and delivery time-frame changes.
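As an example of the kind of KPI query this enables, the hypothetical snippet below computes Average Order Value over the last 30 days. The schema, table, and column names are placeholders only; the delivered data models in the Git repo define the real structures, and the query could be run with the same Redshift Data API pattern shown earlier or directly from QuickSight.

```python
# Hypothetical model and column names for illustration.
AOV_LAST_30_DAYS = """
    SELECT SUM(net_value) / COUNT(DISTINCT sales_order_id) AS average_order_value
    FROM   sap_model.sales_order_header
    WHERE  order_date >= DATEADD(day, -30, CURRENT_DATE);
"""
```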

Prerequisites

There are three prerequisites to deploying AWS Analytics Fabric for SAP – you may already have these in place for your SAP workload.

  • Install the extractors listed in the Git repo in your SAP system. This may already be done if you have an SAP BW system in place. These extractors need to be exposed as an ODP service in transaction code SEGW.
  • Create the connection to your SAP system in Amazon AppFlow. This can be as simple as entering connection details and credentials, although you may also require an AWS PrivateLink connection between AWS and your SAP source system. This is a one-off step.
  • Create an S3 bucket to land your SAP source data.

Once these prerequisites are complete, the process is straightforward.
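For the Amazon AppFlow connection prerequisite, a minimal boto3 sketch might look like the following. The host, service path, client number, and credentials are placeholders for illustration only; the Git repo and the AppFlow console walk through the real configuration, and for SAP systems reached over AWS PrivateLink you would use a private connection mode.

```python
import boto3

appflow = boto3.client("appflow")

# Rough sketch of registering an SAP OData connection in Amazon AppFlow.
# All values below are placeholders; supply your own host, path, client, and credentials.
appflow.create_connector_profile(
    connectorProfileName="sap-s4hana-odata",
    connectorType="SAPOData",
    connectionMode="Private",
    connectorProfileConfig={
        "connectorProfileProperties": {
            "SAPOData": {
                "applicationHostUrl": "https://my-sap-host.example.com",
                "applicationServicePath": "/sap/opu/odata/sap/",
                "portNumber": 443,
                "clientNumber": "100",
            }
        },
        "connectorProfileCredentials": {
            "SAPOData": {
                "basicAuthCredentials": {"username": "EXTRACT_USER", "password": "****"}
            }
        },
    },
)
```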

Deployment

The deployment is automated using AWS CloudFormation templates and scripts, enabling you to quickly implement, modify, and expand as required. The templates have been parameterized to enable fast deployment. The initial release of AWS Analytics Fabric for SAP focuses on the “order to cash” process group, with plans for others to be released.
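As a sketch of how one of the parameterized templates might be deployed with boto3, the example below creates a stack; the stack name, template location, and parameter names are hypothetical, and the repo documents the real ones.

```python
import boto3

cfn = boto3.client("cloudformation")

# Hypothetical stack, template location, and parameter names for illustration.
cfn.create_stack(
    StackName="analytics-fabric-sap-appflow",
    TemplateURL="https://my-bucket.s3.amazonaws.com/templates/appflow-flows.yaml",
    Parameters=[
        {"ParameterKey": "SAPConnectorProfileName", "ParameterValue": "sap-s4hana-odata"},
        {"ParameterKey": "TargetBucketName", "ParameterValue": "my-sap-data-lake"},
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
```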

Implement the various components in the order defined in the Git repo instructions, where you will also find additional help references.

The deployment steps are:

  1. Deploy the Amazon AppFlow flows to extract data from your SAP system. This will automatically activate them and start ingesting data based on the selected schedule.
  2. Deploy the AWS Glue Data Catalog entries (the technical metadata catalog). This enables immediate consumption of the data in services such as Amazon Athena and others (see the query sketch after this list).
  3. Create your Redshift cluster – the Git repo contains Redshift scripts. First run the DDL (Data Definition Language) scripts, which create the table definitions for the data to load into. Then execute the other scripts, which create the data pipeline components and CDC (Change Data Capture) pipeline management. All pre-defined scripts can be tailored, modified, or extended, for example if you have customizations in your SAP source system.
  4. Deploy the Step Functions state machines for overall orchestration and alerting of your data pipelines and the end-to-end process.
  5. Create your Amazon QuickSight account (if not already active), connect it to your Amazon Redshift cluster, then deploy data sources based on the modelled data already available within Amazon Redshift.
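To quickly validate that the Glue Data Catalog entries from step 2 are queryable, a minimal Athena sketch might look like this; the database, table, and output location are placeholders.

```python
import boto3

athena = boto3.client("athena")

# Quick check that the cataloged SAP data can be queried in place on S3.
# Database, table, and output location are hypothetical placeholders.
athena.start_query_execution(
    QueryString="SELECT * FROM sap_raw.sales_order_header LIMIT 10;",
    QueryExecutionContext={"Database": "sap_raw"},
    ResultConfiguration={"OutputLocation": "s3://my-sap-data-lake/athena-results/"},
)
```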

In the visualization example above, built in Amazon QuickSight from the data sets delivered by the CloudFormation templates, you can visualize various order-to-cash KPIs, with ML-powered insights adding forecasting and anomaly detection for SAP orders.

Summary

AWS Analytics Fabric for SAP provides an easy and efficient way to empower business users to turn SAP data into actionable insights quickly. Customers running SAP workloads on AWS can start using these accelerators within the AWS Management Console. Customers still running SAP systems on premises can also feed SAP data directly to Amazon S3 and start creating powerful data lakes on AWS to get more value from their SAP investments and enterprise data.

With no upfront costs to use the accelerators, you simply pay for the AWS services that you consume. As an example, Amazon AppFlow flow execution costs just $0.001 per flow run. If you execute the Sales Order Header ingestion every 5 minutes, 24 hours a day, you would have 288 runs per day, at a cost of $8.76 per month [288 runs per day × (730 hours in a month ÷ 24 hours in a day) = 8,760 runs per month × $0.001 = $8.76]. You also pay for the amount of data processed: $0.02 per GB up to 1 GB, and $0.10 per GB over 1 GB. By executing incrementally, you minimize the data processing costs. The AWS Glue Data Catalog is free for the first million objects stored, as well as the first million requests per month. Amazon Redshift Serverless pricing starts from $0.36 per RPU (Redshift Processing Unit) hour. For more information on AWS pricing, visit our calculator. (Note: all pricing in this example is based on the us-east-1 Region.)
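For reference, the AppFlow execution cost arithmetic above can be reproduced as follows:

```python
# Reproducing the AppFlow cost example above (us-east-1 pricing at the time of writing).
runs_per_day = 24 * 60 // 5                      # one flow run every 5 minutes = 288 runs/day
days_per_month = 730 / 24                        # 730 hours in a month / 24 hours in a day
runs_per_month = runs_per_day * days_per_month   # = 8,760 runs per month
monthly_flow_cost = runs_per_month * 0.001       # $0.001 per flow run
print(f"{runs_per_month:.0f} runs -> ${monthly_flow_cost:.2f} per month")  # 8760 runs -> $8.76
```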

To get started, visit the AWS Analytics Fabric for SAP repository. To learn why AWS is the platform of choice and innovation for thousands of active SAP customers, visit the AWS for SAP page.