AWS for SAP

Supercharge your SAP BTP Apps with AWS Services and SAP PrivateLink

Innovation and agility are key to staying ahead in the digital landscape, so businesses are increasingly looking to leverage the power of cloud services. However, a significant hurdle that many organizations face is the risk associated with transmitting sensitive data over the public internet. Recognizing this challenge, this blog explores how SAP Private Link provides a secure, private pathway for utilizing AWS native services within the SAP Business Technology Platform (BTP), facilitating innovation in a more secure and controlled network environment.

Overview
Private connectivity between AWS and SAP BTP offers several advantages, especially regarding security. Integrating with SAP Private Link augments existing AWS capabilities with enhanced security: AWS AI services can be used with improved data privacy, applications gain resilient storage with secure access, messaging and notification handling runs within a controlled network environment, third-party solutions integrate through secured queues, and external databases can be accessed without exposing data to the public internet. Our practical code examples further highlight the organizational benefits, promoting optimized operations. The service launched on September 7th, 2023; for more details and general information, see “How to connect SAP BTP Services with AWS Services using SAP Private Link Service“.

The architecture depicted in Image 1 illustrates, at a high level, how components are integrated to augment the capabilities of SAP BTP with AWS services such as Amazon Simple Storage Service (S3), with many additional services supported to date and more integrations anticipated in the future. Popular use cases include document management, text and voice recognition, preventive maintenance using IoT and AI/ML technologies, and event management. The sample code provides a working prototype for a basic SAPUI5 application that reads Amazon S3 buckets and lists them in the UI. The focus of this example is to provide the foundation for the integration between SAP BTP and AWS services.

High Level Architecture:

Image 1: generic SAP BTP and AWS integration via SAP Private Link.

The key components in the architecture above are as follows:

  • Corporate Identity Provider (IdP): Optional; most organizations use a corporate identity provider to authenticate their users to SAP BTP.
  • SAP Identity Authentication Service (IAS): SAP’s cloud Identity Provider. In this context it’s used to federate the corporate Identity Provider to SAP BTP.
  • UI5 App: Application used by the user to perform business functions. This is the frontend component, interacting with the backend (CAP App), where the business logic is performed.
  • CAP App: CAP stands for Cloud Application Programming Model, SAP’s recommended framework for building BTP applications (Node.js is used in this sample). This is the core component that orchestrates the operations between BTP and AWS components.
  • SAP BTP Credential Store: BTP service that allows secure management of credentials and integrates with SAP BTP services (such as CAP applications in this case). In our scenario, it stores the IAM access key and secret used to request short-term credentials from AWS STS for interacting with the AWS services.
  • Config: Generic component in the architecture above. This component is where configuration required by our application to interact with AWS is stored. This could be a DB table, User-provided instance, an MTA Extension file or Environment Variables, to mention a few. In our example, we’re using the User-provided instance to store the AWS Region and S3 endpoint.
  • SAP Private Link: Service that interacts with AWS PrivateLink and creates the required sub-components (Elastic Network Interface, VPC Endpoint) for the connection to work. This service instance defines a tunnel between BTP and AWS for the private connection, ultimately represented by a URL that applications use to communicate from BTP to AWS.

AWS Services:

These are the services that SAP BTP will utilize. See SAP documentation for the list of supported services. In our scenario, the application will access an S3 Bucket.

  • Identity and Access Management (IAM) and AWS Security Token Service (STS) are integral components of AWS’s security architecture, but they address distinct aspects of access and authentication.
    • IAM is designed to centrally manage fine-grained permissions for AWS services and resources. With IAM, administrators can define who or what (e.g., users, applications) has permission to access specific AWS resources, and to what extent. It provides mechanisms like IAM user access keys that offer durable credentials for consistent access needs.
    • On the other hand, STS does not directly manage resource access. Instead, it furnishes temporary, limited-privilege credentials. These credentials derive their permissions from predefined IAM roles or users. It’s especially advantageous in scenarios demanding short-lived access, or when you need to grant permissions without permanently handing over AWS credentials. A prime example would be allowing a third-party application, such as an SAP BTP app, temporary access to certain AWS resources.

In essence, while IAM focuses on defining permissions and access, STS concentrates on delivering temporary authentication credentials based on those permissions.

The application flow from the diagram above is as follows:

  1. User enters the BTP App URL.
  2. BTP delegates authentication to the Corporate Identity Provider via IAS; the user is redirected to enter credentials and MFA, and a token is returned to BTP.
  3. User accesses the UI5 App.
  4. UI5 App requires access to an AWS Service (Amazon S3, Amazon SNS, Amazon SQS, AWS Lambda, others).
  5. CAP App retrieves key and secret from BTP Credential Store.
  6. Service configuration is retrieved by the app. Configuration can be defined on the MTA extension file (on deployment), environment variables, User-provided service or application DB.
  7. The CAP App invokes the AWS STS’s AssumeRole API call, using the IAM key and secret, to request temporary credentials.
  8. CAP App uses key, secret and session token to access the AWS Service(s).

Example Business Scenario – Document Management

A popular business scenario is an SAP BTP custom application running in SAP Cloud Foundry that reads or writes data to Amazon S3 for document or file storage. This enables resilient and cost-effective document management in Amazon S3 buckets, where documents can be accessed by multiple applications (in both SAP BTP and AWS).

Basic Scenario: In this case, users interact with the BTP application, where they need to upload files (e.g. document attachments for a financial transaction, floor plans for inspections, photos for asset maintenance). The BTP application processes the files and uploads them to Amazon S3, keeping a record in the application database of where each file is located, which is subsequently used to retrieve the file.

Advanced Scenario: This scenario can be further extended by using an AWS Lambda function as the integration point between BTP and AWS. The BTP app sends Lambda the document that needs to be stored, along with metadata for the file (e.g. document number, location, attributes). The Lambda function then stores the file in Amazon S3, and the metadata, together with a reference to the S3 object, in Amazon DynamoDB. To retrieve an attachment, BTP sends the reference to the required file (e.g. its ID) and Lambda returns the file and its associated metadata. This approach provides the flexibility to reuse the Lambda function across multiple applications, including third-party and AWS native applications.

Pre-Reqs: Services / Components Required

SAP BTP:

  1. BTP Subaccount (Trial or commercial): Set up a BTP subaccount, which can be either a trial or commercial version, according to the SAP Documentation.
  2. Private Link Entitlement (Set Up Private Link): Obtain the Private Link entitlement and configure it as directed in the ‘Set Up Private Link’ section of the SAP Documentation.
  3. Cloud Foundry Runtime Entitlement: Secure your entitlement for the Cloud Foundry runtime, using guidance from the SAP Documentation.
  4. Cloud Foundry Space: Establish a dedicated Cloud Foundry space for your resources and applications, following procedures in the SAP Documentation.
  5. Credential Store Entitlement: Get the entitlement for the credential store to securely manage authentication details, based on the SAP Documentation’s recommendations.
  6. Business Application Studio: Set up the Business Application Studio. Alternatively, you can use Visual Studio or another compatible IDE based on your preference.

AWS:

  1. AWS Account: Ensure you have an active AWS account to leverage the required services.
  2. S3 Bucket: Designate an S3 bucket, for storing data objects.
  3. IAM Role: Create an Identity and Access Management (IAM) role that dictates permissions for actions and resources in AWS.
  4. IAM User with Secret & Key: Set up an IAM user, ensuring it has the requisite secret and key for authentication.

Services / Components Setup:

  1. AWS – Create an S3 Bucket:
    • Create an S3 bucket using the ‘Creating a bucket’ guide in the AWS documentation.
    • Establish an access point via the ‘Access Points’ section under Amazon S3.
  2. AWS – Create IAM User:
    • Create an IAM user by referring to the ‘Creating IAM Users (console) – Adding a User’ section in AWS IAM.
    • Manage and obtain the required access keys by following the ‘Managing access keys (console) – Getting Access Keys’ guide in the AWS documentation.

AWS – Create IAM Role

  1. Follow the steps in the documentation for “Creating an IAM role using a custom trust policy” to create an IAM Role and attach a custom trust policy.
  2. Sample trust policy for our example, for account “123456789012” and user “youruser” (the role is assumed by the IAM user, so that user is the only principal required):
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::123456789012:user/youruser"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    
  3. Attach a permissions policy to allow reading S3 buckets, or any other actions you are interested in.
    1. Click on your Role → Permissions → Add permissions
    2. An example permissions policy is shown below; adjust it according to your requirements.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::specific-bucket-name",
                "arn:aws:s3:::specific-bucket-name/*"
            ]
        }
    ]
}

BTP – Private Link service instance (source SAP-samples)

To get started, create an SAP Private Link service instance by running the following command:

# Adjust the region in the service name if using a different region

cf create-service privatelink my-privatelink -c '{"serviceName": "com.amazonaws.us-east-1.s3"}'

Note: This command should be executed in the Cloud Foundry CLI. If you haven’t already, you will need to download the Cloud Foundry CLI to proceed.

Alternatively, if you want to use the BTP Cockpit to create a Private Link service instance, click on the sub-account and then click on “Instances and subscriptions”.

Provide the “serviceName” and click “Create”.

BTP – Create a user-provided service

Create a user-provided service to define the Region and / or any additional configuration where the buckets are stored:

# adapt the properties according to your setup

cf cups my-service-config -p '{"region": "us-east-1"}'

In the example above, the region is defined. This value is later retrieved in the CAP Application.

Using a user-provided service is one way to define these parameters. As described in step 6 of the high-level architecture, these values can be stored in a number of different places.

BTP – Setup Credential Store

  1. Initial setup (SAP Help): Begin with the initial setup by configuring the necessary parameters as detailed in the SAP Documentation.
  2. Create Credential Store (SAP Help): Set up a credential store to manage and securely store authentication details, as guided by the SAP Documentation.
  3. Create Namespace (SAP Help): Define a namespace to segregate and organize resources, following the procedures in the SAP Documentation.
  4. Create Password using AWS Secret and Key (SAP Help): Generate a password by leveraging the AWS Secret and Key, adhering to the methods outlined in the SAP Documentation.

Step by Step – Code Sample

Open the Business Application Studio within your SAP BTP subaccount and create a standard Node.js CAP application. Note: for creating a sample app, you may also utilize a productivity tool. For further assistance, please refer to SAP support.

The index.html page shown below presents a basic web application interface. The loadBuckets() function is triggered when the “Load Buckets” button is clicked. Upon activation, this function sends a GET request to the /catalog/Buckets API endpoint to fetch data related to buckets. Once the data is retrieved, it dynamically generates and displays folders for each bucket within a designated container on the page. Each folder visually represents a bucket, showcasing its ID and creation date.

<!DOCTYPE HTML>
<html>

<head>
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <meta http-equiv="Content-Type" content="text/html;charset=UTF-8" />
    <title>App</title>
</head>

<body>
    <h1>Business Application</h1>
    <h1>Buckets</h1>
    <button class="button" onclick="location.href='/catalog/'">Catalog</button>
    <br />
    <button class="button" onclick="location.href='/catalog/$metadata'">Metadata</button>
    <br />
    <button class="button" onclick="loadBuckets()">Load Buckets</button>
    <div id="folder-container"></div>
    <script>
      function loadBuckets() {
        // Make a GET request to the API
        fetch('/catalog/Buckets')
          .then(response => response.json())
          .then(data => {
            // Create a folder for each bucket in the data
            const folderContainer = document.getElementById('folder-container');
            data.value.forEach(bucket => {
              const folder = document.createElement('div');
              folder.classList.add('folder');
              folder.innerHTML = `
                <i class="fa fa-folder"></i>
                <p>${bucket.id}</p>
                <small>${bucket.created}</small>
              `;
              folderContainer.appendChild(folder);
            });
          })
          .catch(error => console.error(error));
      }
    </script>

</body>

</html>

The srv/catalog-service.cds file shown below defines the service interface of the CAP application. Within it, there’s an entity named “Buckets” which defines a data structure with the fields id, name, and created. The data for this entity is provided by the accompanying catalog-service.js script.

The service, CatalogService, is mapped to the path /catalog and exposes a function getBuckets() that returns an array of the “Buckets” entity. This setup establishes the data model and the service endpoint for the CAP application, allowing retrieval of bucket-related data.

service CatalogService @(path : '/catalog') {

  entity Buckets {
    key id: String;
    name: String;
    created: DateTime;
  }

  function getBuckets() returns array of Buckets;

}

The srv/catalog-service.js file forms the backend logic for fetching and managing AWS bucket information. It accompanies the srv/catalog-service.cds data model. This JavaScript code relies on various AWS SDK imports to interact with AWS services, mainly the S3 bucket service.

The primary function, getBuckets(), carries out several tasks:

  1. Configuration Gathering: It first retrieves the necessary configurations via getConfig(). This configuration includes the hostname of the private-link service, region details specified by the user, and long-term access keys from the credential store.
  2. STS Credentials: Leveraging these configurations, it then acquires temporary security credentials from AWS’s Security Token Service (STS) using the getSTSCredentials() function.
  3. S3 Client Initialization: With the temporary credentials in hand, an S3 client is initialized using the getS3Client() function. This client uses the private-link host endpoint, ensuring the connection goes through AWS’s private network.
  4. Bucket Listing: Finally, the code invokes the ListBucketsCommand to retrieve a list of all S3 buckets. If successful, it logs the buckets to the console and returns them; otherwise, it captures and logs any errors encountered.

The code integrates multiple functionalities – AWS’s STS for secure, short-lived credentials, and S3 services for bucket operations, all while relying on configuration settings sourced both from internal services and user-provided data.

import { ListBucketsCommand, S3Client } from "@aws-sdk/client-s3";
import cfenv from 'cfenv';
import { readCredential } from './lib/credStore.js';
import AWS from 'aws-sdk';
import Config from './config.js';

export default async function() {

  this.on('READ', 'Buckets', async (req) => {
    const buckets = await getBuckets();
    return buckets;
  });

const getBuckets = async () => {
  const config = await getConfig();

  // Exchange the long-term keys for temporary STS credentials
  const STScredentials = await getSTSCredentials(config, 'arn:aws:iam::6205:role/S3Read', 'MySession');

  // Create the S3 client bound to the Private Link endpoint
  const s3Client = getS3Client(config, STScredentials);

  try {
    const { Owner, Buckets } = await s3Client.send(new ListBucketsCommand({}));
    console.log(`${Owner.DisplayName} owns ${Buckets.length} bucket${Buckets.length === 1 ? "" : "s"}:`);
    console.log(`${Buckets.map((b) => ` • ${b.Name}`).join("\n")}`);
    // Map the AWS response to the Buckets entity defined in the CDS model
    return Buckets.map((b) => ({ id: b.Name, name: b.Name, created: b.CreationDate }));
  } catch (err) {
    console.error(err);
    return [];
  }
};


async function getConfig() {
    const config = new Config();
    
    // Private-link
    const myPrivatelinkCreds = await getCfEnv().getServiceCreds('my-privatelink');
    const host = myPrivatelinkCreds.hostname.replace(/^\*\./, '');
    config.setEndpointHostname(host);

    // Getting region defined in user-provided, Can be used for more params as needed 
    const s3ServiceCredentials = await getCfEnv().getServiceCreds('my-service-config');
    config.setRegion(s3ServiceCredentials.region);
    
    //Credential Store
    const credaccessKeyId = await readCredential('app', 'password', 'accessKeyId');
    const credsecretAccessKey = await readCredential('app', 'password', 'secretAccessKey');
    config.setAccessKeyId(credaccessKeyId.value);
    config.setSecretAccessKey(credsecretAccessKey.value);

    return config;
  }

  function getSTSCredentials(config, roleArn, roleSessionName) {
    // Set the AWS region and credentials using long-term access and secret keys
    AWS.config.update({
      region: config.getRegion(),
      accessKeyId: config.getAccessKeyId(),
      secretAccessKey: config.getSecretAccessKey()
    });
  
    const sts = new AWS.STS();
  
    const assumeRoleParams = {
      RoleArn: roleArn,
      RoleSessionName: roleSessionName
    };
  
    return new Promise((resolve, reject) => {
      sts.assumeRole(assumeRoleParams, (err, data) => {
        if (err) {
          reject(err);
        } else {
          const credentials = data.Credentials;
          resolve({
            accessKeyId: credentials.AccessKeyId,
            secretAccessKey: credentials.SecretAccessKey,
            sessionToken: credentials.SessionToken
          });
        }
      });
    });
  }
  
  
  function getS3Client(config, credentials) {
    // Create an S3 client (AWS SDK v3) that uses the temporary STS
    // credentials and sends requests through the Private Link endpoint
    return new S3Client({
      region: config.getRegion(),
      endpoint: `https://bucket.${config.getEndpointHostname()}`,
      credentials: {
        accessKeyId: credentials.accessKeyId,
        secretAccessKey: credentials.secretAccessKey,
        sessionToken: credentials.sessionToken
      }
    });
  }

  
  function getCfEnv() {
    return cfenv.getAppEnv();
  }

}

Summary

SAP Private Link for AWS provides private connectivity between SAP BTP and AWS services. The steps and code snippets included in this blog provide the foundation for implementing this architecture, using SAP Private Link together with secure authentication based on temporary credentials, and leveraging the business platform provided by SAP BTP and the innovation services provided by AWS. You can now use the code examples in this blog to connect your BTP application to AWS securely.

You can learn more about SAP Private Link for AWS by following the guide on how to Consume Amazon Web Services in SAP BTP.