SAP ASE high availability design with SAP Fault Manager and AWS Network Load Balancer
Introduction:
As organizations increasingly prioritize business continuity and minimal downtime, many are reassessing their high availability (HA) configurations to find the right balance of redundancy and cost-effectiveness. For SAP ASE databases, the available HA options are outlined in SAP Note #1650511 – SYB: High Availability Offerings with SAP Adaptive Server Enterprise (SAP S-User required).
When SAP ASE is deployed with SAP NetWeaver, HADR, and SAP Fault Manager, the SAP application is inherently high-availability aware and automatically routes traffic to the primary database without requiring a virtual IP. For non-NetWeaver workloads, or to give third-party applications uninterrupted access to the database, a virtual IP is required.
In this blog post, we demonstrate how to achieve high availability for SAP ASE by implementing synchronous data replication with SAP Replication Server (hot standby), combined with an AWS Network Load Balancer (NLB) for database traffic routing, while embedding SAP Fault Manager into the SAP pacemaker cluster to ensure automated and reliable system recovery.
Prerequisites:
Before we dive in, make sure you have:
- SAP NetWeaver deployed across multiple Availability Zones (AZs) with a pacemaker cluster
- SAP ASE installed with HADR enabled
- SAP Fault Manager 182 or higher
For details on how to deploy these prerequisites, review the following documentation:
- AWS Documentation: SAP NetWeaver on AWS: high availability configuration for SUSE Linux Enterprise Server (SLES) for SAP applications
- AWS Documentation: SAP NetWeaver on AWS: high availability configuration for Red Hat Enterprise Linux (RHEL) for SAP applications
- SAP Documentation: Installing HADR for Business Suite
Architecture:
This AWS architecture implements high availability for SAP through multi-AZ deployment. On the application layer, the pacemaker cluster manages SAP ASCS and ERS across two availability zones, with SAP Fault Manager embedded in the ASCS to handle system monitoring and failover operations.
The database tier features SAP ASE configured with synchronous data replication between primary and secondary instances across AZs. An AWS Network Load Balancer directs traffic to the primary database instance based on health checks running on a custom port.
An Amazon EFS file system serves the /sapmnt and /usr/sap/trans directories via NFS mounts. This design creates redundancy at each tier – application, database, and storage – while maintaining data consistency through synchronous replication mechanisms.
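For illustration, the EFS mounts on each node could be defined in /etc/fstab as shown below. This is only a sketch: the file system ID, Region, and EFS directory layout are assumptions, and you should use the mount options recommended in the Amazon EFS documentation for your setup.

fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/sapmnt /sapmnt nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 0 0
fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/trans /usr/sap/trans nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 0 0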
Implementation:
We'll cover the implementation of this solution in the following segments.
SAP Fault Manager installation:
The SAP Fault Manager can either be integrated into the SAP ASCS or run as a standalone instance. In this blog post, we will cover integrating the SAP Fault Manager into the ASCS.
Install SAP Fault Manager on the ASCS instance by executing the commands below as the <sid>adm user:
# cd /usr/sap/<SID>/ASCS<SYS_NUM>/work
# sybdbfm install
Note: Provide the parameters relevant to your SAP installation. For example, the value for "fault manager host" should be the virtual hostname of your ASCS installation.
Update the LD_LIBRARY_PATH variable on both hosts:
# vi .sapenv_<hostname>.csh (or .sh depending on your setup)
setenv LD_LIBRARY_PATH ${_DEF_EXE2}:${_DEF_EXE1}:/sapmnt/<SID>/global/syb/linuxx86_64/sybodbc
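If your <sid>adm user runs a Bourne-style shell instead, the equivalent entry in .sapenv_<hostname>.sh would be (a sketch, assuming the same path layout):

LD_LIBRARY_PATH=${_DEF_EXE2}:${_DEF_EXE1}:/sapmnt/<SID>/global/syb/linuxx86_64/sybodbc
export LD_LIBRARY_PATH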
Validate the status of SAP Fault Manager by executing the commands below:
# cd /usr/sap/<SID>/ASCS<SYS_NUM>/work
# sybdbfm status
Set the cluster in maintenance mode and restart the application stack (including ASCS and ERS).
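A minimal sketch of these steps, assuming a SLES cluster managed with crmsh (on RHEL, use pcs property set maintenance-mode=true) and sapcontrol executed as <sid>adm:

# crm configure property maintenance-mode=true
# sapcontrol -nr <SYS_NUM> -function RestartSystem ALL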
Verify that sapcontrol sees the SAP Fault Manager process:
# sapcontrol -nr <SYS_NUM> -function GetProcessList

22.09.2025 17:10:42
GetProcessList
OK
name, description, dispstatus, textstatus, starttime, elapsedtime, pid
msg_server, MessageServer, GREEN, Running, 2025 09 20 22:08:59, 43:01:43, 4342
enserver, EnqueueServer, GREEN, Running, 2025 09 20 22:08:59, 43:01:43, 4343
sybdbfm, SYB FaultManager, GREEN, Running, 2025 09 20 22:08:59, 43:01:43, 4345
Pacemaker cluster configuration:
Update the pacemaker cluster configuration to monitor the SAP Fault Manager resource:
primitive rsc_sap_<SID>_ASCS<SYS_NUM> SAPInstance \
  params InstanceName=<SID>_ASCS<SYS_NUM>_<hostname> \
    START_PROFILE="/usr/sap/<SID>/SYS/profile/<SID>_ASCS<SYS_NUM>_<hostname>" \
    AUTOMATIC_RECOVER=false \
    MONITOR_SERVICES="sybdbfm|msg_server|enserver" \
  meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10 \
  operations $id=rsc_sap_<SID>_ASCS<SYS_NUM>-operations \
  op monitor interval=11s timeout=60s on-fail=restart \
  op stop interval=0s timeout=240s \
  op start timeout=180s interval=0s \
  op promote timeout=320s interval=0s \
  op demote timeout=320s interval=0s
Note: Configuration parameters of your pacemaker cluster may differ depending on the architecture used (classical or simple-mount) or operating system. Make sure to adapt according to your operational requirements.
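As an example, on RHEL with pcs the monitored services could be updated on an existing ASCS resource as follows (a sketch; the resource name is an assumption based on the naming used above):

# pcs resource update rsc_sap_<SID>_ASCS<SYS_NUM> MONITOR_SERVICES="sybdbfm|msg_server|enserver"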
AWS Load Balancer setup:
Create an AWS Network Load Balancer according to the section "Details for setting up AWS Network Load Balancer (NLB)" of SAP Note #3638446 – SAP on AWS: SYB Fault Manager Support for AWS Network Load Balancer (NLB). The load balancer should use an overwrite port for its health check, which is started by the SAP ASE database. During the initial setup, start the port manually by executing the following command on the SAP ASE primary database:
# isql -S<DBSID> -U<USER> -P<PASSWORD> -X
# sp_listener 'start', 'tcp:<primary_db_host>:<custom_port>'
# go
Verify that the port was started by reviewing the ASE database log:
# cd ~/ASE-<VERSION>/install
# grep -i listen <DBSID>.log
Sample output:
00:0007:00000:00015:2025/09/14 22:04:48.03 kernel Listener with protocol tcp, host <hostname>, port <custom_port> started.
After approximately 15 seconds (this may vary based on your health check configuration thresholds), the load balancer status for the primary instance should show as healthy in the AWS console.
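The target group setup and health can also be scripted and verified with the AWS CLI. The sketch below is illustrative only; the target group name, ports, and IDs are assumptions, and SAP Note #3638446 remains the authoritative reference:

# aws elbv2 create-target-group --name sap-ase-hadr-tg --protocol TCP --port <db_port> --vpc-id <vpc_id> --target-type ip --health-check-protocol TCP --health-check-port <custom_port>
# aws elbv2 register-targets --target-group-arn <tg_arn> --targets Id=<primary_db_ip> Id=<secondary_db_ip>
# aws elbv2 create-listener --load-balancer-arn <nlb_arn> --protocol TCP --port <db_port> --default-actions Type=forward,TargetGroupArn=<tg_arn>
# aws elbv2 describe-target-health --target-group-arn <tg_arn>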
Note: We recommend creating a CNAME entry that maps to the AWS Load Balancer DNS address and referencing the CNAME in your SAP configuration. This avoids errors in SAP DBACOCKPIT, as SAP NetWeaver limits the number of characters allowed in DBCON. For more details, refer to SAP Note #3290702 – Connection parameter Database Server(Instance) has invalid value – NetWeaver.
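If your DNS zone is hosted in Amazon Route 53, the CNAME can be created with a single CLI call (a sketch; the hosted zone ID and TTL are assumptions):

# aws route53 change-resource-record-sets --hosted-zone-id <zone_id> --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"<cname_address>","Type":"CNAME","TTL":60,"ResourceRecords":[{"Value":"<nlb_dns_name>"}]}}]}'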
SAP system integration with AWS Load Balancer:
Now that the AWS Load Balancer has been created and is in a healthy state for the SAP ASE primary database, we can configure the SAP system to use the AWS Load Balancer.
SAP Fault Manager:
Alter the SAP Fault Manager parameters in the SYBHA.PFL profile according to SAP Note #3638446 – SAP on AWS: SYB Fault Manager Support for AWS Network Load Balancer (NLB).
# Parameter for OIP Management:
ha/syb/support_floating_ip = 1
ha/syb/ip_managed_by_lb2 = 1
ha/syb/lb_port = <custom_port>
ha/syb/vdbhost = <cname_address>
SAP NetWeaver:
Alter the SAP profile parameters in the default profile (DEFAULT.PFL) to point to the SAP ASE database using the CNAME that points to the AWS NLB DNS address. If you are running SAP NetWeaver JAVA, make sure to update configtool with the CNAME address as well.
SAPDBHOST = <cname_address>
dbs/syb/server = <cname_address>
j2ee/dbhost = <cname_address>
Adjust the database host environment variable on each application server and verify its value:
# env | grep dbs_syb_server
dbs_syb_server=<cname_address>
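If the value needs updating, it is typically maintained in the database environment file of the <sid>adm user; a sketch for the csh variant (use export dbs_syb_server=<cname_address> in .sh profiles):

# vi .dbenv_<hostname>.csh
setenv dbs_syb_server <cname_address>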
Adjust the database connection property SYBASE_SERVER in transaction DBCO to point to the CNAME address mapped to the AWS NLB DNS entry. Restart the application stack (SAP ASCS/Fault Manager and all application servers), and then take the pacemaker cluster out of maintenance mode.
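A minimal sketch for the last step, again assuming crmsh on SLES (on RHEL: pcs property set maintenance-mode=false):

# crm configure property maintenance-mode=false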
Validating database connectivity:
Perform the following checks to ensure that the SAP system connects through the AWS NLB as expected:
- Run the command R3trans -d and confirm that the address used is the CNAME address (see the sketch after this list)
- Fail over the database by shutting down the EC2 instance running the primary database, and monitor the traffic being routed to the new primary (ex-secondary) database server.
Note: During the first failover, validate that SAP ASE has started the custom monitoring port (<custom_port>) on the new primary (ex-secondary) database server.
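The first check can be run as <sid>adm on any application server; R3trans -d writes its trace to trans.log in the current directory:

# R3trans -d
# grep -i "<cname_address>" trans.log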
Conclusion:
In this blog post, we presented a reference architecture for SAP ASE HADR with AWS Network Load Balancer and walked through its implementation. This solution gives customers a single access point to their database through the AWS NLB, routing traffic to the primary (active) database in the HADR setup based on health checks against a custom port managed by SAP Fault Manager.
Key points to remember:
- During your first failover, verify that SAP ASE starts the custom monitoring port on the new primary database server
- Using CNAME entries mapped to the AWS Load Balancer DNS address helps avoid issues with SAP DBACOCKPIT
- Keep SAP Note #3638446 handy for AWS Network Load Balancer configuration details
Ready to make your SAP workloads more resilient? Thousands of customers trust AWS to run their mission-critical SAP systems. Visit our SAP on AWS page to learn more about what’s possible for your organization.

