AWS Database Blog

Automate pre-checks for your Amazon RDS for MySQL major version upgrade

Amazon Relational Database Service (Amazon RDS) for MySQL currently supports multiple Community MySQL major versions, including 5.7, 8.0, and 8.4, each offering different features and bug fixes. Upgrading from one major version to another requires careful consideration and planning. For a complete list of compatible major versions, see Supported MySQL major versions on Amazon RDS.

Amazon has announced long-term support for MySQL major version 8.4 on RDS for MySQL. In this post, we show you a solution that uses Amazon Elastic Compute Cloud (Amazon EC2) to run the MySQL Shell upgrade checker utility on Amazon RDS for MySQL instances within the same virtual private cloud (VPC).

Solution overview

The MySQL Shell upgrade checker utility is a tool developed by the MySQL community to determine whether your MySQL instances are ready to be upgraded. It runs automated checks that look for incompatibilities or issues that would prevent a successful upgrade to the specified target version of MySQL. After the utility runs, a precheck log is sent to an Amazon Simple Storage Service (Amazon S3) bucket. You can use the same EC2 instance to parallelize the upgrade checker utility across many RDS for MySQL instances. For example, imagine you had 50 RDS for MySQL 8.0 instances and your team required a swift upgrade to MySQL 8.4. This solution lets you run the MySQL community pre-upgrade checks on all 50 instances, making it straightforward to identify which of them have incompatibilities with MySQL 8.4.

The following diagram shows the architecture of our proposed solution.

Solution Architecture

For this demonstration, we need one or more RDS for MySQL 5.7 or 8.0 instances. We allow connections from the EC2 instance to the database instances and create an S3 bucket to store the precheck log files. The EC2 instance communicates with the S3 bucket through an IAM role associated with the policy that we author.

The EC2 instance runs a bash script at launch, which downloads MySQL Shell 8.4 and runs the MySQL Shell upgrade checker utility across all RDS for MySQL instances specified by the user. The precheck log files are then uploaded to the specified S3 bucket for review.

Limitations

This solution isn’t compatible with Amazon Aurora MySQL-Compatible Edition. Additionally, if the RDS for MySQL instances reside in different VPCs, you must use VPC Peering or create the EC2 instance in the same VPC as the RDS for MySQL instances.

The MySQL Shell upgrade checker utility only performs the standard checks defined by the community. When you start an upgrade, Amazon RDS runs additional checks to identify and resolve any remaining issues. To prevent errors during the upgrade, follow RDS for MySQL upgrade best practices.

In the following sections, we discuss how to set up the resources for running MySQL major version prechecks with the MySQL Shell upgrade checker utility across your RDS for MySQL instances.

Prerequisites

For this walkthrough, you should have the following prerequisites:

  • An AWS account with least privilege to manage Amazon EC2, AWS Secrets Manager, and Amazon S3
  • One or more existing Amazon RDS for MySQL instances, an AWS Secrets Manager secret for each instance, and an Amazon S3 bucket to store the precheck log files
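The launch script shown later in this post reads each database's connection details from a Secrets Manager secret and expects the JSON keys host, username, and password. The following is a minimal sketch of that payload and of how the script extracts a field with jq; the secret name, endpoint, and credential values are hypothetical placeholders.

```shell
# Hypothetical secret payload; the launch script expects exactly these keys.
secret_json='{"host":"database-1.abc123.us-east-1.rds.amazonaws.com","username":"admin","password":"example-password"}'

# A secret in this shape could be created with (requires Secrets Manager permissions):
#   aws secretsmanager create-secret --name database1_secret --secret-string "$secret_json"

# The launch script parses each field with jq, for example:
echo "$secret_json" | jq -r '.host'
```

If a secret is missing one of these keys, jq returns the string "null" for that field, so it's worth confirming the payload shape before launch.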

Create an IAM role to allow the EC2 instance to connect to Amazon S3

For this demonstration, we create an IAM Policy and assign it to a newly created IAM Role.

To create an IAM role:

  1. In the AWS Management Console, select Identity and Access Management (IAM).
  2. In the navigation pane, choose Policies.
  3. In the Policies screen, choose Create Policy.
  4. In the Specify permissions window, select JSON.
  5. Paste the following JSON and replace the variables highlighted in the policy with your applicable resources.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket",
                    "s3:GetBucketLocation"
                ],
                "Resource": "arn:aws:s3:::$bucketname"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObjectAttributes",
                    "s3:GetObject",
                    "s3:PutObject",
                    "s3:ListMultipartUploadParts",
                    "s3:AbortMultipartUpload"
                ],
                "Resource": "arn:aws:s3:::$bucketname/*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "secretsmanager:GetSecretValue",
                    "secretsmanager:DescribeSecret",
                    "secretsmanager:ListSecrets"
                ],
                "Resource": [
                    "arn:aws:secretsmanager:$region:$accountID:secret:$secretname1",
                    "arn:aws:secretsmanager:$region:$accountID:secret:$secretname2"
                ]
            }
        ]
    }
  6. On Review and Create, enter a name and description for the new policy and choose Create policy.
  7. Select Roles from the navigation pane.
  8. On the Roles screen, choose Create Role.
  9. Under Select trusted entity:
    1. Select AWS service as the Trusted entity type.
    2. For Service or use case, select EC2, then choose Next.

  10. In the Add Permissions screen, select the policy created in Step 6 as well as the AWS managed policy AmazonSSMManagedInstanceCore, then choose Next.
  11. Enter a name and choose Create role.

Because the IAM role is created using the console, an instance profile is automatically created for it. This instance profile is required in later steps. If you use the AWS Command Line Interface (AWS CLI) instead, you must use the create-instance-profile and add-role-to-instance-profile commands to attach the role to an instance profile.
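For the AWS CLI path, the instance profile steps look like the following sketch; the role and profile names are placeholders for the role you created earlier.

```shell
# Placeholder names; substitute the IAM role you created in the previous steps.
ROLE_NAME="mysql-precheck-role"
PROFILE_NAME="mysql-precheck-profile"

# Create an instance profile and add the role to it (requires IAM permissions):
aws iam create-instance-profile --instance-profile-name "$PROFILE_NAME"
aws iam add-role-to-instance-profile \
    --instance-profile-name "$PROFILE_NAME" \
    --role-name "$ROLE_NAME"
```

You can then reference the instance profile name when launching the EC2 instance.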

(Optional) Create a VPC endpoint for S3

This section is only needed if you are creating an EC2 instance in a private VPC to allow the VPC to connect to the S3 service.

To create a VPC endpoint:

  1. Open the Amazon Virtual Private Cloud (Amazon VPC) console.
  2. In the navigation pane, choose Endpoints and then choose Create Endpoint.
  3. Enter a name for your VPC endpoint.
  4. In the search box, enter S3.
  5. Choose the filter that matches the following pattern: com.amazonaws.<region>.s3
  6. In the Services options, select Gateway.
  7. In the VPC drop-down, select the VPC into which you will deploy the EC2 instance.
  8. Select the route tables that your EC2 instance will use and choose Create Endpoint.
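The console steps above can also be performed with the AWS CLI, as in the following sketch; the Region, VPC ID, and route table ID are placeholders to replace with your own values.

```shell
# Placeholder identifiers; substitute your own Region, VPC, and route table.
aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Gateway \
    --vpc-id vpc-0123456789abcdef0 \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0
```

A Gateway endpoint for S3 adds a route to the selected route tables, so traffic from the EC2 instance to S3 stays within the AWS network.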

Launch the EC2 instance and run the prechecks

At this stage of the demonstration, we create an EC2 Instance to run the prechecks.

To launch the EC2 instance and run the prechecks:

  1. In the Amazon EC2 console, choose Instances in the navigation pane.
  2. In the Instances pane, choose Launch Instances.
  3. Configure and launch an instance:
    1. Under Name and tags, enter a name for the instance.
    2. Select Quick Start, and then select Amazon Linux.
    3. Under Instance Type, select your preferred EC2 instance type.
    4. Under Key pair (login), choose Proceed without a key pair; connecting to the EC2 instance isn't needed for this solution. If you do need to connect to the instance, you can use AWS Systems Manager.
    5. Under Network settings, select or create a security group that allows the EC2 instance to connect to your RDS instance.
    6. Keep the default settings for Configure storage.
    7. Expand the Advanced details section and select the IAM instance profile that you created in the previous section.
    8. In the User data section, paste the following bash script:
      #!/bin/bash
      
      EXPECTED_MD5='15a20fea9018662224f354cb78b392e7'
      # URL of the file to download
      FILE_URL="https://dev.mysql.com/get/mysql84-community-release-el9-1.noarch.rpm"
      # Name of the downloaded file
      DOWNLOADED_FILE='mysql84-community-release-el9-1.noarch.rpm'
      # MySQL GPG key URL
      GPG_KEY_URL="https://repo.mysql.com/RPM-GPG-KEY-mysql-2023"
      
      # Function to check if a command exists
      command_exists() {
          command -v "$1" >/dev/null 2>&1
      }
      
      # Check for root privileges
      if [ "$(id -u)" -ne 0 ]; then
          echo "This script must be run as root or with sudo."
          exit 1
      fi
      
      # Download the file
      echo "Downloading MySQL RPM from $FILE_URL..."
      if command_exists curl; then
          curl -L -o "$DOWNLOADED_FILE" "$FILE_URL"
      elif command_exists wget; then
          wget --max-redirect=5 -O "$DOWNLOADED_FILE" "$FILE_URL"
      else
          echo "Error: Neither curl nor wget is available. Please install one of them."
          exit 1
      fi
      
      # Check if the download was successful
      if [ $? -ne 0 ] || [ ! -s "$DOWNLOADED_FILE" ]; then
          echo "Error: Failed to download the file or file is empty."
          exit 1
      fi
      
      # Calculate MD5 hash
      if command_exists md5sum; then
          ACTUAL_MD5=$(md5sum "$DOWNLOADED_FILE" | awk '{print $1}')
      elif command_exists md5; then
          ACTUAL_MD5=$(md5 -q "$DOWNLOADED_FILE")
      else
          echo "Error: No MD5 calculation tool found. Please install md5sum."
          exit 1
      fi
      
      # Compare hashes
      if [ "$ACTUAL_MD5" = "$EXPECTED_MD5" ]; then
          echo "MD5 check passed. File integrity verified."
      else
          echo "MD5 check failed. File may be corrupted or tampered with."
          echo "Expected: $EXPECTED_MD5"
          echo "Actual: $ACTUAL_MD5"
          exit 1
      fi
      
      # Import MySQL GPG key
      echo "Importing MySQL GPG key..."
      rpm --import "$GPG_KEY_URL"
      
      if [ $? -ne 0 ]; then
          echo "Warning: Failed to import MySQL GPG key. Proceeding without verification."
      fi
      
      # Install the RPM using yum
      echo "Installing MySQL repository..."
      if ! yum localinstall -y "$DOWNLOADED_FILE"; then
          echo "Error: Failed to install MySQL repository."
          exit 1
      fi
      
      
      
      # Maximum number of retry attempts
      MAX_RETRIES=3
      # Delay between retries (in seconds)
      RETRY_DELAY=5
      
      # Function to check if MySQL Shell is installed
      check_mysql_shell() {
          if command -v mysqlsh &> /dev/null; then
              return 0  # MySQL Shell is installed
          else
              return 1  # MySQL Shell is not installed
          fi
      }
      
      
      # Function to handle retries
      retry_command() {
          local cmd=$1
          local description=$2
          local retries=0
          local return_value
      
          while [ $retries -lt $MAX_RETRIES ]; do
              echo "Attempting $description (Attempt $((retries+1))/$MAX_RETRIES)"
              
              # Execute the command
              eval "$cmd" &
              local cmd_pid=$!
              wait $cmd_pid
              return_value=$?
      
              if [ $return_value -eq 0 ]; then
                  echo "$description successful!"
                  return 0
              else
                  retries=$((retries+1))
                  if [ $retries -lt $MAX_RETRIES ]; then
                      echo "$description failed. Retrying in $RETRY_DELAY seconds..."
                      sleep $RETRY_DELAY
                  else
                      echo "$description failed after $MAX_RETRIES attempts."
                      return 1
                  fi
              fi
          done
          return 1
      }
      
      # Main installation function
      install_mysql_shell() {
          echo "Checking if MySQL Shell is already installed..."
          
          if check_mysql_shell; then
              echo "MySQL Shell is already installed!"
              exit 0
          fi
      
          echo "Installing MySQL Shell..."
      
          # Clean DNF cache with retry
          if ! retry_command "sudo dnf clean all" "Cleaning DNF cache"; then
              echo "Failed to clean DNF cache. Continuing anyway..."
          fi
      
          # Update package list with retry
          if ! retry_command "sudo dnf update -y" "Updating package list"; then
              echo "Failed to update package list. Continuing anyway..."
          fi
          
          # Install MySQL command line
          if ! retry_command "sudo dnf install -y mariadb105" "Installing MySQL Command Line"; then
              echo "Failed to install MySQL command line after all retry attempts."
              exit 1
          fi
      
          # Install MySQL Shell with retry
          if ! retry_command "sudo dnf install -y mysql-shell" "Installing MySQL Shell"; then
              echo "Failed to install MySQL Shell after all retry attempts."
              exit 1
          fi
      
          # Final verification
          if check_mysql_shell; then
              echo "MySQL Shell has been successfully installed!"
              echo "Installation version:"
              mysqlsh --version
          else
              echo "Installation verification failed! Please check the system manually."
              exit 1
          fi
      }
      
      # Execute main function with error handling
      {
          install_mysql_shell
      } || {
          echo "An error occurred during installation!"
          exit 1
      }
      
      # Function to get secret from AWS Secrets Manager
      get_secret() {
          local secret_name=$1
          local region=${2:-"us-east-1"} # Default region, change as needed
          
          secret=$(aws secretsmanager get-secret-value \
              --secret-id "$secret_name" \
              --region "$region" \
              --query 'SecretString' \
              --output text)
          
          if [ $? -ne 0 ]; then
              echo "Error retrieving secret: $secret_name"
              return 1
          fi
          
          echo "$secret"
      }
      
      # Function to parse JSON
      parse_json() {
          local json=$1
          local key=$2
          echo "$json" | jq -r ".$key"
      }
      
      # Database AWS Secrets Manager Secrets
      SECRET_NAMES=(
          "database1_secret"
          "database2_secret"
          # Add more secret names as needed
      )
      
      # Amazon S3 Bucket Info
      bucket="bucketname/folder"
      
      # Function to validate MySQL connection
      validate_connection() {
          local host=$1
          local user=$2
          local password=$3
          
          if mysql -h "$host" -u "$user" -p"$password" -e "exit" 2>/dev/null; then
              return 0
          else
              return 1
          fi
      }
      
      # Function to run mysqlshell prechecker
      run_prechecker() {
          local host=$1
          local user=$2
          local password=$3
          
          echo "Running prechecker on host: $host"
          mysqlsh --uri="${user}:${password}@${host}" -- util checkForServerUpgrade --targetVersion=8.0.39 > "/root/${host}.log"
          aws s3 cp "/root/${host}.log" "s3://${bucket}/${host}.log"
      }
      
      # Clear screen
      clear
      
      # Welcome message
      echo "MySQL Pre-checker Script"
      echo "======================="
      echo "Running pre-checker on configured databases..."
      echo
      
      # Check if AWS CLI is installed
      if ! command -v aws &> /dev/null; then
          echo "AWS CLI is not installed. Please install it first."
          exit 1
      fi
      
      # Check if jq is installed
      if ! command -v jq &> /dev/null; then
          echo "jq is not installed. Please install it first."
          exit 1
      fi
      
      # Process each secret
      for secret_name in "${SECRET_NAMES[@]}"; do
          echo "Processing secret: $secret_name"
          
          # Get secret from AWS Secrets Manager
          secret_json=$(get_secret "$secret_name")
          if [ $? -ne 0 ]; then
              echo "Skipping $secret_name due to error"
              continue
          fi
          
          # Parse secret values
          host=$(parse_json "$secret_json" "host")
          username=$(parse_json "$secret_json" "username")
          password=$(parse_json "$secret_json" "password")
          
          echo
          echo "Processing $host..."
          echo "-------------------------"
          echo "Username: $username"
          
          # Validate connection
          echo "Validating connection..."
          if validate_connection "$host" "$username" "$password"; then
              echo "Connection successful!"
              run_prechecker "$host" "$username" "$password"
          else
              echo "Failed to connect to $host. Please check credentials."
          fi
      done
      
      echo
      echo "Pre-checker analysis complete!"
    9. The script downloads the MySQL repository for Community MySQL 8.4, then installs MySQL Shell and the MySQL command line client. After these installations complete, the script verifies that the EC2 instance can connect to each RDS for MySQL database, runs the Community MySQL prechecks, and uploads a log file for each instance to S3.
    10. Before launching, make sure you replace the placeholders in the script with your environment information, including the "Database AWS Secrets Manager Secrets" and "Amazon S3 Bucket Info" sections within the code.
    11. You can change --targetVersion to the MySQL community version you want to check your instance against (for example, --targetVersion=8.4.0). You can specify any release from 8.0.11 up to 8.4 in the "Function to run mysqlshell prechecker" section of the script.
  4. Choose Launch instance.

The EC2 instance will launch and run the MySQL community prechecks on your RDS for MySQL instances. After this is completed, you can retrieve the files from your S3 bucket:

To retrieve files from S3:

  1. Open the Amazon S3 console.
  2. Select the bucket chosen in the previous procedure.
  3. Select the check box next to the log files you want to review.
  4. Choose Download or Open to view the file.

You can then review the downloaded log file for errors and resolve these errors before attempting to upgrade your RDS for MySQL instances.
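If you checked many instances, you can also pull the logs down with the AWS CLI and scan them in bulk. The helper below is a minimal sketch; the bucket path is a placeholder, and the assumption that failed checks mention "error" in the log text should be confirmed against your own upgrade checker output.

```shell
# Download all precheck logs locally first (requires AWS credentials):
#   aws s3 sync "s3://bucketname/folder" ./precheck-logs

# Print the names of precheck logs that mention errors. The "error" keyword
# is an assumption; verify it against your own upgrade checker output.
failed_prechecks() {
    grep -l -i 'error' "$1"/*.log
}

# Usage: failed_prechecks ./precheck-logs
```

Logs that don't appear in the output still deserve a skim for warnings and notices before you schedule the upgrade.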

Clean up

To avoid any additional charges after the automatic process has run, you need to delete the VPC endpoint, IAM roles, S3 bucket, and EC2 instance. Use the following documentation to remove each of these.

Conclusion

In this post, we demonstrated a custom solution that automates part of the Amazon RDS for MySQL upgrade prechecks applicable to MySQL major versions 8.0 and 8.4. The EC2 instance invokes the MySQL Shell upgrade checker utility on the selected Amazon RDS for MySQL instances and uploads the prechecker log files to Amazon S3 for each instance.

If you have questions about the solution in this post, contact your AWS representative or leave a comment.


About the Authors

Nirupam Datta is a Senior Cloud Support DBE at AWS and has been with AWS for over 4 years. With over 12 years of experience in database engineering and infra-architecture, Nirupam is a subject matter expert in the Amazon RDS core systems and Amazon RDS for SQL Server and Amazon Aurora MySQL. He provides technical assistance to customers, guiding them to migrate, optimize, and navigate their journey in the AWS Cloud.

Pat Doherty is a Cloud Support Engineer at AWS supporting the database team. He has been with AWS for 2 years. He has 10 years in the IT industry, previously supporting major pharmaceutical companies, especially during the height of the COVID-19 pandemic. He currently provides technical support on MySQL, Amazon Aurora, MariaDB and SQL Server databases, as well as assistance with the AWS Database Migration service.

Ryan Moore is a Cloud Support DBE II who has worked within the AWS database team for 2 years. Ryan is an Aurora MySQL and RDS MySQL subject matter expert who specializes in enabling customers to build performant, scalable, and secure architectures within the AWS Cloud.

Cade Kettner is a Cloud Support Engineer who has worked with the AWS Database Team for over 1 year. In his role, he provides technical assistance with AWS services including RDS MySQL, RDS MariaDB, RDS SQL Server, Aurora MySQL, and AWS DMS to help troubleshoot technical issues and provide tailored solutions for customers.