AWS Database Blog

Migrate Google Cloud for MySQL to Amazon Aurora MySQL using AWS DMS

AWS recently added Google Cloud for MySQL (version 8.0 and newer) as a source for AWS Database Migration Service (AWS DMS). In this post, we demonstrate how you can migrate from Google Cloud for MySQL to Amazon Aurora MySQL-Compatible Edition in an automated way and minimize downtime using AWS DMS.

As of this writing, AWS DMS can’t migrate databases that have Cloud SQL’s Allow only SSL connections setting enabled, because that setting requires both server-side and client-side certificate validation, and AWS DMS supports only server-side certificate verification. In this post, we demonstrate an approach to migrate databases both with and without SSL.

For homogeneous migrations such as this, we recommend using native tools. For this post, we assume that native tools aren’t sufficient for your use case and you want to migrate only certain schemas.

Solution overview

For Google Cloud for MySQL databases without SSL, we use AWS Schema Conversion Tool (AWS SCT) to apply the source schema to the target Aurora MySQL database. We use AWS DMS to migrate the data between Google Cloud for MySQL and Aurora MySQL with change data capture (CDC) enabled for incremental changes.

For Google Cloud for MySQL databases with SSL, we install a non-SSL MySQL proxy on a standalone Amazon Elastic Compute Cloud (Amazon EC2) instance. AWS DMS connects to that proxy MySQL instance to communicate with the source Google Cloud for MySQL database. This enables us to migrate the data from an SSL-enabled Google Cloud for MySQL database to Aurora MySQL. You have to reach the Google Cloud for MySQL database over the internet from your AWS environment, but that traffic is encrypted with SSL during transit.

The following diagram shows the architecture of the overall solution.

Overall solution architecture

We use a simple customers-products-orders table structure and the following dataset for this migration. The following figure shows the entity relationship of this sample dataset. There are three tables: customers, products, and orders. The orders table has two foreign keys, one referencing the customers table and the other referencing the products table.

ERD Diagram
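If you want to follow along with a comparable dataset, the following is a minimal sketch of such a schema. The database name (salesdb), column names, and data types are our assumptions for illustration, not the exact dataset shown in the screenshots.

mysql -h <source-public-ip> -u <admin-user> -p <<'SQL'
CREATE DATABASE IF NOT EXISTS salesdb;
USE salesdb;

-- Parent tables
CREATE TABLE customers (
  customer_id INT AUTO_INCREMENT PRIMARY KEY,
  full_name   VARCHAR(100) NOT NULL,
  email       VARCHAR(100)
);

CREATE TABLE products (
  product_id INT AUTO_INCREMENT PRIMARY KEY,
  name       VARCHAR(100) NOT NULL,
  price      DECIMAL(10,2) NOT NULL
);

-- Child table with foreign keys to both parents
CREATE TABLE orders (
  order_id    INT AUTO_INCREMENT PRIMARY KEY,
  customer_id INT NOT NULL,
  product_id  INT NOT NULL,
  quantity    INT NOT NULL DEFAULT 1,
  order_date  DATE NOT NULL,
  FOREIGN KEY (customer_id) REFERENCES customers (customer_id),
  FOREIGN KEY (product_id)  REFERENCES products (product_id)
);
SQL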

Prerequisites

For this walkthrough, you should have the following prerequisites:

  • An existing Google Cloud for MySQL database with public connectivity
  • An AWS account
  • An Aurora MySQL database, which acts as the destination

This solution incurs costs in your account while using the services described.

Section 1: Migrate Google Cloud for MySQL databases without SSL

For homogeneous migrations, particularly those that don’t need to capture ongoing changes, we recommend MySQL native tools like mysqldump and the MySQL Workbench Data Export and Import wizard. If the source is a different database engine, such as Google Cloud for SQL Server, then AWS SCT is required to convert the schema. However, even for this homogeneous migration (Google Cloud for MySQL to Aurora MySQL), we use AWS SCT to apply the source schema to the destination before starting the migration. The following diagram shows the architecture.

Solution Architecture No SSL
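As a point of reference, a native-tool migration of a single schema would look roughly like the following dump and restore. The host names are placeholders, and the schema name assumes the sample salesdb schema sketched earlier; this approach doesn’t capture ongoing changes, which is why we use AWS DMS in this post.

# Dump one schema from the source (Google Cloud)
mysqldump -h <gcp-mysql-public-ip> -u <user> -p \
  --single-transaction --routines --triggers --databases salesdb > salesdb.sql

# Restore it into the target Aurora MySQL cluster
mysql -h <aurora-cluster-endpoint> -u <user> -p < salesdb.sql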

You can follow a similar approach for heterogeneous migrations (for example, from Google Cloud for MySQL to Amazon Aurora PostgreSQL-Compatible Edition).

Install AWS SCT

AWS SCT is a free application that you can install on an on-premises server, a local workstation, or an EC2 instance. For installation instructions, refer to Installing, verifying, and updating AWS SCT. For this post, we create a t3.medium EC2 instance and install the tool there.

Create the AWS DMS replication instance

You need a replication instance to migrate databases. This instance requires sufficient storage and processing power to perform the tasks that you assign and migrate data from your source database to the target database. For appropriate instance selection, see Working with an AWS DMS replication instance.

Let’s start by creating a basic instance. We set the parameters as follows (an equivalent AWS CLI call is sketched after the list):

  1. For Name, enter a friendly name that uniquely identifies this instance.
  2. For Instance class, choose an instance according to your workload.
  3. For Engine version, we recommend choosing the latest version.
  4. For Allocated storage, choose the storage according to your workload.
  5. For VPC, choose the VPC where this instance will be placed.
  6. We recommend choosing Multi-AZ for production workloads.
  7. Select the Publicly accessible setting, because this instance needs to communicate with Google Cloud over the internet.

DMS Instance Setup
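If you prefer to script this step, you can create the same instance with the AWS CLI. The identifier, instance class, storage size, and subnet group below are placeholder assumptions; adjust them to your workload and VPC. Wait for the instance status to become available before creating endpoints.

aws dms create-replication-instance \
  --replication-instance-identifier gcp-to-aurora-repl \
  --replication-instance-class dms.t3.medium \
  --allocated-storage 100 \
  --multi-az \
  --publicly-accessible \
  --replication-subnet-group-identifier <your-dms-subnet-group>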

Create the target Aurora MySQL instance

If you already have an Aurora MySQL instance, feel free to use it. If not, create an Aurora MySQL cluster. Make sure you have database users on the target with the permissions that AWS SCT and AWS DMS require. AWS SCT needs the following additional permissions to apply a schema to the target:

GRANT CREATE, ALTER, DROP, INDEX, REFERENCES, CREATE VIEW, TRIGGER, CREATE ROUTINE, ALTER ROUTINE, EXECUTE ON *.* TO '<user>'@'<IP>';

Configure networking changes on Google Cloud

To ensure the AWS DMS instance and AWS SCT running on Amazon EC2 can connect to the Google Cloud for MySQL instance, make sure the instance has a public IP. Add the public IP of the AWS DMS replication instance and the AWS SCT EC2 instance under Authorized networks. The following image shows the configuration.

GCP Networking changes

Configure source MySQL changes

We create a database user and, using the principle of least privilege, assign only the permissions needed for the task:

create user 'dmsuser'@'%' identified by '<password>';

For the AWS SCT to read the schema properly, you need to add the following privileges:

grant select on *.* to 'dmsuser'@'%';
grant show view on *.* to 'dmsuser'@'%';

If you want to also capture ongoing changes in the AWS DMS task, you need to add the following privileges:

grant replication client on *.* to 'dmsuser'@'%';
grant replication slave on *.* to 'dmsuser'@'%';
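Change data capture also requires the source to have binary logging enabled in ROW format (on Cloud SQL for MySQL, binary logging is tied to point-in-time recovery). A quick way to verify, using the dmsuser credentials created above, is the following check (the host placeholder is an assumption):

mysql -h <gcp-mysql-public-ip> -u dmsuser -p \
  -e "SHOW VARIABLES LIKE 'log_bin'; SHOW VARIABLES LIKE 'binlog_format';"
# Expect log_bin = ON and binlog_format = ROW before starting a CDC task.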

Perform schema conversion

Now it’s time to apply the source schema to the target.

  1. Connect to the EC2 instance where AWS SCT is installed.
  2. Open the AWS SCT application and, on the File menu, choose New Project Wizard.
  3. For Project name, enter an appropriate name.
  4. For Location, choose an appropriate location on your local machine.
  5. For Kind, choose SQL Database.
  6. For Source engine, choose MySQL.
  7. For Migration, select I want to keep the same engine but optimize for the cloud.
  8. Choose Next.
  9. For Connection name, enter an appropriate name.
  10. For Server name, enter the source Google Cloud for MySQL public IP.
  11. For Server port, use 3306 if using the default port.
  12. Enter the user name and password you created as a prerequisite.
  13. Choose Next.

After the source connection is established, it shows all the schemas that the MySQL user has access to.

  1. Select the schemas you want to migrate and choose Next.

The database migration assessment report summarizes all the schema conversion tasks and details the action items for schemas that can’t be converted to the DB engine of your target DB instance. Because this is a homogeneous migration, the migration assessment may not provide many action items, as shown in the following screenshot.

  1. Choose Next to proceed.
    SCT Database migration report

Now you configure the target.

  1. For Name, enter the Aurora MySQL DNS.
  2. For Server port, use 3306 if using the default port.
  3. Enter the user name and password for the user with appropriate write permissions.
  4. Choose Finish to end the wizard.

Because there are no action items in terms of converting or adjusting the source schema, you can directly load the schema from source to target, as shown in the following screenshot.

Gif showing SCT apply changes

Choose (right-click) the target schema and choose Apply to database.

A save icon appears on top of the objects, indicating that they’re applied to the target database.

Now that the source schema is applied to the target, it’s time to create AWS DMS endpoints and tasks.

Create the AWS DMS endpoints

An endpoint provides the connection, data store type, and location information about your source and target data stores. You need to create at least one source and one target endpoint.

Source endpoint

When creating the source endpoint, provide the following information:

  1. For Endpoint identifier, enter a friendly name.
  2. For Source engine, choose Google Cloud for MySQL.
  3. For Access to endpoint, select Provide access information manually.
  4. For Server name, enter the public IP address of the Cloud SQL instance.
  5. For Port, use 3306 if using the default port.
  6. Enter the appropriate user name and password.
  7. Test the endpoint, then complete endpoint creation.

GCP source endpoint creation at DMS

Target endpoint

Repeat the previous process with the following parameters for the target endpoint:

  1. For Select RDS instance, choose the Aurora MySQL instance.
  2. For Access to endpoint, select Provide access information manually.
  3. For Server name, use the pre-populated setting.
  4. For Port, use 3306 if using the default port.
  5. Enter the appropriate user name and password.
  6. Test the endpoint, then complete endpoint creation.

Create the AWS DMS migration task

An AWS DMS task is where all the work happens. This is where you configure what database objects to migrate, logging requirements, error handling, and so on. Provide the following information when creating your task:

  1. For Task identifier, enter a friendly and identifiable name.
  2. For Replication instance, choose the one you created.
  3. For Source database endpoint, choose the Google Cloud for MySQL endpoint you created.
  4. For Target database endpoint, choose the Aurora MySQL endpoint you created.
  5. For Migration type, choose Migrate existing data and replicate ongoing changes.
    DMS Task for no SSL #1
  6. Enable Amazon CloudWatch Logs so you can debug issues.
  7. For Table mappings, select Wizard.
  8. Choose the schemas you want to migrate (the selection rule the wizard generates is sketched after this list).
  9. Leave everything else as defaults and choose Create task.
    DMS Task for no SSL #2
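Behind the wizard, the schema selection is just a table-mapping rule. The following sketch shows an equivalent selection rule and a CLI task creation call; the schema name, ARNs, and file name are placeholders, not values from this walkthrough.

cat > table-mappings.json <<'JSON'
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-salesdb",
      "object-locator": { "schema-name": "salesdb", "table-name": "%" },
      "rule-action": "include"
    }
  ]
}
JSON

aws dms create-replication-task \
  --replication-task-identifier gcp-to-aurora-task \
  --replication-instance-arn <replication-instance-arn> \
  --source-endpoint-arn <source-endpoint-arn> \
  --target-endpoint-arn <target-endpoint-arn> \
  --migration-type full-load-and-cdc \
  --table-mappings file://table-mappings.json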

As soon as the task is created, it’s in Creating status. After a few seconds, it changes to Ready status. The migration (and CDC if enabled) starts automatically.

After the load is complete, navigate to the Table statistics tab as shown in the following screenshot.

Full load completed for no SSL

If replicating ongoing changes, try inserting new records on the source to see the Inserts column update in the table statistics, as shown in the last row in the following screenshot.
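For example, assuming the sample salesdb schema sketched earlier, an insert like the following on the source should appear in the task’s Inserts counter within a few seconds (the user and values are illustrative):

mysql -h <gcp-mysql-public-ip> -u <app-user> -p salesdb \
  -e "INSERT INTO orders (customer_id, product_id, quantity, order_date)
      VALUES (1, 2, 3, CURDATE());"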

CDC on going

You may want to connect to the destination using something like MySQL Workbench. Also, you may want to perform data validation. When you’re satisfied with the counts between the source and destination, feel free to point your applications to the Aurora MySQL database and shut down the source Google Cloud for MySQL database.
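A simple, coarse validation is to compare row counts per table on both sides. The following sketch assumes the salesdb schema from earlier; for production migrations, consider enabling AWS DMS data validation on the task instead of (or in addition to) manual checks.

# Row counts on the source (Google Cloud)
mysql -h <gcp-mysql-public-ip> -u dmsuser -p \
  -e "SELECT table_name, table_rows FROM information_schema.tables WHERE table_schema = 'salesdb';"

# Row counts on the target (Aurora MySQL)
mysql -h <aurora-cluster-endpoint> -u <user> -p \
  -e "SELECT table_name, table_rows FROM information_schema.tables WHERE table_schema = 'salesdb';"

# Note: table_rows is an estimate for InnoDB; run SELECT COUNT(*) per table for exact counts.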

Section 2: Migrate SSL-enabled Google Cloud for MySQL databases

As of this writing, AWS DMS does not support the Google Cloud for MySQL setting Allow only SSL connections. A possible way to work around this limitation is to create a MySQL proxy in a known, safe environment. The proxy accepts non-SSL connections but uses SSL to connect to the source Google Cloud for MySQL database. An EC2 instance with a MySQL proxy installed can handle that for you.

We use ProxySQL, an open-source, high-performance, high-availability, database-protocol-aware proxy for MySQL. The following diagram shows the architecture.

Solution architecture for migrating non SSL database

Install ProxySQL

The following instructions apply to an Amazon Linux AMI. You can follow a similar approach for other Linux distributions. If you don’t have an available instance, launch one. The instance size and type depend on factors such as how much data you’re migrating and how quickly you want the migration to complete.

When an instance is ready, make sure you allow SSH traffic from wherever you want to connect from. SSH into the instance and run the following commands. These commands add the ProxySQL yum repository, install and start ProxySQL, install the MySQL client, and connect to the ProxySQL admin interface. You may need to make slight adjustments for a different Linux distribution.

sudo yum update
sudo tee /etc/yum.repos.d/proxysql.repo<<EOF
[proxysql_repo]
name= ProxySQL YUM repository
baseurl=https://repo.proxysql.com/ProxySQL/proxysql-2.0.x/centos/latest
gpgcheck=1
gpgkey=https://repo.proxysql.com/ProxySQL/repo_pub_key
EOF
sudo yum makecache
sudo yum install proxysql
sudo systemctl enable --now proxysql
systemctl status proxysql
sudo yum install mysql
mysql -u admin -padmin -h 127.0.0.1 -P6032 --prompt='ProxySQLAdmin> '

If everything worked, you should now be logged in to ProxySQL.

ProxySQL successfully installed

We recommend changing the default password for the ProxySQL admin user from admin to something else, then loading the change to runtime and saving it to disk so it persists across restarts:

UPDATE global_variables SET variable_value='admin:<password>' WHERE variable_name='admin-admin_credentials';

LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;

Create the AWS DMS replication instance

As stated earlier, you need a replication instance to migrate databases. This instance requires sufficient storage and processing power to perform the tasks that you assign and migrate data from your source database to the target database. For appropriate instance selection, see Working with an AWS DMS replication instance.

Create your instance with the following parameters:

  1. For Name, enter a friendly name that uniquely identifies this instance.
  2. For Instance class, choose an instance according to your workload.
  3. For Engine version, we recommend choosing the latest version.
  4. For Allocated storage, choose the storage according to your workload.
  5. For VPC, choose the VPC where this instance will be placed.
  6. We recommend choosing Multi-AZ for production workloads.
  7. Deselect the Publicly accessible setting.

Create the target Aurora MySQL instance

If you already have an Aurora MySQL instance, feel free to use it. If not, create an Aurora MySQL cluster. Make sure the target has AWS DMS permissions.

Configure networking, security, and user permissions on Google Cloud for MySQL

On the Google Cloud Platform (GCP) console, navigate to the networking section under the MySQL instance that you want to migrate, and complete the following steps:

  1. Add the public IP of the EC2 instance running ProxySQL so traffic from that instance can reach the database.
    GCP Networking Changes for SSL instance
  2. On the Security tab, select Allow only SSL connections.
  3. Generate a client certificate if you don’t have one already.
  4. Download both the server certificate and the client certificates.
    GCP generate and download SSL certificates
  5. On the source MySQL database, create two users: a dmsuser that is used by AWS DMS to migrate the data, and a monitor user that is used by ProxySQL to monitor the connection:
    create user 'dmsuser'@'%' identified by '<password>';
    grant select on *.* to 'dmsuser'@'%';
    CREATE USER 'monitor'@'%' IDENTIFIED BY '<password>';
    GRANT USAGE ON *.* TO 'monitor'@'%';

Configure ProxySQL

The following commands configure your ProxySQL instance so it can connect to the source Google Cloud for MySQL database. You add the server information, user information, and SSL certificates, and configure the monitor user that pings the source to make sure it’s online. Run these commands at the ProxySQLAdmin prompt. If you’ve lost that session on the ProxySQL EC2 instance, reconnect with the following command:

mysql -u admin -p<password> -h 127.0.0.1 -P6032 --prompt='ProxySQLAdmin> '
  1. Insert the source MySQL information in the mysql_servers table:
    INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight, use_ssl) VALUES ('<GCP MySQL IP>',0,3306,1000, 1);
  2. Load the servers to runtime and persist it to disk:
    LOAD MYSQL SERVERS TO RUNTIME;
    SAVE MYSQL SERVERS TO DISK;
  3. Insert the dmsuser in the mysql_users table:
    INSERT INTO mysql_users(username,password) VALUES ('dmsuser','<password>');
  4. Load the users to runtime and persist it to disk:
    load mysql users to runtime; 
    save mysql users to disk;
  5. Copy the SSL certificates you downloaded from GCP to the /var/lib/proxysql directory. These commands copy those files from a local machine (macOS terminal). Follow the equivalent steps for your local machine’s operating system if you’re not on macOS.
    scp -i <key.pem> server-ca.pem ec2-user@<EC2IPRunningProxySQL>:~/.
    scp -i <key.pem> client-cert.pem ec2-user@<EC2IPRunningProxySQL>:~/.
    scp -i <key.pem> client-key.pem ec2-user@<EC2IPRunningProxySQL>:~/.
  6. The files land in the home directory on the ProxySQL EC2 instance, so copy them to /var/lib/proxysql:
    sudo cp client-cert.pem client-key.pem server-ca.pem /var/lib/proxysql
  7. Run the following queries to set various variables for SSL and monitoring of the Google Cloud for MySQL database from ProxySQL:
    mysql -u admin -p<password> -h 127.0.0.1 -P6032 --prompt='ProxySQLAdmin> '
    SET mysql-ssl_p2s_cert="/var/lib/proxysql/client-cert.pem";
    SET mysql-ssl_p2s_key="/var/lib/proxysql/client-key.pem";
    SET mysql-ssl_p2s_ca="/var/lib/proxysql/server-ca.pem";
    SET mysql-ssl_p2s_cipher='ECDHE-RSA-AES256-SHA';
    UPDATE global_variables SET variable_value='monitor' WHERE variable_name='mysql-monitor_username';
    UPDATE global_variables SET variable_value='<password>' WHERE variable_name='mysql-monitor_password';
    UPDATE global_variables SET variable_value='2000' WHERE variable_name IN ('mysql-monitor_connect_interval','mysql-monitor_ping_interval','mysql-monitor_read_only_interval');
    UPDATE global_variables SET variable_value='8.0' WHERE variable_name='mysql-server_version';
    UPDATE global_variables SET variable_value='true' WHERE variable_name='admin-web_enabled';
    load mysql variables to runtime;
    save mysql variables to disk;
  8. Run SELECT on the MySQL connect log to make sure you can successfully connect to the source Google Cloud for MySQL database:
    SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 3;

    The following screenshot shows that ProxySQL is successfully interacting with the source database.

Proxy SQL successful connection to GCP
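As an additional end-to-end check before pointing AWS DMS at the proxy, you can connect through ProxySQL’s MySQL traffic port (6033 by default) with the dmsuser and run a trivial query; ProxySQL routes it over the SSL connection to the source:

mysql -u dmsuser -p -h 127.0.0.1 -P6033 \
  -e "SELECT @@hostname, @@version;"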

Create the AWS DMS endpoints

As stated earlier, an endpoint provides the connection, data store type, and location information about your sources and the targets. You need to create at least one source and one target endpoint.

Source endpoint

Because you already have a non-SSL proxy running for the Google Cloud for MySQL database, you can configure that as the source as follows:

  1. For Endpoint identifier, enter a friendly name.
  2. For Source engine, choose MySQL.
  3. For Access to endpoint, select Provide access information manually.
  4. For Server name, enter the IP address of the EC2 instance running ProxySQL.
  5. For Port, enter the ProxySQL MySQL traffic port (6033 by default, unless you changed mysql-interfaces).
  6. Enter the appropriate user name and password.
  7. Test the endpoint, then complete endpoint creation.

ProxySQL Endpoint configuration at DMS

Target endpoint

Repeat the process with the following parameters for the target endpoint (an AWS CLI sketch for creating both endpoints follows the list):

  1. For Select RDS instance, choose the Aurora MySQL instance.
  2. For Access to endpoint, select Provide access information manually.
  3. For Server name, use the pre-populated setting.
  4. For Port, use 3306 if using the default port.
  5. Enter the appropriate user name and password.
  6. Test the endpoint, then complete endpoint creation.
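Both endpoints can also be created from the AWS CLI. Because the proxy speaks plain MySQL, the source uses the standard mysql engine; the identifiers, addresses, and credentials below are placeholders.

# Source: the ProxySQL instance fronting Google Cloud
aws dms create-endpoint \
  --endpoint-identifier proxysql-source \
  --endpoint-type source \
  --engine-name mysql \
  --server-name <proxysql-ec2-private-ip> \
  --port 6033 \
  --username dmsuser \
  --password '<password>'

# Target: the Aurora MySQL cluster
aws dms create-endpoint \
  --endpoint-identifier aurora-target \
  --endpoint-type target \
  --engine-name aurora \
  --server-name <aurora-cluster-endpoint> \
  --port 3306 \
  --username <user> \
  --password '<password>'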

Create the AWS DMS task

As mentioned earlier, the AWS DMS task is where you can configure what database objects to migrate, logging requirements, error handling, and so on. Create your task with the following information:

  1. For Task identifier, enter a friendly and identifiable name.
  2. For Replication instance, choose the one you created.
  3. For Source database endpoint, choose the ProxySQL MySQL endpoint you created.
  4. For Target database endpoint, choose the Aurora MySQL endpoint you created.
  5. For Migration type, choose Migrate existing data.
    DMS Task for SSL database migration
  6. Enable CloudWatch Logs so you can debug issues.
  7. For Table mappings, select Wizard.
  8. Choose the schemas you want to migrate.
    DMS Task for no SSL #2
  9. Leave everything else as defaults and choose Create task.

As soon as the task is created, it’s in Creating status. After a few seconds, it changes to Ready status. The migration starts automatically.

After the load is complete, navigate to the Table statistics tab as shown in the following screenshot.

SSL Database migration completed

You may want to connect to the destination using something like MySQL Workbench. Also, you may want to perform data validation. When you’re satisfied with the counts between the source and destination, feel free to point your applications to the Aurora MySQL database.

Clean up

Stop the EC2 instances that are running AWS SCT and ProxySQL. Additionally, either delete the associated security groups for those instances or remove rules that are no longer needed. If you don’t need the DMS instance anymore, delete it.

Conclusion

In this post, we demonstrated how to migrate your database and schema from Google Cloud for MySQL (with and without SSL) to Aurora MySQL using AWS SCT and AWS DMS. Try this solution out and let us know if you have any comments.

See a blog post from a customer following a similar approach: Gearing up OLX Autos’ platform by migrating from GCP to AWS using AWS DMS


About the author

Kasi Muthu is a global partner solutions architect focusing on databases, analytics, and AI/ML at AWS, based out of Houston, TX. He is passionate about helping partners and customers accelerate their cloud data journey. He is a trusted advisor in this field and has plenty of experience architecting and building scalable, resilient, and performant workloads in the cloud. Outside of work, he enjoys spending time with his family and spends quite a bit of his free time on YouTube.