

Introducing Data Extractors in AWS Schema Conversion Tool Version 1.0.602

by Eran Schitzer | in Schema Conversion Tool (SCT)

Eran Schitzer is a product manager at Amazon Web Services.

We are pleased to announce the recent release of version 1.0.602 of the AWS Schema Conversion Tool (AWS SCT). The new version includes a key new capability: data migration agents that extract data from Oracle, Microsoft SQL Server, Teradata, IBM Netezza, Greenplum, and HPE Vertica data warehouses, then prepare, optimize, and upload it to Amazon Redshift.

The new data migration agents are locally installed agents designed to extract data from data warehouses based on Oracle version 11, Microsoft SQL Server version 2008 and later, Teradata version 14.10 and later, IBM Netezza version 7.0.3 and later, Greenplum version 4.3 and later, and HPE Vertica version 7.2.2 and later. After the data is extracted, the AWS Schema Conversion Tool optimizes it for Amazon Redshift and saves it in local files. AWS SCT can automatically upload these files to Amazon Redshift, or you can transfer them to an Amazon Simple Storage Service (Amazon S3) bucket and load them into Amazon Redshift manually. For large datasets, you can use AWS Snowball to ship the data to AWS.

The migration agents are designed to extract data in parallel and independently. For example, with a large Oracle data warehouse, SCT automatically distributes work among all available migration agents, directing them to extract data from the different partitions and tables in the schema in the way that is most optimized for Amazon Redshift, consolidate the data, and save it in local files. The migration agents work completely independently of SCT, and you can replace them, if needed, without losing any work. This independent parallel execution capability not only accelerates data extraction, but also lets the process withstand the failure of an individual agent.

The new migration agents are supported on Red Hat and Debian Linux distributions, and the SCT client can run on Windows, Mac, Fedora, or Ubuntu. For more information on installation, security settings, and configuration, see Installing Migration Agents in the AWS SCT User Guide.

When the migration agents are installed and configured, you manage them by using the AWS Schema Conversion Tool. The migration agents act as listeners. When they receive instructions from the AWS Schema Conversion Tool, they extract data from your data warehouse. You can find instructions on how to register the migration agents with your AWS Schema Conversion Tool project in Managing Migration Agents in the AWS SCT User Guide.

After your migration extraction tasks complete, your data is ready. The migration agents save your data as files that are compressed, but not encrypted. The migration agents can copy the data files to your Amazon S3 bucket over an Internet connection. Alternatively, you can copy the files to an Amazon S3 bucket using AWS Snowball. Copy the data from the output folder, under the working folder that you specified for your agents.
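For a manual transfer, a small script can gather every extracted file under the agents' working folder and map each one to an S3 key. The sketch below is illustrative only: the `out` subfolder name and the per-table layout are assumptions, so match them to the output folder your agents actually produce.

```python
import tempfile
from pathlib import Path

def collect_extract_files(working_folder):
    """Return (local_path, s3_key) pairs for every file under out/.

    Assumes extracted files live in an "out" subfolder of the agents'
    working folder, grouped into one folder per table (an illustrative
    layout, not a documented one).
    """
    out_dir = Path(working_folder) / "out"
    pairs = []
    for path in sorted(out_dir.rglob("*")):
        if path.is_file():
            # Use the path relative to out/ as the S3 object key.
            pairs.append((path, path.relative_to(out_dir).as_posix()))
    return pairs

# Demo with a throwaway working folder so the sketch is runnable as-is.
root = Path(tempfile.mkdtemp())
(root / "out" / "sales").mkdir(parents=True)
(root / "out" / "sales" / "part-0001.gz").write_bytes(b"")
pairs = collect_extract_files(root)
print(pairs)
```

Each resulting pair can then be handed to whatever copy mechanism you use, whether an S3 upload over the Internet or a copy onto an AWS Snowball device.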

After uploading your data files to Amazon S3, the data is ready to be uploaded to Amazon Redshift. The extraction task can automatically upload the data to Amazon Redshift or you can use the Amazon Redshift COPY command to load your data to Amazon Redshift. For more information, see Loading Data from Amazon S3 in the Amazon Redshift documentation.
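For the manual route, the COPY statement might be assembled along these lines. The table name, bucket, prefix, IAM role, and the GZIP/DELIMITER options below are placeholders, not values emitted by SCT; adjust them to match your actual extract format and AWS account.

```python
def build_copy_statement(table, bucket, prefix, iam_role):
    """Compose a Redshift COPY statement that loads gzip-compressed,
    pipe-delimited files from S3 (the compression and delimiter are
    assumptions for this sketch)."""
    return (
        f"COPY {table} "
        f"FROM 's3://{bucket}/{prefix}' "
        f"IAM_ROLE '{iam_role}' "
        "GZIP DELIMITER '|';"
    )

# All arguments here are hypothetical example values.
stmt = build_copy_statement(
    "sales", "my-migration-bucket", "extract/sales/",
    "arn:aws:iam::123456789012:role/RedshiftCopyRole",
)
print(stmt)
```

You would run the resulting statement against your cluster with a SQL client; because COPY loads all files under the given prefix in parallel, one statement per table is usually enough.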

For more information on migration agents, see Using Data Migration Agents in the AWS SCT User Guide.

For more information about version 1.0.602, see the AWS Schema Conversion Tool User Guide. You can download SCT from here.

Migrating Oracle Database from On-Premises or Amazon EC2 Instances to Amazon Redshift

by Ballu Singh and Pubali Sen | in DMS, Migration, Redshift, Schema Conversion Tool (SCT)

Ballu Singh and Pubali Sen are solutions architects at Amazon Web Services.

AWS Database Migration Service (AWS DMS) helps you migrate databases to AWS easily and securely. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases. The service supports homogenous migrations such as Oracle to Oracle. It also supports heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or Microsoft SQL Server to MySQL. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.

Data replication with AWS Database Migration Service integrates tightly with the AWS Schema Conversion Tool (AWS SCT), simplifying heterogeneous database migration projects. You can use AWS SCT for heterogeneous migrations. You can use the schema export tools native to the source engine for homogenous migrations.

In this blog post, we focus on migrating the data from Oracle Data Warehouse to Amazon Redshift.

In the past, AWS SCT couldn’t convert custom code, such as views and functions, from Oracle Data Warehouse to a format compatible with Amazon Redshift. To migrate views and functions, you had to first convert the Oracle Data Warehouse schema to PostgreSQL. Then you’d apply a script to extract views and functions that are compatible with Amazon Redshift.

After an update based on customer feedback, we’re happy to let you know that with AWS SCT and AWS DMS, you can now migrate Oracle Data Warehouse to Amazon Redshift along with views and functions.

The following diagram illustrates the migration process.


How to Migrate Your Oracle Database to Amazon Aurora

by Ed Murray | in Aurora, DMS, Migration, Schema Conversion Tool (SCT)

Ed Murray is a manager at Amazon Web Services.

This blog post gives you a quick overview of how you can use the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS) to facilitate and simplify migrating your commercial database to Amazon Aurora. In this case, we focus on migrating from Oracle to the MySQL-compatible Amazon Aurora.

Changing database engines can be daunting. However, the value proposition of a highly scalable, cost-effective fully managed service such as Amazon Aurora can make the challenge worth it—especially when there are tools to simplify the process. When migrating your database from one engine to another, there are two major things to consider: the conversion of the schema and code objects, and the migration and conversion of the data itself. Fortunately, AWS has tools to facilitate both the conversion and migration of your database.

The AWS Schema Conversion Tool helps simplify heterogeneous database migrations by automatically converting your source database schema and a majority of the custom code to a format compatible with your new target database. The custom code that the tool converts includes views, stored procedures, and functions. Any code that the tool cannot convert automatically is clearly marked so that you can convert it yourself. The AWS Database Migration Service helps you migrate your data easily and securely with minimal downtime.

Great! So where do we begin?

Working with AWS SCT
Usually, the first step in every migration is an assessment of feasibility and effort. You can use AWS SCT to generate a high-level overview of the effort required to convert your Oracle database to Aurora. The SCT runs on a number of operating systems. For the purposes of this blog, we’ll run the tool on Windows. To download a copy of the SCT, see the Installing and Updating the AWS Schema Conversion Tool documentation. To find overall documentation for SCT, start with What Is the AWS Schema Conversion Tool?

Although this blog post won’t cover the installation and configuration of the SCT, it’s important to point out that you need to install drivers for Oracle and MySQL to connect the SCT to your source and target databases. After connecting to your Oracle source database, you can right-click on any given schema and generate an assessment report. The assessment report tells you at a very high level how much of the schema can be converted automatically from Oracle to Aurora and the work that is left after conversion. You can find an example report following.

(Screenshot: example SCT assessment report)


Introducing AWS Schema Conversion Tool Version 1.0.502

by Eran Schitzer | in Schema Conversion Tool (SCT)

Eran Schitzer is a product manager in Amazon Web Services.

We are pleased to announce the recent release of version 1.0.502 of the AWS Schema Conversion Tool (AWS SCT). Two key capabilities included with this release are the use of Amazon Aurora with PostgreSQL compatibility as a target and integration with AWS Database Migration Service (AWS DMS).

Amazon Aurora with PostgreSQL compatibility as a target
Following the announcement of Amazon Aurora with PostgreSQL compatibility preview at re:Invent, you can now use the AWS SCT to convert your schema to use with a PostgreSQL-compatible Aurora DB cluster.

The PostgreSQL-compatible edition of Amazon Aurora delivers up to two to three times the performance of PostgreSQL, along with other PostgreSQL capabilities valuable to customers: user-defined functions and data types, complex SQL support, NoSQL and JSON support, and broad application development language support. At the same time, it takes advantage of the scalability, durability, and security capabilities of Amazon Aurora, all for one-tenth the cost of commercial-grade databases such as Oracle Database and Microsoft SQL Server. To sign up for the preview, see the preview webpage.

For more information about converting database schemas, see Converting Database Schema to Amazon RDS by Using the AWS Schema Conversion Tool.

Integration with DMS
After converting a schema with SCT, you can have the associated endpoints and tasks created automatically on a DMS instance. You can run and monitor tasks from either the DMS console or from SCT.

The following procedure shows how to create a DMS task from SCT. Once the schema conversion is complete, the DMS task created through the SCT will migrate the data to the target database engine.

To create and access DMS tasks from SCT, you need to define at least one service profile. You can define one or more service profiles to let SCT access your resources running on AWS, including DMS, Amazon RDS, and AWS Lambda.

  1. Open SCT, choose Settings, and then choose Global settings.
  2. On the AWS Service Profiles tab, type a name for Profile name, enter your AWS access key and AWS secret key, and choose the region where the profile will be created.
  3. Choose Save to save the profile.
  4. Choose Current project settings, and then choose the Project Environment tab. For AWS Service Profile, choose the default profile for the current project, and then choose OK. Once you have defined an AWS service profile, you can define, access, and execute DMS tasks from SCT.
  5. Choose the converted schema you’d like to migrate, open the context (right-click) menu for it, and choose Create DMS Task.
  6. In the Create DMS task window, enter the task details: task name, replication instance, and source and target endpoints (you can also create new endpoints). You can change the defaults for the rest of the parameters, if needed.
  7. Choose Create.
    At this point, a DMS task is being created in AWS, as shown in the DMS console.

    You can view, start, stop, resume, delete, refresh, and show logs of the new DMS task in the DMS tasks list in both SCT and DMS.


For more information about version 1.0.502, see the AWS Schema Conversion Tool User Guide. You can download SCT from here.

Cross-Engine Database Replication Using AWS Schema Conversion Tool and AWS Database Migration Service

by Prahlad Rao | in DMS, Migration, Schema Conversion Tool (SCT)

Prahlad Rao is a solutions architect at Amazon Web Services.

Customers often replicate databases hosted on-premises to the AWS Cloud as a means to migrate database workloads to AWS, or to enable continuous replication for database backup and disaster recovery. You can both migrate and perform continuous replication on the AWS Cloud with AWS Database Migration Service (AWS DMS).

Although moving from on-premises to the cloud using the same database engine is a common pattern, increasingly customers also use different engines between source and target when replicating. This pattern especially occurs where the source is a commercial database engine and the target is open source. Such an approach can save on licensing costs. In other use cases, the target database might be used as read-only for a reporting application that is compatible with a certain database engine, or the application team is comfortable using an open source database for their needs. Or the customer might be migrating from a commercial database to an open source database on the AWS Cloud for cost and flexibility purposes. This blog post will walk you through the process of replicating a Microsoft SQL Server database to PostgreSQL using AWS Schema Conversion Tool (AWS SCT) and AWS DMS.

AWS DMS helps you migrate databases to AWS with relative ease and security. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The service supports homogenous migrations (like to like databases) and also heterogeneous migrations (between database platforms such as Oracle to Amazon Aurora or SQL Server to MySQL). AWS DMS can also be used for continuous data replication with high availability.

AWS SCT makes heterogeneous database migrations easier by automatically converting source database schema. AWS SCT also converts the majority of custom code, including views and functions, to a format compatible with the target database.

Heterogeneous database replication is a two-step process:

  1. Use AWS SCT to convert the source database schema (from SQL Server) to a format compatible with the target database, in this case PostgreSQL.
  2. Replicate data between the source and target using AWS DMS.


Introducing AWS Schema Conversion Tool Version 1.0.500

by Eran Schitzer | in Schema Conversion Tool (SCT)

Eran Schitzer is a product manager in Amazon Web Services.

Today we are pleased to announce the release of version 1.0.500 of the AWS Schema Conversion Tool (SCT).

Since version 1.0.100 was released at re:Invent last year, the SCT has helped customers convert their schema and database objects from commercial engines to Amazon Relational Database Service (Amazon RDS) databases running open-source engines. Because customer obsession is one of the key leadership principles at Amazon, we’ve used feedback from our customers to add many new and exciting capabilities to SCT version 1.0.500.

Some of those capabilities include:

  • Copy your on-premises schema to an Amazon RDS engine: You can now use SCT to copy your existing on-premises database schema to an Amazon RDS database instance running the same engine. This feature can help you analyze potential cost savings of moving to the cloud and changing your license type.
    An SCT assessment report provides statistics on the conversion of schema from the source to the target database. A sample SCT assessment report looks like this:
    Figure 1: SCT assessment report showing conversion statistics for copying SQL Server to Amazon RDS for SQL Server
    For more information, see Creating and Using the Assessment Report.
  • Create mapping rules: Before you use SCT to convert your schema, you can now set up rules that move objects from one schema to another and change the names of objects. For more information, see Creating Mapping Rules in the AWS Schema Conversion Tool.
  • Export a JSON file to AWS DMS: After converting your schema to your preferred database engine, you can now generate a JSON file for use by AWS Database Migration Service (AWS DMS). The JSON file contains the list of objects to be migrated by AWS DMS and any transformations on those objects.
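As a rough illustration of what such a file contains, a DMS-style table-mapping document pairs selection rules with optional transformations. The schema and rule values below are invented for illustration and are not actual SCT output:

```python
import json

# Hypothetical example of a DMS table-mapping document: one selection
# rule that includes every table in an "HR" schema, plus one
# transformation rule that lowercases the migrated table names.
mapping = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-hr-schema",
            "object-locator": {"schema-name": "HR", "table-name": "%"},
            "rule-action": "include",
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "lowercase-table-names",
            "rule-target": "table",
            "object-locator": {"schema-name": "HR", "table-name": "%"},
            "rule-action": "convert-lowercase",
        },
    ]
}

print(json.dumps(mapping, indent=2))
```

SCT generates a document of this general shape for you, so you normally don't write it by hand; the sketch just shows what the exported objects and transformations look like on the wire.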

For more information about version 1.0.500, see the AWS Schema Conversion Tool User Guide. To download SCT, go to https://aws.amazon.com/dms/#sct.

Configuring the AWS Schema Conversion Tool

by Wendy Neu | in Schema Conversion Tool (SCT)

Wendy Neu is a big data architect at Amazon Web Services.

The AWS Schema Conversion Tool (AWS SCT) is a project-style Java application that can be installed on a local computer or an Amazon EC2 instance. The tool can be downloaded from AWS and is supported on Microsoft Windows, Mac OS X, and Ubuntu and Fedora Linux.

You can use the AWS Schema Conversion Tool to convert your existing database schema from one database engine to another. You can convert relational OLTP schema, or data warehouse schema. Your converted schema is suitable for an Amazon RDS MySQL DB instance, an Amazon Aurora DB cluster, an Amazon RDS PostgreSQL database instance, or an Amazon Redshift cluster.

SCT is simple to use and takes only minutes to get started after you install the tool locally on your machine. It requires only connectivity to the source and target databases. SCT uses JDBC drivers to connect to the source and target databases. When you set up your first DB connection to a new source, the wizard interface will prompt you for a driver. All supported drivers are listed by database engine in the Installing and Updating section of the documentation.

When both the source and target database connections are configured, simply right-click on the schema you want to analyze and choose Create Report.


SCT will examine in detail all of the objects in that schema, including the schema itself, tables, constraints, types, sequences, synonyms, collection types, views, procedures, functions, and packages. It will convert as much as possible automatically and provide detailed information about items it couldn’t convert. The report can be saved as a .csv file or a .pdf file for review.

Navigate to the Action Items tab in the report to see the items that the tool couldn’t convert and understand how much work you have to do.


When you’ve looked at the report, choose Actions, Convert schema from the menu bar to generate the data definition language (DDL) statements for the target database.


DDL for the target database is displayed in the project console, and you can edit it before applying it with SCT or simply saving it as an .sql file for application later.


In development, it’s a good idea to apply the changes directly from within SCT to a sandbox version of your database to understand what objects will be automatically converted, how many objects are involved, and what level of development effort is ahead. Beyond the sandbox and early development phases, it’s best practice to adhere to normal software lifecycle procedures within your organization. Scripted conversions will facilitate key processes like code reviews, automated builds, and automated deployments.


Working with Global Settings
The AWS Schema Conversion Tool is easy to use and lets a DBA or developer move quickly to analysis and deployment. However, sometimes the default settings need to be modified. For example, such a change might be necessary if the wrong driver was entered through the wizard for a database engine, if a single schema includes thousands of objects and places the Java virtual machine (JVM) under memory pressure, or if verbose logging must be enabled. The AWS Schema Conversion Tool exposes fine control over all these key areas through the Global settings dialog box. The sections following offer suggestions on how to use these settings.

All sections following are accessed through the SCT console by choosing Settings and then Global Settings from the top menu bar.


Driver settings
You must have the required JDBC drivers installed for the AWS Schema Conversion Tool to connect to your source or target database. After you download a driver, you can add it to the tool through the wizard or through the Global settings dialog box. The advantage of adding drivers through the Global settings dialog box is that you can enter or update all of them at one time.

Choose Drivers from the side navigation panel in the Global settings dialog box, and set the target database driver by choosing Browse, navigating to the local machine, and choosing the .jar file to associate with SCT for a particular database engine.


All projects within SCT for that database engine will use the selected driver.

Performance and memory settings

SCT has menu options that control how aggressively the tool will consume JVM memory:

  • Fast conversion, but large memory consumption – This option optimizes for a fast conversion but might require more memory for the object reference cache.
  • Slower conversion, lower memory consumption – This option minimizes the amount of memory used but results in a slower conversion. Use this option if your computer has a limited amount of memory.
  • Balance speed with memory consumption – This option provides a balance between memory use and conversion speed.

To set these options, choose Performance and Memory from the side navigation panel in the Global settings dialog box, and change Performance and Memory Management Policy to the desired setting.


JVM size settings
The default amount of memory allocated to SCT is just less than a gigabyte. This memory size is usually sufficient for small databases of less than 1000 objects. For many databases, it is necessary to reconfigure the SCT JVM to use more memory even when the JVM is configured in the low memory consumption configuration.

The screenshot that follows shows an SCT JVM under duress, with the allocated tasks hitting the maximum boundary of JVM memory. The setting was Slower conversion, lower memory consumption, and the conversion failed because the JVM memory setting was too small.

(Screenshot: JVM memory status with tasks at the memory ceiling)

You can increase (or decrease) the maximum and minimum memory allocated to the SCT JVM through configuration files.

Configuration files for SCT are located in the AWS Schema Conversion Tool installation directory under a subdirectory called app. The name of the main configuration file is AWS Schema Conversion Tool.cfg. You can modify the file using any text editor—however, don’t use a word processing tool (like Word or WordPad) because it might change certain characters in the file and prevent the file from being read correctly by the JVM.

The image following shows an AWS Schema Conversion Tool installation directory under C:\Program Files as an example.

(Screenshot: SCT installation directory under C:\Program Files)

The default file that is installed with SCT has a few parameters for the tool itself and a single JVM option. None of the existing parameters in the file should be changed without consulting AWS Support.


You can modify the memory settings of the JVM by adding two parameters under the [JVMUserOptions] section. However, use caution when configuring the JVM size. If your maximum size exceeds your system’s available memory, it might crash your machine.

The first of the two JVM memory parameters is the maximum JVM memory size, Xmx, which is specified in megabytes using standard recognized memory values. To set a maximum JVM size of 4 GB, set this value to -Xmx4096m, with no spaces, on a single line.

The second parameter is the minimum JVM memory size, Xms, which is specified in megabytes using standard recognized memory values. To set a minimum JVM size of 2 GB, set this value to -Xms2048m, with no spaces, on a single line.
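Taken together, the [JVMUserOptions] section of AWS Schema Conversion Tool.cfg would look something like the following. The 4 GB maximum and 2 GB minimum are just the example values from above; size them to your own machine's available memory.

```ini
[JVMUserOptions]
-Xmx4096m
-Xms2048m
```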


When you’re done editing, save the file and restart AWS SCT. You should see the values in the status bar change to reflect your new settings.

The screenshot that follows shows an SCT JVM running smoothly because the tasks allocated are operating comfortably within the maximum boundary of the JVM. The setting was Slower conversion, lower memory consumption, and the tool succeeded in converting nearly 5000 objects in about 10 minutes.

(Screenshot: JVM memory status with tasks comfortably within the memory limit)

Logging
In addition to managing the memory settings of the AWS Schema Conversion Tool, it can be a good idea to increase the logging information produced by the tool when converting new projects. Although increasing logging information might slow conversion slightly, this change will give robust information to AWS Support if any unforeseen errors arise.

To change the logging settings, choose Settings and then Global Settings, as described in the sections above. In the left navigation pane, choose Logging.

In the logging pane, change Debug mode to true and choose a few key items to increase the logging information for. Setting Parser, Type Mapping, and User Interface to TRACE can help diagnose key problem areas during conversion.


You can also configure the log location. If the information becomes too verbose for the current file system where the logs are streaming, change the value to a location with sufficient space to capture the logs.


To transmit logs to AWS Support, navigate to the directory where the logs are stored, zip up all the files into a single manageable file, and upload the .zip file with your support case.

When the initial analysis is complete and ongoing development is taking place, simply return Debug mode to false to eliminate the verbose logging and increase speed.

Tip: To manage the log size and streamline reporting issues, remove the logs or move them to another location after a successful conversion. Doing this ensures that only the relevant errors and information are transmitted to AWS Support and keeps the log file system from filling.

All references in this document refer to version 1 of the AWS SCT and were tested successfully on build 401.