Category: Amazon Aurora


Attend Technical Sessions at the Global Partner Summit on Nov. 29th

by Kate Miller | in Amazon Aurora, Big Data, Big Data Competency, Containers, re:Invent 2016, SaaS on AWS, Security

APN Partners, are you joining us at the AWS Global Partner Summit at re:Invent? If you’ve not yet registered to attend the Summit, it’s not too late! Log into the re:Invent portal, and click on “Purchase Registration Items” to add Global Partner Summit to your registration. Adding the Global Partner Summit is free of charge.

This year's Summit features a number of technical sessions tailored to topics of interest for APN Partners. Below, we highlight sessions you can still register to attend. Check them out, then log into the portal and sign up for the ones you'd like to join.

AWS Global Partner Summit – Technical Sessions

 

Title: Tips for Building Successful Solutions with AWS Marketplace and AWS Quick Start

Session ID:  GPSISV1

What You’ll Learn: 

Build it once, deploy it often. AWS Marketplace combined with AWS Quick Start can accelerate revenue, drive adoption, and enable your technical teams to focus on the customer instead of the basic infrastructure. In this session, we first dive deep into how to evaluate whether your product is ready for AWS Marketplace and what it takes to make it successful. We cover security, usage models, documentation, installation, configuration, and more. We answer common questions about the structure of your AMI: whether it contains the code required to launch your product, whether it is minimally privileged, whether you need to use AWS CloudFormation, how you get paid, and how to meter usage. We then show how to use AWS Quick Start to onboard new customers. You can ensure that customer deployments reflect best practices by using your AWS Marketplace solution to build and publish world-class cloud reference architectures.

 

Title:  Tips for Passing APN Technical Validations

Session ID:  GPSISV2

What You’ll Learn:

Becoming an Advanced Technology Partner or being included in an AWS Competency program is a key achievement for AWS partners and distinguishes them from their competitors in the market. Inclusion in these programs demonstrates a partner’s expertise across industry segments and technical domains, plus delivery of a solid product to their customers. The technical bar for becoming an APN Advanced Partner or inclusion in an AWS Competency is high; partners must demonstrate both competency-relevant success and alignment with all four pillars of the AWS Well-Architected Framework. Join AWS Partner Solutions Architects as they outline what to expect from the process, describe how best to prepare for the conversation, and offer tips, tricks, and hints on how to get the most from the technical assessment process.

 

Title: Dollars and Sense: Technical Tips for Continual Cost Optimization

Session ID: GPSISV3

What You’ll Learn:

In this session, we explore techniques, tools, and partner solutions that provide a framework for monitoring, analyzing, and automating cost savings. We look at several case studies and real-world examples where our customers have realized significant savings. Specific topics include: migration cost management; cost-effective hybrid architectures; saving money with microservices; serverless computing with AWS Lambda and Amazon EC2; using fungible components to drive down costs over time; cost vs. performance vs. value; AWS purchasing strategies (On-Demand, Reserved Instances, and the Spot Market); and tools and services, from both AWS (AWS Trusted Advisor, Amazon CloudWatch, etc.) and our partners, that can help with cost optimization. Finally, we roll all of these into an automated process for continuous optimization.

 

Title: Hybrid Architecture Design: Connecting Your On-Premises Workloads to the Cloud

Session ID:  GPSISV4

What You’ll Learn:

You’re trying to minimize your time to deploy applications, reduce capital expenditure, and take advantage of the economies of scale made possible by using Amazon Web Services; however, you have existing on-premises applications that are not quite ready for complete migration. Hybrid architecture design can help! In this session, we discuss the fundamentals that any architect needs to consider when building a hybrid design from the ground up. Attendees get exposure to Amazon VPC, VPNs, AWS Direct Connect, on-premises routing and connectivity, application discovery and definition, and how to tie all of these components together into a successful hybrid architecture.

 

Title:  Managing and Supporting the Windows Platform on AWS

Session ID:  GPSSI401

What You’ll Learn:

Windows workloads are often the backbone of the data center, and AWS Consulting Partners are responsible for the design, deployment, maintenance, and operation of these infrastructures. Deploying and operating a common set of management tooling is challenging and becomes even harder as you try to onboard new customers at scale. In this session, we discuss patterns for deploying a common shared infrastructure to host your management and backend assets. We dive deep into the various components of the Windows toolkit: the core VPC, Active Directory, management tools, and finally a development pipeline. You walk away knowing how to design and deliver a common toolset that scales out instantly to any new customer workload.

 

Title:  Technical Tips for Helping SAP Customers Succeed on AWS

Session ID:  GPSSI402

What You’ll Learn:

In this session, AWS partners, both with and without SAP-focused practices, learn how to develop and design services and solutions to help SAP customers migrate to and run on the AWS Cloud. We discuss the different types of services SAP customers require and how to identify and qualify SAP on AWS opportunities. Based on actual SAP customer projects, we discuss which patterns work, where the potential pitfalls are, and how to ensure a successful SAP on AWS customer project.

 

Title:  Get Technically Inspired by Container-Powered Migrations

Session ID:  GPSSI403

What You’ll Learn:

This session is a technical journey through application migration and refactoring using containerized technologies. Flux7 recently worked with Rent-A-Center to perform a Hybris migration from their data center to AWS; hear how they used Amazon ECS, the new Application Load Balancer, and Auto Scaling to meet the customer’s business objectives.

 

Title:  The Secret to SaaS (Hint: It’s Identity)

Session ID:  GPSSI404

What You’ll Learn:

Identity is a fundamental element of any SaaS environment. It must be woven into the fabric of your SaaS architecture and design, enabling you to authorize and scope access to your multi-tenant services, infrastructure, and data effectively. In this session, we pair with AWS partner Okta to examine how tenant identity is introduced into SaaS applications without undermining flexibility or developer productivity. The goal here is to highlight strategies that encapsulate tenant awareness and leverage the scale, security, and innovation enabled by AWS and its ecosystem of identity solutions. We dig into all the moving parts of the SaaS identity equation, showcasing the best practices and common considerations that will shape your approach to SaaS identity management.

Read all of our SaaS-related APN Blog posts here

 

Title: Blockchain on AWS: Disrupting the Norm

Session ID: GPST301

What You’ll Learn:

Recent interest in leveraging distributed ledgers across multiple industries has elevated blockchain from mere theory into the spotlight of real-world use. Learn why some APN Partners have a vested interest in it, and how blockchain can be used with AWS. In this session, we explore the AWS services needed for a successful deployment and dive deep into a Partner’s blockchain journey on AWS.

 

Title: IoT: Build, Test, and Securely Scale

Session ID: GPST302

What You’ll Learn:

With the rapid adoption of IoT services on AWS, how do partners and organizations effectively build, test, scale, and secure these highly transactional, data-laden systems? This session is a deep dive into the API, SDK, device gateway, rules engine, and device shadows. Consulting and Technology Partner customers share their experiences as we highlight lessons learned and best practices.

 

Title: AWS Partners and Data Privacy

Session ID: GPST303

What You’ll Learn:

In this session, we share best practices and easily leveraged solutions for enacting autonomous systems in the face of subversion. From gag orders to warrantless searches and seizures, learn about specific tactics for protecting and exercising data privacy, both for partners and customers.

 

Title: Extending Hadoop and Spark to the Cloud

Session ID: GPST304

What You’ll Learn:

In this session, learn how to seamlessly transition or extend Hadoop and Spark to the cloud without disruption, and how customers are taking advantage of AWS services without major architectural changes or downtime by using AWS Big Data Technology Partner solutions. We focus on patterns for data migration from Hadoop clusters to Amazon S3 and on automated deployment of partner solutions for big data workloads.

 

Title: Amazon Aurora Deep Dive

Session ID: GPST401

What You’ll Learn:

Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is a disruptive technology in the database space, bringing a new architectural model and distributed systems techniques to provide far higher performance, availability, and durability than was previously available using conventional monolithic database techniques. In this session, we dive deep into some of the key innovations behind Amazon Aurora, discuss best practices and migration from other databases to Amazon Aurora, and share early customer experiences from the field.

 

Title: Advanced Techniques for Managing Sensitive Data in the Cloud

Session ID: GPST403

What You’ll Learn:

In this session, we discuss compliance programs at AWS, as well as key AWS security best practices for technology and consulting partners. Regardless of whether you have customers with stringent compliance requirements, security should be a top priority when thinking about your customer service model. AWS provides native security tools at all layers with services such as AWS Identity and Access Management (IAM) and AWS Key Management Service (AWS KMS), which we dive deep into during this session. We provide a framework for using IAM roles and customer-managed encryption keys to securely interact with your customer’s data and also showcase working example code that can be implemented across all compliance frameworks, as well as across applications that do not have specific compliance requirements.

This session will introduce the concept of ‘DevSecOps’ and demonstrate how to build a serverless self-defending environment using KMS, Lambda, and CloudWatch Events. We will also discuss multi-region key management strategies for protecting customer data at scale.

 

Title: Building Complex Serverless Applications

Session ID: GPST404

What You’ll Learn:

Provisioning, scaling, and managing physical or virtual servers—and the applications that run on them—has long been a core activity for developers and system administrators. The expanding array of managed AWS Cloud services, including AWS Lambda, Amazon DynamoDB, Amazon API Gateway and more, increasingly allows organizations to focus on delivering business value without worrying about managing the underlying infrastructure or paying for idle servers and other fixed costs of cloud services. In this session, we discuss the design, development, and operation of these next-generation solutions on AWS. Whether you’re developing end-user web applications or back-end data processing systems, join us in this session to learn more about building your applications without servers.

This session will cover complex serverless design patterns, such as microservices, and use cases such as event stream processing. We will also share tips and tricks for Lambda and API Gateway, as well as strategies for securing your serverless applications.

 

APN Partner Webinar Series – AWS Database Services

by Kate Miller | in Amazon Aurora, Amazon DynamoDB, Amazon Redshift, APN Webcast, Database

Want to dive deep and learn more about AWS Database offerings? This webinar series provides an exclusive deep dive into Amazon Aurora, Amazon Redshift, and Amazon DynamoDB. These webinars feature technical sessions led by AWS solutions architects and engineers, live demonstrations, customer examples, and Q&A with AWS experts.

Check out these upcoming webinars and register to attend!

Amazon Aurora Architecture Overview

September 26, 2016 | 11:30am-12:30pm PDT

This webinar provides a deep architecture overview of Amazon Aurora. Partners attending this webinar will learn how Amazon Aurora differs from other relational database engines, with special focus on features such as high availability (HA) and up to five times the performance of MySQL.

Register Here >>

Understanding the Aurora Storage Layer

October 3, 2016 | 11:30am-12:30pm PDT

This webinar will dive deep into the Amazon Aurora Storage Layer. Attendees will receive a technical overview of performance and availability features as well as insights into future enhancements.

Register Here >>

Amazon Aurora Migration Best Practices

October 10, 2016 | 11:30am-12:30pm PDT

This webinar will cover best practices for migrating from Oracle to Amazon Aurora. Partners attending this webinar will learn about common migration opportunities, challenges, and how to address them.

Register Here >>

Selecting an AWS Database

October 17, 2016 | 11:30am-12:30pm PDT

Amazon Aurora, Amazon Redshift, and Amazon DynamoDB are managed AWS database offerings well-suited for a variety of use cases. In this webinar, partners will learn best practices for selecting a database and how each offering fits into the broader AWS portfolio of database services.

Register Here >>

Amazon RDS PostgreSQL Deep Dive

October 24, 2016 | 11:30am-12:30pm PDT

Amazon RDS makes it easy to set up, operate, and scale PostgreSQL deployments in the cloud. Amazon RDS manages time-consuming administrative tasks such as PostgreSQL software upgrades, storage management, replication, and backups. This webinar will dive deep into the technical and business benefits of RDS PostgreSQL, including best practices for migrating from SQL Server and Oracle.

Register Here >>

We’ll be hosting more educational webinars for APN Partners through the end of the year. Stay tuned to the APN Blog for more information!

Key Metrics for Amazon Aurora

by Kate Miller | in Amazon Aurora, Database, Partner Guest Post

This is a guest post by John Matson of Datadog. An expanded version of this post is available on the Datadog blog. Datadog is an Advanced APN Technology Partner, and is a Certified AWS MSP Technology Partner.

Amazon Aurora is a MySQL-compatible database offered on Amazon RDS (Relational Database Service). In addition to a number of performance benefits, Aurora provides valuable metrics that are not available for other RDS database engines.

In this article we’ll highlight a few key metrics that can give you a detailed view of your database’s performance.

There are three ways to access metrics from Aurora: you can collect standard RDS metrics through Amazon CloudWatch, detailed system-level metrics via enhanced RDS monitoring, and numerous MySQL-specific metrics from the database engine. Standard RDS metrics are reported at one-minute intervals; the other metrics can be collected at higher time resolution. The nuts-and-bolts section of this post discusses how to collect all these metrics.

Selected query metrics

 

Metric description                   | CloudWatch name               | MySQL name
Queries                              | Queries (per second)          | Queries (count)
Reads                                | SelectThroughput (per second) | Com_select + Qcache_hits (count)
Writes                               | DMLThroughput (per second)    | Com_insert + Com_update + Com_delete (count)
Read query latency, in milliseconds  | SelectLatency                 | (n/a)
Write query latency, in milliseconds | DMLLatency                    | (n/a)

The first priority in monitoring is making sure that work is being done as expected. In the case of a database, that means monitoring how queries are being executed.

You can monitor total query throughput as well as the read/write breakdown by collecting metrics directly from CloudWatch or by summing native MySQL metrics from the database engine. In MySQL, reads increment one of two status variables (Com_select or Qcache_hits), depending on whether or not the read is served from the query cache. A write increments one of three status variables depending on whether it is an INSERT, UPDATE, or DELETE.
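
For example, you can read the underlying counters straight from the engine with a query like the following. Note that these are cumulative counters, so to derive per-second rates you would sample them twice and divide the difference by the interval:

mysql> SHOW GLOBAL STATUS WHERE Variable_name IN
    -> ('Com_select', 'Qcache_hits', 'Com_insert', 'Com_update', 'Com_delete');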

In CloudWatch, all reads and writes are rolled into SelectThroughput and DMLThroughput, respectively, and their latencies are reported in the valuable Aurora-only metrics SelectLatency and DMLLatency.

For a deeper look into query performance, the performance schema stores lower-level statistics from the database server. More about the performance schema below.

Selected resource metrics

Metric description                   | CloudWatch name                  | Enhanced monitoring name | MySQL name
Read I/O operations per second       | ReadIOPS                         | diskIO.readIOsPS         | (n/a)
Write I/O operations per second      | WriteIOPS                        | diskIO.writeIOsPS        | (n/a)
Percent CPU utilized                 | CPUUtilization                   | cpuUtilization.total     | (n/a)
Available RAM in gigabytes           | FreeableMemory                   | memory.free              | (n/a)
Network traffic to Aurora instance   | NetworkReceiveThroughput (MB/s)  | network.rx (packets)     | (n/a)
Network traffic from Aurora instance | NetworkTransmitThroughput (MB/s) | network.tx (packets)     | (n/a)
Open database connections            | DatabaseConnections              | (n/a)                    | Threads_connected
Failed connection attempts           | LoginFailures (per second)       | (n/a)                    | Aborted_connects (count)

As Baron Schwartz, co-author of High Performance MySQL, notes, a database needs four fundamental resources: CPU, memory, disk, and network. Metrics on all four fundamental resources are available via CloudWatch.

RDS now also offers enhanced monitoring that exposes detailed system-level metrics. With additional configuration, users can monitor load, disk I/O, processes, and more with very high time resolution.

Disk I/O metrics

The CloudWatch metrics ReadIOPS and WriteIOPS track how much your database is interacting with backing storage. If your storage volumes cannot keep pace with the volume of requests, you will start to see I/O operations queuing up, as reflected in the DiskQueueDepth metric.

CPU metrics

High CPU utilization is not necessarily a bad sign. But if your database is performing poorly while metrics for IOPS and network are in normal ranges, and while the instance appears to have sufficient memory, the CPUs of your chosen instance type may be the bottleneck.

Memory metrics

Databases perform best when most of the working set of data can be held in memory. For this reason, you should monitor FreeableMemory to ensure that your database instance is not memory-constrained.
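
As a complementary check from inside the database, you can see how much memory the engine has reserved for its main cache, the InnoDB buffer pool (a quick sanity check; the actual value is set via the instance's parameter group):

mysql> SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb;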

Network metrics

For Aurora, the NetworkReceiveThroughput and NetworkTransmitThroughput metrics track only network traffic to and from clients, not traffic between the database instances and storage volumes.

Connection metrics

Aurora has a configurable connection limit, which can be checked or modified by navigating to the RDS console and selecting the parameter group that your RDS instance belongs to.

If your server reaches its connection limit and starts to refuse connections, it will increment the CloudWatch metric LoginFailures, as well as the similar MySQL metric Aborted_connects and the more specific MySQL Connection_errors_max_connections counter.
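
You can also read the limit and the related counters directly from the engine using standard MySQL variables, which Aurora's MySQL-compatible engine exposes:

mysql> SELECT @@max_connections;
mysql> SHOW GLOBAL STATUS LIKE 'Aborted_connects';
mysql> SHOW GLOBAL STATUS LIKE 'Connection_errors_max_connections';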

Collecting Aurora metrics

As mentioned at the outset, Aurora users can access metrics from Amazon CloudWatch and many more from the MySQL-compatible database engine. Below we’ll show you how to collect both CloudWatch and engine metrics for a comprehensive view. To collect and correlate all your metrics, you can use a monitoring tool that integrates both with CloudWatch and with the database instance itself. The final part of this post details how to monitor Aurora with Datadog, which will also allow you to monitor the new suite of RDS enhanced metrics. To monitor enhanced metrics on another platform, consult the AWS documentation.

Collecting CloudWatch metrics

Below we’ll walk through two ways of retrieving metrics from CloudWatch:

  • Using the AWS Management Console
  • Using the command line interface

Using the AWS Console

The AWS Console allows you to view recent metrics and set up simple alerts on metric thresholds. In the CloudWatch console, select RDS from the list of services and click on “Per-Database Metrics” to see your available metrics.

Just select the checkbox next to the metrics you want to visualize, and they will appear in the graph at the bottom of the console.

Using the command line interface

To query RDS metrics from the command line, you need to install the CloudWatch command line tool. You can then view your metrics with simple queries like this one to check the SelectLatency metric across a one-hour window:

mon-get-stats SelectLatency \
    --namespace="AWS/RDS" \
    --dimensions="DBInstanceIdentifier=instance-name" \
    --statistics Maximum \
    --start-time 2016-02-18T17:00:00 \
    --end-time 2016-02-18T18:00:00

Full documentation for the mon-get-stats command is available here.

Collecting database engine metrics

To get a deeper look at Aurora performance you will often need metrics from the database instance itself. Here we cover three methods of metric collection:

  • Querying server status variables
  • Querying the performance schema and sys schema
  • Using the MySQL Workbench GUI

Connecting to your RDS instance

The design of RDS means that you cannot directly access the machines running your database, as you could if you manually installed MySQL or MariaDB on a standalone server. That said, you can connect to the database using standard tools, provided that the security group for your Aurora instance allows it.

If Aurora accepts traffic only from inside its security group, you can launch an EC2 instance in that security group, and then apply a second security group rule to the EC2 instance to accept inbound SSH traffic. By SSHing to the EC2 instance, you can then connect to Aurora using the mysql command line tool:

mysql -h instance-name.xxxxxx.us-east-1.rds.amazonaws.com -P 3306 -u yourusername -p

Your instance’s endpoint (ending in rds.amazonaws.com) can be found in the RDS console.

Querying server status variables

Once you connect to your database instance, you can query any of the hundreds of metrics available, known as server status variables. To check metrics on connection errors, for instance:

mysql> SHOW GLOBAL STATUS LIKE '%Connection_errors%';

Querying the performance schema and sys schema

Server status variables largely capture high-level server activity. To collect metrics at the query level—for instance, to link latency or error metrics to individual queries—you can use the performance schema, which captures detailed statistics on server events.

Enabling the performance schema

Set the performance_schema parameter to 1 in the Aurora instance’s parameter group using the AWS console. This change requires an instance reboot.
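
After the reboot, you can confirm that the change took effect from the mysql client:

mysql> SHOW VARIABLES LIKE 'performance_schema';

The Value column should read ON.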

Once the performance schema is enabled, server metrics will be stored in tables in the performance_schema database, which can be queried with ordinary SELECT statements.
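
For example, a query like the following (one way to surface expensive statement patterns; performance schema timer columns are reported in picoseconds, hence the division) returns the five statement digests with the highest average latency:

mysql> SELECT digest_text, count_star, avg_timer_wait / 1e12 AS avg_latency_sec
    -> FROM performance_schema.events_statements_summary_by_digest
    -> ORDER BY avg_timer_wait DESC LIMIT 5;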

Using the sys schema

Though you can query the performance schema directly, it is usually easier to extract meaningful metrics from the tables in the sys schema.

To install the sys schema, first clone the GitHub repo to a machine that can connect to your Aurora instance and position yourself within the newly created directory:

$ git clone https://github.com/mysql/mysql-sys
$ cd mysql-sys

Then, create an Aurora-compatible file for the sys schema:

$ ./generate_sql_file.sh -v 56 -b -u CURRENT_USER

Finally, load the file into Aurora, using the filename returned in the step above:

$ mysql -h instance-name.xxxxxx.us-east-1.rds.amazonaws.com -P 3306 -u yourusername -p < gen/sys_1.5.0_56_inline.sql

Now you can connect to Aurora using the mysql command line tool to access the sys schema’s many tables and functions. For instance, to summarize all the statements executed, along with their associated latencies, you would run:

mysql> SELECT * FROM sys.user_summary_by_statement_type;
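
Other sys schema views work the same way. For instance, sys.statement_analysis presents normalized statements with latency, lock, and row statistics, ordered by total latency:

mysql> SELECT * FROM sys.statement_analysis LIMIT 5;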

Using the MySQL Workbench GUI

MySQL Workbench is a free application for managing and monitoring MySQL databases. It provides a high-level performance dashboard, as well as a simple interface for browsing performance metrics (using the views provided by the sys schema).

If you have configured an EC2 instance to communicate with Aurora, you can connect MySQL Workbench to your Aurora instance via SSH tunneling.

You can then view recent metrics on the performance dashboard or click through the statistics available from the sys schema.

Monitor Aurora Using Datadog

You’ve now seen that you can easily collect metrics from CloudWatch and from the database engine itself for ad hoc performance checks. For a more comprehensive view of your database’s health and performance, however, you need a monitoring system that can correlate CloudWatch metrics with database engine metrics, that lets you see historical trends with full granularity, and that provides flexible visualization and alerting functionality. This post will show you how to connect Aurora to Datadog in two steps:

  • Connect Datadog to CloudWatch
  • Integrate Datadog with Aurora’s database engine

You can also use Datadog to collect, graph, and alert on the new enhanced monitoring metrics that are available for RDS. Full instructions are available in this post.

Connect Datadog to CloudWatch

To start monitoring metrics from RDS, you just need to configure our CloudWatch integration. Create a new user via the IAM console in AWS and grant that user read-only permissions to these three services, at a minimum:

  1. EC2
  2. CloudWatch
  3. RDS

You can attach managed policies for each service by clicking on the name of your user in the IAM console and selecting “Permissions”.

Once these settings are configured within AWS, create access keys for your read-only user and enter those credentials in the Datadog app.

Integrate Datadog with Aurora’s database engine

To access all the metrics available for Aurora, you can monitor the database instance itself in addition to collecting standard metrics from CloudWatch.

Installing the Datadog Agent on EC2

Datadog’s Agent integrates seamlessly with MySQL and compatible databases to gather and report key performance metrics. Where the same metrics are available through the Datadog Agent and through standard CloudWatch metrics, the higher-resolution Agent metrics should be preferred. Installing the Agent usually requires just a single command.

Because you cannot install anything on an RDS database instance, you must run the Agent on another machine, such as an EC2 instance in the same security group.

Configuring the Agent for RDS

Complete instructions for capturing Aurora metrics with the Agent are available here. Experienced Datadog users will note that monitoring Aurora is just like monitoring MySQL locally, with two small configuration exceptions:

  1. Provide the Aurora instance endpoint as the server name (e.g., instance_name.xxxxxxx.us-east-1.rds.amazonaws.com) instead of localhost
  2. Tag your Aurora metrics with the DB instance identifier (dbinstanceidentifier:instance_name) to separate database metrics from the host-level metrics of your EC2 instance

Unifying your metrics

Once you set up the Agent, all the metrics from your database instance will be uniformly tagged with dbinstanceidentifier:instance_name for easy retrieval, whether those metrics come from CloudWatch or from the database engine itself.

View your Aurora dashboard

Once you have integrated Datadog with RDS, a comprehensive dashboard called “Amazon – RDS (Aurora)” will appear in your list of integration dashboards. The dashboard gathers the key metrics highlighted at the start of this post and more.

You can filter your RDS metrics by selecting a particular dbinstanceidentifier in the upper left of the dashboard.

Enhanced monitoring dashboard

If you have set up enhanced monitoring for Aurora, you can also access a specialized RDS Enhanced Metrics dashboard in Datadog.

Monitor all the things

Monitoring Amazon Aurora gives you critical visibility into your database’s health and performance. Plus, Datadog integrates with 100+ infrastructure technologies, so you can correlate Aurora performance with metrics and events from the rest of your stack. If you don’t yet have a Datadog account, you can sign up for a free trial here.


The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.

Amazon Aurora Digest – APN Partner Highlights

by Kate Miller | in Amazon Aurora, APN Partner Highlight, Database

Editor’s Note: Each month, we plan to publish an Amazon Aurora digest highlighting pieces from our APN Partners that profile Amazon Aurora.

Have you used Amazon Aurora? Have you helped your customers migrate to Aurora?

More and more, our APN Partners are telling us about the work that they’re doing with Aurora. We want to share with you a note from our Amazon Aurora team, along with links to a number of pieces of informational content developed by a few APN Partners highlighting Amazon Aurora best practices.

Amazon Aurora is a MySQL-compatible relational database engine that combines the speed, availability and security of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Our customers run mission-critical database workloads across multiple industries and our partners have been an important driver of customer success. We value the expertise that our partners bring to the table and support them through tools and training as customers continue to migrate increasingly complex workloads to Aurora. As we continue to help organizations move to Aurora and AWS, we look forward to strengthening and expanding our relationship with APN Partners.

– The Amazon Aurora team

Case Studies


 

CorpInfo, a Premier APN Consulting Partner: AWS Case Study – GoGuardian

AppsAssociates, a Premier APN Consulting Partner: PetTrax – Migrating to Amazon Aurora

Webinars & Presentations


 

AppsAssociates recorded joint webinar: Deploying High Performance Databases in the Cloud

Alfresco, an Advanced APN Technology Partner: Scaling Massive Content Stores with Amazon Aurora

Whitepapers/eBooks


 

AppsAssociates ebook: Three Reasons to Migrate your Database to the Cloud

AppsAssociates whitepaper: Amazon Aurora, A Fast, Affordable and Powerful RDBMS

Blog Posts


 

Alfresco: How Alfresco powered a 1.2 Billion document deployment on Amazon Web Services

BluePi, a Standard APN Consulting Partner: Amazon Aurora – Superior Cloud Database

 

Do you have links to content you’d like to share with us detailing how your firm has used Amazon Aurora to provide additional value to customers? Let us know! Email us: apn-blog@amazon.com