AWS Partner Network (APN) Blog

Testing AWS GameDay with the AWS Well-Architected Framework – Remediation

 

By Ian Scofield, Juan Villa, and Mike Ruiz

This is the second post in our series documenting our project to fix issues with the GameDay architecture by using tenets of the AWS Well-Architected Framework. See Part 1 of the post for an overview of the process, a description of the initial review, and a list of the critical findings identified by the review team.

In this post, we’ll cover the steps we took to fix the critical findings identified by the review team. In future posts, we will share our plans to refine our architecture through continuous improvement and collaboration with AWS solutions architects.

 

Findings

As noted in the last post, the review team delivered a list of critical findings that we needed to prioritize and fix immediately, as well as a list of non-critical ideas that we should consider addressing in our roadmap for the GameDay architecture.

Here were the critical items:

  • The legacy administrative scripts for GameDay use AWS access keys and secret access keys that are stored in plain text in an Amazon DynamoDB table.
  • The load generator is a single instance in a single Availability Zone without any recovery options configured.
  • The disaster recovery (DR) plan is not clearly defined, and the recovery point objective (RPO) and recovery time objective (RTO) are not set. In addition, the DR plan isn’t periodically tested against those objectives.

The review team noted that they would welcome a chance to review plans for remediation before implementation, so we set to work analyzing the deficiencies, documenting our plans, and making a rough estimate of the level of effort required to implement the fixes before making any changes.

Here is the high-level remediation plan we came up with to address the critical findings, in order of priority:

  • Implement the use of cross-account roles to eliminate the usage of access and secret keys. 
    A quick review of the code suggested that we should allocate one day to develop and test cross-account access, and another day to update the instructions and train staff on the new feature. Since this fix seemed relatively simple and provided both security and operational benefits, we decided to set it as the highest priority to be integrated into the new design.

 

  • For the load generator, move from a single instance to a container model that would allow for a clustered deployment. 
    This change was a little more complicated than the access key fix. We needed to modify our application to store state in DynamoDB instead of writing locally, and we needed to package our various applications and binaries into Docker containers. We planned to create an Amazon EC2 Container Service (Amazon ECS) task definition and service for each of these components, which would take care of scheduling and task placement for us. Switching to DynamoDB and containers would enable us to move our hard-coded configuration to an Auto Scaling group launch configuration, replace all hardcoded values with variables set at launch, and use AWS CloudFormation templates as the deployment mechanism. These changes would net considerable improvements with little impact on the overall flow of the game, although we did have to update the load management tool and game setup scripts to use DynamoDB and the Auto Scaling group rather than a static configuration file. We allocated two weeks to develop and test these new features, and one week to update our documentation and train staff on the new operation.

 

  • Create a disaster recovery plan and validate it. 
    Automating our infrastructure deployment with Amazon ECS and Auto Scaling groups simplified our disaster recovery plan, but it wasn’t a complete solution. There were still gaps in our recovery process. Furthermore, having a solution but never testing it, or at least walking through various scenarios, would leave us vulnerable to process gaps when the time came to put the plan into action. We allocated an additional week to create the plan, and an additional day to verify that all contingencies were covered and rehearsed.

 

We shared our plans with the review team before starting this work, to ensure that we were on the right track to meet all the requirements.  The review team gave us the thumbs up, and we began putting our plan into action.

 

Initial analysis and re-architecture

Although the list seemed straightforward, we quickly realized that these items pointed to a common underlying problem: our architecture was simplistic and dated, and it did not make good use of the modern features of the AWS platform. When we initially built GameDay, we focused on functionality and built an environment based on previous experience. However, the architecture didn’t embrace modern tools and techniques, such as building for failure and anticipating the need for stronger disaster recovery capabilities.

With this in mind, we realized that we should attack this core architectural issue, which would, in turn, resolve a majority of the critical findings as well as a large number of the non-critical recommendations. To achieve this, we moved from a single-instance load generator to Docker containers running in an Amazon ECS cluster (see diagram below). This immediately gave us a Multi-AZ architecture and the ability to scale our infrastructure and handle the loss of components. We also moved additional services off the load generator to run as AWS Lambda functions, which handle scaling and infrastructure management automatically.

 

 

Updated Amazon ECS architecture

 

While pursuing this new architecture, we realized that our previous deployment process involved manual creation of resources and configuration.  We took a hard stance from the start to treat our infrastructure as code and used AWS CloudFormation to define our environment.  This allowed us to easily version our infrastructure as we progressed through the remediation phase, and also played a critical role in developing our new disaster recovery plan.
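To illustrate that workflow, here is a minimal sketch, assuming Python and boto3, of deploying a versioned template; the stack name, template URL, and parameter are hypothetical placeholders, not the actual GameDay templates.

```python
# Minimal sketch: deploying a versioned CloudFormation template with boto3.
# The stack name, template URL, and parameters are hypothetical placeholders.
import boto3

cfn = boto3.client("cloudformation")

# Create the stack from a versioned template stored in S3.
cfn.create_stack(
    StackName="gameday-infra-v2",  # hypothetical stack name
    TemplateURL="https://s3.amazonaws.com/example-bucket/gameday/v2/template.yaml",
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "production"}],
    Capabilities=["CAPABILITY_NAMED_IAM"],  # needed when the template creates IAM roles
)

# Block until the stack finishes creating before running the game setup scripts.
cfn.get_waiter("stack_create_complete").wait(StackName="gameday-infra-v2")
```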

 

Remediation

Issue 1: AWS access keys

Surprisingly, this item turned out to be the easiest to address. AWS provides a feature to enable role-based access between accounts. Since we had already automated the configuration of both the admin and player accounts with AWS CloudFormation, it seemed simple to update the template to create a role rather than an access/secret key pair in the player account.

We initially thought it would be a massive undertaking to modify our entire code base to use sts:AssumeRole, but this turned out to be trivial. Because we used the AWS SDKs, and both access keys and IAM roles are part of the default credential provider chain that the SDKs fully support, the only change we had to make was to remove the access keys and pass in the ARN of the role to assume.
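Here is a minimal sketch of that change, assuming Python and boto3 (the account ID, role name, and session name are hypothetical): instead of reading stored keys, the script assumes the cross-account role and uses the temporary credentials it receives.

```python
# Minimal sketch: swapping stored access keys for temporary credentials
# obtained via sts:AssumeRole. The role ARN and session name are hypothetical.
import boto3

sts = boto3.client("sts")

# Assume the cross-account role created by the player account's CloudFormation stack.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/GameDayAdminRole",  # hypothetical role ARN
    RoleSessionName="gameday-admin",
)["Credentials"]

# Use the temporary credentials exactly where the plain-text keys used to go.
player_ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(player_ec2.describe_instances()["Reservations"])
```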

 

Issue 2: Load generator

We solved this issue by moving from a single EC2 instance to an Amazon ECS cluster. To do this, we had to modify the application to store team and player data externally. Since we were already using Amazon DynamoDB for other metadata, we chose it for this purpose as well. Moving the state to DynamoDB and having the application poll for its configuration was essential, because the load generator containers were now ephemeral and we didn’t want to build a new service that tracked container membership just to push configuration updates.

Amazon ECS enabled us to operate the load generator as a service, so we could scale the application throughout the game without having to manage a complex, distributed configuration management tool. It also provided additional fault tolerance by scheduling and placing tasks across three Availability Zones, and by replacing containers in the event of errors or failures.
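To make the idea of ephemeral containers polling for state concrete, here is a hedged sketch of what a load generator worker might look like; the table and attribute names are illustrative only, not the actual GameDay schema.

```python
# Illustrative sketch: an ephemeral load generator task polling DynamoDB for its
# configuration instead of reading a local file. Table and attribute names are
# hypothetical.
import time
import boto3

table = boto3.resource("dynamodb").Table("GameDayTeams")  # hypothetical table name

def generate_load(endpoint, rate):
    # Placeholder for the application-specific load logic.
    print(f"sending ~{rate} requests/second to {endpoint}")

def current_targets():
    """Fetch the latest per-team configuration items from DynamoDB."""
    return table.scan()["Items"]

while True:
    # Each pass re-reads the table, so teams added or removed by the game
    # setup scripts are picked up without restarting the container.
    for team in current_targets():
        generate_load(team["Endpoint"], team["RequestRate"])
    time.sleep(30)
```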

 

Issue 3: Disaster recovery

Disaster recovery was by far the most difficult remediation we attempted. The issues were not solely technical: we had already developed plans to expand the use of tools and techniques that let us rapidly and reliably deploy the application. The harder-to-solve challenges we encountered were things like definitions (How do we define a disaster?), expectations (What is a reasonable recovery time objective?), compliance (How often do we test DR? What can we automate? Is our testing framework robust enough to tell us that our DR plan is still valid after a new release?), and ownership (Who is responsible for declaring a disaster? Who is responsible for ensuring that the process is adequately maintained over time?).

Ultimately, we decided on a phased, incremental approach rather than tackling all of these issues at once. We allocated one day to strategize a response to a single simulated event—loss of control of a production account—and another day to write up our findings.

The simulated test involved members of the team sitting in a room with a moderator responsible for presenting a scenario, and then simulating the recovery process with whatever materials we had at hand. The moderator would keep us honest, challenge responses, and add details as the scenario progressed. The recovery team would keep careful notes, identify gaps, lucky breaks, and areas for improvement, and ultimately document our effective RTO and RPO.

Given the scenario—total loss of control of our production account—we quickly decided that the only safe response available to us was to abandon ship and simulate recovery in an entirely new account. We happened to have an unused, largely unconfigured account we could use for this purpose, but it was clear that account creation and initial setup would need to be accounted for in our RTO. The deployment of the game assets in the new account was significantly eased by the use of the new CloudFormation templates, which, luckily, were stored in an S3 bucket owned by another AWS account that was unaffected by the breach.

Much more problematic was the recovery of the game data stored in DynamoDB. Our current backup plan was manually operated and pushed backups to an S3 bucket in the same account and AWS Region as the source data. Obviously, an attacker that had control of our primary account would also have control of our only backups. Since the backups weren’t automated and were inaccessible in the event we lost control of our account, our RPO could never be clearly defined.  Simply put, we could not assure a successful recovery in this scenario.

Despite these challenges, we deemed the DR simulation to be very successful. We were able to test recovery, identify what was working (decoupling data, automating deployment with AWS CloudFormation), what needed work (automated DynamoDB backups and audit logging to an S3 bucket in a different account), and what our current, achievable RTOs and RPOs were.
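As a sketch of that roadmap item (not the solution we had in place at the time), a scheduled Lambda function could export the table contents to an S3 bucket owned by a separate account, assuming that bucket’s policy already grants the function’s role write access; all names below are placeholders.

```python
# Hedged sketch of the roadmap item: an automated export of DynamoDB game data
# to an S3 bucket in a *different* AWS account. Assumes the destination bucket
# policy grants this role s3:PutObject; table and bucket names are placeholders.
import json
import datetime
import boto3

TABLE_NAME = "GameDayState"                    # hypothetical source table
BACKUP_BUCKET = "gameday-backups-other-acct"   # bucket owned by a separate account

def lambda_handler(event, context):
    table = boto3.resource("dynamodb").Table(TABLE_NAME)
    items, scan_kwargs = [], {}
    while True:  # paginate through the full table
        page = table.scan(**scan_kwargs)
        items.extend(page["Items"])
        if "LastEvaluatedKey" not in page:
            break
        scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

    key = "dynamodb/{}/{}.json".format(
        TABLE_NAME, datetime.datetime.utcnow().strftime("%Y-%m-%dT%H-%M-%SZ"))
    boto3.client("s3").put_object(
        Bucket=BACKUP_BUCKET,
        Key=key,
        Body=json.dumps(items, default=str),  # default=str handles Decimal values
    )
    return {"backed_up_items": len(items), "s3_key": key}
```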

Ultimately, we added the recovery and audit work to our immediate roadmap, and agreed on a future cadence of quarterly DR simulations to continue to investigate our real response capability as changes were made and new disasters were contemplated.

 

Conclusion

We started this process with an architecture that, looking back, was fragile and left us vulnerable from both a security and a disaster recovery standpoint. It was uncomfortable to have our shortcomings identified, but by having knowledgeable AWS solutions architects walk through our architecture and point out areas for improvement, we were able to document and prioritize changes. This allowed us to take a step back and examine where to focus our efforts in order to provide a better experience for our customers by mitigating potential problems down the road. We are now much more confident in our architecture and know that we are better prepared for failure.

However, our GameDay application is far from perfect, as we found out in our disaster recovery simulation.  Additionally, we still have recommendations on our roadmap from the initial review.  As AWS comes out with new features, and best practices are updated, we’ll continue to work with other solutions architects to ensure that we are incorporating these features and practices into our architecture.

In the next post, we’ll take a look at what happened six months after we implemented the changes discussed in this post, considering that AWS released new features during that time.  We’ll walk through our evaluation process of these new features to see where they can be incorporated.  Additionally, we’ll discuss how we continued to work with our review team on the non-critical items.

Partner SA Roundup – October

This month, Partner Solutions Architects Tim Mattison, Roy Rodan, and Claudine Morales highlight offerings from AWS Technology Partners Cesanta, GorillaStack, and GuardiCore.

 

Cesanta, by Tim Mattison

 

In Internet of Things (IoT) systems, it is important to be able to build, test, and secure systems with minimal friction.  Customers need a way to get hardware that they can use to rapidly develop and test IoT solutions.  Cesanta, creator of Mongoose OS, which we highlighted in our May Partner SA Roundup, is in the business of providing streamlined tools that simplify the onboarding process and unblock developers.  Recently, they released their own hardware kit based on Espressif’s ESP32 platform, which provides a known-good component for customers to design, develop, and deploy IoT solutions.

The ESP32 is the successor to the ESP8266 IoT board, and it comes with several enhancements: 520 KB of RAM, a dual-core 240 MHz processor, new security features, 2.4 GHz Wi-Fi, and Bluetooth support.

The faster processor means customers are able to efficiently do more time-sensitive compute operations at the edge. The ESP32 is powerful enough to perform digital signal processing (DSP) operations on input signals, and even decode QR codes in real time from its camera interface.

Additional storage allows customers to add more features to their edge devices. This space can also be used to buffer data while the device is offline, store logs for debugging, or store a backup firmware image to allow for more robust over-the-air firmware deployments.

Bluetooth v4.2 and Bluetooth Low Energy (BLE) let the ESP32 interact with Bluetooth devices and sensors. With Mongoose OS, the ESP32 can even act as a BLE gateway to AWS IoT.

On the security side, the ESP32 supports secure boot, flash encryption, and cryptographic hardware acceleration. Secure boot prevents tampering with the flash contents. Flash encryption protects the flash contents from being read in plaintext. Cryptographic hardware acceleration allows devices to connect to the cloud faster. These features allow customers to build secure IoT devices.

If you’re looking to build a new IoT solution, I’d invite you to take a look at the ESP32. Head over to Cesanta’s website to learn more about their new hardware kit and Mongoose OS.

 

GorillaStack, by Roy Rodan

 

Managing cloud infrastructure can be tedious when done manually. In the AWS Cloud, customers can use services like AWS CloudTrail and Amazon CloudWatch to help with these tasks. GorillaStack is an AWS Partner that enables DevOps and IT teams to keep track of events in their AWS environments by using simple rules, so they are always up to date on the usage and cost of their resources.

Configuring rules in GorillaStack involves three simple steps:

  1. Identifying the accounts to manage using cross-account AWS Identity and Access Management (IAM) roles
  2. Defining triggers based on time, cost, server, or volume status
  3. Specifying the action to be taken

GorillaStack enables customers to apply these rules to resources across all AWS accounts and Regions. For example, the following image shows a rule set up to delete all detached Amazon Elastic Block Store (Amazon EBS) volumes in a particular AWS Region:
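Under the hood, a rule like that amounts to finding unattached volumes and deleting them. As a purely illustrative sketch (not GorillaStack’s actual implementation), the equivalent boto3 pass might look like this:

```python
# Illustrative sketch of what a "delete detached EBS volumes" rule automates;
# not GorillaStack's actual implementation. Region is an example value.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # the Region the rule targets

# "available" status means the volume is not attached to any instance.
paginator = ec2.get_paginator("describe_volumes")
for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
    for volume in page["Volumes"]:
        print("Deleting detached volume", volume["VolumeId"])
        ec2.delete_volume(VolumeId=volume["VolumeId"])
```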

 

Another helpful GorillaStack feature is its Slack and HipChat integration. This integration makes it even easier to digest events and alerts from CloudTrail and CloudWatch. Instead of having to crawl through CloudTrail or CloudWatch logs for one specific event, you can configure a chatbot to send real-time alerts to a channel or to a specific group of Slack or HipChat users based on any CloudTrail or CloudWatch event. The alert can be expanded to reveal the raw log content, so your DevOps team can take the right actions, as seen here:

 

 

 

This Slack and HipChat integration also enables you to receive alerts on scheduled actions before they take place, so you can delay or cancel those actions. For instance, let’s say you schedule an action to stop development instances from running overnight in order to reduce costs. An alert is sent to a Slack channel to inform you of the upcoming action, and you can then decide whether the shutdown should proceed, be snoozed, or be canceled.

 

 

If you’d like to learn more about how GorillaStack can help you manage your infrastructure, go to the GorillaStack site to sign up for a free 14-day trial and request a live demo of their products.

 

GuardiCore, by Claudine Morales

 

As part of the AWS Shared Responsibility Model, customers are responsible for their security configuration in the AWS cloud. This responsibility involves determining ways to protect and monitor network traffic, which can include obtaining internal workload visibility, effective east-west traffic policy management, and active breach detection and response. GuardiCore, an AWS Technology Partner, offers the Centra Security Platform to help bridge these gaps.

 

Once Centra agents are installed on Amazon Elastic Compute Cloud (Amazon EC2) instances, the Centra Security Platform performs an application discovery process and collects network traffic data, including layer 7 communication activities among applications running on these instances. The Centra user interface displays a map of all network processes, applications, and traffic flows, giving you the ability to easily visualize everything that is occurring in your network at any given time.  This same approach can be applied to your on-premises infrastructure, to give you a centralized view of your entire data center, regardless of location.

 

 

The Centra Security Platform provides micro-segmentation capabilities to give you more granular control over access policies. For instance, it can restrict access to a certain service not only by IP address and port number but also by process name.

Centra also helps safeguard environments from potential breaches by using multiple intrusion detection and prevention methods. One of these methods involves reputation analysis, where domain names, IP addresses, and file hashes that have been associated with malicious behavior are detected and eliminated instantaneously. Another method involves real-time dynamic deception, in which an attacker, when detected, is redirected to a GuardiCore-hosted deception environment where they continue to execute their attack, thinking that they have successfully penetrated the customer’s infrastructure. Inside the deception environment, the attacker’s actions are logged, recorded, and even screenshotted, to enable analysis and insight on their motives and techniques. This allows customers to implement changes to prevent further intrusions from the attacker.  The information on the attacker’s actions is also added to the GuardiCore Reputation Service database to enable detection of similar breach attempts in the future.

To learn more about the GuardiCore Centra Security Platform, visit the GuardiCore website or request a demo.

SaaS Quick Start Highlights Identity and Isolation with Amazon Cognito

Identity is not a new concept. There’s a large list of useful tools and technologies that effectively address the authentication and authorization needs of applications. However, for software as a service (SaaS) providers, the identity universe becomes a bit more complicated. SaaS extends the notion of identity, adding new kinds of roles and access considerations that shape and influence the fabric of your SaaS solutions.

The SaaS Identity and Isolation with Amazon Cognito Quick Start, which was recently published, equips developers with a full working solution that digs into the nuances of injecting tenant identity into SaaS applications. This Quick Start addresses a broad range of SaaS identity topics with specific emphasis on illustrating how tenant context is introduced via Amazon Cognito and used in combination with AWS Identity and Access Management (IAM) to scope access to tenant resources.

A key goal of this Quick Start is to create a model where user and tenant identity are merged into a unified model that flows seamlessly through your application’s architecture. The following diagram highlights the conceptual model underlying the Quick Start architecture.

 

The Quick Start introduces a model where, through Amazon Cognito, a user’s identity is bound to a tenant’s identity to create the notion of a SaaS identity. This SaaS identity is then treated as a first-class construct, and delivers all the context that is needed to represent both the user and any tenant attributes that may be needed to control and scope that user’s experience.

This is realized in a reference application that orchestrates all the moving parts associated with building a multi-tenant SaaS environment on AWS. This application was developed with an AngularJS client and a series of Node.js microservices to simulate the workflows of a simplified order management system. The goal was to provide a reference application that illustrates how identity influences all the different dimensions of your SaaS environment. Some of the key capabilities of this solution include:

  • Reduced tenant on-boarding and tenant activation friction
  • Provisioning of tenant-specific IAM roles and policies
  • Support for multiple user roles, including both system and tenant roles
  • Ability to manage system and tenant users
  • Tenant-scoped access to application infrastructure, including database access operations
  • Use of JSON Web Tokens (JWT) to flow SaaS identity (and scoping) into each application microservice
  • Use of Amazon API Gateway and custom authorizers to scope and control access to application microservices (see the sketch after this list)
  • Illustration of identity in a pooled multi-tenant model where tenants share infrastructure
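To make the JWT and custom authorizer items above more concrete, here is a hedged sketch of a Lambda custom authorizer that reads a tenant identifier from a token and returns a tenant-scoped policy. The claim name and logic are hypothetical, and a real authorizer must verify the token’s signature against the Cognito JSON Web Key Set rather than merely decoding the payload; see the Quick Start’s source repository for the actual implementation.

```python
# Hedged sketch of an API Gateway Lambda (custom) authorizer that scopes access
# by tenant. The claim name is hypothetical; a production authorizer must verify
# the JWT signature against the Cognito JWKS instead of just decoding it.
import base64
import json

def _decode_claims(token):
    """Decode the JWT payload WITHOUT signature verification (sketch only)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def lambda_handler(event, context):
    claims = _decode_claims(event["authorizationToken"].replace("Bearer ", ""))
    tenant_id = claims.get("custom:tenant_id", "unknown")  # hypothetical claim

    return {
        "principalId": claims.get("sub", "anonymous"),
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow" if tenant_id != "unknown" else "Deny",
                "Resource": event["methodArn"],
            }],
        },
        # Downstream microservices receive the tenant context and can use it to
        # scope IAM policies and database queries.
        "context": {"tenantId": tenant_id},
    }
```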

The application’s infrastructure includes a number of AWS services and constructs to create a highly scalable, highly available SaaS identity and isolation solution that conforms to best practices for deploying a container-based application in a virtual private cloud (VPC) that spans two Availability Zones. The following diagram provides a view of the environment that is provisioned by the Quick Start:

 

The Quick Start also includes a detailed guide that digs into the conceptual and architectural elements of the application. The guide outlines the steps associated with deploying and running the complete solution. Exploring this document will give you a better sense of the nature of the solution and the complexities of implementing a robust identity and isolation model in a SaaS environment.

This solution represents one of multiple options for addressing SaaS identity, and should provide a good foundation of concepts and implementation considerations that can accelerate your efforts to introduce identity into your SaaS environment. It also provides detailed insights into some of the fundamental mechanisms that you can use to improve the security of your SaaS environment without further complicating the developer experience.

For more information about the SaaS Identity and Isolation with Amazon Cognito Quick Start, see the data sheet and source repository.

New and Updated Partner Training Resources

We have a number of free in-person and online training resources designed specifically for APN Partners so you can more effectively help customers leverage the AWS Cloud. We regularly update and release new training courses so you can be sure you are learning the latest about AWS.

 

AWS Solutions Training for Partners: Windows (Business): Updated Instructor-Led and Web-Based Training

We’ve released a major update to our Solutions Training for Partners: Windows (Business) course. This course is available both in-person and online and teaches AWS Business Professional partners about the specific benefits of moving Windows workloads onto AWS. We recommend this course for business professionals at APN consulting partner companies who have knowledge of the general benefits of cloud and AWS.

The updated version of the course now:

  • Addresses the question “Why Windows on AWS?” by highlighting the differentiated benefits for Windows workloads offered by AWS
  • Directly provides partners with responses to debunk common erroneous claims and fear-based objections about running Windows on AWS
  • Includes an interactive Windows workload-specific first call role play to let partners practice what they’ve learned in a safe setting
  • Features recent customer wins, case studies, videos, and a sample business case

AWS Solutions Training for Partners: Amazon Connect for Business and Technical Professionals

Amazon Connect is a self-service, cloud-based contact center service that makes it easy for any business to deliver better customer service at lower cost. We have online training about Amazon Connect for both business and technical professionals.

  • Amazon Connect (Business): This course introduces the business benefits of setting up a cloud-based contact center using Amazon Connect. You will learn how to discuss the value proposition of Amazon Connect with your customers and explain the pricing model.
  • Amazon Connect (Technical): This course teaches you how to quickly set up a cloud-based contact center using Amazon Connect. You will learn the technical details of provisioning, configuring, and managing Amazon Connect. This course also highlights how you can integrate Amazon Connect with other AWS services like AWS Lambda, Amazon Kinesis, and Amazon Redshift.

AWS Solutions Training for Partners: Best Practices – Well-Architected

The AWS Well-Architected Framework enables you to make informed decisions about your architectures in a cloud-native way and to understand the impact of the design decisions you make. The AWS Solutions Training for Partners: Best Practices – Well-Architected course provides a deep dive into the AWS Well-Architected Framework and its five pillars. We recommend this course for technical professionals at APN Consulting Partner organizations.

You can explore more training resources for APN Partners here, and you can search for classes near you by logging into the AWS Training and Certification Portal with your APN Portal credentials. APN Partners have access to free partner-specific training and are eligible for a 20% discount on customer-facing public AWS Training delivered by AWS. You can also request a private onsite training for your team by contacting us.

AWS Cloud Solutions Transforming Financial Services event at AWS New York Loft

Showcasing Innovative Solutions Focusing on Governance, Risk & Compliance (GRC)

 

The Financial Services industry is highly regulated, with an increasing need to break the trade-off between compliance and innovation. With the surge in data and new business models, managing security, risk, and compliance has become increasingly complex, driving requirements for enhanced security mechanisms, complex financial calculations, and advanced analytics to simulate evolving market conditions. Faced with the limits of on-premises agility, storage, and computing capacity, financial institutions of all sizes—FinTech startups, hedge funds and asset managers, insurers, commercial banks, and global investment banks, among others—are working with AWS and its AWS Partner Network (APN) Partners to become more agile, cost-effective, and customer-centric.

 

 

As AWS continues to enable scalable, flexible, and cost-effective solutions for banking, payments, capital markets, and insurance organizations of all sizes, it has become evident that our highly specialized APN Partners need to organize by industry and develop solutions that better serve customer needs. There’s a clear opportunity to help AWS customers within Financial Services succeed in moving to the AWS Cloud, and for APN Partners to support the seamless integration and deployment of these solutions. As the financial services industry moves to the cloud, AWS has established the AWS Financial Services Competency Program to identify APN Consulting and Technology Partners with deep industry experience who can assist customers in this industry. These APN Partners have demonstrated industry expertise, offer readily implemented solutions that solve specific business problems and align with AWS architectural best practices, and have AWS-certified staff. They have gone through a rigorous business and technical validation by AWS teams and are closely engaged and aligned with the AWS global financial services team.

This upcoming solutions showcase event in New York, led by our AWS Financial Services Competency Partners, is a great opportunity to learn how their solutions are helping customers succeed and to join the conversations following the sessions. Not only will you hear from our top APN Partners in the industry, you will also get insight into cloud transformation trends as the financial services industry intensifies its focus on innovation. Learn more from Brad Bailey, Research Director in Capital Markets at Celent, about trends in financial services cloud transformation focusing on governance, risk, and compliance (GRC). Celent helps firms in the financial services industry make better decisions about technology through research and advisory services. You will also have an opportunity to hear from Phil Moyer, AWS Director, Global Financial Services, about industry innovation at AWS.

Don’t miss this complimentary event – register now! Seating is limited.

“AWS Cloud Solutions Transforming Financial Services”

Showcasing Innovative Solutions Focusing on Governance, Risk & Compliance (GRC)

October 25th at AWS Pop-up Loft | 350 West Broadway | New York, NY 10013

Event Schedule:

  • 12:30 PM – 1:00 PM Registration and Check-in
  • 1:00 PM – 1:05 PM Welcome and Kick-off, hosted by Nitin Gupta, Global Head of APN, Financial Services
  • 1:05 PM – 1:45 PM Trends in financial services cloud transformation, focusing on digital strategies and governance, risk, and compliance, by Brad Bailey, Research Director, Capital Markets at Celent
  • 1:45 PM – 4:30 PM Lightning sessions featuring AWS Financial Services Partners’ innovative solutions that are changing the industry
  • 4:30 PM – 5:15 PM:  An inside look at AWS innovation for Financial Services by Phil Moyer, AWS Director, Global Financial Services
  • 5:15 PM – 6:30 PM: We’ll close the day with an exclusive opportunity to meet with the solution providers 1:1 during the social hour

Solutions Showcase by AWS Financial Services Partners:

 

CTP (Cloud Technology Partners): CTP’s Continuous Compliance solution provides a single source of truth across Governance, Risk and Compliance (GRC), enabling real-time monitoring and remediation recommendations on the AWS Cloud.

Domino Data: A single system for all your models. Documentation, code, and model inventories inevitably get out of sync when spread across systems. The Domino solution tracks the provenance of a model from idea to impact, showing who worked on it, what they did, how they deployed it, and how it is used in production.

NICE Actimize: The NICE Actimize ABC (anti-bribery and corruption) solution analyzes transactions and behavior across the organization and the supply chain for a real-time, up-to-date view of bribery and corruption risk across business, geographic, vendor, and customer lines. External risks are integrated into AML lifecycle management, delivering insight across the breadth of customer activities and ensuring smart and cost-effective AML operations as well as a positive, holistic customer experience.

Accenture: This solution focuses on risk management using high-performance computing (HPC) on AWS. No longer do companies have to procure hardware in anticipation of high-compute workloads and then let it sit idle when not in use. Instead, the operational tasks of running compute on AWS are simplified, because you can fully automate provisioning and scaling by treating the infrastructure as part of your code base.

FICO: FICO Xpress Optimization, as part of the FICO Decision Management Suite, helps financial services organizations identify and fine-tune policies and processes across the customer lifecycle to improve Key Performance Indicators (KPIs) subject to business, operational and legal constraints. This is achieved by combining sophisticated analytics with powerful, easy-to-use optimization software and consulting services.

IHS Markit: Financial Risk Analytics from IHS Markit provides a range of financial risk management solutions on a single, integrated, yet modular platform. The solutions deliver support for counterparty credit risk requirements, pricing valuation adjustments and the Fundamental Review of the Trading Book (FRTB), and can be deployed on the AWS cloud.

GFT: GFT’s regulatory reporting service on AWS helps financial institutions manage, transform, and store data securely. With support from AWS, GFT offers innovative flexibility, transparency of regulatory data requirements, and proactive management of regulatory data reporting in a cloud-based solution.

We hope to see you there!

Coming Soon: SLES for SAP in AWS Marketplace

Post by Sabari Radhkrishnan, an SAP Partner Solutions Architect at AWS

 

Amazon Web Services (AWS) and SUSE have been working together for years to bring SUSE Linux Enterprise Server (SLES) to our joint SAP customers. Since 2012, when AWS first certified its platform for SAP workloads, including SAP HANA One, customers have been using the SLES operating system to run their SAP applications on AWS. When AWS certified its instances for SAP HANA in 2014, customers started using SLES for their SAP HANA deployments on AWS as well. In 2015, SLES for SAP was made available in AWS through SUSE’s bring-your-own-subscription program; and in 2016, SLES for SAP became available in AWS Marketplace as an on-demand image, so our existing and new customers could easily get started with their mission-critical SAP workloads.

As part of our continuing efforts to provide our joint SAP customers with the best experience, we are pleased to announce today that SLES for SAP will become available in AWS Marketplace as an AWS offering in Q4 2017. This will be the second joint listing from AWS and SUSE after SLES, which was launched almost seven years ago. With the release of X1 and X1e instances, which are purpose-built for in-memory workloads such as SAP HANA, we are seeing that more customers are choosing to run SAP workloads in development, test, and production on AWS.

The new SLES for SAP listing will make it easier for customers to run SAP workloads on AWS, because they will receive joint support from AWS and SUSE, and the new offering will be priced competitively. SUSE and AWS will continue to work together to enhance and optimize the OS for SAP workloads on AWS.

SLES for SAP includes the High Availability Extension (HAE), which allows SAP HANA instances to seamlessly fail over between Availability Zones. The software also includes other enhancements such as page cache management and kernel settings that are optimized for SAP workloads. In addition, SLES for SAP images carry extended service pack support so customers can run the next-to-last service pack for up to 18 months. You can find more details about the benefits of using SLES for SAP on the SUSE website.

Customers can easily get started with SLES for SAP for their SAP HANA workloads by using the AWS Quick Start for SAP HANA. This Quick Start helps provision and configure the infrastructure required to deploy SAP HANA in less than an hour, following best practices from AWS, SAP, and SUSE.

Customers can continue to leverage SUSE’s bring-your-own-subscription program to use their existing subscriptions to run their SAP workloads on AWS; see aws.amazon.com/suse for details. To learn more about running SAP on AWS, check out http://aws.amazon.com/sap.

To learn more about this announcement, see the SUSE blog.

If you’re at SUSECON, be sure to check out these SUSE and AWS sessions:

 

How DNAnexus and Edico Genome are Powering Precision Medicine on Amazon Web Services (AWS)

Aaron Friedman is a Healthcare and Life Sciences Partner Solutions Architect at AWS

Ujjwal Ratan is a Healthcare and Life Sciences Solutions Architect at AWS

 

Diagnosing the medical mysteries behind acutely ill babies can be a race against time, filled with a barrage of tests and misdiagnoses. During the first few days of life, a few hours can save or seal the fate of patients admitted to the neonatal intensive care units (NICUs) and pediatric intensive care units (PICUs). Accelerating the analysis of the medical assays conducted in these hospitals can improve patient outcomes, and, in some cases, save lives.

Precision medicine relies on the aggregate of these types of tests (and others) to advance healthcare. Due to decreasing costs and faster turnaround times, genome sequencing is one such test that is gaining adoption throughout healthcare. Understanding a patient’s genetic predisposition to different diseases is fundamental to establishing a medical risk baseline. In certain cases, such as in NICUs, a patient’s genetic profile can unlock the specific cause of a disease and inform the subsequent medical interventions that might work.

Today, we’d like to tell you about two of our AWS Partner Network (APN) Partners, DNAnexus and Edico Genome, who are working together to advance the principles of precision medicine, and are already changing lives through genomics.

 

Introducing DNAnexus

DNAnexus, an AWS Life Sciences Competency Partner, offers data management, next-generation sequencing data analysis, and secure collaboration for large-scale life sciences enterprises. The DNAnexus platform-as-a-service (PaaS) solution provides a secure and unified system that scales to meet its clients’ unique needs, such as merging de-identified clinical data with genetic data. The API-based DNAnexus platform enables customers (e.g., pharma, researchers, hospitals) to create custom workflows to analyze genomics data as they see fit, such as to develop new drugs or diagnose rare diseases.

Naturally, the data generated by these processes is sensitive and its protection is paramount. DNAnexus has architected their platform to align with key security and compliance frameworks, such as HIPAA, 21 CFR Part 11, CLIA, and FedRAMP.

 

Introducing Edico Genome

Edico Genome, an APN Standard Tier Partner and Amazon EC2 F1 Instance Partner, is focused on facilitating the growth of precision medicine. By accelerating one of precision medicine’s central components, genome sequencing analysis, without sacrificing accuracy, Edico Genome enables researchers and clinicians to understand the relationships between genetic variation and disease.

Edico Genome accelerates sequencing analysis by using field-programmable gate arrays, or FPGAs, in its Dynamic Read Analysis for GENomics (DRAGEN) solution. In contrast to conventional CPU-based systems, which must execute lines of software code to perform an algorithmic function, FPGAs use logic circuits to accelerate algorithms and provide outputs almost instantaneously. By replicating these logic circuits thousands of times over, DRAGEN is able to achieve industry-leading speeds through massive parallelism—unlike CPUs, which are limited to running only one task per core. FPGAs are also fully reconfigurable, allowing users to quickly switch between different functions and pipelines.

Today, Edico Genome is deployed on our FPGA-based Amazon EC2 F1 instances and can process a whole genome sequence in about 70 minutes on an f1.2xlarge instance type and about 30 minutes on an f1.16xlarge instance type. These speeds can be over 10 times faster than current state-of-the-art algorithms.

 

How they’re working together

Recently, DNAnexus and Edico Genome announced a joint partnership to integrate Edico Genome’s DRAGEN solution, deployed on Amazon EC2 F1 instance family, into the DNAnexus platform. This integration gives customers the ability to leverage the speed of DRAGEN to analyze genomes coming from high-throughput sequencers, while also inheriting the security and compliance controls that DNAnexus has implemented. At a high level, here’s what this collaboration looks like:

 

 

DNAnexus ingests raw data (called base calls or reads) from genome sequencers such as Illumina’s NovaSeq. These reads are fed into DRAGEN, which is running on an EC2 F1 instance, to speed up the identification of genome variations that can influence disease progression. Results are stored in Amazon Simple Storage Service (Amazon S3) using industry-standard compression algorithms. Depending on the use case, customers can then collaborate across research sites while adhering to the regulatory requirements around sensitive data by using the capabilities built into the DNAnexus platform.

 

Collaborating to improve clinical care

Rady Children’s Institute for Genomic Medicine is one of the global leaders in advancing precision medicine. To date, the institute has sequenced the genomes of more than 3,000 children and their family members to diagnose genetic diseases. Forty percent of these patients are diagnosed with a genetic disease, and 80 percent of those receive a change in medical management. This is a remarkable rate of change in care, considering that these are rare diseases and often involve genomic variants that have not been previously observed in other individuals.

The institute adopted the DNAnexus platform to gain a secure, flexible, and scalable environment for local and distributed sequencing and analysis. Using DNAnexus with DRAGEN provides Rady with a highly optimized, end-to-end, whole genome sequencing analysis solution. Edico Genome’s DRAGEN data analysis pipeline is central to Rady’s ultra-rapid genomic data analysis, because it significantly reduces costs and turnaround time while maintaining accuracy. (Edico and Rady also previously worked together to set a Guinness World Record for fastest genetic diagnosis.)

“Our goal is to ensure that genome-powered precision medicine is available to every child who needs it. To do this, we needed a rapid research-to-bedside pipeline and be able to scale it and make it accessible to hospitals around the world,” said Stephen Kingsmore, M.D., D.Sc., president and chief executive officer at Rady Children’s Institute for Genomic Medicine. “DNAnexus has the technology and expertise to facilitate this ambitious project, Edico Genome’s rapid testing capability allows for rapid diagnosis of critically ill newborns.”

 

Based on the success of this partnership, Rady Children’s Institute is expanding this integrated solution to their partners nationally, fostering a growing genomic database that healthcare providers can access to quickly diagnose rare diseases in children.

 

Collaborating to advance precision medicine

One of the most rewarding things about working in Healthcare and Life Sciences at AWS is seeing how our APN Partners can work together to change the lives of people. The DNAnexus and Edico Genome partnership is one such example that is making a lasting impact on the healthcare industry. We look forward to seeing the results of this partnership advance precision medicine, and deliver results like the ones experienced at Rady Children’s Institute across the clinical landscape.

To learn more about how DRAGEN on DNAnexus can securely accelerate sequencing analysis, and take advantage of a reduced rate for analysis (valid through October 31, 2017), see the DRAGEN on DNAnexus promotional offer.

To learn more about how customers and APN Partners are using genomics on AWS, check out Genomics in the Cloud on the AWS website.

Please leave your feedback and questions in our Comments area.

New Training Courses Available: Introduction to Machine Learning & Deep Learning on AWS

AWS Training and Certification offers guidance to APN Partners so you can more effectively help customers leverage the AWS Cloud. We have two new courses to help you learn more about artificial intelligence solutions using AWS: the Introduction to Machine Learning web-based training and the Deep Learning on AWS instructor-led training. If you are looking to learn how to put artificial intelligence capabilities to use, start with Introduction to Machine Learning. Developers who want to go deeper should then attend the one-day instructor-led training.

Here’s a bit more about each of these new training courses:

 

Introduction to Machine Learning is a free 40-minute web-based training intended for developers, solution architects, and IT decision makers who already know the foundations of working with AWS. This online course gives an overview of machine learning, walks through an example use case, teaches relevant terminology, and explains the process for incorporating machine learning solutions into a business or product. The course also includes knowledge checks to help validate understanding.

Deep Learning on AWS is a one-day instructor-led training for developers who are interested in learning more about AWS solutions for deep learning. This course teaches individuals about deep learning models and gives them a roadmap for understanding what challenges deep learning can solve. Solutions related to image recognition, speech recognition, and speech translation are covered.

We recommend individuals take Introduction to Machine Learning before attending Deep Learning on AWS, but it is not required. In addition, individuals who are looking to learn more about how to leverage data for Deep Learning should consider taking Big Data Technology Fundamentals and Building a Serverless Data Lake.

APN Partners are eligible for 20% off instructor-led training delivered by AWS. Click here to sign in to the AWS Training and Certification Portal using your APN Portal credentials and browse all of our training offerings.

How Can You Find Top APN Partners on AWS?

Explore the AWS Competency Program: Helping Customers Identify Top AWS Partner Network (APN) Partners on AWS across Industries, Verticals, and Solutions

 

Customers use AWS to meet a wide variety of their IT needs, and many customers leverage the AWS Partner Network (APN) to provide additional value. Are you looking to engage with an AWS Consulting Partner who can help you effectively migrate your applications to AWS? Are you a media firm hoping to identify an AWS Technology Partner solution that can help you render video footage in the cloud? Chances are, whatever you’re looking to do on AWS, there’s an APN Partner whose services or solutions can help you leverage all of the benefits that the AWS Cloud provides.

But how do you identify the right APN Partner with whom to engage? And what does it mean to identify the right APN Partner?

Here’s where the AWS Competency Program comes in.

The AWS Competency Program is the global APN Partner program focused on providing our customers and sellers with guidance on the most qualified APN Technology and Consulting Partners who have deep expertise and proven customer success in specific solution areas, such as Big Data, DevOps, Migration, and IoT; in vertical markets such as Financial Services, Healthcare and Life Sciences, Government, and Digital Media; and with enterprise business applications, including Microsoft Workloads and SAP. AWS Competencies help customers find APN Partners who can bring the right expertise for their specific business needs by quickly narrowing the search among the tens of thousands of partners in the APN.

“Our AWS Healthcare Competency partners repeatedly demonstrate expertise in serving the needs of the payer and provider communities to advance human health,” says Dr. Oxana Pickeral, the Global Segment Leader for Healthcare & Life Sciences at AWS. “Healthcare customers routinely leverage their rapid pace of innovation to tackle the entire healthcare spectrum, from security and compliance to population health analytics.”

The AWS Competency Program is the vehicle by which the APN Partners with the right solution or industry expertise are identified and validated. This validation, however, does not come easily.

AWS Competency Partners go through a rigorous technical assessment and verification of their expertise specific to each AWS Competency. AWS solutions architects perform a thorough technical validation that challenges APN Partners to raise the bar on their AWS Competency-specific solutions and the use of AWS best practices for security and architecture in the AWS Cloud. Additionally, AWS Competency Partners’ case studies go through a review by an independent third-party audit firm before they are accepted into the AWS Competency Program.

Each AWS Competency is different and has a distinct set of requirements that can be easily viewed by AWS Customers. This facilitates transparency and helps customers understand the meticulous process that AWS Competency Partners go through. The requirements are designed to evolve and become more stringent over time as industry verticals and solution areas mature in the AWS Cloud, meeting a core goal of continually raising the bar for AWS Competency achievements. Additionally, AWS Competency Partners are re-evaluated every 12-24 months to ensure the program only validates APN Partners who are truly committed to enhancing and refining their expertise and leadership in their space. The AWS Competency Program’s overarching goal is to give AWS Customers the confidence that they are choosing APN Partners that are highly specialized in a specific AWS solution or industry vertical.

Raising the Bar for Customers and APN Partners

 

The AWS Competency status is attained by mature APN Partners who work closely with AWS business and technical teams and share a common vision and dedication to delivering a top-notch customer experience and meaningful success.

Here’s what some of our AWS Competency Partners are saying about the AWS Competency Program:

  • Scott Udell, Vice President of IoT Solutions at Cloud Technology Partners (CTP), notes that CTP’s Security, IoT, Migration, DevOps, and Financial Services Competency achievements are a “stamp of approval”. “This is a strong recognition of our work around IoT, and it will help us gain traction in the marketplace.”
  • ClearData CEO Darin Brannan says his firm’s status as an AWS Healthcare Competency Partner is “an achievement that affirms our position in healthcare security, compliance, and managed services, while our work on AWS enables healthcare organizations to quickly deploy services and apps in a healthcare-fortified AWS environment.”
  • Aaron Klein, Founder and Chief Operating Officer of CloudCheckr, says, “While AWS holds its APN Partners to high standards, AWS also consistently supports and works with those who meet the standards. AWS’ help through the APN and Competency Program has proven invaluable in helping CloudCheckr thrive and grow as a solution and as a company. We would not be where we are today without the benefits of being an APN Partner.”
  • “The APN takes its Competency designations seriously. This gives credence to the companies that achieve these Competencies and provides a benchmark that differentiates other APN Partners in this space,” explains Robert Groat, EVP, Technology and Strategy, Smartronix. “We want our government clients to know that we are committed to delivering AWS solutions that meet their unique and demanding requirements. As an AWS Premier Partner in the AWS Government Competency Program we have benefited from being able to deliver highly secure, highly available, fault tolerant and innovative solutions that have transformed the way our government customers deliver services to its constituents.”

The AWS Competency Program currently has 17 programs under the AWS Competency umbrella, and will continue to expand as solution areas mature in the AWS Cloud. The AWS Competency Program was founded on the philosophy of quality over quantity, which is why each AWS Competency is designed with a high requirements bar.

Once APN Partners have achieved an AWS Competency status, they qualify for a number of benefits in the APN Program designed to enable even greater success. For example, AWS Competency Partners are invited to upcoming product release information sessions, gain access to AWS private betas and AWS roadmap briefings, and are the first APN Partners to be given an opportunity to host subject-matter webinars.

AWS Competency Partners are closely engaged with APN teams and have the opportunity to leverage AWS Competency Partner exclusive events to provide feedback to the APN leadership. AWS Competency Partners receive a special designation in the APN Partner Solutions Finder directory and a listing on the AWS segment or solutions web pages, to mention a few of the program benefits.

The AWS Competency Program offers enormous value to AWS Competency Partners who choose to continuously raise the bar together with AWS and share our customer obsession, but for AWS customers, this value is potentially even greater: it gives them a high degree of confidence in choosing a company that is aligned with AWS’s rapid pace of innovation and its dedication to delivering the best possible results.

Explore AWS Competency Partners by Solution, Industry Vertical, or Business Applications

AWS Partner Webinar Series – September and October

The AWS Partner Webinar Series is a selection of live and recorded online presentations that cover a broad range of topics at varying technical levels and scale. Each webinar is hosted by an AWS Solutions Architect and an AWS Competency Partner who has successfully helped customers evaluate and implement the tools, techniques, and technologies of AWS.

These webinars feature technical sessions with AWS solutions architects and engineers, live demonstrations, customer examples, and expert Q&A sessions.

See the upcoming webinars below:

 

Salesforce Webinars

Salesforce IoT: Monetize Your IoT Investment with Salesforce and AWS

Register for Upcoming Webinar: October 3, 2017 | 10am-11am PDT

 

Salesforce Heroku: Build Engaging Applications with Salesforce Heroku and AWS

Register for Upcoming Webinar: October 10, 2017 | 10am-11am PDT

 

SAP Migration Webinars

Accenture: Reduce Operating Costs and Accelerate Efficiency by Migrating Your SAP Applications to AWS with Accenture

Register for Upcoming Webinar: September 20, 2017 | 10am-11am PDT

 

Capgemini: Accelerate your SAP HANA Migration with Capgemini & AWS FAST

Register for Upcoming Webinar: September 21, 2017 | 10am-11am PDT

 

Windows Migration Webinars

Cascadeo: How a National Transportation Software Provider Migrated a Mission-Critical Test Infrastructure to AWS with Cascadeo

Register for Upcoming Webinar: September 26, 2017 | 10am-11am PDT

 

Datapipe: Optimize App Performance and Security by Managing Microsoft Workloads on AWS with Datapipe

Register for Upcoming Webinar: September 27, 2017 | 10am-11am PDT

 

Datavail: Datavail Accelerates AWS Adoption for Sony DADC New Media Solutions

Register for Upcoming Webinar: September 28, 2017 | 10am-11am PDT

 

Life Sciences Webinars

SAP, Deloitte & Turbot: Life Sciences Compliance on AWS

Register for Upcoming Webinar: October 4, 2017 | 10am-11am PDT

 

Healthcare Webinars

AWS, ClearData & Cloudticity: Healthcare Compliance on AWS

Register for Upcoming Webinar: October 5, 2017 | 10am-11am PDT

 

Storage Webinars

N2WS: Learn How Goodwill Industries Ensures 24/7 Data Availability on AWS

Register for Upcoming Webinar: October 10, 2017 | 8am-9am PDT

 

Big Data Webinars

Zoomdata: Build an On-Demand Data Science Workstation with Zoomdata

Register for Upcoming Webinar: October 10, 2017 | 10am-11am PDT

 

Attunity: Cardinal Health: Moving Data to AWS in Real-Time with Attunity

Register for Upcoming Webinar: October 11, 2017 | 11am-12pm PDT

 

Splunk: How TrueCar Gains Actionable Insights with Splunk Cloud

Register for Upcoming Webinar: October 18, 2017 | 9am-10am PDT

 

To see all upcoming AWS Partner Webinars, click here.