AWS Partner Network (APN) Blog

How to create an approval flow for an AWS Service Catalog product launch using AWS Lambda

AWS Service Catalog allows organizations to centrally manage commonly deployed IT services, achieve consistent governance, and meet compliance requirements. AWS Service Catalog provides a standardized landscape for product provisioning. Users browse listings of products (services or applications) that they have access to, locate the product that they want to use, and launch it on their own as a provisioned product. The AWS Service Catalog API also provides programmatic control over all user actions.

Let’s say you need to build an approval workflow for a launch request from a user. Many solutions that use AWS Service Catalog APIs to build complex custom workflows are available (for example, ServiceNow). In this blog post, I will describe how to build a simple approval workflow using AWS Lambda, Amazon API Gateway, AWS CloudFormation, and Amazon Simple Notification Service (Amazon SNS), from the perspective of an AWS Service Catalog administrator.

To build this approval process, I’ll be using the AWS CloudFormation WaitCondition and WaitConditionHandle resources, along with AWS Lambda as a custom resource, to create a simple approval workflow. This approach is beneficial if you are looking for an AWS-native solution that extends existing AWS Service Catalog features. It also retains the AWS Service Catalog user interface for product launch.

Architecture Overview:



  1. The user launches a product from their available product list and fills in all required data via the AWS Service Catalog interface. You can obtain the user’s email address through this input.
  2. For products that require administrator approval, the template contains three additional CloudFormation resources: a WaitConditionHandle, a WaitCondition, and a custom resource. The Lambda custom resource is called to notify the admin who is responsible for approving the product launch. The stack remains in a waiting state until it receives a response from the admin.
  3. The admin receives an email notification about the product launch and an approval URL to allow stack creation. The URL contains the WaitHandle pre-signed URL as a parameter for signaling the stack to continue.
  4. When the admin clicks the URL, a Lambda function behind API Gateway receives the admin approval to proceed.
  5. If the admin approves the product launch, the Lambda approval function sends the confirmation for the WaitHandle to proceed with stack creation. Otherwise, the stack is rolled back after the maximum wait time of 12 hours.
  6. The user sees either a completed or rolled-back status on the AWS Service Catalog console. Additionally, the admin could reach out to the user to ask for more information on the launch request before proceeding with the approval.
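The notification step above can be sketched in a short Lambda function. This is a minimal illustration, not the template's actual code: the approval endpoint URL, environment variable names, and function names are my assumptions, and only `ResourceProperties` (WaitUrl, EmailID) mirrors the custom resource shown later in the post.

```python
import os
import urllib.parse

# Hypothetical approval endpoint (the API Gateway URL from step 4); in the
# real setup this would come from the setup template's outputs.
APPROVAL_API = os.environ.get(
    "APPROVAL_API", "https://example.execute-api.us-east-1.amazonaws.com/prod/approve")

def build_approval_message(wait_url, user_email):
    """Compose the admin email body, embedding the pre-signed WaitHandle
    URL as a query parameter of the approval link (step 3)."""
    approve_link = APPROVAL_API + "?waiturl=" + urllib.parse.quote(wait_url, safe="")
    return ("A Service Catalog product launch by {} is awaiting approval.\n"
            "Approve: {}".format(user_email, approve_link))

def handler(event, context):
    """Custom-resource entry point: event['ResourceProperties'] carries the
    WaitUrl and EmailID parameters passed from the product template."""
    import boto3  # imported lazily so the sketch also runs outside Lambda
    props = event["ResourceProperties"]
    boto3.client("sns").publish(
        TopicArn=os.environ["TOPIC_ARN"],  # assumed env var holding the SNS topic ARN
        Subject="Approval required: Service Catalog product launch",
        Message=build_approval_message(props["WaitUrl"], props["EmailID"]),
    )
    # A real custom resource must also PUT a success response to
    # event["ResponseURL"] so CloudFormation can finish creating the resource.
```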

Build Steps:

Now that we’ve covered the steps, let’s build the required resources for the approval flow. I have attached an AWS CloudFormation template for your convenience so you can follow along. When you launch the template, you will be prompted to enter an email address for the approval flow. After stack completion, the following resources will be created:

SNS topic: An SNS topic along with the provided email subscription. You will receive an email asking you to confirm the subscription; confirm it to receive messages.

SNS notification function: A Lambda function to send the approval mail. Whenever a new product launch requires approval, this Lambda function will be called. This function will get the WaitHandle pre-signed URL and user email address as input.

Approval function: A Lambda function to notify the CloudFormation stack by sending the status of the WaitHandle pre-signed URL.

In addition to these resources, an API Gateway API and IAM roles will also be created.
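The approval function's core job is signaling the WaitCondition. The sketch below shows what that looks like in Python; the JSON body fields (Status, Reason, UniqueId, Data) are the standard wait-condition signal format, while the function names and field values are placeholders of mine, not the attached template's code.

```python
import json
import urllib.request

def build_signal(status="SUCCESS", reason="Approved by administrator"):
    """JSON body that CloudFormation expects when a WaitCondition is signaled."""
    return json.dumps({
        "Status": status,      # SUCCESS lets stack creation continue; FAILURE rolls it back
        "Reason": reason,
        "UniqueId": "admin-approval",
        "Data": "Product launch approved",
    })

def signal_wait_condition(wait_url, body):
    """PUT the signal body to the pre-signed WaitHandle URL. No AWS credentials
    are needed; the pre-signed URL itself carries the authorization."""
    req = urllib.request.Request(wait_url, data=body.encode("utf-8"), method="PUT")
    req.add_header("Content-Type", "")  # the URL is typically signed with an empty content type
    return urllib.request.urlopen(req)
```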

Note the ARN for the Lambda function from the output. You will need this later to test the setup.


To test the setup, you can use the attached sample CloudFormation template. This is a standard template provided by Amazon that deploys WordPress on AWS, but I’ve modified it to introduce the approval flow by adding three additional resources: WaitCondition, WaitConditionHandle, and NotificationFunction.

WaitCondition and WaitConditionHandle are used to pause the creation of a stack and to wait for a signal before continuing to create the stack. All other resources in the template depend on WaitCondition for approval status.

    WaitHandle:
      Type: 'AWS::CloudFormation::WaitConditionHandle'
    WaitCondition:
      Type: 'AWS::CloudFormation::WaitCondition'
      Properties:
        Handle:
          Ref: 'WaitHandle'
        Timeout: '43200'

NotificationFunction is a custom resource that triggers the Lambda function responsible for sending approval email.

    NotificationFunction:
      Type: Custom::NotificationFunction
      Properties:
        ServiceToken: '<REPLACE YOUR LAMBDA ARN>'
        Region: !Ref "AWS::Region"
        WaitUrl: !Ref WaitHandle
        EmailID: !Ref UserEmail

You’ll need to download the template and modify the NotificationFunction resource’s ServiceToken parameter to specify the ARN you obtained in the previous section. Once you have updated the Lambda ARN, you can add this template as a new product to your existing catalog or test the template in the CloudFormation console.

When the template has launched successfully, you’ll receive an email requesting approval to proceed, similar to this:

When you choose the approval link, the Lambda function behind the API will send a confirmation for WaitHandle to proceed with stack creation. Otherwise, the stack will be rolled back after the maximum wait time of 12 hours.


If you don’t receive the approval mail, check the SNS topic subscription status. Also, verify that you’ve specified the correct Lambda ARN in the template. Check Amazon CloudWatch logs for any exceptions or errors launching the stack. Additionally, you can check the following sources for general troubleshooting help with services such as Amazon SNS, API Gateway, and AWS Lambda:


You can now add a simple approval workflow to your Service Catalog stack by adding the three resources from the sample test template. For more information about managing portfolios, products, and constraints from an administrator console, check this documentation.

I hope this post and sample templates were useful in helping you extend AWS Service Catalog features. Feel free to leave your feedback or suggestions in the comments.

Testing AWS GameDay with the AWS Well-Architected Framework

By Ian Scofield, Juan Villa, and Mike Ruiz


GameDay is an immersive, team-based event we’ve hosted at AWS Summits and re:Invent over the past few years. The event has teams of players settling into a challenging—and hopefully entertaining—scenario as DevOps leads at Unicorn.Rentals, a popular startup minutes away from the very public launch of a widely anticipated product. For more information, see the GameDay website.

Of course, we have a lot going on behind the scenes to make GameDay work. Beyond all the enthusiastic acting and silly props, you’ll find a complex AWS infrastructure that includes a live score tracking engine, a single-instance load generator capable of dynamically varying the load over the course of the game, and various command and control functions. Overall, the infrastructure is simplistic in design but complex to operate, with room for improvement by incorporating the same best practices we encourage players to adopt during the course of the game.

Today, in an attempt to improve the player experience (and our quality of life), we have invited a team of AWS Partner Solutions Architects to review the GameDay architecture against a standard benchmark: the AWS Well-Architected Framework. The review team will work to understand the details of our architecture, ask detailed questions about our design and intent, and then deliver a document with prioritized findings.

In this post, we’ll cover the initial architecture review and the findings delivered from the review team. In future posts, we will share the process of making improvements and our plans to refine our architecture through continuous improvement and collaboration with AWS Solutions Architects.


Architecture Overview

We began the review session by providing the review team with an architectural overview of GameDay, using diagrams and other collateral to highlight various components and relationships where appropriate. To help you follow along, here’s a summary of the high-level details we shared regarding the architecture of GameDay:

The GameDay infrastructure runs in a master AWS account, with each team having their own player AWS account (Figure 1). Various components in the master account serve load to player accounts, and host other services such as the scoreboard and cost calculator.  The master account utilizes an IAM Cross-Account Role in each player account that gives it the required permissions to perform administrative tasks throughout the day.

Figure 1: Master – player account relationship

The master account has the following components:

  1. Scoreboard – This is a static site hosted in an Amazon Simple Storage Service (Amazon S3) bucket, written in JavaScript and HTML.
  2. Cost calculator – In order to encourage players to take cost optimization into account, we charge players for their Amazon Elastic Compute Cloud (Amazon EC2) utilization (as in the real world!). The cost calculator includes three AWS Lambda functions that deduct points proportional to their consumption.
  3. Amazon DynamoDB – We use several Amazon DynamoDB tables to hold team information, score information, generic game configuration values, and other supporting information that is used by the master account components.
  4. Load generator – This is the heart of the game implementation. It is made up of a single EC2 instance.  The load generator controls the game and initiates administrative actions.
    1. When player accounts are dynamically created, a message is posted to an Amazon Simple Notification Service (Amazon SNS) topic in the master account with a notification of the account creation.  On the load generator, PHP scripts run to do the account registration/provisioning based on the SNS messages.
    2. The load generator runs one process per team that initiates connections to the infrastructure running in each player’s account.
    3. The number of messages delivered to player accounts is scaled by creating additional processes per team within this load generator instance.

Figure 2 shows a high-level overview of the master account architecture:

Figure 2: Initial architecture


Deep Dive

Once they understood the architecture, the review team began a deep dive and asked clarifying questions on the various components based on the questions in the appendix of the Well-Architected Framework whitepaper.  In particular, they were very interested in manual operations (especially in the operation of the load generator), disaster recovery (specifically the recovery timing for assets lost before an event), and the security of the application as a whole (specifically the security of customer data and credentials). On the whole, the review was comprehensive and took approximately three hours to complete.

Review Findings

The review team consolidated the data and provided us with a written report that outlined the various findings.  In addition, they provided us with notes and prioritized recommendations for each finding, which would serve as a starting point for us to develop our remediation plan.

Looking at GameDay through the lens of the Well-Architected Framework, it was obvious that there were many opportunities for improvement. The AWS review team prioritized the findings into two sets: critical and recommended. Most of the findings were classified as recommended—these don’t pose an immediate risk and will be incorporated into our roadmap.  However, the three elements that were identified as critical needed to be addressed immediately.

Here’s the text of the findings from the review team:


SEC11. How are you managing keys?

Critical finding:

The legacy administrative scripts for GameDay use AWS access keys and secret access keys, which are stored in plain text in an Amazon DynamoDB table.


The legacy administrative scripts require the use of an AWS access key and secret access key in order to interact with the AWS API on the player’s account, and do not support cross-account roles. Currently, these keys are being stored in plain text in an Amazon DynamoDB table, which the scripts query to retrieve the keys.  AWS access keys and secret access keys are long-lived credentials that do not expire until they are explicitly revoked. Storing them in plain text increases the probability of the keys being compromised, and in the current design, any person with read access to the DynamoDB table (through the application or application administrative interface, indirectly via backups or logs, or directly via the Amazon DynamoDB API) can read and exploit the keys.


Modify the legacy administrative scripts to support cross-account roles in order to avoid the need to store and use AWS access keys and secret access keys.
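A cross-account-role version of the scripts might look like the sketch below: instead of reading long-lived keys from DynamoDB, the master account assumes the IAM role provisioned in each player account and works with short-lived credentials. The role name and session name are assumptions; the post does not name the actual cross-account role.

```python
def player_role_arn(account_id, role_name="GameDayAdminRole"):
    """Build the ARN of the cross-account role in a player account.
    The role name is hypothetical; the post does not name the real role."""
    return "arn:aws:iam::{}:role/{}".format(account_id, role_name)

def client_for_player(account_id, service="ec2"):
    """Return a boto3 client backed by short-lived, assumed-role credentials
    instead of long-lived access keys stored in plain text."""
    import boto3  # imported lazily so the sketch loads without the SDK installed
    creds = boto3.client("sts").assume_role(
        RoleArn=player_role_arn(account_id),
        RoleSessionName="gameday-admin",
    )["Credentials"]
    return boto3.client(
        service,
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
```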

REL 7. How are you planning for disaster recovery?

Critical finding:

There is no clearly defined disaster recovery plan, recovery point objective (RPO), or recovery time objective (RTO).  Additionally, because there is no plan, it cannot be periodically tested against RPO and RTO objectives.


GameDay was originally conceived as a set of instructions players would iteratively execute in a minimally configured account. As tooling and additional features were added over time, no one stepped back to consider the entire stack and how to protect it from accidental, malicious, or environmental faults. Although it’s just a game, GameDay customers invest a whole day to attend and deserve as good an experience as can be delivered; having to scramble to invent a recovery process in the run-up to an event or, worse, in the middle of a live game would be a bad experience for all involved.


  1. Define a disaster recovery plan, including RPO and RTO.
  2. Periodically test the plan against the defined objectives.

REL 2. How does your system withstand component failures?

Critical finding:

Currently the load generator is a single instance in a single Availability Zone, and no recovery options have been configured.


If this load generator instance were to fail or become unavailable, either due to a hardware fault or in the (unlikely) event of an Availability Zone failure, the game would no longer be able to continue, because there is no automated process to recover the failed node.  The load generator is currently not in an Auto Scaling group, nor does it have EC2 instance recovery configured.  Additionally, the instance has been configured manually, so no image or script captures all the necessary settings.  Lastly, all state is stored locally on the instance and will need to be broken out when implementing a multi-instance architecture.  Storing state externally will also alleviate the issue of losing state in the event of an instance failure.


  1. Implement an EC2 Auto Scaling group with a launch configuration by creating an Amazon Machine Image (AMI) that contains all necessary components.  Optionally, you can use user data to pull down the necessary components at boot.
  2. Configure your Auto Scaling group to span multiple Availability Zones to increase the resiliency and fault tolerance of your architecture.
  3. Make your instances stateless to reduce the chance of losing information in the event of a failure.
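The first two recommendations can be sketched as the parameters you would hand to the Auto Scaling API. The names, sizes, and instance type below are placeholders of mine, not values from the review; the dicts would be passed to boto3's `autoscaling` client via `create_launch_configuration(**...)` and `create_auto_scaling_group(**...)`.

```python
def launch_config_params(ami_id):
    """Parameters for a launch configuration whose AMI is assumed to bake in
    all load-generator components (recommendation 1)."""
    return {
        "LaunchConfigurationName": "gameday-load-generator-lc",
        "ImageId": ami_id,
        "InstanceType": "c4.xlarge",  # placeholder instance type
    }

def asg_params(zones):
    """Parameters for an Auto Scaling group spanning multiple Availability
    Zones (recommendation 2); names and sizes are illustrative only."""
    return {
        "AutoScalingGroupName": "gameday-load-generator",
        "LaunchConfigurationName": "gameday-load-generator-lc",
        "MinSize": 1,
        "MaxSize": 2,
        "AvailabilityZones": zones,  # more than one AZ for fault tolerance
    }

# Example (not executed here):
# boto3.client("autoscaling").create_launch_configuration(**launch_config_params("ami-0abc1234"))
# boto3.client("autoscaling").create_auto_scaling_group(**asg_params(["us-east-1a", "us-east-1b"]))
```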


Next Steps

Now that the review team has given us this feedback and the list of critical items that need to be resolved, we need to construct our remediation plan to correct these deficiencies.  In our next blog post, we’ll go through this remediation plan and explain in depth how we plan to correct these items to improve the security and reliability of the GameDay application.

Announcing the Addition of Four AWS Management Tools to the AWS Service Delivery Program

By Ben Perak, APN Global Segment Leader


The Amazon Web Services (AWS) Service Delivery Program launched in November 2016 with one simple goal: to help customers easily identify AWS Partner Network (APN) Partners with a successful track record of delivering specific AWS services and a demonstrated ability to provide expertise in a particular service or skill area.

Nineteen AWS services are included in the AWS Service Delivery Program, including many database services, compute services, content delivery services, security services, serverless computing services, and analytics services. This program also highlights APN Partners who deliver workloads in the AWS GovCloud (US) Region.

Today, we’re excited to announce the addition of four AWS Management Tools to the program: AWS CloudFormation, Amazon EC2 Systems Manager, AWS Config, and AWS CloudTrail.

Raising the Bar in Cloud Management with AWS Management Tools Service Delivery Partners

Cloud operations are quickly shifting from ‘How do I do it?’ to ‘How do I do it BETTER?’ That is why AWS has developed an extensive portfolio of Management Tools that provide APN Partners and customers with leading cloud-management capabilities. These services, whether used individually or combined as an end-to-end solution, provide cloud operations teams with the solutions they need to keep pace with their agile businesses. Whether it is provisioning resources or groups of resources called stacks via AWS CloudFormation, pushing OS patches at scale with Amazon EC2 Systems Manager, tracking configuration changes in a highly regulated environment using AWS Config, or keeping track of user activity with AWS CloudTrail, these services have you covered. Combined with our APN Partners’ deep domain knowledge, customers can be assured they are getting a world-class cloud management solution.

“AWS Management Tools solutions let our customers access the big advantage of cloud: the ability to provision, query and compare the current to desired state,” says Flux7 Chief Technology Officer Ali Hussain. “These services allow us to easily move from an early stage proof of concept to an enterprise-ready product, adding in compliance, security, and long-term maintenance controls.”

Congratulations to our Launch Partners:

The following APN Consulting Partners have demonstrated their ability to raise the bar in delivering results with these services and have become Management Tools Service Delivery launch partners:

AWS CloudFormation Partners

  • 2nd Watch
  • Cognizant
  • Datapipe
  • Flux7
  • Foghorn Consulting
  • Stelligent

AWS CloudTrail Partners

  • 2nd Watch
  • Cloudreach
  • Cognizant
  • Datapipe
  • Flux7
  • Foghorn Consulting
  • Stelligent

AWS Config Partners

  • 2nd Watch
  • Cloudreach
  • Cognizant
  • Flux7
  • Stelligent

Amazon EC2 Systems Manager Partners

  • Cloudnexa
  • Cloudreach
  • Cloudticity
  • Flux7
  • Logicworks
  • REAN Cloud
  • Stelligent

Why should APN Consulting Partners with expertise in Management Tools join?

Joining the program enables you to promote your firm as an AWS-validated expert in AWS Management tools. By becoming an AWS Management Tools Delivery Partner, with a focus on one or more of the included services, you can increase your firm’s visibility to customers seeking your type of expertise through a number of channels, such as the AWS Service Delivery website. Additionally, you’ll be distinguished as an expert in the applicable service in the Partner Solutions Finder and will be featured on the services partner page.

What are the requirements?

In addition to meeting the minimum requirements of the program listed on this page, your company must pass service-specific verification of customer references and a technical review. This instills confidence in prospective customers that they are working with partners who provide recent and relevant experience.

Want to learn more?

Learn more about the Service Delivery Program and the partners participating in it by visiting the Service Delivery Program homepage. If you are a partner and would like to join the AWS Service Delivery Program, apply within the APN Portal.

Meet our Financial Services Competency Partners at the AWS New York Summit

By Renata Melnyk, AWS Financial Services Competency Program Manager


AWS is enabling scalable, flexible, and cost-effective solutions for banking and payments, capital markets, and insurance organizations of all sizes, from startups to global enterprises. To support the seamless integration and deployment of these solutions, AWS established the Financial Services Partner Competency Program to identify AWS Partner Network (APN) Consulting and Technology partners with deep industry experience to assist our customers in their migration to the AWS Cloud. AWS Financial Services Competency Partners have demonstrated industry expertise, readily implemented solutions that align with AWS architectural best practices, and have AWS-certified staff.

This year at the AWS NY Summit, some of our AWS Competency Partners are demonstrating the unique and innovative work they’ve done with customers. If you are attending the NY Summit on August 14th, don’t miss these sessions:


AWS Financial Services Competency Partners and the 2017 AWS New York Summit


Summit Keynote – FICO

Our AWS Financial Services Competency Partner FICO will be presenting during the Summit keynote this year. FICO’s CIO, Claus Moldt, will speak about how the company uses data, advanced analytics, and mathematical algorithms to help clients transform their business. FICO is a leading analytics software company, helping businesses in more than 90 countries make better decisions that drive higher levels of growth, profitability, and customer satisfaction.

As an APN Technology Partner, FICO was also one of the first APN Partners to achieve the AWS Financial Services Competency in the Risk Management category. This category validates solutions that help financial institutions identify, model, and assess risk; ensure monitoring and compliance with industry regulations; or help in surveillance or fraud monitoring.

Migration Journey of AIG’s Global Claims Web Application from Mainframe to Public Cloud – Deloitte

The New York Summit will also feature AWS Financial Services Competency Partners through sessions, such as Migration Journey of AIG’s Global Claims Web Application from Mainframe to Public Cloud, with speakers from AWS, AIG, and Deloitte.

This session will detail the successful migration of a critical AIG business application from mainframe to the cloud. Global insurance companies such as AIG are taking the lead in ensuring that their business applications are agile and cost efficient. AWS provided the necessary services to enable AIG to architect an optimum solution to migrate their application from private data centers to AWS. AIG collaborated with Deloitte, an AWS Financial Services Competency Partner and Premier APN Consulting Partner, and with AWS teams to enable successful execution of the initiative, which entailed an end-to-end application migration with outcomes that included operational cost optimization, enhanced application performance, and flexibility improvements. AWS offered a variety of architectural choices, allowing the AIG project teams to structure a migration in the most effective way. The migration and right-sizing helped AIG’s application team realize substantial cost savings through the reduction of compute costs and reduction of infrastructure footprint, which has lowered operation costs. Over the course of 12 months, the program team developed and implemented a migration roadmap, a solution architecture leveraging AWS cloud-native services, a testing strategy, operationalization for production, and a cutover plan.

Machine Learning in Capital Markets – IHS Markit

Financial services companies are using machine learning to reduce fraud, streamline processes, and improve their bottom line. AWS provides tools that help them easily use AI tools like MXNet and TensorFlow to perform predictive analytics, clustering, and more advanced data analyses. If you’re at the New York Summit, stop by this session to learn how IHS Markit, an Advanced APN Technology Partner and AWS Financial Services Competency Partner, has used machine learning on AWS to help global banking institutions manage their commodities portfolios. You will also learn how the Amazon Machine Learning service can take the hassle out of AI.

To learn more about AWS Financial Services Competency Partners, please visit our AWS Financial Services Partner Solutions page.

About the AWS Competency Program

The AWS Competency Program is designed to highlight APN Partners who have demonstrated technical proficiency and proven customer success in specialized solution areas. Attaining an AWS Competency allows partners to differentiate themselves to customers by showcasing expertise in a specific solution area.

Get AWS Certified at the AWS Summit – New York

Post by Lisa Learoyd, Head of AWS Global Events


Join us at the AWS Summit – New York

Join us for the AWS Summit—New York on August 14 at the Javits Center to get access to AWS education, training, and certification exams. Register for the AWS Summit for free.

AWS Training can help APN Partners develop deeper AWS knowledge and skills to more effectively help customers leverage AWS, and AWS Certifications can help you gain visibility and credibility for your proven experience working with AWS. Certifications also help fulfill APN Consulting Partner Requirements.


Get AWS Certified

We are offering onsite AWS Certification exams at the AWS Summit—New York. Onsite certification exams are offered from 9am-5pm on August 14 at the AWS Summit. We encourage you to schedule your exam in advance and plan enough time to complete your exam. Associate exams take 80 minutes to complete and cost $150.00. The Professional and Specialty Exams take 170 minutes to complete and cost $300.00. Learn more about certification exams at the AWS Summit—New York here.

Please be aware registration for the AWS Summit—New York is required to take an onsite certification exam. Register here for the summit for free.

Onsite AWS Certification Exams – Schedule your exam today. Exams are offered on August 14; select the “AWS Summit New York” testing location when you register.


AWS Certification Activities

AWS Certified individuals (including those who pass an exam onsite) get access to special event benefits. Come hang out in the AWS Certification Lounge, where you can recharge and meet other certified professionals. In addition, join us for food and drinks at the AWS Certification Appreciation Reception, August 14 from 6:00pm-7:00pm. Learn more here.

We look forward to seeing you in New York!

How Implementing a Real World Evidence Platform on AWS Drives Real World Business Value

Guest post by Scot Johnson, a Solution Architect for ConvergeHEALTH by Deloitte, part of Deloitte Consulting LLP’s Innovation group (DCI).

In light of new laws such as the 21st Century Cures Act and evolving scientific insights, life sciences companies are being pressed to demonstrate clinical value to payers and health authorities.  As a result, life sciences companies are shifting the way they develop and bring their pharmaceutical and medical products to market through the application of Real World Evidence (RWE). In order to discover, optimize, and demonstrate the value of RWE, life sciences companies are embracing new strategies, deeper partnerships, and innovative technology solutions. Industry-wide shifts, such as the move from volume-based to value-based payment models and more personalized healthcare, have helped fuel interest in RWE to demonstrate the value of drug and device innovations.

In this blog post, I will discuss the business drivers behind the rising importance of RWE to life sciences companies for research and product development, and how Deloitte’s ConvergeHEALTH Evidence Lifecycle Management Platform on the AWS Cloud enables RWE use cases to drive real world business value.

Delivering Value to Pharmas through RWE

The biopharmaceutical landscape has transformed due to significant advancements in science, increases in the amounts and types of data, shifts in market economics, legislation, and reimbursements. The rise of data volumes and disparate data sources, including health records, lab results, sensors, images, genomics, and claims data, has resulted in a shift from traditional research and development approaches to new collaborative models that integrate non-traditional partners across geographically dispersed resources and participants. A growing number of life sciences organizations are accommodating these disruptions with scalable on-demand storage and compute capabilities necessary to accelerate the shift toward data-driven insights and end-to-end evidence management.

Figure 1: Business value increases as RWE is leveraged across research, clinical development, and commercialization business functions

Deloitte’s ConvergeHEALTH Evidence Lifecycle Management Platform

In response to the growing demand for data-driven insights and evidence lifecycle management (ELM), Deloitte developed the ConvergeHEALTH Evidence Lifecycle Management Platform on the AWS Cloud.  The ConvergeHEALTH ELM Platform leverages an integrated AWS Cloud environment preconfigured with the relevant data and tools.  The ConvergeHEALTH ELM Platform consists of three main configurable layers, designed to help our clients in their efforts to quickly realize the promise of RWE and big data analytics: data layer, analysis layer, and knowledge layer.  The platform’s flexible, modular design and open architectures empower domain experts and facilitate business function integration into existing and emerging plug-in analytic services from vendors and open source communities.

High-Level Architecture

Figure 2: Reference architecture for the ConvergeHEALTH ELM Platform

Data layer – Facilitates the organization, governance, and usability of disparate datasets in support of the life science organization’s mission.

Analysis layer – Houses the analytic tools and processes for data exploration and analysis.

Knowledge management layer – Defines and powers RWE governance policies, tools, roles, and processes across the platform.

Platform Products and Accelerators

The ConvergeHEALTH ELM Platform includes several components to give enterprises visibility into information that exists within the organization and enable stakeholders to collaborate across their enterprise.

Cohort Insight – Cohort selection service (available as an API) with a web UI that allows cohort querying against datasets stored in the AWS Cloud. Allows users to iteratively apply inclusion/exclusion cohort selection criteria and facilitate interactive cohort exploration.

Cohort Integrator – Cohort synchronization service (available as an API) that synchronizes cohorts across multiple datasets, accelerating data exploration and reducing time to insights.

Data Asset Explorer & Characterization Service – A microservices API that collects domain-specific dataset profiling results for data search, discovery, and profiling.

Research Trust for Big Data – A standards-based data linkage and semantic governance model and repository built on the big data technologies Hadoop, Hive, Impala, and Spark.  Research Trust facilitates the organization, linkage, semantic standardization, and exploration of data across multiple datasets.

ConvergeHEALTH Evidence Lifecycle Management Platform Implementation on AWS

Figure 3: High-level AWS reference architecture for the ConvergeHEALTH ELM Platform

The following section highlights some key concepts of the ConvergeHealth ELM Platform and its implementation on AWS.

Security Elements

The ConvergeHEALTH ELM Platform manages the protection and handling of patient health information (e.g., data encryption, authorized access, and the transport of patient data across networks and borders) through AWS services.

HIPAA Considerations

AWS aligns its HIPAA risk management program with FedRAMP and NIST 800-53, which are higher security standards that map to the HIPAA Security Rule. AWS provides a standard Business Associate agreement (called the “Business Associate Addendum,” or BAA) for customers deploying on the platform.  Customers can use any AWS service in an account designated as a HIPAA Account under the BAA, but they must only process, store, and transmit protected health information (PHI) using certain HIPAA Eligible Services defined in the BAA.


Consistent with the AWS BAA, PHI on the ConvergeHEALTH ELM Platform is encrypted both in-transit and at-rest. Data at-rest on Amazon Elastic Block Store (Amazon EBS) is encrypted using AWS Key Management Service (AWS KMS), and data in-transit is encrypted using 256-bit SSL/TLS.
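As a minimal sketch of the at-rest encryption pattern described above, an EBS volume can be requested with encryption under a KMS key. The helper below only assembles the parameters that would be passed to boto3’s `ec2.create_volume`; the key alias is a hypothetical placeholder.

```python
def encrypted_volume_request(az, size_gib, kms_key_alias):
    """Build create_volume parameters for a KMS-encrypted EBS volume."""
    return {
        "AvailabilityZone": az,
        "Size": size_gib,
        "Encrypted": True,          # encrypt data at rest
        "KmsKeyId": kms_key_alias,  # KMS key used for the volume
        "VolumeType": "gp3",
    }

params = encrypted_volume_request("us-east-1a", 100, "alias/elm-platform-key")
# With credentials configured, this would be:
# boto3.client("ec2").create_volume(**params)
```

Because `Encrypted` is set at creation time, every snapshot and restored volume derived from it inherits encryption, which keeps the at-rest guarantee intact across the volume’s lifecycle.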

Identity and Access Management

The AWS environment for the ConvergeHEALTH ELM Platform relies on AWS Identity and Access Management (IAM) to authorize, authenticate, and enforce user policies. IAM policies and roles allow resource access to be fine-tuned for the myriad roles within the enterprise.

Configuration Management

Virtual Private Clouds

The ConvergeHEALTH ELM Platform leverages Amazon Virtual Private Cloud (Amazon VPC), which allows customers to launch their AWS resources into a virtual network that closely resembles a traditional on-premises network, combined with the benefits of the scalable AWS infrastructure.

Automated Deployment

Using AWS CloudFormation templates, we’ve pre-defined and automated the deployment of the Cohort Insight application (see Figure 4). The CloudFormation template for Cohort Insight includes launch configurations and Auto Scaling groups of EC2 instances coupled with Elastic Load Balancing. The template provides rapid, repeatable, and reliable on-demand deployment of Cohort Insight to handle unpredictable analytic workloads, reducing query response times and shortening time to insight.

Figure 4: AWS CloudFormation template for Cohort Insight
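A deployment like this can also be driven programmatically through the CloudFormation API. The sketch below assembles `create_stack` arguments as boto3 expects them; the stack name, template URL, and parameter names are hypothetical stand-ins, not the actual Cohort Insight template.

```python
def build_stack_request(stack_name, template_url, min_size, max_size):
    """Assemble create_stack arguments for an Auto Scaling web tier."""
    return {
        "StackName": stack_name,
        "TemplateURL": template_url,
        "Parameters": [
            {"ParameterKey": "MinSize", "ParameterValue": str(min_size)},
            {"ParameterKey": "MaxSize", "ParameterValue": str(max_size)},
        ],
        # Needed if the template creates IAM roles for the instances.
        "Capabilities": ["CAPABILITY_IAM"],
    }

request = build_stack_request(
    "cohort-insight-prod",
    "https://s3.amazonaws.com/example-bucket/cohort-insight.yaml",
    min_size=2, max_size=10,
)
# With credentials configured, this would be:
# boto3.client("cloudformation").create_stack(**request)
```

Keeping the template in S3 and parameterizing the Auto Scaling bounds is what makes the deployment repeatable: the same template serves small test stacks and large production stacks.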

High Availability

AWS services have been designed to natively leverage multiple Availability Zones to build highly available, fault-tolerant, and scalable solutions. For customers who need highly available RWE applications with the uptime necessary to meet internal stakeholders’ SLAs, the ConvergeHEALTH architecture can be configured within a single AWS Region composed of multiple Availability Zones, or across multiple AWS Regions, each with multiple Availability Zones. Each Availability Zone consists of one or more discrete data centers with redundant power, networking, and connectivity.

The ConvergeHEALTH ELM Platform also uses native service features for high-availability deployment such as the Amazon Relational Database Service (Amazon RDS) and its out-of-the-box high availability and replication features.

Example Big Pharma Deployment


A Big Pharma organization was seeking opportunities to shift to value-based, personalized health care to help patients live longer, healthier lives.

Cloud Strategy

Their cloud strategy provides end-to-end visibility across the information value chain with connected processes and platforms across all functions to improve R&D productivity, product launch effectiveness, and overall operational excellence.


They chose the ConvergeHEALTH ELM Platform on AWS to serve as the global foundation for facilitating information discovery, access, analysis, governance, collaborations, and partnerships, both internally and externally.


The ConvergeHEALTH ELM Platform provides transparency into available data across the enterprise, facilitating the sharing of insights and enabling researchers to collaborate.  Multiple disparate datasets are connected to gain new insights. And by developing new coordinated partner strategies, they are able to build networks of strategic partners with advanced and integrated expertise.

Realizing the Business Benefits from RWE and Next Steps

Managing the shift from volume-based to value-based payment models and the move to personalized health care will require more agile real-world evidence capabilities along with new strategies, partnerships, and technologies. Pharmas that manage the transformative shift to operationalize RWE will be positioned to realize its potential and business value. Deloitte focuses on helping companies achieve these results through its relationship with AWS, blending our skills and capabilities to make our clients’ transformative shift to RWE a manageable journey.

To learn more about the ConvergeHEALTH Evidence Lifecycle Management Platform, see the ConvergeHEALTH website.

About Deloitte

Deloitte professionals guide traditional health care and life sciences companies and new market entrants in navigating the complexities of the US and global health care system. As market, political, and legislative changes alter the industry, we help our clients develop innovative and practical solutions.

As used in this document, “Deloitte” means Deloitte Consulting LLP, a subsidiary of Deloitte LLP. Please see about for a detailed description of our legal structure. Certain services may not be available to attest clients under the rules and regulations of public accounting.

The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.

Getting the “Ops” Half of DevOps Right: Automation and Self-Service Infrastructure

Next generation Managed Services Providers (MSPs) are able to offer customers significant value today, above and beyond the basics of reactive notifications or outsourced helpdesk services that were common with early, traditional MSPs. Today’s cloud-evolved MSPs are able to drive revolutionary business outcomes, including full DevOps transformations. This week in the MSP Partner Spotlight series, we hear from Jason McKay, CTO of Logicworks, as he writes about the importance of automation in operations and helping customers meet their DevOps goals.



Getting the “Ops” Half of DevOps Right: Automation and Self-Service Infrastructure

By Jason McKay, CTO of Logicworks

DevOps has been a major cultural force in IT for the past ten years. But a gap remains between what companies expect to get out of DevOps and the day-to-day realities of working on an IT team.

Over the past ten years, I’ve helped hundreds of IT teams manage a DevOps cultural shift as part of my role as CTO of Logicworks. Many of the companies we work with have established a customer-focused culture and have made some investments in application delivery and automation, such as building a CI/CD pipeline, automating code testing, and more.

But the vast majority of those companies still struggle with IT operations. Their systems engineers spend far too much time putting out fires and manually building, configuring, and maintaining infrastructure. A recent survey found that 33 percent of companies take more than a month to deliver new infrastructure, and that more than half have no access to self-service infrastructure at all. The result is that systems engineers burn out quickly, developers are frustrated, and new projects are delayed. Add to the mix a constantly shifting regulatory landscape and dozens of new platforms and tools to support, and chances are that your operations team is pretty overwhelmed.

Migrating to Amazon Web Services (AWS) is often the first step to improving infrastructure and security operations for DevOps teams. AWS is the foundation for infrastructure delivery for the largest and most mature DevOps teams in the world, but running IT operations on AWS the same way you did on traditional infrastructure is simply not going to work.

The Power of Automation

Transforming operations teams for DevOps begins with a cultural shift in the way engineers perceive infrastructure. You’ve no doubt heard it before: Operations can no longer be the culture of “no.” Keeping the lights on is no longer enough.

The key technology and process change that supports this cultural change is infrastructure automation. If you’re already running on AWS, there is no better cloud service for building a mature infrastructure automation practice—it integrates with what your developers are doing to automate code deployment, and makes it easier for your company to launch and test new software.

AWS has all the tools you need. But you also need people who know how to use those tools. That’s what Logicworks helps companies do. We are an extension of our clients’ IT teams, helping them figure out IT operations on AWS in this new world of DevOps and constant change.

Our corporate history mirrors the journey most companies are going through today. Ten years ago, the engineers at Logicworks also spent most of their time nurturing physical systems back to health, responding to crises, and manually maintaining systems. When Amazon Web Services launched, we initially wondered if we would have a place in this new paradigm. Where does infrastructure maintenance fit in a world where companies want infrastructure “out of the way” of their fast-moving development teams? Then we realized that not only could we keep managing infrastructure, but we could do something an order of magnitude more sophisticated and elegant for our clients. We started to approach the business not as racking and stacking hardware, but instead using AWS to create responsive and customized infrastructure without human intervention. That really changed the business model.

Today our engineers spend their time writing and maintaining automation scripts that orchestrate AWS infrastructure, not manually maintaining the thousands of instances under our control. In many ways, we have become a software company. We write custom software for each client that makes it easier for their operations teams to deliver AWS infrastructure quickly and securely. Of course we still have 24x7x365 NOC teams, networking professionals, DBAs, etc., but all of our teams approach every infrastructure problem with this question: How can we perform this (repetitive, manual) task smarter? How can we stop doing this over and over and focus on solutions that make a substantial difference for our customers?

Infrastructure Automation in Practice

Many of the best practices of software development — continuous integration, versioning, automated testing — are now the best practices of systems engineers. In enterprises that have embraced the public cloud, servers, switches, and hypervisors are now strings and brackets in JavaScript Object Notation (JSON). The scripts that spin up an instance or configure a network can be standardized, modified over time, and reused. These scripts are essentially software applications that build infrastructure, and they are maintained much like any other piece of software: they are versioned in GitHub, engineers patch the scripts or containers rather than the hardware, and those scripts are tested again and again on multiple projects until they are perfected.

An example of an instance build-out process with AWS CloudFormation and Puppet.

In practice, infrastructure automation usually addresses four principal areas:

  • Creating a standard operating environment or baseline infrastructure template in AWS CloudFormation that lives in a code repository and gets versioned and tested.
  • Coordinating with security teams to automate key tools, packages, and configurations, usually in a configuration management tool like Puppet or Chef and Amazon EC2 Systems Manager.
  • Delivering infrastructure templates to developers in the form of a self-service portal, such as AWS Service Catalog.
  • Ensuring that all templates and configurations are maintained consistently over time and across multiple environments/accounts, usually in the form of automated tests in a central utility hub that can be built with Jenkins, Amazon Inspector, AWS Config, Amazon CloudWatch, AWS Lambda, and Puppet or Chef.

Together, this set of practices makes it possible for a developer to choose an AWS CloudFormation template in AWS Service Catalog, and in minutes, have a ready-to-use stack that is pre-configured with standard security tools, their desired OS, and packages. Your developers only launch approved infrastructure, never have to touch your infrastructure configuration files, and no longer wait a month to get new infrastructure. Imagine what your developers could test and accomplish when they’re not hampered by lengthy operations cycles.
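That self-service launch can be sketched against the Service Catalog API. The helper below builds the arguments boto3’s `servicecatalog.provision_product` expects; the product ID, artifact ID, stack name, and parameters are hypothetical.

```python
def provision_request(product_id, artifact_id, name, params):
    """Assemble arguments for servicecatalog.provision_product (boto3)."""
    return {
        "ProductId": product_id,
        "ProvisioningArtifactId": artifact_id,  # the product version
        "ProvisionedProductName": name,
        "ProvisioningParameters": [
            {"Key": k, "Value": v} for k, v in params.items()
        ],
    }

req = provision_request(
    "prod-abc123", "pa-xyz789", "dev-stack-jane",
    {"InstanceType": "t3.micro", "Environment": "development"},
)
# With credentials configured, this would be:
# boto3.client("servicecatalog").provision_product(**req)
```

Because the developer only supplies the parameters the product exposes, everything else in the stack (security tooling, OS baseline, network placement) stays under the operations team’s control.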

This system obviously has a big impact on system availability and security. If an environment fails or breaks during testing, you can just trash it and spin up another testing stack. If you need to make a change to your infrastructure, you change the AWS CloudFormation template or configuration management script, and relaunch the stack. This is the true meaning of “disposable infrastructure”, also known as “immutable infrastructure”—once you instantiate a version of the infrastructure, you never change it. Since your infrastructure is frequently replaced, the potential for outdated configurations or packages that expose security vulnerabilities is significantly reduced.

Example of AWS Service Catalog.

This is why the work we do at Logicworks to automate infrastructure is so appealing to companies in risk-averse industries. Most of our customers are in healthcare, financial services, and software-as-a-service because they want infrastructure configurations consistently applied (and provable to auditors across multiple accounts) and changes clearly documented in code. Automated tests ensure that any configuration change is either proactively corrected or alerts a 24×7 engineer.

Responsibilities and External Support

If you’re managing your own infrastructure, your operations team is responsible for everything up to (and including) the Service Catalog layer. Your developers are responsible for making code work. That creates a nice, clear line of responsibility that simplifies communication and usually makes developer and ops relationships less fraught.

If you’re working with an external cloud managed service provider, look for one that prioritizes infrastructure automation. Companies that work with Logicworks appreciate that we have abandoned the old style of managed services. Long gone are the days when you paid a lot of money just to have a company alert you after something went wrong, or when a managed service provider was little more than an outsourced help desk. AWS fundamentally changed the way the world looks at infrastructure. AWS also has changed what companies expect from outsourced infrastructure support, and has redefined what it means to be a managed services provider. Logicworks is proud to have been among the first group of MSPs to become an audited AWS MSP Partner and to have earned the DevOps and Security Competencies, among others. We have evolved to continue to add more value for our customers and to help them achieve DevOps goals from the operations side—and not just to keep the lights on.

Whether you outsource infrastructure operations or keep it in-house, the most important thing to remember is that you cannot create a culture that innovates at a higher velocity if your AWS infrastructure is built and maintained manually. Don’t ignore operations in your enthusiasm to build and automate code delivery. Prioritize automation for your operations team so that they can stop firefighting and start delivering value for your business.

Managed Security and Continuous Compliance

As we continue our MSP Partner Spotlight series, let’s dive into managed security, continuous compliance, and the convergence of what have traditionally been the separate focuses of Managed Service Providers (MSPs) and Managed Security Service Providers (MSSPs). A next-generation MSP must have a deep understanding of their customers’ security and compliance needs and possess the ability to deliver solutions that meet these needs. This week we hear from APN Premier Consulting Partner, MSP Partner, and Competency Partner, Smartronix, on how they approach this for their customers.


Managed Security and Continuous Compliance:

Next Generation Autonomic Event Based Compliance Management

By Robert Groat, Executive Vice President – Technology and Strategy at Smartronix

One of the least understood and often overlooked benefits of deploying cloud services is the ability to transform and operationalize security compliance. This means that services native to the cloud can help assess, enforce, remediate, and report on security compliance semi-autonomously. Every action that effects a change in AWS, from the initial creation of the environment, to provisioning and deprovisioning resources, to changes made to even the most mundane setting, is carried out via an API service call, and every API service call is logged and audited as an event.

AWS has enabled native capabilities that allow you to respond programmatically to these events. In effect, you can use automation such as AWS CloudFormation and AMIs to create an environment that is compliant at creation, and thereafter can have an autonomic response to events to enable remediation, self-healing, reporting, or systematic incident response capabilities. Essentially, our customers’ environments remain continuously compliant via programmatic management.

Smartronix has been working in cooperation with AWS since 2008. Our initial infrastructure development efforts focused on creating reusable templates that incorporated security best practices, followed by a combination of proactive and reactive continuous monitoring, alerting, trouble ticket generation, and manual remediation. AWS Lambda, introduced in 2014, has been a key enabler for reaching the next level.

Lambda is a serverless (zero-management) service that connects events with algorithms written as Lambda functions. Once an event is identified as meaningful—for example, a boundary configuration change—we can write a Lambda function that executes automatically whenever the event occurs.

The other key enabler is AWS Config, a native service that helps you continuously record, monitor, compare, and react to changes in your environment. We can now associate custom AWS Config rules with Lambda functions that enforce compliance. For example, if policy dictates encrypted root volumes, then we can monitor server launch events and enforce these policies automatically. If an attempt is made to create an instance with an unencrypted root volume, the action can be remediated by either quarantining or deleting the resource via the AWS Lambda function.
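The encrypted-root-volume rule described above can be sketched as the evaluation logic of a custom AWS Config rule. This is a simplified illustration: a real rule handler would parse the full Config `invokingEvent`, read encryption from the mapped EBS volume’s configuration, and report the verdict back via `config.put_evaluations`. The `rootDevice`/`encrypted` shape below is a hypothetical stand-in.

```python
import json

def evaluate_root_volume(configuration_item):
    """Return a compliance verdict for an EC2 instance's root volume.

    Assumes a simplified configuration item carrying an `encrypted`
    flag on the root device (a stand-in for the real Config schema).
    """
    if configuration_item["resourceType"] != "AWS::EC2::Instance":
        return "NOT_APPLICABLE"
    root = configuration_item["configuration"]["rootDevice"]
    return "COMPLIANT" if root.get("encrypted") else "NON_COMPLIANT"

def lambda_handler(event, context):
    # Config delivers the changed resource as JSON inside invokingEvent.
    item = json.loads(event["invokingEvent"])["configurationItem"]
    compliance = evaluate_root_volume(item)
    # A real rule would now call boto3.client("config").put_evaluations(...)
    # with this verdict, and a remediation function could quarantine or
    # delete the non-compliant resource.
    return compliance
```

Keeping the evaluation logic in a pure function (separate from the handler) also makes the rule easy to unit test without AWS credentials.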

Compliance actions can be reactive, such as when privileged account usage is identified, automatically verifying that an associated trouble ticket exists before authorizing the request. Other compliance actions can be scheduled. For example, certain rules can run every 24 hours to monitor license compliance, automate backups, or enforce tagging on deployed resources.

Speaking of tagging, your nascent library of Lambda functions should automate, reinforce, and be advised by your tagging strategy. That tagging strategy should help you differentiate activities within your compliance functions. Smartronix refers to this process as Attribute-Based Service Management. Lambda compliance functions can then behave differently based on tags. An instance tagged “environment = development” may not need the same compliance remediation as one tagged “environment = production”. Bringing this strategy full circle, you can actually write Lambda functions that enforce a compliance policy dictating that all deployed resources must include a set of predefined tags.
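The tag-enforcement policy described above reduces to a simple check that can run inside a Lambda compliance function. The required-tag set and the relaxed handling of development resources below are hypothetical examples of such a policy, not a prescribed standard.

```python
REQUIRED_TAGS = {"environment", "owner", "cost-center"}  # hypothetical policy

def missing_required_tags(resource_tags, required=REQUIRED_TAGS):
    """Return the set of mandatory tag keys absent from a resource."""
    return required - set(resource_tags)

def compliance_action(resource_tags):
    """Decide remediation based on tags: flag anything missing mandatory
    tags, and relax remediation for development resources."""
    missing = missing_required_tags(resource_tags)
    if missing:
        return "NON_COMPLIANT: missing " + ", ".join(sorted(missing))
    if resource_tags.get("environment") == "development":
        return "COMPLIANT (relaxed remediation for development)"
    return "COMPLIANT"

print(compliance_action({"environment": "production"}))
```

This closes the loop described above: the same function that behaves differently based on tags can also flag resources that are missing the tags it depends on.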

The high degree of flexibility that custom Lambda functions provide can also improve incident response and alerting when policy deviations occur. For Next Generation MSPs like Smartronix, this is an incredibly efficient way to manage multiple environments in a consistent and scalable manner. Although customers may have varying security and compliance requirements, we now have a framework enabled by AWS that helps us customize and respond in a repeatable, efficient manner.

Combining AWS CloudTrail, AWS Lambda, AWS Config, the instrumentation ecosystem, and a source code control system like GitHub, organizations can now manage their software-defined security and compliance processes in the same way they manage their software-defined infrastructure. This improves reusability, reduces errors, ensures policy compliance, automates response, and reduces the typically onerous reporting burden. Your AWS Config Rules and AWS Lambda functions are now important parts of your security controls documentation and you now have a natural audit mechanism for proving how you enforce these controls.

Smartronix is also extending this model into the areas of forensics, threat prediction, and log aggregation and analysis. Combining AWS CloudTrail, AWS Config, and AWS Lambda with Amazon Machine Learning and Amazon AI has enormous potential to change the signal-to-noise ratio of complex and active environments, ensuring that the anomaly envelope is adaptive and that outliers are raised, assessed, and reincorporated into the growing, learning, adapting, intelligent security ecosystem.

The availability of these tools and evolving experience is making NextGen Managed Services Providers highly competitive, if not superior, in entering a new opportunity space. Traditional MSPs have focused on IT service management, incident response, patch management, backup, and break/fix services. With software-defined infrastructure and now software-defined security and compliance, NextGen MSPs are blurring the lines between traditional Managed Service Providers and traditional Managed Security Services Providers. These new services, enabled by the cloud, include continuous monitoring, automated vulnerability scanning and analysis, automated boundary management, log aggregation and analysis, end user behavior analytics, and anomaly detection. At Smartronix, we are excited about disrupting the way enterprises view security and are democratizing services that at one time were the province of only a handful of the world’s largest enterprise companies.

Smartronix has managed highly secure, large-scale global environments for more than 22 years. When we say you can achieve greater security in the cloud, you now have a better perspective on how we and other NextGen Service Providers achieve it. You can choose to replicate how you manage on-premises environments in the cloud, but true transformational value occurs when you rethink your approaches that can make use of the newest, most powerful, and innovative services available to you.

Three new AWS Training specialty courses now available

AWS Training can help APN Partners deepen AWS knowledge and skills and better serve customers. Based on customer feedback, we are adding three of our most popular event bootcamps to our permanent instructor-led training portfolio. These one-day courses are intended for individuals who would like to dive deeper into a specialized topic with an expert trainer. The three new courses are:


You can explore our complete course catalog here, and you can search for a public class near you by logging into the AWS Training and Certification Portal with your APN Portal credentials. APN Partners are eligible for a 20% discount on public AWS Training delivered by AWS. You can also request a private onsite training for your team by contacting us.

New AWS Marketplace IoT Discovery Webpage Accelerates IoT Innovation

AWS Marketplace now has an IoT discovery webpage that makes it easier for you to buy IoT software from popular software vendors that’s integrated with, or running on, AWS Cloud services. This page features 17 IoT software providers.

IoT is a complex industry represented by connected devices and the data they produce, supported by a variety of interrelated technologies across hardware and software platforms. The IoT value chain consists of several categories, including hardware (sensors, edge devices, gateways), connectivity, cloud and infrastructure, applications, and professional services. The IoT space is growing at a rapid-fire pace, and presents a nearly overwhelming selection for customers who want to find the right products to integrate into their AWS IoT projects. Customers look to AWS Marketplace for IoT software solutions, and the new IoT discovery webpage will help them make sense of the fragmented environment of products and software by placing these services in one easy-to-find location.

AWS Marketplace is a sales channel that software companies use to offer software solutions to AWS customers. You can easily find and buy software as a service (SaaS) products, Amazon Machine Images (AMIs), or AWS CloudFormation template-based software deployments from popular software vendors. The software solutions listed on the IoT discovery webpage integrate with AWS IoT or other AWS services, and are billed to the customer’s AWS account rather than being billed by the vendor.

AWS Marketplace vendors offer over 60 products with IoT use cases, across networking, security, database, business intelligence, and other categories. The AWS Marketplace IoT discovery webpage helps customers select the right products faster by showcasing products within the following subcategories, to reduce the time and resources required to discover, procure, and implement an IoT project:

  • Edge, gateway, and connectivity: Includes software to manage data ingestion, device certificates/security, edge processing on the gateway, and global connectivity.
  • Development tools: Offers solutions to help partners and customers build best-in-class applications, reducing the friction developers face today when building IoT applications.
  • Data analytics and machine learning: Offers solutions to turn data into meaningful information to support business insights and outcomes.

Today’s featured partners who have earned the AWS IoT Competency include Eseye, Bsquare, ThingLogix, Splunk, and Bright Wolf.


Pinacl is a Consulting Partner that leveraged the AWS Marketplace IoT selection to deliver IoT services quickly to Newport City Council, in Wales.

“ on AWS Marketplace made it possible for Pinacl to very quickly launch a smart city proof of concept for Newport that is powered by AWS,” says Mark Lowe, strategic relations director at Pinacl. “If you’re setting up infrastructure the traditional way, in phase one, you have to set up to handle thousands of sensors when you might only want to start with 10. Using on AWS meant Newport could start small with very little investment or risk and figure out which projects delivered the most value.”


“Our experience in dealing with industrial IoT deployments across a number of market segments shows data is the primary determinant in achieving the business outcomes our customers seek,” said Dave McCarthy, Bsquare Senior Director of Products. “By making DataV Discover available in AWS Marketplace, businesses can quickly determine IoT use cases that their data will support, thereby reducing risk and maximizing the probability of success.”

Now, you can more easily navigate, discover, and purchase the software and services you need to build successful IoT solutions and applications to fuel innovation and your business.

Get started building your IoT solution by visiting: