

Shift Security Left through DevSecOps

by Kate Miller | in APN Consulting Partners, AWS Competencies, DevOps on AWS, DevSecOps, Partner Guest Post, Premier Partners, Security

Fusing application development with integrated, automated security processes

By Christian Lachaux, AABG Security Lead, Accenture, and Federico Tandeter, Cloud Security Offering Development Lead, Accenture.

Accenture is a Premier APN Consulting Partner and AWS MSP Partner that holds a number of AWS Competencies, including Migration.


Development+Security+Operations, better known as DevSecOps, is revolutionizing application development by integrating automated security reviews directly into the software development process. By 2019, more than 50% of enterprise DevOps initiatives will have incorporated application security testing for custom code, up from less than 10% in 2016.1

Agile, security-focused enterprises are now taking it to the next level by applying DevSecOps in a cloud environment, and many are doing so on the AWS Cloud, which emphasizes security as its highest priority.2 Running in the cloud further simplifies and accelerates application development, because packaged, cloud-based security tooling and testing services can be accessed via API calls. With this innovative method, CIOs can ensure that vital security testing is performed at each step of the software development lifecycle—seamlessly and at high velocity.

To support this approach, Accenture DevOps is working to incorporate DevSecOps into the Accenture DevOps Platform service—which we feel will have the dual benefits of making security easier and quicker, and making it more measurable and reliable. Additionally, the Accenture AWS Business Group (AABG) helps customers secure cloud deployments using AWS security capabilities and best practices, such as the Center for Internet Security (CIS) AWS Foundations Benchmark, augmented by third-party tools and Accenture services.

Make way for a new method

With agile or waterfall application development approaches, security testing is typically not part of the initial design process. Instead, it is performed as a final manual step on a completed package—which increases the risk of application release delays and compounds costs if issues found in security testing require reengineering or redesign.

Despite these concerns, some companies stick to the traditional methods, partly due to the perception that security testing slows the application development lifecycle or injects complex requirements too late in the process. In some cases, this has reinforced the rift between application development teams and security teams, even though both groups report to the CIO. Forward-looking companies can overcome this challenge through a “shift security left” approach, which introduces security at the inception of the development lifecycle and automates testing within the DevOps workflow.

A marked improvement over more traditional methods, shifting security left makes security an inherent part of the design and architecture of every software product. Using DevOps techniques – including automated provisioning, extensive monitoring, frequent testing, and continuous integration – application developers and security teams can collaborate in a streamlined and secure development process. Specifically, the DevSecOps process parallelizes component development and automates security testing to achieve an iterative, fail-fast model of continuous development and testing at the unit level, followed by final security testing of the completed package.

Security automation industrialized on cloud

CIOs can apply the versatile DevSecOps process to application development and security processes on-premises or in the cloud. However, we feel that cloud provides a clear benefit in two primary ways: first, by supporting programmatic testing; and second, by facilitating DevSecOps through pre-packaged services that use infrastructure as code to automate core security testing.

If a security issue is identified, the developer can address it on the spot, or if necessary, involve the proper security team member to provide a quick fix. The cloud-native environment with embedded security services makes it even easier to develop applications and conduct security testing at the functional and user level on multiple iterations.

Hyperscale cloud providers like AWS facilitate DevSecOps through infrastructure as code and API-driven automation capabilities, as well as through the services that enable DevSecOps—including AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline. (See this recent AWS technical blog for more detail.) Using packaged services like these, companies can expedite the DevSecOps process and then top it off with custom code for an enterprise-ready business process or customer-facing service.

Getting started with DevSecOps

Overall, DevSecOps leads to a more effective risk-based approach to security. Rather than deciding which security apps to apply to an environment, companies can assess where potential risks and vulnerabilities lie and solve them holistically. To reap the near-term and longer-term benefits, Accenture suggests that CIOs follow these steps:

  • Start with a solid DevOps foundation across the development environment. Working with an external provider with strong DevOps experience can accelerate this process through education, training, and tooling.
  • Foster collaboration between development and security teams to embed security in the design. Just as security architects are not necessarily developers, developers may not always be as current on the latest security threats and trends.
  • Deploy continuous security testing built into the continuous integration/continuous delivery (CI/CD) pipeline via automation. Selecting the right security tools to support automated testing is critical (see the sketch after this list).
  • Extend monitoring to include security and compliance by monitoring for drift from the design state in real time to enable alerting, automated remediation, or quarantine of resources marked as non-compliant.
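As a minimal sketch of what this automation can look like on AWS (the project and artifact names here are illustrative assumptions, not a prescribed implementation), a dedicated security-testing stage can be added to a CodePipeline definition in CloudFormation, pointing at a CodeBuild project that runs whatever scanner your team has selected:

    "Stages":[
      {
        "Name":"SecurityTest",
        "Actions":[
          {
            "Name":"StaticAnalysis",
            "ActionTypeId":{
              "Category":"Test",
              "Owner":"AWS",
              "Provider":"CodeBuild",
              "Version":"1"
            },
            "InputArtifacts":[
              {
                "Name":"SourceOutput"
              }
            ],
            "Configuration":{
              "ProjectName":{
                "Ref":"SecurityScanProject"
              }
            },
            "RunOrder":1
          }
        ]
      }
    ]

Because the scan runs on every commit, a failed security check stops the pipeline before anything is deployed, which is the practical meaning of shifting security left.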

To learn more about implementing DevSecOps into your company’s application development lifecycle, contact christian.lachaux@accenture.com or federico.tandeter@accenture.com. If you have any comments for us, please leave them in the comments section. We’d love to hear from you.


The content and opinions in this blog are those of the third party authors and AWS is not responsible for the content or accuracy of this post.
1 “DevSecOps: How to Seamlessly Integrate Security Into DevOps,” by Neil MacDonald and Ian Head, September 30, 2016, ID: G00315283.
2 For more information, see the AWS Shared Responsibility Model, which delineates AWS’s role in managing security of the cloud, and a customer’s role in retaining control of their chosen security tools to protect their content in the cloud.

Tapping the Benefits of the Cloud: A Webinar with BlazeClan & CloudVelox

by Kate Miller | in APN Consulting Partners, APN Technology Partners, AWS Competencies, Migration, Premier Partners, Third Party Webinars

BlazeClan is a Premier APN Consulting Partner who holds the AWS Big Data and DevOps Competencies. CloudVelox is an Advanced APN Technology Partner who holds the AWS Migration and Storage Competencies. Together, these two APN Partners will be hosting a webinar on Thursday, March 9th, at 11 am PST/2 pm EST to discuss:

  • How you can get started with your cloud journey
  • An overview of BlazeClan’s assessment framework
  • How to migrate your applications to the cloud
  • How to accelerate your cloud migration with CloudVelox’s automation tool
  • Best practices and cloud migration success stories

The webinar will also include a live demo. BlazeClan and CloudVelox encourage business decision makers, CIOs, CTOs, IT Directors, and IT Managers to attend.

Register Here >>

To learn more about BlazeClan’s journey to become a Premier APN Consulting Partner, click here.

Achieving Compliance Through DevOps with REAN Cloud

by Aaron Friedman | in APN Consulting Partners, AWS Partner Solutions Architect (SA) Guest Post, DevOps on AWS, Healthcare, Life Sciences, Premier Partners

Aaron Friedman is a Healthcare & Life Sciences Partner Solutions Architect with Amazon Web Services.

When I survey our Healthcare and Life Sciences Partners, one of the common competencies I see is a great foundation in DevOps best practices. By building software in an automated and traceable manner, you are able to more easily determine the “Who, What, Where, and When” of any activity performed in the environment. This determination is a cornerstone for any compliant (HIPAA, GxP, etc.) environment.

REAN Cloud (“REAN”) is an AWS Partner Network (APN) Premier Consulting Partner and AWS MSP Partner, as well as an AWS Public Sector Partner. The company holds a number of AWS Competencies, including DevOps, Healthcare, Financial Services, Migration, and Government. REAN is a cloud-native firm with deep experience in supporting enterprise IT infrastructures and implementing continuous integration/continuous delivery (CI/CD) pipelines. The team routinely implements complex and highly scalable architectures for workloads in highly regulated industries such as Healthcare and Life Sciences, Financial Services, and Government. DevOps principles are core to REAN’s philosophy, and the solutions they develop are bundled with advanced security features to help address clients’ compliance needs, ranging from HIPAA and HITRUST to FedRAMP and PCI.

Every solution that REAN builds on top of the AWS Cloud has security and compliance as its top priority. Healthcare and Life Sciences are highly regulated industries, and many of their workloads are subject to regulatory requirements such as HIPAA and GxP. There are several common themes that must be addressed in every regulated workload, including:

  • Logging, Monitoring, and Continuous Compliance
  • Documentation and Non-Technical Controls
  • Administrative Environment Access and Separation of Duties

In this blog post, I’ll introduce these concepts and discuss how REAN approaches each of these focus areas on the AWS Cloud. Let’s dive a little deeper.

Logging, Monitoring and Continuous Compliance

Tracking how your environment changes over time, and who accesses it, is central to meeting many different regulatory requirements. In order to paint the full picture of what is occurring in your environment, you store application logs, operating system logs, and other environment-specific logs and performance data. AWS services such as AWS CloudTrail, Amazon CloudWatch, and AWS Config produce and store critical information about your environment that should be organized and retained for potential use during troubleshooting activities or compliance audits. With the AWS Cloud, you can use these services to capture, organize, and verify the logs and information that describe the cloud environment itself.
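As a minimal sketch of how such a check can be codified (assuming an AWS Config configuration recorder is already running in the account; the logical name is illustrative), the AWS Config managed rule CLOUD_TRAIL_ENABLED can be declared in CloudFormation:

    "CloudTrailEnabledRule":{
      "Type":"AWS::Config::ConfigRule",
      "Properties":{
        "ConfigRuleName":"cloudtrail-enabled",
        "Source":{
          "Owner":"AWS",
          "SourceIdentifier":"CLOUD_TRAIL_ENABLED"
        }
      }
    }

A rule like this evaluates continuously, so a gap in audit logging surfaces as a non-compliant resource rather than as a finding in the next audit.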

REAN Cloud addresses the challenge of managing all of this log information by leveraging a DevOps Accelerator that they have created called REAN Radar.

Radar ingests logs from many different sources, configures meaningful dashboards of information relevant to the environment being managed, and evaluates that information in the context of well-respected security and compliance frameworks such as the Center for Internet Security (CIS) benchmarks. REAN Managed Services uses Radar dashboards to monitor for configuration drift, changes to sensitive data access, misconfigured infrastructure, broken ingestion pipelines, and numerous other environment-specific metrics and measures.


Radar adapts as the environment grows and shrinks – new systems are automatically added to scope as pipelines grow, and old components are removed when no longer needed. Radar dashboards can be configured to suit a wide variety of customer requests and are well suited to providing “at-a-glance” visibility for management or governance committees. For example, a dashboard can be created to monitor in real time who has access to a particular set of data – this is very useful for HIPAA environments, where monitoring access to protected health information (PHI) is critical.

Documentation and Non-Technical Controls

Documentation and Non-Technical Controls are an important part of the overall compliance story for a system. AWS provides a variety of compliance resources that our HCLS partners can use while addressing regulated workloads. With our Shared Responsibility Model, AWS manages the security of the cloud while customers and APN Partners, such as REAN, manage security in the cloud. For example, REAN, as an APN Partner, and REAN customers might decide to refer to AWS controls (such as for hardware management and physical environment security) and other audits and attestations that AWS has achieved for different services (such as SOC 2 (Type 2) or FedRAMP). AWS Artifact provides on-demand access to many of these audit artifacts, which APN Partners can use in their own system documentation.

REAN Cloud helps customers achieve system compliance by supporting a wide range of activities, ranging from the creation of a Cloud Security and Compliance strategy for an entire organization to manual document creation to meet specific compliance needs. In addition, REAN has helped their customers navigate HITRUST audits.

One of REAN’s goals is to bring the same automation principles to the (often manual) documentation creation process by applying a pipeline-based approach to system and data center deployments. REAN leadership believes that system documentation packages can be automated alongside the environment itself. REAN accelerators are being used to improve speed of delivery and consistency for these important artifacts that demonstrate control of an environment.

As an example, REAN Managed Services uses REAN AssessIT and document accelerators every month to produce security assessment reports for every managed environment. These reports examine over 40 important security best practices and are generated automatically and tailored for each customer to focus on areas that are relevant to their business.


For customers requiring extensive environment documentation packages (such as for GxP compliance), REAN is developing a pipeline that ties entirely automated documentation generation to the automated creation of the environments. REAN continues to develop new technology to maximize the value of documentation and applies a consistent, disciplined approach to environment management while striving to minimize the human cycles required to produce such outcomes.

Administrative Environment Access and Separation of Duties

A major piece of any compliance story is the ability to demonstrate control of an environment. Authentication and authorization are central to this process, allowing a user to access the specific data they need. An area of concern for auditors is administrative access in an environment due to the broad permissions generally associated with this role. By using AWS native services such as Amazon VPC, AWS Identity and Access Management (IAM), and Amazon WorkSpaces, REAN helps customers build segregated and secure application environments of any size and scale required while still allowing REAN Managed Services or other Application Support Personnel to keep the environment running and provide support for any incidents that may occur.

REAN embraces the concept of “Control Accounts” when designing healthcare and life sciences application environments. A Control Account is used as a common area for hosting shared services and administrative tools that run against the “Managed Accounts”. Here is a simple example:

In this diagram, the Control Account is used to manage:

  • Jenkins and all pipeline deployments into the Dev and Prod accounts
  • Nessus vulnerability scans into the other accounts
  • REAN Radar
  • WorkSpaces for administrative access into the other environments. As REAN manages environments with PHI, WorkSpaces (which is not listed as HIPAA-eligible) is not used to remediate specific situations that involve PHI.

AWS features such as VPC Peering and IAM Cross-Account Roles make this approach possible and allow REAN to focus on hardening the application hosting environments (such as Dev and Prod) to allow only the absolute minimum required permissions and network communication. Governance and oversight can then focus on the Control Account to ensure that the applications and services used to support the other environments are locked down, with access granted only to the required team members.
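As a rough illustration of the cross-account pattern (a sketch, not REAN’s actual implementation; the account ID, role name, and attached policy are placeholders), a role in a managed account can be declared so that only principals in the Control Account can assume it, and only with MFA:

    "ControlAccountOperationsRole":{
      "Type":"AWS::IAM::Role",
      "Properties":{
        "RoleName":"control-account-operations",
        "AssumeRolePolicyDocument":{
          "Version":"2012-10-17",
          "Statement":[
            {
              "Effect":"Allow",
              "Principal":{
                "AWS":"arn:aws:iam::111111111111:root"
              },
              "Action":"sts:AssumeRole",
              "Condition":{
                "Bool":{
                  "aws:MultiFactorAuthPresent":"true"
                }
              }
            }
          ]
        },
        "ManagedPolicyArns":[
          "arn:aws:iam::aws:policy/ReadOnlyAccess"
        ]
      }
    }

Every sts:AssumeRole call is recorded by CloudTrail, giving auditors a complete record of administrative access into the managed environments.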

Benefit to Customers

Ultimately, the DevOps principles REAN applies only matter if they deliver tangible benefit to their customers. REAN has helped customers across a wide range of regulated industries, including Financial Services, Healthcare & Life Sciences, and Government & Education, achieve their desired regulatory and technology transformation outcomes on the AWS Cloud.

One such example is how REAN helped Aledade meet their HIPAA goals for their platform. In addition to architecting a solution on the AWS Cloud in accordance with best practices, REAN served as Aledade’s compliance guide. According to Chris Cope, previously the DevOps Lead at Aledade, “REAN Cloud’s staff was a huge help navigating HIPAA/HITECH compliance best practices on approved cloud services. They also had extraordinary attention to detail on security matters and are leaders at defining best practices on AWS.”

In November of 2016, The American Heart Association and AWS announced the launch of the “AHA Precision Medicine Platform”, “a global, secure cloud-based data marketplace that will help revolutionize how researchers and clinicians come together as one community to access and analyze rich and diverse data to accelerate solutions for cardiovascular diseases — the No. 1 cause of death worldwide.”

REAN Cloud, in partnership with AWS Professional Services, worked with AHA leadership to develop and implement the platform on AWS. REAN Engineers have implemented pipeline-driven automated deployments of the entire AHA Precision Medicine Platform and continue to show how security and compliance can move as fast as the development team.

The AHA Precision Medicine Platform leverages REAN Radar dashboards to monitor the environment and the Control Account approach to shared services and administrative access, and the team has established an effective weekly communication plan with AHA leadership to drive priorities. AHA and REAN work jointly to establish proofs of concept and minimum viable solutions, and test these solutions with a series of beta testers. REAN recently published a case study on AHA that you can read here.

Conclusion

Data sensitivity is central to regulated workloads, and we often focus on how we process, store, and transmit that data. Yet the surrounding components, such as logging and access control, are just as important when building a compliant solution. REAN Cloud and their healthcare and life sciences customers achieve an end-to-end solution by combining REAN Cloud’s top-of-the-line in-cloud security and management tools with the multi-dimensional strengths of AWS.

If you are interested in learning about how REAN Cloud can support your healthcare and life sciences related workloads to meet your security and compliance requirements, please email them at hcls@reancloud.com.

If you’re interested in learning more about how AWS can add agility and innovation to your healthcare and life sciences solutions be sure to check out our Cloud Computing in Healthcare page. Also, don’t forget to learn more about both our Healthcare and Life Sciences Competency Partners and how they can help differentiate your business.

Will you be at HIMSS? Be sure to stop by our booth #6969! We’d love to meet with you.

Please leave any questions and comments below.


The content and opinions in this blog are those of the author and are not an endorsement of the third-party product. AWS is not responsible for the content or accuracy of this post. This blog is intended for informational purposes and not for the purpose of providing legal advice.

How Cognizant Approaches GxP Workloads on AWS

by Kate Miller | in APN Competency Partner, APN Consulting Partners, APN Partner Highlight, Life Sciences, Premier Partners

By Vandana Viswanathan, Associate Director, Process & Quality Consulting, Cognizant Technology Solutions, and Joseph Stellin, Associate Director, Cognizant Cloud Services. 

Cognizant is a Premier APN Consulting Partner, an AWS MSP Partner, an AWS Public Sector Partner, and holds a number of AWS Competencies, including Healthcare, Life Sciences, Migration, Big Data, Financial Services, and Microsoft SharePoint. 

Life sciences firms are rapidly accelerating their adoption of AWS, not only to advance research in the space but also to optimize the development of software and the environment it runs on. We’ve found that questions around regulatory quality, security, and privacy have been addressed to the point where many senior executives actively pursue using AWS as an extension of or replacement for their on-premises environments.

Most companies manufacturing medical products or developing drugs are required by regulations to follow Good Manufacturing, Clinical, and Laboratory Practices (GxP). IT systems running “GxP Applications” are subject to FDA audit, and failure to comply with the appropriate guidelines could result in fines and potential work stoppage. Due to this impact, GxP regulations are often at the forefront of our customers’ minds when considering a move to the cloud.

In January 2016, AWS released a white paper on Considerations for Using AWS Products in GxP Systems. With this guidance, it has become easier to develop these regulated workloads on AWS. We have found that life sciences firms are able to achieve the same benefits of scale, cost reduction, and resiliency for their GxP applications that they’ve come to expect from non-regulated workloads on AWS. This was exemplified at re:Invent 2016 where Merck spoke publicly about how they have built GxP solutions on AWS.

At Cognizant, we’ve developed a transformation framework based on our experience working with many large organizations within the life sciences and healthcare verticals. This framework consists of many steps, including analyzing cloud providers, developing and executing validation plans, and creating governance and support procedures to ensure compliance with FDA regulations. The framework enables successful qualification of the cloud infrastructure (IQ) and its operation, and ensures compliance of the application/software being hosted on the cloud. We’ve applied our approach to live migrations of multiple GxP workloads, including TrackWise and Maximo, as well as to building out new GxP environments natively on AWS.

Design principles for GxP

When developing GxP applications for our customers, we’ve found there are key design and operation principles that each workload requires. It is important to note that in a cloud environment, infrastructure is continuously improved, with new features and capabilities added regularly. The need to stay compliant shouldn’t stifle innovation, but proper controls need to be enforced to ensure that FDA requirements are continuously met. We like to think about compliance not as a fixed goal, but as a continuous operational and design requirement.

The following principles are drawn from Cognizant’s proprietary transformation framework, along with the key AWS and third-party services we use to address them.

Cloud Provider Assessment: This enables us to evaluate cloud providers based on their viability for hosting a GxP application and their ability to support the specific environment being migrated. The evaluation parameters include regulatory compliance, information security, data privacy, infrastructure and application dependencies, and business criticality, amongst other key parameters.

Data Security: All sensitive data should be encrypted both at rest and in transit. For example, we use AES-256 encryption for data at rest. We always engage our enterprise security team to evaluate all current customer security solutions and determine whether additional security solutions are needed to meet customer compliance and security requirements.
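As a minimal sketch (not Cognizant’s specific tooling; the resource name is illustrative), default encryption at rest for an S3 bucket can be declared directly in CloudFormation:

    "SensitiveDataBucket":{
      "Type":"AWS::S3::Bucket",
      "Properties":{
        "BucketEncryption":{
          "ServerSideEncryptionConfiguration":[
            {
              "ServerSideEncryptionByDefault":{
                "SSEAlgorithm":"AES256"
              }
            }
          ]
        }
      }
    }

Declaring encryption in the template means the control is versioned, reviewable, and automatically re-applied wherever the template is deployed.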

Authentication and Authorization: As the data flowing through a GxP application can be sensitive, we need to ensure that only appropriately authorized individuals can access the data, and we need to control the limits of that access. We utilize AWS Identity and Access Management (IAM) and/or extend our current on-premises domain controller resources to the cloud in a secure way.
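Continuing the sketch above, a least-privilege IAM managed policy might grant read-only access to that bucket and nothing else; again, the names are illustrative assumptions rather than a prescribed design:

    "GxpDataReadOnlyPolicy":{
      "Type":"AWS::IAM::ManagedPolicy",
      "Properties":{
        "Description":"Read-only access to the GxP data bucket",
        "PolicyDocument":{
          "Version":"2012-10-17",
          "Statement":[
            {
              "Effect":"Allow",
              "Action":[
                "s3:ListBucket",
                "s3:GetObject"
              ],
              "Resource":[
                {"Fn::GetAtt":["SensitiveDataBucket","Arn"]},
                {"Fn::Join":["",[{"Fn::GetAtt":["SensitiveDataBucket","Arn"]},"/*"]]}
              ]
            }
          ]
        }
      }
    }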

Traceability and Auditability: We need a time-stamped, secure audit trail that documents how and when users access the environment and application, along with any changes to the core infrastructure or applications. The benefit of infrastructure as code is that we can validate and log changes to our infrastructure in the same way we do software. We use AWS CloudTrail for all logs and leverage Amazon CloudWatch for alerts and notifications. We have also integrated a proprietary tool called Cloud360 for all tracking, monitoring, management, and audit information.
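A minimal sketch of the audit-trail building block, assuming a pre-existing log bucket with an appropriate bucket policy (both logical names are illustrative):

    "GxpAuditTrail":{
      "Type":"AWS::CloudTrail::Trail",
      "DependsOn":"AuditLogBucketPolicy",
      "Properties":{
        "IsLogging":true,
        "IsMultiRegionTrail":true,
        "IncludeGlobalServiceEvents":true,
        "EnableLogFileValidation":true,
        "S3BucketName":{"Ref":"AuditLogBucket"}
      }
    }

Log file validation produces digitally signed digest files, which lets an auditor verify that delivered log files were not modified after CloudTrail wrote them.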

How our GxP approach leads to customer success

Our Transformation Framework has helped simplify the process of creating and maintaining validated environments on continuously advancing technology. This has helped our clients take advantage of key benefits of the cloud, including reduced cost, agility, faster time to market, scalability, and, most importantly, reliability through redundancy.

For several of our top-10 pharmaceutical clients, implementation of the transformation framework has enabled the successful movement of regulated applications to the cloud. A framework for validating GxP workloads was established, and a precedent has been set for moving further applications to the cloud.

Looking ahead

As the quest to move validated workloads to the cloud continues in the Life Sciences and Healthcare verticals, processes and technologies will evolve and be adopted to expedite the validation process, ensure compliance, and achieve larger cost savings. We look forward to continuing our strong relationship with AWS to assist many organizations in building confidence in moving GxP workloads to the cloud, advancing technology, and streamlining validation processes.

Please leave any questions and comments below.

If you’re interested in learning more about how AWS can add agility and innovation to your healthcare and life sciences solutions be sure to check out our Cloud Computing in Healthcare page. Also, don’t forget to learn more about both our Healthcare and Life Sciences Competency Partners and how they can help differentiate your business.

Will you be at HIMSS? Stop by the Cognizant booth #3214. And be sure to stop by our booth #6969! We’d love to meet with you.


The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.

AWS MSP Partner Program – Raising the Bar

by Barbara Kessler | in APN Consulting Partners, Cloud Managed Services, DevOps on AWS, MSPs on AWS

By Barbara Kessler and Thomas Robinson

In our last post about the evolution of managed services, we wrote about how the landscape is evolving for managed service providers (MSPs) who are working with customers in hyperscale cloud environments. In our opinion, established MSPs can no longer focus exclusively on running and operating their customers’ environments; they must expand their reach further up the stack, into what has traditionally been the purview of consulting companies, to include professional services and greater involvement in customers’ requirements such as compliance and development practices. This evolution also opens the door for established Consulting Partners to expand their reach into what has traditionally been the purview of MSPs, now including next gen capabilities. This convergence allows both types of APN Partners to fulfill customers’ full lifecycle needs: plan/design >> build/migrate >> run/operate >> optimize. Let’s now expand that discussion to review the AWS Managed Service Provider Program for APN Partners and how this program recognizes and validates the capabilities of the next generation of bar-raising MSPs.

This program has grown out of customers asking AWS to help them identify not the companies they have traditionally viewed as MSPs, but consulting and professional services APN Partners who can help them with this full lifecycle. We have in turn built this program to connect customers with the APN Partners best qualified to deliver the kind of experience being sought. The program introduces a rigorous set of validation requirements in the AWS MSP Partner Validation Checklist that are assessed in a 3rd party audit process. These requirements address each of the areas discussed in our previous post:

Design, architect, automate

Next gen MSPs must be AWS experts. They must possess a depth and breadth of knowledge around AWS services and features, so they are asked to demonstrate this knowledge and provide examples of customer use cases as a critical part of their MSP audit. APN Partners must then expand on this to show evidence of detailed designs and implemented customer solutions. These APN Partners must also demonstrate the ability to identify when solutions such as Amazon DynamoDB, AWS Lambda, or Amazon Redshift would provide a more efficient and less costly solution in their customers’ environments. We are looking to see that these leading-edge APN Partners are leveraging their knowledge and using documented AWS best practices, as well as their own extensive experience, to create intelligent and highly automated architectures that allow customers to take advantage of the agility that the AWS Cloud enables.

Software/Cloud-based solutions

The move to cloud-based solutions has also driven changes in how MSPs handle billing and cost management for their customers. AWS MSPs are often also AWS Resellers, and as such they become experts in AWS tools and services that provide depth of visibility and understanding around customers’ usage of various services. These MSPs typically leverage 3rd party or homegrown software solutions that enable robust rebilling capabilities and insights, such as proposed buying strategies and proactive recommendations on instance sizing, reserved instance purchases, and use of managed solutions such as Amazon RDS. All AWS Resellers are asked to demonstrate this knowledge and their tools during their MSP validation audit.

Distributed operations and resources

We also dive deep into our partners’ support capabilities to validate the maturity of their operations and ability to consistently deliver an excellent customer experience. In addition to meeting industry standards for IT service management (ITSM), AWS MSPs demonstrate how these capabilities apply specifically to their AWS practices in areas such as service intelligence monitoring, customer dashboards, event/incident/problem management, change management, as well as release and deployment management. We believe that this foundation is critical to delivering a highly valuable experience for customers. APN Partners who are looking to expand into cloud operations capabilities can also consider incorporating the new AWS Managed Services for automating AWS infrastructure operations such as patch management, security, backup, and provisioning to add to their applications management capabilities.

Solution/Application-based SLAs

MSPs have traditionally provided Service Level Agreements (SLAs) to customers to address foundational concerns such as response and restoration times, as well as infrastructure uptime, but this further evolves for next gen MSPs. Infrastructure SLAs for cloud-centric customers focus not on the uptime of hardware, but on uptime based on the high availability architecture provided and maintained by the MSP. These SLAs should then expand into the customers’ workloads and application performance to focus on the customer’s outcomes and experience. Review of these SLAs, documentation, processes, metrics, and continual improvements is a valuable aspect of the MSP Program audit.

DevOps – CI/CD

AWS MSPs enable additional agility and efficiency for their customers through integration of DevOps practices and principles. ITSM standards for infrastructure and application release and deployment management are already broadly adopted by next gen MSPs and are baseline requirements for AWS MSP Program Partners. APN Partners demonstrate how they enable and/or manage continuous integration and continuous deployment (CI/CD) pipelines, as well as deployment and release management with repeatable and reusable mechanisms. APN Partners are asked to evidence this capability with a demonstration and customer examples during their 3rd party audit. We also encourage APN Partners to further build and enhance their DevOps practices through attainment of the DevOps Competency for Consulting Partners, which garners additional credit in the audit process.

Dynamic monitoring with anomaly detection

By designing and implementing advanced and intelligent environments – leveraging auto scaling, infrastructure as code, and self-healing elements – next gen MSPs enable a significant shift in the focus of their ongoing monitoring and management efforts. AWS MSPs embrace a new approach utilizing next generation monitoring capabilities. Rather than setting pre-defined static monitoring thresholds, these APN Partners often incorporate machine learning to determine the normal state for their customers’ dynamic environments, and they are able to identify anomalies outside of normal patterns of behavior. These APN Partners then use this knowledge to deliver valuable management services and insights to their customers, the technology for which they demonstrate during the AWS MSP audit.

Security by design

Significant focus on security is another bar-raising element of the AWS MSP Partner Program. Next gen MSPs are engaging with customers earlier, in the plan/design phase, and are able to address security needs from the onset of a project. During the AWS MSP audit, partners are asked to provide evidence of and demonstrate their capabilities to protect their customers’ environments, as well as their own, using industry standards and AWS best practices. They are also asked to review their access management strategy, security event logging and retention, disaster recovery practices, and use of appropriate AWS tools. APN Partners are then given an opportunity to demonstrate how they use these tools and practices to deliver continuous compliance solutions that help customers achieve compliance with various regulations and reduce potential exposure.

Trusted advisor and partner

In addition to reviewing APN Partners’ specific technical capabilities in each of these categories, AWS works with APN Partners and our 3rd party auditors to provide an objective validation of broader business practices and capabilities. During their audit, APN Partners provide an overview of their business, including financial assessments, risk mitigation, succession planning, employee satisfaction, resource planning, and supplier management, amongst other controls. They also provide evidence of their process to solicit and collect objective customer feedback, respond to that feedback, and conduct regular reviews with their customers. We also look to AWS MSPs to be vocal thought leaders who evangelize the next gen MSP point of view and work to educate customers on the evolution of cloud managed services, and specifically the value of DevOps-enabled automation. Given the invaluable role of the AWS MSP, APN Partners must demonstrate in the third-party audit the viability of their business, their obsessive focus on customers, and their thought leadership in order to earn and maintain a trusted advisor role with their customers.

Raising the Bar

The AWS MSP Partner Program recognizes APN Partners who embrace this new approach to providing cloud managed services and who are experts that can unlock agility and innovation for their customers. The rigorous program validation audit is designed to be consultative in nature, continually sharing best practices and delivering significant value for the participating APN Partners, while also giving customers a means to confidently identify those APN Partners who have raised the bar in managed services. Please see the MSP Program webpage to learn more and to find the current list of validated APN Partners.

What are your thoughts on the evolution of next gen MSPs? Talk to us in the comments section!

Deploy to Production using AWS CodeBuild and the AWS Developer Tools Suite

by Kate Miller | in APN Consulting Partners, AWS CloudFormation, AWS CodeBuild, AWS CodeCommit, AWS Competencies, DevOps on AWS, Guest Post, Partner Guest Post, re:Invent 2016

This is a guest post from Paul Duvall, Stelligent, with contributions from Brian Jakovich and Jonny Sywulak, Stelligent. Paul Duvall is CTO at Stelligent and an AWS Community Hero.

Stelligent is an AWS DevOps Competency Partner. 

At re:Invent 2016, AWS announced a new fully managed service called AWS CodeBuild that allows you to build your software. Using CodeBuild, you can build code using pre-built images for Java, Ruby, Python, Golang, Docker, Node, and Android, or use your own customized images for other environments, without provisioning and managing additional compute resources and configuration. This way you can focus more time on developing your application or service features for your customers.

In our previous post, An Introduction to CodeBuild, we described the purpose of AWS CodeBuild, its target users, and how to set up an initial CodeBuild project. In this post, you will learn how to integrate and automate the orchestration of CodeBuild with the rest of the AWS Developer Tools suite – including AWS CodeDeploy, AWS CodeCommit, and AWS CodePipeline – using AWS’ provisioning tool, AWS CloudFormation. By automating all of the actions and stages into a deployment pipeline, you can release changes to users in production whenever you choose to do so. You’ll see an example that walks you through the process, along with a detailed screencast that shows you every step in launching the solution and testing the deployment.

Figure 1 shows this deployment pipeline in action.

Figure 1 – CodePipeline building with CodeBuild and deploying with CodeDeploy using source assets in CodeCommit

Keep in mind that CodeBuild is a building block service you can use for executing build, static analysis, and test actions that you can integrate into your deployment pipelines. You use an orchestration tool like CodePipeline to model the workflow of these actions along with others such as polling a version-control repository, provisioning environments, and deploying software.

Prerequisites

Here are the prerequisites for this solution:

  • An AWS account
  • Access to the Northern Virginia (us-east-1) region
  • An Amazon EC2 key pair in that region

These prerequisites will be explained in greater detail in the Deployment Steps section.

Architecture and Implementation

In Figure 2, you see the architecture for launching a deployment pipeline that gets source assets from CodeCommit, builds with CodeBuild, and deploys software to an EC2 instance using CodeDeploy. You can click on the image to launch the template in CloudFormation Designer.

Figure_2_Post_2_Stelligent_CodeBuild

Figure 2 – Architecture of CodeBuild, CodePipeline, CodeDeploy, and CodeCommit solution

The components of this solution are described in more detail below:

  • AWS CloudFormation – All of the resource generation of this solution is described in CloudFormation, which is a declarative code language that can be written in JSON or YAML
  • AWS CodeBuild – Creates a CodeBuild project using the AWS::CodeBuild::Project resource
  • AWS CodeCommit – Creates a CodeCommit Git repository using the AWS::CodeCommit::Repository resource
  • AWS CodeDeploy – CodeDeploy automates the deployment to the EC2 instances that were provisioned by the nested stack, using the AWS::CodeDeploy::Application and AWS::CodeDeploy::DeploymentGroup resources
  • AWS CodePipeline – I’m defining CodePipeline’s stages and actions in CloudFormation code, which includes using CodeCommit as a source action, CodeBuild as a build action, and CodeDeploy as a deploy action (for more information, see Action Structure Requirements in AWS CodePipeline)
  • Amazon EC2 – A nested CloudFormation stack is launched to provision multiple EC2 instances on which the CodeDeploy agent is installed. The CloudFormation template called through the nested stack is provided by AWS.
  • AWS IAM – An Identity and Access Management (IAM) Role is provisioned using the AWS::IAM::Role resource, which defines the resources that the pipeline can access.
  • AWS SNS – Provisions a Simple Notification Service (SNS) Topic using the AWS::SNS::Topic resource. The SNS topic is used by the CodeCommit repository for notifications.

CloudFormation Template

In this section, I’ll highlight a few code snippets from the CloudFormation template that automates the provisioning of the AWS Developer Tools stack along with other resources including EC2, IAM, and SNS. You can find a link to the CloudFormation template at the bottom of this post.

CodeBuild

AWS CloudFormation has provided CodeBuild support from day one. Using the AWS::CodeBuild::Project resource, you can provision your CodeBuild project in code as shown in the sample below.

    "CodeBuildJavaProject":{
      "Type":"AWS::CodeBuild::Project",
      "DependsOn":"CodeBuildRole",
      "Properties":{
        "Name":{
          "Ref":"AWS::StackName"
        },
        "Description":"Build Java application",
        "ServiceRole":{
          "Fn::GetAtt":[
            "CodeBuildRole",
            "Arn"
          ]
        },
        "Artifacts":{
          "Type":"no_artifacts"
        },
        "Environment":{
          "Type":"LINUX_CONTAINER",
          "ComputeType":"BUILD_GENERAL1_SMALL",
          "Image":"aws/codebuild/java:openjdk-8"
        },
        "Source":{
          "Location":{
            "Fn::Join":[
              "",
              [
                "https://git-codecommit.",
                {
                  "Ref":"AWS::Region"
                },
                ".amazonaws.com/v1/repos/",
                {
                  "Ref":"AWS::StackName"
                }
              ]
            ]
          },
          "Type":"CODECOMMIT"
        },
        "TimeoutInMinutes":10,
        "Tags":[
          {
            "Key":"Owner",
            "Value":"JavaTomcatProject"
          }
        ]
      }
    },

The key attributes, blocks, and values of the CodeBuild CloudFormation resource are defined here:

  • Name – Defines the unique name for the project. In my CloudFormation template, I’m using the stack name as a way of uniquely defining the CodeBuild project without requiring user input.
  • ServiceRole – Refers to the previously-created IAM role resource that provides the proper permissions to CodeBuild.
  • Environment Type – The type attribute defines the type of container that CodeBuild uses to build the code.
  • Environment ComputeType – The compute type defines the CPU cores and memory the build environment uses.
  • Environment Image – The image is the programming platform on which the environment runs.
  • Source Location and Type – In Source, I’m defining the CodeCommit URL as the location, along with the type. In addition to the CODECOMMIT type, CodeBuild also supports S3 and GITHUB. When CodeCommit is the type, CodeBuild automatically searches for a buildspec.yml file in the root directory of the source repository. See the Build Specification Reference for AWS CodeBuild for more detail.
  • TimeoutInMinutes – This is the amount of time before the CodeBuild project will cease running. This setting lowers the timeout from the default of 60 minutes to 10 minutes.
  • Tags – I can define multiple tag types for the CodeBuild project. In this example, I’m defining the team owner.

For more information, see the AWS::CodeBuild::Project resource documentation.

CodeCommit

With CodeCommit, you can provision a fully managed private Git repository that integrates with other AWS services such as CodePipeline and IAM. To automate the provisioning of a new CodeCommit repository, you can use the AWS::CodeCommit::Repository CloudFormation resource. You can create a trigger to receive notifications when the master branch gets updated using an SNS Topic as a dependent resource that is created in the same CloudFormation template. For a more detailed example and description, see Provision a hosted Git repo with AWS CodeCommit using CloudFormation.
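As an illustrative sketch (the logical names are assumptions, not taken from the template in this post), the repository and its notification trigger can be declared like this:

    "JavaTomcatRepo":{
      "Type":"AWS::CodeCommit::Repository",
      "Properties":{
        "RepositoryName":{"Ref":"AWS::StackName"},
        "RepositoryDescription":"Java Tomcat sample application",
        "Triggers":[
          {
            "Name":"MasterBranchTrigger",
            "DestinationArn":{"Ref":"MySNSTopic"},
            "Branches":["master"],
            "Events":["all"]
          }
        ]
      }
    }

Here MySNSTopic would be the AWS::SNS::Topic resource defined elsewhere in the same template, as described above.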

CodeDeploy

AWS CodeDeploy provides a managed service to help you automate and orchestrate software deployments to Amazon EC2 instances or those that run on-premises.

To configure CodeDeploy in CloudFormation, you use the AWS::CodeDeploy::Application and AWS::CodeDeploy::DeploymentGroup resources.
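A minimal sketch of those two resources follows; the logical names, the CodeDeployRole, and the tag filter are illustrative assumptions standing in for their equivalents in the full template:

    "SampleApplication":{
      "Type":"AWS::CodeDeploy::Application"
    },
    "SampleDeploymentGroup":{
      "Type":"AWS::CodeDeploy::DeploymentGroup",
      "Properties":{
        "ApplicationName":{"Ref":"SampleApplication"},
        "ServiceRoleArn":{"Fn::GetAtt":["CodeDeployRole","Arn"]},
        "Ec2TagFilters":[
          {
            "Key":"Name",
            "Value":{"Ref":"AWS::StackName"},
            "Type":"KEY_AND_VALUE"
          }
        ]
      }
    }

The deployment group uses EC2 tag filters to find the instances (provisioned by the nested stack) on which the CodeDeploy agent runs.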

CodePipeline

While you can create a deployment pipeline for CodePipeline in CloudFormation by directly writing the configuration code, we often recommend that customers manually create the initial pipeline using the CodePipeline console and then, once it’s established, run the get-pipeline command (as shown below) to get the proper CodePipeline configuration to use in defining the CloudFormation template. To create a pipeline using the console, follow the steps in the Simple Pipeline Walkthrough, choosing CodeCommit as the source provider, CodeBuild as the build provider, and CodeDeploy as the deploy provider.

The following snippet shows how you can use the AWS::CodePipeline::Pipeline resource to define the deployment pipeline in CodePipeline.

 "CodePipelineStack":{
      "Type":"AWS::CodePipeline::Pipeline",
      "Properties":{
      ...
        "Stages":[
...

Once the pipeline has been manually created using the AWS console, you can run the following command to get the resource configuration, which can be copied into CloudFormation and modified. Replace PIPELINE-NAME with the name of the pipeline that you manually created.

aws codepipeline get-pipeline --name PIPELINE-NAME

This command outputs the pipeline configuration, which you can add to the CodePipeline resource configuration in CloudFormation. You’ll need to modify the attribute names from lowercase to title case.
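For example (an illustrative fragment, not the full get-pipeline output), a lowercase key block returned by the CLI such as:

    "actionTypeId":{"category":"Build","owner":"AWS","provider":"CodeBuild","version":"1"}

would become the following in the CloudFormation resource:

    "ActionTypeId":{"Category":"Build","Owner":"AWS","Provider":"CodeBuild","Version":"1"}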

In configuring the CodeBuild action for the CodePipeline resource, the most relevant section is in defining the ProjectName as shown in the snippet below.

  "ProjectName":{
    "Ref":"CodeBuildJavaProject"
  }
},
…

CodeBuildJavaProject references the CodeBuild project resource defined previously in the template.

Costs

Since costs can vary as you use certain AWS services and other tools, this section provides a cost breakdown and some sample scenarios to give you an idea of what your monthly spend might look like. Note that costs will depend on your unique environment and deployment; the AWS Cost Calculator can assist in establishing cost projections.

  • CloudFormation – No additional cost.
  • CodeBuild – CodeBuild charges per minute used. It comes with 100 minutes per month at no charge. For a simple execution of this demo, you can stay within the limits of the AWS Free Tier – please read about the Free Tier here. For more information, see AWS CodeBuild pricing.
  • CodeCommit – If used on a small project of less than six users, there’s no additional cost. See AWS CodeCommit Pricing for more information.
  • CodeDeploy – No additional cost.
  • CodePipeline – $1 a month per pipeline unless you’re using it as part of the free tier. For more information, see AWS CodePipeline pricing.
  • EC2 – There are a number of Instance types and pricing options. See Amazon EC2 Pricing for more information.
  • IAM – No additional cost.
  • SNS – Considering you likely won’t have over 1 million Amazon SNS requests for this particular solution, there’s no cost. For more information, see AWS SNS Pricing.

So, for this particular sample solution, if you just run it once and terminate it within the day, you’ll spend a little over $1 or even less if your CodePipeline usage is eligible for the AWS Free Tier.

Deployment Steps

There are three main steps in launching this solution: preparing an AWS account, launching the stack, and testing the deployment. Each is described in more detail in this section. Please note that you are responsible for any charges incurred while creating and launching your solution. 

Step 1. Prepare an AWS Account

  1. If you don’t already have an AWS account, create one at http://aws.amazon.com by following the on-screen instructions. Part of the sign-up process involves receiving a phone call and entering a PIN using the phone keypad. Be sure you’ve signed up for the CloudFormation service.
  2. Use the region selector in the navigation bar of the console to choose the Northern Virginia (us-east-1) region.
  3. Create a key pair. To do this, in the navigation pane of the Amazon EC2 console, choose Key Pairs, Create Key Pair, type a name, and then choose Create.

Step 2. Launch the Stack

Click on the Launch Stack button below to launch the CloudFormation stack. Before you launch the stack, review the architecture, configuration, and other considerations discussed in this post. To download the template, click here.

Time to deploy: Approximately 7 minutes

The template includes default settings that you can customize by following the instructions in this post.

Step 3. Test the Deployment

Click on the CodePipelineURL Output in your CloudFormation stack. You’ll see that the pipeline has failed on the Source action. This is because the Source action expects a populated repository and it’s empty. The way to resolve this is to commit the application files to the newly-created CodeCommit repository. First, you’ll need to clone the repository locally. To do this, get the CloneUrlSsh Output from the CloudFormation stack you launched in Step 2. A sample command is shown below. You’ll replace {CloneUrlSsh} with the value from the CloudFormation stack output. For more information on using SSH to interact with CodeCommit, see the Connect to the CodeCommit Repository section at: Create and Connect to an AWS CodeCommit Repository.

git clone {CloneUrlSsh}
cd {localdirectory}

Once you’ve cloned the repository locally, download the sample application files from the aws-codedeploy-sample-tomcat Git repository and place the files directly into your local repository. Do not include the aws-codedeploy-sample-tomcat folder. Go to the local directory and type the following to commit and push the new files to the CodeCommit repository:

git add . 
git commit -am "add all files from the AWS Java Tomcat CodeDeploy application" 
git push

Once these files have been committed, the pipeline will discover the changes in CodeCommit and run a new pipeline instance, and all stages and actions should succeed as a result of this change. It takes approximately 3-4 minutes to complete all stages and actions in the pipeline.

Access the Application and Pipeline Resources

Once the CloudFormation stack has successfully completed, select the stack, go to the Outputs tab, and click on the CodePipelineURL output value. This will launch the deployment pipeline in the CodePipeline console. Go to the Deploy action and click on the Details link. Next, click on the link for the Deployment Id of the CodeDeploy deployment, and then click on the link for the Instance Id. From the EC2 instance, copy the Public IP value, paste it into your browser, and hit Enter to launch the Java sample application – as displayed in Figure 3.

Figure 3 – Deployed Java Application

You can access the Source and Build details using the CodePipeline action details. For example, go to the pipeline and click on the commit ID for the Source action, or click on the Details link for the Build action. See Figure 4 for a detailed illustration of this pipeline.

Figure 4 – CodePipeline with Action Details

There are also direct links for the CodeCommit, CodeBuild, and CodeDeploy resources in the CloudFormation Outputs as well.

Commit Changes to CodeCommit

Make some visual modifications to the src/main/webapp/WEB-INF/pages/index.jsp page and commit these changes to your CodeCommit repository to see these changes get deployed through your pipeline. You perform these actions from the directory where you cloned the local version of your CodeCommit repo (in the directory created by your git clone command). To push these changes to the remote repository, see the commands below.

git add .
git commit -am "modify front page for AWS sample Java Tomcat CodeDeploy application"
git push

Once these changes have been committed, CodePipeline will discover the changes made to your CodeCommit repo and initiate a new pipeline. After the pipeline is successfully completed, follow the same instructions for launching the application from your browser. Once deployed, you should see the modifications you made in the application upon entering the URL – as shown in Figure 5.

Figure 5 – Deployed Java Application with Changes Committed to CodeCommit, Built with CodeBuild, and Deployed with CodeDeploy

How-to Video

In this video, I walk through the deployment steps described above.

Additional Resources

Summary

In this post, you learned how to define and launch a CloudFormation stack capable of provisioning a fully-codified continuous delivery solution using CodeBuild. Additionally, the example included the automation of a CodePipeline deployment pipeline – which included the CodeCommit, CodeBuild, and CodeDeploy integration.

Furthermore, I described the prerequisites, architecture, implementation, costs, and deployment steps of the solution.

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/aws-codedeploy-sample-tomcat. Let us know if you have any comments or questions @stelligent.


The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.

Introducing AWS Managed Services

by Kate Miller | in APN Consulting Partners, AWS Product Launch, Cloud Managed Services, Migration, MSPs on AWS

This is a guest post from Forest Johns, Principal Product Manager at AWS.

Today we are launching an operations management service called AWS Managed Services, as announced in Jeff Barr’s AWS Blog. Designed and built based on requests and feedback from some of our largest Enterprise customers, AWS Managed Services (AWS MS) provides customers with an alternative to in-house and outsourced data center operations management.

What is AWS MS?

AWS MS follows IT service management (ITSM) best practices, and standard features include patch management, backup, monitoring, security, and operational processes for incident, change, and problem management. At launch, the service supports 23 AWS services and is available in four AWS Regions: US East (Northern Virginia), US West (Oregon), Asia Pacific (Sydney), and EU (Ireland). AWS MS provides prescriptive guidance for data center deployment in the AWS Cloud at scale, and provides standard APIs, stack templates, and automation for common operations.

What’s the Distinction Between AWS MS and the AWS Managed Services Program?

AWS Managed Services is not to be confused with the AWS Managed Service Provider (MSP) Program, which thoroughly vets an APN Partner’s own managed services offerings and next-generation cloud managed services capabilities. AWS MSP Partners undergo a rigorous validation audit of over 80 checks that includes capabilities around application migration, DevOps, CI/CD, security, and cloud and application management. They also have many years of experience in providing full lifecycle migration, integration, cloud management, application management, and application development services. In addition to targeting Enterprises, the AWS MS offering was built to enable AWS Managed Service Providers to augment or replace their existing AWS infrastructure management capabilities, allowing them to focus on migration and application management work for their clients.

What’s the Role of AWS Consulting Partners in AWS MS?

APN Partners were key in the development of this service and play an active role in the deployment and use of AWS MS. Having a standard operating environment not only fast-tracks customer onboarding, but also creates many different opportunities for APN Partners to enable and add value for AWS MS customers. In the coming weeks, we will also be launching a new AWS Managed Services designation as part of the AWS Service Delivery Program for APN Partners (stay tuned to the APN Blog for more information).

Key to the integration and deployment of AWS MS, AWS Consulting Partners enable Enterprises to migrate their existing applications to AWS and integrate their on-premises management tools with their cloud deployments. Consulting Partners will also be instrumental in building and managing cloud-based applications for customers running on the infrastructure stacks managed by AWS MS. Onboarding to AWS MS typically requires 8-10 weeks of design/strategy, system/process integration, and initial app migration, all of which can be performed by qualified AWS Consulting Partners. To participate, APN Partners need to complete the AWS Managed Service Provider Program validation process and/or earn the Migration or DevOps Competency, and also complete the specialized AWS MS partner training.

Learn More

We invite APN Partners who are interested in offering AWS Managed Services to email us at aws-managed-services-inquiries@amazon.com.

Congratulations to our Premier Consulting Partners – Eight New Premier Partners Announced at re:Invent 2016

by Kate Miller | on | in APN Consulting Partners, APN Partner Highlight, Premier Partners, re:Invent 2016 | | Comments

Reaching the APN Premier tier is an enormous achievement. APN Premier Partners have deep experience on AWS, are consistently raising the bar in their AWS-based practice, and are constantly looking for new ways to drive customer success. We also find that Premier Partners go above and beyond in their AWS Training & Certification. Premier Partners have often told me that having a deep bench of AWS Trained & Certified individuals changes the conversation they have with customers. BlazeClan, for instance, has told us:

“When initially engaging with a customer, it not only helps for us to be able to tell the customer how many Associate and Professional Certified resources we have on the team, but it changes the entire conversation with the customer. When our AWS Certified resources engage with the customer, they have a different level of conversation. And it brings a different level of credibility to our company.” – Varoon Rajani, Co-Founder & CEO, BlazeClan; read the full BlazeClan case study here

We are very proud of our AWS Premier Partners, and at the Global Partner Summit at re:Invent 2016, we announced that eight more APN Consulting Partners have officially earned Premier tier status.

Learn more about our new Premier Partners:

We also had the pleasure of recognizing 11 Premier Partners who’ve been in the Premier tier for five years – congratulations to these firms!

Learn more about these Premier Partners here:

Congratulations to all of our 55 Premier Partners!

Learn about the VMware Cloud on AWS Partner Program, Coming in 2017

by Kate Miller | on | in APN Consulting Partners, APN Launches, re:Invent 2016 | | Comments

Earlier this year, VMware and Amazon Web Services (AWS) announced a strategic alliance to build and deliver a seamlessly integrated hybrid offering that will give customers the full software-defined data center (SDDC) experience from the leader in the private cloud, running on the world’s most popular, trusted, and robust public cloud. VMware Cloud™ on AWS will enable customers to run applications across VMware vSphere®-based private, public, and hybrid cloud environments.

Delivered, sold, and supported by VMware as an on-demand, elastically scalable service, VMware Cloud on AWS allows VMware customers to use their existing VMware software and tools to leverage AWS’s global footprint and breadth of services, including storage, databases, analytics, and more. For more information on VMware Cloud on AWS, visit VMware Cloud on AWS.

Today, VMware and AWS are announcing that we are working on a joint initiative, the VMware Cloud on AWS Partner Program, that will be launched in 2017. The program will provide support for APN Partners that help customers deploy and operate VMware workloads on AWS.

If you’re interested in this program and want to stay informed as more information becomes available, please submit your interest at https://aws.amazon.com/partners/vmware.

Connect with Customers through the AWS Partner Solutions Finder

by Kate Miller | on | in APN Competency Partner, APN Consulting Partners, APN Launches, APN Program News, APN Technology Partners, re:Invent 2016 | | Comments

Our top priority is to ensure that we are helping you connect with customers whose business needs you can help meet on AWS. Customers have told us that they often look for APN Partners focused on delivering services and solutions that solve very specific use cases within their industry, and they often seek APN Partners with a presence and focus in particular regions.

Today, I’m excited to announce the launch of the AWS Partner Solutions Finder (PSF), a new web-based tool meant to help customers easily filter, find, and connect with APN Partners to meet specific business needs.

What is the AWS Partner Solutions Finder?

Built based on customer and partner feedback, the PSF is a whole new way to connect customers and partners. Say you’re a Consulting Partner focused on the Financial Services industry, and you hold the AWS Financial Services Competency. As customers come to the PSF and filter by ‘Financial Services’, your firm may appear higher in the search results, along with other AWS Competency holders. Customers can continue to filter by use case, location, and products to find exactly what they need.

With the AWS Partner Solutions Finder, customers can also:

  • Easily identify authorized AWS Resellers and validated AWS Managed Service Providers
  • Quickly find APN Partners who hold an AWS Competency and/or AWS Service Delivery Program distinction
  • Learn about different APN Partners at a glance, with data that is verified by AWS
  • Seamlessly get in touch with an APN Partner

Are Customers Seeing Your Updated Information?

If you are an APN Partner at the Standard tier or higher, it is important that your Partner Detail page is up to date in the Partner Solutions Finder. AWS customers will benefit from learning more about your company and your AWS offerings. The Alliance Lead of your APN account can update your information in the APN Portal by following these steps:

  1. Log in to the APN Portal as the Alliance Lead
  2. Click “Manage Directory Listing” in the left navigation pane
  3. Click “Edit” to modify the content

To visit the AWS Partner Solutions Finder, click here.

To hear from AWS leadership about the PSF, watch our video below, featuring Terry Wise, Worldwide VP of Channels & Alliances, AWS, and Mike Clayville, Global VP of Commercial Sales & Business Development, AWS: