

Getting the “Ops” Half of DevOps Right: Automation and Self-Service Infrastructure

Next generation Managed Services Providers (MSPs) are able to offer customers significant value today, above and beyond the basics of reactive notifications or outsourced helpdesk services that were common with early, traditional MSPs. Today's cloud-evolved MSPs can drive transformative business outcomes, up to and including DevOps transformations. This week in the MSP Partner Spotlight series, we hear from Jason McKay, CTO of Logicworks, as he writes about the importance of automation in operations and how Logicworks helps its customers meet their DevOps goals.

 

 

Getting the “Ops” Half of DevOps Right: Automation and Self-Service Infrastructure

By Jason McKay, CTO of Logicworks

DevOps has been a major cultural force in IT for the past ten years. But a gap remains between what companies expect to get out of DevOps and the day-to-day realities of working on an IT team.

Over the past ten years, I’ve helped hundreds of IT teams manage a DevOps cultural shift as part of my role as CTO of Logicworks. Many of the companies we work with have established a customer-focused culture and have made some investments in application delivery and automation, such as building a CI/CD pipeline, automating code testing, and more.

But the vast majority of those companies still struggle with IT operations. Their systems engineers spend far too much time putting out fires and manually building, configuring, and maintaining infrastructure. A recent survey found that for 33 percent of companies it takes more than a month to deliver new infrastructure, and that more than half had no access to self-service infrastructure at all. The result is that systems engineers burn out quickly, developers are frustrated, and new projects are delayed. Add to the mix a constantly shifting regulatory landscape and dozens of new platforms and tools to support, and chances are that your operations team is pretty overwhelmed.

Migrating to Amazon Web Services (AWS) is often the first step to improving infrastructure and security operations for DevOps teams. AWS is the foundation for infrastructure delivery for the largest and most mature DevOps teams in the world, but running IT operations on AWS the same way you did on traditional infrastructure is simply not going to work.

The Power of Automation

Transforming operations teams for DevOps begins with a cultural shift in the way engineers perceive infrastructure. You’ve no doubt heard it before: Operations can no longer be the culture of “no.” Keeping the lights on is no longer enough.

The key technology and process change that supports this cultural change is infrastructure automation. If you’re already running on AWS, there is no better cloud service for building a mature infrastructure automation practice—it integrates with what your developers are doing to automate code deployment, and makes it easier for your company to launch and test new software.

AWS has all the tools you need. But you also need people who know how to use those tools. That’s what Logicworks helps companies do. We are an extension of our clients’ IT teams, helping them figure out IT operations on AWS in this new world of DevOps and constant change.

Our corporate history mirrors the journey most companies are going through today. Ten years ago, the engineers at Logicworks also spent most of their time nurturing physical systems back to health, responding to crises, and manually maintaining systems. When Amazon Web Services launched, we initially wondered if we would have a place in this new paradigm. Where does infrastructure maintenance fit in a world where companies want infrastructure “out of the way” of their fast-moving development teams? Then we realized that not only could we keep managing infrastructure, but we could do something an order of magnitude more sophisticated and elegant for our clients. We started to approach the business not as racking and stacking hardware, but instead using AWS to create responsive and customized infrastructure without human intervention. That really changed the business model.

Today our engineers spend their time writing and maintaining automation scripts that orchestrate AWS infrastructure, not manually maintaining the thousands of instances under our control. In many ways, we have become a software company. We write custom software for each client that makes it easier for their operations teams to deliver AWS infrastructure quickly and securely. Of course we still have 24x7x365 NOC teams, networking professionals, DBAs, etc., but all of our teams approach every infrastructure problem with this question: How can we perform this (repetitive, manual) task smarter? How can we stop doing this over and over and focus on solutions that make a substantial difference for our customers?

Infrastructure Automation in Practice

Many of the best practices of software development — continuous integration, versioning, automated testing — are now the best practices of systems engineers. In enterprises that have embraced the public cloud, servers, switches, and hypervisors are now strings and brackets in JavaScript Object Notation (JSON). The scripts that spin up an instance or configure a network can be standardized, modified over time, and reused. These scripts are essentially software applications that build infrastructure and are maintained much like a piece of software. They are versioned in GitHub and engineers patch the scripts or containers, not the hardware, and test those scripts again and again on multiple projects until they are perfected.
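
To make that concrete, here is a minimal, hypothetical sketch (not Logicworks' actual tooling) of infrastructure as a versioned script: a tiny AWS CloudFormation template expressed as JSON and launched with boto3. The template body, names, and IDs are placeholders.

import json
import boto3

# A deliberately small CloudFormation template: one instance, defined entirely
# as "strings and brackets" in JSON. The AMI ID and subnet are placeholders.
TEMPLATE = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0abc12345678901de",
                "InstanceType": "t2.micro",
                "SubnetId": "subnet-aaaa1111"
            }
        }
    }
}

cloudformation = boto3.client('cloudformation', region_name='eu-west-1')

# In practice the template lives in a Git repository and is deployed by a pipeline;
# calling create_stack directly here just shows the mechanics.
cloudformation.create_stack(
    StackName='baseline-web-v1',
    TemplateBody=json.dumps(TEMPLATE)
)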

An example of an instance build-out process with AWS CloudFormation and Puppet.

In practice, infrastructure automation usually addresses four principal areas:

  • Creating a standard operating environment or baseline infrastructure template in AWS CloudFormation that lives in a code repository and gets versioned and tested.
  • Coordinating with security teams to automate key tools, packages, and configurations, usually in a configuration management tool like Puppet or Chef and Amazon EC2 Systems Manager.
  • Delivering infrastructure templates to developers in the form of a self-service portal, such as the AWS Service Catalog.
  • Ensuring that all templates and configurations are maintained consistently over time and across multiple environments/accounts, usually in the form of automated tests in a central utility hub that can be built with Jenkins, Amazon Inspector, AWS Config, Amazon CloudWatch, AWS Lambda, and Puppet or Chef.

Together, this set of practices makes it possible for a developer to choose an AWS CloudFormation template in AWS Service Catalog, and in minutes, have a ready-to-use stack that is pre-configured with standard security tools, their desired OS, and packages. Your developers only launch approved infrastructure, never have to touch your infrastructure configuration files, and no longer wait a month to get new infrastructure. Imagine what your developers could test and accomplish when they’re not hampered by lengthy operations cycles.
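
The developer-facing half of that workflow is a single API call (or a console click). The sketch below is a hypothetical example of launching an approved product from AWS Service Catalog with boto3; the product ID, artifact ID, and parameter names are assumptions rather than a real catalog.

import boto3

servicecatalog = boto3.client('servicecatalog')

# Launch a pre-approved stack from the catalog. The IDs and parameters below are
# placeholders for whatever the operations team has actually published.
servicecatalog.provision_product(
    ProductId='prod-examplewebstack',
    ProvisioningArtifactId='pa-exampleversion1',
    ProvisionedProductName='team-a-dev-stack',
    ProvisioningParameters=[
        {'Key': 'Environment', 'Value': 'development'},
        {'Key': 'InstanceType', 'Value': 't2.large'}
    ]
)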

This system obviously has a big impact on system availability and security. If an environment fails or breaks during testing, you can just trash it and spin up another testing stack. If you need to make a change to your infrastructure, you change the AWS CloudFormation template or configuration management script, and relaunch the stack. This is the true meaning of “disposable infrastructure”, also known as “immutable infrastructure”—once you instantiate a version of the infrastructure, you never change it. Since your infrastructure is frequently replaced, the potential for outdated configurations or packages that expose security vulnerabilities is significantly reduced.

Example of AWS Service Catalog.

This is why the work we do at Logicworks to automate infrastructure is so appealing to companies in risk-averse industries. Most of our customers are in healthcare, financial services, and software-as-a-service because they want infrastructure configurations applied consistently (and want to be able to prove to auditors that they are applied universally across multiple accounts), with changes clearly documented in code. Automated tests ensure that any non-compliant configuration change is either proactively corrected or escalated to a 24×7 engineer.

Responsibilities and External Support

If you’re managing your own infrastructure, your operations team is responsible for everything up to (and including) the Service Catalog layer. Your developers are responsible for making code work. That creates a nice, clear line of responsibility that simplifies communication and usually makes developer and ops relationships less fraught.

If you’re working with an external cloud managed service provider, look for one that prioritizes infrastructure automation. Companies that work with Logicworks appreciate that we have abandoned the old style of managed services. Long gone are the days when you paid a lot of money just to have a company alert you after something went wrong, or when a managed service provider was little more than an outsourced help desk. AWS fundamentally changed the way the world looks at infrastructure. AWS also has changed what companies expect from outsourced infrastructure support, and has redefined what it means to be a managed services provider. Logicworks is proud to have been among the first group of MSPs to become an audited AWS MSP Partner and to have earned the DevOps and Security Competencies, among others. We have evolved to continue to add more value to our customers and to help them achieve DevOps goals from the operations side—and not just to keep the lights on.

Whether you outsource infrastructure operations or keep it in-house, the most important thing to remember is that you cannot create a culture that innovates at a higher velocity if your AWS infrastructure is built and maintained manually. Don’t ignore operations in your enthusiasm to build and automate code delivery. Prioritize automation for your operations team so that they can stop firefighting and start delivering value for your business.

Managed Security and Continuous Compliance

As we continue our MSP Partner Spotlight series, let’s dive into managed security, continuous compliance, and the convergence of what have traditionally been the separate focuses of Managed Service Providers (MSPs) and Managed Security Service Providers (MSSPs). A next-generation MSP must have a deep understanding of their customers’ security and compliance needs and possess the ability to deliver solutions that meet these needs. This week we hear from APN Premier Consulting Partner, MSP Partner, and Competency Partner, Smartronix, on how they approach this for their customers.

 

Managed Security and Continuous Compliance:

Next Generation Autonomic Event-Based Compliance Management

By Robert Groat, Executive Vice President – Technology and Strategy at Smartronix

One of the least understood and often overlooked benefits of deploying cloud services is the ability to transform and operationalize security compliance. This means that services native to the cloud can help assess, enforce, remediate, and report on security compliance semi-autonomously. Every action that effects a change in AWS, from the initial creation of the environment, to provisioning and deprovisioning resources, to a change to even the most mundane setting, is performed via an API service call, and every API service call is logged and can be audited as an event.

AWS has enabled native capabilities that allow you to respond programmatically to these events. In effect, you can use automation such as AWS CloudFormation and AMIs to create an environment that is compliant at creation, and thereafter can have an autonomic response to events to enable remediation, self-healing, reporting, or systematic incident response capabilities. Essentially, our customers’ environments remain continuously compliant via programmatic management.

Smartronix has been working in cooperation with AWS since 2008. Our initial infrastructure development efforts focused on creating reusable templates that incorporated security best practices, followed by a combination of proactive and reactive continuous monitoring, alerting, trouble ticket generation, and manual remediation. AWS Lambda, introduced in 2014, has been a key enabler for reaching the next level.

Lambda is a serverless (0-management) solution that can connect events with algorithms written as Lambda functions. Once an event is identified as meaningful—for example, a boundary configuration change—we can write a Lambda function that executes automatically whenever the event occurs.

The other key enabler is AWS Config, a native service that helps you continuously record, monitor, compare, and react to changes in your environment. We can now associate custom AWS Config rules with Lambda functions that enforce compliance. For example, if policy dictates encrypted root volumes, then we can monitor server launch events and enforce these policies automatically. If an attempt is made to create an instance with an unencrypted root volume, the action can be remediated by either quarantining or deleting the resource via the AWS Lambda function.
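
As an illustration of that pattern (a sketch, not Smartronix's production code), a custom AWS Config rule can hand each changed EC2 instance to a Lambda function that reports compliance and, where policy allows, remediates. Stopping the instance is one assumed remediation choice; quarantining or terminating are others.

import json
import boto3

ec2 = boto3.client('ec2')
config = boto3.client('config')

def lambda_handler(event, context):
    # AWS Config passes the changed resource as a JSON string in 'invokingEvent'.
    item = json.loads(event['invokingEvent'])['configurationItem']
    if item['resourceType'] != 'AWS::EC2::Instance':
        return

    instance_id = item['resourceId']
    volumes = ec2.describe_volumes(
        Filters=[{'Name': 'attachment.instance-id', 'Values': [instance_id]},
                 {'Name': 'attachment.device', 'Values': ['/dev/xvda', '/dev/sda1']}]
    )['Volumes']
    encrypted = bool(volumes) and all(v['Encrypted'] for v in volumes)

    # Report the evaluation back to AWS Config.
    config.put_evaluations(
        Evaluations=[{
            'ComplianceResourceType': item['resourceType'],
            'ComplianceResourceId': instance_id,
            'ComplianceType': 'COMPLIANT' if encrypted else 'NON_COMPLIANT',
            'OrderingTimestamp': item['configurationItemCaptureTime']
        }],
        ResultToken=event['resultToken']
    )

    if not encrypted:
        # Assumed remediation policy: stop the instance until it is rebuilt correctly.
        ec2.stop_instances(InstanceIds=[instance_id])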

Compliance actions can be reactive, such as when privileged account usage is identified, automatically verifying that an associated trouble ticket exists before authorizing the request. Other compliance actions can be scheduled. For example, certain rules can run every 24 hours to monitor license compliance, automate backups, or enforce tagging on deployed resources.

Speaking of tagging, your nascent library of Lambda functions should automate, reinforce, and be advised by your tagging strategy. That tagging strategy should help you differentiate activities within your compliance functions. Smartronix refers to this process as Attribute-Based Service Management. Lambda compliance functions can then behave differently based on tags. An instance tagged “environment = development” may not need the same compliance remediation as one tagged “environment = production”. Bringing this strategy full circle, you can actually write Lambda functions that enforce a compliance policy dictating that all deployed resources must include a set of predefined tags.
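
A sketch of that idea follows, with an assumed required-tag set and an assumed Amazon CloudWatch Events trigger on RunInstances (illustrative only, not Smartronix's Attribute-Based Service Management implementation).

import boto3

REQUIRED_TAGS = {'environment', 'owner', 'cost-center'}  # assumed policy for this sketch
ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    # Triggered by a CloudWatch Events rule on the CloudTrail RunInstances event.
    instance_ids = [item['instanceId'] for item in
                    event['detail']['responseElements']['instancesSet']['items']]
    for instance_id in instance_ids:
        tags = {t['Key']: t['Value'] for t in ec2.describe_tags(
            Filters=[{'Name': 'resource-id', 'Values': [instance_id]}])['Tags']}
        missing = REQUIRED_TAGS - set(tags)
        environment = tags.get('environment', 'unknown')

        if missing and environment == 'production':
            # Strict policy for production: stop the instance until tagging is fixed.
            ec2.stop_instances(InstanceIds=[instance_id])
        elif missing:
            # Softer policy elsewhere: flag the instance for follow-up instead.
            ec2.create_tags(Resources=[instance_id],
                            Tags=[{'Key': 'compliance', 'Value': 'missing-tags'}])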

The high degree of flexibility that custom Lambda functions provide can also improve incident response and alerting when policy deviations occur. For Next Generation MSPs like Smartronix, this is an incredibly efficient way to manage multiple environments in a consistent and scalable manner. Although customers may have varying security and compliance requirements, we now have a framework enabled by AWS that helps us customize and respond in a repeatable, efficient manner.

Combining AWS CloudTrail, AWS Lambda, AWS Config, the instrumentation ecosystem, and a source code control system like GitHub, organizations can now manage their software-defined security and compliance processes in the same way they manage their software-defined infrastructure. This improves reusability, reduces errors, ensures policy compliance, automates response, and reduces the typically onerous reporting burden. Your AWS Config Rules and AWS Lambda functions are now important parts of your security controls documentation and you now have a natural audit mechanism for proving how you enforce these controls.

Smartronix is also extending this model into the areas of forensics, threat prediction, and log aggregation and analysis. Combining AWS CloudTrail, AWS Config, and AWS Lambda with Amazon Machine Learning and Amazon AI has enormous potential to change the signal-to-noise ratio of complex and active environments, ensuring that the anomaly envelope is adaptive and that outliers are raised, assessed, and reincorporated into the growing, learning, adapting, intelligent security ecosystem.

The availability of these tools, combined with evolving experience, is making NextGen Managed Services Providers highly competitive, if not superior, as they enter a new opportunity space. Traditional MSPs have focused on IT service management, incident response, patch management, backup, and break/fix services. With software-defined infrastructure and now software-defined security and compliance, NextGen MSPs are blurring the lines between traditional Managed Service Providers and traditional Managed Security Services Providers. These new services, enabled by the cloud, include continuous monitoring, automated vulnerability scanning and analysis, automated boundary management, log aggregation and analysis, end user behavior analytics, and anomaly detection. At Smartronix, we are excited to be disrupting the way enterprises view security and democratizing services that were once the province of only a handful of the world's largest enterprise companies.

Smartronix has managed highly secure, large-scale global environments for more than 22 years. When we say you can achieve greater security in the cloud, you now have a better perspective on how we and other NextGen Service Providers achieve it. You can choose to replicate how you manage on-premises environments in the cloud, but the true transformational value comes when you rethink your approach to take advantage of the newest, most powerful, and most innovative services available to you.

Automation in the Cloud

Continuing our MSP Partner Spotlight series from last week’s post, Unlocking Hybrid Architecture’s Potential with DevOps, automation is another critical area of capability for next generation Managed Service Providers (MSPs). Automation incorporating elements such as configuration templates, code deployment automation, and self-healing infrastructure reduces the need for manual interventions, the potential for errors, and the operating costs for MSPs. This week we hear from Cloudreach (APN Premier and MSP Partner, with numerous AWS Competencies) and their perspective on the value of automation in the cloud.

 

Automation in the Cloud

By: Neil Stewart, Cloud Systems Engineer, Cloudreach

Before my life at Cloudreach, my understanding of a lot of relevant technologies and terminologies was non-existent. I was inspired by a recent Cloudreach blog post about our placement as a Leader in the Gartner Managed Services Magic Quadrant, as well as the blog post about the flexibility of working here, and it got me thinking about my experience so far and how things have progressed.

I joined Cloudreach fresh out of University in May 2014. From there I was given the opportunity to show what I could do with a little time and bright people around me to learn from. Quickly, I began to learn the tricks of the trade when working in the cloud, and more importantly, while working in a managed services environment such as a Cloud Operations team. I learned how to do a variety of things that were totally new to me, such as how to navigate and use Linux, diagnose a Microsoft SQL Server mirroring setup, and write my first Ruby script to delete old AMIs in AWS. I came to appreciate the command line over the GUI and just how much you can do with code and scripting, which leads me to the point of this post.

Automating all the things

I love automation. I have smart lights, smart speakers, and a smart kettle that all have automation involved at home. It can be as simple as turning a light on when I walk into a room or boiling a kettle in the morning when I wake up. Automation is fantastic.

While automation in my personal life is fun, efficient, useful and awesome, automation in the cloud, especially from a cloud operations perspective, is essential. For example, rebooting a single instance after an update is fine the first time, but doing it more than 30 times is painful!

I love to approach these asks with a “Let’s automate that” frame of mind. Some examples of automation we often use at Cloudreach include running a script on a fresh AWS account that will identify all default VPCs in every region and delete associated resources as well as the VPC itself. Sounds simple? It is. However, as AWS adds more regions, this task takes longer. Repeat that across lots of new customer accounts and… you get where this is going.

Writing some code to perform a task like this is not difficult; when you approach other tasks in this way, it only becomes easier. Consider the example below:

import boto3

# Start in any one region just to enumerate all regions, then loop over each of them.
client = boto3.client('ec2', region_name='eu-west-1')
regions = [region['RegionName'] for region in client.describe_regions()['Regions']]

for region in regions:

    print("Finding VPCs in {}".format(region))
    client = boto3.client('ec2', region_name=region)
    vpcs = client.describe_vpcs()['Vpcs']

    default_vpc = [x for x in vpcs if x['IsDefault']]

    if default_vpc:
        default_vpc = default_vpc[0]
        print("Found default VPC {}".format(default_vpc['VpcId']))

        delete = input("Would you like to delete {}? (Y/N) ".format(default_vpc['VpcId'])).lower()

        if delete == 'y':
            print("Deleting {}".format(default_vpc['VpcId']))

            # A VPC cannot be deleted while it still has subnets or an attached
            # internet gateway, so collect those dependencies first.
            subnets = [x['SubnetId'] for x in client.describe_subnets(
                Filters=[{
                    'Name': 'vpc-id',
                    'Values': [
                        default_vpc['VpcId']
                    ]
                }]
            )['Subnets']]

            internet_gateways = [x['InternetGatewayId'] for x in client.describe_internet_gateways(
                Filters=[{
                    'Name': 'attachment.vpc-id',
                    'Values': [
                        default_vpc['VpcId']
                    ]
                }]
            )['InternetGateways']]

            # Detach and delete any internet gateways attached to the VPC.
            for internet_gateway in internet_gateways:
                client.detach_internet_gateway(
                    VpcId=default_vpc['VpcId'],
                    InternetGatewayId=internet_gateway
                )
                client.delete_internet_gateway(
                    InternetGatewayId=internet_gateway
                )

            # Delete the default subnets, then the now-empty VPC itself.
            for subnet in subnets:
                client.delete_subnet(
                    SubnetId=subnet
                )

            client.delete_vpc(
                VpcId=default_vpc['VpcId']
            )

        else:
            print("Not deleting {}".format(default_vpc['VpcId']))

    else:
        print("No default VPC found in {}".format(region))
Ok, this could go on for pages, but you get the idea. Easy, right? There are a lot of improvements you could make to this, but in its simplest form, this is a great example of automating a small and simple task that you don’t need to do manually. Lovely.

Automation at an MSP Level

Simple scripts are great. The power of automation in an MSP environment really shines through when you have lots of these simple scripts that all trigger and run when they need to. This is the difference between working on simple and small environments versus the management and monitoring of multiple large-scale, growing, and sophisticated environments. As our customers shift towards highly scalable and serverless applications and away from more monolithic architecture, automation is less “nice to have” and more “you had better get on the wagon before the wagon runs you over.”

Looking at this in a more real-world sense, let us imagine we have some applications running in the cloud that we want to apply automation to.

Backup taking and retention

Backups and retention of backups are automation 101. We need to be able to back up servers that are not stateless, such as database servers. This can be as simple or as sophisticated as you like. Implementing something like AWS Lambda and an Amazon CloudWatch Events rule to trigger a backup function as often as needed is simple. A function that generates a list of instances that need to be backed up, and then fires a process to back each of them up in parallel, is more effective.

As part of this solution, retention of backups is important too. This can be another AWS Lambda function, configured to run daily, that checks each backup already taken to determine whether it has passed its use-by date. If it has, delete it.

Without much effort, you can have a quick and simple backup solution in place; no manual work is required once it is running, and it scales. You could tie all of this together with Amazon API Gateway and a Describe function and you have a new backup taking and reporting API.
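
A minimal sketch of that backup-and-retention pair as one scheduled Lambda function follows. The backup tag, the seven-day retention window, and the snapshot tagging are assumptions, and a production version would add pagination and error handling.

import datetime
import boto3

RETENTION_DAYS = 7  # assumed retention policy for this sketch
ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    # Scheduled by an Amazon CloudWatch Events rule, e.g. once a day.
    # 1. Snapshot every EBS volume attached to instances tagged backup=true.
    reservations = ec2.describe_instances(
        Filters=[{'Name': 'tag:backup', 'Values': ['true']}])['Reservations']
    for reservation in reservations:
        for instance in reservation['Instances']:
            for mapping in instance.get('BlockDeviceMappings', []):
                if 'Ebs' not in mapping:
                    continue
                snapshot = ec2.create_snapshot(
                    VolumeId=mapping['Ebs']['VolumeId'],
                    Description='automated backup of {}'.format(instance['InstanceId']))
                ec2.create_tags(Resources=[snapshot['SnapshotId']],
                                Tags=[{'Key': 'automated-backup', 'Value': 'true'}])

    # 2. Delete automated snapshots that have passed their use-by date.
    cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=RETENTION_DAYS)
    snapshots = ec2.describe_snapshots(
        OwnerIds=['self'],
        Filters=[{'Name': 'tag:automated-backup', 'Values': ['true']}])['Snapshots']
    for snapshot in snapshots:
        if snapshot['StartTime'] < cutoff:
            ec2.delete_snapshot(SnapshotId=snapshot['SnapshotId'])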

At Cloudreach, we work with customers to implement backup solutions that work within their requirements. This might take shape as AWS Lambda functions as explained above, as third-party products, or custom solutions developed for the customer. Within the Cloud Operations team, we also use in-house tools that allow us to easily automate backups and deal with retention too.

Security Compliance

Automation and security are a perfect match. Where you enable automation within security can vary greatly. A great example of this in place would be security group auditing.

Keeping your resources secure in AWS is important and there are plenty of ways to do it. Security groups and their rules are one of the simplest but also one of the most powerful security features in AWS and an important layer to control. Whether it is accidentally leaving remote access open to any IP address, or a developer opening access from a coffee shop IP address so they can work more easily, these situations are not just bad, they can also potentially violate security policies and compliance standards.

These are both examples of where we can automate to mitigate.

Cloudreach has helped implement functionality for customers where we can alert and report on security group changes. We can restrict users from an IAM perspective so that security group creation has to go through an approval process. This works well but can be time consuming to implement. More simply, we can implement an AWS Lambda function that is triggered each time a security group is created or changed, using AWS Config or Amazon CloudWatch Events. Once triggered, the function inspects that security group and checks whether the ports and sources in each rule are valid, possibly against a configuration file in Amazon S3 or an Amazon RDS table of allowed IPs and sources. If a rule is not allowed, the function removes it, whether it was added to an existing group or included when the group was created.

Either way, we can report on the “breach” through something like Amazon SNS or a logging tool such as Splunk. Most importantly, the time spent in violation of security policy is reduced to seconds, rather than waiting on an alert to be triggered and investigated by an engineer.
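
The simpler Lambda approach could look something like the sketch below. The allowed-ports set, the SNS topic, and the trigger (a CloudWatch Events rule on the CloudTrail AuthorizeSecurityGroupIngress event) are assumptions for illustration, not Cloudreach's in-house tooling.

import boto3

ALLOWED_OPEN_PORTS = {80, 443}  # assumed policy: only these ports may face the internet
TOPIC_ARN = 'arn:aws:sns:eu-west-1:111122223333:sg-violations'  # placeholder topic

ec2 = boto3.client('ec2')
sns = boto3.client('sns')

def lambda_handler(event, context):
    # Triggered whenever an ingress rule is added to a security group.
    group_id = event['detail']['requestParameters']['groupId']
    group = ec2.describe_security_groups(GroupIds=[group_id])['SecurityGroups'][0]

    for permission in group['IpPermissions']:
        port = permission.get('FromPort')
        open_to_world = any(r.get('CidrIp') == '0.0.0.0/0'
                            for r in permission.get('IpRanges', []))
        if open_to_world and port not in ALLOWED_OPEN_PORTS:
            # Remove the offending rule, then report the "breach".
            ec2.revoke_security_group_ingress(GroupId=group_id,
                                              IpPermissions=[permission])
            sns.publish(TopicArn=TOPIC_ARN,
                        Message='Removed rule opening port {} to 0.0.0.0/0 in {}'.format(
                            port, group_id))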

Code deployments

Introducing a CI/CD platform that integrates with your source control system is an awesome way to introduce automation into your development cycle. This is an area that is exciting to get involved in. An effective and deep pipeline integration can enable your team to push minor code changes to dev/pre-prod, and it can also be expanded to full deployments to production.

Cloudreach helps customers manage their CI/CD pipelines by working with them to ensure the infrastructure behind the scenes is running as it should and, if issues arise, they are resolved. We also work with customers from very early on in a cloud enablement or agile ops project to figure out where we can incorporate CI/CD automation as well as how they can manage the risk of moving to automated deployments. We encourage our customers to keep this in mind from day one and push the subject as a must-have rather than a nice-to-have.

AWS Infrastructure changes

Similar to code deployments, infrastructure changes paired with Jenkins and a source control system are powerful and fast.

Here you want to look at using AWS CloudFormation as much as possible; we recommend adopting Sceptre, Cloudreach’s open-source tool for AWS CloudFormation template development and deployment. It has commands that can be used in the testing, approval, and deployment of new and updated infrastructure in AWS.

This setup is useful for changes to sensitive resources, such as IAM, security groups, or VPC components. With a CD pipeline in place, you can restrict changes to these resources to only the people who are allowed to make them and only to changes that pass a set of standards and approvals.

Moving on

I hope it has been helpful to see how you can easily automate some key areas of working in the cloud. Focusing on automation helps deliver financial, security, and innovation benefits to a business and its teams. Pipelines allow you to control how changes are implemented and in which environments, keeping things secure and keeping costs down when changes need to be rolled back because something went wrong. Imagine the revenue that could be lost if a manually deployed production change caused your application to fail. From an innovation perspective, automating tasks allows your teams to focus on more challenging and exciting work, such as improving application features or fixing bugs that may have been overlooked while teams were stretched thin by tedious and often boring tasks.

Hopefully, this post will encourage you to implement automation in your cloud environments or at least look into how automation can help your business work more effectively in the cloud. At Cloudreach, automation is fundamental to working successfully and keeping up with the pace of change in the cloud. We’d love to hear some examples of how automation has been implemented by others and also hear your thoughts on where you think automation could be seen next.

Neil Stewart

Cloud Systems Engineer

Cloudreach

Unlocking Hybrid Architecture’s Potential with DevOps

Last week in our MSP Partner Spotlight series, we heard from Jeff Aden at 2nd Watch and learned about the value that next gen MSPs can bring to their customers through well-managed migrations and through 2nd Watch’s Four Key Steps of Optimizing Your IT Resources. Another area of new value that AWS MSPs can bring to their customers is management of their hybrid IT architecture, allowing customers at any stage of the cloud adoption journey to best leverage the AWS Cloud. This week we hear from Datapipe (APN Premier Partner, MSP Partner, and holder of several AWS Competencies and AWS Service Delivery designations) as they discuss their approach and considerations in supporting their customers’ hybrid architectures.

Unlocking Hybrid Architecture’s Potential with DevOps

By David Lucky, Director of Product Management at Datapipe

Hybrid IT architecture, or what many customers call hybrid cloud, is increasingly prevalent in today’s fast-paced technology industry. Over the past few years, Datapipe has seen an initial reluctance towards cloud adoption transform into excitement, and hybrid architecture is emerging as a go-to solution for enterprise organizations looking for a way to manage their complex operations and run AWS as a seamless extension of their on-premises infrastructure.

Hybrid architecture gives organizations Application Programming Interface (API) accessibility, providing developers with programmatic access to control their environments through well-defined methods. APIs, commonly defined as “code that allows two software programs to communicate with each other,” are increasing in popularity in part due to the rise of cloud computing, and have steadily improved software quality over the last decade. Now, instead of having to custom develop software for a specific purpose, software is often written referencing APIs with widely useful features, which reduces development time and cost, and alleviates risk of error.

With API accessibility, developers can easily repurpose proven APIs to build new applications instead of having to manage them manually. This gives them more room to experiment and innovate and creates a culture of curiosity. In this way, the API accessibility of hybrid architecture leads to a necessary rebalancing of development and operations teams looking to solve problems earlier and more automatically than was previously possible with purely on-premises solutions.

To maintain the culture of curiosity that’s enabled by API accessibility through hybrid environments, we recommend organizations remove the silos that traditionally separate development and operations teams, and encourage open communication and collaboration – better known as DevOps. Implementing a DevOps culture helps organizations take advantage of a hybrid infrastructure to increase efficiencies along the entire software development lifecycle (SDLC). At Datapipe, we understand how critical the adoption of DevOps methodologies and agile workflows are for IT organizations to remain competitive and respond to the constantly evolving technology landscape. It’s the reason we expanded our professional services to include DevOps, and why we help organizations make the cultural switch to DevOps the right way, starting with people.

Individuals Over Tools

While many people conflate DevOps with an increase in automation tools, an organization can’t fully realize DevOps culture without starting with its people. A DevOps culture fosters open communication and constant collaboration between team members. It dissolves barriers between operations and development departments, giving everyone ownership over the SDLC as a whole, beyond their traditional, individual responsibilities. Being able to see the big picture allows team members to transition from being reactive to being proactive. That, in turn, involves shifting away from addressing problems as they arise to determining the root cause of the problem and finding a solution as a part of a continuous improvement mindset. Organizations that fully embrace this full-stack DevOps approach can provision a server in minutes instead of weeks, which is a vast improvement on the traditional SDLC model.

This mindset also means moving from a reactionary approach and solving problems through “closing tickets” to a proactive approach that involves consistently searching for inefficiencies and addressing them in real-time, so an organization’s software is continually improving at the most fundamental levels. Of course, addressing inefficiencies in the software also means addressing inefficiencies in workflows, which leads to the use of DevOps tools such as automation and writing reusable utilities.

However, productivity tools won’t increase efficiency on their own. An effective DevOps culture starts with open collaboration between team members, and then is reinforced by tools. At Datapipe, we see incorporating a DevOps culture through the lens of the “Agile Manifesto,” which promotes “individuals and interactions over processes and tools.” When you combine agile working practices with DevOps, you can manage change in a feature-focused manner, providing faster interaction and response. Managing change in this way means that organizations achieve their goals through a strong DevOps culture that automates the majority of the overall development and delivery process, enabling teams to focus on areas that create a differential experience. This takes time – and collaboration among team members – to set up. The real-time collaboration that marks a full-stack DevOps approach reduces the number of handoffs in an SDLC, thus accelerating the entire process and decreasing an application’s time-to-market.

Looking Ahead

Hybrid architecture growth is expected to continue. Industry analyst firm IDC predicts that 80 percent of all enterprise IT organizations will commit to hybrid architecture by the end of this year. This prediction is in line with what we’re seeing from our customers. As a next-gen MSP, we’ve seen an increase in enterprise companies looking for guidance on incorporating a DevOps culture to complement their digital transformations.

Take our work with British Medical Journal (BMJ), for example. BMJ started out over 170 years ago as a medical journal. Now, as a global online brand, BMJ has expanded to encompass 60 specialist medical and allied sciences journals with millions of readers. As a result of their dramatic growth, their old infrastructure could no longer support their application release process. In addition, as an increasingly global organization, BMJ’s capacity for allowing downtime – scheduled or otherwise – was diminishing. To solve this problem, BMJ needed to move to a sustainable development cycle of continuous integration and automation, which is only possible through a shift to a DevOps type culture. We helped BMJ implement this culture while assisting with changes to their infrastructure. The switch to a more open, collaborative culture not only allowed BMJ to implement a sustainable development cycle, complete with continuous integration and automation, but it also made them feel better prepared to take their next planned step of moving workloads to the AWS Cloud and embracing a hybrid environment. (More about how we helped BMJ move to a DevOps-oriented culture can be found here).

If you’re interested in leveraging DevOps to get the most out of your hybrid environment, we recommend starting with the following considerations:

  • Leverage object-oriented programming principles such as abstraction and encapsulation to build re-usable and parameterized components that can be assembled like building blocks. This can be done in configuration management with Chef Recipes, Puppet Modules, and Ansible Roles, or through infrastructure building blocks like Terraform Modules and AWS CloudFormation scripts.
  • When automating infrastructure management, test destruction as deeply as the creation process. This will give you the ability to iterate and test cleanly.
  • Balance the effort being put into upfront engineering versus operational management activities. More upfront engineering unlocks some great features, such as Auto Scaling on AWS. For more steady-state applications, the effort needed to set up and configure resources manually can sometimes be much less than the effort of automating them, which makes it worthwhile to look for open-source modules to help with your infrastructure and configuration management workflows.
  • For Auto Scaling groups within AWS, consider, as you engineer your process, the time tolerance your workload has between the moment AWS detects the need for a new instance and the moment that instance is fully operational. Fully-baked Amazon Machine Images tend to deliver the fastest time to operational, but this requires building an image for every version of your application; Packer is a great tool for this purpose. In addition, the more you embed user data or configuration management processes, the longer your instance will take to reach an operational state. Finally, keep in mind that processes like domain joins and instance renames require reboots and add time to the launch process, so use them as sparingly as possible. (A brief sketch of the fully-baked approach follows this list.)
  • For a low-latency link between your resources in and out of the cloud, consider taking advantage of higher-level services like AWS Direct Connect, which provides a virtual interface directly to public AWS services and allows you to bypass Internet service providers in your network path. Datapipe client ScreenScape used Direct Connect to link their on-premises environment to Amazon CloudFront for a cloud environment that’s highly available, fully managed, and able to scale over time with proven capability. (Learn more here.)
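
As a brief, hypothetical sketch of the fully-baked approach mentioned above, the snippet below creates a launch configuration from a pre-baked AMI and wraps an Auto Scaling group around it; every name, ID, and size is a placeholder.

import boto3

autoscaling = boto3.client('autoscaling')

# The AMI here stands in for a fully-baked image, e.g. one produced with Packer.
# With no user data bootstrap, instances boot ready to serve, keeping launch time short.
autoscaling.create_launch_configuration(
    LaunchConfigurationName='web-prebaked-v42',
    ImageId='ami-0abc12345678901de',
    InstanceType='m4.large'
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='web-asg',
    LaunchConfigurationName='web-prebaked-v42',
    MinSize=2,
    MaxSize=10,
    VPCZoneIdentifier='subnet-aaaa1111,subnet-bbbb2222',  # placeholder subnets
    HealthCheckGracePeriod=60  # short grace period because instances need no lengthy bootstrap
)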

Hybrid architecture offers organizations the power of both on-premises and cloud environments like AWS, giving them the tools to grow and innovate at a lower cost. For companies to fully capitalize on the benefits of these mixed environments, a culture change is necessary. By shifting to a DevOps culture and enabling teams to work together from a full-stack perspective, organizations can not only increase efficiency in their SDLCs, but also open up opportunities for immense engagement and creativity – qualities necessary for innovation. A next-generation MSP, with DevOps and Software-as-a-Service (SaaS) capabilities, can be a valuable guide for IT teams on their hybrid cloud journey. At Datapipe, we pride ourselves on being a next-generation MSP, and our proficiency with DevOps was a key differentiator that led to our position as a leader in the 2017 Gartner Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide. By partnering with a next-gen MSP, like those included in the AWS MSP Partner Program, organizations don’t have to make the shift to DevOps on their own.

To get started or for assistance on your cloud journey, contact us at www.datapipe.com

David Lucky

Director of Product Management

www.Datapipe.com

Four Key Steps of Optimizing Your IT Resources

In past posts, we have written about The Evolution of Managed Services in Hyperscale Cloud Environments and discussed how AWS and our APN Partners are Raising the Bar in light of this progression. Let’s now hear directly from a few of our MSP Partners who have embraced this new ideology in managed services and who are enabling increasing levels of innovation for their customers.

This is the first in a weekly series of MSP APN Partner guest posts that we will be publishing here on the APN blog this summer. Our partners will be sharing their expertise on critical topics that enable customers to embrace the AWS Cloud and more fully realize the business value it makes possible. Topics will include cloud automation, optimizing after migration, hybrid cloud management, managed security, continuous compliance, DevOps transformation, and MSPs as innovation enablers.

Let’s first hear from Jeff Aden, Founder & EVP Marketing and Business Development at 2nd Watch (APN Premier Partner, MSP Partner, and holder of multiple AWS Competencies), as he discusses the importance of refactoring and optimizing workloads after migration to AWS and how a next gen AWS MSP Partner can deliver this value.

The Four Key Steps of Optimizing Your IT Resources, by Jeff Aden, Founder & EVP Marketing and Business Development at 2nd Watch

The 2nd Watch team has been very fortunate to have the opportunity to help some of the world’s leading brands as they strategize, plan, and migrate their workloads to the cloud, as well as support them in ongoing management of those workloads. I have worked with some of the foremost cloud experts within 2nd Watch, AWS, and our customers, and can share the best practices and four key steps to optimizing your IT resources that we have learned over the years.

Optimization in the world of cloud requires data science, expertise in cloud products and, most of all, an understanding of the workloads that are supported by the infrastructure. We are not talking about a single application where anyone can provision a few instances. This is about large migrations and management of many thousands of workloads. Within that context, the sheer volume of choices in cloud products and services can be overwhelming, but the opportunities for digital transformation and optimization can be huge.

Reaping maximum performance and financial optimization requires a combination of experience and automation. On average, we’ve been able to save our customers 42 percent more with our optimization services than if they managed their own resources.

We see optimization in cloud typically driven by:

1. Migration to cloud

Typically this is the first step to optimizing your IT resources and the most common first step for many enterprises. By simply moving to the cloud, you can instantly save money by paying only for what you use. The days of old—where you bought IT resources for future use versus what you needed today—are over. This brings enormous savings in three areas: The time it takes you (IT) and the business in planning for future needs; the amount of space you need to own or rent to hold your data center along with all the logistics of operating a data center; and the cost of IT hardware and software, since you are not buying today for what you may use by 2020. The initial move can save companies between 40 to 60 percent over their current IT spend.

All of this can be done by migrating to AWS without rewriting every application or making huge bets on large and expensive transformation projects. Customers like Yamaha have achieved savings of $500,000 annually, and DVF racked up 60 percent savings on hosted infrastructure compared to a legacy provider. And, one of the most recognizable use cases is Conde Nast, which increased performance while saving 40 percent on operational costs.

2. Performance

This is an ongoing optimization strategy in the cloud. Auto scaling is probably one of the most widely-leveraged strategies for tuning performance in the cloud. It provides the ability for IT to add resources based on internal or external demand, in real time and with automation. Once you have your workloads in the cloud, you can begin to understand what your resources are really using and how to leverage auto scaling to save money and meet demand with both enterprise applications and consumer facing applications.

However, in order to take advantage of auto scaling, you must first ensure that you have fine-tuned some of the basics to really increase performance. One way to do this is by ensuring that you have selected the correct instance type and type of storage. Another is to ensure your auto scaling is primed and ready to take the load. Customers can also purchase additional IOPS or replace Amazon EBS volumes as needed.
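
One common way to keep auto scaling primed is a target tracking policy that holds a fleet-wide metric near a chosen value. The sketch below, with an assumed group name and a 50 percent average CPU target, shows the idea with boto3.

import boto3

autoscaling = boto3.client('autoscaling')

# Keep average CPU across the group near 50 percent; Auto Scaling adds or removes
# instances as demand changes. The group name and target value are placeholders.
autoscaling.put_scaling_policy(
    AutoScalingGroupName='web-asg',
    PolicyName='cpu-target-50',
    PolicyType='TargetTrackingScaling',
    TargetTrackingConfiguration={
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'ASGAverageCPUUtilization'
        },
        'TargetValue': 50.0
    }
)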

3. Financial

Financial optimization is probably the most intricate and ever-changing aspect of cloud computing today. The introduction of a new product or version can benefit customers dramatically but can also cause some headaches along the way. Often, we see prospects start with financial optimization even before they have looked at migration or performance, or considered how it fits into the comprehensive financial framework a company uses to achieve overall savings.

We encourage clients to migrate their workloads and understand the performance needs prior to purchasing any Reserved Instances (for example) in order to maximize their resources and ROI. Clients should focus on getting some other foundational areas established first, like best practices for tagging organization-wide, visibility into the right reporting KPIs, and understanding the workload needs. Once this is achieved, we find that RIs can be a very effective way to save money and increase ROI. Clients like Motorola, who took this approach, have saved more and increased their ROI further than was possible with their legacy managed service provider.

2nd Watch was built on cloud-native tools and leverages automation that allows for scalability. We manage more than 200,000 Amazon EC2 instances today and understand the complexity beyond the numbers. Behind our solutions for optimization is a team of experienced data scientists and architects that can help you maximize the performance and returns of your cloud assets.

4. Evolving to optimal AWS services

This is the next step in maximizing performance and achieving greater savings. Some companies elect to start by refactoring. While this is the right starting point for some companies, we have found—over many years and through many customer examples—that this can be challenging if customers are also trying to familiarize themselves with running on the cloud. 2nd Watch’s proven methodology aligns with where clients are positioned in their journey to the cloud. If you take our approach, the company’s employees and vendors become immersed in the cloud and see it as the “new normal.” Once acclimated, they can explore and understand new products and services to propel them to the next level.

Companies like Scor Velogica saved an additional 30 percent on hosting and support costs by evolving their application in a SOC 2 cloud-native environment. Celgene reduced the time to run HPC jobs from months to hours by taking this approach. Not only do we preach this approach, we practice it before putting it into effect with clients. We moved from Microsoft SQL Server to Amazon Aurora, increasing performance and capacity without increasing costs or risks. Other clients take it in steps, move to a product like Amazon Relational Database Service (Amazon RDS), and see operational savings when they can clone a new database without the administrative overhead of traditional IT.

A hyper-scale AWS Managed Service Provider (MSP) partner like 2nd Watch can provide tested, proven, and trusted solutions in a complex world. Gartner has named 2nd Watch a Leader in its Magic Quadrant for Public Cloud Infrastructure Managed Services Providers, Worldwide report for its ability to execute and completeness of vision. Access the report, compliments of 2nd Watch, for a full evaluation of all MSPs included. With the right level of experience, focus and expertise it is not so cloudy out there.

To get started or for help, contact www.2ndwatch.com.

 

Jeff Aden

Founder & EVP Marketing and Business Development

2nd Watch, Inc.

www.2ndwatch.com

Announcing the Security Competency for APN Consulting Partners

Recognizing APN Consulting Partners who provide deep technical and consulting expertise helping enterprises adopt, develop, and deploy complex security projects.

Security is the top priority at AWS.

Under the AWS shared responsibility model, AWS provides a secure global infrastructure and foundational compute, storage, networking, and database services, as well as higher-level services. While AWS manages security of the cloud, security in the cloud is the responsibility of the customer. Customers retain control of what security they choose to implement to protect their own content, platform, applications, systems, and networks, no differently than they would for applications in an on-site data center.

For customers and APN Partners, it’s important to understand this distinction and learn where your responsibilities lie. But you certainly don’t need to go it alone. There are a number of APN Technology and Consulting Partners who build and offer value-added security solutions and services to help you implement the security measures you need to meet your unique requirements, all while taking advantage of the data center and network architecture built by AWS to meet the requirements of the most security-sensitive organizations.

Today, we’re thrilled to announce the launch of the Security Competency for Consulting Partners to help you identify and connect with APN Consulting Partners who’ve demonstrated technical and consulting expertise helping enterprises adopt, develop, and deploy complex security projects.

The Security Competency for Consulting Partners

Our customers often ask us to help them identify consultants who have expertise in building, deploying, and managing workloads that meet stringent security requirements. Furthermore, customers are looking at ways to take advantage of the automation and agility the cloud affords to efficiently and effectively meet and exceed their security needs. APN Consulting Partners can provide value-added support and guidance to help augment the resources customers may have to address the unique security requirements of their applications and workloads on AWS. Security Competency Consulting Partners possess deep technical and consulting expertise helping enterprises adopt, develop, and deploy complex security projects. In order to achieve the Competency, APN Partners must provide supporting documentation, such as architectural designs, for review by AWS and its third-party auditor.

Security Competency Consulting Partners possess technical and consulting expertise in one or more of the following areas:

Security Operations and Automation

  • Help customers move to an “Infrastructure as Code” process for managing their AWS Footprint and Security Controls by using immutable building constructs such as CI/CD build pipelines and associated tools such as Git, Jenkins, AWS CloudFormation, AWS CodePipeline, and AWS CodeDeploy
  • Build security by default into continuous integration, continuous deployment (CI/CD), and DevOps pipelines
  • Help implement DevSecOps or SecDevOps and automate security changes (e.g., patch management, AMI pipelines) at scale
  • Implement Digital Forensics and Incident Response (DFIR) programs, analysis, and automated response to security events

Security Engineering

  • AWS infrastructure security deployments (firewalls, IDS, proxies, etc.)
  • VPC design including multi-VPC design patterns and multi-region redundancy
  • Design infrastructure for secrets management, DDoS Resiliency, centralized logging and authentication, etc.
  • Build custom applications to serve security needs
  • Guide and implement security strategies across multiple AWS accounts

Governance, Risk, and Compliance

  • Privileged user and role management, logging, and alerting
  • Designing organizational-wide security playbooks and standard operating procedures
  • Facilitating work to assure that workloads maintain compliance to specific regimes and assurance programs (e.g., PCI, HIPAA, SOX, etc.) and maintain appropriate certification for APN Partner personnel, where required

Congratulations to our launch partners!

The following Advanced and Premier tier APN Consulting Partners are launch Security Competency Partners:

  • 8K Miles
  • Accenture
  • Cloudreach
  • Cloud Technology Partners
  • Deloitte
  • GuidePoint Security
  • Logicworks
  • Nomura Research Institute
  • REAN Cloud LLC
  • stackArmor

Want to become a Security Competency Partner?

Are you looking for a way to differentiate your firm to customers based on your expertise helping customers build and manage secure environments on AWS? Becoming a Security Competency Partner through the APN can increase your visibility to your target customers – you gain public designation as a Security Competency Partner throughout the AWS website and AWS Partner Solutions Finder as well as use of a Competency Badge in marketing collateral. You also gain a number of additional benefits such as eligibility for targeted demand generation campaigns with AWS, preferred access to AWS private betas, and preferred access to AWS roadmap briefings from AWS services teams.

Learn more about the requirements for the Security Competency here.

Want to learn more about AWS Cloud security? Click here.

Shift Security Left through DevSecOps

Fusing application development with integrated, automated security processes

By Christian Lachaux, AABG Security Lead, Accenture; Federico Tandeter, Cloud Security Offering Development Lead, Accenture.

Accenture is a Premier APN Consulting Partner and an AWS MSP Partner that holds a number of AWS Competencies, including Migration.


Development+Security+Operations, better known as DevSecOps, is revolutionizing application development by integrating automated security reviews directly into the software development process. By 2019, more than 50% of enterprise DevOps initiatives will have incorporated application security testing for custom code, up from less than 10% in 2016.1

Agile, security-focused enterprises are now taking it to the next level by applying DevSecOps in a cloud environment, and many are doing so on the AWS Cloud, which emphasizes security as its highest priority.2 This further simplifies and accelerates application development by giving teams access to cloud-based, packaged security tooling and testing services via API calls. With this innovative method, CIOs can ensure that vital security testing is performed at each step of the software development lifecycle—seamlessly and at high velocity.

To support this approach, Accenture DevOps is working to incorporate DevSecOps into the Accenture DevOps Platform service—which we feel will have the dual benefits of making security both easier and quicker, while also making it more measurable and reliable. Additionally, the Accenture AWS Business Group (AABG) helps customers secure cloud deployments using AWS security capabilities and best practices, such as the Center for Internet Security (CIS) AWS Foundations Benchmark, augmented by third-party tools and Accenture services.

Make way for a new method

With agile or waterfall application development approaches, security testing is typically not part of the initial design process. Instead, it is performed as a final manual step on a completed package—which increases the risk of application release delays and compounds costs if issues found in security testing require reengineering or redesign.

Despite these concerns, some companies stick to the traditional methods, partly due to the perception that security testing slows the application development lifecycle or injects complex requirements too late in the process. In some cases, this has reinforced the rift between application development teams and security teams, even though both groups report to the CIO. Forward-looking companies can overcome this challenge through a shift security left approach, which introduces security at the inception of the development lifecycle and automates testing within the DevOps workflow.

A significant improvement over more traditional methods, shift security left makes security an inherent part of the design and architecture of every software product. Using DevOps techniques such as automated provisioning, extensive monitoring, frequent testing, and continuous integration, application developers and security teams can collaborate in a streamlined and secure development process. Specifically, the DevSecOps process parallelizes component development and automates security testing to achieve an iterative, fail-fast model of continuous development and testing at the unit level, followed by final security testing of the completed package.

Security automation industrialized on cloud

CIOs can apply the versatile DevSecOps process to application development and security processes on-premises or in the cloud. However, we feel that cloud provides a clear benefit in two primary ways: first, by supporting programmatic testing; and second, by facilitating DevSecOps through pre-packaged services that use infrastructure as code to automate core security testing.

If a security issue is identified, the developer can address it on the spot, or if necessary, involve the proper security team member to provide a quick fix. The cloud-native environment with embedded security services makes it even easier to develop applications and conduct security testing at the functional and user level on multiple iterations.

Hyperscale cloud providers like AWS facilitate DevSecOps through infrastructure as code, API-driven automation capabilities, and the services that enable DevSecOps—including AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline. (See this recent AWS technical blog for more detail.) Using packaged services like these, companies can expedite the DevSecOps process and then add custom code on top to deliver an enterprise-ready business process or customer-facing service.
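
To make the idea of programmatic security testing concrete, here is a minimal sketch (ours, for illustration only; the template path and the specific rule are assumptions, not part of the Accenture DevOps Platform) of a check that could run as a pipeline build step and fail the build if an infrastructure template opens a security group to the world:

```python
# security_gate.py - a minimal sketch of a static security check that could run
# as a build step (for example, inside AWS CodeBuild) before deployment.
# The template path is a placeholder for illustration.
import json
import sys

TEMPLATE_PATH = "infrastructure/template.json"  # hypothetical template location

def open_ingress_findings(template: dict) -> list:
    """Return ingress rules that allow traffic from 0.0.0.0/0."""
    findings = []
    for name, resource in template.get("Resources", {}).items():
        if resource.get("Type") != "AWS::EC2::SecurityGroup":
            continue
        for rule in resource.get("Properties", {}).get("SecurityGroupIngress", []):
            if rule.get("CidrIp") == "0.0.0.0/0":
                findings.append(f"{name}: port {rule.get('FromPort')} open to the world")
    return findings

if __name__ == "__main__":
    with open(TEMPLATE_PATH) as f:
        findings = open_ingress_findings(json.load(f))
    for finding in findings:
        print(f"SECURITY FINDING: {finding}")
    # A non-zero exit code fails the pipeline stage, "shifting security left".
    sys.exit(1 if findings else 0)
```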

Getting started with DevSecOps

Overall, DevSecOps leads to a more effective risk-based approach to security. Rather than deciding which security apps to apply to an environment, companies can assess where potential risks and vulnerabilities lie and solve them holistically. To reap the near-term and longer-term benefits, Accenture suggests that CIOs follow these steps:

  • Start with a solid DevOps foundation across the development environment. Working with an external provider with strong DevOps experience can accelerate this process through education, training, and tooling.
  • Foster collaboration between development and security teams to embed security in the design. Just as security architects are not necessarily developers, developers may not always be as current on the latest security threats and trends.
  • Deploy continuous security testing built into the continuous integration/continuous delivery (CI/CD) pipeline via automation. It is critical to select the right security tools to support automated testing.
  • Extend monitoring to include security and compliance by monitoring for drift from the design state in real time, enabling alerting, automated remediation, or quarantine of resources flagged as non-compliant (see the sketch below).
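
As a minimal sketch of the last point, and assuming AWS Config is already enabled with rules deployed in the account, the following snippet surfaces non-compliant rules; alerting, remediation, or quarantine would be triggered from this result.

```python
# compliance_check.py - a minimal sketch of continuous compliance monitoring,
# assuming AWS Config is already enabled and rules are deployed in the account.
import boto3

config = boto3.client("config")

# Ask AWS Config which rules currently report non-compliant resources.
response = config.describe_compliance_by_config_rule(
    ComplianceTypes=["NON_COMPLIANT"]
)

for item in response["ComplianceByConfigRules"]:
    rule_name = item["ConfigRuleName"]
    # In a real pipeline, this is where alerting, automated remediation,
    # or quarantine of the offending resources would be triggered.
    print(f"Drift detected: rule {rule_name} is NON_COMPLIANT")
```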

To learn more about implementing DevSecOps into your company’s application development lifecycle, contact christian.lachaux@accenture.com or federico.tandeter@accenture.com. If you have any comments for us, please leave them in the comments section. We’d love to hear from you.


The content and opinions in this blog are those of the third party authors and AWS is not responsible for the content or accuracy of this post.
1 “DevSecOps: How to Seamlessly Integrate Security Into DevOps,” by Neil MacDonald and Ian Head, September 30, 2016, ID: G00315283.
2 For more information, see the AWS Shared Responsibility Model, which delineates AWS’s role in managing security of the cloud, and a customer’s role in retaining control of their chosen security tools to protect their content in the cloud.

 

Tapping the Benefits of the Cloud: A Webinar with BlazeClan & CloudVelox

BlazeClan is a Premier APN Consulting Partner who holds the AWS Big Data and DevOps Competencies. CloudVelox is an Advanced APN Technology Partner who holds the AWS Migration and Storage Competencies. Together, these two APN Partners will be hosting a webinar on Thursday, March 9th, at 11 am PST/2 pm EST to discuss:

  • How you can get started with your cloud journey
  • An overview of BlazeClan’s assessment framework
  • How to migrate your applications to the cloud
  • How to accelerate your cloud migration with CloudVelox’s automation tool
  • Best practices and cloud migration success stories

The webinar will also include a live demo. BlazeClan and CloudVelox encourage business decision makers, CIOs, CTOs, IT Directors, and IT Managers to attend.

Register Here >>

To learn more about BlazeClan’s journey to become a Premier APN Consulting Partner, click here.

Achieving Compliance Through DevOps with REAN Cloud

Aaron Friedman is a Healthcare & Life Sciences Partner Solutions Architect with Amazon Web Services

When I survey our Healthcare and Life Sciences Partners, one of the common competencies I see is a great foundation in DevOps best practices. By building software in an automated and traceable manner, you are able to more easily determine the “Who, What, Where, and When” of any activity performed in the environment. This determination is a cornerstone for any compliant (HIPAA, GxP, etc.) environment.

REAN Cloud (“REAN”) is an AWS Partner Network (APN) Premier Consulting Partner and AWS MSP Partner, as well as an AWS Public Sector Partner. The company holds a number of AWS Competencies, including DevOps, Healthcare, Financial Services, Migration, and Government. REAN is a cloud-native firm with deep experience in supporting enterprise IT infrastructures and implementing continuous integration and continuous delivery pipelines. The team routinely implements complex and highly scalable architectures for workloads in highly regulated industries such as Healthcare and Life Sciences, Financial Services, and Government. DevOps principles are core to REAN’s philosophy, and the solutions they develop are bundled with advanced security features to help address clients’ compliance needs, ranging from HIPAA and HITRUST to FedRAMP and PCI.

Every solution that REAN builds on top of the AWS Cloud treats security and compliance as its top priorities. Healthcare and Life Sciences are highly regulated industries, and many of their workloads are subject to regulatory requirements such as HIPAA and GxP. Several common themes must be addressed in every regulated workload, including:

  • Logging, Monitoring, and Continuous Compliance
  • Documentation and Non-Technical Controls
  • Administrative Environment Access and Separation of Duties

In this blog post, I’ll walk through these concepts and explain how REAN approaches each of these focus areas on the AWS Cloud. Let’s dive a little deeper.

Logging, Monitoring, and Continuous Compliance

Tracking how your environment changes over time, and who accesses it, is central to meeting many different regulatory requirements. To paint the full picture of what is occurring in your environment, you need to store application logs, operating system logs, and other environment-specific logs and performance data. AWS services such as AWS CloudTrail, Amazon CloudWatch, and AWS Config produce and store critical information about your environment that should be organized and retained for potential use during troubleshooting activities or compliance audits. With the AWS Cloud, you can use these services to capture, organize, and verify the logs and information that describe the cloud environment itself.
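
As a minimal, generic sketch of this first step (the trail and bucket names are placeholders, and the bucket must already exist with a policy that allows CloudTrail to write to it), an audit trail can be enabled and verified with boto3:

```python
# audit_logging.py - a minimal sketch of enabling an audit trail with boto3.
# The trail and bucket names are placeholders; the S3 bucket must already
# exist and carry a bucket policy that allows CloudTrail to write to it.
import boto3

cloudtrail = boto3.client("cloudtrail")

TRAIL_NAME = "compliance-audit-trail"     # hypothetical trail name
BUCKET_NAME = "example-cloudtrail-logs"   # hypothetical, pre-existing bucket

# Create a multi-region trail so API activity in every region is captured.
cloudtrail.create_trail(
    Name=TRAIL_NAME,
    S3BucketName=BUCKET_NAME,
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,
)
cloudtrail.start_logging(Name=TRAIL_NAME)

# Verify the trail is actually delivering logs before relying on it for audits.
status = cloudtrail.get_trail_status(Name=TRAIL_NAME)
print("Trail is logging:", status["IsLogging"])
```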

REAN Cloud addresses the challenge of managing all of this log information by leveraging a DevOps Accelerator that they have created called REAN Radar.

[Image: REAN 1]

Radar ingests logs from many different sources, configures meaningful dashboards of information relevant to the environment being managed, and evaluates that information in the context of well-respected security and compliance frameworks such as Center for Internet Security (CIS) benchmarks. REAN Managed Services uses Radar dashboards to monitor for configuration drift, changes to sensitive data access, misconfigured infrastructure, broken ingest pipes, and numerous other environment specific metrics and measures.

[Image: REAN 2]

Radar adapts as the environment grows and shrinks – new systems are automatically added to scope as pipelines grow, and old components are removed when no longer needed. Radar dashboards can be configured to suit a wide variety of customer requests and are well suited to providing “at-a-glance” visibility for management or governance committees. For example, a dashboard can be created to monitor in real time who has access to a particular set of data – this is very useful for HIPAA environments, where monitoring access to protected health information (PHI) is critical.
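
As a generic illustration of one input such a dashboard could consume (this is a sketch of the general idea, not REAN Radar itself, and the policy ARN is a placeholder), the following lists the IAM users, groups, and roles granted a hypothetical PHI access policy:

```python
# phi_access_report.py - a minimal sketch of reporting who is granted access to
# sensitive data, assuming access is granted via a dedicated managed policy.
import boto3

iam = boto3.client("iam")

# Hypothetical managed policy that grants access to the PHI data store.
PHI_POLICY_ARN = "arn:aws:iam::123456789012:policy/phi-data-access"

resp = iam.list_entities_for_policy(PolicyArn=PHI_POLICY_ARN)
for user in resp["PolicyUsers"]:
    print("User with PHI access:", user["UserName"])
for group in resp["PolicyGroups"]:
    print("Group with PHI access:", group["GroupName"])
for role in resp["PolicyRoles"]:
    print("Role with PHI access:", role["RoleName"])
```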

Documentation and Non-Technical Controls

Documentation and Non-Technical Controls are an important part of the overall compliance story for a system. AWS provides a variety of compliance resources that our HCLS partners can use while addressing regulated workloads. With our Shared Responsibility Model, AWS manages the security of the cloud while customers and APN Partners, such as REAN, manage security in the cloud. For example, REAN, as an APN Partner, and REAN customers might decide to refer to AWS controls (such as for hardware management and physical environment security) and other audits and attestations that AWS has achieved for different services (such as SOC 2 (Type 2) or FedRAMP). AWS Artifact provides on-demand access to many of these audit artifacts, which APN Partners can use in their own system documentation.

REAN Cloud helps customers achieve system compliance by supporting a wide range of activities, from the creation of a cloud security and compliance strategy for an entire organization to manual document creation for specific compliance needs. In addition, REAN has helped its customers navigate HITRUST audits.

One of REAN’s goals is to bring the same automation principles to the (often manual) documentation creation process by taking a pipeline-based approach to system and data center deployments. REAN leadership believes that system documentation packages can be automated alongside the environment itself. REAN accelerators are being used to improve the speed of delivery and consistency of these important artifacts that demonstrate control of an environment.

As an example, REAN Managed Services uses REAN AssessIT and document accelerators every month to produce security assessment reports for every managed environment. These reports examine over 40 important security best practices and are generated automatically and tailored for each customer to focus on areas that are relevant to their business.

[Image: REAN 3]

For customers requiring extensive environment documentation packages (such as for GxP compliance), REAN is developing a pipeline that ties fully automated documentation generation to the automated creation of the environments themselves. REAN continues to develop new technology to maximize the value of documentation and applies a consistent, disciplined approach to environment management while minimizing the human cycles required to produce these outcomes.
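
As a generic illustration of the idea of generating documentation from the environment itself (a sketch of the pattern, not REAN’s accelerators), the snippet below pulls security group configuration with boto3 and writes it into a Markdown report that a pipeline could version and regenerate on every change:

```python
# env_report.py - a minimal sketch of generating environment documentation
# directly from the environment; a generic illustration of the pattern.
import datetime
import boto3

ec2 = boto3.client("ec2")

lines = [f"# Environment security report ({datetime.date.today()})", ""]
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    lines.append(f"## {sg['GroupId']} - {sg['GroupName']}")
    for rule in sg.get("IpPermissions", []):
        ranges = ", ".join(r["CidrIp"] for r in rule.get("IpRanges", []))
        lines.append(f"- protocol {rule.get('IpProtocol')}, "
                     f"ports {rule.get('FromPort')}-{rule.get('ToPort')}, "
                     f"from {ranges or 'n/a'}")
    lines.append("")

# Write the report so the pipeline can archive or publish it.
with open("environment-report.md", "w") as f:
    f.write("\n".join(lines))
```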

Administrative Environment Access and Separation of Duties

A major piece of any compliance story is the ability to demonstrate control of an environment. Authentication and authorization are central to this process, allowing a user to access only the specific data they need. One area of concern for auditors is administrative access, due to the broad permissions generally associated with this role. By using AWS native services such as Amazon VPC, AWS Identity and Access Management (IAM), and Amazon WorkSpaces, REAN helps customers build segregated and secure application environments of any size and scale, while still allowing REAN Managed Services or other application support personnel to keep the environment running and provide support for any incidents that may occur.

REAN embraces the concept of “Control Accounts” when designing healthcare and life sciences application environments. A Control Account is used as a common area for hosting shared services and administrative tools that run against the “Managed Accounts”. Here is a simple example:

[Image: REAN 4]

In this diagram, the Control Account is used to manage:

  • Jenkins and all pipeline deployments into the Dev and Prod accounts
  • Nessus vulnerability scans into the other accounts
  • REAN Radar
  • WorkSpaces for administrative access into the other environments. As REAN manages environments with PHI, WorkSpaces (which is not listed as HIPAA-eligible) is not used to remediate specific situations that involve PHI.

 

AWS features such as VPC Peering and IAM cross-account roles make this approach possible and allow REAN to focus on hardening the application hosting environments (such as Dev and Prod) so that only the absolute minimum required permissions and network communication are allowed. Governance and oversight can then focus on the Control Account to ensure that the applications and services used there to support the other environments are locked down and granted only to the required team members.
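
A minimal sketch of the cross-account piece, as a generic illustration of the pattern rather than REAN’s implementation (the account ID, role name, and attached policy are placeholders): a role in a managed account that trusts the Control Account and carries only the permissions its tooling needs.

```python
# control_account_role.py - a minimal sketch of a cross-account role in a
# managed account that trusts the Control Account. The account ID, role name,
# and attached policy are placeholders for illustration.
import json
import boto3

iam = boto3.client("iam")  # run with credentials for the managed (e.g., Prod) account

CONTROL_ACCOUNT_ID = "111111111111"  # hypothetical Control Account ID

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{CONTROL_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        # Requiring MFA is one way to tighten administrative access further.
        "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
    }],
}

iam.create_role(
    RoleName="ControlAccountOperations",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Assumed from the Control Account for pipeline and support tasks",
)

# Attach only the minimum permissions the Control Account tooling needs;
# ReadOnlyAccess here is a stand-in for a tightly scoped custom policy.
iam.attach_role_policy(
    RoleName="ControlAccountOperations",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)
```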

Benefit to Customers

Ultimately, the benefits that REAN provides through its DevOps principles only matter if they translate into tangible value for customers. REAN has helped customers across a wide range of regulated industries, including Financial Services, Healthcare & Life Sciences, and Government & Education, achieve their desired regulatory and technology transformation outcomes on the AWS Cloud.

One such example is how REAN helped Aledade meet their HIPAA goals for their platform. In addition to architecting a solution on the AWS Cloud in accordance with best practices, REAN served as Aledade’s compliance guide. According to Chris Cope, previously the DevOps Lead at Aledade, “REAN Cloud’s staff was a huge help navigating HIPAA/HITECH compliance best practices on approved cloud services. They also had extraordinary attention to detail on security matters and are leaders at defining best practices on AWS.”

In November of 2016, The American Heart Association and AWS announced the launch of the “AHA Precision Medicine Platform”, “a global, secure cloud-based data marketplace that will help revolutionize how researchers and clinicians come together as one community to access and analyze rich and diverse data to accelerate solutions for cardiovascular diseases — the No. 1 cause of death worldwide.”

REAN Cloud, in partnership with AWS Professional Services, worked with AHA leadership to develop and implement the platform on AWS. REAN Engineers have implemented pipeline-driven automated deployments of the entire AHA Precision Medicine Platform and continue to show how security and compliance can move as fast as the development team.

The AHA Precision Medicine Platform leverages REAN Radar dashboards to monitor the environment and the Control Account approach for shared services and administrative access, and the team has established an effective weekly communication plan with AHA leadership to drive priorities. AHA and REAN work jointly to establish proofs of concept and minimum viable solutions, and test these solutions with a series of beta testers. REAN recently published a case study on AHA that you can read here.

Conclusion

Data sensitivity is central to regulated workloads, and we often focus on how we process, store, and transmit that data. Yet the surrounding components, such as logging and access control, are just as important when building a compliant solution. REAN Cloud and its healthcare and life sciences customers achieve an end-to-end solution by combining REAN Cloud’s in-cloud security and management tools with the strengths of AWS.

If you are interested in learning how REAN Cloud can support your healthcare and life sciences workloads to meet your security and compliance requirements, please email them at hcls@reancloud.com.

 

If you’re interested in learning more about how AWS can add agility and innovation to your healthcare and life sciences solutions be sure to check out our Cloud Computing in Healthcare page. Also, don’t forget to learn more about both our Healthcare and Life Sciences Competency Partners and how they can help differentiate your business.

Will you be at HIMSS? Be sure to stop by our booth #6969! We’d love to meet with you.

Please leave any questions and comments below.


The content and opinions in this blog are those of the author, and this post is not an endorsement of any third-party product. AWS is not responsible for the content or accuracy of this post. This blog is intended for informational purposes and not for the purpose of providing legal advice.

How Cognizant Approaches GxP Workloads on AWS

By Vandana Viswanathan, Associate Director, Process & Quality Consulting, Cognizant Technology Solutions, and Joseph Stellin, Associate Director, Cognizant Cloud Services. 

Cognizant is a Premier APN Consulting Partner, an AWS MSP Partner, an AWS Public Sector Partner, and holds a number of AWS Competencies, including Healthcare, Life Sciences, Migration, Big Data, Financial Services, and Microsoft SharePoint. 

Life sciences firms are rapidly accelerating their adoption of AWS to not only advance research in the space, but to optimize the development of software and the environment it runs on. We’ve found that questions around regulatory quality, security and privacy have been addressed to the point where many senior executives actively pursue using AWS as an extension of or replacement for their on-premises environments.

Most companies manufacturing medical products or developing drugs are required by regulation to follow Good Manufacturing, Clinical, and Laboratory Practices (GxP). IT systems running “GxP applications” are subject to FDA audit, and failure to comply with the appropriate guidelines could result in fines and potential work stoppage. Because of this impact, GxP regulations are often at the forefront of our customers’ minds when considering a move to the cloud.

In January 2016, AWS released a white paper on Considerations for Using AWS Products in GxP Systems. With this guidance, it has become easier to develop these regulated workloads on AWS. We have found that life sciences firms are able to achieve the same benefits of scale, cost reduction, and resiliency for their GxP applications that they’ve come to expect from non-regulated workloads on AWS. This was exemplified at re:Invent 2016 where Merck spoke publicly about how they have built GxP solutions on AWS.

At Cognizant, we’ve developed a transformation framework based on our experience working with many large organizations within the life sciences and healthcare verticals. This framework consists of many steps, including analyzing cloud providers, developing and executing validation plans, and creating governance and support procedures to ensure compliance with FDA regulations. The framework enables successful qualification of the cloud infrastructure (IQ), execution, and operations, and helps ensure compliance of the application or software being hosted on the cloud. We’ve applied our approach to live migrations of multiple GxP workloads, including Trackwise and Maximo, as well as to building new GxP environments natively on AWS.

Design principles for GxP

When developing GxP applications for our customers, we’ve found there are key design and operation principles that each workload requires. It is important to note that in a cloud environment, infrastructure is continuously improving, with new features and capabilities added regularly. The need to stay compliant shouldn’t stifle innovation, but proper controls need to be enforced to ensure that FDA requirements are continuously met. We like to think about compliance not as a fixed goal, but as a continuous operational and design requirement.

The following key principles underpin Cognizant’s proprietary transformation framework; for each, we note the key AWS and third-party services we use to address it.

Cloud Provider Assessment: This enables us to evaluate cloud providers based on their viability for hosting a GxP application and their ability to support the specific environment being migrated. The evaluation parameters include regulatory compliance, information security, data privacy, infrastructure and application dependencies, and business criticality, among other key parameters.

Data Security: All sensitive data should be encrypted both at rest and in transit. For example, we use AES-256 encryption for data at rest. We always engage our enterprise security team to evaluate all current customer security solutions and determine whether additional security solutions are needed to meet customer compliance and security requirements.
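
As a minimal sketch of encryption at rest (bucket and key names are placeholders for illustration, not Cognizant’s configuration), an object can be written to Amazon S3 with AES-256 server-side encryption and then verified like this:

```python
# encrypt_at_rest.py - a minimal sketch of writing data to S3 with AES-256
# server-side encryption; bucket and object names are placeholders.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-gxp-data-bucket"      # hypothetical bucket
KEY = "batch-records/record-001.json"   # hypothetical object key

# Request AES-256 server-side encryption so the object is encrypted at rest.
s3.put_object(
    Bucket=BUCKET,
    Key=KEY,
    Body=b'{"batch": "001", "status": "released"}',
    ServerSideEncryption="AES256",
)

# Verify the stored object is actually encrypted before relying on it.
head = s3.head_object(Bucket=BUCKET, Key=KEY)
print("Encryption at rest:", head.get("ServerSideEncryption"))
```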

Authentication and Authorization: Because the data flowing through a GxP application can be sensitive, we need to ensure that only appropriately authorized individuals can access the data, and that access limitations are enforced. We use AWS Identity and Access Management (IAM) and/or extend our current on-premises domain controller resources to the cloud in a secure way.

Traceability and Auditability: We need a time-stamped, secure audit trail that documents how and when users access the environment and application, as well as any changes to the core infrastructure or applications. A benefit of infrastructure as code is that we can validate and log changes to our infrastructure the same way we do for software. We use AWS CloudTrail for all logs and leverage Amazon CloudWatch for alerts and notifications. We have also integrated a proprietary tool called Cloud360 for all tracking, monitoring, management, and audit information.
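
As one hedged illustration of the alerting side (the log group name and SNS topic are placeholders, and CloudTrail must already be delivering to the CloudWatch Logs group), a metric filter and alarm for unauthorized API calls could be wired up like this:

```python
# audit_alerts.py - a minimal sketch of alerting on suspicious activity found
# in CloudTrail logs; log group, topic, and names are placeholders, and
# CloudTrail must already be delivering to the CloudWatch Logs group.
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "CloudTrail/audit-logs"  # hypothetical CloudTrail log group
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:security-alerts"  # hypothetical

# Count unauthorized or denied API calls appearing in the audit trail.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="UnauthorizedAPICalls",
    filterPattern='{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }',
    metricTransformations=[{
        "metricName": "UnauthorizedAPICalls",
        "metricNamespace": "Audit",
        "metricValue": "1",
    }],
)

# Notify the security team whenever any such call occurs.
cloudwatch.put_metric_alarm(
    AlarmName="unauthorized-api-calls",
    Namespace="Audit",
    MetricName="UnauthorizedAPICalls",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[SNS_TOPIC_ARN],
)
```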

How our GxP approach leads to customer success

Our transformation framework has helped simplify the process of creating and maintaining validated environments on a continuously advancing technology platform. This has helped these organizations take advantage of key benefits of the cloud, including cost reduction, agility, faster time to market, scalability, and, most importantly, reliability through redundancy.

For several of our top 10 pharmaceutical clients, implementation of the transformation framework has enabled the successful movement of regulated applications to the cloud. A framework for validating GxP workloads was established, and a precedent has been set for moving further applications to the cloud.

Looking ahead

As the drive to move validated workloads to the cloud continues in the Life Sciences and Healthcare verticals, processes and technologies will evolve and be adopted to expedite the validation process, ensure compliance, and achieve larger cost savings. We look forward to continuing our strong relationship with AWS to help many organizations build confidence in moving GxP workloads to the cloud, advancing technology, and streamlining validation processes.

Please leave any questions and comments below.

 

If you’re interested in learning more about how AWS can add agility and innovation to your healthcare and life sciences solutions be sure to check out our Cloud Computing in Healthcare page. Also, don’t forget to learn more about both our Healthcare and Life Sciences Competency Partners and how they can help differentiate your business.

Will you be at HIMSS? Stop by the Cognizant booth #3214. And be sure to stop by our booth #6969! We’d love to meet with you.


The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.