AWS Startups Blog
Accelerating Software Delivery on AWS
In this post, we look at some of the methods startups use to support a customer-driven development approach, and the AWS services that support these methods.
Starting Up Lean
Many startups and, increasingly, large companies look to the Lean Startup method as a way to quickly determine and build products or services their target market wants, while reducing the risk of spending time and expense building products or services that don’t meet the market’s needs.
A key principle of the Lean Startup method that guides you to the right product with the right set of features is the iterative process of the build–measure–learn feedback loop. At the heart of this process is the minimum viable product (MVP).
Eric Ries, the author of The Lean Startup, describes the MVP in this way:
“That product which has just those features and no more that allows you to ship a product that early adopters see and, at least some of whom resonate with, pay you money for, and start to give you feedback on.”
With the build–measure–learn feedback loop, you build an MVP, capture feedback from customer use of the MVP, and use the insights gained from customer feedback to drive the features in the next build–measure–learn iteration. The faster you can get through the build–measure–learn feedback loop, the quicker you can provide tangible value to your customers and raise the bar for potential competitors.
To incorporate customer feedback into the MVP quickly and accelerate time to market, you need to minimize the time it takes to go through the build–measure–learn feedback loop. Agile development techniques lend themselves to the iterative nature of this process: they enable rapid development of incremental updates to the MVP, leading to shorter feedback loops.
Use the following lean development principles to accelerate your progress through the build–measure–learn feedback loop:
- Eliminate waste by avoiding unnecessary development processes and product features
- Build quality in through increased feedback by using techniques like test-driven development and continuous integration
- Deliver rapidly by developing iteratively
Embracing agile development allows a development team to quickly create releases that incorporate new or updated features driven by customer feedback. This increase in release velocity requires a corresponding increase in the throughput of releases through the test and deploy steps of the release process, sometimes referred to as the “last mile.” You can increase that throughput by automating the steps in the last mile using continuous delivery.
With continuous delivery, the cycle time from when a change is made in development to when the change is staged and ready for production is compressed. The compression is achieved by automating the build, deploy, and test steps in the release process.
Implementing continuous delivery requires setting up a deployment pipeline that models the release process as stages. The stages represent the automated build, test, and deployment steps in the process.
AWS provides a number of services and integrations with third-party tools that support creating the deployment pipeline and processing changes as they flow through the pipeline. AWS CodePipeline is the primary service for supporting continuous delivery on the AWS platform.
Building Your Deployment Pipeline
AWS CodePipeline supports the orchestration of the steps in the release process. With AWS CodePipeline, you model the release process steps in a pipeline by defining the stages that make up the pipeline and the actions performed in each stage, along with tools or services that you use to perform the action.
Once you set up your pipeline, AWS CodePipeline will detect any newly checked-in change to the version control system (VCS) associated with the pipeline, build the change, and deploy and test the change in each of the test stages in the pipeline before deploying the change into production.
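To make this concrete, here is a minimal sketch of a release process modeled as a pipeline definition, in the form accepted by boto3's `codepipeline.create_pipeline()`. All names here (the pipeline, buckets, IAM role, and CodeDeploy application) are hypothetical placeholders, and the build stage is omitted for brevity.

```python
# Sketch of a CodePipeline definition with three stages: a source stage
# watching a versioned S3 object, and two CodeDeploy deploy stages.
# Every name below is a hypothetical placeholder.
pipeline = {
    "name": "MyAppPipeline",
    "roleArn": "arn:aws:iam::111111111111:role/CodePipelineServiceRole",
    "artifactStore": {"type": "S3", "location": "myapp-pipeline-artifacts"},
    "stages": [
        {   # Source: start the workflow when a new revision is uploaded
            "name": "Source",
            "actions": [{
                "name": "FetchSource",
                "actionTypeId": {"category": "Source", "owner": "AWS",
                                 "provider": "S3", "version": "1"},
                "configuration": {"S3Bucket": "myapp-source",
                                  "S3ObjectKey": "myapp.zip"},
                "outputArtifacts": [{"name": "SourceOutput"}],
            }],
        },
        {   # Staging: deploy the change to a test environment with CodeDeploy
            "name": "Staging",
            "actions": [{
                "name": "DeployToTest",
                "actionTypeId": {"category": "Deploy", "owner": "AWS",
                                 "provider": "CodeDeploy", "version": "1"},
                "configuration": {"ApplicationName": "myapp",
                                  "DeploymentGroupName": "staging"},
                "inputArtifacts": [{"name": "SourceOutput"}],
            }],
        },
        {   # Production: the same deploy action against the production fleet
            "name": "Production",
            "actions": [{
                "name": "DeployToProd",
                "actionTypeId": {"category": "Deploy", "owner": "AWS",
                                 "provider": "CodeDeploy", "version": "1"},
                "configuration": {"ApplicationName": "myapp",
                                  "DeploymentGroupName": "production"},
                "inputArtifacts": [{"name": "SourceOutput"}],
            }],
        },
    ],
}

# With AWS credentials configured, this definition could be created with:
# boto3.client("codepipeline").create_pipeline(pipeline=pipeline)
```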
The following illustration shows the main stages in a pipeline: source, build, staging, and production.
The source stage for the pipeline is associated with a VCS. AWS CodePipeline supports using GitHub or Amazon S3 as the VCS for the pipeline. In the source stage, AWS CodePipeline kicks off the pipeline’s workflow when it detects a new change in the associated repository: a commit to a GitHub repository or an update to a file in a versioned S3 bucket.
The build stage for the pipeline supports continuous integration by performing an automated build anytime a new change is committed to the repository associated with the pipeline. You can configure the pipeline to use Jenkins to perform the continuous integration build, or you can use a custom build action that enables the use of your existing build tools to perform the continuous integration builds.
The staging stage is a testing stage and defines the actions required to automate the deployment and testing of the changes in the pipeline. The pipeline can have one or more test stages, with each stage representing a different phase of testing, such as UAT, exploratory, or stress.
AWS CodePipeline gives you the option of using either AWS CodeDeploy or AWS Elastic Beanstalk to deploy the changes to the test environment associated with the test stage. You can also create custom actions to allow your existing deployment tools to perform the deployment.
AWS CodePipeline supports a number of built-in integrations with third-party automated testing services, including Apica, BlazeMeter, Ghost Inspector, and Runscope, for executing automated tests against the test environments associated with the test stages in the pipeline. You can also create custom actions to allow your existing test automation tools to execute automated tests.
The production stage supports deploying to the production environment. By default, the changes are automatically deployed into production if they successfully pass through all the test stages in the pipeline. You can disable the automatic transition to the production stage, requiring user intervention to re-enable the transition before the changes can be deployed into production.
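As a sketch of that control point, the helper below toggles the inbound transition into the Production stage using boto3's CodePipeline client. The pipeline name and reason text are hypothetical placeholders.

```python
# Sketch: hold changes at the door of the Production stage by disabling its
# inbound transition, then release them later by re-enabling it.
def set_production_transition(enabled, pipeline_name="MyAppPipeline"):
    import boto3  # deferred import; actually running this requires AWS credentials
    cp = boto3.client("codepipeline")
    if enabled:
        cp.enable_stage_transition(
            pipelineName=pipeline_name,
            stageName="Production",
            transitionType="Inbound",
        )
    else:
        cp.disable_stage_transition(
            pipelineName=pipeline_name,
            stageName="Production",
            transitionType="Inbound",
            reason="Awaiting release approval",  # required; shown in the console
        )
```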
Automating Your Deployments
AWS has a number of services that you can use to automate deployments, including AWS Elastic Beanstalk, AWS CodeDeploy, AWS OpsWorks, and AWS CloudFormation.
The question “Which of these services should I use?” often comes up. Although the services have some overlap in functionality, each service has specific use cases that make it the optimal choice for automating a deployment.
In this section, we briefly describe the services and provide guidance about selecting a particular service for a particular type of deployment. Let’s start with AWS Elastic Beanstalk and AWS CodeDeploy, both of which have integrations with AWS CodePipeline.
AWS Elastic Beanstalk
AWS Elastic Beanstalk is one of the easiest ways to deploy an application on AWS. It handles both the provisioning of the target environment and the deployment of the application to that environment. AWS Elastic Beanstalk manages the details of provisioning EC2 instances, setting up load balancing and auto scaling, supporting zero-downtime deployments, and monitoring the target environment. Because the service abstracts these infrastructure details, you can spend more time developing your application.
AWS Elastic Beanstalk can provision and deploy to application servers running the following:
- Apache Tomcat for Java
- Generic Java applications
- Apache HTTP Server for PHP
- Apache HTTP Server for Python
- Nginx for Node.js
- Passenger for Ruby
- Microsoft IIS 7.5 for .NET
AWS Elastic Beanstalk is a good fit for deploying web or application servers based on one of its supported development stacks, such as a WordPress site on Apache HTTP Server or a .NET application on IIS.
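A typical Elastic Beanstalk deployment registers a build artifact as a new application version and then points a running environment at it. The sketch below does this with boto3; the application, environment, and bucket names are hypothetical, and it assumes the application and environment already exist.

```python
# Sketch: deploying a new application version with Elastic Beanstalk.
# All names are placeholders; running this requires AWS credentials.
def deploy_version(version_label, bucket, key):
    import boto3  # deferred import so the sketch loads without AWS access
    eb = boto3.client("elasticbeanstalk")
    # Register the build artifact (a source bundle in S3) as a new version
    eb.create_application_version(
        ApplicationName="myapp",
        VersionLabel=version_label,
        SourceBundle={"S3Bucket": bucket, "S3Key": key},
    )
    # Point the running environment at the new version; Elastic Beanstalk
    # handles rolling the update out to the instances it manages
    eb.update_environment(
        EnvironmentName="myapp-staging",
        VersionLabel=version_label,
    )
```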
AWS CodeDeploy
Unlike AWS Elastic Beanstalk, AWS CodeDeploy does not provision the target environment as a step in the deployment process; it requires that the target servers are provisioned before a deployment starts. AWS CodeDeploy is language and stack agnostic and can be used to deploy changes to any application stack that runs on Amazon Linux, Red Hat Enterprise Linux, Ubuntu, or Windows, on servers that are on-premises or on AWS.
AWS CodeDeploy’s unit of deployment is called a revision. A revision consists of the files or binaries that make up the change, the scripts needed to support the deployment, and an AppSpec file: a YAML-formatted file that tells AWS CodeDeploy where on the target server to copy the files that make up the change and what scripts and commands to run to deploy it.
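Here is a minimal, hypothetical AppSpec file for an EC2/on-premises deployment; the source paths, destination, and script names are placeholders. The files section maps the revision's contents onto the server, and the hooks section runs scripts at named points in the deployment lifecycle.

```yaml
version: 0.0
os: linux
files:
  # Copy the revision's app directory into the web root (paths are placeholders)
  - source: /app
    destination: /var/www/myapp
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh   # hypothetical script
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh           # hypothetical script
      timeout: 60
  ValidateService:
    - location: scripts/health_check.sh           # hypothetical script
      timeout: 60
```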
AWS CodeDeploy also supports using Puppet, Ansible, SaltStack, and Chef to configure the target server as part of the deployment process.
If the deployment of your application requires rolling updates across an existing fleet of Amazon EC2 instances or on-premises servers, AWS CodeDeploy is a good fit.
AWS OpsWorks
Similar to AWS Elastic Beanstalk, AWS OpsWorks supports provisioning the target environment and deploying the application to the target environment.
Unlike AWS Elastic Beanstalk and similar to AWS CodeDeploy, AWS OpsWorks is language and stack agnostic. You can use it to deploy changes to any application stack that runs on any of the operating systems supported by AWS OpsWorks (both on-premises and on AWS).
AWS OpsWorks also supports more complex deployment scenarios by letting you model the architecture of your target environment as a stack made up of layers. The layers are associated with EC2 instances and provide the blueprint for configuring the instances using Chef recipes. AWS OpsWorks comes with a number of built-in layers, such as HAProxy, Memcached, and Node.js App Server, and it also allows you to create your own custom layers with your own Chef recipes.
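The stack-and-layer model can be sketched with boto3's OpsWorks client as follows. The stack name, IAM ARNs, and Chef recipe names are hypothetical placeholders.

```python
# Sketch: an OpsWorks stack with one custom layer configured by Chef recipes.
# All names and ARNs are placeholders; running this requires AWS credentials.
def build_stack():
    import boto3  # deferred import so the sketch loads without AWS access
    ow = boto3.client("opsworks")
    # Create the stack that models the target environment
    stack = ow.create_stack(
        Name="myapp-stack",
        Region="us-east-1",
        ServiceRoleArn="arn:aws:iam::111111111111:role/aws-opsworks-service-role",
        DefaultInstanceProfileArn=(
            "arn:aws:iam::111111111111:instance-profile/aws-opsworks-ec2-role"
        ),
    )
    # Add a custom layer whose instances are configured by our own recipes
    ow.create_layer(
        StackId=stack["StackId"],
        Type="custom",
        Name="App Server",
        Shortname="app",
        CustomRecipes={
            "Setup": ["myapp::dependencies"],   # hypothetical cookbook recipes
            "Deploy": ["myapp::deploy"],
        },
    )
```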
If an application requires complex, multilayer deployment and/or is being deployed to a stack not supported by AWS Elastic Beanstalk, AWS OpsWorks would be a good fit. While AWS CodePipeline does not have built-in support for AWS OpsWorks, it supports creating custom actions to allow the use of AWS OpsWorks as a deployment option in your pipeline.
AWS CloudFormation
AWS CloudFormation provides an easy way of provisioning AWS resources. It lets you describe the AWS resources needed to run an application in a template, which is a JSON-formatted text file. You can edit the file using CloudFormation Designer, a graphical tool that lets you view and modify the AWS resources in the template as diagrams. CloudFormation uses the template to create a stack that consists of the running AWS resources declared in the template.
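To illustrate the template format, here is a minimal, hypothetical template describing a single EC2 instance, built as a Python dictionary and serialized to the JSON text that would be stored in a template file. The description, instance type, and AMI ID are placeholders.

```python
import json

# A minimal, hypothetical CloudFormation template with one resource.
# In practice this JSON would live in a template file passed to CloudFormation.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Single web server for myapp (illustrative only)",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "t2.micro",
                "ImageId": "ami-12345678",  # placeholder AMI ID
            },
        }
    },
}

# The template body is just this structure serialized as JSON text
template_body = json.dumps(template, indent=2)
```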
You can use CloudFormation to perform deployments. In addition, the service complements all the deployment services described earlier because you can use it to provision AWS resources in support of the other AWS deployment services.
Although both AWS Elastic Beanstalk and AWS OpsWorks can provision AWS resources, AWS CloudFormation supports provisioning additional resources that they do not support directly. For example, through its .ebextensions configuration files, Elastic Beanstalk can use CloudFormation to provision any AWS resource that CloudFormation supports.
Conclusion
Accelerating software delivery by shortening the cycle time from when a feature is envisioned to when the feature is deployed in production requires automation.
Continuous delivery is an approach to automating the build, test, and deploy steps in the release process. AWS CodePipeline is the primary service for supporting continuous delivery on the AWS platform.
AWS CodePipeline is integrated with AWS CodeDeploy and AWS Elastic Beanstalk for automated deployments, and it integrates with a number of third-party tools for automating the build, test, and deploy steps.
AWS has a number of services that can be used for deployments. Although there is some overlap in functionality, each has specific use cases.
For more information, see the whitepaper Overview of Deployment Options on AWS.