AWS Cloud Enterprise Strategy Blog

Transitioning to DevOps and the Cloud

“Excellent firms don’t believe in excellence — only in constant improvement and constant change.” -Tom Peters

Last year I wrote a mini-series on DevOps in the enterprise. It continues to be a frequent topic of discussion in my conversations with CIOs, which is why I am honored to host a guest post on the subject from AWS’s Scott Wiltamuth. Scott owns Developer Productivity and Tools for AWS and spends a lot of time thinking about this trend. Without further ado…

***

Moving to the cloud is a priority for many companies looking to take advantage of on-demand computing and reduce their IT burden and capital expenditures. These companies seek to benefit from the cloud’s on-demand pricing, flexibility, and scale to lower costs and enable their technology teams to move faster.

Yet, these companies often discover that moving to the cloud not only changes where they run IT, but it also presents an opportunity for them to change how they run and manage IT. In my many conversations with enterprises, a common question I receive is “How do we change our internal processes so that we can move faster and be more agile?”

My answer to them is usually DevOps.

DevOps is a term that has been around for a while and has been embraced by many in the startup world. Recently, many enterprises have also started to adopt DevOps principles to innovate more quickly for their customers and compete more effectively. DevOps can be a difficult term to pin down, so here is our interpretation.

In short, DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market.

Changing How You Run IT

When I talk to customers about DevOps, we usually start with the practices and culture of DevOps rather than specific services or tools. I ask about their current practices. Are development and operations functions separate or more integrated? Are apps more monolithic, or have they been decomposed into pieces that can be delivered independently? Are releases small and frequent (weekly or monthly), or big and infrequent (semi-annually or longer)? We then talk about what their aspirations are, and how AWS and the cloud might help with that.

For example, moving from monolithic application architectures to “microservices” architectures enables companies to decouple complex applications into modular parts that can be operated and updated independently. This is often combined with more frequent releases, which let companies deliver improvements in small, rapid increments and thereby increase their overall velocity.
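To make the contrast concrete, here is a minimal sketch of what one such modular part might look like: a hypothetical “order status” service written in Python with only the standard library. The service name, endpoint, and data are illustrative placeholders rather than anything prescribed by AWS; the point is simply that a piece this small can be owned, deployed, and updated on its own.

# Minimal sketch of one hypothetical microservice ("order-status").
# It owns a single responsibility and can be deployed and updated
# independently of the rest of the application.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory data; a real service would own its own datastore.
ORDER_STATUS = {"1001": "shipped", "1002": "processing"}

class OrderStatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect paths like /orders/1001/status
        parts = self.path.strip("/").split("/")
        if len(parts) == 3 and parts[0] == "orders" and parts[2] == "status":
            status = ORDER_STATUS.get(parts[1])
            if status is not None:
                body = json.dumps({"order_id": parts[1], "status": status}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    # Each microservice runs and scales on its own; others call it over HTTP.
    HTTPServer(("0.0.0.0", 8080), OrderStatusHandler).serve_forever()

Because the service exposes only a small HTTP interface, the team that owns it can change its internals or redeploy it without coordinating a release with the rest of the application.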

To implement more frequent releases, companies use continuous integration and continuous delivery pipelines, which let engineers automate building, testing, and deploying code from development all the way to production. These pipelines act like software delivery “assembly lines”, where automated processes and checks ensure that code changes are safe to deploy and don’t compromise the application as a whole.

Each independent microservice is paired with a delivery pipeline, which enables a faster rate of change for the application overall, since each microservice can be updated independently of the others (barring dependencies). Automated pipelines also free up valuable engineering time that would otherwise be spent on manual release processes, letting developers focus on the core value propositions of their company, such as the product itself.
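As a rough illustration of the “assembly line” idea (a toy sketch, not a depiction of any particular AWS service), the snippet below runs hypothetical build, test, and deploy stages in order and halts at the first failure, which is what keeps an unsafe change from ever reaching production. The commands are placeholders you would swap for your own.

# Toy pipeline sketch: run each stage in order and stop at the first failure.
# The commands below are placeholders; a managed pipeline service runs these
# stages for you and adds artifact handoff, approvals, and an audit trail.
import subprocess
import sys

STAGES = [
    ("build", ["python", "-m", "compileall", "src"]),     # placeholder build step
    ("test", ["python", "-m", "pytest", "-q"]),           # placeholder test step
    ("deploy", ["./deploy.sh", "--environment", "prod"]), # placeholder deploy step
]

def run_pipeline() -> int:
    for name, command in STAGES:
        print(f"=== stage: {name} ===")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"stage '{name}' failed; halting the pipeline")
            return result.returncode
    print("all stages passed; the change is live")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())

In practice, each microservice gets its own copy of this assembly line, so a failure in one service’s pipeline never blocks releases for the others.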

Building Ownership and Accountability

Culture is another important aspect of DevOps that we discuss.

While culture is unique to every organization, DevOps practitioners share common values around ownership and accountability and use effective team structures. This usually means pairing each microservice with a team that becomes responsible for the whole service.

For organizations with separate development and operations teams, this means having the two teams share responsibility across the entire application lifecycle. No longer can developers simply hand off their code to operations and expect the job to be complete, and vice versa. Development and operations have a shared responsibility to ensure that code runs in production and that each application update is reliable. This often means expanding the scope of responsibilities for development and operations from what they might be traditionally accustomed to doing.

For example, Amazon has a strong DevOps culture that emphasizes ownership. We have what are known as “two-pizza teams”, so named because each team is small enough to be fed by two pizzas. Two-pizza teams are each responsible for one or more services, and they become the sole owners of almost every aspect of that service: collecting and responding to end-user feedback, writing requirements, developing the service, building and testing code, deploying and releasing updates, and operating the service. Team members take turns on call, so someone is always available if there is an urgent operational issue.

This philosophy is best summed up by the mantra “You Build It, You Run It.” Forming small teams is what lets us move quickly, and we’ve found that the two-pizza structure is the right size for keeping teams nimble and intimately connected to the details and intricacies of how their service runs. This encourages more ownership and accountability, which ultimately leads to faster results and better experiences for our end users.

Equipping Your Organization for DevOps

Putting DevOps into practice usually requires tooling. Small teams that deploy changes frequently have a strong need for tools. So when customers move to AWS, we aim to help them change not only where they run their compute but also how they run and manage IT. That’s why we provide them with a set of developer and management tools that help them adopt DevOps practices and thus take better advantage of the cloud. When DevOps techniques are combined with the low-cost, on-demand resources of the cloud, companies can achieve higher velocity and make more efficient use of resources, both in cost and in engineering time.

We also help companies stay secure on their journey to DevOps, so that developer productivity and agility don’t have to be sacrificed to keep infrastructure secure and compliant. Secure identity and access management services, along with our governance and compliance services, make it easier for companies to encourage a culture of experimentation and independence in their engineering organizations while staying confident that their security and compliance requirements are met.
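As one small, hedged example of what these guardrails can look like in practice, the sketch below uses boto3 to create an IAM policy that scopes a hypothetical team’s access to the storage its own service uses. The team name, bucket, and actions are placeholders; real policies would be tailored to your own compliance requirements.

# Sketch: least-privilege access scoped to one team's own service resources.
# All names and ARNs below are hypothetical placeholders.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # The team can manage objects only in its own service's bucket.
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::order-status-artifacts",
                "arn:aws:s3:::order-status-artifacts/*",
            ],
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="order-status-team-scoped-access",
    PolicyDocument=json.dumps(policy_document),
    Description="Sketch: scope a team's permissions to its own service",
)

Attaching a policy like this to a team’s role lets the team experiment and move independently, while the blast radius of any mistake stays limited to that team’s own resources.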

Where to Start?

I’m often asked, “So where do I begin?”

A good starting place is to adopt continuous integration and continuous delivery practices, which automate building, testing, and deploying code. These practices help you begin releasing more frequently while keeping your software delivery reliable (more likely, they will make your delivery more reliable than before). Many companies already use a continuous integration service but lack the tooling for more complex automation, and practices like continuous delivery require proper tooling so that your developers can automate these tasks dependably.

AWS offers a set of services that are based on Amazon’s collective software development experience and internal tooling to help AWS customers practice continuous delivery. We built these tools because customers asked us how best to use these DevOps practices and because they wanted a reliable toolset that worked well with the AWS platform. Together, these services (AWS CodeCommit, AWS CodePipeline, and AWS CodeDeploy) help you store and version control code, then automate the building, testing, and deployment of code to Amazon EC2 or on-premises servers.
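As a hedged sketch of how these pieces can be wired together (the names, account ID, and bucket below are placeholders, and most real pipelines would add build and test stages in between), the snippet below uses boto3 to define a CodePipeline that takes source from a CodeCommit repository and hands it to CodeDeploy.

# Sketch: a two-stage pipeline from CodeCommit to CodeDeploy via boto3.
# Every name, ARN, and bucket here is a placeholder.
import boto3

codepipeline = boto3.client("codepipeline")

codepipeline.create_pipeline(
    pipeline={
        "name": "order-status-pipeline",
        "roleArn": "arn:aws:iam::123456789012:role/PipelineRole",
        "artifactStore": {"type": "S3", "location": "my-pipeline-artifacts"},
        "stages": [
            {
                "name": "Source",
                "actions": [{
                    "name": "FetchFromCodeCommit",
                    "actionTypeId": {"category": "Source", "owner": "AWS",
                                     "provider": "CodeCommit", "version": "1"},
                    "configuration": {"RepositoryName": "order-status",
                                      "BranchName": "main"},
                    "outputArtifacts": [{"name": "SourceOutput"}],
                }],
            },
            {
                "name": "Deploy",
                "actions": [{
                    "name": "DeployWithCodeDeploy",
                    "actionTypeId": {"category": "Deploy", "owner": "AWS",
                                     "provider": "CodeDeploy", "version": "1"},
                    "configuration": {"ApplicationName": "order-status-app",
                                      "DeploymentGroupName": "production"},
                    "inputArtifacts": [{"name": "SourceOutput"}],
                }],
            },
        ],
    }
)

Once a pipeline like this exists, each change pushed to the tracked branch flows through it automatically, which is the “assembly line” behavior described earlier.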

The AWS Code* services were inspired by our own internal developer tools and DevOps experience. When Amazon moved to a microservices architecture early in its history, we learned that the software release process could be optimized in ways that let us deliver updates to our customers faster and more reliably. So we built a continuous delivery tool (called “Pipelines” internally) and a deployment engine (known as “Apollo”) that help us deliver updates quickly and safely, and we built them knowing that reliability and security were of the utmost importance to our business.

My colleague Werner Vogels, CTO of Amazon, has written about Apollo before and how it has helped Amazon deploy over 50M times each year across our global fleet of servers while maintaining uptime for our website. Apollo and Pipelines are both “self-service”, in that each team uses them independently to deploy their updates. This is by design and is part of the culture of ownership we’ve fostered; it gives our teams the freedom to make their own decisions and move as quickly as they’d like. These tools are what enable Amazon to innovate rapidly while keeping our website and services up and running 24/7.

DevOps is a Journey

As some may say, DevOps is a journey. There is no “one size fits all” model for DevOps, but every company that moves to the cloud can consider ways to change how it operates and delivers in an increasingly cloud-centric world. Hopefully, we can help you along the way on your DevOps journey.

If you’d like to learn more, feel free to read our page on “What is DevOps?” or see our page about AWS and our DevOps services.

Stephen
@stephenorban
orbans@amazon.com

Stephen Orban

Stephen is the GM (General Manager) of a new AWS service under development, and author of the book “Ahead in the Cloud: Best Practices for Navigating the Future of Enterprise IT” (https://amzn.to/ahead-in-the-cloud). Stephen spent his first three-and-a-half years with Amazon as the Global Head of Enterprise Strategy, where he oversaw AWS’s enterprise go-to-market strategy, invented and built AWS’s Migration Acceleration Program (MAP), and helped executives from hundreds of the world’s largest companies envision, develop, and mature their IT operating model using the cloud. Stephen authored Ahead in the Cloud so that customers might benefit from many of the best practices he observed while working with customers in that role. Prior to joining AWS, Stephen was the CIO of Dow Jones, where he introduced modern software development methodologies and reduced costs while implementing a cloud-first strategy. These transformational changes accelerated product development cycles and increased productivity across all lines of business, including The Wall Street Journal, MarketWatch.com, Dow Jones Newswires, and Factiva. Stephen also spent 11 years at Bloomberg LP, holding a variety of leadership positions across their equity and messaging platforms, before founding Bloomberg Sports in 2008, where he served as CTO. Stephen earned his bachelor’s degree in computer science from State University of New York College at Fredonia. https://www.linkedin.com/profile/view?id=4575032