AWS Cloud Enterprise Strategy Blog
Does Your Technology Organization Have a Nimbleness Metric?
Application modernization is one of the big drivers of cloud adoption: enterprises see it as an opportunity not just to reduce operating costs as they move their core technology to the cloud, but also to improve performance, resiliency, and speed of delivery for new products and features.
Enterprises also begin to modernize their architecture and procurement models as teams leverage cloud-native services, realizing increased development speed by simply making an API call to feature-rich, scalable services like Amazon CloudSearch or Amazon Elastic Container Service, without any of the overhead of cluster administration, tuning, and redundancy. Customer after customer tells us how they have reduced their lead times for new buildouts and product launches from weeks or months to hours or even minutes. They talk about newfound agility, or nimbleness, as a major benefit of modernization.
But danger is lurking. Without a defined mechanism for evaluating new technology architecture and purchasing decisions against an objective measure of nimbleness, it will be difficult to resist the pressure to adopt new frameworks or vendor solutions that seemingly check the box on feature requirements but erode the speed and agility your application modernization efforts worked so hard to achieve.
My AWS colleague, Mark Schwartz, addressed this issue in his recent AWS blog post on enterprise IT, where he describes “Buy off the shelf wherever possible” as an anti-pattern:
The enterprise is focused on checking the box for functionality without considering the strategic value of agile IT assets. A business case determines what functionality is needed; the cheapest or most standard way of obtaining that functionality is obviously the best choice. If the enterprise builds its own system, then no value is attached to the internals of the system (“that is IT’s job”)—only the features matter. In general, though, the company chooses to buy COTS software whenever possible rather than build its own. It then has to mold its business processes around that COTS product, usually through workarounds or expensive, unplanned-for IT customizations and integrations. It is then committed to an ongoing stream of maintenance payments to the COTS vendor. When the vendor makes a major update to their product, all other IT work must stop while the update is tested and installed. The vendor’s roadmap does not include the changes that the enterprise needs to innovate in its market. As a result, the IT assets, while they might work, are not agile. Similarly, the enterprise builds processes that limit its agility: gatekeeping, bureaucracy, heavy-handed approval processes. All of this stands in the way of the enterprise as it tries to become faster and nimbler—the enterprise has never built a way to incorporate the value of nimbleness into its decision-making processes.
I experienced this firsthand in my former role as CIO at Edmunds.com, one of the most popular automotive shopping platforms, serving more than 20 million car buyers each month. We had just completed a year-long effort to implement a continuous delivery platform, adopting continuous integration and test-driven development, which cut our release cycle from six to eight weeks down to daily. It was one of the most value-creating initiatives the technology organization had undertaken. Not only were we able to release new features when they were ready, getting them in front of our shoppers faster, but the level of effort to do so dropped from two dozen people in a release war room for 16 hours at a time to a single person monitoring an automated deployment process. Nimbleness was at an all-time high.
However, it did not take long until product roadmaps started including requirements for new commercial and open-source software components that, while quite robust in features, did not have the same deployment agility as the existing stack. The release-when-ready capability started to require more coordination and more manual steps, drifting away from the precious agility the teams had worked to achieve.
Enter Cycle Time
Cycle time is a wonderfully universal metric for measuring the duration of a task. Best known as a key measurement in the kanban scheduling system for lean manufacturing, it has also been widely adopted in the software development lifecycle (SDLC) of many organizations. Roughly defined as the average amount of time it takes to complete a unit of work, from the time work is started until it is released and ready for users, cycle time is measured and tracked by teams for consistency of scoping, complexity, and delivery.
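In practice, this definition reduces to simple arithmetic over work-item timestamps. As a minimal sketch (the field names and sample data are hypothetical, not from any particular tracking tool):

```python
from datetime import datetime, timedelta

def average_cycle_time(work_items):
    """Average elapsed time from when work starts to when it is released."""
    durations = [item["released"] - item["started"] for item in work_items]
    return sum(durations, timedelta()) / len(durations)

# Hypothetical work items: one took 2 days, the other 4 days.
items = [
    {"started": datetime(2023, 5, 1, 9, 0), "released": datetime(2023, 5, 3, 9, 0)},
    {"started": datetime(2023, 5, 2, 9, 0), "released": datetime(2023, 5, 6, 9, 0)},
]
print(average_cycle_time(items))  # -> 3 days, 0:00:00
```

The same calculation works unchanged whether the unit of work is a user story, a build, or a full release, which is what makes cycle time portable across a varied portfolio.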
Cycle time can also be used as a continuous improvement measurement across a wide and varied enterprise application portfolio to ensure that you are increasing the nimbleness of your technology organization. That’s because every application or system type has a starting point or baseline cycle time, whether it is a mainframe or ERP system with monthly release cycles, or a customer-facing application with weekly ones. To introduce more nimbleness into your environment, start by measuring the current cycle time for every build, provisioning, or release process and make those the high-water marks for the maximum allowed cycle time. Going forward, set goals for teams to not only maintain that high-water measurement, but to decrease it over time, with the north star vision that all cycle times should trend toward zero (time).
Far from being a limiting or punitive performance metric, this approach leaves teams free to make decisions about new features or platforms they feel are necessary; alongside budget, security, and other factors, the new condition for inclusion is simply that the addition must reduce (or at least not increase) the overall cycle time for that service. If the new work increases the overall recurring cycle time for a build/release/provisioning process, then the team knows it needs to remove that overage somewhere else in the cycle. This is how many organizations have made the decision to utilize cloud-native services wherever possible, because by design they are intended to reduce the typical cycle time associated with their use.
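One way to make this condition concrete is a "nimbleness gate" in the pipeline: compare each measured cycle time against the recorded high-water mark, fail when it is exceeded, and ratchet the mark down when a run comes in faster. The sketch below assumes hypothetical process names and durations; it is an illustration of the policy, not any particular CI system's API:

```python
from datetime import timedelta

# Hypothetical high-water marks captured when baselines were first measured,
# one per build/release/provisioning process.
HIGH_WATER_MARKS = {
    "web-app-build": timedelta(minutes=12),
    "web-app-deploy": timedelta(minutes=8),
}

def nimbleness_gate(process, measured):
    """Fail the change if it pushes a process past its high-water mark."""
    limit = HIGH_WATER_MARKS[process]
    if measured > limit:
        raise RuntimeError(
            f"{process} cycle time {measured} exceeds high-water mark {limit}; "
            "recover the overage elsewhere in the cycle before merging."
        )
    # Faster run: ratchet the mark down so cycle times trend toward zero.
    HIGH_WATER_MARKS[process] = measured

nimbleness_gate("web-app-build", timedelta(minutes=11))  # passes; mark drops to 11 min
```

The ratcheting step encodes the trend-toward-zero goal: once a team achieves a faster cycle, that speed becomes the new ceiling rather than a one-off win.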
At Edmunds, we measured the aggregate time to build, test, and deploy customer-facing applications as our nimbleness metric to ensure we could maintain a daily release capability. The teams there have continued to modernize their applications and find ways to reduce the cycle times for a wide range of applications and systems.
Would you establish a cycle time—trend to zero—measurement across your company’s portfolio? What would be the challenges or benefits? Is there a better KPI to encourage your teams to—as my colleague Mark Schwartz would say—“Relentlessly shorten the lead time between idea and implementation?” Let me know. potloff@amazon.com