AWS Cloud Enterprise Strategy Blog

Proven Practices for Developing a Multicloud Strategy

Multicloud clarity

As an Enterprise Strategist, I find multicloud topics come up in many discussions rife with confusion, false certainty, and tentativeness. Companies are bombarded with conflicting messaging telling them either to never adopt a multicloud approach or to not miss out because “everyone is switching to multicloud.”

There are good reasons for either pursuing or avoiding multicloud strategies. This blog focuses on eight proven practices for succeeding with multicloud, including when and where multicloud makes sense and how AWS is positioned to help enterprises succeed with their multicloud strategies.

First, let’s clarify some terminology. Multicloud refers to using more than one cloud service provider (CSP) simultaneously across an organization. Using SaaS products, such as email or project management software, alongside a CSP does not constitute a multicloud environment. That is a sound strategy, and it may let you leverage multiple clouds, but in the context of this blog, it is not multicloud across public clouds.

1. Pursue Multicloud Only to Fulfill Legitimate Business Needs

Although we advise AWS customers to fully realize cloud benefits by choosing a primary cloud provider, where the majority of their workloads are on a single CSP, there are valid reasons why a multicloud approach might be right for your organization. Situations that may require the complexity of a multicloud infrastructure include:

Mergers and Acquisitions

A multicloud environment can be created during an M&A transaction when the acquiring company is on one primary CSP and the acquired company is on another. Welcome to multicloud! What comes next is not easy. The engineer in me wants to say consolidate—less is more. However, it may not make sense to consolidate immediately. Your overall technology integration strategy and assessment approach should be reflected in this process; make it part of your M&A playbook. What you move from one provider to another, when you move it, and what you leave alone may vary. But establishing an integration strategy is just as important as making sure multiple instances of ERP do not run forever.

Desire to Leverage Long-Term Differentiated Capabilities of Another CSP

The fear of missing out drives some companies to want a bit of every cloud. We believe companies are better served by selecting a CSP that can solve the majority of their organization’s challenges rather than adopting a more diffuse strategy. An 80/20 strategy is a good way of thinking about this. Indexing on the 80% (and not the 20%) can result in better efficiencies, talent retention, and value. While there may be specialized workloads that require a certain technology, those situations should be addressed case by case, where benefits and tradeoffs can be considered.

Companies may think about “the right workload for the right cloud.” Make sure that the analysis of what constitutes the “right cloud” extends beyond considerations for a specific workload. Ask how spreading this workload onto an additional CSP impacts overall complexity. I recommend that you conduct a careful price and performance analysis of each workload on each cloud to make sure the value gained is enough to justify the added complexity.

Multicloud at the Holding Company and Primary Cloud at the Operating Company/Line of Business

For private equity organizations or large holding companies with several portfolio companies, it can make sense for each portfolio company to have its own CSP strategy (frequently driven by M&A). Concentrating more of the group’s spending on a single cloud provider might unlock volume discounts and incentives for reserving instances, but the other multicloud shortfalls around talent, fragmented workloads, and increased risk are largely bypassed because each portfolio company operates independently.

2. Be Mindful of Multicloud Myths

Myth 1: Everyone is Adopting Multicloud Strategies

Advisory firms and media companies have released mixed findings on the extent to which companies place their workloads in multiple clouds. Our advice is to do the right thing for your firm and make decisions based on costs and risks, regardless of how prevalent this practice may appear.

Myth 2: Multicloud Reduces the Risk of Vendor Lock-In

Some companies cite the fear of lock-in (from both contractual and technological perspectives) as a primary reason for pursuing multicloud strategies. In on-premises environments, companies are driven to make large, long-term capital investments, often governed by lengthy, complicated service contracts. These are big one-way door decisions (i.e., difficult and costly to reverse) for companies and necessitate a strong focus on lowering risk.

The cloud is different. Companies that run the same workload in multiple cloud providers often feel pressure to use only the “lowest common denominator” of services available across both, and they should weigh those limitations. In some cases, they can avoid this problem by running different workloads with different providers.

I recommend companies work to fully understand the potential switching costs of exiting their existing CSP and the likelihood of that ever being necessary. From there, they can define the best approach to reducing lock-in by weighing that cost and likelihood against the strategic benefits of having a primary provider.

It is also important to note that the cloud is inherently more open than the traditional IT model, and we do not believe multicloud is necessary to avoid lock-in. Look at how AWS builds services on open-source technologies and standards, such as SQL, Linux, and containers. Customers have the choice of building on managed open-source services, such as Amazon Relational Database Service (RDS) for MySQL and Amazon RDS for PostgreSQL, or on foundational building blocks, and they pay as they go; there is no long-term, upfront commitment. We strive to build services that customers want to use, but should a customer choose to move away, AWS makes this as simple as possible. AWS provides multiple migration tools not only to help customers move resources from on-premises to AWS more easily but also to move them back on-premises or to other clouds if customers so choose.

Myth 3: Multicloud Improves Availability

Reducing the risk of service disruption if a company’s primary CSP has an outage is an increasingly rare reason for adopting a multicloud strategy. In these cases, there is a belief that a company can simply and seamlessly switch its workloads to a secondary CSP.

Multicloud failover presumes that an application can be failed over to another cloud. As many companies have found, this is extremely challenging. Achieving it requires the company to maintain full portability between two CSPs, adding complexity, risk, and additional work just to preserve the possibility of failover.

Distinguished VP Analyst Lydia Leong from Gartner summarized the problems with multicloud failover in her tweet: “Multicloud failover is complex and costly to the point of almost always being impractical, and it’s not an especially effective way to address cloud and resilience risks.” The problem in making failover work is all the CSP differentiators: different network architectures and features, storage capabilities, proprietary higher-level services, database layers, ML services, and security capabilities. When workloads are spread across CSPs, a failure in either CSP could cause an outage. In this case, spreading workloads across CSPs actually increases risk.

Instead, I recommend that companies mitigate risk by “implementing and simplifying.” Target a specific workload or application for a single cloud, migrate it, master it, take cost and risk out, and repeat. Encourage deeper learning of CSP-specific features and capabilities via training, and take advantage of higher-level CSP-specific services and tooling that are already integrated. Finally, and maybe most importantly, take advantage of AWS’s Regions and Availability Zones, which already give AWS customers an excellent foundation for building highly available and reliable solutions.

Myth 4: Multicloud Provides Better Pricing

Price competitiveness might be the weakest argument of all for multicloud. Organizations’ experiences with complicated, expensive software or data center contracts that lock them into multi-year agreements have made them wary when procuring IT services. Traditional procurement approaches have not adapted to pay-as-you-go purchasing, volume discounts, or the reality of price competition in the cloud. (AWS has reduced prices 129 times since its inception.)

The biggest single driver of cost reduction is how well-managed and optimized a company’s cloud environment is. A company may see better cost optimization by working primarily with a provider whose services offer price-performance advantages (such as compute instances based on custom-designed chips like AWS Graviton) and that offers superior cloud financial management solutions. According to a 2021 Hackett Group study of more than 1,000 organizations, infrastructure spending as a percentage of total IT spending was 20% lower for AWS customers than for multicloud organizations.

Our experience has shown that companies do not anticipate the added cost and complexity of operating in multiple clouds, nor do they appropriately weigh it against the perceived gain in a head-to-head sourcing engagement.

3. Have a Clear Strategy and Governance to Support It

Just deciding to pursue a multicloud strategy is insufficient; you must also establish a plan for delivering on your multicloud objective, including clear governance for which workloads will go where and why. Define evaluation criteria for placing workloads and their dependencies on the most appropriate CSP. If placement is left up to individuals, uncoordinated sprawl across CSPs will likely erode any value the multicloud strategy sought to achieve. Evaluate CSP workload performance regularly and use your assessment as a key input to CSP selection criteria and future usage.

It is crucial to have comprehensive visibility into the total number of services, applications, and components used across the enterprise as part of an overall governance strategy. Integral to this is a robust tagging strategy that spans CSPs and establishes clear ownership, usage, and environment (e.g., development, QA, stage, and production) for 100% of deployed resources. Everything should be tagged to an owner; if it is not tagged or an owner cannot be identified, it should be removed. This codifies governance rules and automates enforcement instead of creating blocks to progress (guardrails, not gates). Cost, operations, and security must be tracked, monitored, and acted upon in the same manner with the same depth of data and transparency across CSPs. A single tool for a given need that can operate across CSPs is preferred.
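
To make “guardrails, not gates” concrete, here is a minimal sketch of what the AWS side of an automated tag audit might look like, assuming Python and boto3; the required tag keys are illustrative placeholders for your own standard, and a comparable check would be needed in each other CSP you use.

```python
# Minimal sketch: flag AWS resources that are missing required governance tags.
# Assumes boto3 credentials are already configured; the tag keys below are
# illustrative placeholders for your own tagging standard.
import boto3

REQUIRED_TAGS = {"owner", "environment"}

def find_untagged_resources(region="us-east-1"):
    """Return ARNs of resources missing any of the required tag keys."""
    client = boto3.client("resourcegroupstaggingapi", region_name=region)
    paginator = client.get_paginator("get_resources")
    missing = []
    for page in paginator.paginate():
        for resource in page["ResourceTagMappingList"]:
            tag_keys = {tag["Key"] for tag in resource.get("Tags", [])}
            if not REQUIRED_TAGS.issubset(tag_keys):
                missing.append(resource["ResourceARN"])
    return missing

if __name__ == "__main__":
    for arn in find_untagged_resources():
        print(f"Missing required tags: {arn}")
```

Run on a schedule, a check like this can feed a report or a cleanup workflow, keeping enforcement automated rather than turning governance into a manual gate.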

4. Do Not Spread Contiguous Workloads across CSPs

In my view, workloads spanning multiple CSPs introduce needless complexity, risk, and cost while complicating support, deployment, and architecture with little value added. Contiguous workloads often involve large volumes of data that need to be processed and analyzed together. When that data is distributed across multiple cloud providers, it can create challenges in data movement, synchronization, and maintaining consistency. Additionally, managing a contiguous workload across multiple CSPs can be complex and time-consuming. It requires dealing with different APIs, management interfaces, security models, and operational processes for each CSP. This complexity raises the chance of errors, adds operational overhead, and can hinder agility and scalability.

Specific criteria and guiding principles should be established when evaluating this type of design and business need.

5. Applications Should Remain with Their Transactional Data

Care should be taken when developers need to move large volumes of data between applications in different clouds, especially with compute/applications deployed in one CSP and data storage in another. Such a situation can add complexity and latency that may offset perceived benefits.

The decision criteria for determining a CSP for a workload should include a long-term view of integrating that workload with others. Will the data be needed for advanced analytics or ML beyond its current scope? Will the services provided be consumed broadly across other CSPs, or are they isolated to the workloads in that CSP? For more guidance and a decision model for deployment considerations, check out my colleague Gregor Hohpe’s Multi-cloud: From Buzzword to Decision Model blog.

6. Containers Can Help, but Realize They Do Not Solve Every Use Case

Using containers is generally a good idea for any modern application, and they help with many elements of portability. Containers are platform-agnostic, meaning they can run on any cloud platform or infrastructure that supports containerization. This allows you to develop and package your application once and deploy it consistently across multiple cloud providers or on-premises environments without significant modifications. But be cautious: containers do not work in all cases (e.g., large monolithic applications), nor do they solve all the issues around portability between CSPs, especially data, policies, and security.

7. Have a Single Cloud Center of Excellence (CCOE) but Specialize Within

As we advise many AWS customers, you should leverage a CCOE within your organization to provide leadership, standardization, and acceleration of your cloud journey. When it comes to multicloud, we find the most successful companies have a single CCOE and specialize the skills, tools, and mechanisms particular to each CSP within it. When AWS customers instead stand up a separate CCOE for each CSP, it often leads to divergence, reengineering, and waste rather than the more coordinated approach a single CCOE provides.

8. Make Sure Security is Always a Top Priority

Multicloud makes security harder by increasing the risk of unauthorized access or data breach. Multicloud forces companies to deal with multiple security models across CSPs in areas such as identity management, network security, asset management, and audit logging.

This complexity makes transparency harder and increases the burden on security teams, elevating risk. Although they are not unique to multicloud, several core security practices become more important: (1) shifting security left by automating and embedding it into delivery pipelines, cloud environments, and team priorities; and (2) encrypting data at rest and in transit within or between CSPs.
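
As one illustration of practice (2), here is a minimal sketch, assuming Python, boto3, and a hypothetical bucket name, of enforcing encryption at rest and in transit for a single S3 bucket; every CSP in a multicloud estate would need its own equivalent controls, which is part of the added security burden described above.

```python
# Minimal sketch: require encryption at rest (default SSE-KMS) and in transit
# (TLS-only bucket policy) for an S3 bucket. The bucket name is a hypothetical
# placeholder.
import json

import boto3

s3 = boto3.client("s3")
bucket = "example-multicloud-data"  # hypothetical bucket name

# Encrypt new objects at rest by default with an AWS-managed KMS key.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
        ]
    },
)

# Deny any request that does not arrive over TLS (encryption in transit).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```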

One useful approach to multicloud adoption is creating a single destination for security data (i.e., a single pane of glass). Augment this with the cloud-native tools each CSP provides so that the data is presented in a way that makes sense within that environment.
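
As a sketch of what feeding that single pane of glass could involve on the AWS side, the following assumes Python, boto3, and AWS Security Hub as the source of findings; forward_to_dashboard is a hypothetical placeholder for whatever central tool you use.

```python
# Minimal sketch: pull active, high-severity findings from AWS Security Hub so
# they can be forwarded to a central, cross-CSP security view.
import boto3

securityhub = boto3.client("securityhub", region_name="us-east-1")

def forward_to_dashboard(finding):
    # Placeholder: push the finding into your own central security tool.
    print(finding["Title"], finding["Severity"]["Label"])

filters = {
    "SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}],
    "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
}

paginator = securityhub.get_paginator("get_findings")
for page in paginator.paginate(Filters=filters):
    for finding in page["Findings"]:
        forward_to_dashboard(finding)
```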

Conclusion

For most organizations, a primary cloud strategy provides the most value through simplicity, focus, and risk mitigation while allowing companies to deepen their partnership with, and working knowledge of, their primary CSP and its services. This increases an organization’s ability to take advantage of more sophisticated services, better attract and retain talent, and deliver value faster.

A multicloud approach can make sense, but companies should ensure their decision to adopt such an approach is driven by business needs and made with a clear understanding of the tradeoffs involved. In such cases, we recommend a cloud model focused on applications and business workflows that can be delivered from a single CSP, are unlikely to share data across CSPs, and have clear governance for which workload goes where.

To learn more about the AWS services that can help centralize and simplify management and monitoring of hybrid and multicloud environments, provide access to all your data wherever it is stored, and run applications on AWS, on-premises, and other clouds with AWS container services, check out the AWS Solutions for Hybrid and Multicloud.

Tom Godden

Tom Godden is an Enterprise Strategist and Evangelist at Amazon Web Services (AWS). Prior to AWS, Tom was the Chief Information Officer for Foundation Medicine, where he helped build the world’s leading FDA-regulated cancer genomics diagnostic, research, and patient outcomes platform to improve outcomes and inform next-generation precision medicine. Previously, Tom held multiple senior technology leadership roles at Wolters Kluwer in Alphen aan den Rijn, the Netherlands, and has over 17 years of experience in the healthcare and life sciences industry. Tom has a Bachelor’s degree from Arizona State University.