AWS Cloud Enterprise Strategy Blog

Rightsizing Infrastructure Can Cut Costs 36%

When an enterprise migrates into the cloud, it is empowered with many new ways to manage its costs. One important benefit is increased agility, which in turn leads to leaner delivery processes with corresponding savings. With agility comes the possibility of rapid feedback-and-adjustment cycles, from which come improved quality and a lower cost of rework. Managed services (anything from databases and analytics to identity and access management, logging, and monitoring) can reduce operational costs and result in a lower Total Cost of Ownership (TCO). And the pay-as-you-go model for higher-level services is likely to reduce costs for most enterprises. These savings come from using the cloud environment in new ways, and they have the greatest long-term impact on costs.

Nevertheless, it is the cost of infrastructure that many enterprises first focus on, and that often yields the most easily measurable and visible savings. To maximize those savings, enterprises should make a focused effort and take advantage of the tools AWS makes available to them. In the cloud, an enterprise has a great deal of control over how it provisions and uses its infrastructure, and the choices it makes can greatly affect its costs.

One of our Migration Competency and APN partners, TSO Logic, has contributed the article below, demonstrating how important some of these choices can be. In particular, TSO Logic discusses the importance of rightsizing instances. There are many other levers that can be used to reduce costs for workloads in AWS, but TSO’s article demonstrates their deep expertise in this area.

***

Slash data center costs by 36% — move to the cloud and stop paying for resources you’re not using

By Aaron Rallo, Chief Executive Officer, TSO Logic

You want to start capitalizing on cloud. You’ve targeted some workloads you think are good candidates. You’re ready to get started. So where do you begin? Seems simple enough: you glance down at your spreadsheet showing how much you’re paying to run those workloads on-premises now, and look up how much it would cost to run them in AWS. But something seems off here. Cloud doesn’t look much less expensive. For some workloads, cloud actually costs more. What’s happening here? Have you fallen victim to a bunch of cloud hype?

Not at all. More likely, the numbers you’re looking at aren’t telling you the whole story. For one thing, it is difficult to compare apples-to-apples. You may be assuming that the server platforms in the cloud are basically comparable to what you’re running on-premises. In reality, the platforms you’ll use through AWS are latest-generation hardware that can likely do a better job at a lower cost than what you’re currently running.

But the biggest mistake when comparing numbers on a spreadsheet is assuming that your current resources are provisioned appropriately. In reality, most on-premises workloads are overprovisioned: more than 80 percent, according to our new research (http://tsologic.com/resources/economics-of-cloud-migration-2017/).

Data Reveal that Most Instances Are Over-Provisioned

TSO Logic recently conducted a statistical analysis of nearly 105,000 operating system (OS) instances across North America — one of the largest data sets ever assembled for this type of analysis. The results might surprise you:

· Just 16% of OS instances were sized appropriately for their workloads. 84% could run on a smaller footprint, and directly porting these to cloud-based instance types of the same size would be a significant waste.

· However, by right-sizing those instances (porting them to optimally sized AWS resources based on historical analysis of real-world utilization), they could run in the cloud for just $90 million per year. That’s a savings of more than $55 million annually, a 36% cost reduction.

Where do those savings come from? Let’s dig a little deeper.

Inside the Numbers

Here’s a real-world sample from our analysis. Table 1 shows a Dev environment with three OS instances, each running on a dual-core Intel E5 processor at 2,600 MHz. Table 1a shows the details of each of these instances. For example, Instance A is in use 100% of the time. The server’s peak CPU usage is 39.55%, and at peak its memory is pinned, using all 6,145 provisioned MB.

Each of those instances ties back to a specific real-world server, with a specific annual cost in terms of rack units and power consumption. Those figures, along with the hardware’s generation, its capabilities, and the way it has historically been used, factor into the total current operating cost for those instances: $4,633 per year, of which $4,103 per year is the cost of operating the compute alone.

To “direct match” those resources in the AWS cloud (provision in AWS exactly what’s deployed on-premises), you’d need two m4.large instances and an r4.large instance to satisfy those workload levels. Total cost at AWS: $3,741, a savings of just $361 per year. But do you really need to provision in AWS based on old provisioning assumptions? If you look at the instances’ historical real-world utilization, the answer is no.
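
For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python using only the figures quoted above. Treating the compute-only cost as the baseline for the reported $361 savings is our reading of the rounding, not something stated explicitly in the article.

```python
# Back-of-the-envelope check of the "direct match" comparison above.
# All dollar figures are the annual costs quoted in the article; using the
# compute-only cost as the savings baseline is an assumption on our part.

on_prem_total_cost = 4633      # total current operating cost per year
on_prem_compute_cost = 4103    # compute-only portion of that cost per year
aws_direct_match_cost = 3741   # two m4.large + one r4.large per year

savings_vs_compute = on_prem_compute_cost - aws_direct_match_cost
savings_vs_total = on_prem_total_cost - aws_direct_match_cost

print(f"Savings vs. compute-only cost: ${savings_vs_compute}/yr")   # ~$362, close to the reported $361
print(f"Savings vs. total operating cost: ${savings_vs_total}/yr")  # ~$892
```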

· For Instance A, the OS is in use 100% of the time, but at peak it consumes only 39.55% of the processor, with the average at 13.8%. Looking closer at the processor type, we can see that it is from 2008. Considering the improvements in modern processors and the historical workload levels, the optimal instance size is not an m4.large but a t2.xlarge, which costs $970 per year.

· Instance C is currently used 17.3% of the time on-premises and has 4,096 MB of RAM. When it is in use, the CPU averages 5.6% and peaks at 16.8%. This processor dates from 2013. The optimal match for this workload pattern is a t2.medium, costing just $286 per year.

· Note: The “direct match” instance sizing is already less expensive out of the box, and it becomes even more cost-effective when you apply Reserved Instance (RI) optimization.

By “rightsizing” each OS instance (migrating from two m4.large and an r4.large to two t2.xlarge and a t2.medium), the total cost is now $2,588 per year. That’s a 55% cost savings compared to current workload provisioning levels.
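
The selection logic behind that kind of rightsizing can be sketched in a few lines of Python. This is only a minimal sketch, assuming a tiny, invented candidate list with placeholder annual prices and a 20% headroom factor; it is not TSO Logic’s algorithm or actual AWS pricing, but it shows the basic idea of choosing the cheapest instance whose capacity covers the observed peaks rather than mirroring the originally provisioned footprint.

```python
from collections import namedtuple

# Illustrative rightsizing heuristic: pick the cheapest instance type whose
# capacity covers observed peak utilization (plus some headroom), rather than
# mirroring the originally provisioned footprint. Candidate types and annual
# prices below are placeholders, not actual AWS price-list data.

Candidate = namedtuple("Candidate", "name vcpus mem_mb annual_cost")

CANDIDATES = [
    Candidate("t2.medium", 2,  4096,  286),
    Candidate("t2.xlarge", 4, 16384,  970),
    Candidate("m4.large",  2,  8192, 1300),   # placeholder price
    Candidate("r4.large",  2, 15616, 1500),   # placeholder price
]

HEADROOM = 1.2  # keep ~20% spare capacity above the observed peak

def rightsize(provisioned_vcpus, peak_cpu_pct, peak_mem_mb):
    """Return the cheapest candidate that covers observed peak CPU and memory."""
    needed_vcpus = provisioned_vcpus * (peak_cpu_pct / 100.0) * HEADROOM
    needed_mem_mb = peak_mem_mb * HEADROOM
    fits = [c for c in CANDIDATES
            if c.vcpus >= needed_vcpus and c.mem_mb >= needed_mem_mb]
    return min(fits, key=lambda c: c.annual_cost) if fits else None

# Instance C from the walkthrough: 2 provisioned cores, 16.8% peak CPU.
# Its peak memory use isn't reported, so 2,048 MB is an assumed figure.
print(rightsize(provisioned_vcpus=2, peak_cpu_pct=16.8, peak_mem_mb=2048))
# -> the t2.medium candidate, matching the $286/year choice above
```

In practice, a real analysis also weighs memory, storage, software licensing, and purchasing options such as Reserved Instances, as the article describes.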

And that’s just the savings from basic right-sizing of resources. It doesn’t include the discounts available through purchasing Spot Instances, which can result in further cost reductions of up to 80%!

Applying the analysis to each of the nearly 105,000 OS instances in this sample will reveal some that are provisioned appropriately, where the economic case for cloud is not as strong. But for many more — 84% — the organization is paying for substantially more resources than those workloads actually need.

Hard Data Drives Smarter Migrations

When crafting a business case for cloud, spreadsheets won’t give you complete answers. You need to model and compare costs over time, not just in a one-time snapshot, because compute patterns, as well as the AWS options and prices for instances, software, and storage, are always changing. You need to understand differences in hardware: a later-generation single-core Intel processor in AWS, for example, may deliver better price/performance than an older dual-core processor on-premises. And you need to know where and how you’re overprovisioned, as well as what software and storage options are required. Only then can you start making meaningful decisions.
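
As a toy illustration of modeling cost over time rather than as a one-time snapshot, the sketch below projects cumulative spend for the Dev environment above as the workload grows, with cloud unit prices assumed to decline slightly each year. The growth and price-change rates are invented assumptions for illustration, not AWS or TSO Logic figures.

```python
# Toy multi-year projection: compare cumulative spend over time instead of a
# single-point snapshot. All rates below are illustrative assumptions.

YEARS = 3
on_prem_annual = 4633        # current annual operating cost from the example above
cloud_annual = 2588          # rightsized annual cloud cost from the example above
workload_growth = 0.10       # assumed 10% yearly growth in workload demand
cloud_price_change = -0.05   # assumed 5% yearly decline in unit cloud prices

on_prem_total = cloud_total = 0.0
for year in range(YEARS):
    demand = (1 + workload_growth) ** year
    on_prem_total += on_prem_annual * demand
    cloud_total += cloud_annual * demand * (1 + cloud_price_change) ** year
    print(f"Year {year + 1}: cumulative on-prem ${on_prem_total:,.0f}, "
          f"cloud ${cloud_total:,.0f}")
```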

At TSO Logic, we create fine-grained statistical models of all on-premises resources to determine the most cost-effective place to run each workload. Ingesting millions of data points from the environment — including age, generation, software, storage and configuration of all hardware, the OSs they’re running, and each instance’s historical utilization — our platform algorithmically profiles compute patterns. It then uses multiple heuristics including pattern matching to determine the best fit for each workload from thousands of potential cloud options. Using validated information from Intel and AWS, we normalize and compare processing capabilities between various generations of Intel processors and the myriad of hardware, software and storage options in the AWS cloud.
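
A highly simplified sketch of that kind of cross-generation normalization is below. The per-generation performance factors are invented for illustration (the validated Intel and AWS figures the article refers to are not reproduced here); the point is only the shape of the computation: express observed demand in a common unit before matching it against cloud options.

```python
# Simplified cross-generation normalization: express CPU demand observed on
# older hardware in "current-generation core equivalents" so it can be
# compared fairly against modern cloud instance types. The per-generation
# performance factors are invented for illustration only.

# Assumed relative per-core performance vs. a current-generation core.
GENERATION_FACTOR = {2008: 0.45, 2013: 0.70, 2017: 1.00}

def normalized_core_demand(cores, cpu_pct, hardware_year):
    """Observed CPU demand expressed in current-generation core equivalents."""
    return cores * (cpu_pct / 100.0) * GENERATION_FACTOR[hardware_year]

# Instance A: dual-core 2008-era processor, 39.55% peak / 13.8% average CPU.
peak_demand = normalized_core_demand(cores=2, cpu_pct=39.55, hardware_year=2008)
avg_demand = normalized_core_demand(cores=2, cpu_pct=13.8, hardware_year=2008)
print(f"Peak: {peak_demand:.2f} modern-core equivalents, "
      f"average: {avg_demand:.2f}")
# A demand figure like this (plus memory, storage, and software requirements)
# is what gets matched against the catalog of cloud instance options.
```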

By putting aside paper-and-pencil calculations and moving to algorithmic statistical analysis, you can automatically discover the data points needed to understand your real-world needs. And you can create a much more accurate business case for cloud planning that will deliver bottom-line results.

Ready to find out how much you could be saving based on your environment’s real-world needs? Visit www.tsologic.com or talk to your AWS sales rep.

Mark Schwartz

Mark Schwartz is an Enterprise Strategist at Amazon Web Services and the author of The Art of Business Value and A Seat at the Table: IT Leadership in the Age of Agility. Before joining AWS he was the CIO of US Citizenship and Immigration Services (part of the Department of Homeland Security), CIO of Intrax, and CEO of Auctiva. He has an MBA from Wharton, a BS in Computer Science from Yale, and an MA in Philosophy from Yale.