Microsoft Workloads on AWS

Do AWS customers benefit from 64KB block size for SQL Server storage?

In this blog post, we share the results of Microsoft SQL Server performance testing using Amazon FSx for Windows File Server (Amazon FSx) and Amazon Elastic Block Store (Amazon EBS) with block sizes ranging from 16 KB to 64 KB, to prove or disprove the common opinion that SQL Server requires a 64 KB block size for optimal performance.

Introduction

The internet abounds with recommendations to use a 64 KB block size, or allocation unit, when formatting storage drives for SQL Server databases. Many of these suggestions originated in the era of spinning hard drives and SQL Server 2005 and 2008. Since then, storage architecture has evolved, particularly with the advent of SSD drives, which now predominate in high-performance database deployments. Newer versions of SQL Server have also introduced various storage access enhancements since SQL Server 2008.

The rationale behind these recommendations stems from how SQL Server manages storage. The fundamental unit of data storage in SQL Server is an 8 KB page, and eight contiguous data pages make up a 64 KB extent. Lately, some articles and blog posts have argued that most read and write operations in modern releases of SQL Server do not occur in 64 KB extents. Despite this, the 64 KB allocation unit recommendation still appears in Microsoft documentation. Therefore, some customers deploying SQL Server Failover Cluster Instances (FCI) have expressed concerns because Amazon FSx for Windows uses a 16 KB allocation unit when formatting storage.
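The page and extent sizes described above, and how they relate to common NTFS allocation unit (cluster) sizes, can be sketched in a few lines of Python. The sizes are SQL Server's documented constants; the loop simply shows the arithmetic:

```python
# SQL Server storage units (documented sizes).
PAGE_SIZE = 8 * 1024     # 8 KB data page, the fundamental unit of data storage
EXTENT_SIZE = 64 * 1024  # 64 KB extent = 8 contiguous pages

pages_per_extent = EXTENT_SIZE // PAGE_SIZE
print(pages_per_extent)  # 8

# How many NTFS clusters does one 64 KB extent span at each cluster size?
for cluster_kb in (4, 8, 16, 32, 64):
    clusters = EXTENT_SIZE // (cluster_kb * 1024)
    print(f"{cluster_kb} KB cluster: one extent spans {clusters} cluster(s)")
```

The historical 64 KB recommendation amounts to making one cluster hold exactly one extent; the question our tests address is whether that alignment still matters when most I/O touches single 8 KB pages.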

To address these concerns, we conducted a series of SQL Server performance tests to determine whether the allocation unit size affects SQL Server performance. Given the variety of SQL Server workloads, we focused on the common case of OLTP workloads. We used HammerDB, a leading benchmarking and load testing software, to run our tests.

Performance testing approach

To avoid a potential network bottleneck when connecting to Amazon FSx, we hosted our SQL Server instance on a memory-optimized Amazon Elastic Compute Cloud (Amazon EC2) r6i.32xlarge instance, which offers network throughput of 200 Gbps. As the test workload, we generated a HammerDB TPROC-C database, which closely resembles common OLTP databases. To examine the effect of storage performance on SQL Server, we set up this database with 75,000 warehouses, resulting in a database of approximately 8 TB. This size significantly exceeds the 1 TB of RAM available on the r6i.32xlarge instance used for testing. In configuring the HammerDB test script, we selected the “Use all warehouses” option to increase the load on the storage subsystem.
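As a rough sanity check on the database size, a commonly cited rule of thumb is roughly 100 MB of TPROC-C data per warehouse (an approximation, not an exact HammerDB constant):

```python
# Rough TPROC-C sizing estimate; ~100 MB per warehouse is a common
# rule of thumb, not an exact HammerDB figure.
WAREHOUSES = 75_000
MB_PER_WAREHOUSE = 100  # approximate

size_tb = WAREHOUSES * MB_PER_WAREHOUSE / 1024 / 1024
print(f"Estimated database size: {size_tb:.1f} TB")  # ~7.2 TB, in line with the ~8 TB observed
```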

To capture the maximum performance achieved for each storage configuration, we ran multiple tests with varying levels of load, from 362 to 1,024 virtual users. To ensure statistical consistency, we performed three tests at each level of load and averaged the results.
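The per-load-level averaging can be sketched as follows; the NOPM values here are placeholders for illustration only, not our measured results:

```python
from statistics import mean

# Three HammerDB runs per virtual-user level; the values are illustrative only.
nopm_runs = {
    362:  [501_000, 498_000, 503_000],
    512:  [652_000, 649_000, 655_000],
    1024: [701_000, 704_000, 699_000],
}

averaged = {vu: mean(runs) for vu, runs in nopm_runs.items()}
for vu, avg in averaged.items():
    print(f"{vu} virtual users: {avg:,.0f} NOPM (average of 3 runs)")
```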

Testing with different allocation units on Amazon FSx for Windows

When you create an Amazon FSx for Windows file system, it formats the allocated storage using a 16 KB allocation unit, or block size. To implement our testing, we engaged the Amazon FSx service team to create custom Amazon FSx instances with allocation units of 16 KB, 32 KB, and 64 KB. Apart from the difference in block size, we configured all three Amazon FSx instances identically, with 26 TB of storage, 80,000 IOPS, and 2 GB/s throughput. This setup allowed us to attribute any difference in SQL Server performance to the one property that differed: the block size.

Table 1 presents the results of our tests, and Figure 1 shows the corresponding chart. SQL Server performance at each test point is represented by the HammerDB metric of New Orders per Minute (NOPM):

Table 1. SQL Server performance testing results as a function of FSx block size.

Figure 1. SQL Server performance testing results as a function of FSx block size.

The data in Table 1 and Figure 1 indicates that SQL Server performance under a HammerDB-simulated OLTP workload does not depend on the allocation unit size: the results for different block sizes are statistically indistinguishable within the limits of HammerDB repeatability.

To explain these results, we also collected average read and write sizes per disk operation using Amazon CloudWatch metrics reported for Amazon FSx.

Figure 2. Average READ operation size, 16 KB block size.

Figure 3. Average READ operation size, 32 KB block size.


Figure 4. Average READ operation size, 64 KB block size.

As shown in Figures 2, 3, and 4, the average size of a read operation is just over 8 KB. This suggests that most read operations fetch a single 8 KB page, with very few reads bringing in more than one page at a time. The same holds for write operations, as illustrated in Figures 5, 6, and 7.
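The average operation size is derived from two Amazon FSx CloudWatch metrics, DataReadBytes and DataReadOperations (with write counterparts). A sketch of the calculation, using sample sums in place of values you would retrieve from CloudWatch (namespace "AWS/FSx") via boto3's get_metric_statistics:

```python
# Average I/O size = Sum(DataReadBytes) / Sum(DataReadOperations) over the
# same period. The sample sums below are illustrative; in practice they come
# from CloudWatch metrics reported for the Amazon FSx file system.
def avg_op_size_kb(total_bytes: float, total_ops: float) -> float:
    """Average bytes per operation, expressed in KB."""
    return total_bytes / total_ops / 1024

# Illustrative sums over a sampling period:
read_bytes, read_ops = 42_500_000_000, 5_000_000
print(f"average read size: {avg_op_size_kb(read_bytes, read_ops):.1f} KB")  # ~8.3 KB
```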


Figure 5. Average WRITE operation size, 16 KB block size.


Figure 6. Average WRITE operation size, 32 KB block size.


Figure 7. Average WRITE operation size, 64 KB block size.

Testing with different block size on Amazon EBS

We then performed the next logical set of tests, with SQL Server configured to use an Amazon EBS volume for database storage. To make the Amazon EBS results comparable with our Amazon FSx results, we configured the Amazon EC2 instance hosting the SQL Server database with a 26 TB io2 Block Express EBS volume provisioned with 80,000 IOPS. Given the ease of formatting EBS volumes, we expanded our series of tests to include volumes formatted with 4 KB and 8 KB blocks, in addition to the 16, 32, and 64 KB blocks matching our storage formatting for Amazon FSx.

Table 2 and Figure 8 present the results of our performance testing. Because of the high-resolution vertical scale in Figure 8, it may appear that different block sizes result in different performance levels, but this is merely an artifact of the scale. The difference between the results for respective data points is around 0.5%, which is within the precision of HammerDB benchmarking.
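The ~0.5% spread quoted above is the maximum relative difference across block sizes. With hypothetical NOPM numbers standing in for our measurements, it would be computed as:

```python
# Relative spread across block-size results; the values are placeholders,
# not our measured NOPM figures. Keys are block sizes in KB.
results = {4: 698_000, 8: 700_500, 16: 699_800, 32: 701_200, 64: 700_000}

lo, hi = min(results.values()), max(results.values())
spread_pct = (hi - lo) / lo * 100
print(f"max relative spread: {spread_pct:.2f}%")  # ~0.46%
```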

Table 2. SQL Server performance testing results as a function of EBS block size.

Figure 8. SQL Server performance testing results as a function of EBS block size.

As with Amazon FSx, the average read and write operation size was consistently around 9 KB across EBS volumes formatted with various allocation unit sizes.

Conclusion

Based on our test results, we concluded that for a simulated OLTP workload using HammerDB, SQL Server performance does not depend on the allocation unit size of the underlying storage subsystem. This should ease our customers’ concerns regarding Amazon FSx formatting storage with 16 KB blocks.

Even when using Amazon EBS as the storage subsystem for a SQL Server deployment, where customers have full control over formatting the volume, it is unnecessary to format the volume with 64 KB blocks to achieve the best performance for a SQL Server OLTP workload.

While there are other aspects of SQL Server operation, such as backups, that were not included in our testing, our findings suggest that for routine OLTP operations, the storage block size does not affect performance.


AWS can help you assess how your company can get the most out of the cloud. Join the millions of AWS customers that trust us to migrate and modernize their most important applications in the cloud. To learn more about modernizing Windows Server or SQL Server, visit Windows on AWS. Contact us to start your migration journey today.

Alex Zarenin


Alex Zarenin is a Principal Solutions Architect with Amazon Web Services. He works with financial services companies designing and implementing a broad array of technical solutions. With domain expertise in Microsoft technologies, Alex has more than 30 years of technical experience in both the commercial and public sectors. Alex holds a Ph.D. in Computer Science from NYU.

Aaron Adams


Aaron Adams is a Solutions Architect at AWS, based out of New York. He helps enterprise customers in the Northeast U.S. accelerate their adoption of the AWS Cloud by providing architectural guidance to design innovative and scalable solutions. Prior to joining AWS, he spent the majority of his career as a pre-sales systems engineer in the backup and disaster recovery space, helping customers meet compliance and regulatory requirements as well as defend against ransomware.

Suprith Krishnappa C


Suprith Krishnappa C is a Database Consultant with the Professional Services team at Amazon Web Services. He works with enterprise customers, offering technical support and designing customer solutions for database projects, as well as assisting them in migrating and modernizing their existing databases to the AWS Cloud.

Vikas Babu Gali


Vikas Babu Gali is a Senior Specialist Solutions Architect focusing on Microsoft workloads at Amazon Web Services. Vikas provides architectural guidance and technical assistance to customers across different industry verticals, accelerating their cloud adoption.