SQL Server on Azure VM with storage pools

Running SQL Server workloads on an Azure Virtual Machine (VM) is one of the quickest and easiest ways to lift and shift those workloads into the Azure cloud. The expectation is that the workload performs at least as well in Azure as it did in the on-premises environment. However, the performance of a database running in Azure depends on many factors, such as the VM size, additional features, and the disk configuration.

You should optimize both cost and performance when running SQL Server on an Azure VM. If your workload is demanding, consider all of the recommended optimization options.


Why is storage performance critical for SQL Server?

SQL Server performance depends heavily on the I/O subsystem. Storage performance is measured in IOPS (input/output operations per second) and throughput (MB/sec). Unless the database fits entirely into physical memory (RAM), SQL Server constantly brings database pages in and out of memory (the buffer pool). Data files and tempdb files are accessed randomly by the SQL Server engine, whereas log files are accessed sequentially under normal circumstances.
If the underlying I/O subsystem is slow, the application and the end users may experience performance issues.

There are performance counters available to validate the IOPS and throughput required for your SQL Server databases:
 * \LogicalDisk\Disk Reads/Sec (read IOPS)
 * \LogicalDisk\Disk Writes/Sec (write IOPS)
 * \LogicalDisk\Disk Read Bytes/Sec (read throughput requirements for the data, log, and tempdb files)
 * \LogicalDisk\Disk Write Bytes/Sec (write throughput requirements for the data, log, and tempdb files)
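Once you have captured these counters (for example with Performance Monitor or typeperf) during representative peak hours, a small script can summarize the samples into sizing requirements. A minimal sketch; the counter values below are made-up examples, not from a real capture:

```python
def storage_requirements(samples):
    """Summarize perf-counter samples (one value per collection interval)."""
    return {
        "avg": sum(samples) / len(samples),
        "peak": max(samples),
    }

# Illustrative \LogicalDisk\Disk Reads/Sec samples taken during peak load
read_iops = [1800, 2400, 3100, 2900, 2200]
req = storage_requirements(read_iops)
print(f"Average read IOPS: {req['avg']:.0f}, peak: {req['peak']}")
```

Size the storage for the peak requirement, not the average, so that the busiest periods are still served within latency targets.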


Microsoft guidelines for SQL Server disk performance:

  • Place data, log, and tempdb files on separate disk drives
  • Set host caching to read-only for data disks
  • Do not use host caching for log file disks
  • Provision disks in the same region as the SQL Server VM
  • Monitor and determine the storage bandwidth and latency requirements for SQL Server data, log, and tempdb files before choosing Azure disk types
  • Plan for the highest uncached IOPS available and use host caching for data reads


Storage pools and SQL Server performance:
As mentioned above, one of the critical aspects of getting optimal performance is configuring the disks attached to the Azure VM.
The SQL Server data and log disks are remote storage disks that can be combined into storage pools. A storage pool is a collection of physical disks.
A storage pool enables better performance, storage aggregation, elastic capacity expansion, and delegated administration.


From a storage pool, you can create one or more virtual disks. These virtual disks appear to the Windows OS as regular disks, from which you can create one or more formatted volumes. Each volume can be configured with a size, drive letter, file system, allocation unit size, and so on.

When selecting disks, note that each Premium SSD size comes with different performance characteristics, mainly throughput (MB/sec) and provisioned IOPS.

With the information published for each disk type and size, we can do a simple calculation to see what benefit a storage pool offers over a single large disk.

If you need 8 TB of storage for your SQL Server data files, instead of a single 8 TB disk you could create a storage pool with 4 x 2 TB (P40) disks and get the combined performance of the disks:
– With a single 8 TB (P60) disk: 16,000 IOPS and 500 MB/sec throughput
– With 4 x 2 TB (P40) disks: 7,500 x 4 = 30,000 IOPS and 250 x 4 = 1,000 MB/sec throughput
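The comparison above is just multiplication, so it is easy to sketch as a small helper. The performance figures come from the numbers quoted in this article; check the current Azure managed disk documentation before relying on them:

```python
# Premium SSD figures as quoted in the text above (verify against current docs).
DISKS = {
    "P40": {"size_tb": 2, "iops": 7_500, "mbps": 250},
    "P60": {"size_tb": 8, "iops": 16_000, "mbps": 500},
}

def pool_performance(disk_type, count):
    """Combined capacity, IOPS, and throughput of a striped storage pool."""
    d = DISKS[disk_type]
    return {
        "size_tb": d["size_tb"] * count,
        "iops": d["iops"] * count,
        "mbps": d["mbps"] * count,
    }

single = pool_performance("P60", 1)  # 8 TB, 16,000 IOPS, 500 MB/sec
pool = pool_performance("P40", 4)    # 8 TB, 30,000 IOPS, 1,000 MB/sec
```

Both options give the same 8 TB capacity, but the pool nearly doubles the IOPS and doubles the throughput.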

As we have seen, a storage pool with multiple disks can achieve better performance than a single large disk of the same capacity.

However, there are some considerations to keep in mind when designing storage pools in Azure VMs.


Considerations when choosing storage pools in Azure:

1. Be sure that your storage configuration fits within the IOPS and throughput limits imposed by the selected VM size. It does not matter how fast your storage pool is if you are hitting the VM's IOPS and throughput caps; if you exceed them, your I/O will be throttled and the workload will not perform as expected.
2. Be sure that you have selected the correct number of disks and disk types.
There is also a limit on the number of disks you can attach per VM, and some VM sizes do not support Premium SSDs or disk caching.
3. Use separate storage pools for data and log files. This is needed to get optimal performance from read-only caching on the data file storage pool.
4. Make sure you have selected the number of columns correctly. When you create a storage pool, the number of columns is set to the number of disks added at creation time. Every time you expand a virtual disk in the storage pool, you need to add disks in multiples of the number of columns. As noted above, there is a limit on the number of disks you can attach per Azure VM.
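Considerations 1 and 4 lend themselves to simple sanity checks. A minimal sketch, assuming the VM caps I/O by throttling; the VM limits used below (25,600 IOPS / 384 MB/sec) are illustrative placeholders, so always look up the published limits for your actual VM size:

```python
def effective_performance(pool_iops, pool_mbps, vm_iops_cap, vm_mbps_cap):
    """The VM size caps whatever the storage pool can deliver."""
    return min(pool_iops, vm_iops_cap), min(pool_mbps, vm_mbps_cap)

def valid_expansion(disks_to_add, columns):
    """Disks added when extending a pool's virtual disk must be a
    positive multiple of the column count chosen at creation time."""
    return disks_to_add > 0 and disks_to_add % columns == 0

# A 4 x P40 pool (30,000 IOPS / 1,000 MB/sec) behind illustrative VM caps:
print(effective_performance(30_000, 1_000, 25_600, 384))  # capped by the VM

# A 4-column pool can grow by 4 or 8 disks, but not by 2:
print(valid_expansion(4, 4), valid_expansion(2, 4))
```

In this example the VM, not the pool, is the bottleneck, which is exactly why consideration 1 asks you to check the VM limits before sizing the pool.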


More info on Azure VM sizes:
We will cover how we can create a storage pool in a separate blog.



It is critical to design your storage subsystem carefully. By using storage pools correctly, you can enhance your disk subsystem's performance. The storage subsystem has to be properly planned and designed well before you migrate or run your workload on an Azure VM.