Common Scenarios and Best Practices

For any high-performance on-premise use case, be sure to deploy an adequate amount (e.g., 64 GB or more) of write log (ZFS "ZIL") and RAM, plus read cache (ZFS "L2ARC"), to absorb the high level of 4K block I/O for best results.
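As a rough sketch of how such devices could be attached to an existing pool (the pool name `tank` and the device paths are placeholders, not from the source):

```shell
# Sketch: attach a mirrored write log (ZIL/SLOG) and a read cache (L2ARC)
# to an existing pool. Pool name "tank" and device paths are examples only.

# Add a mirrored pair of SSDs as the dedicated ZFS intent log.
# Mirroring protects in-flight synchronous writes if one SSD fails.
zpool add tank log mirror /dev/disk/by-id/ssd-log-a /dev/disk/by-id/ssd-log-b

# Add an SSD as L2ARC read cache. Cache devices need not be mirrored;
# losing one only costs performance, not data.
zpool add tank cache /dev/disk/by-id/ssd-cache-a

# Verify the new log and cache vdevs.
zpool status tank
```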

For workloads with predominantly small (less than 128K) reads and writes, making use of RAM, write log, and read cache is critical to achieving maximum throughput, because ZFS performs block I/O in 128K chunks by default (the dataset recordsize), while Windows (NTFS) defaults to 4K blocks.
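Because ZFS reads and writes whole records, matching the dataset recordsize to the workload's dominant I/O size can also reduce read-modify-write overhead. A minimal sketch, assuming hypothetical dataset names:

```shell
# Sketch: match dataset recordsize to the dominant I/O size.
# Dataset names are examples; adjust to your own pool layout.

# Small-block (e.g., 4K) workloads: smaller records avoid reading and
# rewriting a full 128K record for every small write.
zfs set recordsize=4K tank/vmstore

# General file shares can keep the 128K default, which favors
# sequential throughput for larger files.
zfs set recordsize=128K tank/shares

# Confirm the effective values.
zfs get recordsize tank/vmstore tank/shares
```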

Windows Workloads

One approach that works well for a broad range of applications is a combination of SAS and SATA drives, with SSDs for read cache and write log (always configure write logs as mirrored pairs in case a drive fails). SATA drives provide very high densities in a relatively small footprint, which is perfect for user mass storage, Windows profiles, Office files, MS Exchange, and so on. SQL Server typically demands SAS and/or SSD for best results, due to the high transaction rates involved. Exchange can be relatively heavy on I/O while it is starting up, but since it reads nearly everything into memory, high-speed caching does little to improve run-time performance after initial startup.
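One way this mixed layout could translate into datasets (names are illustrative; the 8K figure reflects SQL Server's standard 8K page size, an assumption not stated in the source):

```shell
# Sketch: per-workload datasets on a mixed SAS/SATA deployment.
# Pool and dataset names are examples only.

# Bulk user storage and profiles on SATA-backed capacity: defaults are fine.
zfs create tank/profiles

# SQL Server data: SQL Server writes 8K pages, so an 8K recordsize
# limits read-modify-write amplification on the database files.
zfs create -o recordsize=8K tank/sql

# Transaction logs are written sequentially; a larger recordsize works well.
zfs create -o recordsize=128K tank/sqllog
```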

Virtual Desktops

Virtual desktops benefit greatly from ample cache memory, level-2 caching, and high-speed storage, because performance lags quickly become visible as users launch applications, open and save files, and so on. Caching also helps alleviate the "login storms" and "boot storms" that occur when a large number of users attempt to log in simultaneously first thing in the morning. For these situations, a combination of local caching on each VDI server with appropriate caching for user profiles and applications can yield excellent results.
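For a VDI image store specifically, the per-dataset cache properties could be set so that both data and metadata are eligible for RAM and SSD caching; a hedged sketch with an assumed dataset name:

```shell
# Sketch: cache-related properties for a VDI image dataset.
# Dataset name "tank/vdi" is an example.

# Cache both data and metadata in RAM (ARC) and on SSD (L2ARC) so that
# boot and login storms are served from cache rather than spinning disks.
zfs set primarycache=all tank/vdi
zfs set secondarycache=all tank/vdi

# Watch ARC/L2ARC hit rates during a morning login storm
# (arc_summary ships with most OpenZFS distributions).
arc_summary
```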