For optimal SoftNAS® performance, adhere to the following networking considerations and best practices:
As with any storage system, NAS performance is a function of many combined factors:
- Network bandwidth available; e.g., 1 GbE vs. 10 GbE vs. InfiniBand
- Network QoS (whether the network is dedicated, shared, local vs. remote, EC2 provisioned IOPS, etc.)
- Network latency (between workload VMs and the SoftNAS® VM)
- MTU settings in VM host software and switches
- Network access protocol (NFS, CIFS/SMB, iSCSI, direct-attached Fibre Channel)
- Use of VLANs to separate storage traffic from other network traffic
A minimum of 1 gigabit networking is required and provides throughput up to 120 MB/sec (line speed of 1 GbE); 10 GbE offers 750+ MB/sec throughput. To reduce overhead for intensive storage I/O workloads, it is highly recommended to configure the VMware hosts running SoftNAS® and the heavy I/O workloads with "jumbo frames" (MTU 9000). It is usually best to allocate a separate vSwitch for storage, backed by dual physical NICs, with the VMkernel interfaces configured for MTU 9000 (be sure to configure the physical switch ports for MTU 9000 as well). If possible, isolating storage onto its own VLAN is also a best practice.
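On a standard vSwitch, the MTU changes described above can be sketched from the ESXi command line. The vSwitch and VMkernel interface names below (vSwitch1, vmk1) are placeholders; substitute the names used in your environment.

```shell
# Set MTU 9000 on the storage vSwitch (standard vSwitch)
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Set MTU 9000 on the VMkernel interface that carries storage traffic
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Verify both settings
esxcli network vswitch standard list
esxcli network ip interface list
```

Remember that the physical switch ports in the path must also be set to MTU 9000, or jumbo frames will be fragmented or dropped.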
When using dual switches for redundancy (usually a good idea and best practice for HA configurations), be sure to configure the VMware host vSwitch for Active-Active operation and test switch port failover prior to placing SoftNAS® into production (as with any other production VMware host).
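As a sketch, an Active-Active teaming policy on a standard vSwitch can be set from the ESXi shell; the vSwitch and uplink names (vSwitch1, vmnic1, vmnic2) are assumed placeholders.

```shell
# Make both physical uplinks active so the team operates Active-Active
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch1 --active-uplinks=vmnic1,vmnic2

# Review the resulting failover policy before failover testing
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch1
```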
Choose static IPv4 addresses for SoftNAS®. If the plan is to assign storage to a separate VLAN (usually a good idea), ensure the vSwitch and physical switches are properly configured and available for use. For VMware-based storage systems, SoftNAS® is typically deployed on an internal, private network. Access to the Internet from SoftNAS® is required for certain features to work; e.g., Software Updates (which download updates from the softnas.com site), NTP time synchronization (which keeps the system clock accurate), etc.
From an administration perspective, allow browser-based access from the internal network only. Optionally, use SSH for remote shell access. To completely isolate SoftNAS® from both internal and external users, restrict access to the VMware console only (launch a local web browser on the graphical console's desktop). Add as many network interfaces to the SoftNAS® VM as the VMware environment permits.
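One way to enforce internal-only administrative access is with host firewall rules. A minimal sketch using iptables, assuming a hypothetical internal management subnet of 10.0.0.0/24 (a placeholder for your own range):

```shell
# Allow the web UI (HTTPS) and SSH only from the internal subnet,
# and drop those ports for everyone else
iptables -A INPUT -p tcp --dport 443 -s 10.0.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j DROP
iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```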
Prior to installation, allocate a static IP address for SoftNAS® and be prepared to enter the usual network mask, default gateway and DNS details during network configuration. By default, SoftNAS® is configured to initially boot in DHCP mode (but it is recommended to use a fixed, static IP address for production use).
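On a RHEL/CentOS-style system, the static address is typically defined in an interface configuration file. The sketch below assumes eth0 and uses placeholder addresses; substitute the values allocated for your network.

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 (all values are placeholders)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.0.50
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
DNS1=10.0.0.10
```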
At a minimum, SoftNAS® requires one NIC assigned for management and storage. For best results, provide separate NICs for management/administration, storage I/O and replication I/O.
A fast NAS response to requests isn't the only factor governing how well workloads perform. Network design, available bandwidth and latency are also important. For example, for high-performance NAS applications, use a dedicated VLAN for storage wherever possible. Configuring all components in the storage path for MTU 9000 greatly increases throughput by reducing the effects of round-trip network latency and reducing the interrupt load on the NAS server itself. Interrupts are often overlooked as a source of overhead because they aren't readily measured, but their effects can be significant, both on the NAS server and on workload servers. Configure any NAS requiring the highest level of performance for MTU 9000, along with the switch ports used between the NAS host and workload servers.
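Jumbo-frame configuration is easy to get wrong at a single hop, so verify it end to end. A ping with the Don't Fragment bit set and an 8972-byte payload (the 9000-byte MTU minus 20 bytes of IP header and 8 bytes of ICMP header) only succeeds if every device in the path accepts jumbo frames; nas-host below is a placeholder.

```shell
# From a Linux client: fails with "message too long" if any hop lacks MTU 9000
ping -M do -s 8972 -c 4 nas-host

# From an ESXi host shell, the equivalent check is:
# vmkping -d -s 8972 nas-host
```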
- Network Throughput - A single 1 GbE network segment will produce at most 120 MB/sec of throughput under ideal conditions. 10 GbE has been observed to deliver up to 1,000 MB/sec of throughput.
- Network Protocol - Use NFS, CIFS or iSCSI? The iSCSI protocol often provides the best throughput and increased resiliency through multipathing. Just be aware of the added complexities associated with iSCSI.
- For VM-based workloads - it's hard to go wrong with NFS or iSCSI. For user data (e.g., file shares), CIFS is more common because of the need to integrate natively with Windows, domain controllers and Active Directory when using a NAS as a file server.
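For NFS-based workloads, a typical Linux-side mount looks like the sketch below. The hostname and export path are assumed placeholders, and the large rsize/wsize values suit high-throughput sequential I/O.

```shell
# Mount a SoftNAS NFS export with 1 MB read/write transfer sizes;
# "hard" retries indefinitely rather than returning I/O errors on outages
mkdir -p /mnt/nas
mount -t nfs -o rw,hard,rsize=1048576,wsize=1048576 \
    nas-host:/export/pool1/vol1 /mnt/nas
```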
Thick-provisioning VMware datastores provides increased write performance and should be preferred over thin-provisioning of VMDKs when optimal performance is required.
Regardless of design, verify each implementation by running performance benchmarks to validate the throughput expected before going into production.
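A quick sequential-write sanity check can be done with dd before moving to a fuller benchmark tool such as fio. This is a rough sketch, not a complete benchmark; TARGET defaults to a local path here and should be pointed at the NAS-backed mount being validated.

```shell
# Write 64 MiB of zeros and report throughput; conv=fsync forces the data
# to be flushed so the reported rate reflects writes actually completing.
TARGET="${TARGET:-/tmp/nas-benchfile}"   # point this at the NAS mount
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync
SIZE=$(stat -c %s "$TARGET")
echo "wrote $SIZE bytes"
rm -f "$TARGET"
```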