

On AWS, SoftNAS SNAP HA™ is designed to operate within the Virtual Private Cloud (VPC). VPCs can be as simple as a single subnet, with or without a VPN security gateway, or as complex as public/private compartmentalized subnets, as depicted in the figures herein. In the first image below, we see a VPC configured to operate across two availability zones (AZ), with separate private subnets. SoftNAS Cloud® controllers are placed into the private subnet for Virtual IP address routing purposes.

About Virtual IP Addresses

SoftNAS Cloud® storage is normally not accessible from the public Internet. With a Virtual IP setup, none of your IP addresses is public-facing, increasing the security of the deployment. A Virtual IP address is configured with a security group setting that restricts its access to only the internally-routable, private IPs assigned to the VPC; e.g., in this example, only EC2 instances within the VPC's internal 10.0.0.0/16 private network are routable to the Virtual IP.

Why use a Virtual IP? Virtual IPs (VIPs) are completely private, cross-zone routable IP addresses available in the AWS VPC environment. In practice, VIPs add very little latency or other overhead to storage traffic, and provide routing redundancy across the zones, all without the risks inherent in using a public-facing IP. For this reason, Virtual IPs are our recommended best practice.
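As a sketch of the kind of security group rules involved, the commands below restrict the common NAS ports to the VPC's private CIDR. This is an illustration only; the security group ID is a hypothetical placeholder, and the exact ports required should be taken from the SoftNAS documentation for your deployment.

```shell
# Illustrative only: limit NAS ports on the VIP's security group to the
# VPC's private CIDR (the group ID below is a hypothetical placeholder).
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 2049 \
    --cidr 10.0.0.0/16          # NFS, reachable only from inside the VPC
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 445 \
    --cidr 10.0.0.0/16          # CIFS/SMB
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 3260 \
    --cidr 10.0.0.0/16          # iSCSI
```

Because no rule references 0.0.0.0/0, traffic from outside the VPC's private network cannot reach the Virtual IP at all.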

AWS Virtual IP Cross-Availability Zone Architecture Overview


Please refer to Figure 1 shown below for the remaining discussion. This drawing depicts a typical HA deployment but is not the only possible design. In fact, SoftNAS SNAP HA™ can be deployed with dual controllers located within a single AZ on a VPC (there is no requirement to split controllers across AZs, but it is a recommended best practice for maximum availability).

We see a VPC created in a /16 network in AWS US East (Virginia) data center, with subnets allocated in Zone A and Zone B. This topology provides the best overall redundancy and availability within the AWS AZ architecture.



Two SoftNAS Cloud® controller EC2 instances are deployed - one per AZ. If optional private subnets are configured in one or more AZs, they will also have access to the Virtual IP (VIP) for NAS client storage access via the NFS, CIFS and iSCSI protocols.

The drawing shows SNAP HA™ replication traffic flowing from Controller A to Controller B. This traffic is allocated to interface 0. Interface 0 is also used for administration using the SoftNAS StorageCenter™ GUI. Block replication keeps a warm copy of the data from node A on node B, in case a failover is necessary.

The drawing shows two orange arrows emanating from an orange and white circle, which represents the Virtual IP. The black lock symbol represents the EC2 Security Group associated with the Virtual IP. The shadowed orange arrow represents re-routed storage requests flowing to Controller B after an automatic failover or manual takeover. This Virtual IP must be in a completely separate CIDR block from the two instance IPs.

When an automatic failover or manual takeover occurs, NAS traffic is re-routed via the Virtual IP from Controller A to Controller B, as indicated by the shadowed arrow in the diagram above. When a Virtual IP switches over from one controller to another, NAS client traffic is rapidly re-routed to the new controller. NAS clients typically experience a brief switch-over delay of up to 20 seconds or so, and automatically reconnect after the switch-over event takes place.
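One way to picture the re-routing is as a VPC route-table update that points the Virtual IP's CIDR at the surviving controller's storage interface. The sketch below is an assumption for illustration, not the product's literal mechanism; the route-table ID, ENI ID, and VIP address are hypothetical placeholders, and SNAP HA™ performs this step automatically.

```shell
# Illustrative only: after a failover, the route for the Virtual IP
# (which lives outside the VPC's own CIDR block) is pointed at
# Controller B's storage interface. All IDs are hypothetical.
aws ec2 replace-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 172.16.0.100/32 \
    --network-interface-id eni-0fedcba9876543210
```

This also illustrates why the Virtual IP must sit in a separate CIDR block from the instance IPs: a route-table entry can only redirect addresses that fall outside the VPC's locally-routed range.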

AWS VIP Cross-Availability Zone Network Design


Each SoftNAS Cloud® controller has two NICs assigned - interface 0 (the default) and interface 1 (added during EC2 instance configuration).

  • Admin and Replication, Interface 0 - the first (default) NIC is used for SoftNAS StorageCenter™ access and SnapReplicate™ data replication across controllers.

  • SoftNAS Cloud® Storage, Interface 1 - the second NIC is dedicated to NAS storage traffic and is used for Virtual IP routing of storage-related traffic (NFS, CIFS, iSCSI).
    Note: in the following diagrams, the IP addresses shown are for illustration purposes only; actual IP addresses will be assigned by AWS.
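The second interface described above is a standard EC2 elastic network interface. A hedged sketch of adding it via the AWS CLI (subnet, ENI, and instance IDs are hypothetical placeholders):

```shell
# Illustrative only: create a second NIC in the controller's subnet and
# attach it as interface 1 (all resource IDs are hypothetical).
aws ec2 create-network-interface \
    --subnet-id subnet-0123456789abcdef0 \
    --description "SoftNAS storage interface"
aws ec2 attach-network-interface \
    --network-interface-id eni-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device-index 1          # interface 1, dedicated to NAS traffic
```

Device index 1 corresponds to the dedicated storage NIC; index 0 remains the default admin/replication interface.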




The remaining EC2 instances constitute NAS clients; that is, EC2 instances that connect using NFS, CIFS or iSCSI protocols to access NAS services across the private network. Although only two AZs are shown in these diagrams, NAS clients can access HA NAS services from any zone within the region allowed access to the Virtual IP.
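From a NAS client's point of view, storage is always addressed through the Virtual IP, regardless of which controller currently holds it. A minimal sketch, assuming a hypothetical VIP address and export/share names:

```shell
# Illustrative only: mount NFS and CIFS shares through the Virtual IP
# (the VIP address, export path, and share name are hypothetical).
sudo mount -t nfs 172.16.0.100:/export/pool1/vol1 /mnt/nas
sudo mount -t cifs //172.16.0.100/vol1 /mnt/nas-cifs -o username=admin
```

Because the mount targets the VIP rather than either controller's instance IP, the same mount keeps working after a failover once the clients reconnect.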

Secure Administrative Access in VPC

Without a public-facing IP, the only way to access the Virtual IP in the VPC is by connecting to the private subnet on which it is based. There are multiple ways to configure secure administrative access to the SoftNAS SNAP HA™ storage controllers:

  1. VPN - this is the most secure stand-alone solution, and a recommended minimum best practice for limiting access to the private IPs of each SoftNAS Cloud® controller. In this case, use DNS to assign a common name to each controller (e.g., "nas01.localdomain.com", "nas02.localdomain.com"), making routing to each SoftNAS Cloud® controller convenient for administrators.

  2. Admin Desktop - an even more secure approach is to combine VPN access with an administrator's desktop (sometimes referred to as a jumpbox), typically running Windows and accessed via RDP. This secure admin desktop adds another layer of authentication, namely Active Directory (or local account) authentication. Once an administrator has gained secure, encrypted access to the Admin Desktop, she opens a web browser to connect to the SoftNAS StorageCenter™ controller.


HA Controller in AWS


On AWS, shared data stored in highly redundant S3 storage is used as the HA controller. A single S3 bucket is created in the same region as the VPC.

HA controller bucket names in S3 are of the form "hacontroller-<haUUID>", where haUUID is a unique ID created by SNAP HA™ and assigned to represent a customer's HA cluster; e.g., "hacontroller-02c8a87d-8af7-4295-962e-8313e1ff6c7d" is an HA controller bucket stored on S3. The HA controller bucket occupies very little space.