Using a virtual Synology in a scale-out distributed storage architecture

I’ve recently finished upgrading the Home Datacenter (#HomeDC) to vSphere 6.0, with four hosts running VSAN 6.0 and dual 10GbE networking on each host.

vsan

Even running a few large virtual machines on the VSAN datastore, like VDP 6.0 with a 4TB disk, I found myself with a lot of spare storage. I’ve already invested in the SAS disks (Seagate Enterprise Capacity 4TB SAS 7200rpm) backing the VSAN datastore, so the budget for replacing the aging Synology DS1010+ is gone.

I’ve recently studied various reviews of the Synology DS2015xs, but found its CPU a bit lacking to drive the dual 10GbE SFP+ links, and the Synology DS3615xs is a bit expensive. So why not leverage the 10GbE NICs in my management cluster for ultra-fast connections? The fast CPUs on my hosts are a nice bonus too. The biggest advantage is “cheap” 10GbE file server connectivity.

The rest of this blog post ventures into a grey zone… it’s #unsupported.

Let me show you the goods first.

virtual Synology DS3615xs running on VSAN datastore

The concept is to create a storage appliance that leverages the VSAN datastore and its read/write acceleration, and provides a flexible structure where you can grow the storage as needed, or create temporary storage while migrating from one Synology to a newer one, all running on a vSphere host. It’s a concept many other companies implement with their Virtual Storage Appliances.

I’m going to use the XPEnology operating system, which is based on the Synology DiskStation Manager (DSM).

  • In the design and implementation I will describe here, the virtual Synology has an 8TB disk. The appliance is not doing any RAID functions on this disk, as it is already protected on the VSAN datastore by a Number of failures to tolerate of 1 policy (FTT=1). A rough command-line sketch of the two policy options follows this list.
  • Another way would be to create two or four virtual disks with a Number of failures to tolerate of 0, and do software RAID inside the appliance.
  • A third way could be to use four physical disks and two SSDs on a host, create RDM mappings, present all these disks to the virtual Synology appliance, do software RAID on the disks, and use the SSDs for caching (SSD cache). This virtual storage appliance would not be able to move to another host using vMotion, but you could mitigate this restriction with Synology High-Availability.
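
For illustration, here is a rough esxcli-level sketch of those two policy variants. In my case the policies are defined as named VM Storage Policies in the Web Client; the commands below only change a host's default VSAN policy, so treat them as a reference for the rule syntax rather than as the way this appliance was actually configured:

  # Show the current default VSAN policies on this host (ESXi 6.0)
  esxcli vsan policy getdefault

  # Option 1: let VSAN protect the virtual disk (Number of failures to tolerate = 1)
  esxcli vsan policy setdefault -c vdisk -p '(("hostFailuresToTolerate" i1))'

  # Option 2: no VSAN redundancy, software RAID handled inside the appliance instead
  esxcli vsan policy setdefault -c vdisk -p '(("hostFailuresToTolerate" i0))'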

To build the virtual Synology you will need to retrieve the latest copy of the XPEnology DS3615xs files. You are looking for XPEnoboot_DS3615xs_5.1-5022.3.vmdk or a more recent version. Each version can have its own deployment process; the process described below uses the XPEnoboot_DS3615xs_5.1-5022.3.vmdk version.

There is also a very active community with lots of contributions and interesting links in the XPEnology forums.

1) Creating the vSynology

I’m going to say upfront that you will need to upload the XPEnoboot_DS3615xs_5.1-5022.3.vmdk twice to the virtual storage appliance: once for the initial install, which formats all the disks of the appliance (including the boot vmdk), and again afterwards so the appliance can boot.

We start by creating a new Virtual Machine.

01 - Create new VM

We give it a name and place it in a Cluster.

02 - Name VM

And we store the virtual machine and its configuration files on an existing datastore. I have selected my vsanDatastore.

04 - Select VSAN Datastore

We define the hardware compatibility of the virtual machine and select the Guest OS. We are going to use Linux, Other 3.x Linux (64-bit).

06 - Select Guest OS Linux 3.2

I have selected two vCPUs and 8GB of memory. Because my appliance won’t do any software RAID, two vCPUs are more than enough.

07 - Base Hardware

I have added a second VMXNET3 network interface, which I put on a dedicated 10GbE Distributed Port Group. So eth0 goes out using uplink1 and eth1 goes out using uplink2. You see these changes in the summary of the appliance below.

08 - ds3615xs Hardware Summary
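
If you want to double-check which uplink each vNIC actually lands on once the appliance is powered on, the ESXi shell can show it (the world ID below is specific to your VM and environment):

  # Find the appliance's world ID
  esxcli network vm list

  # List its ports; the "Team Uplink" column shows which uplink each vNIC uses
  esxcli network vm port list -w <worldID>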

2) Changing the Boot disk

We can now go back into the appliance settings and edit it. We remove the boot disk and delete it from the datastore. (Yeah, I’m missing a screenshot of this step.)

We then use the datastore browser to upload the XPEnoboot_DS3615xs_5.1-5022.3.vmdk into the appliance folder for the first time.

09 - Upload XPE vmdk on vsanDatastore
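
If you prefer the command line over the datastore browser, a rough equivalent would be to copy the boot image to the host and import it next to the VM’s files (hostname and folder names below are examples from my setup, adjust them to yours):

  # Copy the XPEnoboot image to the ESXi host
  scp XPEnoboot_DS3615xs_5.1-5022.3.vmdk root@esx01:/tmp/

  # Import it as a virtual disk into the appliance folder on the VSAN datastore
  vmkfstools -i /tmp/XPEnoboot_DS3615xs_5.1-5022.3.vmdk \
      /vmfs/volumes/vsanDatastore/ds3615xs/XPEnoboot_DS3615xs_5.1-5022.3.vmdk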

And we add this existing virtual disk to the appliance.

10a - Select the XPE vmdk

The new boot disk is attached as an IDE disk on port IDE(0:0).

10b - Add XPE vmdk as IDE0-0

In the following screenshot, I’m adding the main disk to the storage appliance. I’m creating an 8TB (8192GB) virtual disk and selecting my VM Storage Policy “VSAN High Perf”. The “VSAN High Perf” policy is defined with a Number of failures to tolerate of 1 and a Number of disk stripes per object of 2.

11 - XPE non-persistent and 8TB
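
As a point of reference, the same 8TB data disk could also be created from the ESXi shell; note that assigning the “VSAN High Perf” storage policy would still be done through the Web Client as shown above, and the path below is an example:

  # Create the 8TB data disk in the appliance folder on the VSAN datastore
  vmkfstools -c 8192g /vmfs/volumes/vsanDatastore/ds3615xs/ds3615xs_1.vmdk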

Now you can start the appliance. Look closely at the IP addresses of the appliance and the MAC addresses; you will want to configure the IP addresses on the proper NIC later.

12a Start VM and check eth0 eth1

Using the Synology Assistant, you can now see your appliance appear on the network.

12b - Use Synology Assistant to find new DS3615xs

Point your browser at the IP address shown in the Synology Assistant to do the initial install.

12c - Open the Web Assistant

We are installing the DSM using the Manual install.

12d - Install DiskStation Manager

Here you upload the DSM 5.1-5022 .pat file that you retrieved from the Synology Download Center under the DS3615xs model.

12e - Select Manual install and select DS3615xs 5022 pat

It will now warn you that it is going to erase all partitions on the attached disks of the appliance, including the XPEnoboot disk.

12f - Format disks with 5022.3 PAT

Accordingly, the expected behavior now is that the boot disk has been wiped and won’t boot.

13 - Both disk formatted.

Stop the appliance and, using the datastore browser, delete the XPEnoboot disk. Then upload the XPEnoboot_DS3615xs_5.1-5022.3.vmdk into the folder a second time.

14 - Erase XPEnoboot vmdk and replace with original one
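
For reference, the same clean-up can be done from the ESXi shell; the VM id and paths below are examples, and the copy in /tmp assumes you kept the image from the earlier upload:

  # Find the VM id and power the appliance off
  vim-cmd vmsvc/getallvms
  vim-cmd vmsvc/power.off <vmid>

  # Delete the wiped boot disk and import a fresh copy of the XPEnoboot image
  vmkfstools -U /vmfs/volumes/vsanDatastore/ds3615xs/XPEnoboot_DS3615xs_5.1-5022.3.vmdk
  vmkfstools -i /tmp/XPEnoboot_DS3615xs_5.1-5022.3.vmdk \
      /vmfs/volumes/vsanDatastore/ds3615xs/XPEnoboot_DS3615xs_5.1-5022.3.vmdk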

3) Configuration using Synology Assistant

You can now restart the appliance. You will notice that the second time the appliance boots, some of the messages, like the IP addresses, are not there anymore. Using the Synology Assistant, you will see that the DHCP function hasn’t started and the IP addresses are now 169.254.x.y link-local addresses.

Select the proper network interface in the Synology Assistant using the MAC address, and select Setup. If you don’t select the proper MAC address you might need to swap IP addresses later, so save yourself some time and select the eth0 one.

15 - Reboot DS3615xs and use Synology Assistant

The Synology Assistant wizard will now start.

16 - Synology Assistant

The admin password at this point is blank; don’t enter any value. You can change the password later.

17 - Synology Assistant - Blank password

Enter the appliance network settings.

18 - Synology Assistant - Final Network settings for eth0

Refreshing the Synology Assistant shows that you have the proper IP address now.

19 - Now ready for Web configuration

Time to connect to your newly deployed appliance.

20 - Configuration

You are now only a few steps away from using your storage appliance.

21 - Web Config

It is now time to change your admin account password.

22 - Server name

We can now update the DSM 5.1-5022 version to the latest 5.1-5022-5 version. Depending on the CPU of your host, you may never have seen a Synology reboot so fast.

23 Patch DSM

If you intend to use this virtual Synology appliance to store data, I recommend you run some conditioning tests first to see how it behaves in your environment.

I like the flexibility of the virtual Synology appliance:

  • Adding a temporary repository for a data migration becomes easy if you have a lot of underlying VSAN datastore space.
  • Want to try out Synology High-Availability? Add a second appliance and create the High-Availability cluster.
  • Want to test a Synology with a 10GbE interface? Easy, if your ESXi host has a 10GbE NIC. (*)

In the coming weeks, I’m looking forward to deploying other storage appliances on my VSAN datastore that can scale out in this distributed storage architecture.

(*) I have found that while having the virtual Synology appliance with 10GbE on the backbone is awesome, I ran into bandwidth limits when trying to upload data. My sources were either connected to the core switch over 1GbE links, or were virtual machines whose disks are stored on 1GbE NFS/iSCSI LUNs. To test the virtual Synology, I copied large files from various sources. I had three sources pushing out 100-120MB/s, 60-70MB/s and 80-90MB/s of large sequential files to get the 2nd screenshot at the top and see the virtual Synology write stats at 220MB/s.
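
Adding those up, the three 1GbE-limited sources could offer roughly 240-280MB/s of aggregate load at best, so the ~220MB/s of sustained writes on the appliance reflects what the sources could deliver rather than a limit of the virtual Synology or its 10GbE path.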

Homelab 2015 Upgrade

Since my last major entry about my homelab in 2014, I have changed a few things. I added a second cluster based on Apple Mac Mini (Late 2012) machines, on which I run my OS X workloads, VMware Photon #CloudNativeApps machines, the DevOps management tools and the vRealize Automation deployed blueprints. This cluster was initially purchased and conceived as a management cluster. The majority of my workload is composed of management, monitoring, analysis and infrastructure loads, so it just made sense to swap the Compute and Management clusters around and use the smaller one for Compute.

Compute_Cluster

Compute Cluster

The original cluster, composed of the three Supermicro X9SRH-7TF hosts described in my Homelab 2014 article (more build pictures here), gave me some small issues.

2014

Homelab in December 2014.

I’ve found that the dual 10GbE X540 chipset on the motherboard heats up a bit more than expected, and more than once (5x) I lost the integrated dual 10GbE adapters on one of my hosts, requiring a host power-off for ~20 minutes to cool down. In addition, a single 16GB DDR3 DIMM was causing one host to freeze once every ~12 days. All the hosts have run extensive 48-hour memtest86+ checks, but nothing was spotted. When a frozen VSAN host rejoins the cluster you see the re-synchronization of the data, and at that moment I’m glad to have a 10GbE network switch. In the end, I followed a best practice for VSAN clusters and extended the cluster to four hosts.
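
If you want to keep an eye on that re-synchronization while it happens, one option, assuming you have RVC available on your vCenter server (the inventory path below is a placeholder), is the resync dashboard:

  # From an RVC session on the vCenter server
  vsan.resync_dashboard /localhost/<Datacenter>/computers/<Cluster>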

At the beginning of February I added a single Supermicro X10SRH-CLN4F server with an Intel Xeon E5-2630v3 (8 cores @ 2.4GHz) and 64GB of DDR4 memory to the cluster. The Supermicro X10SRH-CLN4F comes with four Intel Gigabit ports and an integrated LSI 3008 12Gb/s SAS adapter. I also added an Intel X540-T2 dual 10GbE adapter to bring it in line with the first three nodes.

esx01

vSphere 6.0 on Supermicro X10SRH-CLN4F

Having a fourth host means scaling up the VSAN Cluster with an additional SSD and two 4TB SAS drives.

In the past month, the pricing of the Samsung 845DC Pro SSD has dropped into the $1/GB range. The Samsung 845DC Pro is rated at 10 DWPD (Drive Writes Per Day) or 7300 TBW (TeraBytes Written over 5 years), and its performance is documented at 50’000 sustained write IOPS (4K) with 95% write IOPS consistency [Reference: Samsung 845DC Pro PDF, and thessdreview article]. A fair warning for other people looking at the Samsung SSD 845DC Pro: it is not on the VMware VSAN Hardware Compatibility List.
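
As a sanity check on those endurance numbers, and assuming the figure refers to the 400GB model: 0.4TB × 10 drive writes per day × 365 days × 5 years ≈ 7300TB written, which is exactly where the TBW rating comes from.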

Here is a screenshot of the disk group layout of the VSAN Cluster.

vsan_disk_group

VSAN Disk Management

The resulting VSAN configuration now provides 28TB of usable space.

vsan
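
That figure lines up with the drive count: eight 4TB SAS drives come to about 32TB decimal, or roughly 29TiB, and once the VSAN on-disk overhead is taken out the datastore reports around 28TB. Keep in mind that with the FTT=1 policy each virtual machine still consumes twice its provisioned size out of that pool.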

Here is a screenshot of the current Management Cluster.

Management Cluster

This cluster, having grown, is now also generating additional heat. It has been relocated to a colder room, and I had a three-phase 240V 16A electrical line put in.

Management Cluster (April 2015)

Management Cluster (April 2015)

My external storage is still composed of two Synology arrays: an old DS1010+ and a more recent DS1813+ with a DX513 expansion unit. At this point, 70% of my virtual machine datasets are located on the VSAN datastore.

Synology DS1813+ Storage Manager

Reviewing this article, I realize this doesn’t qualify as a homelab anymore… it’s a home datacenter… guess I need a new #HomeDC hashtag…

NFS volume mounting error following a network change

I recently redesigned my network configuration from a single /24 address range to multiple /24 ranges, with routing done on my core switch. Part of this change meant shutting down the vSphere cluster and the storage arrays (Synology), and assigning new IP addresses and gateways to all of these entities.

When my hosts and my storage arrays had their new IP addresses and I attempted to re-map my NFS volumes from the Synology, I got a strange error message while mounting them: “There are incorrect or missing values below.”

Unable to mount NFS point in vSphere Web Client

Another error message, “Unable to add new NAS, volume with the label X already exists”, appeared in the ESXi shell when I attempted the same operation using an SSH session directly on my host.

Unable to add new NAS volume with the label already exists

Yet the esxcfg-nas -l command did not return any values.

Well, it seems the old NAS entry was still present on the ESXi host, just not listed. To fix this small issue, you need to delete the stale NFS mount point and recreate it.

In the next screenshot you see me listing my mount points, attempting to mount a new NFS volume (legolas_nfs), erasing the invisible entry, and finally adding the new NFS volume.

erasing_nfs_ghost_mount_point_2
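
If you run into the same ghost entry, the sequence from the ESXi shell looks roughly like this; the Synology IP and export path are placeholders for your own values:

  # List the NFS mounts the host knows about (the stale entry may not show up)
  esxcli storage nfs list
  esxcfg-nas -l

  # Delete the ghost mount point by its label, then add the volume again
  esxcfg-nas -d legolas_nfs
  esxcfg-nas -a -o <synology-ip> -s /volume1/legolas_nfs legolas_nfs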

I hope this can save someone some precious time.