Network core switch Cisco Nexus 3064PQ

Here is my new network core switch for the Home Datacenter, a Cisco Nexus 3064PQ-10GE.

Cisco Nexus 3064PQ-10GE (48x SFP+ & 4x QSFP+)


But before I say more about the Cisco Nexus 3064PQ-10GE, let me bring you back in time… Two years ago, I purchased a Cisco SG500XG-8F8T 16-port 10-Gigabit Stackable Managed Switch, first described in my Homelab 2014 build. It was the most expensive networking investment I had ever made. During the past two years, as the lab grew, I used the SG500XG and two SG500X-24 for my networking stack. This stack is still running on the 1.4.0.88 firmware.

sg500xg_stack

During these past two years, I have learned the hard way that network chipsets for 10GbE over RJ-45 cabling output much more heat than SFP+ chipsets. My initial Virtual SAN Hybrid implementation, using a cluster of three ESXi hosts with Supermicro X9SRH-7TF boards (whose network chipset is the Intel X540-AT2), crashed more than once when the network chipset got so hot that I lost my 10G connectivity, while the ESXi host kept on running. Only a power-down and cool-off of the motherboard would allow my host to restart with its 10G connectivity. This also led me to expand the VSAN Hybrid cluster from three to four hosts, and to have a closer look at the heating issues when running 10G over RJ-45.

Small-business network switches with 10GBase-T connectivity are more expensive than the more enterprise-oriented SFP+ switches, and they also output much more heat (measured in BTU/hr). Sure, once the 10GBase-T switch is purchased, Category 6A cables are cheaper than the passive copper SFP+ cables, which are limited to 7 meters.

The Cisco SG500XG-8F8T is a great switch as it allows me to connect using both RJ-45 and SFP+ cables.

As the lab expanded, I started to ensure that my new hosts either have no 10GBase-T adapters on the motherboard, or use SFP+ adapters (like my recent X10SDV-4C-7TP4F ESXi host). I have started using the Intel X710 dual SFP+ adapters on some of my hosts. I like this Intel network adapter, as its chipset gives out less heat than previous-generation chipsets, and it has a firmware update function that can be done from the command prompt inside of vSphere 6.0.
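As a rough illustration of that update path, Intel's NVM update utility for ESXi is unpacked onto the host and run interactively from the shell. The package layout and paths below are assumptions that vary by adapter and release; the tool itself inventories the adapters and prompts before flashing anything:

    # copy Intel's NVM update package for ESXi to the host, then (hypothetical path):
    cd /tmp/700Series/ESXi_x64
    chmod +x nvmupdate64e
    # interactive updater: lists the X710 ports it finds and asks before updating
    ./nvmupdate64e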

This brings me to the fact that I was starting to run out of SFP+ ports as the lab expands. I found some older Cisco Nexus switches on eBay, and the one that caught my eye for its number of ports, its price and its capabilities is the Cisco Nexus 3064PQ-10GE. These babies are going for about $1200-$1500 on eBay now.

3064pq_on_ebay

The switch comes with 48 SFP+ ports and 4 QSFP+ ports. These four QSFP+ ports can be configured as either 16x 10G using fan-out cables, or 4x 40G. A software command on the switch changes from one mode to the other, as sketched below.
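Here is a hedged sketch of the NX-OS commands involved, as I understand them from the Nexus 3000 documentation; verify against your NX-OS release, and note that the new port mode only takes effect after a reload:

    switch# configure terminal
    ! run the four QSFP+ ports as native 40G ports
    switch(config)# hardware profile portmode 48x10g+4x40g
    ! ...or fan them out as sixteen 10G ports instead
    switch(config)# hardware profile portmode 48x10g+16x10g
    switch(config)# copy running-config startup-config
    switch(config)# reload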

Here is my switch with the interface output. I’m using a Get-Console Airconsole to extend the console port to my iPad over Bluetooth.

nexus_3064pq_10g_40g-1

My vSphere 6.0 host is now connected to the switch using an Intel XL710-QDA2 40GbE network adapter and a QSFP+ copper cable.

esxi_40G

I’m going to use the four QSFP+ connectors on the Cisco Nexus 3064PQ-10GE to connect my Compute cluster with NSX and VSAN All-Flash.

3064_10g_40g_show_int

 

The switch came with NX-OS 5.0(3)U5(1f).

3068_nx-os

 

Concerning the heat output of the Cisco Nexus 3064PQ-10GE (datasheet), I was pleasantly surprised to note that its output is rather small, at 488 BTU/hr when all 48 SFP+ ports are used. I also noted that the noise level of the fans is linked to the fan speed and the load of the switch, going from 59 dBA at 40% duty cycle, to 66 dBA at 60% duty cycle, to 71 dBA at 100% duty cycle.

Here is the back of the Cisco Nexus 3064PQ-10GE. I purchased a switch with DC power supplies (top of the switch, to the right), because that particular switch had both the LAN_BASE_SERVICES and the LAN_ENTERPRISE_SERVICES licenses. I sourced two N2200-PAC-400W-B AC power supplies from another place.

nexus_3064pq_back-1

Link to the Cisco Nexus 3064PQ Architecture.

 

Intel Xeon D-1518 (X10SDV-4C-7TP4F) ESXi & Storage server build notes

These are the build notes for my latest server. This server is based around the Supermicro X10SDV-4C-7TP4F motherboard that I already described in my previous article (Bill-of-Materials). For the case I selected the Fractal Design Node 804, a small square chassis. It is described as being able to handle up to 10x 3.5″ disks.

Fractal Design Node 804

Here is the side view where the motherboard is fitted. The chassis supports Mini-ITX, MicroATX and the Flex ATX format of the Supermicro motherboard. Two 3.5″ hard drives or 2.5″ SSDs can be fitted on the bottom plate.

x10sdv_node804--2

The right section of the chassis contains the space for eight 3.5″ hard drives, fixed in two sliding frames at the top.

x10sdv_node804--3

Let’s compare the size of the Chassis, the Power Supply Unit and the Motherboard in the next photo.

Fractal Design Node 804, Supermicro X10SDV-4C-7TP4F and Corsair RM750i


When you zoom into the picture above, you can see three red squares on the bottom right of the motherboard. Before you insert the motherboard in the chassis, you might want to make sure you have moved the mSATA holding grommet from the position in the photo to the second position, otherwise you will not be able to attach the mSATA SSD once the board is in the chassis. You need to unscrew the holding grommet from below the motherboard. People who have purchased the Supermicro E300-8D will get a nasty surprise here. The red square in the center of the motherboard marks the M.2 grommet, set for 2280-length M.2 sticks. If you have a 22110-length M.2 storage stick, you had better move that holding grommet too.

Here is a closer view of the Supermicro X10SDV-4C-7TP4F motherboard, with the two Intel X552 SFP+ connectors and the 16 SAS2 ports managed by the onboard LSI 2116 SAS chipset.

X10SDV-4C-7TP4F

In the next picture you see the mSATA holding grommet moved to accommodate the Samsung 850 EVO Basic 1TB mSATA SSD, and the Samsung SM951 512GB NVMe SSD in the M.2 socket.

X10SDV-4C-7TP4F

In the next picture we see the size of the motherboard in the chassis. At the top left, you will see a feature of the Fractal Design Node 804: a switch that allows you to change the voltage of three fans. This switch gets its electricity through a SATA power connector. It's on this fan switch that I put a Y-power cable to drive the Noctua A6x25 PWM CPU fan, which fits perfectly on top of the CPU heatsink. This brought the CPU heat buildup during the Memtest86+ test down from 104°C to 54°C.

X10SDV in Node 804

I used two spare Noctua CPU-heatsink fan clips to hold the Noctua A6x25 PWM on the heatsink, and a ziplock to hold those two clips together (sorry, I'm not sure if we have a proper name for those metal fixing brackets). Because the Noctua gets its electricity from the chassis and not the motherboard, the Supermicro BIOS is not attempting to increase/decrease the fan's rpm. This allows me to keep a steady airflow on the heatsink.

Noctua A6x25 PWM fixed on heatsink


I have fitted my server with a single 4TB SAS drive. To do this I use the LSI SAS cable L5-00222-00 shown here.

lsi_sas_l5-00222-00_cable

This picture shows the 4TB SAS drive in the leftmost storage frame. Due to the length of the cable connector, the SAS cable would otherwise be blocked by the power supply unit, so I will only be able to expand to 4x 3.5″ SAS disks in this chassis. Using SATA drives, the chassis would take up to 10 disks.

Node 804 Storage and PSU side

View from the back once all is assembled and powered up.

x10sdv_node804--12

This server, with an Intel Xeon D-1518 and 128GB of memory, is part of my Secondary Site.

ESXi60P03

The last picture shows my HomeDC Secondary Site. The Fractal Design Node 804 is sitting next to a Fractal Design Define R5. The power consumption comes in at 68 Watts for an X10SDV-4C-7TP4F with two 10GbE SFP+ passive copper connections, two SSDs and a single 4TB SAS drive.

HomeDC Secondary Site


Using a virtual Synology in a scale-out distributed storage architecture

I’ve recently finished upgrading the Home Datacenter (#HomeDC) to vSphere 6.0 with four hosts running VSAN 6.0 with dual 10GbE networking for each host.

vsan

Even running a few large virtual machines on the VSAN datastore, like VDP 6.0 with a 4TB backed disk, I found myself with a lot of spare storage. I've invested in the SAS disks (Seagate Enterprise Capacity 4TB SAS 7200rpm) backing the VSAN datastore, so the budget is gone for replacing the aging Synology DS1010+.

I’ve recently studied various reviews on the Synology DS2015xs, but found the CPU a bit lacking to drive the dual 10GbE SFP+ links, and the Synology DS3615xs is a bit expensive. So why not leverage the 10GbE NICs in my management cluster for ultra fast connections, the fast CPUs on my hosts are a nice addition too. The biggest advantage is “cheap” 10GbE file server connections.

The rest of this blog goes into a grey zone… it's #unsupported.

Let me show you the goods first.

virtual Synology DS3615xs running on VSAN datastore

The concept is to create a storage appliance that leverages the VSAN datastore and its read/write acceleration, and provides a flexible structure where you can increase the storage on an as-needed basis, or create temporary storage while migrating from one Synology to a newer one. All this running on a vSphere host. A concept that a lot of other companies implement with their Virtual Storage Appliances.

I’m going to use the XPEnology operating system, which is based on the Synology DiskStation Manager (DSM).

  • In the design and implementation that I will describe here, the virtual Synology has a single 8TB disk. The appliance is not doing any RAID functions on this disk, as it is already protected on the VSAN datastore using a Number of Failures To Tolerate of 1 policy (FTT=1).
  • Another way would be to create two or four virtual disks with a Number of Failures To Tolerate of 0, and do software RAID in the appliance.
  • A third way could be to use four physical disks and two SSDs on a host, create RDM links (sketched below), and present all these disks to the virtual Synology appliance, doing software RAID on the disks and using the SSDs for caching (SSD cache). This virtual storage appliance would not be able to move to another host using vMotion, but you could mitigate this restriction using Synology High-Availability.
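For that third option, here is a hedged sketch of how the RDM mapping files could be created from the ESXi shell. The device identifier, datastore and folder names are hypothetical; vmkfstools -z creates a physical-mode RDM pointer, -r a virtual-mode one:

    # create a physical-mode RDM pointer file for one local disk (hypothetical device ID)
    vmkfstools -z /vmfs/devices/disks/naa.5000c5007abc1234 /vmfs/volumes/local-datastore/vSynology/disk1-rdm.vmdk
    # repeat for each physical disk and SSD, then add the pointer vmdks to the appliance as existing disks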

To build the virtual Synology you will need to retrieve the latest copy of the XPEnology DS3615xs files. You are looking for XPEnoboot_DS3615xs_5.1-5022.3.vmdk or a more recent version. Each version can have its own deployment process; the process I describe below uses the XPEnoboot_DS3615xs_5.1-5022.3.vmdk version.

There is also a huge forum with lots of contributions and interesting links at the XPEnology forums.

1) Creating the vSynology

Now I’m going to say upfront, that you will need to upload the XPEnoboot_DS3615xs_5.1-5022.3.vmdk twice in the virtual storage appliance. Once for the initial install, which will format all disks of the appliance (including the boot vmdk), then again to boot the appliance.

We start by creating a new Virtual Machine.

01 - Create new VM

We give it a name and place it in a Cluster.

02 - Name VM

And we store the virtual machine and its configuration files on an existing datastore. I have selected my vsanDatastore.

04 - Select VSAN Datastore

We define the hardware compatibility of the virtual machine and select the Guest OS. We are going to use Linux, Other 3.x Linux (64-bit).

06 - Select Guest OS Linux 3.x

I have selected two vCPUs and 8GB of memory. Because my appliance won't do any software RAID, two vCPUs are more than enough.

07 - Base Hardware

I have added a second VMXNET3 network interface, which I put on a dedicated 10GbE Distributed Port Group, so eth0 goes out using uplink1 and eth1 goes out using uplink2. You see these changes in the summary of the appliance below.

08 - ds3615xs Hardware Summary

2) Changing the Boot disk

We can now go back into the appliance and edit it. We remove the boot disk and erase it from the datastore. (Yeah, missing screenshot of this step.)

We then use the datastore browser to upload, for the first time, the XPEnoboot_DS3615xs_5.1-5022.3.vmdk into the appliance folder.

09 - Upload XPE vmdk on vsanDatastore

And we add this existing virtual disk to the appliance.

10a - Select the XPE vmdk

The new boot disk is attached as an IDE disk on port IDE(0:0).

10b - Add XPE vmdk as IDE0-0

In the following screenshot, I’m adding the main disk to the storage appliance. I’m creating a 8TB (or 8192GB) virtual disk, and select my VSAN Storage Base Polci “VSAN High Perf”.  The “VSAN High Perf” is defined as a Number of failures to tolerate of 1, and Number of disk stripes per object at 2.

11 - XPE non-persistent and 8TB

Now you can start the appliance. Look closely at the IP addresses of the appliance and the MAC addresses; you want to configure the IP addresses later on the proper NIC.

12a Start VM and check eth0 eth1

Using the Synology Assistant you can now see your appliance appear on the network.

12b - Use Synology Assistant to find new DS3615xs

Use your browser and aim it at the IP address shown in the Synology Assistant to do the initial install.

12c - Open the Web Assistant

We are installing the DSM using the Manual install.

12d - Install DiskStation Manager

Here you upload the DSM 5.1-5022 pattern file that you retrieved from the Synology download center in the DS3615xs selection.

12e - Select Manual install and select DS3615xs 5022 pat

It will now prompt you that it will erase all partitions on the attached disks of the appliance. This includes the XPEnoboot disk of the appliance.

12f - Format disks with 5022.3 PAT

Accordingly, the expected behavior now is that the boot disk is wiped and won't boot.

13 - Both disk formatted.

Stop the appliance and, using the datastore browser, erase the XPEnoboot disk. Upload the XPEnoboot_DS3615xs_5.1-5022.3.vmdk into the folder again, for the second time.

14 - Erase XPEnoboot vmdk and replace with original one

3) Configuration using Synology Assistant

You can now restart the appliance. You will notice that the second time the appliance boots, some of the messages, like the IP address, are not there anymore. And using the Synology Assistant, you see that the DHCP function isn't started; the IP addresses are now 169.254.x.y.

Select the proper network interface in the Synology Assistant using the MAC address, and select Setup. If you don't select the proper MAC address, you might need to swap IP addresses later. So save yourself some time, and select the eth0 one.

15 - Reboot DS3615xs and use Synology Assistant

The Synology Assistant wizard will now start.

16 - Synology Assistant

The admin password at this time is blank; don't enter any value. You can change the password later.

17 - Synology Assistant - Blank password

Enter the appliance network settings.

18 - Synology Assistant - Final Network settings for eth0

Refreshing the Synology Assistant shows that you have the proper IP address now.

19 - Now ready for Web configuration

Time to connect to your newly deployed appliance.

20 - Configuration

You are now only a few steps away from using your storage appliance.

21 - Web Config

It is now time to change your admin account password.

22 - Server name

We can now update the DSM 5.1-5022 version to the latest 5.1-5022-5 version. Depending on the CPU of your host, you will never have seen a Synology reboot so fast.

23 Patch DSM

If you intend to use this virtual synology appliance to store data, I recommend you do some conditioning tests first, to see how it reacts in your environment.

I like the flexibility of the virtual synology appliance:

  • Adding a temporary repository for a data migration becomes easy if you have a lot of underlying VSAN datastore space.
  • Want to try out Synology High-Availability? Add a 2nd appliance and create the High-Availability cluster.
  • Want to test a Synology with a 10GbE interface? Easy, if your ESXi host has a 10G interface. (*)

In the coming weeks, I’m looking forward to deploy on my VSAN datastore another storage appliances that can scale out in this distributed storage architecture.

(*) I have found that while having the virtual Synology appliance with 10GbE on the backbone is awesome, I ran into upload bandwidth limits trying to push data to it. My sources were connected to the core switch over 1GbE links, or the virtual machines used as a source for testing had their disks stored on 1GbE NFS/iSCSI LUNs. To test the virtual Synology I copied large files from various sources. I had three sources pushing out 100-120MB/s, 60-70MB/s and 80-90MB/s of large sequential files to get the 2nd screenshot at the top, where the virtual Synology write stats reach 220MB/s.

NFS volume mounting error following a network change

I recently redesigned my network configuration from a single /24 address range to multiple /24 ranges, with routing done on my core switch. Part of this change meant shutting down the vSphere cluster and the storage arrays (Synology), and reassigning new IP addresses and gateways to all of these entities.

When my hosts and my storage arrays had their new IP addresses and I attempted to re-map my NFS volumes to the Synology, I got a strange error message while attempting to mount my NFS volumes: "There are incorrect or missing values below."

Unable to mount NFS point in vSphere Web Client

Another error message, "Unable to add new NAS, volume with the label X already exists", was given in the ESXi shell when I attempted the same operation using an SSH session directly on my host.

Unable to add new NAS volume with the label already exists

Yet the esxcfg-nas -l command did not return any values.

Well, it seems that the old NAS entry was still in the ESXi host, but not listed. To fix this small issue, you need to delete the non-visible NFS mount point and recreate it.

In the next screenshot you see me listing my mount points, attempting to mount a new NFS volume (legolas_nfs), erasing the non-visible entry, and at last adding the new NFS volume.

erasing_nfs_ghost_mount_point_2
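For reference, here is a hedged transcription of that sequence from the ESXi shell. The Synology IP and export path are hypothetical; esxcfg-nas uses -o for the NAS host, -s for the exported share, -d to delete an entry and -a to add one:

    # list the NFS mount points (returned nothing in my case)
    esxcfg-nas -l
    # adding the volume fails: the ghost entry still exists internally
    esxcfg-nas -a -o 10.10.2.50 -s /volume1/legolas_nfs legolas_nfs
    # delete the non-visible entry, then add it again
    esxcfg-nas -d legolas_nfs
    esxcfg-nas -a -o 10.10.2.50 -s /volume1/legolas_nfs legolas_nfs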

I hope this can save someone some precious time.

Upgrading LSI HBA 9300-8i via UEFI (Phase 06)

Here is a summary of how to upgrade an LSI SAS3 HBA 9300-8i card to the latest BIOS & Firmware using the UEFI mode. This is applicable to my homelab Supermicro X9SRH-7TF or any other motherboard with a UEFI Built-In EFI Shell. I've found using the UEFI mode to be more practical than the old method of an MSDOS bootable USB key, and this is the way more and more Firmware and BIOS updates will be released.

Tom and Duncan showed how to upgrade an LSI 9207-4i4e from within the VMware vSphere 5.5 CLI. In this article I'm going to show you how to use the UEFI Shell for the upgrade.

Preparation.

First you need to head over to the LSI website for your HBA and download a few files to your computer. For the LSI HBA 9300-8i you can jump to the Software Downloads section. You want to download three files, extract them and put the files on a USB key:

  • The Installer_P4_for_UEFI, which contains the firmware updater sas3flash.efi that works with P06. You can retrieve it using this dropbox link, as it has disappeared from the LSI download site.
  • The SAS3_UEFI_BSD_P6, which contains the BIOS for the updater (X64SAS3.ROM).
  • The 9300_8i_Package_P6_IR_IT_firmware_BIOS_for_MSDOS_Windows, which contains the SAS9300_8i_IT.bin firmware and the MPTSAS3.ROM bios.

 

lsi9300_8i_download

At this point you put all those extracted files mentioned above on a USB key.

lsi9300_p06_usbdos

 

You reboot your server, and modify the boot parameters in the BIOS of the server to boot into the UEFI Built-In EFI Shell.

UEFI_Build-In_EFI_Shell

Upgrading BIOS & Firmware.

When you reboot you will be dumped into the UEFI shell. You can easily move to the USB key with your programs, as shown below.

UEFI_booting

And let's move over to the USB key. For me the USB key is mapped as fs1:, but you could also have an fs0:

A quick dir command will list the files on the USB key.

uefi_dir

Using the sas3flash.efi -list command (sas3flash.efi is extracted from the Installer_P4_for_UEFI file) we can list the local LSI MPT3SAS HBA adapter, see its SAS address, and see the various versions of the Firmware, BIOS and UEFI BSD code.

sas3flash_list

There are three components that we want to patch: the Firmware, the BIOS and the UEFI BSD code.

We start by upgrading the UEFI BSD BIOS. Using sas3flash.efi we can target the SAS address of the controller, and select the X64SAS3.ROM file found in the SAS3_UEFI_BSD_P6 download. As you see, the -c Controller parameter allows you to specify which adapter the BIOS is loaded to; you can enter the number 0 or the SAS address: sas3flash.efi -c 006F94D30 -b X64SAS3.ROM

sas3flash_bios

The next step is to upgrade the Firmware with the SAS9300_8i_IT.bin found in the 9300_8i_Package_P6_IR_IT_firmware_BIOS_for_MSDOS_Windows file: sas3flash.efi -c 006F94D30 -f SAS9300_8i_IT.bin

sas3flash_firmware

The last part is to upgrade the MPTSAS3.ROM file, which contains the BIOS of the LSI adapter. Here again we use sas3flash.efi -c 006F94D30 -b MPTSAS3.ROM.
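Taken together, the whole Phase 06 upgrade boils down to these three calls (using the SAS address of my adapter; substitute your own address, or simply the controller number 0):

    sas3flash.efi -c 006F94D30 -b X64SAS3.ROM
    sas3flash.efi -c 006F94D30 -f SAS9300_8i_IT.bin
    sas3flash.efi -c 006F94D30 -b MPTSAS3.ROM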

 

The end result, once the Phase 06 firmware and BIOSes have been installed, is the following sas3flash.efi -list output:

lsi9300_8i_phase06

 

  • Firmware Version 06.00.00.00
  • BIOS Version 08.13.00.00
  • UEFI BSD Version 07.00.00.00

Now reboot the server, and make sure to change your boot option in the server BIOS back to the USB key or hard drive that contains the vSphere hypervisor.

 

Creating a Linux Net benchmark VM

In this post, I will quickly explain how I created the Linux virtual machine that I have used, and will keep using, to benchmark some aspects of my new 2014 Homelab. First I downloaded the latest version of the CentOS 6.5 64-bit Net Install .ISO from the CentOS website. This allows me to install the virtual machine quickly with just the packages I need.

The next step is to create two Linux 64-bit VMs on my vCenter. I selected the VMX-09 virtual machine hardware version, so that I can edit the network properties from either the vCenter 5.5 Windows Client or the vSphere Web Client. I create a two-vCPU machine: the application I will be running for my network benchmarks is iperf, which is a single-threaded process, so the 2nd vCPU will be consumed by the operating system of the VM.

For network adapters, I select two VMXNET3 adapters; the first one will be used for management and for baselining my performance on 1Gbps Ethernet, while the 2nd one can be moved around from vSwitch to dVSwitch and from VMNIC to VMNIC. Note that I'd rather give two virtual sockets with one core each than one virtual socket with two cores. This will give you about 6% more performance for the VM.

vm_64bit_linux_01

Another small change I always make is to optimize the Virtual Machine Monitor for the VMs. The VMM is a thin layer for each VM that leverages the scheduling, memory management and network stack in the VMkernel. So I change, in the Options tab, the CPU/MMU Virtualization settings to force the use of Intel VT-x/AMD-V for instruction set virtualization and Intel EPT/AMD RVI for MMU virtualization. This ensures that the VM gets the best optimized hardware support for the CPU and MMU. This should only be done on recent processors, when you are sure that your CPU/MMU supports EPT and VT-x. If that is not the case, then leave this setting to Automatic.

vm_64bit_linux_02_cpu-mmu
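For reference, forcing these two settings comes down to a pair of entries in the VM's .vmx file. This is a sketch based on the documented VMX option names, so double-check on your own build; setting them back to "automatic" restores the default behavior:

    monitor.virtual_exec = "hardware"
    monitor.virtual_mmu = "hardware"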

 

If you want to know more about these settings and many others, I highly recommend you read the great “vSphere High Performance Cookbook” by Prasenjit Sarkar (@stretchcloud) at Packt Publishing.

I just need to say that in the past few years, all my VMs and Templates have had this setting by default, on all my systems and my customers' clusters.

Next, we need to boot the Linux machine with the CentOS Net Installer. I'm not going to explain all the steps needed for every Linux setting, just a few points. When you get the option to select the installation method, select the URL option.

CentOS Installation Method

It will then ask you to select the network card, and will fetch an IP address from the network via DHCP before asking you to enter the URL. We will use the following URL:

http://mirror.centos.org/centos/6.5/os/x86_64/

Enter URL

Once the install GUI has started, don't forget to set the 2nd Ethernet interface, where you will be doing your iperf testing, to a 9000 MTU. Otherwise your network performance results will be skewed.

nic_eth1_mtu

For my performance testing VMs, I let the OS select the default file partition scheme; this is not a VM requiring special sizing.

default_partition_scheme

I select the Desktop installation config for these test platforms.

desktop installation

Once you have finished installing the virtual machine, install the latest VMware Tools on it before modifying the grub menu. I add the keyword vga=0x317 to the kernel settings of all my Linux machines in grub.conf or menu.lst (OpenSuSE), so that the VM boots thinking it has a 1024×768 monitor. Even if I stay in the console mode of Linux, it gives me more screen estate.
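As an example, here is roughly what the kernel line ends up looking like in a CentOS 6 grub.conf; the kernel version and root device are hypothetical, and only the trailing vga=0x317 is the addition:

    # /boot/grub/grub.conf -- append vga=0x317 to the kernel line
    kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root quiet vga=0x317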

When you have Linux machines that run on 1Gbps Ethernet, the default settings in the Linux kernel are fine, but if you want to optimize the network traffic of Linux for 10Gbps, there are a few system variables that we can fine-tune. Let's edit /etc/sysctl.conf and add six fields:

# Minimum, initial and max TCP receive buffer size in bytes
net.ipv4.tcp_rmem = 4096 87380 134217728
# Minimum, initial and max TCP send buffer space allocated
net.ipv4.tcp_wmem = 4096 65536 134217728
# TCP moderate receive buffer auto-tuning
net.ipv4.tcp_moderate_rcvbuf = 1
# Maximum receive socket buffer size (size of BDP)
net.core.rmem_max = 134217728
# Maximum send socket buffer size (size of BDP)
net.core.wmem_max = 134217728
# Maximum number of packets queued on the input side
net.core.netdev_max_backlog = 300000
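These values are read at boot; if you prefer to apply them immediately without waiting for the reboot, you can reload the file by hand:

    sysctl -p /etc/sysctl.conf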

I’m going to use iperf to test the links between two machines, so for this set of machines, I disable the IPtables as I have multiple ports being used between the two linux test platforms. chkconfig iptables off will do the trick. A quick reboot and all the modifications will take effect.

Also, as we will test the 10G Ethernet performance, both virtual machines are on a Distributed vSwitch (dVS), and the PortGroup is configured with the MTU set at 9000 (Jumbo Frames).

And before finishing this blog, I also make sure to use DRS rules, so that Linux VM 01 should run on my ESX01 server, and Linux VM 02 should run on my ESX02 server. Using the Should rule allows me to quickly put a host in maintenance mode, while ensuring that my performance virtual machines stay where they should.

To use iperf (a very single-threaded program) between the two test hosts, start iperf on the first one as a server with iperf -s, and on the second one use the command iperf -m -t 300 -c IP_of_other_VM, or iperf -m -t 300 -c IP_of_other_VM -fM to have the same results in Bytes instead of bits.
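To recap the pair of commands (the client IP is a placeholder; -t 300 runs the test for 300 seconds, -m reports the MSS, and -fM formats the report in MBytes):

    # on the first VM, run iperf as the server
    iperf -s
    # on the second VM, run a 300-second test against the first VM
    iperf -m -t 300 -c 192.168.1.52
    # same test, reported in MBytes instead of Mbits
    iperf -m -t 300 -c 192.168.1.52 -fM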

Here are preliminary results using a 10G Ethernet interface between the two hosts (both hosts have an Intel X540-T2 adapter).

10g_results

 

 

Upgrading the X9SRH-7TF LSI HBA 2308 and LSI HBA 9207-8i

Here is a summary of how to upgrade the LSI HBA 2308 chipset on the Supermicro X9SRH-7TF and an LSI SAS2 HBA 9207-8i card to the latest BIOS & Firmware using the UEFI mode. This is applicable to my homelab Supermicro X9SRH-7TF or any other motherboard with a UEFI Built-In EFI Shell.

I’ve found that using the UEFI mode to be more practical than the old method of a MSDOS bootable USB key. And this is the way more and more Firmware and BIOS will be released.

Tom and Duncan showed you last week how to upgrade an LSI 9207-4i4e from within the VMware vSphere 5.5 CLI. In this article I'm going to show you how to use the UEFI Shell for the upgrade.

Preamble.

Since last week, I have been running the PernixData FVP (Flash Virtualization Platform) 1.5 solution on my two ESXi hosts, and I have found that the LSI HBA 2308 on the motherboard had a tendency to drop all the drives and SSDs under heavy I/O load. Last week I upgraded the LSI HBA 2308 from the original Phase 14 Firmware to Phase 16, but that didn't solve the issue. Unfortunately I have not yet found a newer Phase 18 Firmware or BIOS release for the embedded adapter on the Supermicro Support site.

So I dropped another LSI HBA 9207-8i adapter in the box, which is also based on the LSI 2308 chip. And lo and behold, my two LSI adapters seemed to have nearly the exact same Firmware & BIOS.

two_adapters_lsi

Well, if the LSI embedded HBA and the LSI 9207-8i are nearly identical and built on the same chipset… who knows what happens if I burn the 9207-8i Firmware & BIOS onto the embedded adapter…

 

Preparation.

First you need to head over to the LSI website for the LSI 9207-8i and download a few files to a local computer. For the LSI HBA 9207-8i you can jump to the Software Downloads section. You want to download three files, extract them and put the files on a USB key:

  • The Installer_P18_for_UEFI which contains the firmware updater (sas2flash.efi)
  • The UEFI_BSD_P18 which contains the BIOS for the updater (X64SAS2.ROM)
  • The 9207_8i_Package_P18_IR_IT_Firmware_BIOS_for_MSDOS_Windows which contains the 9207-8.bin firmware.

lsi_site

At this point you put all those extracted files mentioned above on a USB key.

You reboot your server, and modify the boot parameters in the BIOS of the server to boot into the UEFI Built-In EFI Shell.

UEFI_Build-In_EFI_Shell

When you reboot, also jump into the LSI HBA adapter configuration to collect the controller's SAS address. It's a 9-digit number you can find on the following interface. Notice that it starts with a leading 0.

lsi_sas_address_1

and

lsi_sas_address_2

For my adapters it would be 005A68BB0 for the SAS9207-8I and 0133DBE00 for the embedded SMC2308-IT.

 

Upgrading BIOS & Firmware.

Let's plug the USB key into the server, and boot into the UEFI Built-In EFI Shell.

UEFI_booting

And let's move over to the USB key. For me the USB key is mapped as fs1:, but you could also have an fs0:. A quick dir command will list the files on the USB key.

usb_dir

Using the sas2flash.efi -listall command (sas2flash.efi is extracted from the Installer_P18_for_UEFI file) we can list all the local LSI HBA adapters and see the various versions of the Firmware & BIOS.

sas2flash_listall_old

We can also get more details about a specific card using the sas2flash.efi -c 0 -list

sas2flash_list_old_9207

and sas2flash.efi -c 1 -list

sas2flash_list_old_2308

Now let's upgrade the BIOS with the X64SAS2.ROM file found in the UEFI_BSD_P18 download, and the Firmware with the 9207-8.bin found in the 9207_8i_Package_P18_IR_IT_Firmware_BIOS_for_MSDOS_Windows file.

As you see, the -c Controller parameter allows you to specify which adapter the BIOS and Firmware are upgraded on.
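The two runs in the screenshots below presumably boil down to something like this, using the SAS addresses collected earlier (sas2flash accepts both -b for a BIOS image and -f for a firmware image in a single call):

    # upgrade the SAS9207-8i card
    sas2flash.efi -c 005A68BB0 -b X64SAS2.ROM -f 9207-8.bin
    # upgrade the embedded SMC2308-IT
    sas2flash.efi -c 0133DBE00 -b X64SAS2.ROM -f 9207-8.bin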

sas2flash_upgrade_0

and

sas2flash_upgrade_1

Let's have a peek again at just one of the LSI adapters: controller 1, which is the embedded one, now seems to have the board name SAS9207-8i. A bit confusing, but it seems to have worked.

sas2flash_1_list

Using the sas2flash.efi -listall command now shows us the new Firmware and BIOS applied to both cards.

sas2flash_listall_new

Now power off the server so the new BIOS & Firmware are properly loaded, and make sure to change your boot option in the server BIOS back to the USB key or hard drive that contains the vSphere hypervisor.

Both the LSI 9207-8i and the embedded LSI HBA 2308 now show up as LSI2308_1 and LSI2308_2 in the vSphere Client.

esxi_storage_adapters

 

Homelab 2014 upgrade

I’ve been looking for a while for a new more powerful homelab (for home), that scales and passes the limits I currently have. I had a great success last year with the Supermicro X9SRL-F motherboard for the Home NAS (Running NexentaStor 3.1.5), so I know I loved the Supermicro X9 Single LGA2011 series. Because of the Intel C600 series of chipset, you can break the barrier of the 32GB you find on most motherboards (Otherwise the X79 chipset allows you upto 64GB).

As time passes, you see product solutions coming out (vCOPS, Horizon View, vCAC, DeepSecurity, ProtectV, Veeam VBR, Zerto) with memory requirements just exploding. You need more and more memory. I'm done with homelabs where you have to replace hosts just because you can't raise the memory ceiling any further. So bye bye to the current cluster of four Shuttle XH61v with 16GB each.

With the Supermicro X9SRH-7TF (link) you can go to 128GB easily (8x16GB) for now. It's really just a $$$ choice. 256GB (8x32GB) is still out of reach for now, but that might change in 2 years.

I attempted to install PernixData FVP 1.5 on my Homelab 2013 Shuttle XH61v, but the combo of the motherboard/AHCI/Realtek R8168 makes for an unstable ESXi 5.5. Sometimes the PernixData FVP Management Server sees the SSD on my host, then it loses it. I did work with PernixData engineers (and Satyam Vaghani), but my homelab is just not stable. Having been invited to the PernixPro program doesn't give me the right to use hours and hours of PernixData engineers' time to solve my homelab issues. This made the choice of my two X9SRH-7TF boxes much easier.

The choice of the Supermicro X9SRH-7TF motherboard (link) is great because of the integrated management, the F in X9SRH-7TF; it's a must these days. Having the dual X540 Intel 10GbE network card on the motherboard will allow me to start using the network with a dual gigabit link, and when I have the budget for a Netgear XS708E or XS712T it will scale to dual 10GBase-T. In the meantime I can also have a single point-to-point 10GbE link between the two X9SRH-7TF boxes for vMotion and the PernixData data synchronization. The third component that comes on the X9SRH-7TF is the integrated LSI SAS HBA, the LSI 2308 SAS2 HBA. This will allow me to build a great VSAN cluster once I go from two to three servers at a later date. It's very important to ensure you have a good storage adapter for VSAN; I have been using LSI adapters for a few years and I trust them. Purchasing a motherboard, then adding a dual X540 10GbE NIC and an LSI HBA, would have cost a lot more than the X9SRH-7TF.

For the CPU, Frank Denneman (@FrankDenneman) and I came to the same conclusion: the Intel Xeon E5-1650 v2 is the perfect balance of core count, cache and speed. Here is another description of the Intel Xeon E5-1650 v2 launch (CPUworld).

For the case, I have gone, just like Frank Denneman's vSphere 5.5 home lab choice, with the Fractal Design Define R4 (Black). I used a Fractal Design Arc Midi R2 for my Home NAS last summer, and I really liked the case's flexibility, the interior design, and the two SSD slots below the motherboard. I removed the two default Fractal Design Silent R2 12cm cooling fans in the case and replaced them with two Noctua NF-A14 FLX fans that are even quieter, and are mounted using rubber holders so they vibrate even less. It's all about having a quiet system. The Home NAS is in the guest room, and people sleep next to it without noticing it. Also, the Define R4 case is just short of 47cm in height, meaning you can lay it down in a 19″ rack if there is such a need/opportunity.

For the CPU cooler, I ordered two Noctua NH-U12DX i4 coolers, which support the Narrow ILM socket. It's a bit bigger than the NH-U9DX i4 that Frank ordered, so we will be able to compare. I burned myself last year with the Narrow ILM socket: I purchased a water cooling solution for the Home NAS and just couldn't fit it on the Narrow ILM socket. That was before I found out the difference between a normal square LGA2011 socket and the Narrow ILM sockets used on some of the Supermicro boards. Here is a great article that explains the differences: Narrow ILM vs Square ILM LGA 2011 Heatsink Differences (ServeTheHome.com)

For the power supply, I invested last year in an Enermax Platimax 750W for the Home NAS. This time the selection is the Enermax Revolution X't 530W power supply. This is a very efficient 80 Plus Gold PSU, which supports ATX 12V v2.4 (it can drop to 0.5W on standby) and uses the same modular connectors as my other power supplies. These smaller ~500W power supplies are very efficient when they run at 20% to 50% load. This should also be a very quiet PSU.

I made some quick calculations yesterday for the power consumption: I expect the maximum power consumed by this new X9SRH-7TF build to be around 180-200W, but it should run at around 100-120W on a normal basis. At normal usage, I should hit about 20% of the power supply load, so the efficiency of the PSU should be around 87%, a bit lower than Frank's choice of the Corsair RM550. This is the reason why I tend to pick a smaller PSU rather than one of the large 800W or even 1000W PSUs.

xt_530w_efficiency

For the Memory, I’m going to reuse what I purchased last year for my Home NAS. So each box will receive 4x16GB Kingston 1600Mhz ECC for now.

The current SSDs that I will use in this rig are the Intel SSD S3700 100GB enterprise SSD and some Samsung 840 Pro 512GB. What is crucial for me in the Intel S3700 is that its endurance is rated at 10 drive writes per day for 5 years. For the 100GB model, that means it is designed to write 1TB each day. This is very important for solutions like PernixData or VSAN. Just to compare, the latest Intel enthusiast SSD, the SSD 730 240GB that I purchased for my wife's computer, has its endurance rated at 50GB per day for 5 years (70GB for the 480GB model). The Intel SSD 730, just like its enterprise cousins (S3500 and S3700), comes with Enhanced Power Loss Data Protection using power capacitors. The second crucial trait of an enterprise SSD is its sustained IOPS rating.

I’m also adding a Intel Ethernet Server Adapter I350-T2 Network Card for the vSphere Console management. I’m used to have a dedicated Console Management vNIC on my ESXi hosts. These will be configured in the old but trusty vSwitch Standard.

Another piece of equipment that I already own and will plug into the new X9SRH-7TF boxes are the Mellanox ConnectX-3 dual FDR 56Gb/s InfiniBand adapters I purchased last year. This will allow me to test and play with a point-to-point 56Gb/s link between the two ESXi hosts. Some interesting possibilities here… I currently don't have a QDR or FDR InfiniBand switch, and these switches are also very noisy, so that is something I will look at in Q3 this year.

I live in Switzerland, so my pricing will be a bit more expensive than what you find in other European countries. I'm purchasing my equipment from a large distributor in Switzerland, Brack.ch. Even if the Supermicro X9SRH-7TF is not on their pricing list, they are able to order them for me. The price I got for the X9SRH-7TF is 670 Swiss Francs, and the Intel E5-1650 v2 is at 630 Swiss Francs. As you see, the cost of one of these servers is closing in on the 1800-1900 Euro price range. I realize it's Not Cheap. And it's the reason for my previous article on the increasing costs of a dedicated homelab, the Homelab shift…

Last but not least, in my Homelab 2013 I focused a lot on the Wife Acceptance Factor (WAF). I aimed for Small, Quiet, Efficient. This time, the only part that I will not be able to keep is the Small. This design is still a Quiet and Efficient configuration. Let's hope I won't get into too much trouble with the wife.

I also need to thank Frank Denneman (@FrankDenneman), as we discussed this home lab topic extensively over the past 10 days, fine-tuning some of the choices going into this design. Without his input, my homelab 2014 design might have gone with the Supermicro A1SAM-2750F: a nifty little motherboard with quad gigabit and 64GB memory support, but lacking in CPU performance. Thanks Frank.

The homelab shift…

I believe that we are at a point in time where we will see a shift in vSphere homelab designs.

One homelab design, which I see as becoming more and more popular, is the nested homelab using either a VMware Workstation or VMware Fusion base. There are already a lot of great blogs on nested homelabs (William Lam), and I must at least mention the excellent AutoLab project. AutoLab is a quick and easy way to build a vSphere environment for testing and learning, and the latest release of AutoLab supports the vSphere 5.5 release.

The other homelab design is a dedicated homelab. Some of the solutions that people want to test in their homelabs are becoming larger and have more components (Horizon, vCAC), requiring more resources. So it is painful to admit, but I believe the dedicated homelab is heading in a more expensive direction.

Let me explain my view with these two points.

The first one, and the more recent one, is that if you want to lab Virtual SAN, you need to spend some non-negligible money on your lab. You need to invest in at least 3 SSDs across three hosts, and you need to invest in a storage controller that is on the VMware VSAN Hardware Compatibility List.

Recently, Duncan Epping mentioned once again that, unfortunately, the Advanced Host Controller Interface (AHCI) standard for SATA is not supported with VSAN, and you can lose the integrity of your VSAN storage. That is something you don't want to happen in production, losing hours of your precious time spent configuring VMs. Therefore, if you want to lab Virtual SAN, you will need to get a storage controller that is supported. This will cost money, and it limits the whitebox motherboards that support VSAN without add-on cards. I really hope that the AHCI standard will be supported in the near future, but there is no guarantee.

The second one, and the one I see as a serious trend, is network driver support. The network drivers used in most homelab computers are not updated for the current release of vSphere (5.5), and don't have a bright future with upcoming vSphere releases.

With vSphere 5.5, VMware has started its migration to a new Native Driver Architecture, slowly moving away from the Linux kernel drivers that are plugged into the VMkernel using shims (great blog entry by Andreas Peetz on the Native Driver Architecture).

All those users that need the Realtek R8168 driver in the current vSphere 5.5 release have to extract the driver from the latest vSphere 5.1 offline bundle, and inject the .vib driver into the vSphere 5.5 ISO file. You can read more about this in the popular article "Adding Realtek R8168 Driver to ESXi 5.5.0 ISO".

My homelab 2013 implementation uses these Realtek network cards, and the driver works well with my Shuttle XH61v. But if you have a closer peek at the many replies to my article, a big trend seems to emerge. People use a lot of various Realtek NICs in their computers, and they have to use these R8168/R8169 drivers. Yet these drivers don't work well for everyone. I get a lot of queries about why the drivers stop working, or are slow, but hey, I'm just an administrator that cooked a driver into the vSphere ISO, I'm not a driver developer.

vSphere is a product aimed at large enterprises, so priority in the development of drivers is to be expected for this market. VMware seems to have dropped, or at least lagged on, the development of these non-enterprise oriented drivers. I don't believe we will see further development of these Realtek drivers from the VMware development team; only Realtek could really pick up this job.

This brings me to the conclusion that, in the future, people will need to move to more professional computers/workstations and controllers if they want to keep using and learning vSphere at home on a dedicated homelab. I really hope to be proven wrong here… So you are most welcome to reply and tell me that I'm completely wrong.

 


 

 
