Network core switch Cisco Nexus 3064PQ

Here is my new network core switch for the Home Datacenter, a Cisco Nexus 3064PQ-10GE.

Cisco Nexus 3064PQ-10GE (48x SFP+ & 4x QSFP+)

But before I speak more about the Cisco Nexus 3064PQ-10GE, let me just bring you back in time… Two years ago, I purchased a Cisco SG500XG-8F8T 16-port 10-Gigabit Stackable Managed Switch, first described in my Homelab 2014 build. It was the most expensive networking investment I had ever made. During the past two years, as the lab grew, I used the SG500XG and two SG500X-24 for my networking stack. This stack is still running on the 1.4.0.88 firmware.

sg500xg_stack

During these past two years, I have learned the hard way that 10GbE network chipsets using RJ-45 cabling put out much more heat than their SFP+ counterparts. My initial Virtual SAN Hybrid implementation, a cluster of three ESXi hosts with Supermicro X9SRH-7TF boards (Intel X540-AT2 network chipset), crashed more than once when the network chipset became so hot that I lost my 10G connectivity, even though the ESXi host kept on running. Only powering down and letting the motherboard cool off would allow the host to restart with its 10G connectivity. This also led me to expand the VSAN Hybrid cluster from three to four hosts and to take a closer look at the heating issues when running 10G over RJ-45.

Small-business network switches with 10GBase-T connectivity are more expensive than the more enterprise-oriented SFP+ switches, and they also put out much more heat (measured in BTU/hr). Sure, once a 10GBase-T switch is purchased, Category 6A cables are cheaper than passive copper SFP+ cables, which are limited to 7 meters.

The Cisco SG500XG-8F8T is a great switch as it allows me to connect using both RJ-45 and SFP+ cables.

As the lab expanded, I started to make sure that my new hosts either have no 10GBase-T adapters on the motherboard, or use SFP+ adapters (like my recent X10SDV-4C-7TP4F ESXi host). I have started using the Intel X710 dual SFP+ adapters on some of my hosts. I like this Intel network adapter, as its chipset gives out less heat than previous-generation chipsets, and it has a firmware update function that can be done from the command prompt inside vSphere 6.0.
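As a quick sanity check before and after such a firmware update, the current driver and firmware versions can be read from the ESXi shell. A small sketch, where vmnic4 is only a placeholder for whichever vmnic the X710 ports enumerate as on your host:

  • esxcli network nic list
  • esxcli network nic get -n vmnic4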

This brings me to the fact that I was starting to run out of SFP+ ports as the lab expands. I found some older Cisco Nexus switches on ebay, and the one that caught my eye for its number of ports, its price and its capabilities is the Cisco Nexus 3064PQ-10GE. These babies are going for about $1200-$1500 on ebay now.

3064pq_on_ebay

The switch comes with 48 ports in SFP+ format and 4 ports in QSFP+ format. These four ports can be configured as either 16x 10G using fan-out cables or 4x 40G; a software command on the switch changes it from one mode to the other.
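From memory, the mode change is done with the hardware profile portmode command in configuration mode, saved and followed by a reload before it takes effect. Treat the following as a sketch only; the exact keywords, including the one for the 16x10G fan-out mode, vary by NX-OS release, so check the configuration guide for your version:

  • configure terminal
  • hardware profile portmode 48x10g+4x40g
  • exit
  • copy running-config startup-config
  • reload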

Here is my switch with the interface output. I’m using a Get-Console Airconsole to extend the console port to my iPad over Bluetooth.

nexus_3064pq_10g_40g-1

My vSphere 6.0 host is now connected to the switch using an Intel XL710-QDA2 40GbE network adapter and a QSFP+ copper cable.

esxi_40G

I’m going to use the four QSFP+ connectors on the Cisco Nexus 3064PQ-10GE to connect my Compute cluster with NSX and VSAN All-Flash.

3064_10g_40g_show_int

 

The switch came with NX-OS 5.0(3)U5(1f).

3068_nx-os

 

Concerning the heat output of the Cisco Nexus 3064PQ-10GE (datasheet), I was pleasantly surprised to note that it is rather small at 488 BTU/hr when all 48 SFP+ ports are used. I also noted that the noise level is tied to the fan speed and the load of the switch, going from 59 dBA at 40% duty cycle, to 66 dBA at 60% duty cycle, to 71 dBA at 100% duty cycle.

Here is the back of the Cisco Nexus 3064PQ-10GE. I purchased the switch with DC power supplies (top of the switch, to the right), because the unit I wanted had both the LAN_BASE_SERVICES and the LAN_ENTERPRISE_SERVICES licenses. I sourced two N2200-PAC-400W-B AC power supplies from another seller.

nexus_3064pq_back-1

Link to the Cisco Nexus 3064PQ Architecture.

 

Notes & Photos of the Homelab 2014 build

I’ve had a few questions about my Homelab 2014 upgrade hardware and settings, so here is a follow-up. This is just a photo collection of the various stages of the build. Compared to my previous homelabs, which were designed for a small footprint, this one isn’t; this homelab version has been built to be a quiet environment.

I started my build with only two hosts. For the cases I used the very nice Fractal Design Define R4. These are ATX chassis in a sleek black color, can house 8x 3.5″ disks, and support a lot of extra fans. Some of those you can see on the right side; they are Noctua NF-A14 FLX. For the power supplies I picked up some Enermax Revolution Xt units.

IMG_4584

For the CPU I went with the Intel Xeon E5-1650v2 (6 cores @3.5GHz) and a large Noctua NH-U12DX i4. The special thing about the NH-U12DX i4 model is that it comes with socket brackets for the Narrow ILM mounting that you find on the Supermicro X9SRH-7TF motherboard.

IMG_4591

The two Supermicro X9SRH-7TF motherboards and two add-on Intel I350-T2 dual 1Gbps network cards.

IMG_4594

Getting everything ready for the build stage.

On the next photo you will see quite a large assortment of parts: five small yet long-lasting Intel SSD S3700 100GB drives, 8x Seagate Constellation 3TB disks, some LSI HBA adapters like the LSI 9207-8i and LSI 9300-8i, and two Mellanox ConnectX-3 VPI dual-port 40/56Gbps InfiniBand and Ethernet adapters that I got for a steal (~$320 USD) on ebay last summer.

IMG_4595

You need to remember that if you only have two hosts with 10Gbps or 40Gbps Ethernet, you can build a point-to-point config without having to purchase a network switch. These ConnectX-3 VPI adapters are recognized as 40Gbps Ethernet NICs by vSphere 5.5.
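To illustrate how little is needed on the ESXi side for such a point-to-point link, here is a minimal sketch using esxcli; the vSwitch name, vmnic number, portgroup name and IP address are placeholders, and the second host simply gets the mirror configuration with another address in the same subnet:

  • esxcli network vswitch standard add -v vSwitch1
  • esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic2
  • esxcli network vswitch standard portgroup add -v vSwitch1 -p Storage-PtP
  • esxcli network ip interface add -i vmk2 -p Storage-PtP
  • esxcli network ip interface ipv4 set -i vmk2 -t static -I 10.10.10.1 -N 255.255.255.0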

Let's have a closer look at the Fractal Design Define R4 chassis.

Fractal Design Define R4 Front

The Fractal Design Define R4 has two 14cm fans, one in the front and one in the back. I'm replacing the back one with a Noctua NF-A14 FLX, and I put another one in the top of the chassis to extract the warm air out the top.

The inside of the chassis has a nice feel, with easy access to the various elements, space for 8x 3.5″ disks in the front, and room to route the power cables on the other side of the chassis.

Fractal Design Define R4 Inside

A few years ago, I bought a very nice yet expensive Tyan dual-processor motherboard, and I installed it with all the elements before inspecting the socket as I went to put the CPU on the motherboard. It had bent pins under the CPU cover, which is something motherboard manufacturers and distributors do not cover under warranty. That was an expensive lesson, and it was the end of my Tyan allegiance. Since then I have moved to Supermicro.

LGA2011 socket close-up. Always check the pins for damage.

Here is a close-up of the Supermicro X9SRH-7TF.

Supermicro X9SRH-7TF

I now always put the CPU on the motherboard before the motherboard goes into the chassis. Note the Narrow ILM mounting for the cooler in the next picture.

Intel Xeon E5-1650v2 and Narrow ILM

Here is the difference between the Fractal Design Silent Series R2 fan and the Noctua NF-A14 FLX.

Fractal Design Silent Series R2 & Noctua NF-A14 FLX

What I like about the Noctua NF-A14 FLX are the rubber hold-fasts that replace the screws holding the fan. That is one more way to keep items in a chassis from vibrating and making noise. Also, the Noctua NF-A14 FLX runs by default at 1200RPM, but it ships with two in-line Low-Noise Adapters (LNA) that can bring the speed down to 1000RPM or 800RPM. Fewer rotations means less noise.

Noctua NF-A14 FLX Details

Putting the motherboard in the Chassis.

IMG_4623

Now we need to modify the holding brackets for the CPU cooler. The Noctua NH-U12DX i4 comes with Narrow ILM brackets that can replace the standard ones. In the picture below, the top bracket is the Narrow ILM holder, while the bottom one still needs to be replaced.

IMG_4621

And a close-up of everything installed in the chassis.

IMG_4629

To hold the SSDs in the chassis, I'm using an Icy Dock MB996SP-6SB, which holds multiple SSDs in a single 5.25″ front slot. As SSDs don't heat up like 2.5″ HDDs, you can choose to cut the power to the fan.

IMG_4611

This Icy Dock MB996SP-6SB gives a nice front look to the chassis.

IMG_4631

How does it look inside… okay, honestly, I have tidied up the SATA cables since my building process.

IMG_4632

 

Here is a picture of my 2nd vSphere host during the build. You can see the cabling is done better here.

IMG_4647

 

The two Mellanox ConnectX-3 VPI 40/56Gbps cards I have were half-height adapters, so I had to adapt the brackets a little so that the 40Gbps NICs were firmly secured in the chassis.

IMG_4658

Here is the Homelab 2014 after the first build.

IMG_4648

 

At the end of August 2014, I got a new core network switch to expand the Homelab: the Cisco SG500XG-8F8T, a 16-port 10Gb Ethernet switch. Eight ports are in RJ45 format, eight are in SFP+ format, and there is one more port for management.

Cisco SG500XG-8F8T

I built a third vSphere host using the same config as the first ones. And here is the current 2014 Homelab.

Homelab 2014

And if you want to hear what the noise is like at home, check out this Youtube movie. I used the dBUltraPro app on the iPad to measure the noise level.

And this page would not be complete if it didn’t have a vCenter cluster screenshot.

Homelab 2014 Cluster

Upgrading LSI HBA 9300-8i via UEFI (Phase 06)

Here is a summary of how to upgrade an LSI SAS3 HBA 9300-8i card to the latest BIOS & Firmware using UEFI mode. This applies to my homelab Supermicro X9SRH-7TF or any other motherboard with a UEFI Built-In EFI Shell. I've found using UEFI mode to be more practical than the old method of an MSDOS bootable USB key, and this is the way more and more Firmware and BIOS updates will be released.

Tom and Duncan showed how to upgrade an LSI 9207-4i4e from within the VMware vSphere 5.5 CLI. In this article I'm going to show you how to use the UEFI Shell for the upgrade.

Preparation.

First you need to head over to the LSI website for your HBA and download a few files to your computer. For the LSI HBA 9300-8i you can jump to the Software Downloads section. You want to download three files, extract them and put the files on a USB key.

The Installer_P4_for_UEFI, which contains the firmware updater sas3flash.efi that works with P06. You can retrieve it using this dropbox link, as it has disappeared from the LSI download site.

The SAS3_UEFI_BSD_P6 which contains the BIOS for the updater (X64SAS3.ROM)

The 9300_8i_Package_P6_IR_IT_firmware_BIOS_for_MSDOS_Windows which contains the SAS9300_8i_IT.bin firmware and the MPTSAS3.ROM bios.

 

lsi9300_8i_download

At this point you put all the extracted files mentioned above on a USB key.

lsi9300_p06_usbdos

 

You reboot your server, and modify the Boot parameters in the BIOS of the server to boot in UEFI Built-In EFI Shell.

UEFI_Build-In_EFI_Shell

Upgrading BIOS & Firmware.

When you reboot you will be dropped into the UEFI shell. You can easily move to the USB key with your programs using the following:

UEFI_booting

And let's move over to the USB key. For me the USB key is mapped as fs1:, but on your system it could also be fs0:.

A quick dir command will list the files on the USB key.

uefi_dir
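In plain text, the navigation comes down to refreshing the filesystem mappings if needed, selecting the USB key's mapping and listing its content (fs1: in my case, possibly fs0: on your system):

  • map -r
  • fs1:
  • dir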

Using the sas3flash.efi -list command (extracted from the Installer_P4_for_UEFI file) we can list the local LSI MPT3SAS HBA adapter, see its SAS address, and see the various versions of the Firmware, the BIOS and the UEFI BSD BIOS.

sas3flash_list

There are three components that we want to patch: the Firmware, the BIOS and the UEFI BSD code.

Here we start by upgrading the UEFI BSD BIOS. Using sas3flash.efi we can target the controller by its SAS address and select the X64SAS3.ROM file found in the SAS3_UEFI_BSD_P6 download. As you see, the -c Controller option allows you to specify which adapter the BIOS is written to; you can enter the number 0 or the SAS address: sas3flash.efi -c 006F94D30 -b X64SAS3.ROM

sas3flash_bios

The next step is to upgrade the Firmware with the SAS9300_8i_IT.bin found in the 9300_8i_Package_P6_IR_IT_firmware_BIOS_for_MSDOS_Windows file: sas3flash.efi -c 006F94D30 -f SAS9300_8i_IT.bin

sas3flash_firmware

The last part is to flash the MPTSAS3.ROM file, which contains the BIOS of the LSI adapter. Here again we use sas3flash.efi -c 006F94D30 -b MPTSAS3.ROM.
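Putting it all together, the complete Phase 06 sequence on my controller (SAS address 006F94D30 in my case; controller number 0 works too) was:

  • sas3flash.efi -c 006F94D30 -b X64SAS3.ROM
  • sas3flash.efi -c 006F94D30 -f SAS9300_8i_IT.bin
  • sas3flash.efi -c 006F94D30 -b MPTSAS3.ROM
  • sas3flash.efi -list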

 

The end result once the Phase 06 firmware and BIOSes have been installed is the following sas3flash.efi -list output:

lsi9300_8i_phase06

 

  • Firmware Version 06.00.00.00
  • BIOS Version 08.13.00.00
  • UEFI BSD Version 07.00.00.00

Now reboot the server, and make sure to change your boot option in the server BIOS back to the USB key or hard drive that contains the vSphere hypervisor.

 

Upgrading the X9SRH-7TF LSI HBA 2308 and LSI HBA 9207-8i

Here is a summary of how to upgrade the LSI HBA 2308 chipset on the Supermicro X9SRH-7TF and an LSI SAS2 HBA 9207-8i card to the latest BIOS & Firmware using UEFI mode. This applies to my homelab Supermicro X9SRH-7TF or any other motherboard with a UEFI Built-In EFI Shell.

I’ve found using UEFI mode to be more practical than the old method of an MSDOS bootable USB key, and this is the way more and more Firmware and BIOS updates will be released.

Tom and Duncan showed you last week how to upgrade an LSI 9207-4i4e from within VMware vSphere 5.5 CLI. In this article I’m going to show you how to use the UEFI Shell for the upgrade.

Preamble.

Since last week, I have been running the PernixData FVP (Flash Virtualization Platform) 1.5 solution on my two ESXi hosts, and I have found that the LSI HBA 2308 on the motherboard has a tendency to drop all the drives and SSDs under heavy I/O load. Last week I upgraded the LSI HBA 2308 from the original Phase 14 firmware to Phase 16, but that didn't solve the issue. Unfortunately I have not yet found a newer Phase 18 firmware or BIOS release for the embedded adapter on the Supermicro support site.

So I dropped another LSI HBA 9207-8i adapter in the box, which is also based on the LSI 2308 chip. And lo and behold, my two LSI adapters seemed to have nearly the exact same Firmware & BIOS.

two_adapters_lsi

Well, if the LSI embedded HBA and the LSI 9207-8i are nearly identical and built on the same chipset… who knows what happens if I burn the 9207-8i Firmware & BIOS onto the embedded controller on the motherboard…

 

Preparation.

First you need to head over to the LSI website for the LSI 9207-8I and download a few files to a local computer. For the LSI HBA 9207-8i you can jump to the Software Downloads section. You want to download three files, extract them and put the files on a USB key.

  • The Installer_P18_for_UEFI which contains the firmware updater (sas2flash.efi)
  • The UEFI_BSD_P18 which contains the BIOS for the updater (X64SAS2.ROM)
  • The 9207_8i_Package_P18_IR_IT_Firmware_BIOS_for_MSDOS_Windows which contains the 9207-8.bin firmware.

lsi_site

At this point you put all those extracted files mentioned above on a USB key.

You reboot your server, and modify the Boot parameters in the BIOS of the server to boot in UEFI Built-In EFI Shell.

UEFI_Build-In_EFI_Shell

When you reboot, also jump into the LSI HBA adapter BIOS to collect the controller's SAS address. It's a 9-digit number you can find on the following screens. Notice that it starts with a 0 on the left.

lsi_sas_address_1

and

lsi_sas_address_2

For my adapters it would be 005A68BB0 for the SAS9207-8I and 0133DBE00 for the embedded SMC2308-IT.

 

Upgrading BIOS & Firmware.

Let's plug the USB key into the server and boot into the UEFI Built-In EFI Shell.

UEFI_booting

And let's move over to the USB key. For me the USB key is mapped as fs1:, but on your system it could also be fs0:. A quick dir command will list the files on the USB key.

usb_dir

Using the sas2flash.efi -listall command (extracted from the Installer_P18_for_UEFI file) we can list all the local LSI HBA adapters and see the various versions of the Firmware & BIOS.

sas2flash_listall_old

We can also get more details about a specific card using the sas2flash.efi -c 0 -list

sas2flash_list_old_9207

and sas2flash.efi -c 1 -list

sas2flash_list_old_2308

Now let's upgrade the BIOS with the X64SAS2.ROM file found in the UEFI_BSD_P18 download, and the Firmware with the 9207-8.bin that we found in the 9207-8i_Package_P18_IR_IT_Firmware_BIOS_for_MSDOS_Windows file.

As you see, the -c Controller option allows you to specify to which adapter the BIOS and Firmware are written.
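For reference, on my system the full sequence looked something like the lines below, with controller 0 being the SAS9207-8i and controller 1 the embedded SMC2308-IT; double-check your own controller numbers with sas2flash.efi -listall before flashing, as they may differ:

  • sas2flash.efi -c 0 -b X64SAS2.ROM
  • sas2flash.efi -c 0 -f 9207-8.bin
  • sas2flash.efi -c 1 -b X64SAS2.ROM
  • sas2flash.efi -c 1 -f 9207-8.bin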

sas2flash_upgrade_0

and

sas2flash_upgrade_1

Let's have a peek again at just one of the LSI adapters. Controller 1, which is the embedded one, now seems to have the board name SAS9207-8i. A bit confusing, but it seems to have worked.

sas2flash_1_list

Using the sas2flash.efi -listall command now shows us the new Firmware and BIOS applied to both cards.

sas2flash_listall_new

Now power off the server so the new BIOS & Firmware are properly loaded, and make sure to change your boot option in the server BIOS back to the USB key or hard drive that contains the vSphere hypervisor.

Both LSI 9207-8i and the Embedded LSI HBA 2308 now show up as LSI2308_1 and LSI2308_2 in the vSphere Client.

esxi_storage_adapters

 

InfiniBand in the lab…

Okay, the original title was going to be ‘InfiniBand in the lab… who can afford 10/40 GbE’. I’ve looked at 10GbE switches in the past and nearly pulled the trigger a few times. Even now that prices of switches like the Netgear ProSafe or Cisco SG500X are going down, the cost of the 10GbE adapters is still high. Having tested VSAN in the lab, I knew I wanted more speed for the replication and access to the data than what I experienced. The kick in the butt for network acceleration that I have used is InfiniBand.

If you search on ebay, you will find lots of very cheap InfiniBand host channel adapters (HCA) and cables. A dual-port 20Gbps adapter will cost you between $40 and $80, and the cables vary between $15 and up to $150 depending on the type of cable. One interesting fact is that you can use InfiniBand in a point-to-point configuration. Each InfiniBand network needs a Subnet Manager; this is a configuration for the network, akin to Fibre Channel zoning.

InfiniBand Data Rates

An InfiniBand link is a serial link operating at one of five data rates: single data rate (SDR), double data rate (DDR), quad data rate (QDR), fourteen data rate (FDR), and enhanced data rate (EDR).

  1. 10 Gbps or Single Data Rate (SDR)
  2. 20 Gbps or Double Data Rate (DDR)
  3. 40 Gbps or Quad Data Rate (QDR)
  4. 56 Gbps or Fourteen Data Rate (FDR)
  5. 100 Gbps or Enhanced Data Rate (EDR)
  6. 2014 will see the announcement of High Data Rate (HDR)
  7. And the roadmap continues with Next Data Rate (NDR)

There is a great InfiniBand entry on Wikipedia that discusses the different signaling rates of InfiniBand in more detail.

InfiniBand Host Channel Adapters

Two weeks ago, I found a great lead and information that pushed me to purchase 6 InfiniBand adapters.

3x Mellanox InfiniBand MHGH28-XTC Dual Port DDR/CX4 (PCIe Gen2) at $50.
3x Mellanox InfiniBand MCX354A-FCBT CX354A Dual Port FDR/QDR (PCIe Gen3) at $300.

InfiniBand Physical Interconnection

Early InfiniBand used copper CX4 cable for SDR and DDR rates with 4x ports — also commonly used to connect SAS (Serial Attached SCSI) HBAs to external (SAS) disk arrays. With SAS, this is known as an SFF-8470 connector, and is referred to as an “InfiniBand-style” Connector.

Cisco 10GB CX4 to CX4 InfiniBand Cable 1.5 m

The latest connectors used with up to QDR and FDR speeds 4x ports are QSFP (Quad SFP) and can be copper or fiber, depending on the length required.

InfiniBand Switch

While you can create a triangle configuration with 3 hosts using dual-port cards, like Vladan Seget (@Vladan) describes in his very interesting article Homelab Storage Network Speed with InfiniBand, I wanted to see how an InfiniBand switch would work. I only invested in an older SilverStorm 9024-CU24-ST2 that supports only 10Gbps SDR ports, but it has 24 of them. Not bad for a $400 switch with 24x 10Gbps ports.

SilverStorm 10Gbps 24-port InfiniBand switch 9024-CU24-ST2

In my configuration each dual-port Mellanox MHGH28-XTC (DDR capable) will connect to my SilverStorm switch at only SDR 10Gbps speed, but I have two ports from each host. I can also increase the number of hosts connected to the switch, and use a single Subnet Manager and a single IPoIB (IP over InfiniBand) network addressing scheme. At the present time, I think this single IPoIB addressing scheme might be what matters for the implementation of VSAN in the lab.

Below you see the IB Port Statistics with three vSphere 5.1 hosts connected (one cable per ESXi host, as I'm waiting on a 2nd batch of CX4 cables).

Silverstorm 3x SDR Links

The surprise I had when connecting to the SilverStorm 9024 switch is that it does not have a built-in Subnet Manager. But thanks to Raphael Schitz (@hypervisor_fr), who, with the work & help of others (William Lam & Stjepan Groš) and great tools (ESX Community Packaging Tool by Andreas Peetz @vFrontDE), has successfully repackaged the OpenFabrics Enterprise Distribution OpenSM (Subnet Manager) so that it can be loaded on vSphere 5.0 and vSphere 5.1. This vSphere installable VIB can be found in his blog article InfiniBand@home votre homelab a 20Gbps (in French).

The link states in the screenshot above went to active once ib-opensm was installed on the vSphere 5.1 hosts, the MTU was set and the partitions.conf configuration file was written. Without Raphael's ib-opensm, my InfiniBand switch would have been alone and would not have passed the IPoIB traffic in my lab.

 

Installing the InfiniBand Adapters in vSphere 5.1

Here is the process I used to install the InfiniBand drivers after adding the Host Channel Adapters. You will need the three files listed below:

  1. VMware’s Mellanox 10Gb Ethernet driver, which supports products based on the Mellanox ConnectX Ethernet adapters
  2. Mellanox InfiniBand OFED Driver for VMware vSphere 5.x
  3. OpenFabrics.org Enterprise Distribution’s OpenSM for VMware vSphere 5.1 packaged by Raphael Schitz

You will need to transfer these three packages to each vSphere 5.x host and install them using the esxcli command line. Before installing the VMware Mellanox ConnectX driver, you need to unzip the file, as it's the offline bundle zip inside that you want to supply to the 'esxcli software vib install' command. I push all the files via SSH to the /tmp folder. I recommend that the host be put in maintenance mode, as you will need to reboot after the drivers are installed.

esxcli software vib install

The commands are

  • unzip mlx4_en-mlnx-1.6.1.2-471530.zip
  • esxcli software vib install -d /tmp/mlx4_en-mlnx-1.6.1.2-offline_bundle-471530.zip --no-sig-check
  • esxcli software vib install -d /tmp/MLNX-OFED-ESX-1.8.1.0.zip --no-sig-check
  • esxcli software vib install -v /tmp/ib-opensm-3.3.15.x86_64.vib --no-sig-check

Careful with the ib-opensm: the esxcli -d option becomes -v for the vib file.

At this point, you will reboot the host. Once the host comes back up, there are two more things you need to do: set the MTU to 4K (4092) for the mlx4_core module, and configure OpenSM for each adapter with the partitions.conf file.

The partitions.conf file is a simple one line file that contains the following config.

Default=0x7fff,ipoib,mtu=5:ALL=full;

esxcli set IB mtu and copy partitions.conf

The commands are

  • esxcli system module parameters set -m mlx4_core -p mtu_4k=1
  • cp partitions.conf /scratch/opensm/adapter_1_hca/
  • cp partitions.conf /scratch/opensm/adapter_2_hca/
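After the reboot, a quick check from the ESXi shell should confirm that the mtu_4k parameter is set on the mlx4_core module:

  • esxcli system module parameters list -m mlx4_core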

At this point you will be able to configure the Mellanox Adapters in the vSphere Web Client (ConnectX for the MHGH28-XTC)

ESXi Network Adapter ConnectX

The vSwitch view is as follows.

vSwitch1 Dual vmnic_ib

 

Configuring the Mellanox adapter in the vSphere Client (ConnectX-3 for the MCX354A-FCBT)

ESXi Network Adapter ConnectX3

I’m still waiting on the delivery of some QSFP cables for the ConnectX-3 adapters. This config will be done in a triangular design until I find a QDR switch at a reasonable cost.

This article wouldn’t be complete without a benchmark. Here is a screenshot I quickly took of the vCenter Server Appliance, which I bumped to 4 vCPU and 22GB of RAM, being vMotioned between two hosts with SDR (10Gbps) connectivity.

vCSA 22GB vMotion at SDR speed

 

This is where I’m going to stop for now.  Hope you enjoyed it.

 

 

Expanding the lab and network reflections

Before I start on a blog post about how I implemented InfiniBand in the lab, I wanted to give a quick backstory on my lab, which is located at the office. I do have a small homelab running VSAN, but this is my larger lab.

This is a quick summary of my recent lab adventures. The difference between the lab and the homelab is its location. I'm privileged that the company I work for allows me to use 12U in the company datacenter. They provide the electricity and the cooling; the rest of what happens inside those 12U is mine. The only promise I had to make is that I would not run external commercial services on this infrastructure.

Early last year, I purchased a set of Cisco UCS C-Series M2 (Westmere processor) servers when Cisco announced the newer M3 with Sandy Bridge processors, at a real bargain (to me at least).

I had gotten three Cisco UCS C200 M2 with a single Xeon 5649 (6 cores @2.5GHz) and 4GB, and three Cisco UCS C210 M2 with a single Xeon 5649. At that point I purchased some Kingston 8GB DIMMs to increase each host to 48GB (6x 8GB), and the last one got 6x 4GB DIMMs.

It took me quite a few months to pay for all this infrastructure.

Office Lab with Cisco UCS C Series

This summer, with the release of the next set of Intel Xeon Ivy Bridge processors (E5-2600v2), the Westmere series of processors is starting to fade from the price lists. Yet at the same time, some large social networking companies are shedding equipment. Through this I was able to find a set of 6x Xeon L5639 (6 cores @2.1GHz) on ebay, and I have just finished adding them to the lab. I don't really need the additional CPU resources, but I do want the capability to expand the memory of each server past the original 6 DIMMs I purchased.

The Lab is composed of two Clusters, one with the C200 M2 with Dual Xeon 5649.

Cluster 1

and one cluster with the C210 M2 with Dual Xeon L5639.

Cluster 2

The clusters are really empty now, as I have broken down the vCloud Director infrastructure that was available to my colleagues while I wait for the very imminent release of vSphere 5.5.

The network is done with two Cisco SG300-28 switches with a LAG Trunk between them.

Office Lab back

For a long time, I have been searching for a faster backbone between these two Cisco SG300-28 switches. Prices on 10GbE switches have come down, and some very interesting contenders are the Cisco SG500X series with 4 SFP+ ports, or the Netgear ProSafe XS708E or XS712T switches. While these switches are just about affordable for a privately sustained lab, the cost of the adapters would make it expensive. I've tried to find an older 10GbE switch, or tried to coax some suppliers into handing over their old Nexus 5010 switches, but without much success. The revelation for an affordable and fast network backbone came from InfiniBand. Like others, I've known about InfiniBand for years, and I've seen my share in datacenters left and right (HPC clusters, Oracle Exadata racks). But only this summer did I see a French blogger, Raphael Schitz (@hypervisor_fr), write what we all wanted to have… InfiniBand@home votre homelab a 20Gbps (in French). Vladan Seget (@Vladan) has followed up on the topic and also has a great article, Homelab Storage Network Speed with InfiniBand. Three weeks ago, I took the plunge and ordered my own InfiniBand interfaces, InfiniBand cables and, to try my hand at it, even an InfiniBand switch. Follow me in the next article to see me build a cheap yet fast network backbone for my lab.

Having had the opportunity to test VSAN in the homelab, I've noticed that once you are running a dozen virtual machines, you really want to migrate from the gigabit network to the recommended VSAN network speed of 10G. If you just plan to validate VSAN in the homelab, gigabit is fine, but if you plan to run the homelab on VSAN, you will quickly find things sluggish.

 

 

 

2013 Homelab refresh

Preamble

It’s now 2013, and it’s time to have a peek at my homelab refresh for this year.

 

Background

In the past three years, I’ve run a very light homelab with VMware ESXi. I mainly used my workstation (Supermicro X8DTH-6F) with dual Xeon 5520 @2.26GHz (8 cores) and 72GB of RAM to run most of the virtual machines and for testing within VMware Workstation, and only ran domain controllers and one proxy VM on a small ESXi machine, a Shuttle XG41. This gives a lot of flexibility, running nearly all the virtual machines on a large, beefed-up workstation. There are quite a few posts on this topic on various vExpert websites (I highly recommend Eric Sloof’s Super-Workstation).

I sometimes play games (I'm married to a gamer), and when I do I have to ensure my virtual machines are powered down within VMware Workstation, as my system could crash (and has crashed) during games. Having corrupted VMs is no fun.

 

Requirements

What I want for 2013 in the homelab is a flexible environment composed of a few quiet ESXi hosts, with my larger workstation being able to add new loads or test specific VM configurations. For this I need an infrastructure that is small, quiet and stable. Here are the requirements for my 2013 homelab infrastructure:

  1. Wife Acceptance Factor (WAF)
  2. Small
  3. Quiet
  4. Power Efficient

Having purchased a flat, I don't have a technical room (nothing like my 2006 computer room) or a basement. So having a few ESXi hosts on 24 hours a day requires a high Wife Acceptance Factor. The systems have to be small & quiet. In addition, if they are power efficient, it will make the utility bill easier.

 

Shuttle XH61V

The Shuttle XH61V is a small black desktop based on the Intel H61 chipset. It comes in a 3.5L metal case with very quiet fans. You just need to purchase the Shuttle XH61V, an Intel socket 1155 65W processor, two memory SODIMMs (laptop memory) and local storage. Assembly can be done in less than 30 minutes.

Shuttle XH61V

The Shuttle XH61V comes with support for a bootable mSATA connector, a PCIe x1 slot, and two 2.5″ devices. It also comes with two gigabit network cards; they are Realtek 8168 NICs. These work flawlessly, but they do not support jumbo frames.

Shuttle XH61V Back

For storage, I decided to boot from an mSATA device, keep an Intel SSD as a fast upper-tier of local storage, and use one large hybrid 2.5″ hard disk for main storage. I do have a Synology DS1010+ on the network that is the centralized NFS storage, but I want some fast local storage for specific virtual machines. It's still early 2013, so I have not yet upgraded my older Synology or created a new powerful & quiet Nexenta Community Edition home server. In the next image you can see that three Shuttle XH61V take less space than a Synology DS1010+.

Three Shuttle XH61V with Synology DS1010+

VMware ESXi installation

Installing VMware ESXi is done quickly, as all the device drivers are on the ESXi 5.1 VMware-VMvisor-Installer-5.1.0-799733.x86_64.iso install cdrom.

ESXi 5.1 on XH61V

Here is the Hardware Status for the Shuttle XH61V

ESXi XH61V Hardware Status

Here is an updated screenshot of my vSphere 5.1 homelab cluster.

Management Cluster

 

Bill of Materials (BOM)

Here is my updated bill of materials (BOM) for my ESXi nodes.

  • Shuttle XH61V
  • Intel Core i7-3770S CPU @3.1GHz
  • Two Kingston 8GB DDR3 SO-DIMM KVR1333D3S9/8G
  • Kingston 16GB USB 3.0 Key to boot ESXi (Change BIOS as you cannot boot a USB key in USB3 mode)
  • Local Storage Intel SSD 525 120GB
  • Local Storage Intel SSD 520 240GB
  • Local Storage Seagate Momentus XT 750GB

Planned upgrade: I hope to get new Intel SSD 525 mSATA boot devices to replace the older Kingston SSDnow when they become available.

 

Performance & Efficiency

In my bill of materials, I selected the most powerful Intel Core i7 processor that I could fit in the Shuttle XH61V, because I'm running virtual appliances and virtual machines like vCenter Operations Manager, SQL databases and Splunk. There are some less expensive Core i3 (3M cache), Core i5 (6M cache) or Core i7 (8M cache) processors that would also work great.

What is impressive is that the Shuttle XH61V comes with a 90W power adapter. We are far from the 300W mini-boxes/XPCs or even the HP MicroServer with its 150W power adapter. Only the Intel NUC comes lower, with a 65W power adapter and a single gigabit network port (@AlexGalbraith has a great series of posts on running ESXi on his Intel NUC).

Just for info, the Intel Core i7-3770S has a cpubenchmark.net score of 9312, which is really good for a small box that uses 90W.

The Shuttle XH61V is also very quiet... it’s barely a few decibels above the noise of a very quiet room. To tell you the truth… the WAF is really working, as my wife is now sleeping with two running XH61V less than 2 meters away. And she does not notice them… 🙂

 

Pricing

The pricing for a Shuttle XH61V with 16GB memory and a USB boot device (16GB Kingston USB 3.0) can be kept to about $350 on newegg. What will increase the price is the performance of the LGA 1155 socket 65W processor (from a Core i3-2130 at $130 to a Core i7-3770S at $300) and what additional local storage you want to put in.

vSphere 5.1 Cluster XH61V

The sizing of the homelab in early 2013 is a far cry from the end of 2006, when I moved out of my first flat and had a dedicated computer room.

Update 18/03/2013. DirectPath I/O Configuration for Shuttle XH61v BIOS 1.04

XH61v DirectPath I/O Configuration

 

Update 22/03/2013.  mSATA SSD Upgrade

I’ve decided to replace the Intel 525 30GB mSATA SSD that is used for booting ESXi and storing the Host Cache with a larger Intel 525 120GB mSATA SSD. This device will give me more space for the Host Cache and will be used as a small tier for the temp scratch disk of my SQL virtual machine.

The ‘published’ performance figures for the Intel 525 mSATA range are:

Capacity   Interface     Sequential Read/Write (up to)   Random 4KB Read/Write (up to)   Form Factor
30 GB      SATA 6 Gb/s   500 MB/s / 275 MB/s             5,000 / 80,000 IOPS             mSATA
60 GB      SATA 6 Gb/s   550 MB/s / 475 MB/s             15,000 / 80,000 IOPS            mSATA
120 GB     SATA 6 Gb/s   550 MB/s / 500 MB/s             25,000 / 80,000 IOPS            mSATA
180 GB     SATA 6 Gb/s   550 MB/s / 520 MB/s             50,000 / 80,000 IOPS            mSATA
240 GB     SATA 6 Gb/s   550 MB/s / 520 MB/s             50,000 / 80,000 IOPS            mSATA

 

Sager 9262 & NVIDIA Quadro FX3700m & NVIDIA Linux Binary Driver performance issue.

Okay, this is probably not limited to the Sager 9262 and the Quadro FX3700m, but this is the only platform I have right now where I can identify and reproduce the problem. I hope some other Sager 9262 and/or 9800M GTX users running Linux can also validate this issue.

The problem stems from the PowerMizer feature, which allows the graphics card to scale its performance. The Quadro FX3700M (1024MB, 550MHz/799MHz) that shipped in my Sager 9262 last week has four performance levels with scaling NV clock and memory clock:

  • Level 0: 200MHz & 100MHz
  • Level 1: 275MHz & 301MHz
  • Level 2: 383MHz & 301MHz
  • Level 3: 550MHz & 799MHz

Unfortunately, with the latest 177.82 or 180.11 (beta) Linux (x86-64) binary drivers, I cannot get the card running above Performance Level 1. I'm actually using a script found on the nvnews forums to artificially keep the graphics card running at a higher performance level. There are also multiple posts on the Phoronix forums about this issue.
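For reference, the workaround that kept coming up in those forum threads was to hint the driver towards the highest PowerMizer level through its registry keys in xorg.conf; this is only a sketch of that approach, the Identifier is whatever your Device section already uses, and the exact values and behaviour varied between driver releases:

Section "Device"
    Identifier "Quadro FX3700M"
    Driver     "nvidia"
    # Ask the driver to prefer the maximum performance level (PowerMizer)
    Option     "RegistryDwords" "PerfLevelSrc=0x2222"
EndSection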

Here is a screenshot of my nvidia-settings and the performance level of the FX3700m while running OpenGL benchmarks. As you see it’s stuck at Performance Level 1.

So right now, due to the nvidia binary drivers not properly supporting the PowerMizer feature, I'm only able to use less than 50% of the performance of my graphics card. This is a very expensive setback for someone who invested in an expensive nvidia 9800M GTX or FX3700M graphics card.

I’m lucky that I’m not rendering on this laptop, and that I can wait for nvidia to get their act together and supply proper PowerMizer drivers.

Erik

Sager 9262 arrived.

The Sager 9262 arrived in the office; I will pick it up later today to give it a go and check that the hardware is in good condition and the screen does not show dead pixels.

It only took XoticPC and Sager 9 days to go from purchase and wire transfer, to building to spec, testing and validating the screen, and shipping the laptop by UPS from the United States to Switzerland.

And the first CD-ROM I booted on this brand new system is the memtest86+ 2.10 tool. I'm very impressed by the speed of the main memory.

L1 Cache : 32K 39806 MB/s
L2 Cache : 6144K 18472 MB/s
L3 Cache : none
Memory : 8190M 3632 MB/s

In comparison, my D900K runs at 1440 MB/s for its main memory.

Ordered a new laptop.

After 3 years of hard work, I decided to replace my trusty Sager 9750 (Clevo D900K) with a new Sager 9262 (Clevo D901C) laptop. This time I took a seriously over-powered system, which should give me some headroom for running multiple virtual machines. The laptop comes with an Intel Core 2 Quad Q9550 processor (4 cores at 2.83GHz) and 8GB of memory. I hope to receive this laptop before the holiday season. I ordered my Sager 9262 from XoticPC in the United States.

It should be noted that these laptops are designed and built by Clevo in Taiwan, then rebadged by distributors such as Sager, Alienware and others.