Intel Xeon D-1518 (X10SDV-4C-7TP4F) ESXi & Storage server build notes

These are the build notes for my latest server. This server is based around the Supermicro X10SDV-4C-7TP4F motherboard that I already described in my previous article (Bill-of-Materials). For the case I selected the Fractal Design Node 804, a small square chassis described as being able to handle up to 10x 3.5″ disks.

Fractal Design Node 804

Here is the side view where the motherboard is fitted. The chassis supports Mini-ITX, Micro-ATX and the Flex ATX form factor of the Supermicro motherboard. Two 3.5″ hard drives or 2.5″ SSDs can be fitted on the bottom plate.

x10sdv_node804--2

The right section of the chassis contains the space for eight 3.5″ hard drives, fixed in two sliding frames at the top.

x10sdv_node804--3

Let's compare the size of the chassis, the power supply unit and the motherboard in the next photo.

Fractal Design Node 804, Supermicro X10SDV-4C-7TP4F and Corsair RM750i

Fractal Design Node 804, Supermicro X10SDV-4C-7TP4F and Corsair RM750i

When you zoom into the picture above, you can see three red squares on the bottom right of the motherboard. Before you insert the motherboard in the chassis, you might want to make sure you have moved the mSATA pin from the position in the photo to the second position, otherwise you will not be able to fit the mSATA SSD once the board is in the chassis. You need to unscrew the holding grommet from below the motherboard. People who purchased the Supermicro E300-8D will have a nasty surprise. The red square in the center of the motherboard marks the M.2 holding grommet, set for sticks in the 2280 position. If you have a 22110 M.2 storage stick, you had better move that holding grommet as well.

Here is a closer view of the Supermicro X10SDV-4C-7TP4F motherboard with the two Intel X552 SFP+ connectors and the 16 SAS2 ports managed by the onboard LSI 2116 SAS chipset.

X10SDV-4C-7TP4F

In the next picture you see the mSATA holding grommet moved to accommodate the Samsung 850 EVO Basic 1TB mSATA SSD, and the Samsung SM951 512GB NVMe SSD in the M.2 socket.

X10SDV-4C-7TP4F

In the next picture we see the size of the motherboard in the chassis. At the top left, you will see a nice feature of the Fractal Design Node 804: a switch that allows you to change the voltage of three fans. This switch gets its electricity through a SATA power connector. It is on this fan switch that I was able to attach a Y-power cable and drive the Noctua A6x25 PWM CPU fan, which fits perfectly on top of the CPU heatsink. This brought the CPU heat buildup during the Memtest86+ run down from 104°C to 54°C.

X10SDV in Node 804

I used two spare Noctua fan-to-heatsink mounting clips to hold the Noctua A6x25 PWM on the heatsink, and a zip tie to hold those two clips together (sorry, I'm not sure there is a proper name for those metal fixing brackets). Because the Noctua gets its electricity from the chassis and not the motherboard, the Supermicro BIOS does not attempt to increase or decrease the fan's RPM. This allows me to keep a steady airflow on the heatsink.

Noctua A6x25 PWM fixed on heatsink

Noctua A6x25 PWM fixed on heatsink

I have fitted my server with a single 4TB SAS drive. To do this I used an LSI SAS cable (L5-00222-00), shown here.

lsi_sas_l5-00222-00_cable

This picture shows the 4TB SAS drive in the leftmost storage frame. Due to the length of the adapter, the SAS cables would be blocked by the power supply unit, so I will only be able to expand to 4x 3.5″ SAS disks in this chassis. Using SATA drives, the chassis can take up to 10 disks.

Node 804 Storage and PSU side

View from the back once all is assembled and powered up.

x10sdv_node804--12

This server, with an Intel Xeon D-1518 and 128GB of memory, is part of my HomeDC Secondary Site.

ESXi60P03

The last picture shows my HomeDC Secondary Site. The Fractal Design Node 804 is sitting next to a Fractal Design Define R5. The power consumption comes in at 68 Watts for an X10SDV-4C-7TP4F with two 10GbE SFP+ passive copper connections, two SSDs and a single 4TB SAS drive.

HomeDC Secondary Site

HomeDC Secondary Site

Supermicro X10SDV-4C-7TP4F server Bill-of-Materials

Another new host has joined the Home Datacenter (#HomeDC). This is the first low-power Intel Xeon D-1500 server I have gotten my hands on. There have been some great install guides about other Supermicro X10SDV motherboards on many sites, and I would recommend that you head over to Paul Braren's (@tinkertry) TinkerTry site for a lot of great content. Supermicro has now also released two small servers, the E200-8D and the E300-8D. The motherboard I selected for my new host closely matches the one in the Supermicro E300-8D, described on TinkerTry.

I was looking for a motherboard with great storage capabilities, 10G connectivity and low power consumption. As my Home Datacenter (#HomeDC) grows, I find myself using more and more 10G SFP+ connectivity. 10G SFP+ consumes fewer watts in the chipset, creating less heat inside the servers, and it allows me to use cheaper network switches. 10G Ethernet over RJ45 carries a price premium, even if Category 6A cables are cheaper than passive copper SFP+ cables.

I selected the Supermicro X10SDV-4C-7TP4F motherboard: it has a 7-year product life, supports two SFP+ 10G connections, and comes with an LSI/Avago 2116 SAS/SATA chipset providing a total of 16 SAS ports, more than enough for a storage server. It also comes with an M.2 socket and an mSATA socket. The Intel Xeon D-1518 is a quad-core processor running at 2.2GHz. All in all, a very good set of specifications for such a small Flex ATX motherboard.

X10SDV-7TP4F_spec

The X10SDV series of motherboards comes with the Intel X552 dual 10G network controller. In case you experience network connectivity issues, it is important to make sure your motherboard has the proper firmware. When I received my motherboard with the default BIOS 1.0, it gave me a serious scare: I was unable to get the two 10G links up with my Cisco SG500X and SG500XG switches. I had to upgrade to version 1.0a and clear the CMOS to get them to work.
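If you want to double-check what the host reports for the X552 ports before opening the box, the ESXi shell can show the driver and firmware details (a minimal sketch; vmnic0 is an example name, your numbering will differ):

```
# list all physical NICs with driver and link state
esxcli network nic list

# driver and firmware details for one of the X552 SFP+ ports
esxcli network nic get -n vmnic0
```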

I've been a long-time user of Fractal Design cases, and I wanted something small for the Flex ATX board, yet with lots of space for adding disks. So I selected the Fractal Design Node 804 cube chassis, which supports Mini-ITX, Micro-ATX and Flex ATX boards like the Supermicro X10SDV series. The Node 804 is capable of holding up to 10x 3.5″ disks. The case comes with three fans and a fan selector that is powered by a SATA power connector, so the fans can run independently of the motherboard connectors. This is very useful when you add a small Noctua NF-A6x25 PWM fan on top of the CPU heatsink: it does not spin up and down at the whim of the Supermicro motherboard. I also liked the square look of the chassis.

fd-ca-node-804

For my power supply, I decided to change from my usual Enermax to a Corsair RM750i. I wanted a power supply capable of driving a lot of disks if I decided to increase their number, and one that would be quiet under low power consumption. As you see below, there are plenty of expansion connectors, and the power supply stays fanless until it reaches 45% of its load. I added a Seagate Enterprise Capacity 4TB SAS drive to the chassis, and when it's running vSphere with some quiet VMs, the system only consumes 69 Watts.

RMi_750_04RM750i_NOISE_WEB_121714

The Supermicro X10SDV-4C-7TP4F comes with the following expansion options for storage.

PCI-Express
  • 2 PCI-E 3.0 x8 slots
M.2
  • Interface: PCI-E 3.0 x4
  • Form Factor: M Key 2242/2280/22110
  • Support SATA devices
Mini PCI-E
  • Interface: PCI-E 2.0 x1
  • Support mSATA

In the M.2 socket I added a Samsung SM951 512GB NVMe solid-state disk, and in the Mini PCI-E socket I added the Samsung 850 EVO 1TB Basic solid-state disk. The mSATA drive is used as the boot device and provides a large datastore to keep VMs local to the host. The Samsung SM951 512GB NVMe SSD can be used for the caching tier of a VSAN design, or as an RFcache device when running ScaleIO.
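To confirm that both SSDs are visible to the hypervisor, a quick check from the ESXi shell along these lines should do (a sketch; the grep just trims the output to the model lines):

```
# the NVMe controller and the AHCI/mSATA port should both appear here
esxcli storage core adapter list

# list the attached devices and filter for their model names
esxcli storage core device list | grep -i "Model"
```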

Another up-front warning: before you place this motherboard in a chassis, you need to make sure to unscrew the mSATA holding bolt and move it to the right position, so you can use a standard-size mSATA SSD. There is a tiny screw on the top and on the bottom of the mSATA holding bolt.

The Supermicro X10SDV-4C-7TP4F CPU cooling is done with a passive CPU heatsink, but during the initial memory testing I found that the IPMI CPU sensor was showing a critical heat warning during a Memtest86+ run. I decided to add a Noctua A6x25 PWM fan on top of the Xeon D-1518 processor. The fit is perfect, and when this fan is connected to the chassis fan subsystem (see the top right section in the photo at the bottom), the critical heat issues disappeared.

So let's recap the Bill-of-Materials (BoM) for this server the way I have configured it. The pricing has been assembled from amazon/newegg in the US, amazon/azerty.nl for the Euro pricing, and Brack.ch for Switzerland. I have left out the cost of the HDDs, as Your Mileage May Vary.

X10SDV Cost

I will write a second post with the build notes and pictures, but here is a teaser.

Node804_X10SDV

 

Notes & Photos of the Homelab 2014 build

I've had a few questions about my Homelab 2014 upgrade hardware and settings, so here is a follow-up. This is just a photo collection of the various stages of the build. Compared to my previous homelabs, which were designed for a small footprint, this one isn't small; this version has been built to be a quiet environment.

I started my build with only two hosts. For the cases I used the very nice Fractal Design Define R4. These are ATX chassis in a sleek black color that can house 8x 3.5″ disks and support a lot of extra fans. Some of those you can see on the right side; they are Noctua NF-A14 FLX. For the power supply I picked up some Enermax Revolution X't PSUs.

IMG_4584

For the CPU I went with the Intel Xeon E5-1650v2 (6 cores @3.5GHz) and a large Noctua NH-U12DX i4 cooler. The special thing about the NH-U12DX i4 is that it comes with mounting brackets for the Narrow ILM socket that you find on the Supermicro X9SRH-7TF motherboard.

IMG_4591

The two Supermicro X9SRH-7TF motherboards and two add-on Intel I350-T2 dual 1Gbps network cards.

IMG_4594

Getting everything ready for the build stage.

In the next photo you will see quite a large assortment of pieces. There are five small yet long-lasting Intel SSD S3700 100GB drives, 8x Seagate Constellation 3TB disks, some LSI HBA adapters like the LSI 9207-8i and LSI 9300-8i, and two Mellanox ConnectX-3 VPI dual 40/56Gbps InfiniBand and Ethernet adapters that I got for a steal (~$320 USD) on eBay last summer.

IMG_4595

You need to remember that if you only have two hosts with 10Gbps or 40Gbps Ethernet, you can build a point-to-point configuration without having to purchase a network switch. These ConnectX-3 VPI adapters are recognized as 40Gbps Ethernet NICs by vSphere 5.5.
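For reference, a point-to-point link like that takes only a handful of commands per host (a sketch with example names and addresses; vmnic2 stands in for the 40GbE port, and the second host gets its own IP on the same subnet):

```
# dedicated vSwitch backed by the 40GbE port
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic2

# port group plus a vmkernel interface with a static IP, e.g. for vMotion
esxcli network vswitch standard portgroup add -v vSwitch1 -p PtP-40G
esxcli network ip interface add -i vmk1 -p PtP-40G
esxcli network ip interface ipv4 set -i vmk1 -t static -I 10.40.0.1 -N 255.255.255.0
```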

Let's have a closer look at the Fractal Design Define R4 chassis.

Fractal Design Define R4 Front

Fractal Design Define R4 Front

The Fractal Design Define R4 has two 14cm fans, one in the front and one in the back. I'm replacing the back one with a Noctua NF-A14 FLX, and I put another one in the top of the chassis to extract the warm air out the top.

The inside of the chassis has a nice feel: easy access to the various elements, space for 8x 3.5″ disks in the front, and you can route the power cables along the other side of the chassis.

Fractal Design Define R4 Inside

Fractal Design Define R4 Inside

A few years ago, I bought a very nice yet expensive Tyan dual-processor motherboard, and I installed it with all the other components before getting ready to put the CPU on the motherboard. It had bent pins under the CPU cover, something for which motherboard manufacturers and distributors give no warranty. That was an expensive lesson, and it was the end of my Tyan allegiance. Since then I have moved to Supermicro.

LGA2011 socket close-up. Always check the pins for damage.

LGA2011 socket close-up. Always check the pins for damage.

Here is a close-up of the Supermicro X9SRH-7TF.

Supermicro X9SRH-7TF

Supermicro X9SRH-7TF

I now always put the CPU on the motherboard before the motherboard goes in the chassis. Note in the next picture the Narrow ILM socket for the cooler.

Intel Xeon E5-1650v2 and Narrow ILM

Intel Xeon E5-1650v2 and Narrow ILM

Here is the difference between the Fractal Design Silent Series R2 fan and the Noctua NF-A14 FLX.

Fractal Design Silent Series R2 & Noctua NF-A14 FLX

Fractal Design Silent Series R2 & Noctua NF-A14 FLX

What I like about the Noctua NF-A14 FLX are the rubber hold-fasts that replace the screws holding the fan; one more way to keep items in a chassis from vibrating and making noise. The Noctua NF-A14 FLX also runs by default at 1200RPM, but two in-line Low-Noise Adapters (LNA) are included that can bring the speed down to 1000RPM or 800RPM. Fewer rotations equals less noise.

Noctua NF-A14 FLX Details

Noctua NF-A14 FLX Details

Putting the motherboard in the Chassis.

IMG_4623

Now we need to modify the holding brackets for the CPU cooler. The Noctua NH-U12DX i4 comes with Narrow ILM brackets that can replace the standard ones on the cooler. In the picture below, the top bracket is the Narrow ILM holder, while the bottom one still needs to be replaced.

IMG_4621

And a close up of everything installed in the Chassis.

IMG_4629

To hold the SSDs in the chassis, I'm using an Icy Dock MB996SP-6SB, which holds multiple SSDs in a single 5.25″ front bay. As SSDs don't heat up like 2.5″ HDDs, you can choose to cut the power to the fan.

IMG_4611

This Icy Dock MB996SP-6SB gives a nice front look to the chassis.

IMG_4631

How does it look inside… okay, to be honest, I have tied up the SATA cables since the build.

IMG_4632

 

Here is a picture of my second vSphere host during the build. As you can see, the cabling is done better here.

IMG_4647

 

The two Mellanox ConnectX-3 VPI 40/56Gbps cards I have were half-height adapters, so I had to adapt the brackets a little so that the 40Gbps NICs were firmly secured in the chassis.

IMG_4658

Here is the Homelab 2014 after the first build.

IMG_4648

 

At the end of August 2014, I got a new core network switch to expand the homelab: the Cisco SG500XG-8F8T, a 16-port 10Gb Ethernet switch. Eight ports are RJ45, eight are SFP+, plus one management port.

Cisco SG500XG-8F8T

Cisco SG500XG-8F8T

I built a third vSphere host using the same configuration as the first two. And here is the current 2014 homelab.

Homelab 2014

Homelab 2014

And if you want to hear what the noise is like at home, check out this YouTube video. I used the dBUltraPro app on the iPad to measure the noise level.

And this page would not be complete if it didn’t have a vCenter cluster screenshot.

Homelab 2014 Cluster

Upgrading the X9SRH-7TF LSI HBA 2308 and LSI HBA 9207-8i

Here is a summary of how to upgrade the LSI HBA 2308 chipset on the Supermicro X9SRH-7TF and an LSI SAS2 HBA 9207-8i card to the latest BIOS & Firmware using UEFI. This is applicable to my homelab Supermicro X9SRH-7TF or any other motherboard with a UEFI Built-In EFI Shell.

I've found using the UEFI mode to be more practical than the old method of an MS-DOS bootable USB key, and this is the way more and more firmware and BIOS updates will be released.

Tom and Duncan showed you last week how to upgrade an LSI 9207-4i4e from within VMware vSphere 5.5 CLI. In this article I’m going to show you how to use the UEFI Shell for the upgrade.

Preamble.

Since last week, I have been running the PernixData FVP (Flash Virtualization Platform) 1.5 solution on my two ESXi hosts, and I have found that the LSI HBA 2308 on the motherboard had a tendency to drop all the drives and SSDs under heavy I/O load. Last week I upgraded the LSI HBA 2308 from the original Phase 14 firmware to Phase 16, but that didn't solve the issue. Unfortunately, I have not yet found a newer Phase 18 firmware or BIOS release for the embedded adapter on the Supermicro support site.

So I dropped another LSI HBA 9207-8i adapter in the box, which is also based on the LSI 2308 chip. And lo and behold, my two LSI adapters seemed to have nearly the exact same Firmware & BIOS.

two_adapters_lsi

Well, if the embedded LSI HBA and the LSI 9207-8i are nearly identical and share the same chipset… who knows what happens if I burn the 9207-8i Firmware & BIOS onto the motherboard's adapter…

 

Preparation.

First you need to head over to the LSI website for the LSI 9207-8i and download a few files to a local computer. For the LSI HBA 9207-8i you can jump to the Software Downloads section. You want to download three files, extract them, and put the files on a USB key.

  • The Installer_P18_for_UEFI which contains the firmware updater (sas2flash.efi)
  • The UEFI_BSD_P18 which contains the BIOS for the updater (X64SAS2.ROM)
  • The 9207_8i_Package_P18_IR_IT_Firmware_BIOS_for_MSDOS_Windows which contains the 9207-8.bin firmware.

lsi_site

At this point you put all those extracted files mentioned above on a USB key.

You reboot your server, and modify the boot parameters in the BIOS of the server to boot into the UEFI Built-In EFI Shell.

UEFI_Build-In_EFI_Shell

When you reboot, also jump into the LSI HBA adapter BIOS to collect the controllers' SAS addresses. It's a 9-digit number you can find on the following screens. Notice that it starts with a 0.

lsi_sas_address_1

and

lsi_sas_address_2

For my adapters, it would be 005A68BB0 for the SAS9207-8i and 0133DBE00 for the embedded SMC2308-IT.
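The reason for collecting these: a flash, and especially a cross-flash like the one below, can clear the controller's SAS address, and sas2flash can re-program it afterwards. The full address is LSI's 500605B prefix followed by the 9 digits noted above (a sketch using my first adapter's digits):

```
# advanced mode (-o) is required to program a SAS address
sas2flash.efi -o -sasadd 500605B005A68BB0
```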

 

Upgrading BIOS & Firmware.

Let's plug the USB key into the server and boot into the UEFI Built-In EFI Shell.

UEFI_booting

And let's move over to the USB key. For me the USB key is mapped as fs1:, but you could also have an fs0:. A quick dir command will list the files on the USB key.

usb_dir

Using the sas2flash.efi -listall command (sas2flash.efi is extracted from the Installer_P18_for_UEFI file), we can list all the local LSI HBA adapters and see the various Firmware & BIOS versions.

sas2flash_listall_old

We can also get more details about a specific card using the sas2flash.efi -c 0 -list

sas2flash_list_old_9207

and sas2flash.efi -c 1 -list

sas2flash_list_old_2308

Now let's upgrade the BIOS with the X64SAS2.ROM file found in the UEFI_BSD_P18 download, and the firmware with the 9207-8.bin file found in the 9207-8i_Package_P18_IR_IT_Firmware_BIOS_for_MSDOS_Windows download.

As you see, the -c option allows you to specify which adapter the BIOS and firmware are flashed to.

sas2flash_upgrade_0

and

sas2flash_upgrade_1
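Put together, the whole session in the EFI shell looks roughly like this (a sketch assuming the USB key is mapped as fs1: and both controllers receive the P18 files; -o enables the advanced mode that a cross-flash of the embedded adapter may require):

```
fs1:                       # switch to the USB key
dir                        # check that the extracted files are present

sas2flash.efi -listall     # inventory the adapters and note the controller numbers

# flash the BIOS (-b) and firmware (-f) per controller (-c)
sas2flash.efi -o -c 0 -b X64SAS2.ROM -f 9207-8.bin
sas2flash.efi -o -c 1 -b X64SAS2.ROM -f 9207-8.bin

sas2flash.efi -listall     # confirm the new Firmware & BIOS versions
```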

Let's have a peek again at just one of the LSI adapters: controller 1, the embedded one, now reports the board name SAS9207-8i. A bit confusing, but it seems to have worked.

sas2flash_1_list

Using the sas2flash.efi -listall command now shows us the new Firmware and BIOS applied to both cards.

sas2flash_listall_new

Now power off the server so the new BIOS & Firmware are properly loaded, and make sure to change the boot option in the server BIOS back to the USB key or hard drive that contains the vSphere hypervisor.

Both the LSI 9207-8i and the embedded LSI HBA 2308 now show up as LSI2308_1 and LSI2308_2 in the vSphere Client.

esxi_storage_adapters
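If you prefer the command line over the vSphere Client, the same check can be done from the ESXi shell (a sketch):

```
# both LSI 2308 controllers should be listed with the mpt2sas driver
esxcli storage core adapter list
```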

 

Homelab 2014 upgrade

I've been looking for a while for a new, more powerful homelab (for home) that scales past the limits I currently have. I had great success last year with the Supermicro X9SRL-F motherboard for the Home NAS (running NexentaStor 3.1.5), so I knew I would love the Supermicro X9 single-LGA2011 series. Thanks to the Intel C600 series chipset, you can break the 32GB barrier you find on most motherboards (the X79 chipset otherwise allows up to 64GB).

As time passes, you see product solutions coming out (vCOPS, Horizon View, vCAC, DeepSecurity, ProtectV, Veeam VBR, Zerto) with memory requirements just exploding; you need more and more memory. I'm done with homelabs that have to be replaced just because you can't raise the memory ceiling. So bye-bye to the current cluster of four Shuttle XH61Vs with 16GB.

With the Supermicro X9SRH-7TF (link) you can easily go to 128GB (8x16GB) for now. It's really just a $$$ choice. 256GB (8x32GB) is still out of reach, but that might change in two years.

I attempted to install PernixData FVP 1.5 on my Homelab 2013 Shuttle XH61Vs, but the combination of the motherboard, AHCI and the Realtek R8168 makes for an unstable ESXi 5.5. Sometimes the PernixData FVP Management Server sees the SSD on my host, then it loses it. I did work with PernixData engineers (and Satyam Vaghani), but my homelab is just not stable. Having been invited to the PernixPro program doesn't give me the right to use hours and hours of PernixData engineers' time to solve my homelab issues. This made the choice of my two X9SRH-7TF boxes much easier.

The motherboard choice of the Supermicro X9SRH-7TF (link) is great because of the integrated management, the F in X9SRH-7TF; it's a must these days. Having the dual Intel X540 10GbE network controller on the motherboard will allow me to start using the network with a dual gigabit link, and when I have the budget for a Netgear XS708E or XS712T it will scale to dual 10GBase-T. In the meantime I can also have a single point-to-point 10GbE link between the two X9SRH-7TF boxes for vMotion and the PernixData data synchronization. The third component that comes on the X9SRH-7TF is the integrated LSI storage SAS HBA, the LSI 2308 SAS2 HBA. This will allow me to build a great VSAN cluster once I go from two to three servers at a later date. It is very important to have a good storage adapter for VSAN, and I have been using LSI adapters for a few years and trust them. Purchasing a plain motherboard, then adding a dual X540 10GbE NIC and an LSI HBA, would have cost a lot more than the X9SRH-7TF.

For the CPU, Frank Denneman (@FrankDenneman) and I came to the same conclusion: the Intel Xeon E5-1650 v2 is the perfect balance of core count, cache and speed. Here is another description of the Intel Xeon E5-1650 v2 launch (CPUworld).

For the case, I have gone, just like Frank Denneman's vSphere 5.5 home lab, with the Fractal Design Define R4 (Black). I used a Fractal Design Arc Midi R2 for my Home NAS last summer, and I really liked the case's flexibility, the interior design, and the two SSD slots below the motherboard. I removed the two default Fractal Design Silent Series R2 14cm cooling fans in the case and replaced them with two Noctua NF-A14 FLX fans, which are even quieter and are mounted using rubber holders so they vibrate even less. It's all about having a quiet system: the Home NAS is in the guest room, and people sleep next to it without noticing it. Also, the Define R4 case is just short of 47cm in height, meaning you can lay it down in a 19″ rack if there is such a need/opportunity.

For the CPU cooler, I ordered two Noctua NH-U12DX i4 coolers, which support the Narrow ILM socket. It's a bit bigger than the NH-U9DX i4 that Frank ordered, so we will be able to compare. I burned myself last year with the Narrow ILM socket: I purchased a water cooling solution for the Home NAS and just couldn't fit it on the Narrow ILM socket. That was before I found out the difference between a normal square LGA2011 socket and the Narrow ILM sockets used on some of the Supermicro boards. Here is a great article that explains the differences: Narrow ILM vs Square ILM LGA 2011 Heatsink Differences (ServeTheHome.com).

For the power supply, I invested last year in an Enermax Platimax 750W for the Home NAS. This time the selection is the Enermax Revolution X't 530W, a very efficient 80 Plus Gold PSU which supports ATX 12V v2.4 (it can drop to 0.5W on standby) and uses the same modular connectors as my other power supplies. These smaller ~500W power supplies are very efficient when they run at 20% to 50% load. This should also be a very quiet PSU.

I made some quick calculations yesterday for the power consumption: I expect the maximum power consumed by this new X9SRH-7TF build to be around 180-200W, but it should run at around 100-120W on a normal basis. At normal usage I should hit about 20% of the power supply load, so the efficiency of the PSU should be around 87%, a bit lower than Frank's choice of the Corsair RM550. This is the reason why I try to pick a smaller PSU rather than one of the large 800W or even 1000W units.
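The arithmetic behind that estimate, using the numbers above:

```
normal draw : ~100-120 W
PSU rating  : 530 W
load        : 110 W / 530 W ≈ 21%, right around the 20% point of the curve
efficiency  : ≈ 87% at that load for this 80 Plus Gold unit
```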

xt_530w_efficiency

For the memory, I'm going to reuse what I purchased last year for my Home NAS, so each box will receive 4x16GB Kingston 1600MHz ECC for now.

The SSDs I will use in this rig are the Intel SSD S3700 100GB enterprise SSD and some Samsung 840 Pro 512GB drives. What is crucial for me in the Intel S3700 is that its endurance rating is 10 drive writes per day for 5 years; for the 100GB model, that means it is designed to write 1TB each day. This is very important for solutions like PernixData or VSAN. Just to compare, the latest Intel enthusiast SSD, the SSD 730 240GB that I purchased for my wife's computer, has its endurance rating set to 50GB per day for 5 years (70GB for the 480GB model). The Intel SSD 730, just like its enterprise cousins (S3500 and S3700), comes with enhanced power-loss data protection using power capacitors. The second crucial characteristic of an enterprise SSD is its sustained IOPS rating.
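Spelled out, those endurance ratings compare as follows:

```
Intel S3700 100GB   : 10 drive writes/day × 100 GB = 1 TB/day
                      1 TB/day × 365 × 5 years ≈ 1.8 PB written over its life
Intel SSD 730 240GB : 50 GB/day × 365 × 5 years ≈ 91 TB written over its life
```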

I'm also adding an Intel Ethernet Server Adapter I350-T2 network card for the vSphere console management. I'm used to having a dedicated console management vNIC on my ESXi hosts. These will be configured on the old but trusty standard vSwitch.

Another piece of equipment that I already own and will plug into the new X9SRH-7TF boxes are the Mellanox ConnectX-3 dual FDR 56Gb/s InfiniBand adapters I purchased last year. This will allow me to test and play with a point-to-point 56Gb/s link between the two ESXi hosts. Some interesting possibilities here… I currently don't have a QDR or FDR InfiniBand switch, and these switches are also very noisy, so that is something I will look at in Q3 this year.

I live in Switzerland, so my pricing will be a bit more expensive than what you find in other European countries. I'm purchasing my equipment through a large distributor in Switzerland, Brack.ch. Even if the Supermicro X9SRH-7TF is not on their price list, they are able to order it for me. The price I got for the X9SRH-7TF is 670 Swiss Francs, and the Intel E5-1650v2 comes in at 630 Swiss Francs. As you see, the cost of one of these servers is closing in on the 1800-1900 Euro price range. I realize it's Not Cheap, and it's the reason for my previous article on the increasing costs of a dedicated homelab, the Homelab shift…

Last but not least, in my Homelab 2013 I focused a lot on the Wife Acceptance Factor (WAF): I aimed for Small, Quiet, Efficient. This time, the only part I will not be able to keep is the Small. This design is still a quiet and efficient configuration. Let's hope I won't get into too much trouble with the wife.

I also need to thank Frank Denneman (@FrankDenneman), as we discussed this home lab topic extensively over the past 10 days, fine-tuning some of the choices going into this design. Without his input, my homelab 2014 design might have gone with the Supermicro A1SAM-2750F, a nifty little motherboard with quad gigabit and 64GB memory support, but lacking in CPU performance. Thanks Frank.

2013 Homelab refresh

Preamble

It's now 2013, and it's time to have a peek at my homelab refresh for this year.

 

Background

In the past three years, I've run a very light homelab with VMware ESXi. I mainly used my workstation (Supermicro X8DTH-6F) with dual Xeon 5520s @2.26GHz (8 cores) and 72GB of RAM to run most of the virtual machines for testing within VMware Workstation, and only ran domain controllers and one proxy VM on a small ESXi machine, a Shuttle XG41. This gives a lot of flexibility, running nearly all the virtual machines on a large, beefed-up workstation. There are quite a few posts on this topic on various vExpert websites (I highly recommend Eric Sloof's Super-Workstation).

I sometimes play games (I'm married to a gamer), and when I do, I have to ensure my virtual machines are powered down within VMware Workstation, as my system could crash (and has crashed) during games. Having corrupted VMs is no fun.

 

Requirements

What I want for 2013 in the homelab is a flexible environment composed of a few quiet ESXi hosts, with my larger workstation being able to add new loads or test specific VM configurations. For this I need an infrastructure that is small, quiet and stable. Here are the requirements for my 2013 homelab infrastructure:

  1. Wife Acceptance Factor (WAF)
  2. Small
  3. Quiet
  4. Power Efficient

Having purchased a flat, I don't have a technical room (nothing like my 2006 computer room) or a basement. So having a few ESXi hosts on 24 hours a day requires a high Wife Acceptance Factor: the systems have to be small & quiet. In addition, if they are power efficient, it will make the utility bill easier to bear.

 

Shuttle XH61V

The Shuttle XH61V is a small black desktop based on the Intel H61 chipset. It comes in a 3.5L metal case with very quiet fans. You just need to purchase the Shuttle XH61V, an Intel Socket 1155 65W processor, two memory SODIMMs (laptop memory) and local storage. Assembly can be done in less than 30 minutes.

Shuttle XH61V

Shuttle XH61V

The Shuttle XH61V comes with a bootable mSATA connector, a PCIe x1 slot, room for two 2.5″ devices, and two gigabit network cards. The NICs are Realtek 8168s; they work flawlessly, but they do not support jumbo frames.

Shuttle XH61V Back

Shuttle XH61V Back
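To see both Realtek NICs from the ESXi shell, with the driver they are bound to and the MTU (which stays at 1500 given the missing jumbo frame support), the classic esxcfg tool on ESXi 5.x is enough (a sketch):

```
# list the physical NICs with driver, link speed and MTU
esxcfg-nics -l
```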

For storage, I decided to boot from an mSATA device, to keep an Intel SSD as fast upper-tier local storage, and to use one large hybrid 2.5″ hard disk for main storage. I do have a Synology DS1010+ on the network as centralized NFS storage, but I want some fast local storage for specific virtual machines. It's still early 2013, so I have not yet upgraded my older Synology or built a new powerful & quiet Nexenta Community Edition home server. In the next image you can see that three Shuttle XH61Vs take up less space than a Synology DS1010+.

Three Shuttle XH61V with Synology DS1010+
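Mounting the Synology NFS export on each host is a one-liner from the ESXi shell (a sketch; the IP address, export path and datastore name are examples):

```
# mount the Synology NFS export as a datastore, then verify
esxcli storage nfs add -H 192.168.1.20 -s /volume1/vmstore -v DS1010-NFS
esxcli storage nfs list
```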

VMware ESXi installation

Installing VMware ESXi goes quickly, as all the device drivers are on the ESXi 5.1 VMware-VMvisor-Installer-5.1.0-799733.x86_64.iso install CD-ROM.

ESXi 5.1 on XH61V

ESXi 5.1 on XH61V

Here is the Hardware Status for the Shuttle XH61V

ESXi XH61V Hardware Status

Here is an updated screenshot of my vSphere 5.1 homelab cluster.

Management Cluster

 

Bill of Materials (BOM)

Here is my updated bill of materials (BOM) for my ESXi nodes.

  • Shuttle XH61V
  • Intel Core i7-3770S CPU @3.1GHz
  • Two Kingston 8GB DDR3 SO-DIMMs KVR1333D3S9/8G
  • Kingston 16GB USB 3.0 key to boot ESXi (change the BIOS setting, as you cannot boot a USB key in USB3 mode)
  • Local storage: Intel SSD 525 120GB
  • Local storage: Intel SSD 520 240GB
  • Local storage: Seagate Momentus XT 750GB

Planned upgrade: I hope to get new Intel SSD 525 mSATA boot devices to replace the older Kingston SSDNow drives when they become available.

 

Performance & Efficiency

In my bill of materials, I selected the most powerful Intel Core i7 processor that I could fit in the Shuttle XH61V, because I'm running virtual appliances and virtual machines like vCenter Operations Manager, SQL databases and Splunk. There are less expensive Core i3 (3M cache), Core i5 (6M cache) or Core i7 (8M cache) processors that would also work great.

What is impressive is that the Shuttle XH61V comes with a 90W power adapter. We are far from the 300W mini-boxes/XPCs, or even the HP MicroServer with its 150W power adapter. Only the Intel NUC comes in lower, with a 65W power adapter and a single gigabit NIC (@AlexGalbraith has a great series of posts on running ESXi on his Intel NUC).

Just for info, the Intel Core i7-3770S has a cpubenchmark.net score of 9312, which is really good for a small box that uses 90W.

The Shuttle XH61V is also very quiet... it's barely a few decibels above the noise of a very quiet room. To tell you the truth… the WAF is really working, as my wife is now sleeping with two running XH61Vs less than 2 meters away. And she does not notice them… 🙂

 

Pricing

The price of a Shuttle XH61V with 16GB of memory and a USB boot device (16GB Kingston USB 3.0) can be kept to about $350 on newegg. What will increase the price is the performance of the LGA 1155 Socket 65W processor (from a Core i3-2130 at $130 to a Core i7-3770S at $300) and whatever additional local storage you want to put in.

vSphere 5.1 Cluster XH61V

The sizing of the homelab in early 2013 is a far cry from the end of 2006, when I moved out of my first flat and had a dedicated computer room.

Update 18/03/2013. DirectPath I/O Configuration for Shuttle XH61v BIOS 1.04

XH61v DirectPath I/O Configuration

XH61v DirectPath I/O Configuration

 

Update 22/03/2013.  mSATA SSD Upgrade

I've decided to replace the Intel 525 30GB mSATA SSD that is used for booting ESXi and storing the Host Cache with a larger Intel 525 120GB mSATA SSD. This device will give me more space for the Host Cache and will be used as a small tier for the temp scratch disk of my SQL virtual machine.

The published performance figures for the Intel 525 mSATA series are:

Capacity   Interface     Sequential Read/Write (up to)   Random 4KB Read/Write (up to)   Form Factor
30 GB      SATA 6 Gb/s   500 MB/s / 275 MB/s             5,000 IOPS / 80,000 IOPS        mSATA
60 GB      SATA 6 Gb/s   550 MB/s / 475 MB/s             15,000 IOPS / 80,000 IOPS       mSATA
120 GB     SATA 6 Gb/s   550 MB/s / 500 MB/s             25,000 IOPS / 80,000 IOPS       mSATA
180 GB     SATA 6 Gb/s   550 MB/s / 520 MB/s             50,000 IOPS / 80,000 IOPS       mSATA
240 GB     SATA 6 Gb/s   550 MB/s / 520 MB/s             50,000 IOPS / 80,000 IOPS       mSATA