The homelab shift…

I believe we are at a point in time where we will see a shift in vSphere homelab designs.

One homelab design that I see becoming more and more popular is the nested homelab, built on either a VMware Workstation or VMware Fusion base.
There are already a lot of great blogs on nested homelabs (William Lam), and I must at least mention the excellent AutoLab project. AutoLab is a quick and easy
way to build a vSphere environment for testing and learning, and the latest release of AutoLab supports the vSphere 5.5 release.

The other homelab design is a dedicated homelab. Some of the solutions people want to test in their homelabs are growing larger, with more components (Horizon, vCAC), and require more resources. It is painful to admit, but I believe the dedicated homelab is heading in a more expensive direction.

Let me explain my view with these two points.

The first point, and the more recent one, is that if you want to lab Virtual SAN, you need to spend a non-negligible amount of money. You need to invest in at least three SSDs, one in each of three hosts, and in a storage controller that is on the VMware VSAN Hardware Compatibility List.

Recently, Duncan Epping mentioned once again that, unfortunately, the Advanced Host Controller Interface (AHCI) standard for SATA is not supported with VSAN, and you can lose the integrity of your VSAN storage. That is something you don't want to happen in production, and it can cost you hours of precious time spent configuring VMs. Therefore, if you want to lab Virtual SAN, you will need a storage controller that is supported. This costs money and limits the choice of whitebox motherboards that can run VSAN without add-on cards. I really hope the AHCI standard will be supported in the near future, but there is no guarantee.

The second point, and the one I see as a serious trend, is network driver support. The network drivers used in most homelab computers are not being updated for the current vSphere release (5.5) and don't have a bright future in upcoming vSphere releases.

With vSphere 5.5, VMware has started its migration to a new Native Driver Architecture, slowly moving away from the Linux kernel drivers that are plugged into the VMkernel using shims (see the great blog entry by Andreas Peetz on the Native Driver Architecture).

Users who need the Realtek R8168 driver in the current vSphere 5.5 release have to extract the driver from the latest vSphere 5.1 offline bundle and inject the .vib driver package into the vSphere 5.5 ISO file. You can read more in my popular article "Adding Realtek R8168 Driver to ESXi 5.5.0 ISO".
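For readers who have not done this before, the rebuild described above can be done with the PowerCLI Image Builder cmdlets. The sketch below shows the general shape of the workflow; the depot and profile names are placeholders I made up for illustration, not the exact file names from the VMware download site, so adjust them to whatever you actually downloaded.

```powershell
# Sketch of rebuilding an ESXi 5.5 ISO with an extra driver VIB using
# PowerCLI Image Builder. File and profile names below are placeholders.

# Load the ESXi 5.5 offline bundle as a software depot
Add-EsxSoftwareDepot .\ESXi550-offline-bundle.zip

# Load the depot containing the R8168 driver extracted from the 5.1 bundle
Add-EsxSoftwareDepot .\net-r8168-driver-depot.zip

# Clone the standard image profile so the original stays untouched
New-EsxImageProfile -CloneProfile "ESXi-5.5.0-standard" `
    -Name "ESXi-5.5.0-r8168" -Vendor "homelab"

# Inject the Realtek driver package into the cloned profile
Add-EsxSoftwarePackage -ImageProfile "ESXi-5.5.0-r8168" `
    -SoftwarePackage "net-r8168"

# Export the customized profile as a bootable installation ISO
Export-EsxImageProfile -ImageProfile "ESXi-5.5.0-r8168" `
    -ExportToIso -FilePath .\ESXi-5.5.0-r8168.iso
```

Note that `Add-EsxSoftwareDepot` expects a depot zip rather than a bare .vib file, which is why the driver is packaged as an offline bundle first; community tools such as ESXi-Customizer wrap essentially this same process in a friendlier interface.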

My homelab 2013 implementation uses these Realtek network cards, and the driver works well with my Shuttle XH61v. But if you take a closer look at the many replies to my article, a big trend seems to emerge. People use a wide variety of Realtek NICs in their computers, and they have to use these R8168/R8169 drivers. Yet these drivers don't work well for everyone. I get a lot of queries about why the drivers stop working, or are slow, but hey, I'm just an administrator who cooked a driver into the vSphere ISO; I'm not a driver developer.

vSphere is a product aimed at large enterprises, so driver development priorities are naturally set for that market. VMware seems to have dropped, or at least lagged on, the development of these non-enterprise drivers. I don't believe we will see further development of these Realtek drivers from the VMware development team; only Realtek could really pick up this job.

This brings me to my conclusion: going forward, people will need to move to more professional computers/workstations and controllers if they want to keep using and learning vSphere at home on a dedicated homelab.
I really hope to be proven wrong here, so you are most welcome to reply and tell me that I'm completely wrong.





28/03/2014: Some spelling corrections.

  • Graham Mitchell

    > people will need to move to more professional computers/workstations and controllers

You can pick up an IBM M1015 controller from eBay for about $100 (or slightly less if you wait a bit). Then cross-flash it to an LSI 9220-8i in IT mode, so that it just presents disks individually. I hammer several of them hard in my home Linux file servers, and I'm in the process of setting one up for ESXi 5.5, though to be honest, I'm probably not going to run storage locally; I will probably make it available from a Linux server over FC and iSCSI (maybe InfiniBand too).

    There are a couple of good tutorials on cross-flashing.

  • Robert

    Unfortunately you're right, and this is where MS enters the game. Their Server 2012 runs on every el cheapo piece of home hardware, and so Hyper-V will eventually gain market share in big business because people get used to it. Too bad VMware is so stupid.

    • MyName

      Robert, I think you got it. Isn't it funny how things have turned? Microsoft thought VMware was just a bunch of hippies and posed no threat… then they did… but now VMware is alienating those who helped them, and Microsoft will be there to pick up the pieces. I hate it though, because we could be stuck in that cycle.

      I think it may be time for us homelab-ers to start playing seriously with the open-source hypervisors, which run on truly anything (google hypervisors in cars). A lot of people say: oh c'mon, open source can't do X or Y. But you know what? Neither could VMware, and it was the cowboys (us) who played with it, brought it into our offices and our companies, and helped shape its future.

      I guess I would personally rather help shape open-source hypervisors than Microsoft's. Personally.

  • Michael Patton

    After years of piecing together laptops, desktops, memory upgrades, etc., I decided to invest in a scalable home lab environment. The goals included a) minimal setup and support effort, b) scalability, and c) fun. I landed on a scratch-and-dent Dell R420: dual E5-2430 processors, 2x 500 GB SATA (2.5"), 2x 300 GB SAS (2.5"), a quad-port GbE Broadcom (BCM5719), dual GbE Broadcom onboard (BCM5720), and dual power supplies. I've added to this configuration 192 GB of RAM (16 GB Kingston modules) and 4x 1 TB 10K WD VelociRaptors. Goal achieved: I accelerated my learning/study time with these technologies and am able to scale without issue.

  • virtualistic

    > people will need to move to more professional computers/workstations and controllers

    Hi Erik,
    Your articles ROCK!
    But as far as the "move to professional hardware" part is concerned, I can only partly agree with you on that.
    If you look at Frank Denneman's previous setup, he also faced issues with his Intel NICs (Intel PRO/1000 PT Dual Port Server Adapter) falling out of grace with VMware.
    I don't expect VMware to support every piece of hardware under the sun, but they need some middle ground. As stated previously, Hyper-V will find solid ground if people stop tinkering with ESXi; it's that simple.