NSX Advanced Load Balancer (Avi Networks) Components

In this first post, I will attempt to describe the different components of the NSX Advanced Load Balancer (ALB). Following the purchase of Avi Networks by VMware in June 2019, the Avi Vantage product has been renamed NSX Advanced Load Balancer. If you deploy the latest release, 18.2.7, you will notice it still carries the Avi Networks branding. This will change in the near future.

NSX Advanced Load Balancer is a software-defined solution that provides Application Delivery Services in an automated, elastic deployment. It has a built-in, best-of-breed operational dashboard for advanced analytics and reporting, it is completely API driven, and it supports a wide range of infrastructure and cloud providers. In this series of blogs, I will focus on the vSphere integration in my home datacenter.

Version 18.2 of the product works over distributed virtual port groups that belong to both vSphere Distributed Switches (regular VLAN networks) and Logical Switches (NSX-V VXLAN-based or NSX-T Geneve networks), as it is agnostic of the underlying network infrastructure. Integration in an NSX-T network is possible today, but still requires some manual configuration steps. The automation of this process will come in a future release of NSX ALB.

The management plane is composed of one or three Controllers, and the data plane is composed of the Service Engines (SE).

The Controller is the central repository for the configurations and policies, and it manages the full lifecycle of the Service Engines (creation, control and deletion). The Controllers run on dedicated virtual machines. I’ve used a Controller that is integrated with my vSphere infrastructure to automatically deploy and configure the Service Engines across the selected cluster.

The recommended minimum requirements for a Controller are 8 vCPU, 24GB RAM and a 128GB disk per node. Sizing the appliance above this minimum depends on the amount of analytics data and the number of flows the Controller has to process. https://avinetworks.com/docs/latest/avi-controller-sizing/

Once the Controllers are deployed, we define a Cloud Infrastructure. Here I have configured my vCenter as the target. With administrator credentials, the Controller will be able to provision the required Service Engines on the cluster.
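Since the product is completely API driven, the same cloud definition can also be pushed to the Controller over its REST API. The sketch below uses curl against a hypothetical controller FQDN; the exact layout of vcenter_configuration is from memory and may differ per release, so treat the controller name, credentials and field names as assumptions to verify against the API documentation of your version.

# Hypothetical sketch: create a vCenter cloud over the Controller REST API
# (controller name, credentials and field names below are examples/assumptions)
curl -k -u admin:'MyPassword' \
  -H "Content-Type: application/json" \
  -H "X-Avi-Version: 18.2.7" \
  -X POST https://alb-controller.lab.local/api/cloud \
  -d '{
        "name": "vcenter-cloud",
        "vtype": "CLOUD_VCENTER",
        "vcenter_configuration": {
          "vcenter_url": "vcenter.lab.local",
          "username": "administrator@vsphere.local",
          "password": "********",
          "privilege": "WRITE_ACCESS",
          "datacenter": "HomeDC"
        }
      }'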

The Service Engines are lightweight data plane engines that distribute connections based on load-balancing algorithms or HTTP/S headers. The Service Engines do the load-balancing in front of the back-end servers and also execute all the data plane Application Delivery Controller operations, such as:

  • Monitoring the health and testing the performance of the back-end servers
  • Persisting requests to back-end servers
  • Caching response content for potential re-use
  • Protecting against security threats (DoS, suspicious client IPs)
  • Delivering high-performance web security with iWAF
  • Offloading SSL decryption from back-end servers, re-encrypting if required

Service Engines are then assembled in a Service Engine Group, just like vSphere hosts are part of a cluster.

When you create a Virtual Service to load-balance an application, this service is deployed across the Service Engine Group.

In the next blog post, I will cover more details of the configuration and the deployment of a simple Load Balanced workload.

Enabling vRealize Log Insight agent on vRealize Automation 7 appliance

While deploying a vRealize Automation (vRA) appliance at a customer yesterday, I wanted to set up monitoring from vRealize Automation to the vRealize Log Insight (vRLI) solution with the new vRA 7 Content Pack. This Content Pack is available for download from the Marketplace, directly on each vRLI server.

vra_cp

When you install the vRA 7 Content Pack you can find the setup instructions in the Tools tab. Here you will realize that you need to add two more Content Packs to be able to properly monitor a vRealize Automation installation.

vra7_cp_setup_instructions

Let’s add the Apache – CLF and the vRealize Orchestrator Content Packs from the vRLI Marketplace.

apache_clf_cp vro_cp

The vRealize Automation 7.0.1 Build 3622989 release now comes with a vRealize Log Insight Agent pre-installed. This agent is already running on the appliance when it is deployed, but it is not configured. You just need to connect using SSH to the vRA appliance and edit the /etc/liagent.ini configuration file.

edit the /etc/liagent.ini file

The three fields to edit are the vRLI hostname, the protocol (cfapi) and the port. Once this is done, you just need to restart the Log Insight Agent using service liagentd restart.
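For reference, the relevant part of /etc/liagent.ini looks roughly like this once edited; the hostname is an example from my lab, and you should double-check the port against your own vRLI setup:

; /etc/liagent.ini - [server] section (hostname below is an example)
[server]
hostname=vrli.lab.local
; cfapi is the native Log Insight ingestion protocol
proto=cfapi
; default cfapi port without SSL (9543 is the usual SSL port)
port=9000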

service liagentd restart

At this point the Log Insight Agent starts talking to your vRealize Log Insight host or cluster Virtual IP address.

When checking the vRealize Log Insight Administration pane, in the Agents tab, I now see the vRA appliance communicating in the All Agents view.

vrli_all_agents

The Log Insight Agent is now communicating, but it does not yet have a configuration telling it what to send back. When an Agent polls the Log Insight server, it queries what it needs to monitor. Here we start to have fun and see the real power of vRealize Log Insight agent configuration.

We are going to use the appropriate vRA agent configuration template and make it an agent configuration that can be applied to our implementation.

Select the appropriate vRA agent group and Copy Template

From the Agent drop-down menu, scroll to the vRealize Automation 7 – Linux template and select the Copy Template button on the right. We will keep the configuration name as vRealize Automation 7 – Linux.

vrli_agent_name

The next step is to select which Log Insight Agents this configuration applies to. In the next screenshot I have chosen to target this Agent configuration using the IP address of my vRA appliance. I re-type the IP address of the vRA appliance in the filter, and use the Save New Group button at the bottom.

vrli_agent_add_by_ip_address

Once saved, you can refresh the view and you will now see the configuration that is sent out to the qualifying agents on their next regular poll of the vRLI host/cluster.

Here is another view of the vRealize Automation 7 – Windows group, where you can see that I have applied this configuration only to the Windows servers of a vRA Enterprise deployment.

vrli_agent_group

You can apply multiple Log Insight Agent configurations to the same server. Here is another vRA example, monitoring the IIS components of the vRA IaaS elements.

vrli_agent_group_iis

Example of monitoring Microsoft – Active Directory 2012 servers

vrli_agent_group_ad

You can see now how easy and fast it is to monitor servers with specific Log Insight Agent configurations.

Using a virtual Synology in a scale-out distributed storage architecture

I’ve recently finished upgrading the Home Datacenter (#HomeDC) to vSphere 6.0 with four hosts running VSAN 6.0 with dual 10GbE networking for each host.

vsan

Even running a few large virtual machines on the VSAN datastore, like VDP 6.0 with a 4TB backing disk, I found myself with a lot of spare storage. I’ve invested in the SAS disks (Seagate Enterprise Capacity 4TB SAS 7200rpm) backing the VSAN datastore, so the budget for replacing the aging Synology DS1010+ is gone.

I’ve recently studied various reviews of the Synology DS2015xs, but found its CPU a bit lacking to drive the dual 10GbE SFP+ links, and the Synology DS3615xs is a bit expensive. So why not leverage the 10GbE NICs in my management cluster for ultra-fast connections? The fast CPUs on my hosts are a nice addition too. The biggest advantage is “cheap” 10GbE file server connections.

The rest of this blog goes into a grey zone… it’s #unsupported.

Let me show you the goods first.

virtual Synology DS3615xs running on VSAN datastore

The concept is to create a storage appliance that leverages the VSAN datastore and its read/write acceleration, and provides a flexible structure where you can increase the storage on an as-needed basis, or create temporary storage while migrating from one Synology to a newer one, all of it running on a vSphere host. This is a concept that a lot of other companies implement with their Virtual Storage Appliances.

I’m going to use the XPEnology operating system, which is based on the Synology DiskStation Manager (DSM).

  • In the design and implementation that I describe here, the virtual Synology has an 8TB disk. The appliance does not do any RAID on this disk, as it is already protected on the VSAN datastore using a Number of Failures to Tolerate of 1 policy (FTT=1).
  • Another way would be to create two or four virtual disks with a Number of Failures to Tolerate of 0, and do a software RAID inside the appliance.
  • A third way could be to use four physical disks and two SSDs on a host, create RDM mappings, present all these disks to the virtual Synology appliance, do a software RAID on the disks, and use the SSDs for caching (SSD cache). This virtual storage appliance would not be able to move to another host using vMotion, but you could mitigate this restriction using Synology High-Availability. A sketch of the RDM mapping commands follows this list.
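For that third option, the RDM mappings would be created by hand on the ESXi host before being added to the appliance. A minimal sketch, assuming a local SAS disk with a placeholder naa identifier and a placeholder VM folder:

# Hypothetical example: map a local disk as a physical-mode RDM for the appliance
# (the naa identifier and the paths are placeholders)
ls /vmfs/devices/disks/
vmkfstools -z /vmfs/devices/disks/naa.5000c50083xxxxxx /vmfs/volumes/local-datastore/vDS3615xs/disk1-rdm.vmdk
# use -r instead of -z for a virtual compatibility mode RDM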

To build the virtual Synology you will need to retrieve the latest copy of the XPEnology DS3615xs files. You are looking for XPEnoboot_DS3615xs_5.1-5022.3.vmdk or a more recent version. Each version can have its own deployment process. The process I describe below uses the XPEnoboot_DS3615xs_5.1-5022.3.vmdk version.

There is also a huge forum with lots of contributions and interesting links at the XPEnology forums.

1) Creating the vSynology

I’m going to say upfront that you will need to upload the XPEnoboot_DS3615xs_5.1-5022.3.vmdk twice into the virtual storage appliance: once for the initial install, which will format all disks of the appliance (including the boot vmdk), and then again to boot the appliance.
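As a side note, you could also upload the boot vmdk once into a scratch folder on the datastore and clone it into the appliance folder with vmkfstools whenever you need a fresh copy. This is only a sketch with placeholder paths, not the datastore browser method I used below:

# Hypothetical alternative: clone a previously uploaded master copy of the boot disk
# into the appliance folder (both paths are placeholders)
vmkfstools -i /vmfs/volumes/vsanDatastore/scratch/XPEnoboot_DS3615xs_5.1-5022.3.vmdk /vmfs/volumes/vsanDatastore/vDS3615xs/XPEnoboot_DS3615xs_5.1-5022.3.vmdk -d thin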

We start by creating a new Virtual Machine.

01 - Create new VM

We give it a name and place it in a Cluster.

02 - Name VM

And we store the virtual machine and its configuration files on an existing datastore. I have selected my vsanDatastore.

04 - Select VSAN Datastore

We define the hardware compatibility of the virtual machine and select the Guest OS. We are going to use Linux, Other 3.x Linux (64-bit).

06 - Select Guess OS Linux 3.2

I have selected two vCPUs and 8GB of memory. Because my appliance won’t do any software RAID, two vCPUs are more than enough.

07 - Base Hardware

I have added a second VMXNET3 network interface, which I put on a dedicated 10GbE Distributed Port Group. So eth0 goes out using uplink1 and eth1 goes out using uplink2. You see these changes in the summary of the appliance below.

08 - ds3615xs Hardware Summary

2) Changing the Boot disk

We can now go back into the appliance and edit it. We remove the boot disk and delete it from the datastore. (Yeah, missing screenshot of this step.)

We then use the datastore browser to upload the XPEnoboot_DS3615xs_5.1-5022.3.vmdk into the appliance folder for the first time.

09 - Upload XPE vmdk on vsanDatastore

And we add this existing virtual disk to the appliance

10a - Select the XPE vmdk

The new boot disk is attached as an IDE disk on port IDE(0:0)

10b - Add XPE vmdk as IDE0-0

In the following screenshot, I’m adding the main disk to the storage appliance. I’m creating an 8TB (or 8192GB) virtual disk, and I select my VSAN storage policy “VSAN High Perf”. The “VSAN High Perf” policy is defined with a Number of Failures to Tolerate of 1, and a Number of Disk Stripes per Object of 2.

11 - XPE non-persistent and 8TB

Now you can start the appliance. Look closely at the IP addresses of the appliance and the MAC addresses. You will want to configure the IP addresses on the proper NIC later.

12a Start VM and check eth0 eth1

Using the Synology Assistant you can now see your appliance appear on the network.

12b - Use Synology Assistant to find new DS3615xs

Point your browser at the IP address shown in the Synology Assistant to do the initial install.

12c - Open the Web Assistant

We are installing the DSM using the Manual install.

12d - Install DiskStation Manager

Here you upload the DSM 5.1-5022 .pat file that you retrieved from the Synology Download Center under the DS3615xs selection.

12e - Select Manual install and select DS3615xs 5022 pat

It will now prompt you that it will erase all partitions on the attached disks of the appliance. This includes the XPEnoboot disk of the appliance.

12f - Format disks with 5022.3 PAT

Accordingly, the expected behavior now is that the boot disk is wiped and won’t boot.

13 - Both disk formatted.

Stop the appliance and, using the datastore browser, erase the XPEnoboot disk. Then upload the XPEnoboot_DS3615xs_5.1-5022.3.vmdk into the folder for the second time.

14 - Erase XPEnoboot vmdk and replace with original one

3) Configuration using Synology Assistant

You can now restart the appliance. You will notice that the second time the appliance boots, some of the messages, like the IP address, are not there anymore. And using the Synology Assistant, you see that the DHCP function isn’t started; the IP addresses are now 169.254.x.y.

Select the proper network interface in the Synology Assistant using the MAC address, and select Setup. If you don’t select the proper MAC address you might need to swap IP addresses later. So save yourself some time, and select the eth0 one.

15 - Reboot DS3615xs and use Synology Assistant

The Synology Assistant wizard will now start.

16 - Synology Assistant

The Admin password at this time is blank, don’t enter any value. You can change the password later.

17 - Synology Assitant - Blank password

Enter the appliance network settings.

18 - Synology Assitant - Final Network settings for eth0

Refreshing the Synology Assistant shows that you have the proper IP address now.

19 - Now ready for Web configuration

Time to connect to your newly deployed appliance.

20 - Configuration

You are now only a few steps away from using your storage appliance.

21 - Web Config

It is now time to change your admin account password.

22 - Server name

We can now update the DSM 5.1-5022 version to the latest 5.1-5022-5 version. Depending on the CPU of your host, you will never have seen a Synology reboot so fast.

23 Patch DSM

If you intend to use this virtual Synology appliance to store data, I recommend you do some conditioning tests first, to see how it reacts in your environment.

I like the flexibility of the virtual Synology appliance:

  • Adding a temporary repository for a data migration becomes easy if you have a lot of underlying VSAN datastore space.
  • Want to try out Synology High-Availability? Add a 2nd appliance and create the High-Availability cluster.
  • Want to test a Synology with a 10GbE interface? Easy, if your ESXi host has a 10GbE interface. (*)

In the coming weeks, I’m looking forward to deploying other storage appliances on my VSAN datastore that can scale out in this distributed storage architecture.

(*) I have found that while having the virtual Synology appliance with 10GbE on the backbone is awesome, I ran into bandwidth limits trying to upload data to it. My sources were either connected to the core switch over 1GbE links, or were virtual machines whose disks are stored on 1GbE NFS/iSCSI LUNs. To test the virtual Synology I copied large sequential files from various sources. I had three sources pushing 100-120MB/s, 60-70MB/s and 80-90MB/s to get the 2nd screenshot at the top and see the virtual Synology write stats at 220MB/s.

Homelab 2015 Upgrade

Since my last major entry about my Homelab in 2014, I have changed a few things. I added a 2nd cluster based on Apple MacMini (Late 2012), on which I run my OS X workloads, VMware Photon #CloudNativeApps machines, the DevOps Management tools and the vRealize Automation deployed blueprints. This cluster was initially purchased & conceived as a management cluster. The majority of my workload is composed of management, monitoring, analysis and infrastructure loads. It just made sense to swap the Compute and Management cluster around, and use the smaller one for Compute.

Compute_Cluster

Compute Cluster

The original cluster, composed of the three Supermicro X9SRH-7TF hosts described in my Homelab 2014 article (more build pictures here), gave me some small issues.

2014

Homelab in December 2014.

I’ve found that the dual 10GbE X540 chipset on the motherboard heats up a bit more than expected, and more than once (5x) I lost the integrated dual 10GbE adapters on one of my hosts, requiring a host power-off for ~20 minutes to cool down. In addition, a single 16GB DDR3 DIMM was causing one host to freeze once every ~12 days. All the hosts have run extensive 48-hour memtest86+ checks, but nothing was spotted. When a frozen VSAN host rejoins the cluster you see the re-synchronization of the data, and at that time I’m glad to have a 10GbE network switch. In the end, I followed a best practice for VSAN clusters and extended the cluster to four hosts.
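As a side note, a quick way to follow that re-synchronization is the vsan.resync_dashboard command in the Ruby vSphere Console (RVC) on vCenter; the user and cluster path below are generic examples, so adapt them to your own inventory:

# Hypothetical example: watch VSAN resync progress from RVC (example user and cluster path)
rvc administrator@vsphere.local@localhost
> vsan.resync_dashboard /localhost/HomeDC/computers/Management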

At the beginning of February I added a single Supermicro X10SRH-CLN4F server with an Intel Xeon E5-2630v3 (8 cores @ 2.4GHz) and 64GB of DDR4 memory to the cluster. The Supermicro X10SRH-CLN4F comes with four Intel Gigabit ports and an integrated LSI 3008 SAS 12Gb/s adapter. I also added an Intel X540-T2 dual 10GbE adapter to bring it in line with the first three nodes.

esx01

vSphere 6.0 on Supermicro X10SRH-CLN4F

Having a fourth host means scaling up the VSAN Cluster with an additional SSD and two 4TB SAS drives.

In the past month, the pricing of the Samsung 845DC Pro SSD has dropped into the $1/GB range. The Samsung 845DC Pro is rated at 10 DWPD (Disk-Writes-Per-Day) or 7300 TBW (TeraBytes-Written-in-5-years), and its performance is documented at 50’000 sustained write IOPS (4K) with Write IOPS Consistency at 95% [Reference Samsung 845DC Pro PDF, and thessdreview article]. A fair warning for other people looking at the Samsung SSD 845DC Pro: it is not on the VMware VSAN Hardware Compatibility List.

Here is a screenshot of the disk group layout of the VSAN Cluster.

vsan_disk_group

VSAN Disk Management

The resulting VSAN configuration now provides 28TB of usable space.

vsan

Here is a screenshot of the current Management Cluster.

Management Cluster

This cluster, having grown, now also generates additional heat. It has been relocated to a colder room, and I had a three-phase 240V 16A electrical line put in.

Management Cluster (April 2015)

My external storage is still composed of two Synology arrays: an old DS1010+ and a more recent DS1813+ with a DX513 expansion unit. At this point, 70% of my virtual machine datasets are located on the VSAN datastore.

Synology DS1813+ Storage Manager

Reviewing this article, I realize this cannot qualify as a homelab anymore… it’s a home datacenter… I guess I need a new #HomeDC hashtag…

vSphere Replication Add-On registration and Folder issue.

While recently deploying the vSphere Replication solution on an infrastructure, I found myself confronted with a strange situation. I had deployed the vSphere Replication 6.0 appliance (Management and Replication), and was looking to add a 2nd vSphere Replication appliance to the primary site. Yet once the Add-On vSphere Replication appliance was deployed, I was not able to select it via the vSphere Web Client to register it; I only got the error message “Selected object is not a virtual machine. Select a virtual machine to register as vSphere Replication Server.”

Unable to register vSphere Replication Add-On

Only after moving the new vSphere Replication appliance from my Bussink.org VM folder back to the ‘Discovered virtual machine’ folder was I able to use the Register vSphere Replication Server function.

Register vR Add-On

Once the VM is registered with the vSphere Replication server, moving it to my custom VM Folder (Bussink.org) is not an issue anymore.

NFS volume mounting error following a network change

I recently redesigned my network configuration from a single /24 address range to multiple /24 ranges with routing done on my core switch. Part of this change meant shutting down the vSphere cluster and the storage arrays (Synology), and reassigning new IP addresses and gateways to all of these entities.

Once my hosts and my storage arrays had their new IP addresses, I attempted to re-map my NFS volumes to the Synology and got a strange error message while mounting them: “There are incorrect or missing values below.”

Unable to mount NFS point in vSphere Web Client

Another error message, “Unable to add new NAS, volume with the label X already exists”, was given in the ESXi shell when I attempted the same operation using an SSH session directly on my host.

Unable to add new NAS volume with the label already exists

Yet the esxcfg-nas -l command did not return any values.

Well, it seems that the old NAS entry was still present on the ESXi host, but not listed. To fix this small issue, you need to delete the phantom NFS mount point and recreate it.

In the next screenshot you see me listing my mount points, attempting to mount a new NFS volume (legolas_nfs), erasing the non-visible entry, and finally adding the new NFS volume.

erasing_nfs_ghost_mount_point_2
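For reference, the sequence in that screenshot looks roughly like this; the NFS server IP and export path below are placeholders for my new addressing:

# list the NFS mounts the host thinks it has (came back empty for me)
esxcfg-nas -l
# delete the phantom entry that blocks the re-mount
esxcfg-nas -d legolas_nfs
# re-add the NFS volume (server IP and export path are placeholders)
esxcfg-nas -a -o 192.168.10.20 -s /volume1/legolas_nfs legolas_nfs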

I hope this can save someone some precious time.

Notes & Photos of the Homelab 2014 build

I’ve had a few questions about my Homelab 2014 upgrade hardware and settings, so here is a follow-up. This is just a photo collection of the various stages of the build. Compared to my previous homelabs that were designed for a small footprint, this one isn’t; this homelab version has been built to be a quiet environment.

I started my build with only two hosts. For the cases I used the very nice Fractal Design Define R4. These are ATX chassis in a sleek black color, can house 8x 3.5″ disks, and support a lot of extra fans. Some of those you can see on the right side; those are Noctua NF-A14 FLX. For the power supplies I picked up some Enermax Revolution Xt PSUs.

IMG_4584

For the CPU I went with the Intel Xeon E5-1650v2 (6 cores @ 3.5GHz) and a large Noctua NH-U12DX i4 cooler. The special thing about the NH-U12DX i4 model is that it comes with mounting brackets for the Narrow ILM socket that you find on the Supermicro X9SRH-7TF motherboard.

IMG_4591

The two Supermicro X9SRH-7TF motherboards and two add-on Intel I350-T2 dual 1Gbps network cards.

IMG_4594

Getting everything ready for the build stage.

In the next photo you will see quite a large assortment of pieces. There are five small yet long-lasting Intel SSD S3700 100GB drives, 8x Seagate Constellation 3TB disks, some LSI HBA adapters like the LSI 9207-8i and LSI 9300-8i, and two Mellanox ConnectX-3 VPI dual 40/56Gbps InfiniBand and Ethernet adapters that I got for a steal (~$320 USD) on eBay last summer.

IMG_4595

Remember that if you only have two hosts with 10Gbps or 40Gbps Ethernet, you can build a point-to-point configuration without having to purchase a network switch. These ConnectX-3 VPI adapters are recognized as 40Gbps Ethernet NICs by vSphere 5.5.

Let’s have a closer look at the Fractal Design Define R4 chassis.

Fractal Design Define R4 Front

The Fractal Design Define R4 has two 14cm fans, one in the front and one in the back. I’m replacing the back one with a Noctua NF-A14 FLX, and I put one in the top of the chassis to extract the warm air out of the top.

The inside of the chassis has a nice feel, with easy access to the various elements and space for 8x 3.5″ disks in the front, and you can route the power cables on the other side of the chassis.

Fractal Design Define R4 Inside

A few years ago, I bought a very nice yet expensive Tyan dual-processor motherboard and installed it with all the components before getting to the point of putting the CPU on the motherboard. It turned out to have bent pins under the CPU cover. This is something that motherboard manufacturers and distributors do not cover under warranty. That was an expensive lesson, and it was the end of my Tyan allegiance. Since then I have moved to Supermicro.

LGA2011 socket close-up. Always check the pins for damage.

Here is the close up of the Supermicro X9SRH-7TF

Supermicro X9SRH-7TF

I now always put the CPU on the motherboard before the motherboard goes into the chassis. Note in the next picture the Narrow ILM mounting for the cooler.

Intel Xeon E5-1650v2 and Narrow ILM

Here is the difference between the Fractal Design Silent Series R2 fan and the Noctua NF-A14 FLX.

Fractal Design Silent Series R2 & Noctua NF-A14 FLX

What I like about the Noctua NF-A14 FLX are the rubber hold-fasts that replace the screws holding the fan; one more way to keep items in a chassis from vibrating and making noise. The Noctua NF-A14 FLX also runs by default at 1200RPM, but you get two electric Low-Noise Adapters (LNA) that can bring the default speed down to 1000RPM and 800RPM. Fewer rotations equals less noise.

Noctua NF-A14 FLX Details

Putting the motherboard in the Chassis.

IMG_4623

Now we need to modify the holding brackets for the CPU cooler. The Noctua NH-U12DX i4 comes with Narrow ILM brackets that can replace the standard ones. In the picture below, the top one is the Narrow ILM holder, while the bottom one still needs to be replaced.

IMG_4621

And a close up of everything installed in the Chassis.

IMG_4629

To hold the SSDs in the chassis, I’m using an Icy Dock MB996SP-6SB, which holds multiple SSDs in a single 5.25″ front bay. As SSDs don’t heat up like 2.5″ HDDs, you can choose to cut the power to the fan.

IMG_4611

This Icy Dock MB996SP-6SB gives a nice front look to the chassis.

IMG_4631

How does it look inside… okay, honestly, I have tidied up the SATA cables since the build process.

IMG_4632

Here is a picture of my 2nd vSphere host during the build. See, the cabling is done better here.

IMG_4647

The two Mellanox ConnectX-3 VPI 40/56Gbps cards I have were half-height adapters, so I had to adapt the brackets a little so that the 40Gbps NICs were firmly secured in the chassis.

IMG_4658

Here is the Homelab 2014 after the first build.

IMG_4648

Here is the Homelab 2014 after the first build.

At the end of August 2014, I got a new core network switch to expand the homelab: the Cisco SG500XG-8F8T, a 16-port 10Gb Ethernet switch. Eight ports are in RJ45 format, eight are in SFP+ format, and one is for management.

Cisco SG500XG-8F8T

I built a third vSphere host using the same config as the first ones. And here is the current 2014 homelab.

Homelab 2014

And if you want to hear what the noise level is at home, check out this YouTube movie. I used the dBUltraPro app on the iPad to measure the noise level.

And this page would not be complete if it didn’t have a vCenter cluster screenshot.

Homelab 2014 Cluster

A new beginning… joining VMware

After nearly 13 years at LANexpert, a Value Added Reseller and integrator in the French-speaking part of Switzerland, I’ll be starting a new chapter of my professional career. I’ll be joining VMware as a Solutions Architect on the 1st of October.

Over the better part of the last decade, I have played my part in building the virtualization practice at LANexpert into one of the first VMware Premier Partners in Switzerland. During this past decade, I have seen and pushed the changes in IT infrastructure from standalone servers to nearly fully virtualized datacenters (not the network yet). Now I have the great opportunity to join the ‘mothership’ and keep pushing a technology that I trust and truly believe in.

I’m excited about the new challenge and the opportunity to meet many more people as driven by virtualization as I am, and sad to leave a great ‘family’ like LANexpert behind me. I also want to thank all the people around me in the community who have helped me, directly or indirectly, to grow and extend my technical expertise.

Erik Bussink

Speed testing 40G Ethernet in the Homelab

In my previous post, I described the building of two Linux virtual machines to benchmark the network. Here are the results.

homelab_network_1g_10g_40g_iperf_testing

The first blip is iperf running at the maximum speed between the two Linux VMs at 1Gbps, on separate hosts using Intel I350-T2 adapters.

The second spike (on vmnic0) is iperf running at the maximum speed between the two Linux VMs at 10Gbps. The two ESXi hosts are using Intel X540-T2 adapters.

The third mountain (on vmnic4), and the most impressive result, is iperf running between the Linux VMs over 40Gb Ethernet. The two ESXi hosts are using Mellanox ConnectX-3 VPI adapters.

The Homelab 2014 ESXi hosts use the Supermicro X9SRH-7TF, which comes with an embedded Intel X540-T2. We can see the results of the 10Gbps iperf test more closely in the following picture.

homelab_network_10g_iperf_testing

Last summer I also got from eBay a set of Mellanox ConnectX-3 VPI dual adapters for $300. These cards support InfiniBand at 40Gb/s and 56Gb/s, and Ethernet at 10Gb/s and 40Gb/s. By default, vSphere 5.5 recognizes these adapters as 40Gb Ethernet adapters. And I really wanted to test these adapters at 40Gb Ethernet… and the results are great. I can push up to 37.3 Gbits/sec through a single 40Gb Ethernet link, or 4299 MBytes/sec. Just have a peek at the following screenshot.

homelab_network_40g_iperf_testing

I guess having 40Gb Ethernet for vMotion is too fast… The vMotion of a 12GB VM takes 15-16 seconds, of which only 3 seconds are used for the memory transfer; the rest is spent on the memory snapshot, process freeze, CPU register cloning and so on.

All the tests run at 10Gb Ethernet and 40Gb Ethernet were done with jumbo frames. For 40Gb Ethernet it makes a real (x2.5) difference in bandwidth.
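If you want to double-check that jumbo frames really pass end to end before running iperf, a quick vmkping from the ESXi shell does it; the vmkernel interface name and target IP below are examples from a hypothetical setup:

# send a non-fragmented 8972-byte ping (9000 minus IP/ICMP headers) over the chosen vmkernel port
vmkping -I vmk4 -d -s 8972 10.0.40.2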

This was a fun piece to lab in the homelab.

Creating a Linux Net benchmark VM

In this post, I will quickly explain how I created the Linux virtual machine that I have used, and will keep using, to benchmark some aspects of my new 2014 homelab. First I downloaded the latest CentOS 6.5 64-bit Net Install .iso from the CentOS website. This allows me to install the virtual machine quickly with just the packages I need.

The next step is to create two Linux 64-bit VMs on my vCenter. I chose to create VMX-09 virtual machines, so that I can edit the network properties from the vCenter 5.5 Windows Client or the vSphere Web Client. I create two-vCPU machines, because the application I will be running for my network benchmarks, iperf, is a single-threaded process, so the 2nd vCPU is left for the operating system of the VM.

For network adapters, I select two VMXNET3 adapters: the first one will be used for management and for baselining my performance on 1Gbps Ethernet, and the 2nd one can be moved around from vSwitch to dvSwitch and from vmnic to vmnic. Note that I would rather give two virtual sockets with one core each than one virtual socket with two cores; this gives the VM about 6% more performance.

vm_64bit_linux_01

Another small change I always make is to optimize the Virtual Machine Monitor for the VMs. The VMM is a thin layer for each VM that leverages the scheduling, memory management and network stack in the VMkernel. So in the Options tab I change the CPU/MMU Virtualization settings to force the use of Intel VT-x/AMD-V for instruction set virtualization and Intel EPT/AMD RVI for MMU virtualization. This ensures that the VM gets the best optimized hardware support for the CPU and MMU. This should only be done on recent processors, when you are sure that your CPU/MMU supports EPT and VT-x. If that is not the case, then leave this setting on Automatic.

vm_64bit_linux_02_cpu-mmu

If you want to know more about these settings and many others, I highly recommend you read the great “vSphere High Performance Cookbook” by Prasenjit Sarkar (@stretchcloud) at Packt Publishing.

I should add that over the past few years, all my VMs and templates have had this setting by default, on my own systems and on my customers’ clusters.

Next, we need to boot the Linux machine with the CentOS Net Installer. I’m not going to explain all the steps needed for every Linux setting, just a few points. When you get the option to select the installation method, select the URL option.

CentOS Installation Method

It will then ask you to select the network card and will fetch an IP address from the network via DHCP before asking you to enter the URL. We will use the following URL

http://mirror.centos.org/centos/6.5/os/x86_64/

Enter URL

Once the install GUI has started, make sure not to forget to set the 2nd Ethernet interface, where you will be doing your iperf testing, to a 9000 MTU. Otherwise your network performance results will be skewed.

nic_eth1_mtu
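
If you prefer to set or fix the MTU after the installation, it lives in the interface configuration file on CentOS 6. A minimal sketch, with example addressing:

# /etc/sysconfig/network-scripts/ifcfg-eth1 (example values)
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.40.11
NETMASK=255.255.255.0
MTU=9000
# then restart networking with: service network restart
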
For my performance testing VMs, I let the OS select the default file partition scheme; this is not a VM requiring special sizing.

default_partition_scheme

I select the Desktop installation config for these test platforms.

desktop installation

Once you have finished installing the virtual machine, install the latest VMware Tools on it before modifying the grub menu. I add the keyword vga=0x317 to the kernel settings of all my Linux machines in grub.conf or menu.lst (OpenSuSE), so that the VM thinks it has a 1024×768 monitor when it boots. Even if I stay in the Linux console, it gives me more screen real estate.
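On CentOS 6 this means appending it to the kernel line in /boot/grub/grub.conf; the kernel version and root device below are just examples of what such a line can look like:

# /boot/grub/grub.conf - append vga=0x317 (1024x768) to the kernel line (example entry)
kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root rhgb quiet vga=0x317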

When you have Linux machines that run on 1Gbps Ethernet, the default settings in the Linux kernel are fine, but if you want to optimize Linux network traffic for 10Gbps, there are a few system variables that we can fine-tune. Let’s edit /etc/sysctl.conf and add six fields:

# Minimum, initial and max TCP receive buffer size in bytes
net.ipv4.tcp_rmem = 4096 87380 134217728
# Minimum, initial and max buffer space allocated
net.ipv4.tcp_wmem = 4096 65536 134217728
# TCP moderate receive buffer auto-tuning
net.ipv4.tcp_moderate_rcvbuf = 1
# Maximum receive socket buffer size (size of BDP)
net.core.rmem_max = 134217728
# Maximum send socket buffer size (size of BDP)
net.core.wmem_max = 134217728
# Maximum number of packets queued on the input side
net.core.netdev_max_backlog = 300000
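These values can also be applied right away, without waiting for the reboot mentioned below, using standard sysctl behaviour:

# apply the new kernel settings immediately
sysctl -p /etc/sysctl.conf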

I’m going to use iperf to test the links between the two machines, so for this set of machines I disable iptables, as I have multiple ports being used between the two Linux test platforms; chkconfig iptables off will do the trick. A quick reboot and all the modifications take effect.

Also, as we will test the 10G Ethernet performance, both virtual machines are on a Distributed vSwitch (dVS), and the PortGroup is configured with an MTU of 9000 (jumbo frames).

And before finishing this blog, I also make sure to use DRS rules, so that Linux VM 01 should run on my ESX01 server, and Linux VM 02 should run on my ESX02 server. Using the Should rule allows me to quickly put a host in maintenance mode, while ensuring that my performance virtual machines stay where they should.

To use iperf (a very single-threaded program) between the two test hosts, start iperf on the first one in server mode with iperf -s, and on the second one use the command iperf -m -t 300 -c IP_of_other_VM, or iperf -m -t 300 -c IP_of_other_VM -fM to get the same results in Bytes instead of bits.

Here are preliminary results using a 10G Ethernet interface between the two hosts (both hosts have an Intel X540-T2 adapter).

10g_results