Intel NUC Skull Canyon (NUC6I7KYK) and ESXi 6.0

As part of my ongoing expansion of the HomeDC, I was excited to learn a few months ago about the availability of the latest quad-core Intel NUC. Last Friday I received my first Intel NUC Skylake NUC6I7KYK, and I only started setting it up this afternoon. I usually disable a few settings in the BIOS, and following the warning from fellow bloggers that people had issues getting the Intel NUC running with ESXi [virtuallyghetto.com], I took a deeper look prior to the install. I was able to install ESXi 6.0 Update 2 (Build 3620759) on my 4th try, after disabling more settings in the BIOS.

Here is the screenshot of the ESXi Host Client of the Intel NUC6I7KYK with BIOS 0034.

nuc6i7kyk_ehc

Here is a quick screenshot of the physical machine. I was planning to use the SDXC slot with a 32GB SDXC card to store the ESXi boot configuration, but unfortunately the SDXC card did not show up as a valid target during the ESXi install process. So I kept the USB key I was booting from and selected it as the target. On the screenshot below you will also notice an extra network card, the StarTech USB3 Gigabit Ethernet Network Adapter, whose driver you can get from VirtuallyGhetto’s web page Functional USB 3.0 Ethernet Adapter (NIC) driver for ESXi 5.5 & 6.0. Thanks William for this driver.

nuc6i7kyk_startech
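As a side note, once the driver VIB is in place, a quick way to check that the extra NIC actually shows up to ESXi is to list the host's physical NICs through the vSphere API. Here is a minimal pyVmomi sketch of that check; the host name and credentials are placeholders, and the ESXi Host Client shown above works just as well.

```python
# Minimal pyVmomi sketch: list the physical NICs an ESXi host reports.
# Host name and credentials below are placeholders for illustration only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()              # homelab self-signed certificate
si = SmartConnect(host="esxi-nuc.lab.local", user="root",
                  pwd="********", sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for pnic in host.config.network.pnic:       # e.g. vmnic0, or the USB NIC
            speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else "link down"
            print(host.name, pnic.device, pnic.driver, speed)
finally:
    Disconnect(si)
```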

The Bill-of-Materials (BOM) of my assembly…

Below you can see the Intel NUC with the two Samsung SM951 NVMe disks and the Crucial memory.

nuc6i7kyk_open

To get ESXi 6.0 Update 2 to install, I disabled the following BIOS settings. But as people have commented back after more testing, you really only need to disable the Thunderbolt Controller to get ESXi to install.

BIOS\Devices\USB

  • disabled – USB Legacy (Default: On)
  • disabled – Portable Device Charging Mode (Default: Charging Only)
  • no change – USB Ports (Ports 01-08 enabled)

BIOS\Devices\SATA

  • disabled – Chipset SATA (Default AHCI & SMART Enabled)
  • M.2 Slot 1 NVMe SSD: Samsung MZVPV256HDGL-00000
  • M.2 Slot 2 NVMe SSD: Samsung MZVPV512HDGL-00000
  • disabled – HDD Activity LED (Default: On)
  • disabled – M.2 PCIe SSD LED (Default: On)

BIOS\Devices\Video

  • IGD Minimum Memory – 64MB (Default)
  • IGD Aperture Size – 256MB (Default)
  • IGD Primary Video Port – Auto (Default)

BIOS\Devices\Onboard Devices

  • disabled – Audio (Default: On)
  • LAN (Default)
  • disabled – Thunderbolt Controller (Default: On)
  • disabled – WLAN (Default: On)
  • disabled – Bluetooth (Default: On)
  • Near Field Communication – Disabled (Default is Disabled)

BIOS\Devices\Onboard Devices\Legacy Device Configuration

  • disabled – Enhanced Consumer IR (Default: On)
  • disabled – High Precision Event Timers (Default: On)
  • disabled – Num Lock (Default: On)

BIOS\PCI

  • M.2 Slot 1 – Enabled
  • M.2 Slot 2 – Enabled
  • M.2 Slot 1 NVMe SSD: Samsung MZVPV256HDGL-00000
  • M.2 Slot 2 NVMe SSD: Samsung MZVPV512HDGL-00000

Cooling

  • CPU Fan Header
  • Fan Control Mode : Cool

Performance\Processor

  • disabled – Real-Time Performance Tuning (Default: On)

Power

  • Select Max Performance Enabled (Default: Balanced Enabled)

Secondary Power Settings

  • disabled – Intel Ready Mode Technology (Default: On)
  • disabled – Power Sense (Default: On)
  • After Power Failure: Power On (Default: Stay Off)

Sample view of the BIOS Onboard Devices screen as I deactivate some of the Legacy Device Configuration settings.

nuc6i7kyk_bios_onboard

 

26/05 Update: Only the Thunderbolt Controller stops the ESXi 6.0 Update 2 installer from running properly. Re-activating it after the install does not cause an issue in my limited testing.

Homelab 2015 Upgrade

Since my last major entry about my Homelab in 2014, I have changed a few things. I added a second cluster based on Apple Mac Mini (Late 2012) machines, on which I run my OS X workloads, VMware Photon #CloudNativeApps machines, the DevOps management tools and the vRealize Automation deployed blueprints. This cluster was initially purchased and conceived as a management cluster, but the majority of my workload consists of management, monitoring, analysis and infrastructure loads, so it just made sense to swap the Compute and Management clusters around and use the smaller one for Compute.

Compute_Cluster

Compute Cluster

The original cluster, composed of the three Supermicro X9SRH-7TF hosts described in my Homelab 2014 article (more build pictures here), gave me some small issues.

2014

Homelab in December 2014.

I’ve found that the dual 10GbE X540 chipset on the motherboard heats up a bit more than expected, and more than once (5x) I lost the integrated dual 10GbE adapters on one of my hosts, requiring a host power-off for ~20 minutes to let it cool down. In addition, a single 16GB DDR3 DIMM was causing one host to freeze once every ~12 days. All the hosts have run extensive 48-hour memtest86+ checks, but nothing was spotted. When a frozen VSAN host rejoins the cluster you see the re-synchronization of the data, and at that time I’m glad to have a 10GbE network switch. In the end, I followed a best practice for VSAN clusters and extended the cluster to four hosts.

At the beginning of February I added a single Supermicro X10SRH-CLN4F server with an Intel Xeon E5-2630v3 (8 cores @ 2.4GHz) and 64GB of DDR4 memory to the cluster. The Supermicro X10SRH-CLN4F comes with four Intel Gigabit ports and an integrated LSI 3008 SAS 12Gb/s adapter. I also added an Intel X540-T2 dual 10GbE adapter to bring it in line with the first three nodes.

esx01

vSphere 6.0 on Supermicro X10SRH-CLN4F

Having a fourth host means scaling up the VSAN Cluster with an additional SSD and two 4TB SAS drives.

In the past month, the pricing of the Samsung 845DC Pro SSD has dropped into the $1/GB range. The Samsung 845DC Pro is rated at 10 DWPD (Drive-Writes-Per-Day) or 7300 TBW (TeraBytes Written in 5 years), and its performance is documented at 50’000 sustained write IOPS (4K) with a write IOPS consistency of 95% [Reference: Samsung 845DC Pro PDF, and the thessdreview article]. A fair warning for other people looking at the Samsung SSD 845DC Pro: it is not on the VMware VSAN Hardware Compatibility List.
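As a quick sanity check of those endurance figures, the TBW value follows directly from the DWPD rating over the 5-year warranty period. The sketch below assumes the 400GB model of the 845DC Pro, since that is the capacity that lines up with the quoted 7300 TBW; the figures themselves come from the Samsung datasheet referenced above.

```python
# Drive endurance: TeraBytes Written over the warranty period from a DWPD rating.
def tbw(capacity_gb, dwpd, years=5):
    return capacity_gb / 1000 * dwpd * 365 * years

# Assuming the 400 GB Samsung 845DC Pro (my assumption, not stated above):
print(tbw(400, 10))   # -> 7300.0, matching the quoted 10 DWPD / 7300 TBW
```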

Here is a screenshot of the disk group layout of the VSAN Cluster.

vsan_disk_group

VSAN Disk Management

The resulting VSAN configuration now provides 28TB of usable space.

vsan
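As a back-of-the-envelope check on that figure: four hosts each contributing two 4TB SAS capacity disks gives 32TB raw, which vSphere reports as roughly 29TiB before the VSAN on-disk overhead. A small sketch of that arithmetic, under those assumptions:

```python
# Rough VSAN raw-capacity arithmetic for this cluster (assumes 2 x 4 TB capacity
# disks per host across 4 hosts; the cache SSDs contribute no usable capacity).
hosts, disks_per_host, disk_tb = 4, 2, 4.0
raw_tb = hosts * disks_per_host * disk_tb        # 32.0 "marketing" TB (10^12 bytes)
raw_tib = raw_tb * 1e12 / 2**40                  # ~29.1 TiB as reported by vSphere
print(raw_tb, round(raw_tib, 1))
# With the default FTT=1 mirroring policy, effective space for VM data is
# roughly half of the raw datastore capacity.
```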

Here is a screenshot of the current Management Cluster.

Management Cluster

This cluster, having grown, is now also generating additional heat. It has been relocated to a colder room, and I had a three-phase 240V 16A electrical line put in.

Management Cluster (April 2015)

Management Cluster (April 2015)

My external storage is still composed of two Synology arrays. An old DS1010+ and a more recent DS1813+ with a DX513 extension. At this point, 70% of my virtual machine datasets are located on the VSAN datastore.

Synology DS1813+ Storage Manager

Reviewing this article, I realize this doesn’t qualify as a homelab anymore… it’s a home datacenter… guess I need a new #HomeDC hashtag…

Notes & Photos of the Homelab 2014 build

I’ve had a few questions about my Homelab 2014 upgrade hardware and settings, so here is a follow-up. This is just a photo collection of the various stages of the build. Compared to my previous homelabs, which were designed for a small footprint, this one isn’t; this homelab version has been built to be a quiet environment.

I started my build with only two hosts. For the cases I used the very nice Fractal Design Define R4. These are ATX chassis in a sleek black color; they can house 8x 3.5″ disks and support a lot of extra fans. Some of those fans are visible on the right side: Noctua NF-A14 FLX. For the power supply I picked up some Enermax Revolution Xt PSUs.

IMG_4584

For the CPU I went with the Intel Xeon E5-1650v2 (6 cores @ 3.5GHz) and a large Noctua NH-U12DX i4. The special thing about the NH-U12DX i4 model is that it comes with mounting brackets for the Narrow ILM socket that you find on the Supermicro X9SRH-7TF motherboard.

IMG_4591

The two Supermicro X9SRH-7TF motherboards and two add-on Intel I350-T2 dual 1Gbps network cards.

IMG_4594

Getting everything ready for the build stage.

In the next photo you will see quite a large assortment of parts. There are five small yet long-lasting Intel DC S3700 100GB SSDs, 8x Seagate Constellation 3TB disks, some LSI HBA adapters like the LSI 9207-8i and LSI 9300-8i, and two Mellanox ConnectX-3 VPI dual-port 40/56Gbps InfiniBand and Ethernet adapters that I got for a steal (~$320 USD) on eBay last summer.

IMG_4595

Remember that if you only have two hosts with 10Gbps or 40Gbps Ethernet, you can build a point-to-point config without having to purchase a network switch. These ConnectX-3 VPI adapters are recognized as 40Gbps Ethernet NICs by vSphere 5.5.

Let’s have a closer look at the Fractal Design Define R4 chassis.

Fractal Design Define R4 Front

Fractal Design Define R4 Front

The Fractal Design Define R4 has two 14cm fans, one in the front and one in the back. I’m replacing the back one with a Noctua NF-A14 FLX, and I put another one in the top of the chassis to extract the warm air out the top.

The inside of the chassis has a nice feel, with easy access to the various elements, space for 8x 3.5″ disks in the front, and room to route the power cables along the other side of the chassis.

Fractal Design Define R4 Inside

Fractal Design Define R4 Inside

A few years ago, I bought a very nice yet expensive Tyan dual-processor motherboard and installed it with all the components before getting around to placing the CPUs on the motherboard. It had bent pins under the socket cover. This is something motherboard manufacturers and distributors do not cover under warranty. That was an expensive lesson, and it was the end of my Tyan allegiance. Since then I have moved to Supermicro.

LGA2011 socket close-up. Always check the pins for damage.

LGA2011 socket close-up. Always check the pins for damage.

Here is a close-up of the Supermicro X9SRH-7TF.

Supermicro X9SRH-7TF

Supermicro X9SRH-7TF

I now always put the CPU on the motherboard before the motherboard goes into the chassis. Note the Narrow ILM socket for the cooler in the next picture.

Intel Xeon E5-1650v2 and Narrow ILM

Intel Xeon E5-1650v2 and Narrow ILM

Here is the difference between the Fractal Design Silent Series R2 fan and the Noctua NF-A14 FLX.

Fractal Design Silent Series R2 & Noctua NF-A14 FLX

Fractal Design Silent Series R2 & Noctua NF-A14 FLX

What I like about the Noctua NF-A14 FLX are the rubber hold-fasts that replace the screws holding the fan; that is one more way to keep items in a chassis from vibrating and making noise. Also, the Noctua NF-A14 FLX runs at 1200RPM by default, but it comes with two in-line Low-Noise Adapters (LNA) that can bring the speed down to 1000RPM or 800RPM. Fewer rotations equals less noise.

Noctua NF-A14 FLX Details

Noctua NF-A14 FLX Details

Putting the motherboard in the Chassis.

IMG_4623

Now we need to modify the mounting brackets for the CPU cooler. The Noctua NH-U12DX i4 comes with Narrow ILM brackets that can replace the standard ones. In the picture below, the top bracket is the Narrow ILM holder, while the bottom one still needs to be replaced.

IMG_4621

And a close up of everything installed in the Chassis.

IMG_4629

To hold the SSDs in the chassis, I’m using an Icy Dock MB996SP-6SB, which holds multiple SSDs in a single 5.25″ front bay. As SSDs don’t heat up like 2.5″ HDDs, you can choose to cut the power to its fan.

IMG_4611

The Icy Dock MB996SP-6SB gives the front of the chassis a nice look.

IMG_4631

How does it look inside… okay, honestly, I have tied up the SATA cables since the build.

IMG_4632

 

Here is a picture of my 2nd vSphere host during the build. You can see the cabling is done better here.

IMG_4647

 

The two Mellanox ConnectX-3 VPI 40/56Gbps cards I have were half-height adapters, so I just had to adapt the brackets a little bit so that the 40Gbps NICs were firmly secured in the chassis.

IMG_4658

Here is the Homelab 2014 after the first build.

IMG_4648

 

At the end of August 2014, I got a new core network switch to expand the homelab: the Cisco SG500XG-8F8T, a 16-port 10Gb Ethernet switch. Eight ports are RJ45, eight are SFP+, and there is one additional port for management.

Cisco SG500XG-8F8T

Cisco SG500XG-8F8T

I built a third vSphere host using the same config as the first ones. And here is the current 2014 homelab.

Homelab 2014

Homelab 2014

And if you want to hear how much noise it makes at home, check out this YouTube movie. I used the dBUltraPro app on the iPad to measure the noise level.

And this page would not be complete if it didn’t have a vCenter cluster screenshot.

Homelab 2014 Cluster

The homelab shift…

I believe that we are at a point in time where we will see a shift in vSphere homelab designs.

One homelab design which I see becoming more and more popular is the nested homelab, using either a VMware Workstation or VMware Fusion base. There are already a lot of great blogs on nested homelabs (William Lam), and I must at least mention the excellent AutoLab project. AutoLab is a quick and easy way to build a vSphere environment for testing and learning, and the latest release of AutoLab supports the vSphere 5.5 release.

The other homelab design is a dedicated homelab. Some of the solutions that people want to test in their homelabs are becoming larger, with more components (Horizon, vCAC), requiring more resources. It is painful to admit, but I believe the dedicated homelab is heading in a more expensive direction.

Let me explain my view with these two points.

The first one, and the more recent one, is that if you want to lab Virtual SAN, you need to spend a non-negligible amount of money on your lab. You need to invest in at least three SSDs across three hosts, and in a storage controller that is on the VMware VSAN Hardware Compatibility List.

Recently Duncan Epping mentioned once again that, unfortunately, the Advanced Host Controller Interface (AHCI) standard for SATA is not supported with VSAN, and you can lose the integrity of your VSAN storage. That is something you don’t want to happen in production, losing hours of your precious time spent configuring VMs. Therefore, if you want to lab Virtual SAN, you will need to get a storage controller that is supported. This will cost money and will limit the whitebox motherboards that support VSAN without add-on cards. I really hope that the AHCI standard will be supported in the near future, but there is no guarantee.

The second one, and the one I see as a serious trend, is network driver support. The network drivers used in most homelab computers are not updated for the current release of vSphere (5.5) and don’t have a bright future with upcoming vSphere releases.

With vSphere 5.5, VMware has started its migration to a new Native Driver Architecture and is slowly moving away from the Linux kernel drivers that are plugged into the VMkernel using shims (great blog entry by Andreas Peetz on the Native Driver Architecture).

All users that need the Realtek R8168 driver in the current vSphere 5.5 release have to extract the driver from the latest vSphere 5.1 offline bundle and inject the .vib driver into the vSphere 5.5 ISO file. You can read more in the popular article “Adding Realtek R8168 Driver to ESXi 5.5.0 ISO“.

My homelab 2013 implementation uses these Realtek network cards, and the driver works well with my Shuttle XH61v. But if you take a closer peek at the many replies to my article, a big trend seems to emerge. People use a lot of different Realtek NICs in their computers, and they have to use these R8168/R8169 drivers. Yet these drivers don’t work well for everyone. I get a lot of queries about why the drivers stop working or are slow, but hey, I’m just an administrator that baked a driver into the vSphere ISO, I’m not a driver developer.

vSphere is a product aimed at large enterprises, so driver development priority is to be expected for that market. VMware seems to have dropped or lagged on the development of these non-enterprise drivers. I don’t believe we will see further development of these Realtek drivers from the VMware development team; only Realtek could really pick up this job.

This brings me to the fact that, going forward, people will need to move to more professional computers/workstations and controllers if they want to keep using and learning vSphere at home on a dedicated homelab. I really hope to be proven wrong here, so you are most welcome to reply and tell me that I’m completely wrong.

 


 

 

28/03/2014 Update: some spelling corrections.

VSAN Observer showing Degraded status…

This is just a quick follow-up on my previous “Using VSAN Observer in vCenter 5.5” post. As mentioned recently by Duncan Epping (@DuncanYB) in his blog entry Virtual SAN news flash pt 1, the VSAN engineers have done a full root-cause analysis of the AHCI controller issues that have been reported recently, but the fix is not out yet. As a precaution, and because I use the AHCI chipset in my homelab servers, I have not scaled up my usage of VSAN. I have been closely monitoring the VMs I have deployed on the VSAN datastore.

VSAN Observer DEGRADED status on a host

VSAN Observer degraded

This is curious, as neither the vSphere Web Client nor the vSphere Client on Windows has reported anything at a high level. No alarms, as can be seen from the following two screenshots.

VSAN Virtual Disks

VSAN Virtual Disks

To catch any glimpse of an error, you need to drill deeper into the hard disk view to see the following.

VSAN Virtual Disks Expanded

VSAN Disk Groups

VSAN Disk Groups

 

So what to do in this case? Well, I tried to activate Maintenance Mode and migrate the data from the degraded ESXi host to another one.

Virtual SAN data migration

There are three modes for putting a host in the Virtual SAN cluster into Maintenance Mode (a small pyVmomi sketch mapping to these options follows the list). They are the following:

  1. Full data migration: Virtual SAN migrates all data that resides on this host. This option results in the largest amount of data transfer and consumes the most time and resources.
  2. Ensure accessibility: Virtual SAN ensures that all virtual machines on this host will remain accessible if the host is shut down or removed from the cluster. Only partial data migration is needed. This is the default option.
  3. No data migration: Virtual SAN will not migrate any data from this host. Some virtual machines might become inaccessible if the host is shut down or removed from the cluster.
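For reference, the same choice can be made through the vSphere API: the EnterMaintenanceMode_Task call accepts a maintenance spec whose VSAN decommission mode maps one-to-one to the three options above. Here is a minimal pyVmomi sketch; the vCenter address, credentials and host name are placeholders.

```python
# Minimal pyVmomi sketch: enter maintenance mode with an explicit VSAN
# decommission mode. vCenter address, credentials and host name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esx02.lab.local")

    # objectAction: 'evacuateAllData' = Full data migration,
    # 'ensureObjectAccessibility' = Ensure accessibility (default),
    # 'noAction' = No data migration.
    spec = vim.host.MaintenanceSpec(
        vsanMode=vim.vsan.host.DecommissionMode(
            objectAction="ensureObjectAccessibility"))
    task = host.EnterMaintenanceMode_Task(timeout=0, maintenanceSpec=spec)
    print("Enter maintenance mode task:", task.info.key)
finally:
    Disconnect(si)
```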

 

Maintenance Mode - Full Data Migration

So I selected the Full data migration option. But this didn’t work out well for me.

General VSAN fault

I had to fall back to Ensure accessibility to get the host into maintenance mode.

Unfortunately, even after a reboot of the ESXi host and its return from maintenance mode, VSAN Observer keeps telling me that my component residing on that ESXi host is still in a DEGRADED state. I guess I will have to patiently wait for the release of the AHCI controller VSAN fix and see how it performs then.

 

Open Questions:

  • Is VSAN Observer picking up some extra info that is not raised by vCenter Server 5.5?
  • Is the info from vCenter Server 5.5 not presented properly in the vSphere Web Client?

 

Supporting Information.

My hosts have two gigabit network interfaces. I have created two VMkernel-VSAN interfaces in two different IP ranges, as per the recommendations. Each VMkernel-VSAN interface goes out over one physical interface and will not switch to the second one.
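To double-check that layout, here is a minimal pyVmomi sketch that lists each host's VMkernel interfaces and their IP addresses, which makes it easy to confirm the two VSAN interfaces sit in different ranges; the vCenter address and credentials are placeholders.

```python
# Minimal pyVmomi sketch: list VMkernel interfaces per host to confirm the two
# VSAN vmknics sit in different IP ranges. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for vnic in host.config.network.vnic:    # VMkernel interfaces (vmk0, vmk1, ...)
            ip = vnic.spec.ip
            print(host.name, vnic.device, ip.ipAddress, ip.subnetMask)
finally:
    Disconnect(si)
```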

2013 Homelab refresh

Preamble

It’s now 2013, and it’s time to have a peek at my homelab refresh for this year.

 

Background

In the past three years, I’ve run a very light homelab with VMware ESXi. I mainly used my workstation (Supermicro X8DTH-6F) with dual Xeon 5520s @ 2.26GHz (8 cores) and 72GB of RAM to run most of the virtual machines and for testing within VMware Workstation, and only ran domain controllers and one proxy VM on a small ESXi machine, a Shuttle XG41. This gives a lot of flexibility to run nearly all the virtual machines on a large, beefed-up workstation. There are quite a few posts on this topic on various vExpert websites (I highly recommend Eric Sloof’s Super-Workstation).

I sometimes play games (I’m married to a gamer), and when I do I have to make sure my virtual machines are powered down within VMware Workstation, as my system could crash during games, and has. Having corrupted VMs is no fun.

 

Requirements

What I want for 2013 in the homelab is a flexible environment composed of a few quiet ESXi hosts, with my larger workstation available to add new loads or test specific VM configurations. For this I need an infrastructure that is small, quiet and stable. Here are the requirements for my 2013 homelab infrastructure:

  1. Wife Acceptance Factor (WAF)
  2. Small
  3. Quiet
  4. Power Efficient

Having purchased a flat, I don’t have a technical room (nothing like my 2006 computer room) or a basement. So having a few ESXi hosts running 24 hours a day requires a high Wife Acceptance Factor. The systems have to be small and quiet. In addition, if they are power efficient, it will go easier on the utility bill.

 

Shuttle XH61V

The Shuttle XH61V is a small black desktop based on the Intel H61 chipset. It comes in a 3.5L metal case with very quiet fans. You just need to purchase the Shuttle XH61V, an Intel Socket 1155 65W processor, two memory SO-DIMMs (laptop memory) and local storage. Assembly can be done in less than 30 minutes.

Shuttle XH61V

Shuttle XH61V

The Shuttle XH61V has a bootable mSATA connector, a PCIe x1 slot, and room for two 2.5″ devices. It also comes with two gigabit network cards, Realtek 8168s, which work flawlessly but do not support jumbo frames.

Shuttle XH61V Back

Shuttle XH61V Back

For storage, I decided to boot from an mSATA device, keep an Intel SSD as a fast upper tier of local storage, and use one large hybrid 2.5″ hard disk for main storage. I do have a Synology DS1010+ on the network that provides centralized NFS storage, but I want some fast local storage for specific virtual machines. It’s still early 2013, so I have not yet upgraded my older Synology or built a new powerful and quiet Nexenta Community Edition home server. In the next image you can see that three Shuttle XH61V units take less space than a Synology DS1010+.

Three Shuttle HX61V with Synology DS1010+

VMware ESXi installation

Installing VMware ESXi is done quickly, as all the device drivers are in the ESXi 5.1 VMware-VMvisor-Installer-5.1.0-799733.x86_64.iso install CD-ROM.

ESXi 5.1 on XH61V

ESXi 5.1 on XH61V

Here is the Hardware Status for the Shuttle XH61V

ESXi XH61V Hardware Status

Here is an updated screenshot of my vSphere 5.1 homelab cluster.

Management Cluster

 

Bill of Materials (BOM)

Here is my updated bill of materials (BOM) for my ESXi nodes.

  • Shuttle XH61V
  • Intel Core i7-3770S CPU @ 3.1GHz
  • Two Kingston 8GB DDR3 SO-DIMM KVR1333D3S9/8G
  • Kingston 16GB USB 3.0 key to boot ESXi (change the BIOS setting, as you cannot boot a USB key in USB 3.0 mode)
  • Local Storage Intel SSD 525 120GB
  • Local Storage Intel SSD 520 240GB
  • Local Storage Seagate Momentus XT 750GB

Planned upgrade: I hope to get new Intel SSD 525 mSATA boot devices to replace the older Kingston SSDnow when they become available.

 

Performance & Efficiency

In my bill of materials, I selected the most powerful Intel Core i7 processor that I could fit in the Shuttle XH61V, because I’m running virtual appliances and virtual machines like vCenter Operations Manager, SQL databases and Splunk. There are less expensive Core i3 (3M cache), Core i5 (6M cache) or Core i7 (8M cache) processors that would also work great.

What is impressive is that the Shuttle XH61V comes with a 90W power adapter. We are far from the 300W mini-boxes/XPCs or even the HP MicroServer with its 150W power adapter. Only the Intel NUC comes lower, with a 65W power adapter and a single gigabit network port (@AlexGalbraith has a great series of posts on running ESXi on his Intel NUC).

Just for info, the Intel Core i7-3770S has a cpubenchmark.net score of 9312, which is really good for a small box that uses 90W.

The Shuttle XH61V is also very quiet... it’s barely a few decibels above the noise of a very quiet room. To tell you the truth… the WAF is really working, as my wife now sleeps with two running XH61Vs less than 2 meters away. And she does not notice them… 🙂

 

Pricing

The pricing for a Shuttle XH61V with 16GB of memory and a USB boot device (16GB Kingston USB 3.0) can be kept to about $350 on Newegg. What will increase the price is the performance of the LGA 1155 Socket 65W processor (from a Core i3-2130 at $130 to a Core i7-3770S at $300) and whatever additional local storage you want to put in.

vSphere 5.1 Cluster XH61V

The size of the homelab in early 2013 is a far cry from the end of 2006, when I moved out of my first flat and its dedicated computer room.

Update 18/03/2013. DirectPath I/O Configuration for Shuttle XH61v BIOS 1.04

XH61v DirectPath I/O Configuration

XH61v DirectPath I/O Configuration

 

Update 22/03/2013.  mSATA SSD Upgrade

I’ve decided to replace the Intel 525 30GB mSATA SSD that is used for booting ESXi and storing the Host Cache with a larger Intel 525 120GB mSATA SSD. This device will give me more space for the Host Cache and will also be used as a small tier for the temp scratch disk of my SQL virtual machine.

The published performance figures for the Intel 525 mSATA series are:

Capacity   Interface     Sequential Read/Write (up to)   Random 4KB Read/Write (up to)   Form Factor
30 GB      SATA 6 Gb/s   500 MB/s / 275 MB/s             5,000 IOPS / 80,000 IOPS        mSATA
60 GB      SATA 6 Gb/s   550 MB/s / 475 MB/s             15,000 IOPS / 80,000 IOPS       mSATA
120 GB     SATA 6 Gb/s   550 MB/s / 500 MB/s             25,000 IOPS / 80,000 IOPS       mSATA
180 GB     SATA 6 Gb/s   550 MB/s / 520 MB/s             50,000 IOPS / 80,000 IOPS       mSATA
240 GB     SATA 6 Gb/s   550 MB/s / 520 MB/s             50,000 IOPS / 80,000 IOPS       mSATA
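For a rough sense of what the random 4KB figures translate to as throughput, the conversion is simply IOPS times block size (my own arithmetic, not part of Intel's specification):

```python
# Random 4 KiB throughput = IOPS x block size (reported here in decimal MB/s).
def iops_to_mb_s(iops, block_kib=4):
    return iops * block_kib * 1024 / 1e6

print(iops_to_mb_s(25_000))   # 120 GB model, random read:  ~102 MB/s
print(iops_to_mb_s(80_000))   # random write:               ~328 MB/s
```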