Upgrading vCloud Director Cell from RHEL 5.6 to RHEL 5.7

With the release of vCloud Director 1.5.1 last night, the operating system for the vCloud Director Cell now supports Red Hat Enterprise Linux 5.7 (x86_64). If your cell is currently running Red Hat Enterprise Linux 5.6 and you want to upgrade to the most recent supported release, here are the steps. Be careful, however, not to upgrade to Red Hat Enterprise Linux 5.8, which was released on 21 February 2012: RHEL 5.8 is not on VMware's official supported list.

In the following screenshots we will use the yum update tool to make sure we upgrade to RHEL 5.7 only.

The first screenshot shows the current kernel, 2.6.18-308.el5, and the configuration of the yum.conf file, which has an explicit exclude=redhat-release-5Server* rule. We can also see the currently installed redhat-release-5Server package.

Current vCD-Cell settings for RHEL 5.6

We will now modify /etc/yum.conf so that we can download the redhat-release-5Server package for RHEL 5.7. We comment out the exclude line and immediately install the release package for RHEL 5.7.

vCD-Cell upgrading from RHEL 5.6 to RHEL 5.7

Now it is important to immediately re-enable the exclusion of redhat-release-5Server, so that you do not accidentally upgrade to RHEL 5.8.

Ensure that yum cannot retrieve RHEL 5.8

Now you can run the yum update at your own pace, confident that you are staying on the supported release of Red Hat Enterprise Linux for vCloud Director 1.5.1.
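The steps shown in the screenshots can be sketched as shell commands. This is only an illustration: it operates on a scratch copy of yum.conf rather than the real /etc/yum.conf, and the yum commands are shown as comments because the exact package version depends on your RHN channel.

```shell
# Work on a scratch copy of yum.conf; on a real cell you would edit /etc/yum.conf.
printf '[main]\nexclude=redhat-release-5Server*\n' > /tmp/yum.conf

# 1) Temporarily disable the exclude rule so the 5.7 release package becomes visible:
sed -i 's/^exclude=redhat-release-5Server\*/#&/' /tmp/yum.conf

# 2) Install the RHEL 5.7 release package (exact version string depends on your channel):
#    yum install redhat-release-5Server

# 3) Immediately re-enable the exclude rule so RHEL 5.8 cannot slip in:
sed -i 's/^#\(exclude=redhat-release-5Server\*\)/\1/' /tmp/yum.conf

# 4) Update the rest of the system at your own pace:
#    yum update
grep '^exclude' /tmp/yum.conf   # confirm the guard rule is back in place
```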


Disable RHEL 5.6 Release Upgrade on vCloud Director 1.5 Cell

VMware vCloud Director 1.5 runs on the Red Hat Enterprise Linux 5.6 platform, and VMware supports it only on that release. If you are not careful when patching the operating system of a vCloud Director 1.5 system, you could end up on a RHEL 5.7 or RHEL 5.8 release, which would break vCloud Director.

To ensure that your vCloud Director 1.5 cell stays on the Red Hat Enterprise Linux 5.6 release and downloads only patches for the operating system, we need to add a single line to the /etc/yum.conf file.

Disable RHEL 5.6 Release Upgrade

I simply add the following line in /etc/yum.conf:

exclude=redhat-release-5Server*

This will exclude all newer Red Hat release packages from getting installed by yum and the Red Hat Network.

I hope this will save you some unneeded trouble.


VMware View 3.1 & VMware View Open Client 3.1

VMware released VMware View 3.1 on 27 May 2009 and VMware View Open Client 3.1 on 5 June.

The only part missing from the open source client is USB redirection. For Linux clients, the missing USB features will be available only through the thin-client images that third parties will release for their respective thin clients (like HP for the HP gt7725), but these will not be out immediately.


We’ve managed to shake a few more bugs out of the Beta release, which
means it’s time to announce that VMware View Open Client 3.1 is now
ready for general use.

This release includes many small bug fixes, as well as a few new features:

* Smart Card authentication support (see README.txt for details)
* Ctrl-Alt-Del will bring up a dialog letting you disconnect from
hung or unresponsive desktops
* Improved support for multiple monitors (see README.txt for details)
* A few UI improvements
* Internationalization support (using GNU gettext)
* Sound forwarding enabled by default
* Ability to specify more options to pass along to rdesktop
* Support for USB device forwarding (when additional USB software is installed)

Unfortunately, we are not able to open source the USB support, and
therefore cannot host those files on our Google Code site. We are
working on making them available on vmware.com; hopefully they will
make it there some day. In the meantime, we appreciate your patience.

More information can be found in the README.txt file included in each
downloaded package.

Packages for RPM and Debian-based distributions, as well as binary and
source tarballs, can be downloaded from the Google Code site:


Please report any issues you find to:


Thanks and Let’s Go Pens,

Your View Open Client Team

VMware View Open Client 2.1.1 (Test Build 153227) released

The team writing the VMware View Open Client has released a new test build of the 2.1.1 client. This build fixes a few library-linking issues, among other fixes. The team announced that the client should build fine on 64-bit Linux systems as well as on Mac OS X.

I can confirm that it runs great on my Fedora 10 (x86-64). I compiled the source code and, ten minutes later, connected to a virtual machine running in a VMware View infrastructure on the other side of the world (Asia).

I had to add a single configure switch during my installation.

./configure --with-boost-libdir=/usr/lib64

And that's all. You can visit the VMware View Open Client group on Google Code to retrieve the latest version.

Installing Adobe Flash on Fedora 10 (x86-64)

There are two ways to install support for Adobe Flash on your Fedora 10 (x86-64) system. You can install the i386 version of the released Adobe Flash Player (latest is flash-plugin.i386 0:) or the alpha release of the Adobe Flash Player for x86-64 (latest is libflashplayer-10.0.d21.1.linux-x86_64.so.tar.gz).

Here I will provide the solution for both versions.

One uses a yum repo channel pointing to the Adobe website; the other downloads the .rpm file directly.

1) Preparing the Mozilla Plugins & needed libraries

  1. mkdir -p /usr/lib/mozilla/plugins
  2. yum install nspluginwrapper.{i386,x86_64} pulseaudio-libs.i386

2a) Using the yum adobe-linux-i386.repo channel

  1. sudo -i
  2. rpm -ivh http://linuxdownload.adobe.com/adobe-release/adobe-release-i386-1.0-1.noarch.rpm
  3. rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-adobe-linux
  4. yum install flash-plugin

2b) Using the flash-plugin- without the adobe yum channel


3) Mozilla Plugin Config & Restart

  1. mozilla-plugin-config -i -g -v

Restart Firefox and enjoy ;)

Tip: if you experience any problems with sound, install alsa-plugins-pulseaudio, restart Firefox, and try again:

  1. yum install alsa-plugins-pulseaudio.i386

let me know how it goes…



Sager 9262 & NVIDIA Quadro FX3700m & NVIDIA Linux Binary Driver performance issue.

Okay, this probably affects not only the Sager 9262 and the Quadro FX3700m, but this is the only platform I have right now where I can identify and reproduce the problem. I hope other Sager 9262 and/or 9800M GTX users running Linux can also confirm this issue.

The problem stems from the PowerMizer feature, which allows the graphics card to scale its performance. The Quadro FX3700M (1024MB) (550MHz/799MHz) that shipped in my Sager 9262 last week has four performance levels with scaling NV clock and memory clock:

  • Level 0: 200MHz & 100MHz
  • Level 1: 275MHz & 301MHz
  • Level 2: 383MHz & 301MHz
  • Level 3: 550MHz & 799MHz

Unfortunately, with the latest 177.82 or 180.11 (Beta) Linux (x86-64) binary drivers, I cannot get the card running above Performance Level 1. I am currently using a script found on the nvnews forums to artificially keep the graphics card running at a higher performance level. There are also multiple posts in the Phoronix forums about this issue.

Here is a screenshot of my nvidia-settings showing the performance level of the FX3700m while running OpenGL benchmarks. As you can see, it is stuck at Performance Level 1.

So right now, because the nvidia binary drivers do NOT support the PowerMizer feature properly, I can use less than 50% of my graphics card's performance. This is a very expensive setback for someone who invested in an nvidia 9800M GTX or FX3700M graphics card.

I’m lucky that I’m not rendering on this laptop, and that I can wait for nvidia to get their act together and supply proper PowerMizer drivers.


iSCSI targets ordering

On Thu, 2008-01-17 at 15:18 +0100, Klemens Kittan wrote:
> My question is will this be the order even if i reboot? Obviously the order of the nodes defines the order of the session. Will allways it be the same order?
I don't think you can get a guarantee of always having the same ordering. It is much easier to create custom udev rules.


KERNEL=="sd[b-z][1-9]", BUS=="scsi", SYSFS{serial}=="00000000014defbe2755", NAME="iscsi1", SYMLINK+="some_name1"
KERNEL=="sd[b-z][1-9]", BUS=="scsi", SYSFS{serial}=="00000000014defbe2756", NAME="iscsi2", SYMLINK+="some_name2"
KERNEL=="sd[b-z][1-9]", BUS=="scsi", SYSFS{serial}=="00000000014defbe2757", NAME="iscsi3", SYMLINK+="some_name3"
KERNEL=="sd[b-z][1-9]", BUS=="scsi", SYSFS{serial}=="00000000014defbe2758", NAME="iscsi4", SYMLINK+="some_name4"

And get the SYSFS serial number for your iSCSI disk using the udevinfo command:
udevinfo -a -p $(udevinfo -q path -n /dev/sdd)

udev gives you a lot of flexibility. Give it a try.
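To show where the serial values in the rules above come from, here is a hypothetical excerpt of udevinfo output (the device path and attribute values are illustrative, written to a scratch file so the grep step can be demonstrated):

```shell
# Fake excerpt of udevinfo output for /dev/sdd, saved for illustration.
# On a real system you would run:
#   udevinfo -a -p $(udevinfo -q path -n /dev/sdd)
cat > /tmp/udevinfo.out <<'EOF'
  looking at device '/block/sdd':
    KERNEL=="sdd"
    BUS=="scsi"
    SYSFS{serial}=="00000000014defbe2755"
EOF

# Pull out just the serial attribute, ready to paste into the udev rule:
grep 'SYSFS{serial}' /tmp/udevinfo.out
```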


Zattoo for Linux (x86-64)

Just got back from a long weekend, and I found a nice news item waiting in my email box: the Zattoo client is now available for Linux. The Zattoo client is a peer-to-peer client that lets the user watch a live TV channel (out of a growing selection of television channels).

While so far it is only released for Linux in x86 (32-bit) format for three different distributions (Ubuntu 6.10, Fedora Core 6, and OpenSuse 10.2), it can quickly be adapted to other distros. I was able to get it running without much trouble (I just had to add two libraries) on my Red Hat Enterprise Linux 5 (x86-64) system. Here are the few steps needed to get it running after downloading the binary from the Zattoo download pages.


Create two Symbolic Links:

ln -s /lib/libssl.so.0.9.8b /lib/libssl.so.0.9.8
ln -s /lib/libcrypto.so.0.9.8b /lib/libcrypto.so.0.9.8

In addition, it requires two libraries that were not on my configuration: the gtkglext library for i386, which I found already compiled for rhel5-i386, and the libfaad library found in the faad2 package for i386. I also created an ldconfig entry so Zattoo can find its libraries; under RHEL5 I use the ld.so.conf.d directory.


Edit /etc/ld.so.conf.d/zattoo.conf :


When I didn't do this, I was getting the following error:


zattoo_player: symbol lookup error: zattoo_player: undefined symbol: faacDecOpen

Another list of people commenting on Zattoo for Linux is available on this more official blog.

Enabling Virtual Machine Interface (VMI) in VMware Workstation 6.0 & Ubuntu 7.04 (i386)

The Virtual Machine Interface (VMI) support provided in VMware Workstation 6.0 currently works only with the Ubuntu Feisty Fawn 7.04 distro, and it requires the i386 version, not x86-64. To check whether your kernel has VMI paravirtual support enabled, you can check the kernel compile config.

# grep VMI /boot/config-2.6.20-15-server

CONFIG_VMI=y

There isn't much visible inside the virtual machine when it is running with paravirtual kernel support. The only quick check I've found so far is the APIC timer interrupt. In a Virtual Machine Interface (VMI) enabled machine, check the following:


# grep VMI /proc/interrupts

0   74    IO-APIC-Edge            VMI-alarm

The normal APIC timer function has been replaced by a VMI-alarm function.

It goes without saying that you need to activate VMI paravirtual kernel support in the virtual machine's configuration, in the Options/Advanced section.
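For reference, enabling it by hand amounts to adding a paravirtualization flag to the virtual machine's .vmx file. A minimal sketch, assuming the vmi.present option name (normally the UI sets this for you):

```
# Added to the virtual machine's .vmx configuration file
# (option name assumed; normally set via Options/Advanced in the UI):
vmi.present = "TRUE"
```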

VMware Workstation 6.0 for Linux

VMware released VMware Workstation 6.0 yesterday, the sixth generation of the Workstation virtualization product. This version brings enhancements to the virtual devices and connectivity for virtual machines (USB 2.0 support, more network cards, multiple-display support), seamlessly runs both 32-bit and 64-bit (x86-64) environments on the same host, supports running virtual machines in the background with headless operation, and has enhanced support for developers.

Up to this point, nothing earth-shattering, right? Well, there are two new features that VMware Workstation 6.0 brings to virtualization:

  • Virtual Machine Interface (VMI) support (experimental): VMware Workstation 6.0 is the first virtualization platform to allow execution of paravirtualized guest operating systems that implement the VMI interface. Please note that VMI configuration is only available in i386 kernels for the moment. x86-64 will come.
  • Continuous virtual machine record and replay (experimental): Users can record the execution of a virtual machine, including all inputs, outputs and decisions made along the way. On demand, the user can go “back in time” to the start of the recording and replay execution, guaranteeing that the virtual machine will perform exactly the same operations every time and ensuring bugs can be reproduced and resolved.

Having taken part in the two beta releases and the release candidate of the Workstation 6.0 product, I immediately upgraded my Workstation 5.0 for Linux license to the new version last night. This is an amazing product!