2009-06-30

ESX 4.0 Running a VSphere Lab - Part 3

We continue our saga. Part 2 ended with configuring our cluster, and now we move on to shared storage and vMotion.
If you looked at the previous topology of my lab, you will have noticed that no interface was configured for shared storage. That was a small oversight on my part, which I corrected by adding an additional NIC. By the way, that is why I love using virtual machines as a lab - hardware does not cost anything!

I added a fourth NIC to each ESX host, with a VMkernel port on the 1.1.2.x network, named NFS. After adding the new NIC, the topology of each host looks like this:

[Screenshots: updated topology of ESX4-1 and ESX4-2]
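If you prefer the service console to the GUI, the VMkernel port can also be created with esxcfg-vswitch and esxcfg-vmknic. A rough sketch, assuming the new NIC shows up as vmnic3 and this host gets 1.1.2.101 (both are placeholder values, not necessarily what I used):

# create a new vSwitch and attach the new NIC as its uplink
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic3 vSwitch3
# add a port group named NFS and put a VMkernel interface on it
esxcfg-vswitch -A NFS vSwitch3
esxcfg-vmknic -a -i 1.1.2.101 -n 255.255.255.0 NFS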


We will now add a shared NFS volume to each server. I know that NFS is not the most popular choice for shared storage out there, but I have to say that I am extremely pleased with its performance, and in my production environment the benefits we get in ease of use, backup times, and administration have made NFS the de facto choice for all our ESX deployments.

The NFS share is hosted on an Openfiler server (well, I am exaggerating a bit - it is actually a desktop with a large disk). It is extremely stable, as you can see from the screenshot below.

[Screenshot: Openfiler uptime]
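Openfiler's shares are managed through its web UI, but under the hood it writes a standard NFS export. For reference, an /etc/exports entry that works for ESX looks roughly like this (the path and subnet here are examples, not my actual values); note that ESX mounts NFS as root, so no_root_squash is required:

# /etc/exports on the Openfiler box - example path and subnet
/mnt/vg0/nfs/share 1.1.2.0/255.255.255.0(rw,no_root_squash,sync)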

"There is more than one way to skin a cat" - so go the saying, and there is more than one way to add an NFS volume.

We can do it through the VI client.


[Screenshots: Add Storage wizard, steps 1-4]


Or we can do it from the command line on the ESX host:

esxcfg-nas -a <volume name> -o <hostname/ip> -s <share name>

or in my case

[Screenshot: the esxcfg-nas command as run in my lab]
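For illustration, a filled-in version of that command might look like the following - the label NFS_Shared, the Openfiler address 1.1.2.10, and the export path are all hypothetical stand-ins for my actual values:

# mount the NFS export as a datastore named NFS_Shared (example values)
esxcfg-nas -a NFS_Shared -o 1.1.2.10 -s /mnt/vg0/nfs/share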

And now both servers see the same shared storage.

[Screenshot: both ESX hosts showing the same NFS datastore]
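You can confirm the same thing from the service console on each host; esxcfg-nas -l lists the mounted NAS volumes, with output along these lines (label, path, and address shown are the example values from above):

esxcfg-nas -l
NFS_Shared is /mnt/vg0/nfs/share from 1.1.2.10 mounted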

And this is a diagram of my environment:

[Diagram: lab environment]

I then fired up a Windows Server VM to test vMotion.
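If the migration misbehaves, the first thing to check is that the VMkernel interfaces can actually reach each other. vmkping from the service console sends the ping over the VMkernel stack rather than the service console network (the address below is a placeholder for the other host's VMkernel IP):

# ping the peer host's VMkernel interface over the VMkernel network
vmkping 1.1.2.102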


Next up on the menu: Fault Tolerance.