VMware vSphere 5.5 Update


This article is supplemental to my earlier document (VMware vSphere HOWTO), which covers vSphere 5 in detail.

The hardware used is shown below.

Figure: vSphere 5 hardware configuration

This brief posting only deals with the new vSphere Web Client (which is now recommended by VMware). This example shows how to set up hosts and a datacenter, and how to create an iSCSI-based Datastore. It does not specifically cover setting up a Windows Domain Controller, which is required.

After performing the Simple Install from the vCenter installer, launch the client using the icon, or the tile if using Windows 2012.

Initially use administrator@vsphere.local for the login and use the password that was supplied when installing vCenter Single Sign-On.

The system will authenticate and should bring up a screen similar to that shown below:

To get started add a datacenter by selecting the <Click here> link as shown in the figure above.

At the next screen select <Create Datacenter>

Enter a name for the new datacenter and select <OK>. The new datacenter should be created and appear in the left-hand frame. The next step is to add host(s) to the datacenter. Select <Add a Host> from the screen shown below:

Enter the host’s IP address and select <Next>.

Authenticate and select <Next>

Select <Yes> to connect.

Select <Next> and finally <Finish> to add the host. The newly added host should appear as part of the datacenter.

Adding an iSCSI Adapter

From the <Home Screen> select <Hosts and Clusters>

Select the first host and then Manage -> Storage

Select the + icon to add a software iSCSI adapter.

The newly added adapter will show up in the <Storage Adapters> screen.

Now add the target from the screen below.

Select <OK> and then issue a rescan by clicking on the icon shown below.

Select <Storage Devices> to see the newly added iSCSI target LUNs.

Now select the Home tab by right-clicking on the tab at the top left; this will show the history of events.

Select <Storage> from the Inventory pane.

Next select Datastores and select the <Create a new Datastore> icon.

Select <AJs Datacenter> and then <Next>.

Select <VMFS> for the file system type.

Next, name the Datastore and select the host that was previously configured for access to the target.

If there are no legacy considerations then it is recommended to use VMFS 5.

Use the full capacity.

Finally select <Finish> to create the new Datastore.


Red Hat Cluster HOWTO


Setting up a Red Hat Cluster

  1. Hardware configuration

    The installation hardware used for this configuration is shown below; however, a lower-cost installation could just as easily feature Virtual Machines along with a software-emulated iSCSI target.

  2. Actual hardware used

  • 2 x x86 64-bit cluster nodes running Red Hat 6.4, each with two GigE NICs
  • 1 x DHCP server
  • 1 x Sharable iSCSI target device (Netgear ReadyNAS)
  • 1 x Network Switch

  1. Installation

    Start the installation of Red Hat V6 server on both of the cluster nodes and, at the screen shown below, ensure that (at least) the highlighted options are selected. In addition, ensure that the iSCSI components are added.

    At the customize screen select other components (such as the GUI environment) as necessary.

    When a package is highlighted, it can be selected to drill down further; here additional High Availability components have been added.

    Repeat the installation for the second node.

  2. Post Installation Tasks

  3. Network configuration

    In this case the network has been set up with two NICs per server. Network eth0 has been allocated by DHCP and eth1 (which will be used for a crossover cable to the other server) has been statically assigned as per the table below:

    Host                  eth0    eth1
    redhatclusternode1    DHCP    192.168.10.10
    redhatclusternode2    DHCP    192.168.10.20
    iSCSI device          DHCP

    In this particular installation the /etc/sysconfig/network-scripts/ifcfg-eth0 file looks like:
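    A representative pair of interface files for redhatclusternode1, matching the addressing in the table above, would look something like the following (the HWADDR and UUID lines from the actual installation are omitted and the /24 netmask is an assumption):

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (DHCP)
    DEVICE=eth0
    TYPE=Ethernet
    ONBOOT=yes
    BOOTPROTO=dhcp

    # /etc/sysconfig/network-scripts/ifcfg-eth1 (static crossover link)
    DEVICE=eth1
    TYPE=Ethernet
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.168.10.10
    NETMASK=255.255.255.0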

    Note: The Network Manager program can be used to configure the network; however, it must be disabled when the cluster is installed.

  4. Cluster configuration

  5. Enabling Ricci

    After the Red Hat installation has been completed and the network configured, open a terminal and start the ricci service.
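    For example:

    service ricci start
    chkconfig ricci on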

  6. Enabling luci

    Next start the luci service.
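    For example:

    service luci start
    chkconfig luci on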

  7. Setting passwords for luci and ricci

    Add passwords for ricci and luci.
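    One approach is to set a password for each of the two service accounts (luci will later prompt for the ricci password when nodes are added to the cluster):

    passwd ricci
    passwd luci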

  8. Starting the cluster manager

    Start the cluster manager with the command:

    service cman start

    Note: if the Network Manager is running, an error message will be generated as shown below:

  9. Disabling Network Manager

    If the Network Manager application is enabled it can be disabled by entering the commands:

    service NetworkManager stop

    chkconfig NetworkManager off

  10. Starting the cluster configuration

    As shown above start a browser session and point it at https://redhatclusternode1:8084

    Note if you used a different hostname then substitute it or the machine’s IP address in the URL above.

    Note cluster information is located in /etc/cluster/cluster.conf

  11. Creating a new cluster

    After logging in select <Manage Clusters> and then select <Create> to create a new cluster.

    Name the cluster, add in the node name and either use <Locally Installed Packages> or <download packages>. Ensure that <Enable Shared Storage Support> is checked.

    The cluster should now show that the node redhatclusternode1 has been added to the Cluster – redhatcluster:

  12. Adding additional cluster nodes

    To add the second node select the cluster and this time choose <Add>.

    The second node should be joined to the cluster after a short delay:

    If there are any issues configuring the cluster, try restarting the following services: ricci, luci and cman, taking note of and correcting any error messages that may occur.
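    For example:

    service ricci restart
    service luci restart
    service cman restart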

  13. Adding an existing cluster

    The cluster view can be added to the second node’s browser view by using the <Add> button.

  14. Configuring an iSCSI target device

    Note: The iSCSI packages should have been selected during the installation; if not, they will need to be obtained and installed.

    Configure the iSCSI daemons and start the services:
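    For example:

    chkconfig iscsid on
    chkconfig iscsi on
    service iscsid start
    service iscsi start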

    A number of iSCSI targets have been previously created for use with the cluster nodes. The IP address of the iSCSI target device is 192.168.1.199. Use the following command to discover the targets (replacing the IP address below with your correct portal address):
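    iscsiadm -m discovery -t sendtargets -p 192.168.1.199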

    The targets have been discovered and will be set up as logical volumes for use by the cluster. Now log in to the targets using the IQN information above:
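    Using the IQN reported by the discovery step (shown here as a placeholder):

    iscsiadm -m node -T <target-iqn> -p 192.168.1.199 --login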

    The session information can be listed by:
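    iscsiadm -m session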

    The tail command can be used to show the device name:
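    tail /var/log/messages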



    Here the device names are sdd, sde, sdf and sdg. They should now also show up with the cat /proc/partitions command.

    Repeat the steps for the other node (redhatclusternode1)

    Note the device names may be different on the other node.

  15. Creating logical volumes

    First create three volume groups.
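    As a sketch, assuming the iSCSI LUNs appeared as /dev/sdd, /dev/sde and /dev/sdf and using example volume group names vg1, vg2 and vg3:

    pvcreate /dev/sdd /dev/sde /dev/sdf
    vgcreate vg1 /dev/sdd
    vgcreate vg2 /dev/sde
    vgcreate vg3 /dev/sdf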

    Display the volume groups using the vgdisplay command.

    The next task is to create three logical volumes from the three volume groups:
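    Continuing with the example names above and allocating the full space in each volume group to a single logical volume:

    lvcreate -l 100%FREE -n lv1 vg1
    lvcreate -l 100%FREE -n lv2 vg2
    lvcreate -l 100%FREE -n lv3 vg3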

    Show the volumes using lvdisplay.

  16. Creating a GFS2 file system

    Format the logical volumes using the GFS2 file system by issuing the following command:
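    For example, for the first volume (the cluster name redhatcluster must match the name given when the cluster was created, gfs1 is an example file system name and -j 2 creates one journal per node):

    mkfs.gfs2 -p lock_dlm -t redhatcluster:gfs1 -j 2 /dev/vg1/lv1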

    Note: -j refers to the number of journals; one journal is needed for each node that will mount the file system.

    The next step is to create mount points. Do this for both of the nodes.
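    For example (the mount point names are arbitrary):

    mkdir -p /mnt/gfs1 /mnt/gfs2 /mnt/gfs3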

  17. Failover Domain

    A Failover Domain refers to a group of nodes within the cluster that are eligible to run a given service. In our case both nodes can fulfill this function. Larger clusters may want to restrict this ability to only certain nodes.

    Select <Failover Domains> -> <Add>. Check the member boxes for both nodes, name the Failover Domain and select <Create>.


    Again the Failover Domain will show up on the second node automatically. The settings can be changed later if required.


  18. Adding a Resource

    In this example a Samba server will be created. Select the <Resources> tab and then select <Add>. From the drop-down menu select GFS2 as the Resource and fill in the fields with the appropriate information. Select <Submit>.

    Enter the data into the fields below and select <Submit>.

    Next add an IP Resource:

    Configure an IP address and Netmask:

    Next add a Samba Server Resource:

    The Resources screen should now show the three Resources:

    Now that the Resources have been added, the next step is to add a service group. Select <Service Groups> -> <Add>.

    Name the service samba and check <Automatically Start This Service>, add in the Failover Domain that was created earlier and set the <Recovery Policy> to relocate. Select <Submit>.

    After selecting <Submit>, select the samba service and then <Add Resource>.

    Under <Global Resources> select the IP address that was created earlier. Then select <Add Child Resource> on the IP Address Resource and add the GFS2 Resource from the <Global Resources> section.

    Now add a <Child Resource> to the GFS2 Resource.

    The Resource here is a Script Resource using the smb script for samba.

    Finally select <Submit>

    Samba should now be running on one of the cluster nodes; it can be tested for relocation by selecting the other node and then the start icon as shown below:

    After failover the status shows that the service is now running on redhatclusternode1.

  19. Setting up Samba access

    User alan already has an account and can be given Samba access by issuing:
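    smbpasswd -a alan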

    Now, on the Windows machine that will access the files, enter the Samba resource IP address:

    The next step is to log on.

    Now access the files and map the Network drive if required.

  20. Summary

    A basic two node Red Hat Cluster implementation has been set up. The shared storage component was implemented using an iSCSI target device. A simple Samba application was configured and shared out to a Microsoft Windows client machine.

    Note: Applications behave differently under failover conditions; some are cluster-aware and are more tolerant of failover situations.

    Other areas that should be considered in a more robust implementation are the addition of a Quorum Resource and a Fencing Device.

  21. Further information

    http://www.redhat.com

    http://www.samba.org

Citrix Xenserver Quick Start Guide


Citrix Xenserver Quick Start Guide

Alan Johnson

© Alan Johnson February 2012

Introduction to Xenserver

The purpose of this tutorial is to provide a quick start guide to the basic features of XenServer and XenCenter. The intent is to shorten the learning curve for the casual hobbyist or to provide a basic introduction to administrators who are considering deploying XenServer within their own organization. Relatively inexpensive hardware was used to generate the screenshots; the servers were sub-$500 commodity systems with i3 processors and 6GB of RAM. Netgear ReadyNAS systems were used to provide iSCSI-based storage. XenServer is freely available, although to use some of the advanced features it may be necessary to upgrade to one of the fee-based editions. At the time of writing (2012) the following variants are available:

Installation

Obtain the free Xenserver product from

http://www.citrix.com/lang/English/lp/lp_1688615.asp

Register and download, then burn the ISO image and boot a dedicated server from it. This will be the location of the bare metal hypervisor.

Note the term bare-metal is used since the hypervisor interacts directly with the hardware of the server.

In this configuration the server name is simply “Xenserver”. Follow the prompts and after installation a screen similar to that below should be shown on the server.

Post Installation

After installation the XenServer console will show its IP address under the Status Display option as shown in Figure 1.

Figure 1 Showing XenServer’s IP address

Installing XenCenter

XenCenter is the management tool for configuring XenServer. It is recommended to use this tool wherever possible. Open a browser on a regular Windows system and enter the IP address into the URL bar as shown in Figure 2. This will give two choices for the installation of XenCenter: it is possible to save an installer image for later installation or to run the installer directly from the browser. In this case the direct installation method will be chosen. Highlight the <XenCenter installer> link and select it; depending on the browser used there may be a prompt similar to that shown in Figure 3. Save the file (if necessary) and then run it.

Figure 2 Installing XenCenter

Select <Run> to execute the installer.

Note: XenCenter runs under Windows Operating Systems

Figure 3 XenCenter installation prompt

At the prompt select <Next>, then select <Next> and enter a file location.

Starting and Configuring XenCenter

Adding a server

From the Start Menu select <Citrix XenCenter>. The first task is to add a server. This is done by opening the <Home> Tab and selecting <Add a server> by clicking on the icon. Enter the information using the server’s IP address that was determined earlier from the console screen and use the password that you supplied during the server installation process. Then select <Add>.

Figure 4 Adding a server to the console

Figure 5 Connecting to the console

License Activation

The License Manager screen will now open. The free version can be activated by highlighting the server and then, from the <Activate Free XenServer> prompt, selecting <Request Activation Key>, which will open the Citrix website where the appropriate information can be entered.

Note: this activity can be delayed if desired and activated later from the <Tools> menu of the XenCenter console.

Figure 6 Activating the license

Enter the mandatory information and select <Submit>.

A license file will be sent via email. The next step is to select the <Activate Free XenServer> again and this time select <Apply Activation Key>. A prompt will be issued for the file’s location. Select the file and then select <Open>. The license should now be activated and an expiry date will be shown in the License Manager screen. Close the License Manager.

Figure 7 Applying for the Free XenServer license.

Figure 8 Applying the Activation key

Using the XenCenter Console

Figure 9 Navigating the XenCenter console

The XenCenter console will show a hierarchy corresponding to the server and its resources. Selecting each device in turn will show information related to that resource. In Figure 9 above, statistics about the server are shown relating to its memory and CPU usage etc.

Selecting local storage will show the local resources and the file system type along with capacity and other storage related parameters. Multipathing is disabled since SATA devices do not have this capability.

The tabs in the right-hand pane will vary according to context; for example, with the uppermost object <XenCenter> selected, four tabs (Home, Search, Tags and Logs) are presented.

Going back and selecting the xenserver object shows a wide range of tabs; the first tab (Search) has already been discussed. The second tab (General) gives more in-depth information relating to the selected server, such as its uptime, iSCSI IQN, memory size, version number and license details.

Figure 10 XenServer’s General properties.

Items can be expanded or collapsed by selecting the arrow to the right of each category.

The next tab, <Memory>, is not used in the free version, but there is an upgrade available which will allow the use of dynamic memory.

Figure 11 XenServer’s Memory Tab

The <Learn More> function will open a web location which explains the benefits of dynamic memory. In a commercial environment it is recommended that dynamic memory should be used. The Citrix website states that Dynamic memory control can:

Reduce costs, improve application performance and protection, and increase VM density per host server. Also known as memory ballooning, dynamic memory control allows host memory to be allocated dynamically across running VMs.

Allow new VMs to start on existing pool resources by automatically compressing the memory footprint of existing workloads

Reduce hardware costs and improve application performance by sharing unused server memory between VMs, allowing for a greater number of VMs per server host

Maintain VM and host performance and stability by never oversubscribing host memory, and by never swapping memory to slower disk-based systems

Adding Storage

The Storage Tab will allow us to create a Storage Repository which is where the Virtual Machines (VMs) will be located. In the steps following an iSCSI target will be added. The target in question is a Netgear ReadyNAS device and has been configured with IP addresses of 192.168.1.72 and 192.168.1.96. The relevant section is shown below in Figure 12.

Figure 12 Netgear iSCSI target configuration.

Both of the LUNs assigned to target xen will be used.

With the Storage tab selected, select <New SR>

Figure 13 Creating a new Storage Repository

This first example uses an iSCSI target so select <Software iSCSI> and select <Next>.

Figure 14 Using iSCSI as a new Storage Repository

Enter a Name and check or uncheck Auto-generate description according to your preferences. Select <Next>.

Figure 15 Naming the storage repository

Add the IP address of the iSCSI target; for security, CHAP can be selected. Note that CHAP stands for Challenge-Handshake Authentication Protocol and can be used to add an extra level of security by using a password (a secret). In this example CHAP will not be used. The <Discover IQNs> button can be used to find available IQNs. It is recommended to use this function as IQN names are lengthy to type in.

Note: 3260 is the default port number for iSCSI.

After discovery a number of IQNs may be found; select the target that corresponds to the correct device. Once the IQN has been selected it will be possible to send a <Discover LUNs> command. The target LUNs discovered should be LUN0 and LUN1, corresponding to the LUNs shown in Figure 12. Initially LUN0 will be configured.

Figure 16 Entering iSCSI target parameters

Select <Finish>.

The next step is to prepare the storage repository (SR). A prompt will be presented which will format the LUN.

Figure 17 Creating a new virtual disk from the iSCSI target.

Respond <Yes> and the new SR will be created. The Storage Tab will now show the newly created virtual disk. Repeat the process for the second LUN (LUN1).

Figure 18 Adding a second LUN as a new storage repository

Figure 19 Newly created virtual disk.

Networking Tab

The Networking tab will show the current network. The IP address is shown along with other parameters. The next networking task is to add a new network. There are a number of choices here. The first choice, <External Network>, will be used for regular network traffic. It is also possible to bond multiple networks together for higher performance. To create an External Network select the Networking tab and then select <Add Network>.

Figure 20 Adding a new external network

Select <External Network> and select <Next>.

Figure 21 Selecting the network type.

Name the network and type in a description. Select <Next>.

Figure 22 Naming the new network

Select a VLAN number and also increase the MTU if desired. In general it is recommended to use large frames (jumbo frames) for data throughput applications.

Figure 23 Setting the new external network parameters.

The new network looks like:

Figure 24 Showing the new external network

NIC Tab

This tab is mainly read-only, but it does allow multiple NICs to be bonded together to provide higher throughput and greater resiliency. Active-Active and Active-Passive modes are supported. After bonding, a new NIC will appear which is termed the bond master; any other NICs that are part of the bond are termed NIC slaves.

Figure 25 NIC Bonding

Console Tab

The Console tab displays the Virtual Machine's window. If no Virtual Machines are running, the console of the XenServer host can be shown, which presents a command line prompt. The command xsconsole will show a console similar to the screen output of the local XenServer.

Figure 26 XenServer console “help command” output

Figure 27 XenCenter – Output of xsconsole command

Performance Tab

This tab shows statistics related to CPU, Memory and NIC throughput.

Figure 28 Viewing Performance Graphs

The type of data that is displayed in the graphs can be modified by using the <Configure graphs> button. There are two tabs – Data Sources and Layout. Select the data sources from the <Data Sources> tab and then select and position them using the <layout> tab.

Figure 29 Selecting and arranging the performance graphs

Users Tab

This tab is used to configure Active Directory and Domain services.

Logs Tab

This tab shows the log entries and allows them to be filtered according to severity levels

  • Errors
  • Alerts
  • Actions
  • Information

The log can be cleared from this screen.

Creating a XenServer Virtual Machine

From the main menu at the top of XenCenter select <VM -> New VM>.

Figure 30 Creating a Virtual Machine

A number of templates exist which can be selected to configure the system automatically for optimal performance. If a template does not exist for the Operating System that is to be installed, then select the <Other install media> template and configure manually.

In the following example a later version of Ubuntu Linux will be used to generate a virtual machine, and since this version is not available as a template the <Other install media> template will be selected.

Figure 31 Selecting a VM template

Select <Next>, name the VM and select <Next> again.

Figure 32 Naming the VM

The next step is to select the installation media. Xen has the ability to install a VM from a library of ISO files which contain the images, from a local DVD or from the network. In this example a DVD on the server itself (xenserver) will be used.

Figure 33 Selecting the VM installation source

The VM can be associated with a home server, which means that this server will always be the one to start the VM (provided that it is available). If multiple servers are available then there will be an option to allow the VM to be started on any server that has access to the resource. In this instance the VM will be placed on server “xenserver”.

Figure 34 Choosing the VM’s home server.

Next select the quantity of Virtual CPUs and the memory that is to be allocated.

Figure 35 Allocating CPU and memory resources to the VM

The next step is to select the virtual disk to be used.

Note: A pre-defined template will normally select the virtual disk automatically.

Since a pre-defined template was not used, the virtual disk will be added and configured manually.

Figure 36 Selecting a virtual disk for the VM

Select <Add> and specify the volume and the capacity to be used.

Figure 37 Adding and configuring the virtual disk.

The new disk is now shown in the dialog below:

In the next stage a network for the VM will be configured.

Figure 38 Configuring the VM’s network

If required, the Properties can be modified as shown below. In this example a Quality of Service (QoS) limit of 2MB/s was added.

Figure 39 Modifying the VM’s properties.

Verify the settings presented and select <Finish> to create and start the new VM.

Figure 40 Completing the VM creation.

A status will be shown at the bottom of the XenCenter screen showing the VM creation progress. At this point the Console on “xenserver” will show the new VM or it can be displayed using the console tab from XenCenter.

Figure 41 Console showing the newly added Ubuntu VM

In the main tree select the newly added <Ubuntu 11.10 VM> and then select the <Console tab>. This will now display the state of the VM.

Note: this console can be undocked from XenCenter and used as an independent window.

Figure 42 Interacting with the VM through the console tab.

Continue with the installation process through the console window until Ubuntu has completed installation.

Note that as far as Ubuntu is concerned it will use the 20GB disk that we allotted to it earlier. At this point Ubuntu is communicating with virtual hardware provided by the XenServer host. During the installation it may be instructive to look at the performance (via the Performance tab) to see how much of the resources are being consumed.

Starting the new VM

From the main XenCenter console select the new VM (Ubuntu 11.10) and then click on <Start>.

Figure 43 Starting the new VM

Select the <console tab> to interact with the VM. An important part of the configuration is the installation of XenServer Tools.

Installing XenServer tools

With the VM selected, open the <General tab> to see if the tools are installed, if not click on the link to begin installation.

Figure 44 Installing XenServer tools

This will mount a device within the VM which can be interacted with.

Figure 45 Accessing Xenserver tools from the VM

Open the Linux folder and execute (or edit if necessary) the install.sh file.

Figure 46 Running XenServer tools from Linux
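If a terminal is preferred, the step above is roughly as follows, assuming the XenServer Tools CD appears inside the guest as /dev/cdrom:

sudo mount /dev/cdrom /mnt
cd /mnt/Linux
sudo ./install.sh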

Review of terminology

At this point it is useful to review some of the components that have been encountered so far.

  • XenServer – contains the Citrix XenServer Operating System (hypervisor)
  • XenCenter – console for configuring and interacting with the hypervisor.
  • Storage Repository – normally a shared storage resource that is available to multiple servers
  • Virtual disk – a slice of a storage resource that is used to house Virtual Machines and data.

Advanced Features of XenServer

Pools

Citrix uses Pools to provide a centralized view of servers and their associated storage as a single manageable entity. In general the hardware should be similar (AMD and Intel CPUs cannot be mixed, nor can CPUs with different features), but there is an add-on available to allow participation between heterogeneous servers which can mask some of the differences. Servers should be running the same version and patch revision. One of the servers is termed the Pool Master. If the Pool Master fails then another node can be designated as the new Pool Master.

Note: The free version does not allow this to happen automatically, the HA enabled version will though.

Pools are normally created with shared storage configurations.

Pool Creation

From XenCenter add in the servers that will form the pool:

Figure 47 Selecting servers for pool creation

Add in both servers

To create a pool there should be a minimum of two servers. In this example the two servers are XenAcer and xenserver-Acer2.

To form a new pool – select <New Pool>.

Figure 48 Creating a new pool

Referring to Figure 49, name the pool (AcerServerPool in this instance) and use the drop-down box to select the master server within the pool. Select additional members of the pool from the checkbox list. Here xenserver-Acer2 will be used as a second member.

Figure 49 Naming the pool, selecting the master and adding an additional member

Select <Create Pool>.

Notice how the view has changed. The servers are now part of the pool tree, and selecting the pool shows each server in the right-hand pane along with their statistics.

Figure 50 Viewing the newly created pool

Setting up NFS shared storage

The next stage is to set up storage that can be shared by the nodes in the pool. Select the pool and then with the storage tab selected, click on <New SR>.

Figure 51 Creating shared storage

The shared storage that will be used will be of type NFS.

Note that a read/write NFS share has already been set up on the Netgear storage device, as shown in Figure 52.

Figure 52 NFS read/write share on shared storage

Select NFS as the storage type and choose <Next>.

Name the Storage Repository.

Select <Next> and then enter the path followed by <Scan>.

Figure 53 Scanning the NFS path

Note: If the scan fails, verify that the network location has been correctly specified, that it is accessible and that there are no typographical errors in the entry.

Figure 54 Completing the addition of the NFS share operation

Select <Finish>. The new SR will be created.

Note that the NFS storage is at the same peer level as the two members of the pool. Also, selecting either server individually should show that the NFS storage is available to both nodes.

Figure 55 Viewing the newly added NFS storage

Creating a VM on shared storage

Select the pool and then select <New VM>.

Figure 56 Creating a new VM on the NFS shared storage

In this case a 32 bit version of Windows 7 will be installed. Select the correct template from the options provided and then select <Next>.

Figure 57 Selecting the Windows 7 32 bit template.

Name the VM and select <Next>.

Figure 58 Naming the new VM

The next stage is to select the installation media.

In this example a local DVD from one of the servers could be used, or an ISO library of images that is network accessible. In the interests of slightly more complexity the ISO library option will be used. A CIFS share has already been set up on a different server outside of the pool and allows read-only access. To select an ISO library, select the link <New ISO library> and then select <Windows File Sharing> on the new screen that opens up (Figure 59).

Figure 59 Configuring a CIFS ISO library location

Select <Next> and name the library.

Figure 60 Naming the ISO library

Figure 61 Specifying the CIFS sharename for the library

Select <Finish>. The screen will now have a new option for Installation media corresponding to the ISO library that has just been set up. The drop down box will show the possible installation locations along with the ISO images that are available for installation. Here the Windows 7 Home Basic version has been selected and will be installed.

Figure 62 Selecting an ISO image from the new library.

Select <Next>.

Figure 63 Selecting a home server for the VM

Home Server

A home server is a server that is closely associated with the VM; if a home server is nominated the system will attempt to run the VM from the home server (provided that it is available). The recommended default option is not to assign a home server, which means that the VM can run on either of the two servers that are part of the current pool.

The next stage is to allocate memory and a number of Virtual CPUs. Here 2 vCPUs will be allocated along with 2GB of memory.

Figure 64 Allocating memory and vCPUs to a VM

The Storage step selects the location of the VM.

Note: under the Shared flag it shows that NFS storage is True (Shared).

Figure 65 Selecting the VM location on NFS shared storage.

Select <Next>. The next stage will configure the networking portion of the new VM. In this case a second virtual NIC will be added.

Adding a second NIC to the VM

Note: Up to 4 virtual NICs can be added from this screen.

Figure 66 Adding a second Virtual Network Interface Card

The new screen will now show two NICs which can be configured independently.

Figure 67 New VM with two NICs

Select <Finish> to create the new VM.

The new VM will show up in the pool’s inventory.

Figure 68 Viewing the new VM within the pool

Starting the shared VM

Select the VM and then select the <console tab> and interact with the VM to install Windows in the same manner as a physical installation.

Figure 69 Installing Windows on the Virtual Machine.

VM installation and completion

After the Windows installation completes, install XenServer Tools. Once the tools installation has been completed, open Device Manager and view the virtual hardware, which includes the two virtual NICs that were installed earlier.

Other Advanced Features

The paid version of XenServer provides a number of other features such as:

High Availability

With HA, a Virtual Machine can be automatically restarted if it fails, either on the same host or on other servers within the resource pool.

Dynamic Workload Balancing

Automatically balances new loads to ensure optimum performance across the pool and works in tandem with XenMotion.

More Information

For more information on these and other features refer to http://www.citrix.com

VMware vSphere HOWTO


VMware vSphere HOWTO

Alan Johnson February 2012

alan@johnson.org

Introduction to vSphere 5

The purpose of this tutorial is to provide a quick start guide to the basic features of vSphere 5. The intent is to shorten the learning curve for the casual hobbyist or to provide a basic introduction to administrators who are considering deploying VMware within their own organization. As such the scope is limited; however, it does cover advanced features such as High Availability (HA) and performance optimization. Relatively inexpensive hardware was used to generate the screenshots; the servers were sub-$500 commodity systems with i3 processors and 6GB of RAM. A Netgear ReadyNAS system was used to provide iSCSI-based storage. The VMware ESXi 5 hypervisor is freely available, and there are different levels of cost depending on the capability of the deployment. A comparison is shown in the table on the following page.

In addition there are a number of kits available which are explained by VMware as:

vSphere Kits: vSphere Kits are all-in-one solutions that include multiple vSphere licenses and vCenter Server, enabling an organization to quickly and easily set up their vSphere environment. Kits are available in several editions that vary in terms of scalability and functionality. VMware offers two types of kits:

Essentials Kits: All-in-one solutions for small environments inclusive of virtualization and management software available in two editions—Essentials and Essentials Plus. Both Editions include vSphere processor licenses and vCenter Server for Essentials for an environment of a maximum of 3 hosts (up to 2 CPUs each) and maximum pooled vRAM capacity of 192GB. Scalability limits of Essentials Kits are product-enforced and cannot be extended other than by upgrading the entire kit to a higher-end bundle. Essentials Kits are self-contained solutions and may not be decoupled or combined with any other VMware vSphere editions.

Acceleration Kits: All-in-one convenience bundles that provide a simple way for new customers to purchase all the necessary components to set up a new VMware environment. Each Acceleration Kit consists of a number of licenses for VMware vSphere, along with a license for one instance of a VMware vCenter Server Standard. Acceleration Kits with Management add 25-VM packs of management products to the related Acceleration Kits. All kits decompose into their individual kit components after purchase.

Installing ESXi

Obtain the ESXi Hypervisor from the VMware site. The Hypervisor is really the core of the system and is managed by other components such as vCenter. It will be necessary to register first. VMware offers trial versions for 60 days. The Hypervisor is all that is required for the first part of this chapter.

Open a browser session and point it to www.VMware.com, select the <Products> TAB and under <Free Products>, select <VMware vSphere Hypervisor> and then <Download>. Register or log in as necessary.

Figure 1 Obtaining the ESXi Hypervisor

Download the ISO image and burn a DVD to create standalone boot media. Set the system’s BIOS to boot from the DVD device and follow the prompts to install ESXi.

Figure 2 Installing ESXi

Select <Enter> to continue.

Accept the license agreement and then select <F11> to continue.

Figure 3 Accepting the ESXi license agreement.

The next stage is to select a device to install ESXi onto. In this example a 40GiB disk will be used.

Figure 4 Selecting the target device for ESXi installation

Respond to other prompts such as keyboard type, and setting the root password. There will be a need to configure and partition the installation disk. After the installation has completed, the system will reboot and show a screen with a prompt to download basic management tools.

Figure 5 Obtaining the IP address for browser interaction

Installing the vSphere client

The next stage is to open a browser session and enter the address shown:

You can ignore the certificate error for the purposes of this chapter. Select <Download vSphere Client> to install the tools to configure VMware. There will be other links to documentation and VMware vCenter.

Note: These other links are external links pointing back to the VMware site.

Figure 6 Downloading the vSphere client.

Depending on the browser being used there may be different options such as running the file directly or saving it first.

Figure 7 Saving the VMware Client application

After the file has been downloaded run the application.

Figure 8 Installing the vSphere client

Running the client

Respond to the prompts and complete the installation. After the installation has completed the client can be run from the <Start Menu>.

Figure 9 Running the VMware vSphere client

When the login prompt appears enter the IP address of the ESXi system (192.168.190.128), the username <root> and the password that was assigned during the ESXi installation.

Note: In this case the login is taking place within the ESXi server.

The first login session will trigger a Security Warning; the certificate can be installed or the warning ignored.

Figure 10 Logging into the ESXi host

The first screen will pop up showing that the license will expire within 60 days (if not licensed). This can be safely ignored at this point in time.

Figure 11 Initial Client Screen

The next screen defaults to Inventory and Administration. Select the inventory icon and the ESXi host will appear in the left hand pane.

Figure 12 Selecting the ESXi host

Getting Started Tab

The first tab in the screen gives an overview of the terminology used. A host is defined here as the node running the ESXi software. The ESXi software will run the Virtual Machines that will be created shortly. It is also possible to deploy Virtual Appliances. VMware defines a virtual Appliance as a pre-built virtual machine. There are prompts within this screen which will guide the user through Virtual Machine creation or deploying Virtual Appliances.

Installing a Virtual Appliance

From the right-hand pane, with the <Getting Started> tab still open, select the option <Deploy from VA Marketplace>. To illustrate this function a small appliance, <Nostalgia>, will be used. This appliance runs a version of DR-DOS and includes some early DOS-based games.

Figure 13 Deploying a virtual machine

You can ignore the warning in this case.

Figure 14 Installing a non supported O/S in a VM

Note: the appliance is in OVF format which is a standard method of packaging. The acronym stands for Open Virtualization Format.

Select <Next>

Figure 15 Deploying an OVF Template Step 1

Name the OVF template and select <Next>.

Figure 16 Deploying an OVF template Step 2

The next stage is to select the location and format. Locations are termed Datastores and are covered in more detail later in this document. The three formats mentioned here are defined as:

Thick Provision Lazy Zeroed

Allocates all of the space and then zeros out the block on the first write. This is the fastest way to create the disk but first time writes are slower. Subsequent writes to the same block are at normal speed.

Thick Provision Eager Zeroed

Allocates all of the space and zeros out the blocks. This takes longer to create but there is no first time write penalty since the block has already been zeroed.

Thin Provision

Thin disks allocate storage as required and zero the blocks on the first write.

In this example Thick Provision Lazy Zeroed will be used since the space is small and write performance is not a concern.

Figure 17 Selecting the Disk Format

Accept the configuration and select <Finish>

Figure 18 Completing the OVF deployment

The appliance will now be downloaded and installed. It will show up as a VM under the ESXi host.

The VM Nostalgia can now be selected and powered up just as if it were a regular bootable Operating System. Note that when the VM is selected, the tabs across the top of the right-hand pane will vary according to context, so a different set of tabs will be presented.

With the Getting Started Tab selected click on the <Power on the Virtual Machine> link to start up the VM.

Note there are other ways of starting the VM such as clicking on the green right facing arrow icon above the host panel. After the link has been selected the options will change to <Power Off> and <Suspend>.

From the Tab menu select <Console> to access the Virtual Machine's (virtual) screen.

Figure 19 Viewing the VMs console.

The other way of setting up a Virtual Machine is to select the option <Create a new virtual machine> with the host selected and the <Getting Started> tab open. This will open a screen similar to that shown below:

Figure 20 Manually creating a VM

There are two options – Typical and Custom. The <Custom> option allows the user to supply more parameters during the pre-installation phase. It is also possible to tweak the parameters later so this example will use the <Typical> option. Select <Next> to get to the next screen.

Name the VM which in this case will be a version of SuSE Linux.

Figure 21 Naming the new manually created VM

The next stage is to select the destination. At this point there is only one Datastore so this will be the location to store the new VM. Select <Next>.

Figure 22 Selecting the destination for the manually created VM

There are a number of predefined templates available for common Operating Systems. Select Linux from the radio button and then choose <Novell SUSE Linux Enterprise 11 (64-bit)> from the drop-down.

Figure 23 Choosing a template for the new VM

Select <Next> and this will bring up the Network screen. Select the default adapter and then select <Next>.

Note: Other virtual Network Interface Cards (NICs) can be added later on.

Figure 24 Selecting virtual NICs

The next choice is similar to before, where a choice of disk format has to be made. The virtual disk size depends on how big the VM will be. Thin Provisioning will only assign space as it is needed.

Figure 25 Specifying capacity and format of the VM

Select <Next>. There will be a chance to edit the virtual machine settings if required, otherwise select <Finish> to complete the task.

Figure 26 Completing the VM preparation task.

The new VM will show up in the VM tree.

At this point we have really only prepared the virtual machine, so it is similar to a bare server with memory and disk space. The Operating System will have to be installed from an installation source as normal. Prior to the O/S installation the Virtual Machine settings need to be edited. This can be done by right clicking on the VM and selecting <Edit Settings>.

Editing VM settings

With the Hardware Tab open select CD/DVD drive. The drive can be the actual physical DVD drive connected to the host itself, the actual physical drive connected to the client or an ISO file which resides in a Datastore.

For this example the DVD used will be an ISO image on the ESXi machine.

Figure 27 Setting the boot device for the new VM

Uploading an ISO image to a Datastore

Before we can connect we need to set up a source; here an ISO image on the ESXi Datastore will be used. Initially the ISO file will need to be uploaded to the Datastore. To do this select the ESXi host (192.168.190.128) and then select the <Configuration tab>. Right click on the Datastore and choose <Browse Datastore>.

Figure 28 Browsing an existing Datastore

After the Datastore browser window has opened select the <Upload files to this Datastore> icon as shown and choose the file to upload. In this case the file will be the SLES ISO image.

Figure 29 Uploading a file to a Datastore

After the upload has been completed, the Datastore browser will show the image loaded.

Figure 30 Browsing a Datastore to show the uploaded image

Close the browser and then from the icon bar with the SuSE VM selected choose <Connect to <ISO image on a Datastore>.

Figure 31 Connecting to the virtual DVD ISO image in the Datastore

Go back to the VM settings and this time select the <Options tab>. Under <Boot Options> choose Boot to BIOS and select <Force BIOS setup>. This may or may not be necessary.

This will allow the boot menu to be set up to boot from DVD.

Figure 32 Modifying the virtual BIOS

The system should now begin to install SuSE. Continue to install the VM as normal.

Creating Datastores on external storage

This section will show how to add external iSCSI storage and how to create a Datastore ISO repository on the iSCSI target. The iSCSI target will reside on the 192.168.128 network.

Typically multiple connections are used with an iSCSI target to improve performance and resilience. Enabling jumbo frames will also improve performance in many scenarios. It is also recommended to put iSCSI devices on a different network than management traffic. In this configuration two NICs are available – 192.168.128.103 (vmnic0) for management traffic and 192.168.128.40 (vmnic1) for iSCSI data traffic.

Note: It is beyond the scope of this particular book to include “real world” best practices scenarios

There are three parts to creating an external iSCSI data store –

  • Creating a dedicated network
  • Adding a software iSCSI adapter
  • Attaching iSCSI target(s)

Setting up a new Virtual switch for iSCSI traffic

The first part of adding an iSCSI adapter is to set up a dedicated network. Select the ESXi host (192.168.1.103) and then select the <Configuration tab>.

Figure 33 Adding a new network Step 1

Select the <Networking link>. Ensure that <vSphere Standard Switch> View is shown. Select <Add Networking> and then from the next screen create a VMkernel port.

Figure 34 Adding a new network Step 2

The existing network is shown with a Virtual switch (vSwitch0) already added.

Figure 35 Adding a new network Step 3

Select <Next>.

The next screen will allow a choice of switches to be made. This choice is largely dependent on the network infrastructure of the host machine. In this example a new virtual switch will be used and associated with the second NIC (vmnic1).

Figure 36 Adding a new network Step 4

Select <Next> and accept the defaults on the next screen. Again in a commercial deployment, it is very likely that vMotion and fault tolerance would be implemented.

Figure 37 Adding a new network Step 5

Normally static IP addresses are set up for iSCSI targets since a change of address would have severe implications for availability. In this case 192.168.2.40 will be used.

Figure 38 Adding a new network Step 6

Select <Finish> completing the networking portion of adding an iSCSI adapter.

Figure 39 Adding a new network Step 7

The new network view looks like:

Figure 40 Adding a new network Step 8

Adding an iSCSI software adapter

The next part is configuring a new iSCSI adapter – select <Storage Adapters> -> <Add>.

Figure 41 Adding an iSCSI software adapter

Select the newly added Virtual HBA and then <Properties>.

Figure 42 Configuring the iSCSI adapter

An iSCSI target has already been configured at IP addresses 192.168.128.20 and 192.168.128.30 on a Netgear ReadyNAS device shown in Figure 43. This will be used as the target device for the iSCSI adapter.

Figure 43 Netgear iSCSI target console

The <Properties> link has four tabs. With the <General> tab open, verify that the initiator is enabled; if not, use the <Configure> button to enable it.

Figure 44 Enabling the iSCSI adapter

Select the <Network Configuration> tab, and then (if necessary) use the <Add button> to select the port group that was added earlier.

Figure 45 Configuring the iSCSI network port group

Select the <Dynamic Discovery> Tab and then <Add> to configure the target’s IP address.

Figure 46 Adding the target through dynamic discovery


Figure 47 Viewing the iSCSI devices (Paths view)

Note that with the <Paths tab> selected a number of devices show up; in fact the VMware designated targets are seen through two paths, since the Netgear has two IP ports. For both devices only one path is shown as active. The view with the <Devices tab> selected only shows a single entry per device. The path can be changed by selecting the device (from the Devices view) and selecting <Manage Paths>, and devices can be detached (unmounted) by right clicking on them. The devices we are dealing with are the ones with (T1) highlighted above in the device designator.

Figure 48 Changing the active path with iSCSI devices

The path policy can be changed to Most Recently Used, Round Robin or Fixed. The active path can be disabled and changed to an alternative.

Figure 49 Selecting Pathing policies

Devices can be dismounted by right clicking and selecting <Detach>.

Figure 50 Unmounting iSCSI devices

After detaching, the last two devices are left as candidates for Datastores. Refreshing the view shows:

Figure 51 Viewing the two devices for Datastore preparation

Note: The device naming of Cx:Ty:Lz corresponds to Controller, Target and LUN.

An active Path can be disabled to force both IPs to be used. Viewing the path information below shows that both interfaces are active (balanced) on the target side giving potentially better throughput.

Figure 52 Balanced iSCSI I/O Pathing

External iSCSI Datastore Preparation

Returning to the main screen with the <Configuration Tab> active select the <Storage link> and then from the top of the screen select <Add Storage>

Figure 53 Selecting the storage type for the Datastore

Choose <Disk/LUN> as the storage type and select <Next>. From the list of devices choose the first iSCSI LUN and select <Next>. The next choice is to decide which format to use. If no legacy devices are required then the new (VMFS-5) format should be selected since it allows the use of devices with greater than 2TB capacity.

Figure 54 Choosing the Disk/LUN for the Datastore

Figure 55 Choosing the disk format for the Datastore

Select <Next>.

Choose a name for the new Datastore.

Figure 56 Naming the new Datastore

Choose either the maximum capacity or enter a capacity value for the device.

Note: The additional capacity can be added later if required.

Figure 57 Selecting the capacity for the Datastore

Select <Finish> and the device will be formatted. Repeat for the second iSCSI device (Lun1). The new iSCSI devices should now be available for Datastore deployment.

Figure 58 Viewing the newly configured Datastores

Browsing and uploading to Datastores

The first iSCSI target will be used to hold a library of ISO images and the task now is how to get them into the Datastore. To do this, right click on the Datastore and select <Browse Datastore>.

Figure 59 Browsing a Datastore

This will open the (currently) empty Datastore. Select the upload icon and then select <Upload File>.

Figure 60 Uploading a file to a Datastore

Select the file to be added from the browser window and then select <open>.

Figure 61 Selecting an ISO image file or folder for upload

Creating a Virtual Machine with External Storage

In this section a Virtual Machine will be loaded from and created on the external iSCSI devices from the previous section. A library of ISOs will be loaded from LUN0 of the external iSCSI device and the new VM will be stored on LUN1. To start with, select the host (192.168.1.103) and right click to select the <New Virtual Machine> option.

Figure 62 Creating an externally based VM Step 1

Select <Typical> and then <Next>.

Figure 63 Creating an externally based VM Step 2

Name the Virtual Machine:

Figure 64 Creating an externally based VM Step 3

The second iSCSI LUN (LUN1) will be used to store the Virtual Machines that will be created.

Figure 65 Creating an externally based VM Step 4

There is a choice of Operating Systems available; the advantage of Windows and Linux is that there are ready-made templates that can be used. In this example Linux is selected and the Linux version that will be used is Red Hat Enterprise Linux 6 (64-bit).

Figure 66 Creating an externally based VM Step 5

Since the machine is virtual, its hardware is also virtual. The next choice is to select how many virtual NICs will be used. Many of the settings can be changed later and additional NICs can be added if necessary. At this point only one NIC will be used.

Figure 67 Creating an externally based VM Step 6

The next stage is to create a virtual hard disk, details of the various disk formats have already been discussed. Here a small disk of 16GB capacity will be created without thin provisioning.

Figure 68 Creating an externally based VM Step 7

Select <Finish> to create the new VM or the settings can be edited if needed.

Figure 69 Creating an externally based VM Step 8

The newly created VM appears under the <Virtual Machines> tab.

Figure 70 Viewing the new externally based VM

At this point all that has been set up is a Virtual Machine. Think of this as equivalent to a physical server that is ready to accept a new Operating System. The Virtual Machine is guaranteed to be compatible with Red Hat 6 since we used a template and Red Hat will interact with virtual hardware that it will recognize. This also shows another benefit of virtualization in that we do not have to be concerned with driver and device incompatibilities. It also makes it possible to move VMs around to more powerful systems since they all exhibit the same virtual hardware.

Preparing the VM for Operating System Installation

Select the new VM (Red Hat V6) and then select <Edit Virtual Machine Settings>.

Figure 71 VM Operating System preparation

Under the <Hardware> tab select the CD/DVD drive and point it to the Datastore where the ISO images are located (LUN0 of the iSCSI target) and select the correct image to boot from.

Check the box <Connect at Power On> and select <OK>.

Figure 72 Booting from a Datastore ISO image

After the image has been selected close the dialogue, select the VM in the left-hand pane, then right click on the VM and select <Power> -> <Power On>. The Operating System should start the loader and can then be installed as normal. Select the <Console> tab to interact with the installation.

Figure 73 Interacting with the VM’s Operating System through the console view

The virtual machine console can be detached from the vSphere client by selecting the <Launch Virtual Machine Console> icon as shown below.

Figure 74 Detaching the console

Other Operating Systems can be created and these will function in the normal way bridging across to the physical machine’s network.

Figure 75 Viewing the IP settings in Linux

VMware vCenter Server

Using vCenter allows centralized control across the datacenter. This means that multiple ESXi systems and their VMs can be administered from one central location. Information about the various objects is held in a supported database such as Microsoft SQL Server. In addition it is possible to link together multiple instances of vCenter Server. The previous section allowed much of the functionality of VMware to be realized, but the server deployment goes beyond those capabilities by adding advanced functionality such as vMotion, the Distributed Resource Scheduler and other High Availability functions. The IP configuration of the I/O part of the devices used in this section is as follows:

  • ESXi host     192.168.128.40
  • iSCSI target    192.168.128.20/192.168.128.30
  • The management ports are on the 192.168.1 network

Installing vCenter server

The vCenter software is downloadable or is included as part of a solution from VMware. It can be installed on Linux or Windows; the following example covers the Windows version. Running the autorun file shows the screen below. Select vCenter Server and then <Install>.

Figure 76 Installing vCenter Server

Follow the prompts to complete the installation.

Figure 77 vCenter Server setup

After the installation has been completed, launch the vSphere client and log onto the machine where the vCenter server was just installed. In this case the server has an IP address of 192.168.1.118 and the client is being launched from the machine where the server is running. Specify the IP address and check the <Use Windows session credentials> box.

Figure 78 Launching vSphere client from a server

The dashboard that loads has a hierarchical structure of Datacenter -> Cluster | Host -> Virtual Machine.

The figure below is taken directly from vCenter’s configuration screen.

Figure 79 VMware hierarchy

Figure 80 Initial screen

Creating a datacenter

The first task/prompt is to <Create a datacenter>. Do this by selecting the link, or by right clicking on the server and then selecting <New Datacenter> from the context-sensitive menu.

Figure 81 Adding a Datacenter

Rename the datacenter (in this case Essentials).
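
The datacenter can equally be created through the API; a short sketch follows, using the 'content' object from the connection sketch above.

def create_datacenter(content, name):
    """Create a datacenter at the root folder (GUI equivalent of Figure 81).
    'content' is the ServiceInstance content from the connection sketch."""
    return content.rootFolder.CreateDatacenter(name=name)

# Example usage:
# dc = create_datacenter(content, "Essentials")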

Adding a cluster or a host

The next step is to add a cluster or a host. A cluster is a group of hosts that can work together to improve availability or performance by load balancing. At this stage a single host will be added rather than a cluster. Add a host by selecting the link in the main window or again by using the context sensitive option after selecting the datacenter.

Specify the host name or IP address of the ESXi host that is to be added along with its credentials. Initially a security alert screen may be presented; select <Yes> to continue. A host summary screen will then be shown and may include Virtual Machines that have already been created. In this screenshot the Virtual Machines Red Hat and Windows 2008 R2 exist from a previous interaction.

Figure 82 Adding a host to a datacenter

Figure 83 Specifying a host

Figure 84 vCenter host summary screen
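
Hosts can also be added to a datacenter through the API. The sketch below is one possible equivalent of the wizard above; the address and credentials are placeholders, and on a first connection vCenter may reject the host's SSL certificate, in which case the returned fault contains the thumbprint that needs to be set on the spec before retrying.

from pyVmomi import vim

def add_host_to_datacenter(dc, address, username, password):
    """Add a standalone ESXi host to a datacenter (GUI: Figures 82-88).
    'dc' is a vim.Datacenter such as the one created in the earlier sketch."""
    spec = vim.host.ConnectSpec(hostName=address, userName=username,
                                password=password, force=True)
    # Returns a Task object; progress appears under <Recent Tasks> in the client
    return dc.hostFolder.AddStandaloneHost_Task(spec=spec, addConnected=True)

# Example usage (placeholder address and credentials):
# add_host_to_datacenter(dc, "192.168.1.99", "root", "password")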

The next screen will issue a prompt to license the features. This can be ignored if the installation is still in the evaluation phase, or a license key can be added by selecting <Assign a new license key to this host>.

Figure 85 Licensing

Lockdown is normally not enabled but the choice here would depend on the site’s administration policies. Enabling lockdown forces access via the local host’s console or through a designated management node.

Figure 86 Lockdown Mode

Specify the Datacenter location of the host if prompted.

Figure 87 Specifying the host’s location

Select <Finish> to complete the task.

Figure 88 Completing the Add Host process

The left hand pane now shows the following grouping:

Datacenter -> Host -> VMs. The existing VMs can now be powered up.

Figure 89 Console showing two VMs running

Accessing the VM Consoles

Consoles can be detached from the server (right click on the VM and then select <Open Console>) or accessed directly by selecting the VM and then the <Console> tab.

Map Views

The server features a very useful topology view which shows the relationship between each of the entities. This can be shown by selecting the datacenter and then the <Maps> tab. Various views are possible and this can give a good understanding of the layout of the system.

Figure 90 Using the maps view to show object relationships.

Here it can be seen that both VMs reside on host 192.168.128.40. A more comprehensive view is shown in the next diagram which includes the Host to Datastore relationship.

Figure 91 Map view showing all host relationships.

Performance monitoring

A number of performance statistics are available. Viewing these figures allows the Administrator to make informed decisions about the system; it is possible to identify bottlenecks and to allocate resources where they are needed. Another of the many benefits of virtualization is that more resources can be added on the fly to deal with these conditions, whereas with physical servers the hardware has to be physically upgraded, incurring inevitable downtime.

Figure 92 Viewing performance statistics
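
The headline utilization numbers behind these charts are also exposed through the API. The sketch below simply prints each host's quick statistics and is only a rough health check, not a substitute for the performance charts.

from pyVmomi import vim

def print_host_quickstats(content):
    """Print headline CPU and memory usage for every managed host.
    'content' is the ServiceInstance content from the connection sketch."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        qs = host.summary.quickStats
        print("%s  CPU: %s MHz  Memory: %s MB"
              % (host.name, qs.overallCpuUsage, qs.overallMemoryUsage))
    view.Destroy()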

VMware Tools

Once a VM is up and running, VMware Tools can be installed to improve performance. Install the tools by selecting the VM and then <Guest> -> <Install/Upgrade VMware Tools>.

Figure 93 Installing VMware Tools

This will mount the VMware tools package in the virtual DVD drive and it can then be run manually (if autorun is not set up).
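
The tools installer can also be mounted into the guest programmatically. A brief sketch follows; the VM name is taken from the earlier example and the guest must already be powered on.

from pyVmomi import vim

def mount_vmware_tools(content, vm_name):
    """Mount the VMware Tools installer ISO in the named VM's virtual CD/DVD
    drive (GUI: Figure 93). 'content' comes from the connection sketch."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == vm_name)
    view.Destroy()
    vm.MountToolsInstaller()
    # After the installation has finished: vm.UnmountToolsInstaller()

# Example usage:
# mount_vmware_tools(content, "Red Hat V6")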

Figure 94 Running the Windows VMware Tools setup program

Run the file in the normal way; there will be a prompt to restart the system when finished. There is a similar VMware tools program for Linux systems. Extract the files and run the program as superuser.

Figure 95 Installing the Linux version of VMware Tools

Follow the prompts to complete the installation.

vSphere Clustering

In this section a complete hierarchy will be built: Datacenter -> Cluster -> Hosts -> Virtual Machines. The system is managed by the physical server “Zotac”.

Adding a Datacenter

Figure 96 Creating a new Datacenter

The first task is to create a new datacenter; in this case the datacenter is called “Enterprise Cluster”. Select the link <Create a datacenter>. Name the datacenter and the tree on the left hand side should look similar to Figure 97.

Figure 97 Viewing the datacenter

Adding a HA Cluster

Right click on the Datacenter and from the menu select <New Cluster>

Figure 98 Adding a cluster to the Datacenter

VMware vSphere supports two types of clusters. vSphere HA (High Availability) is used for fast recovery from host failure; resources can be made available on other hosts very quickly. DRS (Distributed Resource Scheduler) is mainly used for load balancing; nodes are aggregated together to automatically make the best use of the available resources. This first part will illustrate how to build an HA cluster.

Figure 99 Choosing the HA cluster option

The next screen presents a choice of options. Nodes are monitored by heartbeats, which each node uses to periodically check the health of the others. It is recommended to use the defaults presented, although large cluster deployments may want to change the number of host failures that the cluster tolerates. In a small scale cluster more than one node failure would be cause for concern and should be investigated rather than trying to carry on in the face of a major system failure.

Figure 100 Enabling cluster options.

It is recommended to leave the restart options at their default settings.

Figure 101 Cluster restart options

Virtual Machines can be restarted if they do not issue heartbeat responses within a certain time.

Figure 102 Cluster heartbeat monitoring

Enhanced vMotion Compatibility (EVC) enforces strict CPU compatibility between the hosts in the cluster so that VMs can be moved between hosts with different processor generations. For this configuration it is recommended to leave the option disabled.

Figure 103 Enabling/Disabling Enhanced vMotion capability.

For performance reasons it is recommended to store the swapfile in the same directory as the virtual machines.

Figure 104 Setting the Virtual Machine’s swapfile policy

Select <Finish> at the summary screen.

Figure 105 Completing the Cluster Add process
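
The choices made in the wizard above can also be expressed as a single cluster specification. The sketch below is one possible equivalent of the defaults described (HA enabled, host monitoring on, admission control tolerating one host failure, VM monitoring disabled, swapfile kept with the VM); the cluster name is an assumption.

from pyVmomi import vim

def create_ha_cluster(dc, name):
    """Create an HA cluster under the given datacenter's host folder
    (GUI: Figures 98-105). 'dc' is a vim.Datacenter object."""
    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(
            enabled=True,                       # turn on vSphere HA
            hostMonitoring="enabled",           # heartbeat based host monitoring
            admissionControlEnabled=True,
            admissionControlPolicy=vim.cluster.FailoverLevelAdmissionControlPolicy(
                failoverLevel=1),               # tolerate one host failure
            vmMonitoring="vmMonitoringDisabled"),
        vmSwapPlacement="vmDirectory")          # keep the swapfile with the VM
    return dc.hostFolder.CreateClusterEx(name=name, spec=spec)

# Example usage, assuming 'dc' is the "Enterprise Cluster" datacenter object:
# cluster = create_ha_cluster(dc, "HA Cluster")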

Adding hosts to the cluster

To add hosts to the new cluster ensure that the cluster is selected and then select <Add a Host> from the link.

Figure 106 Adding a host to a cluster

Specify the hostname and credentials

Figure 107 Specifying the hostname and credentials

There may be a security alert on the next screen (not shown).

Figure 108 Adding the first host to the cluster

There may be a license prompt if the system is still being used in trial mode.

Figure 109 Adding a license to a node

Select <Lockdown> options.

Figure 110 Setting lockdown options

Select <Finish> and repeat to add a second node.

Figure 111 Completing the Add Host to a cluster process
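
Hosts can be joined to the cluster in the same way through the API; a short sketch follows (credentials are placeholders and the node addresses are those used in this example).

from pyVmomi import vim

def add_host_to_cluster(cluster, address, username, password):
    """Add an ESXi host to an existing cluster (GUI: Figures 106-111).
    'cluster' is the object returned by the cluster creation sketch."""
    spec = vim.host.ConnectSpec(hostName=address, userName=username,
                                password=password, force=True)
    return cluster.AddHost_Task(spec=spec, asConnected=True)

# Example usage (placeholder credentials):
# for node in ("192.168.1.99", "192.168.1.103"):
#     add_host_to_cluster(cluster, node, "root", "password")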

The <Recent Tasks> panel will show the progress of the cluster configuration.

Figure 112 Viewing the cluster task progress

The client now shows the following list of objects.

Note that the VMs were “owned” by particular hosts in the setup procedure earlier. Now they are shown independently of the hosts.

Figure 113 Completed cluster hierarchy

Sharing storage within a cluster

In a non clustered environment each host “owns” its own storage. In a clustered environment all hosts have access to the same storage, which means that shared storage architectures such as Fibre Channel or iSCSI SANs are used; NAS protocols such as NFS are also suitable. Figure 114 shows the same storage accessible to both of the clustered nodes – 192.168.1.99 and 192.168.1.103.

Figure 114 Hosts sharing the same iSCSI storage
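
The datastore visibility shown in Figure 114 can also be checked quickly from the API; the sketch below lists the datastores reachable from each host, so shared LUNs should appear against every cluster node.

from pyVmomi import vim

def print_host_datastores(content):
    """List the datastores visible to each managed host; shared iSCSI LUNs
    should appear against every node in the cluster."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name, "->", sorted(ds.name for ds in host.datastore))
    view.Destroy()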

The map view below clearly shows dual paths to the iSCSI LUNs and a single path to each of the node’s local disks.

Figure 115 Topology view of cluster and shared storage

The next view is a simplified view of the Host to VM relationship. To illustrate how the VMs seamlessly fail over, node 192.168.1.99 will be shut down. The map view then has to be refreshed to reflect the changes.

Figure 116 Map view showing node to VM relationships

Figure 117 now shows the node to VM relationship after the VMs have failed over to the surviving node 192.168.1.103.

Figure 117 Node to VM relationship after failover

Changing host ownership

To manually move VMs between hosts, select the VM, right click and choose <Migrate>.

Figure 118 Migrating a VM

Migration can be to another host, another Datastore or both. In this case the migration will be from host 192.168.1.103 to host 192.168.1.99

Figure 119 Selecting the migration type

The next dialogue requests the node name that the VM is to be migrated to.

Figure 120 Selecting the destination node for the migration.

Figure 121 Setting the migration priority

Figure 122 Completing the migration
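
The migration itself corresponds to a single API call. The sketch below moves a named VM to a named host at the default priority, leaving its Datastore unchanged; the names in the usage example are those used in this walkthrough.

from pyVmomi import vim

def migrate_vm(content, vm_name, host_name):
    """vMotion a VM to another host, keeping its current Datastore
    (GUI: Figures 118-122)."""
    vms = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in vms.view if v.name == vm_name)
    vms.Destroy()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    target = next(h for h in hosts.view if h.name == host_name)
    hosts.Destroy()
    return vm.MigrateVM_Task(
        host=target, priority=vim.VirtualMachine.MovePriority.defaultPriority)

# Example usage, matching the migration described above:
# migrate_vm(content, "Red Hat V6", "192.168.1.99")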

Migrating the machine on the hardware used took approximately one minute; however, the map view took a little longer to recognize the changes. The amount of time taken is largely dependent on the hardware used and the amount of I/O activity taking place. The new map view after the migration is shown below.

Figure 123 Viewing the host to VM relationship after the migration has completed

The cluster can be removed by simply selecting the cluster, and from the right click menu select <Remove>.

HA Master and Slave Nodes

Within a VMware HA cluster one of the nodes is elected as the Master node. The Master co-ordinates the activity between the other (slave) nodes and is responsible for checking their status; the HA cluster is therefore a master/slave architecture. The designation of a particular node can be shown by selecting the node and then the <Summary> tab.

In Figure 124 the vSphere HA state of the node is shown as Master within the cluster. The next figure shows node 192.168.1.103 as a slave.

Note that the states are termed “Running” and “Connected” respectively.

Figure 124 Showing the HA master node state within a HA cluster

Figure 125 Showing the HA Slave node state within a HA cluster
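
The master/slave designation can also be read programmatically. The sketch below prints the HA state reported for each host; the runtime.dasHostState property is only populated once the host is part of an HA-enabled cluster.

from pyVmomi import vim

def print_ha_roles(content):
    """Print the vSphere HA role (master/slave) reported for each host."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        das = host.runtime.dasHostState
        print(host.name, "HA state:", das.state if das else "n/a")
    view.Destroy()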

Adding a DRS Cluster

Creating a DRS cluster is similar to creating an HA cluster. Select <Turn on vSphere DRS> at the initial screen and follow the prompts much as before.

Figure 126 Creating a DRS Cluster

There will be a choice of automation level, which determines how much control DRS has over the placement and migration of VMs. It is recommended to use <Fully automated> unless there are special requirements.

Figure 127 Selecting DRS automation levels

Complete the screen prompts and then add hosts as before.
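
A DRS cluster can likewise be created with a single specification; the sketch below enables DRS with the fully automated behaviour recommended above (the cluster name is an assumption).

from pyVmomi import vim

def create_drs_cluster(dc, name):
    """Create a DRS-enabled cluster with fully automated placement
    (GUI: Figures 126-127). 'dc' is a vim.Datacenter object."""
    spec = vim.cluster.ConfigSpecEx(
        drsConfig=vim.cluster.DrsConfigInfo(
            enabled=True,
            defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated))
    return dc.hostFolder.CreateClusterEx(name=name, spec=spec)

# Example usage:
# drs_cluster = create_drs_cluster(dc, "DRS Cluster")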

——————————————————————————————————–