Windows 2012 Cluster HOWTO


This HOWTO describes how to set up a two-node Windows 2012 Cluster. In addition, a number of other areas which are prerequisites to installing the Cluster are discussed. These additional steps include:

  1. Setting up a Domain controller
  2. Configuring DHCP and DNS
  3. Installing a shared iSCSI target device

After these steps have been completed the Cluster will be set up.

In this Windows 2012 Cluster HOWTO, VMware workstation was used to create Windows 2012 Virtual Machines for the Cluster implementation. This leads to a few additional steps but has the advantage of allowing the user to gain familiarity prior to deploying a full scale production Cluster.

Configuring Active Directory

Once Windows 2012 has been installed, the next step is to create a domain controller. This can be done by running the Server Manager.

From the Server Manager Dashboard select <Manage> – <Add Roles and Features>

Select the installation type <Role-based or feature-based installation>

With the correct server highlighted select <Next>

Select <Active Directory Domain Services>, <DHCP> and <DNS> and then select <Next>.

Select <Next> until reaching the screen below:

Note: The recommendations on the Microsoft screen above reflect good practice and should be followed in a real-life scenario. Since the objective of this tutorial is to cover the basics of Cluster installation, only one domain controller will be installed.

Select <Next> –  <Install>
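
As an alternative to clicking through the wizard, the same roles can be added from an elevated PowerShell prompt. A minimal sketch using the standard Windows Server 2012 feature names:

  # Add the AD DS, DHCP and DNS roles plus their management tools,
  # equivalent to the wizard selections above.
  Install-WindowsFeature AD-Domain-Services, DHCP, DNS -IncludeManagementTools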

After the required components have been added, select AD DS from the Server Manager screen to continue with the Active Directory configuration.

Select <More …> from the right of the highlighted yellow bar. This gives an option to promote the server to a domain controller.

Configuring a Domain Controller

Select <Promote this server to a domain controller>. This will bring up the next screen:

Select <Add a new forest>, fill in a root domain name and select <Next>. Since this example uses a pure Windows Server 2012 environment, there is no need to change the functional levels. At the next screen enter the Directory Services Restore Mode (DSRM) password and then select <Next>.

Select <Next> at the DNS options screen.

Then accept the NetBIOS domain name.

Note that the NetBIOS name will be truncated in this case.

Select <Next> to accept the default folder locations:

Review the options and select <Next> to proceed.

Select <Install> after verifying that there are no gating prerequisites.

After selecting <Install> the system should restart as a domain controller.
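
For reference, the promotion can also be scripted with the ADDSDeployment module that ships with Windows Server 2012. A hedged sketch, assuming a root domain name of cluster.local (substitute the name entered in the wizard):

  # Promote this server to the first domain controller of a new forest.
  Import-Module ADDSDeployment
  Install-ADDSForest `
      -DomainName "cluster.local" `
      -InstallDns `
      -SafeModeAdministratorPassword (Read-Host "DSRM password" -AsSecureString)
  # The server reboots automatically as a domain controller when promotion completes.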

After the system reboots, the login screen will automatically set the login to be a domain account.

Select <Computer> –  <Properties> to verify.

Setting up DHCP

Launch Server Manager and, with the DHCP tab highlighted, select <More…> from the yellow bar on the right-hand side.

Select <Complete DHCP configuration>

Select <Next>

Use the administrator credentials and select <Commit>

If the Summary screen shows success then select <Close>

Setting up a DHCP Scope

From the server manager screen select <Tools> <DHCP>

Expand the tree to show the server name, which in this case is Clusterdomaincontroller, then right-click on IPv4 to bring up a context-sensitive menu and select <New Scope…>. This will start the Wizard.

Select <Next> from the Wizard screen and enter a scope name and description. In this Cluster, static IP addresses are taken from the range 192.168.4.1 through 192.168.4.99. The scope will allocate IP addresses from 192.168.4.100 through 192.168.4.200.

This is specified in the following screen.

No exclusions are needed so select <Next> on the following screen.

Accept the default lease duration.

The next stage is to configure the scope options and then activate the new scope.

Enter a gateway address and select <Add>

On the next screen accept the defaults. Note scope clients can be added later.

WINS is not required here, so select <Next> on the following screen.

Finally activate the scope.

The new scope is shown in the DHCP window.
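
The same scope can be created and activated from PowerShell on the DHCP server. A sketch using the address ranges from this tutorial; the scope name and gateway address are illustrative:

  # Authorize the DHCP server in Active Directory, then create and activate the scope.
  Add-DhcpServerInDC
  Add-DhcpServerv4Scope -Name "Cluster scope" -StartRange 192.168.4.100 `
      -EndRange 192.168.4.200 -SubnetMask 255.255.255.0 -State Active
  # Default gateway handed to clients (option 3) - use the gateway entered in the wizard.
  Set-DhcpServerv4OptionValue -ScopeId 192.168.4.0 -Router 192.168.4.1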

Configuring a VMware Virtual Network

Note: in this example virtual machines are used, which requires additional settings in VMware Workstation. If physical machines are used this step can be omitted.

From the VMware Workstation menu select <Edit> – <Virtual Network Editor> – <Add Network>.

After the Virtual Network Editor opens, select VMnet4 from the drop-down box and select <OK>.


Configure the settings as shown in the next screen and then select <DHCP Settings>.


Here IP addresses below 100 can be used for static IPs and the DHCP server will allocate .100 and above.


For each of the machines that will be allocated a DHCP address on the 192.168.4 network, configure their network adapter settings under VMware to use the vmnet4 network.


Here there are two virtual network adapters.


Configuring the client Network Adapters

On the client machines the IPv4 properties are set up as shown in the following figure:


After this start the domain controller and the client machines and verify that they have received the correct IP addresses.


Setting up the client nodes to access the Domain Controller

The next stage is to set up accounts on the domain controller. The two clients are named ClusterNode01 and ClusterNode02.

SID Note for Cloned VMs

Note: if the virtual machines have been cloned then the SIDs will be identical. This can be changed by running the sysprep tool from %SystemRoot%\system32\sysprep\sysprep.exe.

Note that sysprep can clear settings such as passwords and network configuration, so reconfiguration may be necessary after running this command. To avoid this it is better not to clone existing systems.

Joining the client nodes to the domain

Select <Computer> – <Properties> – <Change Settings>.

Then select <Change> and then select the <Domain> radio button and add in the domain name.

Select <OK>. If all is well there should be a prompt to enter the credentials of an account on the domain controller.

After this a welcome message should appear.

Reboot the system to join the domain. The login prompt will look like:

Verify that the node is correctly configured from the Server Manager screen.

To verify that the nodes have been added select <Control Panel> – <Administrative Tools> – <Active Directory Users and Computers> and then from the next screen double-click on Computers to see the two newly added nodes.
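
The domain join can also be performed from PowerShell on each client node; a sketch assuming the forest root domain created earlier is named cluster.local:

  # Join this node to the domain and reboot; a domain administrator credential is prompted for.
  Add-Computer -DomainName "cluster.local" -Credential (Get-Credential) -Restart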

Configuring iSCSI Targets

Windows 2012 Failover Clusters require shared storage that supports SCSI-3 persistent reservations. An iSCSI target implementation is included with Windows 2012 and this will be used for shared storage since it meets these requirements.

Installing the iSCSI Target Server

From the Server Manager screen select <Add Roles and Features>, select <Next> until the Server Roles screen appears and then under <File and Storage Services> enable <File and iSCSI Services>.

Under <File and iSCSI Services> select <iSCSI Target Server>.

Select <Next> and at the confirmation screen select <Install>.

Creating an iSCSI virtual disk

From Server Manager select <File and Storage Services> <iSCSI> and then select the link on the right to start the Wizard.

The iSCSI Virtual disk will be generated from an actual physical server disk. In this example a small portion of the C: drive’s capacity will be used to present the virtual iSCSI disk.

Select <Next> and name the device.

Select <Next> and allocate a size.

Select <New iSCSI target> and then <Next>

Name the target and enter a brief description. Select <Next>

The next stage is to add the initiators that will access the disk. At this point no initiators have been configured on the client machines that will access the target but two will be added with the names Clusternode01 and Clusternode02. The actual initiator name is configured in the section – Configuring iSCSI initiators.

Select <Add> and then on the next screen select <Enter a value for the selected type>

In a similar fashion add the second initiator.

Select <Next> and the nodes will be specified as Access servers. Select <Next>. In this example CHAP will not be enabled; select <Next>

Verify the settings and then select <Create> to build the iSCSI target.

The newly created target should show up under the <iSCSI> link.
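
The whole target-side setup can be reproduced with the iSCSI Target cmdlets on the domain controller. A sketch with illustrative paths and sizes; the initiator IDs must match the initiator names configured on the nodes in the next section:

  # Install the iSCSI Target Server role service.
  Install-WindowsFeature FS-iSCSITarget-Server

  # Create a VHD-backed virtual disk carved out of the C: drive.
  New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\ClusterDisk1.vhd" -SizeBytes 10GB

  # Create the target and allow both cluster node initiators to connect
  # (example IQNs - use the initiator names set on Clusternode01/02).
  New-IscsiServerTarget -TargetName "ClusterTarget" `
      -InitiatorIds "IQN:iqn.1991-05.com.microsoft:clusternode01", `
                    "IQN:iqn.1991-05.com.microsoft:clusternode02"

  # Map the virtual disk to the target.
  Add-IscsiVirtualDiskTargetMapping -TargetName "ClusterTarget" `
      -Path "C:\iSCSIVirtualDisks\ClusterDisk1.vhd"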

Note: In a real world implementation the target would not be set up like this. It is a single point of failure and also does not include any level of RAID protection. A later HOWTO will describe how to set up a redundant shared storage system.

Configuring iSCSI initiators

The next stage is to configure the initiators on the two Cluster nodes (Clusternode01 and Clusternode02) that will be used to access the iSCSI target. Select <Control Panel> – <iSCSI Initiator>. If the iSCSI service is not running there will be an option to start it – select <Yes>.

Under the Configuration tab, set the initiator name to be the name that was used when setting up the target configuration. Note that if the initiators had been configured prior to the target, the initiator names would have been discovered automatically on the target configuration page.

The next step is to connect to the target – under the <Targets> tab enter the IP address of the target (192.168.4.10) and select <Quick Connect>.

If a successful connection is made it should show up as a discovered target.

Repeat for Clusternode02 using Clusternode02 as the initiator name.
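
On each node the initiator side can likewise be configured from PowerShell; a sketch using the target portal address from this tutorial (192.168.4.10):

  # Start the iSCSI initiator service and make it start automatically.
  Set-Service msiscsi -StartupType Automatic
  Start-Service msiscsi

  # Point the initiator at the target portal and connect to each discovered target.
  New-IscsiTargetPortal -TargetPortalAddress 192.168.4.10
  Get-IscsiTarget | ForEach-Object {
      Connect-IscsiTarget -NodeAddress $_.NodeAddress -IsPersistent $true
  }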

Configuring the Cluster

From Clusternode01 open the Server Manager and then select <Manage> – <Add Roles and Features>.

Select <Next > until <Features> is highlighted and then check the <Failover Clustering> box.

Select <Add Features> at the next screen. Select <Next> and then <Install>.

Repeat the setup for Clusternode02.

After Failover Clustering has been installed, log in to Clusternode01 using the domain administrator account.

Now from Clusternode01 select <Server Manager> <Tools> <Failover Cluster Manager>.

This will start the Failover Cluster Manager Tool.

It is recommended to first validate the configuration. Select <Validate Configuration> and then add the nodes to be validated.

The tests can take a while to run if all tests are selected. Investigate the output for any warning or error messages. If the nodes are correctly set up then a validated message should appear.

Check the <Create the Cluster now> box and then select <Finish>. This will start the Create Cluster Wizard. Enter a Cluster name and select <Next>.

Verify that the settings are correct and select <Next>.

A confirmation screen should be shown.

It is recommended to view the report before selecting <Finish>. In this case a warning has been received about the quorum disk; a quorum disk can be added later.
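
For completeness, the feature installation, validation and cluster creation can all be scripted with the FailoverClusters module from either node; the cluster name and static address below are illustrative:

  # Install the feature on both nodes (run from one node with remoting enabled).
  Invoke-Command -ComputerName ClusterNode01, ClusterNode02 -ScriptBlock {
      Install-WindowsFeature Failover-Clustering -IncludeManagementTools
  }

  # Validate the configuration, then create the cluster.
  Test-Cluster -Node ClusterNode01, ClusterNode02
  New-Cluster -Name Cluster01 -Node ClusterNode01, ClusterNode02 `
      -StaticAddress 192.168.4.50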

Configuring shared Cluster storage

At this point no shared storage has been set up. Open the Disk Management tool and the iSCSI volume should appear; if it is set to offline then set it to online and initialize it. After this it can be added to the Cluster.

Note: if the disk had been initialized prior to the Cluster configuration then it could have been configured automatically, but in the interests of education it was left to be added later.

Now from the Failover Cluster Manager select <Add Disk> to add the iSCSI target to the Cluster.

It should now show up as a resource.
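
The disk preparation and cluster addition can also be done from PowerShell on the node that currently owns the iSCSI session; a rough sketch (the disk number will vary):

  # Bring the iSCSI disk online, initialize it and format it (disk number 1 is an example).
  Set-Disk -Number 1 -IsOffline $false
  Initialize-Disk -Number 1 -PartitionStyle MBR
  New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter | Format-Volume

  # Add every disk that passes the cluster checks as cluster storage.
  Get-ClusterAvailableDisk | Add-ClusterDisk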

Note that the disk is owned by Clusternode02. Ownership can be changed or a number of disks can be load balanced by reassigning the ownership.

Reassigning disk ownership

From the Failover Cluster Manager under Disks select <Move Available Storage>.

Select <Select Node> and then select ClusterNode01.

The screen now shows ownership belonging to ClusterNode01.

High Availability Role configuration

The next step is to use the High Availability feature to create a Role that will be accessible to the client nodes. To do this, new virtual hard disks were added to the Domain Controller VM and two new iSCSI volumes were created as described earlier. In addition, make sure that the File Server Role has been installed on both Cluster nodes as shown below.

Recall that the iSCSI initiator needs to be refreshed and then the disks must be added as Cluster storage; refer back to the earlier section for details.

From the Failover Cluster Manager screen select <Configure Role>

In this case the HA Role will be that of a File Server.

Select <File Server for general use>.

Give the Role a name.

Select the storage that is to be used for the file server and then select <Next>.

Select <Next>

After the task has been completed the report can be viewed. Investigate any warnings that appear.

Now with <Roles> selected test access by verifying that the Role can be moved over to the other server.

After this select <Add File Share>.

NFS is used for UNIX systems and Microsoft clients tend to use SMB. In this case an SMB share will be configured, highlight <SMB Share – Quick> and select <Next>.

Highlight the volume and select <Next>

Name the share and select <Next>

Accept the default settings and select <Next>

Accept the default permission and select <Next>.

At the confirmation screen select <Create>.

Close after successful creation. The share can now be accessed from a client by entering the path name in the Run dialog.

Files can now be added to the share.
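
The same Role and share can be created with the FailoverClusters and SmbShare cmdlets; the Role name, disk name, path and share name below are illustrative:

  # Create a clustered file server role (client access point) backed by a cluster disk.
  Add-ClusterFileServerRole -Name ClusterFS -Storage "Cluster Disk 2"

  # Create an SMB share scoped to the clustered file server.
  New-Item -ItemType Directory -Path "E:\Shares\ClusterShare" -Force | Out-Null
  New-SmbShare -Name "ClusterShare" -Path "E:\Shares\ClusterShare" -ScopeName ClusterFS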

Configuring a Quorum Disk

From the Failover Cluster Manager, with the Cluster highlighted, select <More Actions> on the right-hand side of the screen and then select <Configure Cluster Quorum Settings>. Quorum resources are used to ensure Cluster integrity.

This will open the Configure Cluster Quorum Wizard.

Select <Next>.

In this case <Typical Settings> will be used. Recall that there is still an unused iSCSI disk available.

In this case the iSCSI disk was recognized and allocated, and no errors occurred.
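
Equivalently, once the spare iSCSI disk has been added as cluster storage, the quorum can be set from PowerShell; the disk resource name below is an example:

  # Use a node-and-disk-majority quorum with the spare cluster disk as the witness.
  Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 3"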

Summary

A basic two-node Windows 2012 Cluster has been set up using two nodes connected to a third node which functions as a Domain controller. Shared Microsoft iSCSI target devices (which support persistent reservations) were configured and added to the Cluster. This was all done using Virtual Machines, but a real-life deployment could use physical systems. In addition, the disk drives would most likely be SAS or Fibre Channel devices configured as external storage. An HA file server resource was added along with a Quorum resource. The tutorial was written to help those wishing to learn about Windows Clustering to come up to speed rapidly.

The deployment should not be used in anything other than in an educational environment.

VMware vSphere HOWTO

Alan Johnson February 2012

alan@johnson.org

Introduction to vSphere 5

The purpose of this tutorial is to provide a quick start guide to the basic features of vSphere 5. The intent is to shorten the learning curve for the casual hobbyist or to provide a basic introduction for administrators who are considering deploying VMware within their own organization. As such the scope is limited; however, it does cover advanced features such as High Availability (HA) and performance optimization. Relatively inexpensive hardware was used to generate the screenshots; the servers were sub-$500 commodity systems with i3 processors and 6GB of RAM. A Netgear ReadyNAS system was used to provide iSCSI-based storage. The VMware ESXi 5 hypervisor is freely available, and there are different levels of cost depending on the capability of the deployment. A comparison is shown in the table on the following page.

In addition there are a number of kits available which are explained by VMware as:

VSphere Kits: vSphere Kits are all-in-one solutions that include multiple vSphere licenses and vCenter Server, enabling an organization to quickly and easily set up their vSphere environment. Kits are available in several editions that vary in terms of scalability and functionality. VMware offers two types of kits:

Essentials Kits: All-in-one solutions for small environments inclusive of virtualization and management software available in two editions—Essentials and Essentials Plus. Both Editions include vSphere processor licenses and vCenter Server for Essentials for an environment of a maximum of 3 hosts (up to 2 CPUs each) and maximum pooled vRAM capacity of 192GB. Scalability limits of Essentials Kits are product-enforced and cannot be extended other than by upgrading the entire kit to a higher-end bundle. Essentials Kits are self-contained solutions and may not be decoupled or combined with any other VMware vSphere editions.

Acceleration Kits: All-in-one convenience bundles that provide a simple way for new customers to purchase all the necessary components to set up a new VMware environment. Each Acceleration Kit consists of a number of licenses for VMware vSphere, along with a license for one instance of a VMware vCenter Server Standard. Acceleration Kits with Management add 25-VM packs of management products to the related Acceleration Kits. All kits decompose into their individual kit components after purchase.

Installing ESXi

Obtain the ESXi Hypervisor from the VMware site. The Hypervisor is really the core of the system and is managed by other parts such as vCenter. It will be necessary to register first. VMware offers trial versions for 60 days. The Hypervisor part is all that is required for the first part of this chapter.

Open a browser session and point it to www.VMware.com, select the <Products> TAB and under <Free Products>, select <VMware vSphere Hypervisor> and then <Download>. Register or log in as necessary.

Figure 1 Obtaining the ESXi Hypervisor

Download the ISO image and burn a DVD to create standalone boot media. Set the system’s BIOS to boot from the DVD device and follow the prompts to install ESXi.

Figure 2 Installing ESXi

Select <Enter> to continue.

Accept the license agreement and then select <F11> to continue.

Figure 3 Accepting the ESXi license agreement.

The next stage is to select a device to install ESXi onto. In this example a 40GiB disk will be used.

Figure 4 Selecting the target device for ESXi installation

Respond to other prompts such as keyboard type, and setting the root password. There will be a need to configure and partition the installation disk. After the installation has completed, the system will reboot and show a screen with a prompt to download basic management tools.

Figure 5 Obtaining the IP address for browser interaction

Installing the vSphere client

The next stage is to open a browser session and enter the address shown:

You can ignore the certificate error for the purposes of this chapter. Select <Download vSphere Client> to install the tools to configure VMware. There will be other links to documentation and VMware vCenter.

Note: These other links are external links pointing back to the VMware site.

Figure 6 Downloading the vSphere client.

Depending on the browser being used there may be different options such as running the file directly or saving it first.

Figure 7 Saving the VMware Client application

After the file has been downloaded run the application.

Figure 8 Installing the vSphere client

Running the client

Respond to the prompts and complete the installation. After the installation has completed the client can be run from the <Start Menu>.

Figure 9 Running the VMware vSphere client

When the login prompt appears enter the IP address of the ESXi system (192.168.190.128), the username <root> and the password that was assigned during the ESXi installation.

Note: In this case the login is taking place within the ESXi server.

The first login session will trigger a Security Warning; the certificate can be installed or the warning ignored.

Figure 10 Logging into the ESXi host

The first screen will pop up showing that the license will expire within 60 days (if not licensed). This can be safely ignored at this point in time.

Figure 11 Initial Client Screen

The next screen defaults to Inventory and Administration. Select the inventory icon and the ESXi host will appear in the left hand pane.

Figure 12 Selecting the ESXi host

Getting Started Tab

The first tab in the screen gives an overview of the terminology used. A host is defined here as the node running the ESXi software. The ESXi software will run the Virtual Machines that will be created shortly. It is also possible to deploy Virtual Appliances. VMware defines a virtual Appliance as a pre-built virtual machine. There are prompts within this screen which will guide the user through Virtual Machine creation or deploying Virtual Appliances.

Installing a Virtual Appliance

From the right hand pane, with the <Getting Started> tab still open, select the option <Deploy from VA Marketplace>. To illustrate this function a small appliance, <Nostalgia>, will be used. This appliance runs a version of DR-DOS and includes some early DOS-based games.

Figure 13 Deploying a virtual machine

You can ignore the warning in this case.

Figure 14 Installing a non supported O/S in a VM

Note: the appliance is in OVF format which is a standard method of packaging. The acronym stands for Open Virtualization Format.

Select <Next>

Figure 15 Deploying an OVF Template Step 1

Name the OVF template and select <Next>.

Figure 16 Deploying an OVF template Step 2

The next stage is to select the location and format. Locations are termed Datastores and are covered in more detail later in this document. The three formats mentioned here are defined as:

Thick Provision Lazy Zeroed

Allocates all of the space and then zeros out the block on the first write. This is the fastest way to create the disk but first time writes are slower. Subsequent writes to the same block are at normal speed.

Thick Provision Eager Zeroed

Allocates all of the space and zeros out the blocks. This takes longer to create but there is no first time write penalty since the block has already been zeroed.

Thin Provision

Thin disks allocate storage as required and zero the blocks on the first write.

In this example Thick Provision Lazy Zeroed will be used since the space is small and write performance is not a concern.

Figure 17 Selecting the Disk Format

Accept the configuration and select <Finish>

Figure 18 Completing the OVF deployment

The appliance will now be downloaded and installed. It will show up as a VM under the ESXi host.

The VM Nostalgia can now be selected and powered up just as if it were a regular bootable Operating System. Note that the tabs across the top of the right-hand pane vary according to context, so a different set of tabs will be presented when the VM is selected.

With the Getting Started Tab selected click on the <Power on the Virtual Machine> link to start up the VM.

Note there are other ways of starting the VM such as clicking on the green right facing arrow icon above the host panel. After the link has been selected the options will change to <Power Off> and <Suspend>.

From the Tab menu select <Console> to access the Virtual Machine’s (virtual) screen.

Figure 19 Viewing the VMs console.

The other way of setting up a Virtual machine is to select the option <create a new virtual machine> with the host selected and the <Getting Started> tab open. This will open a screen similar to that shown below:

Figure 20 Manually creating a VM

There are two options – Typical and Custom. The <Custom> option allows the user to supply more parameters during the pre-installation phase. It is also possible to tweak the parameters later so this example will use the <Typical> option. Select <Next> to get to the next screen.

Name the VM which in this case will be a version of SuSE Linux.

Figure 21 Naming the new manually created VM

The next stage is to select the destination. At this point there is only one Datastore so this will be the location to store the new VM. Select <Next>.

Figure 22 Selecting the destination for the manually created VM

There are a number of predefined templates available for common Operating Systems. Select Linux from the radio button and then choose <Novell SUSE Linux Enterprise 11 (64-bit)> from the drop-down.

Figure 23 Choosing a template for the new VM

Select <Next> and this will bring up the Network screen. Select the default adapter and then select <Next>.

Note: Other virtual Network Interface Cards (NICs) can be added later on.

Figure 24 Selecting virtual NICs

The next choice is similar to before, where a choice of disk format has to be made. The virtual disk size is dependent on how big the VM will be. Thin Provisioning will only assign space as it is needed.

Figure 25 Specifying capacity and format of the VM

Select <Next>. There will be a chance to edit the virtual machine settings if required, otherwise select <Finish> to complete the task.

Figure 26 Completing the VM preparation task.

The new VM will show up in the VM tree.
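
If VMware's PowerCLI snap-in is installed, an equivalent VM shell can be created from a script connected directly to the ESXi host. A hedged sketch; the VM name, datastore name and sizes are illustrative:

  # Connect directly to the ESXi host using the root credentials set during installation.
  Connect-VIServer -Server 192.168.190.128 -Credential (Get-Credential)

  # Create an empty VM based on the SLES 11 64-bit guest profile used in the wizard.
  New-VM -Name "SLES11" -VMHost 192.168.190.128 -Datastore "datastore1" `
      -GuestId sles11_64Guest -NumCpu 1 -MemoryMB 2048 -DiskGB 16 -DiskStorageFormat Thin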

At this point we have really only prepared the virtual machine, so it is similar to a bare server with memory and disk space. The Operating System will have to be installed from an installation source as normal. Prior to the O/S installation the Virtual Machine settings need to be edited. This can be done by right-clicking on the VM and selecting <Edit Settings>.

Editing VM settings

With the Hardware Tab open select CD/DVD drive. The drive can be the actual physical DVD drive connected to the host itself, the actual physical drive connected to the client or an ISO file which resides in a Datastore.

For this example the DVD used will be an ISO image on the ESXi machine.

Figure 27 Setting the boot device for the new VM

Uploading an ISO image to a Datastore

Before we can connect we need to set up a source; here an ISO image on the ESXi Datastore will be used. Initially the ISO file will need to be uploaded to the Datastore. To do this select the ESXi host (192.168.190.128) and then select the <Configuration> tab. Right-click on the Datastore and choose <Browse Datastore>.

Figure 28 Browsing an existing Datastore

After the Datastore browser window has opened select the <Upload files to this Datastore> icon as shown and choose the file to upload. In this case the file will be the SLES ISO image.

Figure 29 Uploading a file to a Datastore

After the upload has been completed, the Datastore browser will show the image loaded.

Figure 30 Browsing a Datastore to show the uploaded image

Close the browser and then from the icon bar, with the SuSE VM selected, choose <Connect to ISO image on a datastore>.

Figure 31 Connecting to the virtual DVD ISO image in the Datastore
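
PowerCLI can also script the upload and attach steps; a sketch assuming the default local datastore name (datastore1) and an illustrative ISO file name:

  # Copy the ISO from the local machine into the ESXi datastore
  # (ha-datacenter is the name shown when connected directly to a host).
  Copy-DatastoreItem -Item "C:\ISO\SLES-11-DVD-x86_64.iso" `
      -Destination "vmstore:\ha-datacenter\datastore1\"

  # Point the VM's virtual CD/DVD drive at the uploaded image and connect it at power on.
  Get-VM "SLES11" | Get-CDDrive |
      Set-CDDrive -IsoPath "[datastore1] SLES-11-DVD-x86_64.iso" -StartConnected $true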

Go back to the VM settings and this time select the <Options tab>. Under <Boot Options> choose Boot to BIOS and select <Force BIOS setup>. This may or may not be necessary.

This will allow the boot menu to be set up to boot from DVD.

Figure 32 Modifying the virtual BIOS

The system should now begin to install SuSE. Continue to install the VM as normal.

Creating Datastores on external storage

This section will show how to add external iSCSI storage and how to create a Datastore ISO repository on the iSCSI target. The iSCSI target will reside on the 192.168.128 network.

Typically multiple connections are used with an iSCSI target to improve performance and resilience. Enabling jumbo frames will also improve performance in many scenarios. It is also recommended to put iSCSI devices on a different network from management traffic. In this configuration two NICs are available – 192.168.128.103 (vmnic0) for management traffic and 192.168.128.40 (vmnic1) for iSCSI data traffic.

Note: It is beyond the scope of this particular book to include “real world” best practices scenarios

There are three parts to creating an external iSCSI Datastore:

  • Creating a dedicated network
  • Adding a software iSCSI adapter
  • Attaching iSCSI target(s)

Setting up a new Virtual switch for iSCSI traffic

The first part of adding an iSCSI adapter is to set up a dedicated network. Select the ESXi host (192.168.1.103) and then select the <Configuration tab>.

Figure 33 Adding a new network Step 1

Select the <Networking link>. Ensure that <vSphere Standard Switch> View is shown. Select <Add Networking> and then from the next screen create a VMkernel port.

Figure 34 Adding a new network Step 2

The existing network is shown with a Virtual switch (vSwitch0) already added.

Figure 35 Adding a new network Step 3

Select <Next>.

The next screen will allow a choice of switches to be made. This choice is largely dependent on the network infrastructure of the host machine. In this example a new virtual switch will be used and associated with the second NIC (vmnic1).

Figure 36 Adding a new network Step 4

Select <Next> and accept the defaults on the next screen. Again in a commercial deployment, it is very likely that vMotion and fault tolerance would be implemented.

Figure 37 Adding a new network Step 5

Normally static IP addresses are set up for iSCSI targets since a change of address would have severe implications for availability. In this case 192.168.2.40 will be used.

Figure 38 Adding a new network Step 6

Select <Finish> to complete the networking portion of adding an iSCSI adapter.

Figure 39 Adding a new network Step 7

The new network view looks like:

Figure 40 Adding a new network Step 8
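
For scripted host configuration, PowerCLI can build the same switch and VMkernel port, assuming an existing Connect-VIServer session to the host; a sketch using the NIC and address from this example:

  $esx = Get-VMHost 192.168.1.103

  # New virtual switch bound to the second physical NIC, plus a VMkernel port for iSCSI.
  $vswitch = New-VirtualSwitch -VMHost $esx -Name vSwitch1 -Nic vmnic1
  New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vswitch -PortGroup "iSCSI" `
      -IP 192.168.2.40 -SubnetMask 255.255.255.0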

Adding an iSCSI software adapter

The next part is configuring a new iSCSI adapter – select <Storage Adapters> – <Add>.

Figure 41 Adding an iSCSI software adapter

Select the newly added Virtual HBA and then <Properties>.

Figure 42 Configuring the iSCSI adapter

An iSCSI target has already been configured at IP addresses 192.168.128.20 and 192.168.128.30 on a Netgear ReadyNAS device shown in Figure 43. This will be used as the target device for the iSCSI adapter.

Figure 43 Netgear iSCSI target console

The <Properties> link has four tabs. With the <General> tab open, verify that the initiator is enabled; if not, use the <Configure> button to enable it.

Figure 44 Enabling the iSCSI adapter

Select the <Network Configuration> tab, and then (if necessary) use the <Add button> to select the port group that was added earlier.

Figure 45 Configuring the iSCSI network port group

Select the <Dynamic Discovery> Tab and then <Add> to configure the target’s IP address.

Figure 46 Adding the target through dynamic discovery
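
The adapter and discovery steps can likewise be scripted from the same PowerCLI session; a sketch using the two Netgear portal addresses:

  $esx = Get-VMHost 192.168.1.103

  # Enable the software iSCSI initiator on the host.
  Get-VMHostStorage -VMHost $esx | Set-VMHostStorage -SoftwareIScsiEnabled $true

  # Add both target portals for dynamic discovery, then rescan.
  $hba = Get-VMHostHba -VMHost $esx -Type IScsi
  New-IScsiHbaTarget -IScsiHba $hba -Address 192.168.128.20 -Type Send
  New-IScsiHbaTarget -IScsiHba $hba -Address 192.168.128.30 -Type Send
  Get-VMHostStorage -VMHost $esx -RescanAllHba | Out-Null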


Figure 47 Viewing the iSCSI devices (Paths view)

Note that with the <Paths> tab selected a number of devices show up; in fact the VMware designated targets are seen through two paths since the Netgear has two IP ports. For both devices only one path is shown as active. The view with the <Devices> tab selected only shows a single view. The path can be changed by selecting the device (from the Devices view) and selecting <Manage Paths>, and devices can be detached (unmounted) by right-clicking on them. The devices we are dealing with are the ones with (T1) highlighted above in the device designator.

Figure 48 Changing the active path with iSCSI devices

The path policy can be changed to Most Recently Used, Round Robin or Fixed. The active path can be disabled and changed to an alternative.

Figure 49 Selecting Pathing policies

Devices can be dismounted by right clicking and selecting <Detach>.

Figure 50 Unmounting iSCSI devices

After detaching, two devices are left as candidates for Datastores. Refreshing the view shows:

Figure 51 Viewing the two devices for Datastore preparation

Note: The device naming of Cx:Ty:Lz corresponds to Controller, Target and LUN.

An active Path can be disabled to force both IPs to be used. Viewing the path information below shows that both interfaces are active (balanced) on the target side giving potentially better throughput.

Figure 52 Balanced iSCSI I/O Pathing

External iSCSI Datastore Preparation

Returning to the main screen with the <Configuration> tab active, select the <Storage> link and then from the top of the screen select <Add Storage>.

Figure 53 Selecting the storage type for the Datastore

Choose <Disk/LUN> as the storage type and select <Next>. From the list of devices choose the first iSCSI LUN and select <Next>. The next choice is to decide which format to use. If no legacy devices are required then the new (VMFS-5) format should be selected since it allows the use of devices with greater than 2TB capacity.

Figure 54 Choosing the Disk/LUN for the Datastore

Figure 55 Choosing the disk format for the Datastore

Select <Next>.

Choose a name for the new Datastore.

Figure 56 Naming the new Datastore

Choose either the maximum capacity or enter a capacity value for the device.

Note: The additional capacity can be added later if required.

Figure 57 Selecting the capacity for the Datastore

Select <Finish> and the device will be formatted. Repeat for the second iSCSI device (LUN1). The new iSCSI devices should now be available for Datastore deployment.

Figure 58 Viewing the newly configured Datastores
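
Datastore creation can also be done in PowerCLI once the LUN's canonical name is known; a sketch with an illustrative datastore name (the LUN selection below is also illustrative – pick the correct canonical name from Get-ScsiLun):

  $esx = Get-VMHost 192.168.1.103

  # Create a VMFS-5 datastore on one of the iSCSI LUNs.
  $lun = Get-ScsiLun -VmHost $esx -LunType disk | Select-Object -First 1
  New-Datastore -VMHost $esx -Name "iSCSI-ISO" -Path $lun.CanonicalName `
      -Vmfs -FileSystemVersion 5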

Browsing and uploading to Datastores

The first iSCSI target will be used to hold a library of ISO images and the task now is how to get them into the Datastore. To do this, right click on the Datastore and select <Browse Datastore>.

Figure 59 Browsing a Datastore

This will open the (currently) empty Datastore. Select the upload icon and then select <Upload File>.

Figure 60 Uploading a file to a Datastore

Select the file to be added from the browser window and then select <open>.

Figure 61 Selecting an ISO image file or folder for upload

Creating a Virtual Machine with External Storage

In this section a Virtual Machine will be loaded from, and created on, the external iSCSI devices from the previous section. A library of ISOs will be loaded from LUN0 of the external iSCSI device and the new VM will be stored on LUN1 of the external iSCSI device. To start with, select the host (192.168.1.103) and right-click to select the <New Virtual Machine> option.

Figure 62 Creating an externally based VM Step 1

Select <Typical> and then <Next>.

Figure 63 Creating an externally based VM Step 2

Name the Virtual Machine:

Figure 64 Creating an externally based VM Step 3

The second iSCSI LUN (LUN1) will be used to store the Virtual Machines that will be created.

Figure 65 Creating an externally based VM Step 4

There is a choice of Operating Systems available; the advantage of Windows and Linux is that there are ready-made templates that can be used. In this example Linux is selected and the Linux version that will be used is Red Hat Enterprise Linux 6 (64-bit).

Figure 66 Creating an externally based VM Step 5

Since the machine is virtual, its hardware is also virtual. The next choice is to select how many virtual NICs will be used. Many of the settings can be changed later and additional NICs can be added if necessary. At this point only one NIC will be used.

Figure 67 Creating an externally based VM Step 6

The next stage is to create a virtual hard disk, details of the various disk formats have already been discussed. Here a small disk of 16GB capacity will be created without thin provisioning.

Figure 68 Creating an externally based VM Step 7

Select <Finish> to create the new VM or the settings can be edited if needed.

Figure 69 Creating an externally based VM Step 8

The newly created VM appears under the <Virtual Machines> tab.

Figure 70 Viewing the new externally based VM

At this point all that has been set up is a Virtual Machine. Think of this as equivalent to a physical server that is ready to accept a new Operating System. The Virtual Machine is guaranteed to be compatible with Red Hat 6 since we used a template and Red Hat will interact with virtual hardware that it will recognize. This also shows another benefit of virtualization in that we do not have to be concerned with driver and device incompatibilities. It also makes it possible to move VMs around to more powerful systems since they all exhibit the same virtual hardware.

Preparing the VM for Operating System Installation

Select the new VM (Red Hat V6) and then select <Edit Virtual Machine Settings>.

Figure 71 VM Operating System preparation

Under the <Hardware> tab select the CD/DVD drive and point it to the Datastore where the ISO images are located (LUN0 of the iSCSI target) and select the correct image to boot from.

Check the box <Connect at Power On> and select <OK>.

Figure 72 Booting from a Datastore ISO image

After the image has been selected close the dialogue, select the VM in the left hand pane and then right-click on the VM and select <Power> – <Power On>. The Operating System should start the loader and can then be installed as normal. Select the <Console> tab to interact with the installation.

Figure 73 Interacting with the VM’s Operating System through the console view

The virtual machine console can be detached from the vSphere client by selecting the <Launch Virtual Machine Console> icon as shown below.

Figure 74 Detaching the console

Other Operating Systems can be created and these will function in the normal way bridging across to the physical machine’s network.

Figure 75 Viewing the IP settings in Linux

VMware vCenter Server

Using vCenter allows centralized control across the datacenter. This means that multiple ESXi systems and their VMs can be administered from one central location. Information about the various objects is held in a supported database such as Microsoft SQL Server. In addition it is possible to link together multiple instances of vCenter Server. The previous section allowed much of the functionality of VMware to be realized, but the vCenter Server deployment goes beyond those capabilities with advanced functionality such as vMotion, the Distributed Resource Scheduler and other High Availability functions. The IP configuration of the I/O part of the devices used in this section is as follows:

  • ESXi host     192.168.128.40
  • iSCSI target    192.168.128.20/192.168.128.30
  • The management ports are on the 192.168.1 network

Installing vCenter server

The vCenter software is downloadable or it is part of a solution from VMware. It can be installed on Linux or Windows. The example following covers the Windows version. Running the autorun file shows the screen below. Select vCenter server and then <Install>.

Figure 76 Installing vCenter Server

Follow the prompts to complete the installation.

Figure 77 vCenter Server setup

After the installation has been completed, launch the vSphere client and log on to the machine where the vCenter Server was just installed. In this case the server has an IP address of 192.168.1.118 and the client is being launched from the machine where the server is running. Specify the IP address and check the <Use Windows session credentials> box.

Figure 78 Launching vSphere client from a server

The dashboard that loads has a hierarchical structure of Datacenter → Cluster | Host → Virtual Machine.

The figure below is taken directly from vCenter’s configuration screen.

Figure 79 VMware hierarchy

Figure 80 Initial screen

Creating a datacenter

The first task/prompt is to <Create a datacenter>. Do this by selecting the link or by right-clicking on the server and then, from the context-sensitive menu, selecting <New Datacenter>.

Figure 81 Adding a Datacenter

Rename the datacenter (in this case Essentials).

Adding a cluster or a host

The next step is to add a cluster or a host. A cluster is a group of hosts that can work together to improve availability or performance by load balancing. At this stage a single host will be added rather than a cluster. Add a host by selecting the link in the main window or again by using the context sensitive option after selecting the datacenter.

Specify the host name or IP address of the ESXi host that is to be added along with its credentials. Initially a security alert screen may be presented. Select <Yes> to continue. A host summary screen will be shown and may include already created Virtual Machines. In this screenshot the Virtual Machines Red Hat and Windows 2008 R2 exist from a previous interaction.

Figure 82 Adding a host to a datacenter

Figure 83 Specifying a host

Figure 84 vCenter host summary screen

The next screen will issue a prompt to license the features. This can be ignored if the installation is still in the evaluation phase or the license key can be added by selecting <Assign a new license key to this host> if needed.

Figure 85 Licensing

Lockdown is normally not enabled but the choice here would depend on the site’s administration policies. Enabling lockdown forces access via the local host’s console or through a designated management node.

Figure 86 Lockdown Mode

Specify the Datacenter location of the host if prompted.

Figure 87 Specifying the host’s location

Select <Finish> to complete the task.

Figure 88 Completing the Add Host process
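
With vCenter in place, the same datacenter and host operations are available through PowerCLI; a sketch using the addresses from this section (the ESXi host credentials are prompted for):

  # Connect to the vCenter Server rather than to an individual host.
  Connect-VIServer -Server 192.168.1.118

  # Create the datacenter and add the ESXi host to it.
  $dc = New-Datacenter -Location (Get-Folder -NoRecursion) -Name "Essentials"
  Add-VMHost -Name 192.168.128.40 -Location $dc -Credential (Get-Credential) -Force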

The left hand pane now shows the following grouping:

Datacenter → Host → VMs. The existing VMs can now be powered up.

Figure 89 Console showing two VM’s running

Accessing the VM Consoles

Consoles can be detached from the server (right-click on the VM and then select <Open Console>) or accessed directly by selecting the VM and then the <Console> tab.

Map Views

The server features a very useful topology view which shows the relationship between each of the entities. This can be shown by selecting the datacenter and then the <Maps> tab. Various views are possible and this can give a good understanding of the layout of the system.

Figure 90 Using the maps view to show object relationships.

Here it can be seen that both VMs reside on host 192.168.128.40. A more comprehensive view is shown in the next diagram which includes the Host to Datastore relationship.

Figure 91 Map view showing all host relationships.

Performance monitoring

A number of performance statistics are available. Viewing these figures allows the Administrator to make informed decisions about the system. It is possible to identify bottlenecks and to allocate resources when needed. Again another one of the many benefits of virtualization is that more resources can be added on the fly to deal with these conditions. With physical servers the actual hardware needs to be physically upgraded thus incurring inevitable downtime.

Figure 92 Viewing performance statistics

VMware Tools

Once a VM is up and running, VMware Tools can be installed to improve performance. Install the tools by selecting the VM and then <Guest> – <Install/Upgrade VMware Tools>.

Figure 93 Installing VMware Tools

This will mount the VMware tools package in the virtual DVD drive and it can then be run manually (if autorun is not set up).

Figure 94 Running the windows VMware tools setup program

Run the file in the normal way; there will be a prompt to restart the system when finished. There is a similar VMware tools program for Linux systems. Extract the files and run the program as superuser.

Figure 95 Installing the Linux version of VMware Tools

Follow the prompts to complete the installation.

VSphere Clustering

In this section a complete hierarchy will be built: Datacenter → Cluster → Hosts → Virtual Machines. The system is managed by the physical server “Zotac”.

Adding a Datacenter

Figure 96 Creating a new Datacenter

The first task is to create a new datacenter, in this case the datacenter is called “Enterprise Cluster”. Select the link <Create a datacenter>. Name the datacenter and the tree on the left hand side should look similar to Figure 97.

Figure 97 Viewing the datacenter

Adding a HA Cluster

Right click on the Datacenter and from the menu select <New Cluster>

Figure 98 Adding a cluster to the Datacenter

VMware vSphere supports two types of clusters. vSphere HA (High Availability) is used for fast recovery of host failure, resources can be made available on other hosts very quickly. DRS (Distributed Resource Scheduling) is mainly used for load balancing. Nodes are aggregated together to automatically make the best use of the available resources. This first part will illustrate how to build a HA cluster.

Figure 99 Choosing the HA cluster option

The next screen presents a choice of options. Nodes are monitored by heartbeats, which periodically check the health of the other nodes. It is recommended to use the defaults presented, although large cluster deployments may want to change the number of failures that the cluster tolerates. With a small-scale cluster, more than one node failure would be cause for concern and should really be investigated rather than trying to carry on with a major system failure.

Figure 100 Enabling cluster options.

It is recommended to leave the restart options at their default settings.

Figure 101 Cluster restart options

Virtual Machines can be restarted if they do not issue heartbeat responses within a certain time.

Figure 102 Cluster heartbeat monitoring

Enhanced vMotion compatibility is a new feature which enforces strict compatibility between hosts. It is recommended to leave this option disabled.

Figure 103 Enabling/Disabling Enhanced vMotion capability.

For performance reasons it is recommended to store the swapfile in the same directory as the virtual machines.

Figure 104 Setting the Virtual Machine’s swapfile policy

Select <Finish> at the summary screen.

Figure 105 Completing the Cluster Add process

Adding hosts to the cluster

To add hosts to the new cluster ensure that the cluster is selected and then select <Add a Host> from the link.

Figure 106 Adding a host to a cluster

Specify the hostname and credentials

Figure 107 Specifying the hostname and credentials

There may be a security alert on the next screen (not shown).

Figure 108 Adding the first host to the cluster

There may be a license prompt if the system is still being used in trial mode.

Figure 109 Adding a license to a node

Select <Lockdown> options.

Figure 110 Setting lockdown options

Select <Finish> and repeat to add a second node.

Figure 111 Completing the Add Host to a cluster process

The <Recent Tasks> panel will show the progress of the cluster configuration.

Figure 112 Viewing the cluster task progress
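
For reference, the cluster-plus-hosts build can be reproduced in PowerCLI; a hedged sketch in which the cluster name is illustrative and the host credentials are prompted for:

  $dc = Get-Datacenter "Enterprise Cluster"

  # Create an HA-enabled cluster that tolerates one host failure.
  $cluster = New-Cluster -Name "HA Cluster" -Location $dc -HAEnabled `
      -HAAdmissionControlEnabled $true -HAFailoverLevel 1

  # Add both ESXi hosts directly into the new cluster.
  Add-VMHost -Name 192.168.1.99 -Location $cluster -Credential (Get-Credential) -Force
  Add-VMHost -Name 192.168.1.103 -Location $cluster -Credential (Get-Credential) -Force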

The client now shows the following list of objects.

Note that the VMs were “owned” by particular hosts in the setup procedure earlier. Now they are shown independently of the hosts.

Figure 113 Completed cluster hierarchy

Sharing storage within a cluster

In a non-clustered environment each host “owns” its own storage. In a clustered environment all hosts have access to the storage. This means that shared storage architectures such as Fibre Channel or iSCSI SANs are used in clustered environments. Other NAS architectures such as NFS are also suitable. Figure 114 shows the same storage accessible to both of the clustered nodes – 192.168.1.99 and 192.168.1.103.

Figure 114 Hosts sharing the same iSCSI storage

The map view below clearly shows dual paths to the iSCSI LUNs and a single path to each of the node’s local disks.

Figure 115 Topology view of cluster and shared storage

The next view is a simplified view of the Host to VM relationship. To illustrate how the VMs seamlessly failover, node 192.168.1.99 will be shutdown. The map view will have to be updated to reflect the changes.

Figure 116 Map view showing node to VM relationships

Figure 117 now shows the node to VM relationship after the VMs have failed over to the surviving node 192.168.1.103.

Figure 117 Node to VM relationship after failover

Changing host ownership

To manually move VMs between hosts, select the VM, right-click and choose <Migrate>.

Figure 118 Migrating a VM

Migration can be to another host, another Datastore or both. In this case the migration will be from host 192.168.1.103 to host 192.168.1.99.

Figure 119 Selecting the migration type

The next dialogue requests the node name that the VM is to be migrated to.

Figure 120 Selecting the destination node for the migration.

Figure 121 Setting the migration priority

Figure 122 Completing the migration

Migrating the machine on the hardware used took approximately one minute; however, the map view took a little longer to recognize the changes. The amount of time taken is largely dependent on the hardware used and the amount of I/O activity taking place. The new map view after the migration is shown below.

Figure 123 Viewing the host to VM relationship after the migration has completed
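
The same migration can be driven from PowerCLI, which is convenient when many VMs have to be rebalanced; a minimal sketch using the Red Hat VM from the earlier example:

  # Migrate the VM from 192.168.1.103 to 192.168.1.99 (a vMotion when both hosts
  # see the VM's shared storage).
  Move-VM -VM "Red Hat V6" -Destination (Get-VMHost 192.168.1.99)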

The cluster can be removed by simply selecting the cluster, and from the right click menu select <Remove>.

HA Master and Slave Nodes

Within a VMware HA cluster one of the nodes is elected as a Master node. The Master node co-ordinates the activity between the other (slave) nodes. The master is responsible for checking the status of the slave nodes. The master node is determined by processes which “elect” the master. The HA cluster is defined as a Master/Slave architecture. The designation of a particular node can be shown by selecting the node and then the <summary> tab.

In Figure 124 the vSphere HA state is shown as being the Master within the cluster. The next figure shows node 192.168.1.103 as being the slave.

Note the states are termed running and connected.

Figure 124 Showing the HA master node state within a HA cluster

Figure 125 Showing the HA Slave node state within a HA cluster

Adding a DRS Cluster

Creating a DRS cluster is similar to creating an HA cluster. Select <Turn on vSphere DRS> at the initial screen and (mainly) follow the prompts as before.

Figure 126 Creating a DRS Cluster

There will be a choice of automation level which decides how VMs will be placed. It is recommended to use fully automated unless there are special requirements.

Figure 127 Selecting DRS automation levels

Complete the screen prompts and then add hosts as before.
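
A hedged PowerCLI equivalent for a fully automated DRS cluster (the cluster name is illustrative):

  $dc = Get-Datacenter "Enterprise Cluster"

  # DRS cluster with fully automated placement and migration recommendations.
  New-Cluster -Name "DRS Cluster" -Location $dc -DrsEnabled `
      -DrsAutomationLevel FullyAutomated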
