This HOWTO will describe how to set up a two-node Windows 2012 Cluster. In addition, a number of other areas are discussed which are prerequisites to installing the Cluster. These additional steps include:
Setting up a Domain controller
Configuring DHCP and DNS
Installing a shared iSCSI target device
After these steps have been completed the Cluster will be set up.
In this Windows 2012 Cluster HOWTO, VMware workstation was used to create Windows 2012 Virtual Machines for the Cluster implementation. This leads to a few additional steps but has the advantage of allowing the user to gain familiarity prior to deploying a full scale production Cluster.
Configuring Active Directory
Once Windows 2012 has been installed, the next step is to create a domain controller. This can be done by running the Server Manager.
From the Server Manager Dashboard select Manage – Add Roles and Features
Select the installation type as <Role-based or feature-based installation>
With the correct server highlighted select <Next>
Select <Active Directory Domain Services>, <DHCP> and <DNS>, and then select <Next>.
Select <Next> until reaching the screen below:
Note: The recommendations on the Microsoft screen above reflect good practice and should be followed in a real-life scenario. Since the objective of this tutorial is to cover the basics of Cluster installation, only one domain controller will be installed.
Select <Next> – <Install>
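For reference, the same roles can also be added from an elevated PowerShell prompt instead of the wizard; the following is a sketch using the standard Windows Server 2012 feature names:

```powershell
# Install the AD DS, DHCP and DNS roles plus their management tools
Install-WindowsFeature AD-Domain-Services, DHCP, DNS -IncludeManagementTools
```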
After the required components have been added, select AD DS from the Server Manager screen to continue with the Active Directory configuration. Select <More …> from the right of the highlighted yellow bar; this gives an option to promote the server to a domain controller.
Configuring a Domain Controller
Select <Promote this server to a domain controller>. This will bring up the next screen:
Select <Add a new forest> and fill in a root domain name; select <Next>. Since this example uses a pure Windows Server 2012 environment, there is no need to change the functional levels at the next screen. Enter a Directory Services Restore Mode (DSRM) password and then select <Next>.
Select <Next> at the DNS options screen.
Then accept the NetBIOS domain name. Note that NetBIOS names are limited to 15 characters, so the name will be truncated in this case.
Select <Next> to accept the default folder locations:
Review the options and select <Next> to proceed.
Select <Install> after verifying that there are no gating prerequisites.
After selecting <Install> the system should restart as a domain controller.
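The promotion can also be scripted; the sketch below assumes a hypothetical root domain name of cluster.local (substitute your own):

```powershell
# Promote this server to a domain controller in a new forest.
# The server reboots automatically once promotion completes.
Import-Module ADDSDeployment
Install-ADDSForest -DomainName "cluster.local" `
    -SafeModeAdministratorPassword (Read-Host -AsSecureString "DSRM password") `
    -InstallDns
```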
After the system reboots, the login screen will default to a domain account.
Select <Computer> – <Properties> to verify.
Setting up DHCP
Launch Server Manager and, with the DHCP tab highlighted, select <More …> from the yellow bar on the right-hand side.
Select <Complete DHCP configuration>
Use the administrator credentials and select <Commit>
If the Summary screen shows success then select <Close>
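The equivalent post-install step can be sketched in PowerShell. The server name below is this example's; the registry write that clears Server Manager's post-deployment warning is a commonly used workaround, not an official API:

```powershell
# Authorize the DHCP server in Active Directory
Add-DhcpServerInDC -DnsName "clusterdomaincontroller.cluster.local"
# Clear the post-install configuration flag so the yellow warning bar disappears
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\ServerManager\Roles\12" `
    -Name ConfigurationState -Value 2
```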
Setting up a DHCP Scope
From the server manager screen select <Tools> – <DHCP>
Expand the tree to show the server name, which in this case is Clusterdomaincontroller, then right-click on IPv4 to bring up a context-sensitive menu and select <New Scope…>. This will start the Wizard.
Select <Next> from the Wizard screen and enter a scope name and description. In this Cluster, static IP addresses are taken from the range 192.168.4.1 through 192.168.4.99, while the scope will allocate dynamic addresses from 192.168.4.100 through 192.168.4.200.
This is specified in the following screen.
No exclusions are needed so select <Next> on the following screen.
Accept the default lease duration.
The next stage is to configure the scope options and then activate the new scope.
Enter a gateway address and select <Add>
On the next screen accept the defaults. Note scope clients can be added later.
WINS is not required here, so select <Next> on the following screen.
Finally activate the scope.
The new scope is shown in the DHCP window.
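The scope created above can be sketched as two PowerShell commands; the scope name is hypothetical and the addresses are those used in this example:

```powershell
# Create and activate the scope covering .100-.200 on the 192.168.4.0/24 network
Add-DhcpServerv4Scope -Name "ClusterScope" -StartRange 192.168.4.100 `
    -EndRange 192.168.4.200 -SubnetMask 255.255.255.0 -State Active
# Hand out the gateway and DNS server addresses used in this example
Set-DhcpServerv4OptionValue -ScopeId 192.168.4.0 -Router 192.168.4.1 `
    -DnsServer 192.168.4.10
```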
Configuring a VMware Virtual Network
Note that in this example virtual machines are configured, which requires additional settings in VMware Workstation. If physical machines are used, this step can be omitted.
From the VMware workstation menu select Edit – Virtual Network Editor – Add Network.
After the Virtual Network Editor opens, select VMnet4 from the drop-down box and select <OK>.
Configure the settings as shown in the next screen and then select <DHCP Settings>.
Here IP addresses below .100 can be used for static IPs, and the DHCP server will allocate .100 and above.
For each of the machines that will be allocated a DHCP address on the 192.168.4 network, configure their network adapter settings under VMware to use the vmnet4 network.
Here there are two virtual network adapters.
Configuring the client Network Adapters
On the client machines, the IPv4 properties are set up as shown in the following figure:
After this start the domain controller and the client machines and verify that they have received the correct IP addresses.
Setting up the client nodes to access the Domain Controller
The next stage is to set up accounts on the domain controller. The two clients are named ClusterNode01 and ClusterNode02.
SID Note for Cloned VMs
Note that if the virtual machines have been cloned, they will share the same SID. This can be changed by running the sysprep tool from %SystemRoot%\system32\sysprep\sysprep.exe. Sysprep can clear settings such as passwords and network configuration, so reconfiguration may be necessary after running it. To avoid this, it is better not to clone existing systems.
Joining the client nodes to the domain
Select <Computer> – <Properties> – <Change Settings>.
Then select <Change> and then select the <Domain> radio button and add in the domain name.
Select <OK>. If all is well, there should be a prompt to enter the credentials of an account on the domain controller.
After this a welcome message should appear.
Reboot the system to join the domain. The login prompt will look like:
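The join can equally be done from PowerShell on each client; the domain name below is this example's assumed cluster.local:

```powershell
# Join the node to the domain and reboot; prompts for domain credentials
Add-Computer -DomainName "cluster.local" -Credential (Get-Credential) -Restart
```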
Verify that the node is correctly configured from the Server Manager screen.
To verify that the nodes have been added select <Control Panel> – <Administrative Tools> – <Active Directory Users and Computers> and then from the next screen double click on computers to see the two newly added nodes.
Configuring iSCSI Targets
Windows 2012 Failover Clusters require shared storage that supports SCSI-3 persistent reservations. An iSCSI target implementation is included with Windows 2012, and since it meets this requirement it will be used for the shared storage.
Installing the iSCSI Target Server
From the Server Manager screen select <Add Roles and Features>, select <Next> until the Server Roles screen appears, and then under <File and Storage Services> enable <File and iSCSI Services>.
Select <File and iSCSI services> – iSCSI Target Server.
Select <Next> and at the confirmation screen select <Install>.
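For reference, the same role can be installed in one line from PowerShell:

```powershell
# Install the iSCSI Target Server role service
Install-WindowsFeature FS-iSCSITarget-Server -IncludeManagementTools
```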
Creating an iSCSI virtual disk
From Server Manager select <File and Storage Services> – <iSCSI> and then select the link on the right to start the Wizard.
The iSCSI Virtual disk will be generated from an actual physical server disk. In this example a small portion of the C: drive’s capacity will be used to present the virtual iSCSI disk.
Select <Next> and name the device.
Select <Next> and allocate a size.
Select <New iSCSI target> and then <Next>
Name the target and enter a brief description. Select <Next>
The next stage is to add the initiators that will access the disk. At this point no initiators have been configured on the client machines that will access the target but two will be added with the names Clusternode01 and Clusternode02. The actual initiator name is configured in the section – Configuring iSCSI initiators.
Select <Add> and then on the next screen select <Enter a value for the selected type>
In a similar fashion add the second initiator.
Select <Next> and the nodes will be specified as Access servers. Select <Next>. In this example CHAP will not be enabled; select <Next>
Verify the settings and then select <Create > to build the iSCSI target.
The newly created target should show up under the <iSCSI> link.
Note: In a real world implementation the target would not be set up like this. It is a single point of failure and also does not include any level of RAID protection. A later HOWTO will describe how to set up a redundant shared storage system.
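The target configuration above can be sketched in PowerShell. The file path, size, target name and initiator IQNs are examples matching this tutorial's node names:

```powershell
# Create a file-backed virtual disk on C: (path and size are examples)
New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\ClusterDisk1.vhd" -SizeBytes 5GB
# Create the target and allow both cluster nodes' initiator IQNs
New-IscsiServerTarget -TargetName "ClusterTarget" -InitiatorIds `
    "IQN:iqn.1991-05.com.microsoft:clusternode01", `
    "IQN:iqn.1991-05.com.microsoft:clusternode02"
# Map the virtual disk to the target
Add-IscsiVirtualDiskTargetMapping -TargetName "ClusterTarget" `
    -Path "C:\iSCSIVirtualDisks\ClusterDisk1.vhd"
```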
Configuring iSCSI initiators
The next stage is to configure the initiators on the two Cluster nodes (Clusternode01 and Clusternode02) that will be used to access the iSCSI target. Select <Control Panel> – <iSCSI Initiator>. If the iSCSI service is not running there will be an option to start it; select <Yes>.
Under the Configuration tab, set the initiator name to the name used when setting up the target. Note that if the initiators had been configured before the target, the initiator names would have been discovered automatically on the target configuration page.
The next step is to connect to the target – under the <Targets> tab enter the IP address of the target (192.168.4.10) and select <Quick Connect>.
If a successful connection is made, it should show up as a discovered target.
Repeat for Clusternode02 using Clusternode02 as the initiator name.
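On each node, the initiator steps above can also be scripted; the portal address is the domain controller's from this example:

```powershell
# Start the initiator service and make it start automatically
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI
# Point at the target portal (the domain controller) and connect persistently
New-IscsiTargetPortal -TargetPortalAddress 192.168.4.10
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```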
Configuring the Cluster
From Clusternode01 open the Server Manager and then Manage – <Add Roles and Features>.
Select <Next > until <Features> is highlighted and then check the <Failover Clustering> box.
Select <Add Features> at the next screen. Select <Next> and then <Install>.
Repeat the setup for Clusternode02.
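Alternatively, install the feature from PowerShell on each node:

```powershell
# Install Failover Clustering plus the Failover Cluster Manager tools
Install-WindowsFeature Failover-Clustering -IncludeManagementTools
```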
After Failover Clustering has been installed, log in to Clusternode01 using the domain administrator account.
Now from Clusternode01 select <Server Manager> – <Tools> – <Failover Cluster Manager>.
This will start the Failover Cluster Manager Tool.
It is recommended to first validate the configuration. Select <Validate Configuration> and then add the nodes to be validated.
The tests can take a while to run if all tests are selected. Investigate the output for any warning or error messages. If the nodes are correctly set up then a validated message should appear.
Check the <Create the Cluster now> box and then select <Finish>. This will start the Create Cluster Wizard. Enter a Cluster name and select <Next>.
Verify that the settings are correct and select <Next>.
A confirmation screen should be shown.
It is recommended to view the report before selecting <Finish>. In this case a warning has been received about the quorum disk. This can be added later.
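Validation and creation can also be driven from PowerShell; the cluster name and static address below are examples:

```powershell
# Run the full validation suite against both nodes, then create the cluster
Test-Cluster -Node Clusternode01, Clusternode02
New-Cluster -Name Cluster1 -Node Clusternode01, Clusternode02 `
    -StaticAddress 192.168.4.50
```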
Configuring shared Cluster storage
At this point no shared storage has been set up. Open the Disk Management tool and the iSCSI volume should appear; if it is offline, set it online and initialize it. After this it can be added to the Cluster.
Note if this was initialized prior to the Cluster configuration then it could have been automatically configured, but in the interests of education it was left to be added later.
Now from the Failover Cluster Manager select <Add Disk> to add the iSCSI target to the Cluster.
It should now show up as a resource.
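The equivalent PowerShell step is a one-liner:

```powershell
# Add every disk that is eligible for clustering to the cluster
Get-ClusterAvailableDisk | Add-ClusterDisk
```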
Note that the disk is owned by Clusternode02. Ownership can be changed, or a number of disks can be load-balanced across nodes by reassigning ownership.
Reassigning disk ownership
From the Failover Cluster Manager under Disks select <Move Available Storage>.
Select <Select Node> and then select ClusterNode01
The screen now shows ownership belonging to ClusterNode01.
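The same move can be performed from PowerShell:

```powershell
# Move the Available Storage group (and its disks) to ClusterNode01
Move-ClusterGroup -Name "Available Storage" -Node Clusternode01
```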
High Availability Role configuration
The next step is to use the High Availability feature to create a Role that will be accessible to client nodes. To do this, new virtual hard disks were added to the Domain Controller VM and two new iSCSI volumes were created as described earlier. In addition, make sure that the File Server Role has been installed on both Cluster nodes as shown below.
Recall that the iSCSI initiator needs to be refreshed and then the disks must be added as Cluster storage; refer back to the earlier section for details.
From the Failover Cluster Manager screen select <Configure Role>
In this case the HA Role will be that of a File Server.
Select <File Server for general use>.
Give the Role a name.
Select the storage that is to be used for the file server and then select <Next>.
After the task has been completed the report can be viewed. Investigate any warnings that appear.
Now, with <Roles> selected, test access by verifying that the Role can be moved over to the other server.
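The file server Role created above can also be sketched as a single cmdlet; the Role name, disk name and address are hypothetical examples:

```powershell
# Create a clustered file server backed by one of the shared disks
Add-ClusterFileServerRole -Name "ClusterFS" -Storage "Cluster Disk 2" `
    -StaticAddress 192.168.4.60
```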
After this select <Add File Share>.
NFS is typically used by UNIX systems, while Microsoft clients use SMB. In this case an SMB share will be configured: highlight <SMB Share – Quick> and select <Next>.
Highlight the volume and select <Next>
Name the share and select <Next>
Accept the default settings and select <Next>
Accept the default permission and select <Next>.
At the confirmation screen select <Create>.
Close after successful creation. The share can be accessed now from a client by entering the path name in the run dialog.
Files can now be added to the share.
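Creating the share can also be scripted; the share name, path, scope and account below are examples (the -ScopeName parameter ties the share to the clustered file server name):

```powershell
# Create the share scoped to the clustered file server
New-SmbShare -Name "ClusterShare" -Path "F:\Shares\ClusterShare" `
    -ScopeName "ClusterFS" -FullAccess "cluster\Administrator"
```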
Configuring a Quorum Disk
Quorum resources are used to ensure Cluster integrity. From the Failover Cluster Manager, with the Cluster highlighted, select <More Actions> on the right-hand side of the screen and then select <Configure Cluster Quorum Settings>.
This will open the Configure Cluster Quorum Wizard.
In this case <Typical Settings> will be used. Recall that there is still an unused iSCSI disk available.
In this case the iSCSI disk was recognized and allocated, and no errors occurred.
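The quorum configuration can also be set explicitly from PowerShell; the disk name is an example:

```powershell
# Use the remaining iSCSI disk as a disk witness for the quorum
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 3"
```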
A basic two-node Windows 2012 Cluster has been set up using two nodes connected to a third node which functions as a Domain controller. Shared Microsoft iSCSI target devices (which support persistent reservations) were configured and added to the Cluster. This was all done using Virtual Machines, but a real-life deployment could use physical systems. In addition, the disk drives would most likely be SAS or Fibre Channel devices configured as external storage. An HA file server resource was added, as well as a Quorum resource. The tutorial was written to help those wishing to learn about Windows Clustering to come up to speed rapidly.
The deployment should not be used in anything other than in an educational environment.