Red Hat Cluster HOWTO

Setting up a Red Hat Cluster

  1. Hardware configuration

    The installation hardware used for this configuration is shown below; however, a lower-cost installation could just as easily use virtual machines together with a software-emulated iSCSI target.

  2. Actual hardware used

  • 2 x x86-64 cluster nodes running Red Hat Enterprise Linux 6.4, each with two GigE NICs
  • 1 x DHCP server
  • 1 x Sharable iSCSI target device (Netgear ReadyNAS)
  • 1 x Network Switch

  3. Installation

    Start the installation of Red Hat Enterprise Linux 6 server on both of the cluster nodes and, at the screen shown below, ensure that (at least) the highlighted options are selected. In addition, ensure that the iSCSI components are added.

    At the customize screen select other components (such as the GUI environment) as necessary.

    When a package is highlighted it can be selected to drill down further; here additional High Availability components have been added.

    Repeat the installation for the second node.

  4. Post Installation Tasks

  5. Network configuration

    In this case the network has been set up with two NICs per server. Interface eth0 has been allocated by DHCP and eth1 (which will be used for a crossover cable to the other server) has been statically assigned as per the table below:

    Host                  eth0   eth1
    redhatclusternode1    DHCP   192.168.10.10
    redhatclusternode2    DHCP   192.168.10.20
    iSCSI device          DHCP   -

    In this particular installation the /etc/sysconfig/network-scripts/ifcfg-eth0 file looks like:
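    A typical DHCP-assigned ifcfg-eth0 on Red Hat Enterprise Linux 6 looks roughly like the sketch below; the HWADDR value is only a placeholder, and NM_CONTROLLED=no reflects the NetworkManager note that follows.

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (sketch)
    DEVICE=eth0
    HWADDR=00:11:22:33:44:55
    TYPE=Ethernet
    ONBOOT=yes
    BOOTPROTO=dhcp
    NM_CONTROLLED=no

    A corresponding static ifcfg-eth1 for redhatclusternode1, using the address from the table above and assuming a /24 netmask, might be:

    # /etc/sysconfig/network-scripts/ifcfg-eth1 (sketch)
    DEVICE=eth1
    TYPE=Ethernet
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.168.10.10
    NETMASK=255.255.255.0
    NM_CONTROLLED=no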

    Note: The NetworkManager program can be used to configure the network; however, it must be disabled when the cluster is installed.

  6. Cluster configuration

  7. Enabling ricci

    After the Red Hat installation has been completed and the network configured, open a terminal and start the ricci service.
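    A minimal sketch of the commands, using the standard RHEL 6 init scripts (chkconfig ensures the service also starts at boot):

    service ricci start
    chkconfig ricci on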

  8. Enabling luci

    Next start the luci service.
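    Again, a sketch using the standard init scripts:

    service luci start
    chkconfig luci on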

  9. Setting passwords for luci and ricci

    Add passwords for ricci and luci.
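    Presumably this is done with passwd against the ricci and luci system accounts created by the packages; a sketch:

    passwd ricci
    passwd luci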

  10. Starting the cluster manager

    Start the cluster manager with the command:

    service cman start

    Note: if NetworkManager is running, an error message will be generated as shown below:

  11. Disabling NetworkManager

    If the NetworkManager service is enabled it can be disabled by entering the commands:

    service NetworkManager stop

    chkconfig NetworkManager off

  12. Starting the cluster configuration

    As shown above, start a browser session and point it at https://redhatclusternode1:8084

    Note: if you used a different hostname, substitute it (or the machine’s IP address) in the URL above.

    Note: the cluster configuration information is stored in /etc/cluster/cluster.conf

  13. Creating a new cluster

    After logging in select <Manage Clusters> and then select <Create> to create a new cluster.

    Name the cluster, add in the node name and either use <Locally Installed Packages> or <Download Packages>. Ensure that <Enable Shared Storage Support> is checked.

    The cluster should now show that the node redhatclusternode1 has been added to the Cluster – redhatcluster:

  14. Adding additional cluster nodes

    To add the second node select the cluster and this time choose <Add>.

    The second node should be joined to the cluster after a short delay:
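    Behind the scenes, /etc/cluster/cluster.conf on both nodes should now contain roughly the following (a sketch only; luci manages this file, and the config_version, node IDs and any fencing entries will differ):

    <?xml version="1.0"?>
    <cluster config_version="2" name="redhatcluster">
      <clusternodes>
        <clusternode name="redhatclusternode1" nodeid="1"/>
        <clusternode name="redhatclusternode2" nodeid="2"/>
      </clusternodes>
      <cman expected_votes="1" two_node="1"/>
      <fencedevices/>
      <rm/>
    </cluster>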

    If there are any issues configuring the cluster, try restarting the ricci, luci and cman services, taking note of and correcting any error messages that occur.
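    For example, on the affected node:

    service ricci restart
    service luci restart
    service cman restart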

  15. Adding an existing cluster

    The cluster view can be added to the second node’s browser view by using the <Add> button.

  16. Configuring an iSCSI target device

    Note: The iSCSI packages should have been selected during the installation; if not, they will need to be obtained and installed.
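    On RHEL 6 the initiator tools are provided by the iscsi-initiator-utils package, so if they are missing something like the following should install them:

    yum install iscsi-initiator-utils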

    Configure the iSCSI daemons and start the services:
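    A sketch using the stock init scripts (iscsid is the daemon itself; the iscsi service handles automatic logins at boot):

    service iscsid start
    chkconfig iscsid on
    chkconfig iscsi on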

    A number of iSCSI targets have been previously created for use with the cluster nodes. The target IP address of the iSCSI device is 192.168.1.199. Use the following command (replacing the IP address below with your correct portal address):
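    That is, a discovery along the lines of:

    iscsiadm -m discovery -t sendtargets -p 192.168.1.199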

    The targets have been discovered and will be set up as logical volumes for use by the cluster. Now log in to the targets using the iqn information above:
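    For example (the IQN below is a placeholder; use the target names returned by the discovery step):

    iscsiadm -m node -T <target iqn> -p 192.168.1.199 --login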

    The session information can be listed by:
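    The usual command is:

    iscsiadm -m session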

    The tail command can be used to show the device name:
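    Presumably against the system log, for example:

    tail /var/log/messages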

    Here the device names are sdd, sde, sdf and sdg. They should now also show up with the cat /proc/partitions command.

    Repeat the steps for the other node (redhatclusternode1).

    Note: the device names may be different on the other node.

  17. Creating logical volumes

    First create three volume groups.
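    A sketch, assuming three of the iSCSI devices discovered above (here sdd, sde and sdf) are used, one per volume group, and using illustrative volume group names:

    pvcreate /dev/sdd /dev/sde /dev/sdf
    vgcreate vg_clust1 /dev/sdd
    vgcreate vg_clust2 /dev/sde
    vgcreate vg_clust3 /dev/sdf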

    Display the volume groups using the vgdisplay command.

    The next task is to create three logical volumes from the three volume groups:
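    Continuing the sketch above, one logical volume per volume group, each taking all of the free space:

    lvcreate -l 100%FREE -n lv_clust1 vg_clust1
    lvcreate -l 100%FREE -n lv_clust2 vg_clust2
    lvcreate -l 100%FREE -n lv_clust3 vg_clust3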

    Show the volumes using lvdisplay.

  18. Creating a GFS2 file system

    Format the logical volumes using the GFS2 file system by issuing the following command.
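    A sketch for the first logical volume from the example above (repeat for the others, changing the device and the file-system part of the lock table name; the cluster name must match the cluster created earlier, redhatcluster):

    mkfs.gfs2 -p lock_dlm -t redhatcluster:clust1 -j 2 /dev/vg_clust1/lv_clust1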

    Note: -j refers to the number of journals; one journal is required for each node that will mount the file system.

    The next step is to create mount points. Do this for both of the nodes.
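    For example (the directory names are illustrative and should be created on both nodes):

    mkdir -p /mnt/clust1 /mnt/clust2 /mnt/clust3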

  19. Failover Domain

    A Failover Domain is a group of nodes within the cluster on which a cluster service is allowed to run. In our case both nodes can fulfill this function. Larger clusters may want to restrict this ability to only certain nodes.

    Select <Failover Domains> and then <Add>. Check the member boxes for both nodes, name the Failover Domain and select <Create>.


    Again the Failover Domain will show up on the second node automatically. The settings can be changed later if required.

  20. Adding a Resource

    In this example a Samba server will be created. Select the <Resources> tab and then select <Add>. From the drop-down menu select GFS2 as the Resource and fill in the fields with the appropriate information. Select <Submit>.

    Enter the data into the fields below and select <Submit>.

    Next add an IP Resource:

    Configure an IP address and Netmask:

    Next add a Samba Server Resource:

    The Resources screen should now show the three Resources:

    Now that the Resources have been added, the next step is to add a service group. Select <Service Groups> and then <Add>.

    Name the service samba and check <Automatically Start This Service>, add in the Failover Domain that was created earlier and set the <Recovery Policy> to <Relocate>. Select <Submit>.

    After selecting <Submit>, select the samba service and then <Add Resource>.

    Under <Global Resources> select the IP address that was created earlier. Then select <Add Child Resource> on the IP Address Resource and add the GFS2 Resource from the <Global Resources> section.

    Now add a <Child Resource> to the GFS2 Resource.

    The Resource here is a Script Resource using the smb init script (/etc/init.d/smb) for Samba.

    Finally select <Submit>

    Samba should now be running on one of the cluster nodes. It can be tested for relocation by selecting the other node and then the start icon as shown below:

    After failover the status shows that the service is now running on redhatclusternode1.
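    The same relocation can also be triggered from the command line with clusvcadm; a sketch using the service and node names from this walkthrough:

    clusvcadm -r samba -m redhatclusternode1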

  21. Setting up Samba access

    User alan already has a system account and can be given Samba access by issuing:
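    Presumably with smbpasswd, which adds the user to the Samba password database:

    smbpasswd -a alan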

    Now, on the Windows machine that will access the files, enter the Samba resource IP address:

    The next step is to log on.

    Now access the files and map the Network drive if required.

  22. Summary

    A basic two node Red Hat Cluster implementation has been set up. The shared storage component was implemented using an iSCSI target device. A simple Samba application was configured and shared out to a Microsoft Windows client machine.

    Note: Applications behave differently under failover conditions; some are cluster-aware and are more tolerant of failover situations.

    Other areas that should be considered in a more robust implementation are the addition of a Quorum Resource and a Fencing Device.

  23. Further information

    www.redhat.com

    http://www.samba.org


5 thoughts on “Red Hat Cluster HOWTO”

  1. I am facing this error while adding node to a cluster.

    Error adding this cluster: Unable to connect to host “prime3”: Unable to establish an SSL connection to prime3:11111 after 5 tries. Is the ricci service running?: No route to host

    Please help me out in fixing this.

    • Did you see ricci running (using the ps command)? Unfortunately I no longer have this set up, but I would restart the ricci, luci and cman services. Also, did you enter the password for the service, and can you ping all nodes?

  2. Hi,
    I can’t start the samba service after adding the global resource and child resource, as you mentioned. There is a red error icon on the ‘samba’ service under the service group tab. Also, when I manually start the “smb” service (with the command ‘service smb start’) on each server, the samba status doesn’t change in the web admin console.

Comments and suggestions for future articles welcome!
