Open-E HOWTO

  1. Introduction and Disclaimer

    Open-E develops enterprise-class, IP-based storage software. At the time of writing (April 2014) the Data Storage Software V7 (Open-E DSS V7) is the current release and is the version used here. The software is mature and widely used with a good track record. It is also easy to use (as this guide will show), and any experienced SysAdmin will have no trouble getting to grips with device configuration.

    As always my HOWTO guides focus on the HOW rather than the WHY and are designed to get the user up to speed rapidly with a working configuration, so very little explanation is given along the way. Basic functions such as iSCSI and NAS configuration are shown. Since the intent is to gain familiarity with the functionality of the product, the configurations shown here are not necessarily best practices; indeed, since the focus is more on education, virtual machines were used rather than physical ones to illustrate the functionality. Each configuration step shows the associated screenshots, which reduces the likelihood of errors; however, there is no guarantee that this document is error-free. The information portrayed here is for educational purposes and should not be used in a live production environment.

    The user is encouraged to implement other features not covered in this guide and to investigate Enterprise-level features such as High Availability, Remote Mirroring, etc. Open-E has comprehensive documentation available that goes well beyond the limited scope of this document.

  2. Obtaining and installing the software

    The software can be downloaded at http://www.open-e.com/download/open-e-data-storage-software-v7/ and burned onto a CD/DVD. The installation process is fairly simple, and after successful installation a screen similar to the one below will be displayed.

  3. Accessing the configuration screens

    From a suitably configured system, point a browser to the IP address shown in the screen above.

    Select <Try> to get a 60 day trial version. This will bring up the next screen prompting for an email address.

    Enter a valid email address and follow the link that is emailed. After this a key will be sent which can be pasted into the dialog box.

    Select <apply> to activate the product key. This will bring up the license agreement screen as shown below. Review the agreement and select <agree> to proceed to the wizard screens.

     

    Use the password “admin” for the initial login.

    In this case the default language of English will be used; select <Next> to confirm the language.

     

    Change the password from the default of “admin”.

     

    Next set the IP address, in this case no changes are needed and the static IP of 192.168.0.200 will continue to be used.

    Next select the correct time zone.

     

    Set the time manually or use an NTP server.

    Name the Server.

    Verify the settings are correct and select <Finish>.

    At this point the wizard is complete and the system is ready for volume configuration.

    In addition scrolling down will show documentation links. The system allows for a 60 day trial period.

  4. Configuring a Volume

    The first step is to configure a volume. A volume is part of a volume group so the first part is to create the volume group. This is done by selecting Configuration → Volume Manager → Volume Groups.

     

    Name the volume group and select apply.

    Select <OK> to create and format the volume group.

    Now that the volume group exists a logical volume can be created. This can be done by selecting the highlighted link below.

     

  5. Setting up an iSCSI Target

    The next screen will allow the creation of NAS or iSCSI devices. In addition snapshots can be created. In the following example an iSCSI volume will be used with a capacity of 25 GB. Also a swap space of 4 GB will be set up. The initialization rate can also be adjusted from the drop-down box. It is recommended to perform an initialization on the new volume. In the option box select <Create new target automatically>.

     

    If the process is successful a screen similar to the one shown will appear.

    The capacity of 25 GB is shown and the link can be selected to configure the iSCSI target. This screen shows that the access mode is set up for write through which is best for reliability, CHAP authentication is disabled (which is acceptable for the purposes of the tutorial) and the default iqn is used. It is also possible to map only certain hosts to the target from the <target IP access> box.

  6. Connecting to the iSCSI target from a Linux iSCSI initiator

    This example will set up an iSCSI initiator on CentOS. If the iSCSI initiator is not included with the installation then it can be installed with the following command:

    sudo yum install iscsi-initiator-utils

    The default iSCSI configuration file is located at /etc/iscsi/iscsid.conf. At this point the default settings should be OK. Next, start the iSCSI service:

    sudo service iscsid start

    Connect to the target by issuing:

    iscsiadm -m discovery -t st -p 192.168.0.220

    The system should respond with the target’s iqn as shown below.

    This can be compared with the Name on the Open-E configuration screen.
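
    Depending on the initiator configuration, a login step may be needed before the disk becomes visible; logging in to all discovered targets is one way to do this:

    iscsiadm -m node --login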

    Configure the iSCSI service to start at boot time:

    sudo chkconfig iscsid on

    Use the fdisk command to show the target:

    fdisk -l


     

    Here the iSCSI target is /dev/sdb. Next use fdisk to partition the device and mkfs to format it.
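
    A minimal sketch, assuming the target appears as /dev/sdb as above and an ext4 filesystem is acceptable (within fdisk, create a single primary partition, which becomes /dev/sdb1):

    sudo fdisk /dev/sdb

    sudo mkfs.ext4 /dev/sdb1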

     

    Test the device by creating a mount point, mounting it, copying some files and then listing the copied files.
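
    For example, assuming the /dev/sdb1 partition created above and an arbitrary mount point of /mnt/iscsi:

    sudo mkdir /mnt/iscsi

    sudo mount /dev/sdb1 /mnt/iscsi

    sudo cp /etc/hosts /mnt/iscsi

    ls -l /mnt/iscsi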


  7. Configuring a NAS Device

    A new physical device of 50 GB has been added to the Open-E host. This device will be used to configure a NAS share. From the browser select the <Configuration Tab> as before and select <Volume Manager> → <Volume Groups>. Select the second disk (Unit S002) and select <New Volume Group> from the Action dialogue.


    Note that there is also the option to add the disk to the existing volume group (vg00), but here a new volume group (vg01) will be created.


     

    Select <Apply> to format the new volume group. After the new volume group has been created it will show up as being “in use”; now select the link to create a logical volume.

     

    Select <new NAS volume> for the action and use 20GB of the capacity. Select <apply>.

    After the volume has been created the link to create a network share can be selected.

    Name the share and select <apply>. After the network share has been created the next step is to configure services for the share. Do this by selecting the link.

  8. Configuring a Windows Share

    This time Windows will be used as the Operating System to access the open-e resource. From the SMB setting box ensure that <Use SMB> is checked and then set the Users access permissions accordingly. Other share settings will not be used in this example.

  9. Accessing the Open-E share from Windows

    Select Run from the menu (this varies according to the Windows version) and enter \\192.168.0.220. The share should now pop up in a file window as shown below. This share can be mapped to a drive letter and used as a file resource.

     

     

    The share now shows up as a regular device under explorer.

  10. Other features

  11. Statistics

    Along with its ease of use, Open-E is also very strong in presenting a wide range of statistics, which can be used to analyze performance bottlenecks and to glean other important information. Select <Status> → <Statistics> and then select the <more details> link.

  12. Load Statistics

    The first section is <Load>, which shows how the load varies over the course of the day. This is of great importance to administrators as it gives them a good feel for what is happening “Behind the Scenes”.

  13. FileSystems Statistics

    The filesystems page shows device read and write accesses over varying time periods.

  14. Misc Statistics

    This screen captures System Uptime Stats.

     

  15. Network Statistics

    This screen shows the number of bits and packets that are sent and received over a given time period.

  16. Memory statistics

    This screen shows memory usage as well as access patterns.

  17. Using the Open-E console

    Many functions are also available directly from the console. Press the <F1> key to bring up a help screen.

    Only a few of the options will be discussed here; the screens that are covered are described in the following sections.

  18. Console Tools Console command

    Press Ctrl-Alt-T to access the console tools menu.

    Press <2> to bring up the Hardware Info menu. Use the arrow keys to navigate through the listing.

  19. Extended Tools console command

    Press Ctrl-Alt-X to access the extended tools menu.

    This is a potentially data destructive tool so a password prompt is required to proceed.

    Warning: it is not recommended to use these options without a thorough understanding of the underlying actions involved.

     


  20. Configure Network console command

    Press Ctrl-Alt-N to access the network settings menu. This screen can be used to change IP addresses and set up static or DHCP configuration.

  21. Hardware Configuration console command

    Press Ctrl-Alt-W to access this screen. This is also a potentially data destructive operation so the options should be used with care.

    Enter the administrator password to proceed.

    Option 7 can be used to run a basic benchmark such as a read performance test.

    Select the <Read test> only.

    Use the arrow keys to navigate and the space bar to select the devices, select <OK>.

  22. Shutdown console command

    Finally use Ctrl-Alt-K to shut down or restart the system.

    As mentioned at the beginning this guide only covers the bare minimum of DSS V7’s capabilities, further investigation of the many enterprise level features is greatly encouraged.

     

     

     


Hadoop 2.2 Single Node Installation on CentOS 6.5


Introduction

This HOWTO covers Hadoop 2.2 installation with CentOS 6.5. My series of tutorials is meant just as that – tutorials. The intent is to allow the user to gain familiarity with the application, and this document should not be construed as any type of best-practices guide to be used in a production environment; as such, performance, reliability and security considerations are compromised. The tutorials are freely available and may be distributed with the proper acknowledgements. Actual screenshots of the commands are used to eliminate any possibility of typographical errors; in addition, long sequences of text are placed in front of the screenshots to facilitate copy and paste. Command text is printed using Courier font. In general the document will only cover the bare minimum of how to get a single-node cluster up and running, with the emphasis on HOW rather than WHY. For more in-depth information the reader should consult the many excellent publications on Hadoop, such as Tom White's Hadoop: The Definitive Guide, 3rd edition and Eric Sammer's Hadoop Operations, along with the Apache Hadoop website.

Please consult www.alan-johnson.net for an online version of this document.

Prerequisites

  • CentOS 6.5 installed

Machine configuration

In this HOWTO a physical machine was used, but for educational purposes VMware Workstation or VirtualBox (https://www.virtualbox.org/) would work just as well. The screenshot below shows acceptable VM settings for VMware.

Note an additional Network Adapter and physical drive have been added. Memory allocation is 2GB which is sufficient for the tutorial.

User configuration

If installing CentOS from scratch then create a user <hadoopuser> at installation time; otherwise the user can be added with the command below. In addition create a group called <hadoopgroup>.

Note the initial configuration is done as user root.
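
The commands from the original screenshot are not reproduced here; a typical sequence (assuming the user and group names above) would be:

useradd hadoopuser

passwd hadoopuser

groupadd hadoopgroup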

Now make hadoopuser a member of hadoopgroup.

usermod -g hadoopgroup hadoopuser

Verify by issuing the id command.

id hadoopuser

The next step is to give hadoopuser access to sudo commands. Do this by executing the visudo command and adding the highlighted line shown below.
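
The highlighted line itself is not reproduced here; a typical sudoers entry granting hadoopuser full sudo rights (an assumption, adjust to your own policy) would be:

hadoopuser    ALL=(ALL)       ALL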

Reboot and now log in as user hadoopuser.

Setting up ssh

Setup ssh for password-less authentication using keys.

ssh-keygen -t rsa -P ''

Next change file ownership and mode.

sudo chown hadoopuser ~/.ssh

sudo chmod 700 ~/.ssh

sudo chmod 600 ~/.ssh/id_rsa

Then append the public key to the file authorized_keys

sudo cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Change permissions.

sudo chmod 600 ~/.ssh/authorized_keys

Edit /etc/ssh/sshd_config

Set PasswordAuthentication to no and allow empty passwords, as shown below in the extract of the file.
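
An extract reflecting those two settings (a minimal sketch; all other directives remain at their defaults) would look like:

PasswordAuthentication no
PermitEmptyPasswords yes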

Verify that login can be accomplished without requiring a password.
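
For example, the following should return a shell on the local node without prompting for a password:

ssh localhost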

Installing and configuring java

It is recommended to install the full OpenJDK package to take advantage of some of the Java tools.

Installing openJDK

yum install java-1.7.0-openjdk*

After the installation verify the java version

java -version

The folder /etc/alternatives contains a link to the Java installation; perform a long listing of the java entry to show where the link points and use that location for JAVA_HOME.

Set the JAVA_HOME environmental variable by editing ~/.bashrc
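
For example, first confirm where the link points and then add a matching line to ~/.bashrc (the exact path is system dependent; the one below matches the hadoop-env.sh entry used later):

ls -l /etc/alternatives/java

export JAVA_HOME=/usr/lib/jvm/jre-1.7.0-openjdk.x86_64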

Installing Hadoop

Downloading Hadoop

From the Hadoop releases page http://hadoop.apache.org/releases.html , download hadoop-2.2.0.tar.gz from one of the mirror sites.

Next untar the file

tar xzvf hadoop-2.2.0.tar.gz

Move the untarred folder

sudo mv hadoop-2.2.0 /usr/local/hadoop

Change the ownership with sudo chown -R hadoopuser:hadoopgroup /usr/local/hadoop

Next create namenode and datanode folders

mkdir -p ~/hadoopspace/hdfs/namenode

mkdir -p ~/hadoopspace/hdfs/datanode

Configuring Hadoop

Next edit ~/.bashrc to set up the environmental variables for Hadoop

# User specific aliases and functions

export HADOOP_INSTALL=/usr/local/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export PATH=$PATH:$HADOOP_INSTALL/sbin
export PATH=$PATH:$HADOOP_INSTALL/bin

Now apply the variables.
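
This can be done by sourcing the file in the current shell:

source ~/.bashrc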

There are a number of configuration files within the Hadoop folder that require editing:

  • mapred-site.xml
  • yarn-site.xml
  • core-site.xml
  • hdfs-site.xml
  • hadoop-env.sh

The files can be found in /usr/local/hadoop/etc/hadoop/. First copy the mapred-site template file over and then edit it.
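
For example (paths as configured above):

cd /usr/local/hadoop/etc/hadoop

cp mapred-site.xml.template mapred-site.xml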

mapred-site.xml

Add the following text between the configuration tags.

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

yarn-site.xml

Add the following text between the configuration tags.

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

core-site.xml

Add the following text between the configuration tags.

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

hdfs-site.xml

Add the following text between the configuration tags.

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

<property>
  <name>dfs.name.dir</name>
  <value>file:///home/hadoopuser/hadoopspace/hdfs/namenode</value>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>file:///home/hadoopuser/hadoopspace/hdfs/datanode</value>
</property>

Note that other locations can be used in HDFS by separating the values with a comma, e.g.

file:///home/hadoopuser/hadoopspace/hdfs/datanode,file:///disk2/Hadoop/datanode, etc.

hadoop-env.sh

Add an entry for JAVA_HOME

export JAVA_HOME=/usr/lib/jvm/jre-1.7.0-openjdk.x86_64/

Next format the namenode.
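
Assuming the PATH settings added to ~/.bashrc above are in effect, the format command is:

hdfs namenode -format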

. . .

Issue the following commands.

start-dfs.sh
start-yarn.sh

Issue the jps command and verify that the following processes are running:
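
For a single-node setup such as this one, the jps listing should include entries similar to the following (the numeric process IDs will differ):

NameNode
DataNode
SecondaryNameNode
ResourceManager
NodeManager
Jps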

At this point Hadoop has been installed and configured.

Testing the installation

A number of test files exist that can be used to benchmark Hadoop. Entering the command below without any arguments will list available tests.
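
For example, running the test jar with no arguments lists the available test programs:

hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar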

The TestDFSIO test below can be used to measure read performance - initially create the files and then read:

hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar TestDFSIO -write -nrFiles 10 -fileSize 100

hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar TestDFSIO -read -nrFiles 10 -fileSize 100

The results are logged in TestDFSIO_results.log which will show throughput rates:

During the test run a message will be printed with a tracking url such as that shown below:

The link can be selected or the address can be pasted into a browser.

Another test is mrbench which is a map/reduce test.

hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar mrbench -maps 100

Finally the test below is used to calculate pi. The first parameter refers to the number of maps and the second is the number of samples for each map.

hadoop jar $HADOOP_INSTALL/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 10 20

. . .

Note accuracy can be improved by increasing the value of the second parameter.

Working from the command line

Invoking a command without any parameters, or with insufficient parameters, will generally print out help text.

hdfs commands

hdfs dfsadmin -help

. . .

hadoop commands

hadoop version

Web Access

The location for checking the NameNode status is http://localhost:50070/. This web page contains status information relating to the cluster.

There are also links for browsing the filesystem.

Logs can also be examined from the NameNode Logs link.

. . .

The secondary namenode can be accessed on port 50090.

Online documentation

Comprehensive documentation can be found at the Apache website, or locally by pointing a browser at $HADOOP_INSTALL/share/doc/Hadoop/index.html.

Feedback, corrections and suggestions are welcome, as are suggestions for further HOWTOs.

Windows 2012 Cluster HOWTO


This HOWTO will describe how to set up a two node Windows 2012 Cluster. In addition a number of other areas are discussed which are pre-requisites to installing the Cluster. These additional steps include:

  1. Setting up a Domain controller
  2. Configuring DHCP and DNS
  3. Installing a shared iSCSI target device

After these steps have been completed the Cluster will be set up.

In this Windows 2012 Cluster HOWTO, VMware workstation was used to create Windows 2012 Virtual Machines for the Cluster implementation. This leads to a few additional steps but has the advantage of allowing the user to gain familiarity prior to deploying a full scale production Cluster.

Configuring Active Directory

Once Windows 2012 has been installed, the next step is to create a domain controller. This can be done by running the Server Manager.

From the Server Manager Dashboard select Manage → Add Roles and Features.

Select the installation type <Role-based or feature-based installation>.

With the correct server highlighted select <Next>

Select <Active Directory Domain Services>, <DHCP> and <DNS> and then select <Next>.

Select <Next> until reaching the screen below:

Note: The recommendations on the Microsoft screen above are based on good practices and should be followed in a real-life scenario. Since the objective of this tutorial is to cover the basics of Cluster installation, only one domain controller will be installed.

Select <Next> and then <Install>.

After the required components have been added select AD DS from the Server Manager screen to continue with the Active Directory configuration.

Select <More …> from the right of the highlighted yellow bar. This gives an option to promote the server to a domain controller.

Configuring a Domain Controller

Select <Promote this server to a domain controller>. This will bring up the next screen:

Select <Add a new forest> and fill in a root domain name; select <Next>. Since this example uses a pure Windows Server 2012 environment, there is no need to change the functional levels. At the next screen enter a Directory Services Restore Mode (DSRM) password and then select <Next>.

Select <Next> at the DNS options screen.

Then accept the NETBIOS domain name.

Note that the NETBIOS name will be truncated in this case.

Select <Next> to accept the default folder locations:

Review the options and select <Next> to proceed.

Select <Install> after verifying that there are no gating prerequisites.

After selecting <Install> the system should restart as a domain controller.

After the system reboots, the login screen will automatically set the login to be a domain account.

Select <Computer> → <Properties> to verify.

Setting up DHCP

Launch Server Manager and, with the DHCP tab highlighted, select <More …> from the yellow bar on the right-hand side.

Select <Complete DHCP configuration>

Select <Next>

Use the administrator credentials and select <Commit>

If the Summary screen shows success then select <Close>

Setting up a DHCP Scope

From the Server Manager screen select <Tools> → <DHCP>.

Expand the tree to show the server name, which in this case is Clusterdomaincontroller, then right-click on IPv4 to bring up a context-sensitive menu and select <New Scope>. This will start the Wizard.

Select <Next> from the Wizard screen and enter a scope name and description. In this Cluster all static IP addresses will use the range 192.168.4.1 through 192.168.4.99. The scope will allocate IP addresses from 192.168.4.100 through 192.168.4.200.

This is specified in the following screen.

No exclusions are needed so select <Next> on the following screen.

Accept the default lease duration.

The next stage is to activate the new scope.

Enter a gateway address and select <Add>

On the next screen accept the defaults. Note scope clients can be added later.

WINS is not required here, so select <Next> on the following screen.

Finally activate the scope.

The new scope is shown in the DHCP window.

Configuring a VMware Virtual Network

Note in this example virtual machines are configured which requires additional setting on VMware workstation. If physical machines are used this step can be omitted.

From the VMware workstation menu select Edit → Virtual Network Editor → Add Network.

After the Virtual Network Editor opens, select VMnet4 from the drop-down box and select <OK>.


Configure the settings as shown in the next screen and then select <DHCP Settings>.


Here IP addresses below 100 can be used for static IPs and the DHCP server will allocate .100 and above.


For each of the machines that will be allocated a DHCP address on the 192.168.4 network, configure their network adapter settings under VMware to use the vmnet4 network.


Here there are two virtual network adapters.


Configuring the client Network Adapters

On the client machines the IPv4 properties are set up as shown in the following figure:


After this start the domain controller and the client machines and verify that they have received the correct IP addresses.


Setting up the client nodes to access the Domain Controller

The next stage is to set up accounts on the domain controller. The two clients are named ClusterNode01 and ClusterNode02.

SID note For cloned VMs

Note: if the virtual machines have been cloned then the SID will be the same. This can be changed by running the sysprep tool from %SystemRoot%\system32\sysprep\sysprep.exe.

Note this can clear settings such as password and network, so reconfiguration may be necessary after running this command. To avoid this it is better not to clone existing systems.

Joining the client nodes to the domain

Select Computer → <Properties> → <Change Settings>.

Then select <Change> and then select the <Domain> radio button and add in the domain name.

Select <OK>. If all is well there should be a prompt to enter the credentials of an account on the domain controller.

After this a welcome message should appear.

Reboot the system to join the domain. The login prompt will look like:

Verify that the node is correctly configured from the Server Manager screen.

To verify that the nodes have been added select <Control Panel> <Administrative Tools> <Active Directory Users and Computers> and then from the next screen double click on computers to see the two newly added nodes.

Configuring iSCSI Targets

Windows 2012 failover Clusters require the shared storage to support SCSI-3 persistent reservations. An iSCSI target implementation is included with Windows 2012 and this will be used for shared storage since it meets these requirements.

Installing the iSCSI Target Server

From the Server Manager screen select <Add Roles and Features>, select <Next> until the Server Roles screen appears, and then under <File and Storage Services> enable <File and iSCSI Services>.

Select <File and iSCSI Services> → <iSCSI Target Server>.

Select <Next> and at the confirmation screen select <Install>.

Creating an iSCSI virtual disk

From Server Manager select <File and Storage Services> → <iSCSI> and then select the link on the right to start the Wizard.

The iSCSI Virtual disk will be generated from an actual physical server disk. In this example a small portion of the C: drive’s capacity will be used to present the virtual iSCSI disk.


Select <Next> and name the device.

Select <Next> and allocate a size.

Select <New iSCSI target> and then <Next>

Name the target and enter a brief description. Select <Next>

The next stage is to add the initiators that will access the disk. At this point no initiators have been configured on the client machines that will access the target but two will be added with the names Clusternode01 and Clusternode02. The actual initiator name is configured in the section – Configuring iSCSI initiators.

Select <Add> and then on the next screen select <Enter a value for the selected type>

In a similar fashion add the second initiator.

Select <Next> and the nodes will be specified as Access servers. Select <Next>. In this example CHAP will not be enabled; select <Next>

Verify the settings and then select <Create > to build the iSCSI target.

The newly created target should show up under the <iscsi> link.

Note: In a real world implementation the target would not be set up like this. It is a single point of failure and also does not include any level of RAID protection. A later HOWTO will describe how to set up a redundant shared storage system.

Configuring iSCSI initiators

The next stage is to configure the initiators on the two Cluster nodes (Clusternode01 and Clusternode02) that will be used to access the iSCSI target. Select the Control Panel → <iSCSI Initiator>. If the iSCSI service is not running there will be an option to start it – select <Yes>.

Under the configuration tab set the initiator name to be the name that was used in setting up the target configuration. Note that if the initiators had been configured prior to the target, the initiator names would have been discovered automatically on the target configuration page.

The next step is to connect to the target – under the <Targets> tab enter the IP address of the target (192.168.4.10) and select <Quick Connect>.

If a successful connection is made it should show up as a discovered target.

Repeat for Clusternode02 using Clusternode02 as the initiator name.

Configuring the Cluster

From Clusternode01 open the Server Manager and then select Manage → <Add Roles and Features>.

Select <Next > until <Features> is highlighted and then check the <Failover Clustering> box.

Select <Add Features> at the next screen. Select <Next> and then <Install>.

Repeat the setup for Clusternode02.

After Failover Clustering has been installed, log in to Clusternode01 using the domain administrator account.

Now from Clusternode01 select <Server Manager> → <Tools> → <Failover Cluster Manager>.

This will start the Failover Cluster Manager Tool.

It is recommended to first validate the configuration. Select <Validate Configuration> and then add the nodes to be validated.

The tests can take a while to run if all tests are selected. Investigate the output for any warning or error messages. If the nodes are correctly set up then a validated message should appear.

Check the <Create the Cluster now> box and then select <Finish>. This will start the Create Cluster Wizard. Enter a Cluster name and select <Next>.

Verify that the settings are correct and select <Next>.

A confirmation screen should be shown.

It is recommended to view the report before selecting <Finish>. In this case a warning has been received about the quorum disk. This can be added later.

Configuring shared Cluster storage

At this point no shared storage has been setup. Open the Disk Management tool and the iSCSI volume should appear, if it is set to offline then set it to online and initialize it. After this it can be added to the Cluster.

Note if this was initialized prior to the Cluster configuration then it could have been automatically configured, but in the interests of education it was left to be added later.

Now from the Failover Cluster Manager select <Add Disk> to add the iSCSI target to the Cluster.

It should now show up as a resource.

Note that the disk is owned by Clusternode02. Ownership can be changed or a number of disks can be load balanced by reassigning the ownership.

Reassigning disk ownership

From the Failover Cluster Manager under Disks select <Move Available Storage>.

Select <Select Node> and then select ClusterNode01

The screen now shows ownership belonging to ClusterNode01.

High Availability Role configuration

The next step is to use the High Availability feature to create a role that will be accessible to the other client nodes. To do this, new virtual hard disks were added to the Domain Controller VM and two new iSCSI volumes were created as described earlier. In addition make sure that the File Server Role has been installed on both Cluster nodes as shown below.

Recall that the iSCSI initiator needs to be refreshed and the disks must then be added as Cluster storage; refer back to the earlier section for details.

From the Failover Cluster Manager screen select <Configure Role>

In this case the HA Role will be that of a File Server.

Select <File Server for general use>.

Give the Role a name.

Select the storage that is to be used for the file server and then select <Next>.

Select <Next>

After the task has been completed the report can be viewed. Investigate any warnings that appear.

Now with <Roles> selected test access by verifying that the Role can be moved over to the other server.

After this select <Add File Share>.

NFS is used for UNIX systems and Microsoft clients tend to use SMB. In this case an SMB share will be configured, highlight <SMB Share – Quick> and select <Next>.

Highlight the volume and select <Next>

Name the share and select <Next>

Accept the default settings and select <Next>

Accept the default permission and select <Next>.

At the confirmation screen select <create>

Close after successful creation. The share can be accessed now from a client by entering the path name in the run dialog.

Files can now be added to the share.

Configuring a Quorum Disk

From the Failover Cluster Manager with the Cluster highlighted select <More Action> on the right hand side of the screen and then select <Configure Cluster Quorum Settings>. Quorum resources are used to ensure Cluster integrity.

This will open the Configure Cluster Quorum Wizard.

Select <Next>.

In this case <Typical Settings> will be used. Recall that there is still an unused iSCSI disk available.

In this case the iSCSI disk was recognized and allocated, and no errors occurred.

Summary

A basic two node Windows 2012 Cluster has been set up using two nodes connected to a third node which functions as a Domain controller. Shared Microsoft iSCSI target devices (which support persistent reservations) were configured and added to the Cluster. This was all done using Virtual Machines, but a real-life deployment could use physical systems. In addition the disk drives would most likely be SAS or Fibre Channel devices configured as external storage. An HA file server resource was added, along with a Quorum resource. The tutorial was written to help those wishing to learn about Windows Clustering to come up to speed rapidly.

The deployment should not be used in anything other than in an educational environment.

Citrix Xenserver Quick Start Guide

Alan Johnson

© Alan Johnson February 2012

Introduction to Xenserver

The purpose of this tutorial is to provide a quick start guide to the basic features of XenServer and XenCenter. The intent is to shorten the learning curve for the casual hobbyist or to provide a basic introduction to administrators that are considering deploying Xenserver within their own organization. Relatively inexpensive hardware was used to generate the screenshots; the servers were sub $500 commodity systems with i3 processors and 6GB of RAM. Netgear ReadyNAS systems were used to provide iSCSI based storage. Xenserver is freely available, although to use some of the advanced features it may be necessary to upgrade to one of the fee based solutions. At the time of writing (2012) the following variants are available:

Installation

Obtain the free Xenserver product from

http://www.citrix.com/lang/English/lp/lp_1688615.asp

Register and download, then burn the ISO image and boot a dedicated server from it. This will be the location of the bare metal hypervisor.

Note the term bare-metal is used since the hypervisor interacts directly with the hardware of the server.

In this configuration the server name is simply “Xenserver”. Follow the prompts and after installation a screen similar to that below should be shown on the server.

Post Installation

After installation the XenServer console will show its IP address under the Status Display option as shown in Figure 1.

Figure 1 Showing XenServer’s IP address

Installing XenCenter

XenCenter is the management tool for configuring XenServer. It is recommended to use this tool wherever possible. Open a browser on a regular Windows system and enter the IP address into the URL bar as shown in Figure 2. This will give two choices for the installation of XenCenter. It is possible to make an image for later installation or to run the installer directly from the browser. In this case the direct installation method will be chosen. Highlight the <XenCenter installer> link and select it; depending on the browser used there may be a prompt similar to that shown in Figure 3. Save the file (if necessary) and then run it.

Figure 2 Installing XenCenter

Select <Run> to execute the installer.

Note: XenCenter runs under Windows Operating Systems

Figure 3 XenCenter installation prompt

At the prompt select <Next>, then select <Next> and enter a file location.

Starting and Configuring XenCenter

Adding a server

From the Start Menu select <Citrix XenCenter>. The first task is to add a server. This is done by opening the <Home> Tab and selecting <Add a server> by clicking on the icon. Enter the information using the server’s IP address that was determined earlier from the console screen and use the password that you supplied during the server installation process. Then select <Add>.

Figure 4 Adding a server to the console

Figure 5 Connecting to the console

License Activation

The license manager screen will now open and the free version can be activated by highlighting the server and then from the <Activate Free XenServer> prompt select <Request Activation Key> which will open up the Citrix website where you can enter the appropriate information.

Note: this activity can be delayed if desired and activated later from the <Tools> menu of the XenCenter console.

Figure 6 Activating the license

Enter the mandatory information and select <Submit>.

A license file will be sent via email. The next step is to select the <Activate Free XenServer> again and this time select <Apply Activation Key>. A prompt will be issued for the file’s location. Select the file and then select <Open>. The license should now be activated and an expiry date will be shown in the License Manager screen. Close the License Manager.

Figure 7 Applying for the Free XenServer license.

Figure 8 Applying the Activation key

Using the XenCenter Console

Figure 9 Navigating the XenCenter console

The XenCenter console will show a hierarchy corresponding to the server and its resources. Selecting each device in turn will show information related to the resource. In the figure above, statistics about the server are shown relating to its memory and CPU usage etc.

Selecting local storage will show the local resources and the file system type along with capacity and other storage related parameters. Multipathing is disabled since SATA devices do not have this capability.

The tabs in the right hand pane will vary according to context – for example with the uppermost object <XenCenter> selected, four tabs (Home, Search, Tags and Logs) are presented.

Going back and selecting the xenserver object shows a wide range of tabs; the first tab (Search) has already been discussed. The second tab (General) gives more in-depth information relating to the selected server, such as its uptime, iSCSI iqn name, memory size, version number and license details.

Figure 10 XenServer’s General properties.

Items can be expanded or collapsed by selecting the arrow to the right of each category.

The next tab <Memory> is not used in the free version but there is an upgrade available which will allow for the use of dynamic memory.

Figure 11 XenServer’s Memory Tab

The <Learn More> function will open a web location which explains the benefits of dynamic memory. In a commercial environment it is recommended that dynamic memory should be used. The Citrix website states that Dynamic memory control can:

Reduce costs, improve application performance and protection, and increase VM density per host server. Also known as memory ballooning, dynamic memory control allows host memory to be allocated dynamically across running VMs.

Allow new VMs to start on existing pool resources by automatically compressing the memory footprint of existing workloads

Reduce hardware costs and improve application performance by sharing unused server memory between VMs, allowing for a greater number of VMs per server host

Maintain VM and host performance and stability by never oversubscribing host memory, and by never swapping memory to slower disk-based systems

Adding Storage

The Storage Tab will allow us to create a Storage Repository which is where the Virtual Machines (VMs) will be located. In the steps following an iSCSI target will be added. The target in question is a Netgear ReadyNAS device and has been configured with IP addresses of 192.168.1.72 and 192.168.1.96. The relevant section is shown below in Figure 12.

Figure 12 Netgear iSCSI target configuration.

Both of the LUNs assigned to target xen will be used.

With the Storage tab selected, select <New SR>

Figure 13 Creating a new Storage Repository

This first example uses an iSCSI target so select <Software iSCSI> and select <Next>.

Figure 14 Using iSCSI as a new Storage Repository

Enter a Name and check or uncheck Auto-generate description according to your preferences. Select <Next>.

Figure 15 Naming the storage repository

Add the IP address of the iSCSI target; for security CHAP can be selected. Note that CHAP stands for Challenge Handshake Authentication Protocol and can be used to provide an extra level of security by using a password (a secret). In this example CHAP will not be used. The <Discover IQNs> button can be used to find available IQNs. It is recommended to use this function as IQN names are lengthy to type in.

Note: 3260 is the default port number for iSCSI.

After discovery a number of IQNs may be found, select the target that corresponds to the correct device and then after the IQN has been selected it will now be possible to send a <Discover LUNs> command. The target LUNs discovered should be LUN0 and LUN1 corresponding to the LUNs shown in Figure 12. Initially LUN0 will be configured.

Figure 16 Entering iSCSI target parameters

Select <Finish>.

The next step is to prepare the storage repository (SR). A prompt will be presented which will format the LUN.

Figure 17 Creating a new virtual disk from the iSCSI target.

Respond <Yes> and the new SR will be created. The Storage Tab will now show the newly created virtual disk. Repeat the process for the second LUN (LUN1).

Figure 18 Adding a second LUN as a new storage repository

Figure 19 Newly created virtual disk.

Networking Tab

The networking tab will show the current network. The IP address is shown along with other parameters. This networking task will be to add a new network. There are a number of choices here. The first choice <External Network> will be used for regular network traffic. It is also possible to bond multiple networks together for higher performance. To create an External Network select the Networking Tab and then select <Add Network>.

Figure 20 Adding a new external network

Select <External Network> and select <Next>.

Figure 21 Selecting the network type.

Name the network and type in a description. Select <Next>.

Figure 22 Naming the new network

Select a VLAN number and also increase the MTU if desired. In general it is recommended to use large frames (jumbo frames) for data throughput applications.

Figure 23 Setting the new external network parameters.

The new network looks like:

Figure 24 Showing the new external network

NIC Tab

This tab is mainly read only but it does allow multiple networks to be connected together to provide higher throughput and greater resiliency. Active-Active and Active-Passive modes are supported. After bonding a new NIC will appear which is termed the bond master, any other NICs that are part of the bond are termed NIC slaves.

Figure 25 NIC Bonding

Console Tab

The console tab displays the Virtual Machine’s window. If no Virtual Machines are running the console of the XenServer can be shown which presents a command line prompt. The command <xsconsole> will show a console similar to the screen output of the local XenServer.

Figure 26 XenServer console “help command” output

Figure 27 XenCenter – Output of xsconsole command

Performance Tab

This tab shows statistics related to CPU, Memory and NIC throughput.

Figure 28 Viewing Performance Graphs

The type of data that is displayed in the graphs can be modified by using the <Configure graphs> button. There are two tabs – Data Sources and Layout. Select the data sources from the <Data Sources> tab and then select and position them using the <layout> tab.

Figure 29 Selecting and arranging the performance graphs

Users Tab

This tab is used to configure Active Directory and Domain services.

Logs Tab

This tab shows the log entries and allows them to be filtered according to severity levels

  • Errors
  • Alerts
  • Actions
  • Information

The log can be cleared from this screen.

Creating a XenServer Virtual Machine

From the main menu at the top of XenCenter select <VM> → <New VM>.

Figure 30 Creating a Virtual Machine

A number of templates exist which can be selected to configure the system automatically for optimal performance. If a template does not exist for the operating system that is to be installed, then select the <Other install media> template and configure manually.

In the example following a later version of Ubuntu Linux will be used to generate a virtual machine and since this version is not available as a template the <Other install media> template will be selected.

Figure 31 Selecting a VM template

Select <Next>, name the VM and select <Next> again.

Figure 32 Naming the VM

The next step is to select the installation media. Xen has the ability to install a VM from a library of ISO files which contain the images, from a local DVD or from the network. In this example a DVD on the server itself (xenserver) will be used.

Figure 33 Selecting the VM installation source

The VM can be associated with a home server which means that this server will always be the one to start the VM (provided that it is available). If multiple servers are available then there will be an option to allow the VM to be started on any server that has access to the resource. In this instance the VM will be placed on server “xenserver”.

Figure 34 Choosing the VM’s home server.

Next select the quantity of Virtual CPUs and the memory that is to be allocated.

Figure 35 Allocating CPU and memory resources to the VM

The next step is to select the virtual disk to be used.

Note: A pre-defined template will normally select the virtual disk automatically.

Since a pre-defined template was not used, the virtual disk will be added and configured manually.

Figure 36 Selecting a virtual disk for the VM

Select <Add> and specify the volume and the capacity to be used.

Figure 37 Adding and configuring the virtual disk.

The new disk is now shown in the dialog below:

In the next stage a network for the VM will be configured.

Figure 38 Configuring the VM’s network

If required the Properties can be modified as shown below. In this example a Quality of Service (QoS) limit was added of 2MB/s.

Figure 39 Modifying the VM’s properties.

Verify the setting presented and select <Finish> to create and start the new VM.

Figure 40 Completing the VM creation.

A status will be shown at the bottom of the XenCenter screen showing the VM creation progress. At this point the Console on “xenserver” will show the new VM or it can be displayed using the console tab from XenCenter.

Figure 41 Console showing the newly added Ubuntu VM

In the main tree select the newly added <Ubuntu 11.10 VM> and then select the <Console tab>. This will now display the state of the VM.

Note: this console can be undocked from XenCenter and used as an independent window.

Figure 42 Interacting with the VM through the console tab.

Continue with the installation process through the console window until Ubuntu has completed installation.

Note that as far as Ubuntu is concerned it will use the 20GB disk that we allotted to it earlier. At this point Ubuntu is communicating with virtual hardware provided by the Xen server. During the installation it may be instructive to look at the performance (via the performance tab) to see how much of the resources are being consumed.

Starting the new VM

From the main XenCenter console select the new VM (Ubuntu 11.10) and then click on <Start>.

Figure 43 Starting the new VM

Select the <console tab> to interact with the VM. An important part of the configuration is the installation of XenServer tools.

Installing XenServer tools

With the VM selected, open the <General tab> to see if the tools are installed, if not click on the link to begin installation.

Figure 44 Installing XenServer tools

This will mount a device within the VM which can be interacted with.

Figure 45 Accessing Xenserver tools from the VM

Open the Linux folder and execute (or edit if necessary) the install.sh file.

Figure 46 Running XenServer tools from Linux

Review of terminology

At this point it is useful to review some of the components that have been encountered so far.

  • XenServer – contains the Citrix XenServer operating system (hypervisor)
  • XenCenter – console for configuring and interacting with the hypervisor.
  • Storage Repository – normally a shared storage resource that is available to multiple servers
  • Virtual disk – a slice of a storage resource that is used to house Virtual Machines and data.

Advanced Features of XenServer

Pools

Citrix uses Pools to provide a centralized view of servers and their associated storage as a single manageable entity. In general the hardware should be similar (it is not possible to mix AMD and Intel CPUs, or CPUs with different features), but there is an add-on available to allow participation between heterogeneous servers which can mask some of the differences. Servers should be running the same version and patch revision. One of the servers is termed the Pool Master. If the pool master fails then another node can be designated as the new pool master.

Note: The free version does not allow this to happen automatically, the HA enabled version will though.

Pools are normally created with shared storage configurations.

Pool Creation

From XenCenter add in the servers that will form the pool:

Figure 47 Selecting servers for pool creation

Add in both servers

To create a pool there should be a minimum of two servers. In this example the two servers are XenAcer and xenserver-Acer2.

To form a new pool – select <New Pool>.

Figure 48 Creating a new pool

Referring to Figure 49, name the pool (AcerServerPool in this instance) and use the drop-down box to select the master server within the pool. Select additional members of the pool from the checkboxes. Here xenserver-Acer2 will be used as the second member.

Figure 49 Naming the pool, selecting the master and adding an additional member

Select <Create Pool>.

Notice how the view has changed. The servers are now part of the Pool tree and selecting the pool shows each server in the right hand pane along with their statistics.

Figure 50 Viewing the newly created pool

Setting up NFS shared storage

The next stage is to set up storage that can be shared by the nodes in the pool. Select the pool and then with the storage tab selected, click on <New SR>.

Figure 51 Creating shared storage

The shared storage that will be used will be of type NFS.

Note that a read/write NFS share has already been set up on the Netgear storage device as shown in Figure 52.

Figure 52 NFS read Write share on shared storage

Select NFS as the storage type and choose <Next>.

Name the Storage Repository.

Select <Next> and then enter the path followed by <Scan>.

Figure 53 Scanning the NFS path

Note: If the scan fails, verify that the network location has been correctly specified, that it is accessible and that there are no typographical errors in the entry.

Figure 54 Completing the addition of the NFS share operation

Select <Finish>. The new SR will be created.

Note that the NFS storage is at the same peer level as the two members of the pool. Also, selecting either server individually should show that the NFS storage is available to both nodes.

Figure 55 Viewing the newly added NFS storage

Creating a VM on shared storage

Select the pool and then select <New VM>.

Figure 56 Creating a new VM on the NFS shared storage

In this case a 32 bit version of Windows 7 will be installed. Select the correct template from the options provided and then select <Next>.

Figure 57 Selecting the Windows 7 32 bit template.

Name the VM and select <Next>.

Figure 58 Naming the new VM

The next stage is to select the installation media.

In this example a local DVD from one of the servers could be used or it could be from an ISO library of images that is network accessible. In the interests of slightly more complexity the ISO library option will be used. A CIFS share has already been setup on a different server outside of the pool and is set up to allow read only access. To select an ISO library select the link <New ISO library> and then select <Windows File Sharing> on the new screen that opens up (Figure 59).

Figure 59 Configuring a CIFS ISO library location

Select <Next> and name the library.

Figure 60 Naming the ISO library

Figure 61 Specifying the CIFS sharename for the library

Select <Finish>. The screen will now have a new option for Installation media corresponding to the ISO library that has just been set up. The drop down box will show the possible installation locations along with the ISO images that are available for installation. Here the Windows 7 Home Basic version has been selected and will be installed.

Figure 62 Selecting an ISO image from the new library.

Select <Next>.

Figure 63 Selecting a home server for the VM

Home Server

A Home server is a server that is closely associated with the VM; if a home server is nominated the system will attempt to run the VM from the home server (provided that it is available). The recommended default option is not to assign a home server, which means that the VM can run on either of the two servers that are part of the current pool.

The next stage is to allocate memory and a number of Virtual CPUs. Here 2 vCPUs will be allocated along with 2GB of memory.

Figure 64 Allocating memory and vCPUs to a VM

The Storage step selects the location of the VM.

Note: under the Shared flag it shows that NFS storage is True (Shared).

Figure 65 Selecting the VM location on NFS shared storage.

Select <Next>. The next stage will configure the networking portion of the new VM. In this case a second virtual NIC will be added.

Adding a second NIC to the VM

Note: Up to 4 virtual NICs can be added from this screen.

Figure 66 Adding a second Virtual Network Interface Card

The new screen will now show two NICs which can be configured independently.

Figure 67 New VM with two NICs

Select <Finish> to create the new VM.

The new VM will show up in the pool’s inventory.

Figure 68 Viewing the new VM within the pool

Starting the shared VM

Select the VM and then select the <console tab> and interact with the VM to install Windows in the same manner as a physical installation.

Figure 69 Installing Windows on the Virtual Machine.

VM installation and completion

After installation install Xenserver Tools. Once the tools installation has been completed open Device Manager and view the virtual hardware which includes the two virtual NICs that were installed earlier.

Other Advanced Features

The paid version of XenServer provides a number of other features such as:

High Availability

With HA, a Virtual Machine can be automatically restarted if it fails, either on the same server or on another server within the resource pool.

Dynamic Workload Balancing

Automatically balances new loads to ensure optimum performance across the pool and works in tandem with Xenmotion.

More Information

For more information on these and other features refer to http://www.citrix.com

FreeNAS HOWTO


FreeNAS HOWTO Alan Johnson August 2012

Configuration details

This installation was performed using VMware workstation, however the principles are similar for a physical server. Using a Virtual Machine allows the rapid creation of virtual components such as disks and NICs and is therefore well suited to a tutorial.

Obtaining FreeNAS

FreeNAS is an excellent implementation of a robust file sharing system. It is available from iXsystems or directly from www.freeNAS.org

In this tutorial Version 8.2 was used

Select the released version

Here the 64 bit version is used

Download the ISO image

Installation

Burn the ISO image to a CD and boot from the CD. The installer should bring you to the following screen.

Select a location for the FreeNAS installation and select <OK>.

Select a fresh installation

Proceed by selecting <Yes>

After installation remove the CD and boot the newly installed FreeNAS Operating System

The system should reboot and display the menu below:

Accept the default option and proceed to the Console setup screen:

GUI Access

The console presents minimal configuration options. At this stage it should be possible to log in from a browser enabled client. Point the browser to the IP address displayed (192.168.48.133).

Changing the Password

The first step is to set up an admin password, select the <Account Tab> then under <Admin Account> select <Change Password>.

Creating a Volume

The machine has two SCSI-emulated disks of 10 GB capacity each, and they will be used to create a volume on which to house the shared NAS files. Under <Storage> select <Volumes> → <View Disks>.

Edit the parameters (since these are actually virtual disks there is no benefit in enabling power management). Give the disk a descriptive name and select <OK>. Repeat for the second disk.

Creating a ZFS Mirror

From the <Volume> branch select <Volume Manager>. In this case the two disks will be set up using the ZFS file system. Select both disks, then select the Filesystem type as <ZFS> and the Group type as <Mirror>.

Select <Add Volume>.

Viewing the newly created volume

The volume is now shown as <Active>.

Creating a CIFS Share

The volume will now be set up for shared access. From the <Sharing> branch select <Windows (CIFS) Shares> → <Add Windows (CIFS) Share>.

The path must be specified in the dialogue and this is obtained from the previous screen (View Volumes) which shows the mountpoint. Enable the service at the prompt.

Accessing the share from a Windows Client

After this the share should be accessible to CIFS clients.

Setting Share Permissions

Now that the share is accessible, the permissions can be set to allow write access. This is done by going to <Volumes> <View Volumes> and selecting the <Change Permissions> icon.

Note: Understanding permissions is key to file sharing. UNIX and Windows behave differently in how permissions are administered, and separate volumes should be created for each type of file share. FreeNAS supports Apple, NFS and Windows shares.

The <Change Permissions> dialogue shows a matrix grid allowing a Windows or UNIX ACL to be set up for different groups and users. The two ACL types are mutually exclusive, which is why a radio button control is presented. After setting the permissions, check that files on the share can be read and written from a Windows client.

Extending a Volume

If new disks are added to FreeNAS, then their capacity can be used to extend a volume – here a third 10GB disk has been added.

This disk is not part of a volume and its capacity is available. To add the disk to an existing volume, select the volume name (here ZFSMirror) from the drop-down box, highlight disk da2 and then select <Add Volume>.

Note that the example above has added a non-redundant disk onto a redundant pair (not good practice, but acceptable here to show how a volume is extended). The capacity has now doubled: the mirrored pair contributes the capacity of a single disk and the third, non-redundant disk contributes the capacity of another.

Removing a volume

Volumes can be removed by selecting the <Detach Volume> icon.

Recall that the volume has been previously set up as a CIFS share. The correct way to remove the volume is to unwind the actions by removing the CIFS share and then removing the volume.

If the user inadvertently forgets this step, FreeNAS is aware that the volume is part of a share and will issue a warning accordingly. So to delete a volume the “lazy” way, select <Detach> as shown above and then acknowledge the warning. Here the <Delete all shares> option has been checked. In addition, the data-destructive option is checked in order to put the disks into a clean state. Select <Yes> to destroy the volume.

The Volume Manager now shows that no volumes exist.

Improving Fault Tolerance

To further demonstrate the capabilities of ZFS, a four-drive RAID-Z volume will be created. This option was not available earlier since there were only two drives, and two drives can only be used in a striped or mirrored configuration. Select the Volume Manager as before, highlight all drives and check <ZFS> and <RAID-Z>.

The new volume was named RAIDZVolume and now shows up with a capacity of 23.4 GiB. RAID-Z makes better use of redundancy in that more capacity is available and it can tolerate a failure of any single disk.

Note that if RAID-Z2 is used, fault tolerance is improved at the expense of capacity. As a general rule of thumb, single-disk fault tolerance should be adequate for up to 5 disks; beyond this, consider multi-drive fault tolerance. In addition, large-capacity disks can take a long time to rebuild, and while a rebuild is in progress a single-parity RAID-Z or mirrored volume is no longer protected against a further failure.

The Volume Manager now shows RAIDZVolume available at the path /mnt/RAIDZVolume.

Creating an iSCSI Target

NAS is a file-level protocol; however, FreeNAS also supports block-oriented protocols such as iSCSI. To start, first select <Services> <iSCSI> <Portals> <Add Portal>. From the drop-down select 0.0.0.0 and use the standard iSCSI port of 3260.

The next stage is to set up an initiator. The SCSI protocol uses the concept of Initiator/Target where the initiator will typically initiate a data transfer to or from a target. This could be a read or write operation. Initially all initiators will be allowed. Select <iSCSI> <Initiators> <Add Initiator>.

The next part is to add an Extent; do this by selecting <iSCSI> <Extents> <Add Extent>.

To create an iSCSI target, select <Services> <iSCSI> <Targets> <Add Target>. Specify the size and name the target.

The next stage is to associate the extent that we created with the newly created target. With the iSCSI target selected, choose <Associated Targets>.

Select <Add Extent to Target> and then select the target and the extent from the drop down box. Select <OK>

The final stage is to enable the iSCSI service – do this by selecting <Services> and then toggling the iSCSI service to ON.

Setting up the iSCSI initiator

In this example the Microsoft iSCSI initiator will be used to connect to the target. Enter the IP address of the FreeNAS server into the Target field in the iSCSI initiator dialogue and then select <Quick Connect>.

When the target appears in the <Discovered Targets> box, select <Connect> to take it out of the <Inactive> state.

Since iSCSI is a block protocol, the disk will require formatting and partitioning just like a regular disk.

After this operation the iSCSI target is now ready for use.

Further information

www.ixsystems.com

www.freenas.org

Queuing Theory and Data Storage Systems


Alan Johnson

Queuing Theory is closely related to statistics in that several of its elements, such as the arrival frequency of requests, involve uncertainty and must be treated probabilistically. In disk storage terms the customer is the I/O request and the disk is the server. Queuing Theory is a particularly complex area where other factors such as random arrival times and service priorities need to be considered. There will always be a tradeoff between the number of servers and the cost of implementing those servers. The designer must balance the wait time for customers against idle time on the servers. Servers should tend towards 100% utilization, but this should not result in customers waiting excessively long times for service.

Servicing disk I/O requests is an area where queuing theory can be readily applied. I/O requests form a queue that must be serviced by the storage sub-system. When requests are not serviced fast enough, subsequent requests have to wait for service. Clearly, the goal is to reduce wait times to as small a figure as is practically possible. The factors that determine how individual requests are serviced are:

  1. The number of servers
  2. The queuing time
  3. The service time
  4. Distribution (by time) of requests

Queuing Theory models

The manner in which customers are served is termed the queue discipline. A common discipline is the First Come First Served (FCFS) discipline. Queuing systems can be as simple as Single Server/Single Queue as shown in Figure 1, Multiple Server/Multiple Queue shown in Figure 2 or Multiple Server/Single Queue shown in Figure 3.

The latter is being adopted by many banks and airlines and gives better utilization and load balancing.


Figure 1 Single Server/Single Queue

Figure 2 Multiple Server/Multiple Queue

Figure 3 Multiple Server/Single Queue

Most real life examples deal with a random arrival of requests. This is modeled as a Poisson Probability Distribution. With this distribution the probability of x arrivals within a given time frame is calculated as P(x) = λ^x e^(−λ) / x! (Eq. 1) where x = 0, 1, 2, ..., n and λ is the mean number of arrivals in that time frame. For the case of P(0) the equation is simply P(0) = e^(−λ).
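For readers who prefer to check the numbers, the fragment below is a minimal sketch (not part of the original worked material) that evaluates Eq. 1 directly; the arrival rate of 2 used in main() is purely illustrative.

#include <math.h>
#include <stdio.h>

/* Poisson probability of x arrivals in one interval (Eq. 1):
   P(x) = lambda^x * e^(-lambda) / x!  */
static double poisson_p(double lambda, unsigned int x)
{
    double p = exp(-lambda);
    unsigned int i;

    for (i = 1; i <= x; i++)
        p *= lambda / i;      /* builds lambda^x / x! one factor at a time */
    return p;
}

int main(void)
{
    double lambda = 2.0;      /* assumed mean arrival rate, for illustration only */
    unsigned int x;

    for (x = 0; x <= 5; x++)
        printf("P(%u) = %.4f\n", x, poisson_p(lambda, x));
    return 0;
}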

Service times are defined as the time from when the request has been placed until the time that it has been satisfied. An Exponential Probability Distribution provides a good approximation for I/O storage systems running transaction-processing applications.

Queuing Theory variables

Important parameters used within single server queuing systems are the mean arrival rate (λ) and the mean service rate (μ). It is important that the service rate is greater than the arrival rate, otherwise the queue will keep growing without limit. The average number of customers in the queue is Lq and is described by the equation Lq = λ² / (μ(μ − λ)) (Eq. 2).

The average number of customers in the system is described by L = Lq + λ/μ (Eq. 3).

The length of time spent in the queue is given by Wq and is described by the equation Wq = Lq / λ (Eq. 4).

The average wait time in the system is given by W = Wq + 1/μ (Eq. 5).

A summary is given in Table 1.

Table 1 Queuing Theory variables for Single queue single server systems.

Mean arrival rate: λ
Mean service rate: μ
Probability of x arrivals within a given period: P(x) = λ^x e^(−λ) / x!
Average number of customers in the queue: Lq = λ² / (μ(μ − λ))
Average number of customers in the system: L = Lq + λ/μ
Average length of time spent in the queue: Wq = Lq / λ
Average wait time in the system: W = Wq + 1/μ

To take an example, consider a database application which generates, on average, I/O read requests (queries) at a rate of 50 per second (λ = 50), sent to an I/O storage subsystem capable of servicing 75 requests per second (μ = 75).

  • The average queue length of I/O requests is 1.33.

  • The average number of I/O requests outstanding would be 2.0.

  • The average time waiting in the queue would be Wq = 1.33/50 = 0.0267 seconds ≈ 27 milliseconds.

  • The average wait time in the system is W = 0.0267 + 1/75 = 0.04 seconds = 40 milliseconds.

With this application the I/O sub-system can cope but it is showing signs of strain. Firstly, a wait time of 40 milliseconds is long, and if more users are added to the system, generating an extra ten requests per second, the system will demonstrate significant delays. Figure 4 shows how the queue builds up as the arrival rate increases. Clearly, the System Administrator needs to address this issue quite soon.
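The arithmetic above is easy to verify programmatically. The following sketch, which is an illustration rather than part of the original material, evaluates Eq. 2 to Eq. 5 for the values used in this example (λ = 50, μ = 75); calling mm1() with other arrival rates shows how the queue grows as λ approaches μ, which is what Figure 4 plots.

#include <stdio.h>

/* M/M/1 metrics from Eq. 2 - Eq. 5 */
static void mm1(double lambda, double mu)
{
    double Lq = (lambda * lambda) / (mu * (mu - lambda)); /* Eq. 2 */
    double L  = Lq + lambda / mu;                         /* Eq. 3 */
    double Wq = Lq / lambda;                              /* Eq. 4 */
    double W  = Wq + 1.0 / mu;                            /* Eq. 5 */

    printf("lambda=%.0f mu=%.0f: Lq=%.2f L=%.2f Wq=%.1f ms W=%.1f ms\n",
           lambda, mu, Lq, L, Wq * 1000.0, W * 1000.0);
}

int main(void)
{
    mm1(50.0, 75.0);   /* the database example: Lq=1.33, L=2.0, Wq~27 ms, W=40 ms */
    return 0;
}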

Now, if the administrator substituted a more powerful storage system with the capability of servicing 150 requests per second (μ = 150), then evaluated at an arrival rate of 75 requests per second we obtain values of:

  • Lq = 0.5
  • L = 1
  • Wq = 6.6 milliseconds
  • W = 13.3 milliseconds

Figure 4 Queue length with μ = 75

Plotting these values again with μ =150, Figure 5 shows that the system can easily service 100 requests per second.


Figure 5 Queue length with μ = 150

However, providing a faster server is only one way of addressing the issue; an alternative is to implement multiple servers similar to those shown in Figure 2 and Figure 3. The section “Queuing Theory – practical applications” below discusses multiple servers.

I/O Patterns

Typically within a computer system, applications exhibit either high-bandwidth/low-I/O-request-rate or low-bandwidth/high-I/O-request-rate characteristics.

An application with a low request rate is normally a sequential application involving large data transfers. In this situation, the arrival rate λ will be quite low. Typical applications that fall into this category are imaging and video.

Transaction processing applications exhibit a high frequency of requests for small quantities of data, hence λ will be high. Queuing Theory models allow us to predict how such applications perform.

Queuing Theory – practical applications

A RAID controller provides a good example where queuing theory is applicable. The RAID controller is typically connected to multiple disk channels and one or more host channels. To analyze the system it is convenient to think of it as two separate entities: one side is from the controller to the host and the other side is from the controller to the drive channels. In the case of read requests, from the host’s perspective the controller is the server and the Host Bus Adapter (HBA) is the client; of course, multiple adapters may be present on the host side, but over a single interface this simply translates into a busier queue. As far as the RAID controller is concerned, the drive channels serve the data and the RAID controller’s requests are the customers.

Figure 6 RAID System with multiple drive interfaces

Multiple drive interfaces versus single host interface

Providing multiple drive channels allows better utilization of the RAID controller. Of course, each channel will have multiple disks attached, but here we will treat each channel as being the lowest level. In a host read situation, the controller (customer) provides a stream of I/O requests which are directed to multiple channels (servers). In this instance, the model is similar to that shown in Figure 3. A simplifying assumption is made that the channels are similar in terms of capability.

Relevant equations for the multiple server model are shown in Table 2.

Table 2 Queuing Theory variables for Single queue multiple server systems.

Mean arrival rate: λ
Mean service rate: μ
Number of channels: k
Probability that there are no customers in the system: P(0) = [ Σ (n = 0 to k−1) (λ/μ)^n / n!  +  (λ/μ)^k / ( k! (1 − λ/(kμ)) ) ]^(−1) (Eq. 6)
Average number of customers in the queue: Lq = [ (λ/μ)^k λμ / ( (k − 1)! (kμ − λ)² ) ] P(0) (Eq. 7)
Average number of customers in the system: L = Lq + λ/μ
Average length of time spent in the queue: Wq = Lq / λ
Average wait time in the system: W = Wq + 1/μ

To take a specific example, assume that there is an arrival rate of 100 I/O requests per second and that this is serviced by a single channel with a service rate of 105. Using Eq. 2: Lq = 100² / (105 × 5) = 10000/525 ≈ 19.

The average wait time in the queue is Wq = 19/100 = 190 milliseconds. This is an area of concern for an administrator. Spreading the load over multiple channels will ease the situation, but if there is a cost tradeoff then it is necessary to see how performance improves as channels are added. One strategy is to apply the model, adding channels until the performance is within acceptable limits.

First, applying the same system with two service channels, each capable of servicing 105 requests per second, and using Eq. 7:

From Eq. 7:  Lq = (9524/12100) P(0) = 0.787 P(0)

From Eq. 6:  P(0) = 0.3548

Lq = (0.787)(0.3548) = 0.279

Wq = 0.279/100 = 0.0028 seconds

By adding a second channel the queue length has been reduced from 19 to less than one and the wait time has been reduced from 190 milliseconds to 2.8 milliseconds.

To take another example, the strategy employed is to replace the server with one that is twice as fast, i.e. μ = 210.

Using Eq. 2 (from the single server model), Lq = 100² / ((210)(110)) = 0.433. The wait time is now 4.3 milliseconds. In this example, a significant improvement in performance is obtained either by using multiple channels or by increasing the performance of the server. The equations used in queuing theory are non-linear and can exhibit counter-intuitive results.
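As a cross-check on the two-channel figures above, Eq. 6 and Eq. 7 can also be evaluated in code. The sketch below is illustrative only; with k = 2, λ = 100 and μ = 105 it reproduces P(0) = 0.3548, Lq = 0.279 and Wq ≈ 2.8 milliseconds.

#include <math.h>
#include <stdio.h>

/* M/M/c (single queue, k identical channels), using Eq. 6 and Eq. 7 */
static void mmc(double lambda, double mu, int k)
{
    double r = lambda / mu;
    double sum = 0.0, term = 1.0;   /* term holds r^n / n! */
    int n;

    for (n = 0; n < k; n++) {       /* sum for n = 0 .. k-1 */
        sum += term;
        term *= r / (n + 1);
    }
    /* after the loop, term = r^k / k! */
    double P0 = 1.0 / (sum + term / (1.0 - lambda / (k * mu)));    /* Eq. 6 */

    double Lq = (pow(r, k) * lambda * mu) /
                (tgamma(k) * pow(k * mu - lambda, 2.0)) * P0;      /* Eq. 7, (k-1)! = tgamma(k) */
    double Wq = Lq / lambda;

    printf("k=%d: P0=%.4f Lq=%.3f Wq=%.1f ms\n", k, P0, Lq, Wq * 1000.0);
}

int main(void)
{
    mmc(100.0, 105.0, 2);   /* reproduces P0=0.3548, Lq=0.279, Wq=2.8 ms */
    return 0;
}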

The single server model described in this chapter is known as M/M/1 and the multiple server model is known as M/M/c. Even though only these two queuing models are looked at with some rather simple assumptions, they are reasonably representative of a typical random access I/O pattern. Examples that are more complex would involve priorities and finite queues.

SCSI Enclosure Services (SES)


Functional Redundancy

The original Berkeley paper by Patterson, Gibson and Katz described a method for implementing fault tolerance at the device level by utilizing various redundancy schemes. Through the efforts of the industry and the RAID Advisory Board (RAB), this redundancy has been extended into device enclosures by applying similar redundant schemes to critical components such as Cooling Modules and Power Supplies; this is known as Functional Redundancy. The SCSI-3 Enclosure Services Command Set (SES) offers a way of sensing device status and, with suitable software, a series of alert mechanisms can be sent to administrators. RAID set events and status can also be monitored; an example is a device participating in a rebuild operation.

Enclosure Components

Before discussing SES in detail, it is necessary to describe the components that make up an enclosure device. Figure 1 shows a device enclosure with N+1 cooling, N+1 power supplies and dual A/C inputs. Each type of device has various attributes. A simple cooling device would have two states – functional and non-functional. A more advanced cooling device would have variable fan speeds and could respond to different temperature thresholds by increasing the motor speed. Power Supply Units would have voltage threshold limits; for example, if the 12-volt line were to vary by ±5% an alert could be sent to the administrator. Devices such as disks have attributes such as whether the device can be removed from or inserted into an enclosure, whether a fault exists and the status of the port bypass circuit (if applicable).

Figure 1 Device Enclosure

A hardware-monitoring device is available which has sensors to report on device status and the environmental characteristics such as temperature.
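To make the threshold idea concrete, the small fragment below (purely illustrative; the sensor reading and alert mechanism are invented for the example) checks a 12-volt reading against the ±5% window mentioned above.

#include <stdio.h>

/* Illustrative check of a 12 V supply reading against a +/-5% threshold window */
#define NOMINAL_12V   12.0
#define TOLERANCE     0.05          /* 5% */

static int supply_out_of_range(double reading)
{
    double low  = NOMINAL_12V * (1.0 - TOLERANCE);   /* 11.4 V */
    double high = NOMINAL_12V * (1.0 + TOLERANCE);   /* 12.6 V */
    return reading < low || reading > high;
}

int main(void)
{
    double reading = 12.7;          /* hypothetical sensor value */

    if (supply_out_of_range(reading))
        printf("ALERT: 12 V rail at %.1f V is outside the +/-5%% window\n", reading);
    return 0;
}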

RAID Controller Organization

In addition, a RAID system provides one or more controllers to perform the RAID functionality. In the case of a dual controller system, the controllers may be configured as Active/Passive where one controller handles the complete workload with the other standing by as a backup. In the case of a failure with the primary controller, the standby will take over. A better method is to have both controllers configured as an Active/Active configuration. In this mode, the workload is split and better utilization occurs. If either controller fails then the surviving member will manage the total I/O demands.

SES via an Enclosure Services Device

The SCSI Enclosure Services command set is used to obtain the status of components within the enclosure and to turn on indicators or set various states within the components. There are two methods of obtaining enclosure information. One method uses an Enclosure Services Device to access environmental information, while the other uses an intermediary non-Enclosure Services Device to access the information. In the first case the Enclosure Services Device appears as a peripheral and is addressed in a similar fashion to other peripheral devices.

Figure 2 shows an enclosure services device which is directly accessed by an Application Client. The Enclosure Services Device responds as a SCSI device and will respond to an INQUIRY command by setting the ENCSERV bit, which indicates that the device can handle information relating to enclosures.

Figure 2 Enclosure Services Device
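A minimal sketch of the ENCSERV check follows. It assumes a buffer already holding standard INQUIRY data; the EncServ bit is commonly documented as bit 6 of byte 6 of the standard INQUIRY data, but treat that offset as an assumption to be verified against the SCSI Primary Commands standard.

/* Returns non-zero if the INQUIRY data indicates that the device supports
   enclosure services (EncServ bit set).
   inq must point to at least 7 bytes of standard INQUIRY data. */
static int has_enclosure_services(const unsigned char *inq)
{
    return (inq[6] & 0x40) != 0;    /* assumed location: byte 6, bit 6 */
}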

SES via a non-Enclosure Services Device

This method uses a peripheral device such as a disk to transport enclosure data to and from the Application Client. The Enclosure Services Device does not appear on the peripheral bus and cannot be directly accessed. The intermediate device uses the SCSI SEND DIAGNOSTIC and RECEIVE DIAGNOSTIC RESULTS commands to set status or obtain status information within the enclosure. Initially the application will ask the peripheral device whether an Enclosure Services Device is actually connected to it. Figure 3 shows how this is implemented.

Figure 3 Access via a non-Enclosure Services Device

SCSI Commands used with Enclosure Services Devices

Table 1 shows the mandatory commands that must be implemented by enclosure services devices.

Table 1 Mandatory Commands for Enclosure Services Device

Command Operation Code
INQUIRY 12h
RECEIVE DIAGNOSTIC RESULTS 1Ch
REQUEST SENSE 03h
SEND DIAGNOSTIC 1Dh
TEST UNIT READY 00h

Optional commands are shown in Table 2.

Table 2 Optional Commands for Enclosure Services Device

Command Operation Code
MODE SELECT (6) 15h
MODE SELECT (10) 55h
MODE SENSE (6) 1Ah
MODE SENSE (10) 5Ah
PERSISTENT RESERVE IN 5Eh
PERSISTENT RESERVE OUT 5Fh
RELEASE (6) 17h
RELEASE (10) 57h
RESERVE (6) 16h
RESERVE (10) 56h
WRITE BUFFER 3Bh

Enclosure Services Device Pages

An Enclosure Services Device communicates information by a set of Diagnostic Page Codes. Some of the more relevant pages are now described.

Configuration Page

The Configuration Page (Page Code 01h) is used to return a list of elements that are found in an enclosure. These elements are described by a header, which defines the number of element types that are supported within the enclosure. Each type has an eight-bit code which identifies the element being described. Typical devices found in an enclosure include Power Supplies, Cooling Modules, Temperature Sensors, Door Locks and Audible Alarms; these elements have the codes 02h, 03h, 04h, 05h and 06h respectively. Another field denotes the number of elements of a specified type in the enclosure. The Application Client reads this information from the Configuration Page by issuing a RECEIVE DIAGNOSTIC RESULTS command.
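As a small illustration, the element type codes listed above could be captured in code as follows (a sketch covering only the codes named in this document):

/* Element type codes named above (Configuration page) */
enum ses_element_type {
    SES_ELEM_POWER_SUPPLY  = 0x02,
    SES_ELEM_COOLING       = 0x03,
    SES_ELEM_TEMP_SENSOR   = 0x04,
    SES_ELEM_DOOR_LOCK     = 0x05,
    SES_ELEM_AUDIBLE_ALARM = 0x06
};

static const char *ses_element_name(unsigned char type)
{
    switch (type) {
    case SES_ELEM_POWER_SUPPLY:  return "Power Supply";
    case SES_ELEM_COOLING:       return "Cooling Module";
    case SES_ELEM_TEMP_SENSOR:   return "Temperature Sensor";
    case SES_ELEM_DOOR_LOCK:     return "Door Lock";
    case SES_ELEM_AUDIBLE_ALARM: return "Audible Alarm";
    default:                     return "Other/Unknown";
    }
}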

Enclosure Control Page

This page is used by the Application Client to control the elements that were listed when the Configuration Page was read. The client can set bits to denote the following four states for element types:

  • Information
  • Non Critical
  • Critical
  • Unrecoverable

Enclosure Status Page

This page is used to return the status of the elements listed in the Configuration Page. There is an overall status, which lists the status for a particular group of elements and an individual element status field for each instance of a particular device type.

Pages relevant to Storage Arrays

Array Control Page

This page is similar to the Enclosure Control Page but defines information relating to the status of a device when it participates in an array. States that can be indicated for a device include the following:

  • Hot Spare
  • Device is participating in a critical array
  • Device is a member of an array that has failed
  • Device is participating in a rebuild operation

Array Status Page

This page is similar to the Enclosure Status Page but returns information relating to the status of a device as described in the paragraph relating to the Array Control Page.

Programming Information

This section details how C code could be implemented to read the enclosure configuration:

Read enclosure configuration:

/* CDB for RECEIVE DIAGNOSTIC RESULTS (1Ch) with the PCV bit set (0x01),
   requesting the SES Configuration page */
char cdb[6] = {(char)SCSI_RCV_DIAG, (char)0x01, (char)SES_PAGE_CONFIGURATION,
               (char)SES_MAX_XFER_SIZE_MSB, (char)SES_MAX_XFER_SIZE_LSB, (char)0x00};

/* Send the 6-byte CDB and receive the page data in response.
   This is an ASPI fragment: srb, scsi, dest, destlen and cdblen (6 here)
   are assumed to have been set up by the caller. */

/* Zero all fields of the SCSI Request Block */
memset(&srb, 0, sizeof(srb));

/* Set up the SRB for an Execute SCSI Command request */
srb.SRB_Cmd        = SC_EXEC_SCSI_CMD;
srb.SRB_HaId       = scsi->adapter;     /* host adapter number */
srb.SRB_Target     = scsi->scsi;        /* target ID */
srb.SRB_Lun        = scsi->lun;         /* logical unit number */
srb.SRB_CDBLen     = cdblen;
memcpy(&(srb.CDBByte[0]), cdb, cdblen);

/* Data-in transfer: read the page into the caller-supplied buffer */
srb.SRB_BufLen     = *destlen;
srb.SRB_BufPointer = dest;
srb.SRB_Flags      = SRB_DIR_IN | SRB_EVENT_NOTIFY;
srb.SRB_SenseLen   = SENSE_LEN;
srb.SRB_PostProc   = ASPICompletionEvent;

ResetEvent(ASPICompletionEvent);
ASPIStatus         = SendASPI32Command((LPSRB)&srb);

/* On completion, parse the Configuration page and record the types and
   counts of each element type */

To read the enclosure status page:

char cdb[6] = {(char)SCSI_RCV_DIAG, (char)0x01, (char)SES_PAGE_ENCLOSURE,
               (char)SES_MAX_XFER_SIZE_MSB, (char)SES_MAX_XFER_SIZE_LSB, (char)0x00};

/* Send the 6-byte CDB & receive the data in response */