HowTo make X CentOS nodes into a cluster

CentOS Cluster requirements

I assume that you have basic Linux know-how and are able to use the root user...

I also strongly recommend that you know how to create iSCSI targets + LUNs on your NAS or SAN before you start this walkthrough, since we will be using them for our shared resources and the GFS drive(s) for the clusters. Otherwise you might get lost in the middle.

It is highly recommended that you have a master and a number of nodes. In this walkthrough I will show you how it can be done with 4 nodes, 2 nodes in each cluster, for a total of 5 machines.

I have installed these nodes with the CentOS x86_64 LiveCD under Windows Server 2012 Hyper-V.

Hardware for this walkthrough:
Asus Z8NA-D6
2x Xeon E5645 (24 threads)
48GB DDR3-1333 ECC/REG Memory
4x 256GB Samsung 840 PRO SSD
2x 128GB Samsung 840 PRO SSD

Synology DS1512+
5x 2TB WD Red discs

But it can be done with a lot less physical hardware if only 2 nodes and a master are needed.

In this scenario you will need 3 iSCSI targets + LUNs to get this to work; in a smaller setup you can of course go down to only one or maybe 2, depending on what you need it for.
In this case I have already made 3 iSCSI targets on my NAS:

iqn.xxxxxxx.clu-xlu-resource
iqn.xxxxxxx.clu-xlu-quorum1
iqn.xxxxxxx.clu-xlu-quorum2

I expect that you know how to do the basic CentOS installation, so I will not go into that at all. I will, however, tell you what my nodes are called:

SRVCENTO-CLU1
SRVCENTO-CLU2
SRVCENTO-CLU3
SRVCENTO-CLU4
SRVCENTO-CLUM

For each node the specifications in Hyper-V are as follows:
CPUs: 2 (virtual)
MEM: 3GB
HDD: 30GB
NET: 2 network cards (virtual)
- Network 1: make eth0 external
- Network 2: make eth1 internal (for example use VLAN 2; this can be set up in the Hyper-V network configuration)

Maybe you want to give the CLUM (master) a bit more MEM.


CLU1 + CLU2 go into cluster CLU-XLU1.
CLU3 + CLU4 go into cluster CLU-XLU2.
Both of these are controlled by the CLUM node, which has the Conga (luci) app installed on it as the only thing that is actually different from the others.

The following steps are laid out so that you do each individual step on all the nodes before you move on to the next step. That also makes it easier for you to know where you are and what you are missing on each node.

Update your CentOS!

You need to open a terminal or an SSH connection to the node and run the commands as the root user.
(This will update your instance, and it can take some time.)
[root@srvcentos-clum]# yum -y update

Throughout this guide I will be using nano, which in my mind is the best text-based editor out there.
[root@srvcentos-clum]# yum -y install nano
Remember to do this on all nodes before you move on; there is nothing more frustrating than having one node with outdated packages.
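If you already have working SSH access and name resolution to the other nodes (an assumption at this point, since sshd and the hosts file are only handled in the steps below), a small loop from the master can save some typing; otherwise just run the two commands above on each node by hand:
[root@srvcentos-clum]# for node in srvcentos-clu1 srvcentos-clu2 srvcentos-clu3 srvcentos-clu4; do ssh root@$node 'yum -y update && yum -y install nano'; done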

Step 1. Disable and start services

First you stop these services on your first node (CLU1) and make sure they will not be started again automatically after a reboot:
[root@srvcentos-clum]# service iptables stop

[root@srvcentos-clum]# service ip6tables stop

[root@srvcentos-clum]# chkconfig iptables off

[root@srvcentos-clum]# chkconfig ip6tables off
(This is not ideal for a production environment, but for this walkthrough we will disable the services.)

These services are for iSCSI, which we need for our cluster shared resource, i.e. our nodes' shared storage. From now on they will also be started automatically every time you reboot.
[root@srvcentos-clum]# service iscsi start

[root@srvcentos-clum]# service iscsid start

[root@srvcentos-clum]# chkconfig iscsi on

[root@srvcentos-clum]# chkconfig iscsid on

This starts the SSH access so you can reach this node from outside, still within your own network, and the service will also start automatically every time you reboot.
[root@srvcentos-clum]# service sshd start

[root@srvcentos-clum]# chkconfig sshd on

This service mounts network-based filesystems at boot, and we want it to run automatically after a reboot.
[root@srvcentos-clum]# service netfs start

[root@srvcentos-clum]# chkconfig netfs on

We want the normal NetworkManager switched off, because the cluster software cannot function with that kind of app. Hence we turn it off for good.
[root@srvcentos-clum]# service NetworkManager stop

[root@srvcentos-clum]# chkconfig NetworkManager off

Now we need to change the name of the node:
The command below sets the node name; replace srvcentos-clu1.rmus.me with what you want the node to be called in your environment (in this walkthrough it is called srvcentos-clu1.rmus.me).
[root@srvcentos-clum]# hostname srvcentos-clu1.rmus.me
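Note: on CentOS 6 the hostname command only changes the running hostname. To make the new name survive the reboot below, also set HOSTNAME in /etc/sysconfig/network (shown here with the example name from this walkthrough):
[root@srvcentos-clum]# nano -w /etc/sysconfig/network

 NETWORKING=yes
 HOSTNAME=srvcentos-clu1.rmus.me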

[root@srvcentos-clum]# reboot

Note: Step 1 above needs to be done on all the nodes in your cluster, so in this walkthrough I need to repeat it 5 times. (I can recommend making a script for this.)
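A minimal sketch of such a script, covering the commands from step 1 above (the file name and the argument handling are my own example; review and adjust it to your environment before running it):
[root@srvcentos-clum]# nano -w step1.sh

 #!/bin/bash
 # step1.sh - run as root on each node; usage: sh step1.sh srvcentos-clu1.rmus.me
 # Stop the firewalls and keep them off after reboot (not for production!)
 service iptables stop
 service ip6tables stop
 chkconfig iptables off
 chkconfig ip6tables off
 # Start the services we need and enable them at boot
 for svc in iscsi iscsid sshd netfs; do
   service $svc start
   chkconfig $svc on
 done
 # NetworkManager conflicts with the cluster software, so turn it off for good
 service NetworkManager stop
 chkconfig NetworkManager off
 # Set the node name, and put it in /etc/sysconfig/network so it survives the reboot
 hostname "$1"
 sed -i "s/^HOSTNAME=.*/HOSTNAME=$1/" /etc/sysconfig/network
 reboot

[root@srvcentos-clum]# sh step1.sh srvcentos-clu1.rmus.me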

Step 2. Change the node(s) net and hosts

Now we need to make the 2 network cards work independently of each other, so they do not interfere with each other.
First run ifconfig to see what the HWADDR (MAC address) is and write it down; we are going to use it shortly.
[root@srvcentos-clum]# ifconfig

External use (WWW):
Replace all the xx and x with the settings for your system.
[root@srvcentos-clum]# nano -w /etc/sysconfig/network-scripts/ifcfg-eth0

 DEVICE=eth0
 BOOTPROTO=static
 HWADDR=xx:xx:xx:xx:xx:xx
 IPADDR=192.168.1.x
 NETMASK=255.255.255.0
 GATEWAY=192.168.1.1
 DNS1=192.168.1.11
 ONBOOT=yes
 NETTYPE=qeth
 TYPE=Ethernet

(CTRL+X + y enter = save the file)

In our test it will look like this
DEVICE=eth0
BOOTPROTO=static
HWADDR=00:00:00:00:10:01
IPADDR=192.168.1.11
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.11
ONBOOT=yes
NETTYPE=qeth
TYPE=Ethernet

Next you need to set up the eth1 interface, which we are going to use for our heartbeat and other things.

Internal use (intranet; this runs on VLAN 2 if you have set it up correctly in Hyper-V).
Replace all the xx and x with the settings for your system.
[root@srvcentos-clum]# nano -w /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
BOOTPROTO=static
HWADDR=00:00:00:00:10:xx
IPADDR=10.0.0.x
NETMASK=255.255.255.0
ONBOOT=yes
NETTYPE=qeth
TYPE=Ethernet

This is the same procedure as before with eth0: you get the HWADDR (MAC address) from ifconfig. If you are in any doubt, exit the file without saving and go back in again after you have written down the numbers.
In our walkthrough it looks like this:
DEVICE=eth1
BOOTPROTO=static
HWADDR=00:00:00:00:10:02
IPADDR=10.0.0.11
NETMASK=255.255.255.0
ONBOOT=yes
NETTYPE=qeth
TYPE=Ethernet

Instead of NetworkManager, which we disabled in the previous step, we want the simplified network configuration tool: network.
[root@srvcentos-clum]# chkconfig network on

[root@srvcentos-clum]# service network start
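To do a quick sanity check of the new configuration (using the example gateway address from above; replace it with your own):
[root@srvcentos-clum]# ifconfig eth0

[root@srvcentos-clum]# ping -c 3 192.168.1.1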

Now disable IPv6, because it can sometimes break our cluster in some scenarios, so we just kill it in this walkthrough.
[root@srvcentos-clum]# echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6

[root@srvcentos-clum]# echo 1 > /proc/sys/net/ipv6/conf/default/disable_ipv6
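Note that writing to /proc only lasts until the next reboot. To keep IPv6 disabled permanently, a common approach is to add the equivalent settings to /etc/sysctl.conf as well:
[root@srvcentos-clum]# nano -w /etc/sysctl.conf

 net.ipv6.conf.all.disable_ipv6 = 1
 net.ipv6.conf.default.disable_ipv6 = 1

[root@srvcentos-clum]# sysctl -p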

ifconfig shows you whether it is disabled; if there is any "inet6 addr:" line, then it is not disabled.
[root@srvcentos-clum]# ifconfig

Change your hosts file on each node. (All nodes' files need to look exactly the same.)
[root@srvcentos-clum]# cd /

[root@srvcentos-clum]# nano -w /etc/hosts

(Below is how mine looks)

127.0.0.1 localhost.localdomain localhost
192.168.1.10 srvcentos-clum.yourdomain.com srvcentos-clum
10.0.0.10 esrvcentos-clum.yourdomain.com esrvcentos-clum
192.168.1.11 srvcentos-clu1.yourdomain.com srvcentos-clu1
10.0.0.11 esrvcentos-clu1.yourdomain.com esrvcentos-clu1
192.168.1.12 srvcentos-clu2.yourdomain.com srvcentos-clu2
10.0.0.12 esrvcentos-clu2.yourdomain.com esrvcentos-clu2
192.168.1.13 srvcentos-clu3.yourdomain.com srvcentos-clu3
10.0.0.13 esrvcentos-clu3.yourdomain.com esrvcentos-clu3
192.168.1.14 srvcentos-clu4.yourdomain.com srvcentos-clu4
10.0.0.14 esrvcentos-clu4.yourdomain.com esrvcentos-clu4
192.168.1.20 srvcentos.yourdomain.com srv1512nas
#::1 localhost6.localdomain6 localhost6
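To check that the entries resolve as expected, you can ping each name once from the node (a quick check using the names from the file above):
[root@srvcentos-clum]# for h in srvcentos-clum srvcentos-clu1 srvcentos-clu2 srvcentos-clu3 srvcentos-clu4; do ping -c 1 $h; done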
Note: Step 2 above needs to be done on all the nodes in your cluster.

Step 3. Install the cluster apps

To administer the cluster, we need to install the "High Availability" group, luci and ricci, as follows:

Each node will be administered by CLUM; install the ricci agent on each node (slave).

[root@srvcentos-clum]# yum -y groupinstall "High Availability"
Ricci is our heartbeat client, our agent so to speak.
[root@srvcentos-clum]# yum install ricci
Each node is administered through the agent, so we need to start it.
[root@srvcentos-clum]# service ricci start
Starting ricci: [ OK ]

[root@srvcentos-clum]# chkconfig ricci on
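Note: depending on your CentOS 6 minor release, luci will ask for the ricci user's password when you add a node to a cluster, so it is a good idea to set one on every node now:
[root@srvcentos-clum]# passwd ricci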

Step 3 above needs to be done on all the nodes in your cluster.

Below is only for the master (clum) node.

Select a node to host luci and install the luci software on that node. In our case the CLUM node which is our master:
[root@srvcentos-clum]# yum -y groupinstall "High Availability Management"

[root@srvcentos-clum]# yum install luci

[root@srvcentos-clum]# chkconfig luci on

[root@srvcentos-clum]# service luci start
Starting luci: generating https SSL certificates... done [ OK ]
Please, point your web browser to https://srvcentos-clum:8084
to access the Conga interface, where we can add the nodes to the clusters. This is best done on the CLUM node itself for the best overview.

The first time you access luci, a web browser specific prompt regarding the self-signed SSL certificate (of the luci server) is displayed. Upon acknowledging the dialog box or boxes, your Web browser displays the luci login page.

Although any user able to authenticate on the system that is hosting luci can log in to luci, as of Red Hat Enterprise Linux 6.2 only the root user on the system that is running luci can access any of the luci components until an administrator (the root user or a user with administrator permission) sets permissions for that user.

Logging in to luci displays the luci Homebase page, as shown in the screenshot below:

(Use the root user to log in here.)
Luci Home Page

For more info on how to log in and set up the cluster(s) themselves, see: Cluster. But remember that this walkthrough is for 2 separate clusters.

Step 4. Install and setup the shared resource with iSCSI

The part below is going to get tricky, because we are going to assign the iSCSI targets this way:

CLU1+CLU2 is going to use this target(initiator): clu-xlu-quorum1
CLU3+CLU4 is going to use this target(initiator): clu-xlu-quorum2
CLUM is going to use this target(initiator): clu-xlu-resource

Note: You need to be precise in this step 4 for each node, or you will not get it to work.

iSCSI is a protocol for distributed disk access using SCSI commands sent over Internet Protocol networks. The package is available under Red Hat Enterprise Linux / CentOS / Fedora Linux and can be installed using the yum command.

I assume that you have already made the 3 iSCSI targets + LUNs needed for this to work.
Note: The walkthrough below is mainly for the master (clum) node.

Install the iSCSI software package:
[root@srvcentos-clum]# yum -y install scsi-target-utils
Edit target iSCSI configuration:
[root@srvcentos-clum]# nano -w /etc/tgt/targets.conf

It is important that you change the highlighted areas to match what you have on your end:
<target iqn.name>
backing-store /dev/md0
initiator-address 192.168.1.x
incominguser userid password
</target>
(incominguser = CHAP security userid and password)

Ours looks like this, because we are setting up the master (clum) first:
<target srv1512nas.clu-xlu-resource>
backing-store /dev/md0
initiator-address 192.168.1.20
incominguser iscsiadm 123456789adm
</target>
Start the iSCSI target daemon and configure it to start at system boot:
[root@srvcentos-clum]# /etc/init.d/tgtd start
Starting SCSI target daemon: [ OK ]
[root@srvcentos-clum]# chkconfig tgtd on
The part below is not needed since we have switched off iptables (the firewall), but it shows what you would need to open up in case you wanted to keep using it.

Add iptables rule to allow iSCSI traffic:
[root@srvcentos-clum]# nano -w /etc/sysconfig/iptables

-A INPUT -m state --state NEW -m tcp -p tcp --dport 3260 -j ACCEPT
(This is a line to add inside the file, not a command.)

[root@srvcentos-clum]# service iptables restart
Moving on, check the iSCSI target configuration:
[root@srvcentos-clum]# tgtadm --mode target --op show

Target 1: iqn.name <- this should match your iSCSI target, in this case: srv1512nas.clu-xlu-resource
System information:
Driver: iSCSI
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
Account information:
iscsiadm <- this shows which user we use to log onto the target
ACL information:
192.168.1.20 <- and the IP address of the NAS
Configuring the iSCSI Initiator

Install the software package:
[root@srvcentos-clum]# yum -y install iscsi-initiator-utils
Configure the iqn name for the initiator:
[root@srvcentos-clum]# nano -w /etc/iscsi/initiatorname.iscsi

InitiatorName=iqn.name
InitiatorAlias=name <- not the iqn

This is how ours looks:
InitiatorName=srv1512nas.clu-xlu-resource
InitiatorAlias=clu-xlu-resource
Edit the iSCSI initiator configuration:
[root@srvcentos-clum]# nano -w /etc/iscsi/iscsid.conf

You need to change the following in the file: remove the # in front of the lines below, or just write them in manually.

node.startup = automatic
node.session.auth.authmethod = CHAP
node.session.auth.username = iscsiadm
node.session.auth.password = 123456789adm
Start the iSCSI initiator daemon and discover the targets on our iSCSI server:
[root@srvcentos-clum]# iscsiadm --mode discovery -t sendtargets --portal 192.168.1.20

Starting iscsid: [ OK ]
192.168.1.20:3260,1 srv1512nas.clu-xlu-resource
192.168.1.20:3260,1 srv1512nas.clu-xlu-quorum1
192.168.1.20:3260,1 srv1512nas.clu-xlu-quorum2

[root@srvcentos-clum]# chkconfig iscsid on
Now log in to the iSCSI target:
[root@srvcentos-clum]# iscsiadm --mode node --targetname srv1512nas.clu-xlu-resource --portal 192.168.1.20 --login

Logging in to [iface: default, target: srv1512nas.clu-xlu-resource, portal: 192.168.1.20,3260] (multiple)
Login to [iface: default, target: srv1512nas.clu-xlu-resource, portal: 192.168.1.20,3260] successful.
The discovery updates the iSCSI targets database in /var/lib/iscsi/; you can inspect what was recorded like this:
[root@srvcentos-clum]# cat /var/lib/iscsi/send_targets/192.168.1.20,3260/st_config

discovery.startup = manual
discovery.type = sendtargets
discovery.sendtargets.address = 192.168.1.20
discovery.sendtargets.port = 3260
discovery.sendtargets.auth.authmethod = None
discovery.sendtargets.timeo.login_timeout = 15
discovery.sendtargets.use_discoveryd = No
discovery.sendtargets.discoveryd_poll_inval = 30
discovery.sendtargets.reopen_max = 5
discovery.sendtargets.timeo.auth_timeout = 45
discovery.sendtargets.timeo.active_timeout = 30
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
Check the session status with the target:
[root@srvcentos-clum]# iscsiadm --mode session --op show
tcp: [2] 192.168.1.20:3260,1 srv1512nas.clu-xlu-resource
Note: Part 4 above has to be done on all the nodes, but remember that the initiator has to be changed in all of the above steps for each node. Look at the matrix below for which initiator should be used for which node:

CLU1+CLU2 is going to use this target(initiator): clu-xlu-quorum1
CLU3+CLU4 is going to use this target(initiator): clu-xlu-quorum2
CLUM is going to use this target(initiator): clu-xlu-resource

Step 5. Make a Partition and Set Up the Mount in fstab

Now you need to make a partition on the newly attached iSCSI drive, which should also be attached every time we boot the server.
Check what the disk letter is:
[root@srvcentos-clum]# fdisk -l
Disk /dev/sdb: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001e98f
In the above you can see that our iSCSI drive is now available for partitioning at: Disk /dev/sdb.

Create a new partition on our iSCSI disk and format it:
[root@srvcentos-clum]# fdisk /dev/sdb

Command (m for help):
n + p + 1 + enter + enter
t + 83
(The above gives you a partition of type 83 (Linux), which we will format with ext4 below.)
Command (m for help): p
(p lists the current partition table that you just created)

Device Boot Start End Blocks Id System
/dev/sdb1 1 26108 209712478+ 83 Linux

Command (m for help): w
(w will write the partition to the drive)
Now we are going to format the partition with ext4:
[root@srvcentos-clum]# mkfs.ext4 /dev/sdb1
Now we are going to add the partition to our fstab, but we need to know what the UUID is:
[root@srvcentos-clum]# blkid /dev/sdb1
/dev/sdb1: UUID="00xx0000-0000-0000-0000-00xx0000x000" TYPE="ext4"

Now we need to create a folder where we will mount this new network drive.
[root@srvcentos-clum]# mkdir /mnt/resource
Mounting the iSCSI partition automatically at system boot
Now you need to add this line to /etc/fstab:
[root@srvcentos-clum]# nano -w /etc/fstab
UUID=00xx0000-0000-0000-0000-00xx0000x000 /mnt/resource ext4 _netdev,rw 0 0
[root@srvcentos-clum]# chkconfig --list netfs
[root@srvcentos-clum]# mount -a -O _netdev
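You can verify that the resource drive is mounted at the folder created above:
[root@srvcentos-clum]# df -h /mnt/resource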

Now you are done with the clum node; it is complete for this walkthrough.

Now we are going to set up each cluster node (clu1, clu2, clu3 and clu4) with the shared GFS drive:

CLU1+CLU2 is going to use this target: clu-xlu-quorum1
CLU3+CLU4 is going to use this target: clu-xlu-quorum2

The only difference between the clum node and the slave nodes is the part below, where you make a partition on the attached iSCSI drive:
Note: This needs to be done on both nodes per cluster.
[root@srvcentos-clu1]# fdisk /dev/sdb

(The procedure below gives you a partition of type 8e (Linux LVM), which we will use for the GFS2 volume, as shown below.)
Command (m for help):
n + p + 1 + enter + enter
t + 8e

(p lists the current partition table that you just created)
Command (m for help): p

Device Boot Start End Blocks Id System
/dev/sdb1 1 26108 209712478+ 8e Linux LVM

(w will write the partition to the drive)
Command (m for help): w

Note: This needs to be done on both nodes per cluster.

Now we need to create a volume group on our newly created partition; in this case we call it "stor_vg".
Note: This is only needed on one node per cluster.
[root@srvcentos-clu1]# vgcreate stor_vg /dev/sdb1

No physical volume label read from /dev/sdb1
Physical volume "/dev/sdb1" successfully created
Volume group "stor_vg" successfully created
Now we need to create a logical volume on our volume group that we can use (in this case it is set to 100GB).
Note: This only needs to be done on one node per cluster, since the logical volume lives on the shared storage.
[root@srvcentos-clu1]# lvcreate -L +100G -n stor_lv stor_vg

Logical volume "stor_lv" created
Now format our logical volume with GFS2 so that it is accessible from both nodes in CLU-XLU1.
Note: This only needs to be done on one node per cluster.
[root@srvcentos-clu1]# mkfs.gfs2 -p lock_dlm -t hydra:storage -j 4 /dev/mapper/stor_vg-stor_lv

(substitute hydra with the name of your cluster)
Now you need to create a folder where you are going to mount your drive.
Note: This needs to be done on both nodes per cluster.
[root@srvcentos-clu1]# cd /
[root@srvcentos-clu1]# mkdir storage
Update /etc/fstab to mount the filesystem at /storage at startup:
Note: This needs to be done on both nodes per cluster.
[root@srvcentos-clu1]# nano -w /etc/fstab

(The line below needs to be inserted into your /etc/fstab)
/dev/mapper/stor_vg-stor_lv /storage gfs2 _netdev,defaults 0 0

Now you need to mount the filesystem on your instance.
Note: This needs to be done on both nodes per cluster.
[root@srvcentos-clu1]# mount -a -O _netdev
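To verify that the GFS2 filesystem is mounted on the node:
[root@srvcentos-clu1]# mount | grep gfs2

[root@srvcentos-clu1]# df -h /storage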


Now you will, with 99% certainty, have 2 working clusters; the last 1% is human error :-)

srvcentos-clu1 iSCSI drive (quorum1) mounted at /storage (100GB), accessible from all the nodes in the cluster: clu-xlu1
srvcentos-clu2 iSCSI drive (quorum1) mounted at /storage (100GB), accessible from all the nodes in the cluster: clu-xlu1
srvcentos-clu3 iSCSI drive (quorum2) mounted at /storage (100GB), accessible from all the nodes in the cluster: clu-xlu2
srvcentos-clu4 iSCSI drive (quorum2) mounted at /storage (100GB), accessible from all the nodes in the cluster: clu-xlu2
srvcentos-clum iSCSI drive (resource) mounted at /mnt/resource (100GB), could be used as a shared resource for any of the clusters
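Once the nodes have been added to their clusters in luci, you can check membership and service status from any node with clustat, which comes with the "High Availability" group installed above:
[root@srvcentos-clu1]# clustat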

Luci Home Page



Reference: cluster suite components, grouped by function (component: description).

Conga
- luci: Remote Management System - Management Station
- ricci: Remote Management System - Managed Station

Cluster Configuration Tool
- system-config-cluster: Command used to manage cluster configuration in a graphical setting.

Cluster Logical Volume Manager (CLVM)
- clvmd: The daemon that distributes LVM metadata updates around a cluster. It must be running on all nodes in the cluster and will give an error if a node in the cluster does not have this daemon running.
- lvm: LVM2 tools. Provides the command-line tools for LVM2.
- system-config-lvm: Provides a graphical user interface for LVM2.
- lvm.conf: The LVM configuration file. The full path is /etc/lvm/lvm.conf.

Cluster Configuration System (CCS)
- ccs_tool: ccs_tool is part of the Cluster Configuration System (CCS). It is used to make online updates of CCS configuration files. Additionally, it can be used to upgrade cluster configuration files from CCS archives created with GFS 6.0 (and earlier) to the XML configuration format used with this release of Red Hat Cluster Suite.
- ccs_test: Diagnostic and testing command that is used to retrieve information from configuration files through ccsd.
- ccsd: CCS daemon that runs on all cluster nodes and provides configuration file data to cluster software.
- cluster.conf: This is the cluster configuration file. The full path is /etc/cluster/cluster.conf.

Cluster Manager (CMAN)
- cman.ko: The kernel module for CMAN.
- cman_tool: This is the administrative front end to CMAN. It starts and stops CMAN and can change some internal parameters such as votes.
- dlm_controld: Daemon started by the cman init script to manage dlm in the kernel; not used by the user.
- gfs_controld: Daemon started by the cman init script to manage gfs in the kernel; not used by the user.
- group_tool: Used to get a list of groups related to fencing, DLM, GFS, and getting debug information; includes what cman_tool services provided in RHEL 4.
- groupd: Daemon started by the cman init script to interface between openais/cman and dlm_controld/gfs_controld/fenced; not used by the user.
- libcman.so.<version number>: Library for programs that need to interact with cman.ko.

Resource Group Manager (rgmanager)
- clusvcadm: Command used to manually enable, disable, relocate, and restart user services in a cluster.
- clustat: Command used to display the status of the cluster, including node membership and services running.
- clurgmgrd: Daemon used to handle user service requests including service start, service disable, service relocate, and service restart.
- clurmtabd: Daemon used to handle clustered NFS mount tables.

Fence
- fence_apc: Fence agent for APC power switch.
- fence_bladecenter: Fence agent for IBM BladeCenters with Telnet interface.
- fence_bullpap: Fence agent for Bull Novascale Platform Administration Processor (PAP) interface.
- fence_drac: Fencing agent for Dell Remote Access Card.
- fence_ipmilan: Fence agent for Bull Novascale Intelligent Platform Management Interface (IPMI).
- fence_wti: Fence agent for WTI power switch.
- fence_brocade: Fence agent for Brocade Fibre Channel switch.
- fence_mcdata: Fence agent for McData Fibre Channel switch.
- fence_vixel: Fence agent for Vixel Fibre Channel switch.
- fence_sanbox2: Fence agent for SANBox2 Fibre Channel switch.
- fence_ilo: Fence agent for HP iLO interfaces (formerly fence_rib).
- fence_rsa: I/O fencing agent for IBM RSA II.
- fence_gnbd: Fence agent used with GNBD storage.
- fence_scsi: I/O fencing agent for SCSI persistent reservations.
- fence_egenera: Fence agent used with the Egenera BladeFrame system.
- fence_manual: Fence agent for manual interaction. NOTE: This component is not supported for production environments.
- fence_ack_manual: User interface for the fence_manual agent.
- fence_node: A program which performs I/O fencing on a single node.
- fence_xvm: I/O fencing agent for Xen virtual machines.
- fence_xvmd: I/O fencing agent host for Xen virtual machines.
- fence_tool: A program to join and leave the fence domain.
- fenced: The I/O fencing daemon.

DLM
- libdlm.so.<version number>: Library for Distributed Lock Manager (DLM) support.

GFS
- gfs.ko: Kernel module that implements the GFS file system and is loaded on GFS cluster nodes.
- gfs_fsck: Command that repairs an unmounted GFS file system.
- gfs_grow: Command that grows a mounted GFS file system.
- gfs_jadd: Command that adds journals to a mounted GFS file system.
- gfs_mkfs: Command that creates a GFS file system on a storage device.
- gfs_quota: Command that manages quotas on a mounted GFS file system.
- gfs_tool: Command that configures or tunes a GFS file system. This command can also gather a variety of information about the file system.
- mount.gfs: Mount helper called by mount(8); not used by the user.

GNBD
- gnbd.ko: Kernel module that implements the GNBD device driver on clients.
- gnbd_export: Command to create, export and manage GNBDs on a GNBD server.
- gnbd_import: Command to import and manage GNBDs on a GNBD client.
- gnbd_serv: A server daemon that allows a node to export local storage over the network.

LVS
- pulse: This is the controlling process which starts all other daemons related to LVS routers. At boot time, the daemon is started by the /etc/rc.d/init.d/pulse script. It then reads the configuration file /etc/sysconfig/ha/lvs.cf. On the active LVS router, pulse starts the lvs daemon. On the backup router, pulse determines the health of the active router by executing a simple heartbeat at a user-configurable interval. If the active LVS router fails to respond after a user-configurable interval, it initiates failover. During failover, pulse on the backup LVS router instructs the pulse daemon on the active LVS router to shut down all LVS services, starts the send_arp program to reassign the floating IP addresses to the backup LVS router's MAC address, and starts the lvs daemon.
- lvsd: The lvs daemon runs on the active LVS router once called by pulse. It reads the configuration file /etc/sysconfig/ha/lvs.cf, calls the ipvsadm utility to build and maintain the IPVS routing table, and assigns a nanny process for each configured LVS service. If nanny reports a real server is down, lvs instructs the ipvsadm utility to remove the real server from the IPVS routing table.
- ipvsadm: This service updates the IPVS routing table in the kernel. The lvs daemon sets up and administers LVS by calling ipvsadm to add, change, or delete entries in the IPVS routing table.
- nanny: The nanny monitoring daemon runs on the active LVS router. Through this daemon, the active LVS router determines the health of each real server and, optionally, monitors its workload. A separate process runs for each service defined on each real server.
- lvs.cf: This is the LVS configuration file. The full path for the file is /etc/sysconfig/ha/lvs.cf. Directly or indirectly, all daemons get their configuration information from this file.
- Piranha Configuration Tool: This is the Web-based tool for monitoring, configuring, and administering LVS. It is the default tool to maintain the /etc/sysconfig/ha/lvs.cf LVS configuration file.
- send_arp: This program sends out ARP broadcasts when the floating IP address changes from one node to another during failover.

Quorum Disk
- qdisk: A disk-based quorum daemon for CMAN / Linux-Cluster.
- mkqdisk: Cluster Quorum Disk Utility.
- qdiskd: Cluster Quorum Disk Daemon.