High Availability Cluster on CentOS 7 using Corosync, Pacemaker & pcs

First, install these packages if they are not already present:

yum -y install corosync pacemaker pcs

Before the cluster can be configured, the pcs daemon must be started and enabled at boot on each node, so run these commands in your terminal:

systemctl enable pcsd.service
systemctl start pcsd.service

Also, yum will create an account “hacluster” for cluster management, so you should change its password on every node.
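The standard passwd command works for this; run it on both nodes:

passwd hacluster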

Now it is time to define the cluster. Use the hacluster account and the password you chose in the previous step:

pcs cluster auth itsol-db1 itsol-db2
pcs cluster setup --name itsol-db itsol-db1 itsol-db2
pcs cluster start --all
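The auth command above prompts for credentials interactively; the pcs shipped with CentOS 7 should also accept them on the command line (replace <password> with the password you set for hacluster):

pcs cluster auth itsol-db1 itsol-db2 -u hacluster -p <password>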

The cluster should now come up; verify it with:

pcs status nodes
pcs status corosync
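If you want to look beneath pcs, Corosync's own tool shows the ring status on each node:

corosync-cfgtool -s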

Now you can define resources. In this example there are two virtual IP addresses, one for the master and one for the slave:

pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=10.1.40.135 cidr_netmask=32 op monitor interval=30s
pcs resource create virtual_ip2 ocf:heartbeat:IPaddr2 ip=10.1.40.136 cidr_netmask=32 op monitor interval=30s
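To double-check the options of a resource you just created, the pcs shipped with CentOS 7 can print them by resource name:

pcs resource show virtual_ip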

Since this is a simple cluster (only two nodes), we'll just disable the STONITH and quorum options:

pcs property set stonith-enabled=false
pcs property set no-quorum-policy=ignore
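You can confirm that both properties were applied:

pcs property list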

To check the current status and resource location:

pcs status resources
pcs status|grep virtual_ip

Now it is time to add location constraints: you specify on which node you prefer each resource to run. In our case:

pcs constraint location virtual_ip prefers itsol-db1=50
pcs constraint location virtual_ip2 prefers itsol-db2=50

To check all constraints:

pcs constraint
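If a constraint ever needs to be removed, list the constraints together with their generated IDs and delete by ID (use the ID shown in the --full output):

pcs constraint --full
pcs constraint remove <constraint-id>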

 

Some other commands that can be useful with this configuration:

pcs cluster stop --all
pcs cluster start --all

pcs cluster stop itsol-db1
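And to bring that node back into the cluster:

pcs cluster start itsol-db1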

Also, if you want to move a virtual IP to another node, you can simply put the preferred one in standby:

pcs cluster standby itsol-db1

And to bring it back

pcs cluster unstandby itsol-db1
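Standby moves every resource off the node. If you only want to relocate a single resource, pcs can do that directly; move creates a temporary location constraint, which clear removes again:

pcs resource move virtual_ip itsol-db2
pcs resource clear virtual_ip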

 

In case you want to run a bash script on resource transfer, you can copy the Dummy script from the heartbeat resource folder and add it as a resource (see the sketch after these commands for where to hook in your own command):

cd /usr/lib/ocf/resource.d/heartbeat/
cp Dummy FailOverScript
pcs resource create script ocf:heartbeat:FailOverScript op monitor interval=1min
pcs constraint colocation add script virtual_ip INFINITY
pcs constraint order script then virtual_ip
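The copied agent is a plain bash script, so the usual approach is to call your own command from its start and stop actions (in the stock Dummy agent these are the dummy_start and dummy_stop functions; check your copy, since the names can differ between resource-agents versions). A minimal sketch, with /usr/local/bin/on-failover.sh standing in for whatever you actually want to run:

# inside FailOverScript, at the end of the start action
/usr/local/bin/on-failover.sh start   # hypothetical helper, replace with your own script

# and at the end of the stop action
/usr/local/bin/on-failover.sh stop    # hypothetical helper, replace with your own script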

To test the script manually, export OCF_ROOT and call the agent with an OCF action such as monitor:

export OCF_ROOT=/usr/lib/ocf
/usr/lib/ocf/resource.d/heartbeat/FailOverScript monitor
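If ocf-tester came along with the resource-agents package, it can exercise all of the agent's actions in one run:

ocf-tester -n script /usr/lib/ocf/resource.d/heartbeat/FailOverScript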

 

Replies

  1. Hi, I have recently begun to learn about cluster servers, and I have read this article, as well as others, about deployment with pcs, corosync and pacemaker. I want to know: why disable STONITH in a simple (two-node) cluster?
    Thanks in advance!

    • STONITH comes from Shoot The Other Node In The Head. In our case, with only two nodes, it is not recommended to have it running because we don't have quorum, and if we lose network connectivity on the main interfaces each node will try to shut the other one down. This is a kind of split-brain situation and can end with both nodes shut down, and we don't want that, right?
      Usually STONITH is used to protect the data from being corrupted; in our case we have a share-nothing scenario and there is no chance of losing data. But you can enable this option on your cluster, try it and share your knowledge.

      • Corosync has the ability to treat two-node clusters as if only one node is required for quorum. From my personal experience, it's much better to end up with two nodes down than with your data corrupted. However, I agree with you: STONITH can be ignored in test environments when no shared storage is used. And since a node-level fencing configuration depends heavily on the environment, there are different STONITH implementations available, which are too much work to cover in a single article.
