Backing up information is vital on a computer network, which is why fast, efficient and scalable solutions are required to meet this objective.
DRBD is software that replicates the data of a partition between several machines over the network, which makes it excellent for always having an up-to-date copy of the information.
Installing DRBD
1.- Installing the DRBD packages on both servers
The first thing to do is to install DRBD on both nodes. To do this we need to enable an external repository called ELRepo, from which we will download the necessary packages. We also need to be the root user, so open a terminal, type sudo -i and enter your password to get admin permissions.
:~# sudo -i
Now we proceed to enable the repository:
:~# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
We install it:
:~# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
With the repository added, we can now install the DRBD packages.
:~# yum install drbd90-utils kmod-drbd90
After the installation of the packages is finished, make sure that the drbd module is loaded into the kernel. We check this with the following command:
:~# lsmod | grep -i drbd
In this case, we notice that it was not loaded. To solve this, we execute these two commands:
:~# echo drbd > /etc/modules-load.d/drbd.conf
:~# modprobe drbd
The first one makes the module load at system boot, and the second one enables it for the current session. If we run the lsmod check again, the module now appears.
2.- Configure DRBD
Once the packages are installed correctly, we must modify their configuration. First we’ll back up the original file. We will do this on both nodes:
:~# mv /etc/drbd.d/global_common.conf /etc/drbd.d/global_common.conf.bak
Once the initial configuration file is backed up, we will create a new one:
:~# nano /etc/drbd.d/global_common.conf
And we’ll put the following in it:
global {
  usage-count no;
}
common {
  net {
    protocol C;
  }
}
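Protocol C is DRBD's fully synchronous replication mode: a write is only reported as complete once it has reached the peer's disk, which is the safest choice for a backup scenario. For reference, DRBD also offers asynchronous replication; a variant of the same net section selecting protocol A would look like the sketch below, but we stick with C in this guide:

```
common {
  net {
    protocol A;   # asynchronous: a write completes once it is handed
                  # to the local TCP send buffer; faster, but the most
                  # recent writes can be lost if the primary crashes
  }
}
```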
Next we will create a new file for the new resource that will be drbd1 in this case. The file will be called drbd1.res.
:~# nano /etc/drbd.d/drbd1.res
We will add the following:
resource drbd1 {
  disk /dev/sdb;
  device /dev/drbd1;
  meta-disk internal;
  on osradar {
    address 192.168.1.9:7789;
  }
  on osradar2 {
    address 192.168.1.12:7789;
  }
}
A brief explanation: disk is the physical disk (or partition) whose data will be replicated; device is the virtual DRBD block device that applications will actually use; each on section is named after a node's hostname; address is that node's IP address, and 7789 is the port over which the nodes will communicate.
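Because DRBD matches each on section against the node's hostname, both machines must be able to resolve each other's names. If DNS is not available, a minimal sketch of the /etc/hosts entries for both nodes (using the IPs from the resource file; adjust them to your network) would be:

```
192.168.1.9    osradar
192.168.1.12   osradar2
```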
The next step is to initialize the metadata and create the resource on each of the nodes:
:~# drbdadm create-md drbd1
Later we enable the drbd daemon on both nodes:
:~# systemctl start drbd
:~# systemctl enable drbd
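With the daemon running, you can inspect the replication state at any time. A quick check, guarded so it only calls drbdadm on a machine where the tool is actually installed (drbd90-utils syntax):

```shell
# Show the role, disk state and connection state of the resource.
# Right after bring-up, both sides typically report Inconsistent
# until the first full synchronization has completed.
if command -v drbdadm >/dev/null 2>&1; then
  drbdadm status drbd1
else
  echo "drbdadm not found; run this on one of the DRBD nodes." >&2
fi
```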
Now we restart the resource on both nodes; our primary will be “osradar”, that is, the first node. On the first node:
:~# drbdadm down drbd1
:~# drbdadm up drbd1
On the second node:
:~# drbdadm down drbd1
:~# drbdadm up drbd1
To promote the first node to primary for the first time, run the following on that node (the --force flag tells DRBD to treat this node's still-Inconsistent data as the authoritative copy and start the initial sync):
:~# drbdadm primary drbd1 --force
Next we must configure the firewall to accept connections through the drbd port.
:~# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.12" port port="7789" protocol="tcp" accept'
Remember to modify the source address on each node: if you are on node1, the command must contain the IP of node2, and vice versa.
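The swap can also be scripted so each node picks the right source address automatically. A sketch using a hypothetical peer_ip helper, with the hostnames and IPs taken from drbd1.res:

```shell
# Hypothetical helper: map this node's hostname to its peer's IP
# (addresses taken from drbd1.res; adjust for your network).
peer_ip() {
  case "$1" in
    osradar)  echo 192.168.1.12 ;;
    osradar2) echo 192.168.1.9  ;;
    *)        return 1 ;;
  esac
}

# Print the firewall command for this node; fall back to a default
# if the hostname is not one of the two DRBD nodes.
PEER=$(peer_ip "$(hostname -s)") || PEER=192.168.1.12
echo "firewall-cmd --permanent --add-rich-rule='rule family=\"ipv4\" source address=\"$PEER\" port port=\"7789\" protocol=\"tcp\" accept'"
```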
and reload the firewall:
:~# firewall-cmd --reload
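Finally, to actually use the replicated storage you format and mount the DRBD device on the primary node. A sketch assuming an ext4 filesystem and a /mnt/drbd mount point (both are our choices, not mandated by DRBD), guarded so it only acts on a machine where /dev/drbd1 exists:

```shell
# On the primary node only: the filesystem goes on the DRBD device,
# never directly on the backing disk (/dev/sdb).
if [ -b /dev/drbd1 ]; then
  mkfs.ext4 /dev/drbd1
  mkdir -p /mnt/drbd
  mount /dev/drbd1 /mnt/drbd
else
  echo "/dev/drbd1 not present; run this on the DRBD primary node." >&2
fi
```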
With this, we'll have a DRBD cluster up and running: changes made to the first node's partition will be replicated over the network to the second one.
Please share this article through your social networks.