DRBD (Distributed Replicated Block Device) lets you build the equivalent of a software RAID1 over a local network. This provides high availability and resource sharing on a cluster without a shared disk array.
Here we will install DRBD8, with the goal of implementing a cluster filesystem (see the OCFS2 documentation); the dual-primary mode this requires is not supported by DRBD7. We'll use the DRBD8 packages from the Debian repositories and work on a 2-node cluster.
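As a rough sketch (package names vary between Debian releases; on older kernels the module had to be built from drbd8-module-source with module-assistant), the installation boils down to:

# Install the DRBD8 userland tools from the Debian repositories
aptitude install drbd8-utils
# Load the kernel module (shipped with the mainline kernel since 2.6.33)
modprobe drbd
# Check that the module is loaded
lsmod | grep drbd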
drbd.conf
# You can find an example in /usr/share/doc/drbd.../drbd.conf.example

include "drbd.d/global_common.conf";
include "drbd.d/*.res";
I didn't modify it.
global_common.conf
This is the default file; it can contain host configurations, but above all it lets you define global options shared by all your DRBD resources (the common section):
# Global configuration
global {
    # Do not report statistics usage to LinBit
    usage-count no;
}

# All resources inherit the options set in this section
common {
    # C (Synchronous replication protocol)
    protocol C;

    startup {
        # Wait for connection timeout (in seconds)
        wfc-timeout 1;
        # Wait for connection timeout, if this node was a degraded cluster (in seconds)
        degr-wfc-timeout 1;
    }

    net {
        # Maximum number of requests to be allocated by DRBD
        max-buffers 8192;
        # The highest number of data blocks between two write barriers
        max-epoch-size 8192;
        # The size of the TCP socket send buffer
        sndbuf-size 512k;
        # How often the I/O subsystem's controller is forced to process pending I/O requests
        unplug-watermark 8192;
        # The HMAC algorithm to enable peer authentication
        cram-hmac-alg sha1;
        # The shared secret used in peer authentication
        shared-secret "xxx";
        # Split brain: resource is not in the Primary role on any host
        after-sb-0pri disconnect;
        # Split brain: resource is in the Primary role on one host
        after-sb-1pri disconnect;
        # Split brain: resource is in the Primary role on both hosts
        after-sb-2pri disconnect;
        # Helps to solve cases where the outcome of the resync decision is incompatible with the current role assignment
        rr-conflict disconnect;
    }

    handlers {
        # If the node is primary, degraded, and the local copy of the data is inconsistent
        pri-on-incon-degr "echo Current node is primary, degraded and the local copy of the data is inconsistent | wall ";
    }

    disk {
        # The node downgrades the disk status to inconsistent on I/O errors
        on-io-error pass_on;
        # Disable protecting data on power failure (done by hardware)
        no-disk-barrier;
        # Disable the backing device's support for disk flushes
        no-disk-flushes;
        # Do not let write requests drain before write requests of a new reordering domain are issued
        no-disk-drain;
        # Disable the use of disk flushes and barrier BIOs when accessing the meta-data device
        no-md-flushes;
    }

    syncer {
        # The maximum bandwidth a resource uses for background re-synchronization
        rate 500M;
        # Control how big the hot area (= active set) can get
        al-extents 3833;
    }
}
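As a quick sanity check, drbdadm can parse and dump the merged configuration; any syntax error in the files above is reported here:

# Parse and display the full configuration
drbdadm dump all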
The resource itself is declared in a separate file picked up by the include above (for example /etc/drbd.d/r0.res):

resource r0 {
    # Node 1
    on srv1 {
        device /dev/drbd0;
        # Disk containing the drbd partition
        disk /dev/mapper/datas-drbd;
        # IP address of this host
        address 192.168.100.1:7788;
        # Store metadata on the same device
        meta-disk internal;
    }
    # Node 2
    on srv2 {
        device /dev/drbd0;
        disk /dev/mapper/lvm-drbd;
        address 192.168.20.4:7788;
        meta-disk internal;
    }
}
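The usual DRBD8 bring-up sequence, sketched here assuming the resource name r0 from above, is what starts the initial synchronization:

# On both nodes: write the metadata and bring the resource up
drbdadm create-md r0
drbdadm up r0

# On the node holding the reference data only: force it to primary,
# which starts the initial synchronization towards the peer
drbdadm -- --overwrite-data-of-peer primary r0

# Watch the synchronization progress
cat /proc/drbd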
Once the synchronization is complete, DRBD is installed and properly configured. You now need to format the device /dev/drbd0 with a filesystem: ext3 for an active/passive setup, or a cluster filesystem such as OCFS2 (or GFS2, among others) if you want active/active.
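For the active/passive case, a minimal sketch (the mount point /mnt/drbd is arbitrary, and the commands must run on the primary node, since only it may write to the device):

# On the primary node only: format the DRBD device with ext3
mkfs.ext3 /dev/drbd0

# Mount it where the service expects its data
mkdir -p /mnt/drbd
mount /dev/drbd0 /mnt/drbd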
Only a node in the Primary role can mount and access the data on the DRBD volume. When DRBD works with Heartbeat in CRM mode, if the primary node goes down, the cluster is able to promote the secondary node to primary. When the old primary comes back up, it resynchronizes and becomes the secondary in turn.
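What Heartbeat automates here can also be done by hand; a sketch of a manual failover, assuming the resource r0 and the mount point /mnt/drbd used earlier:

# On the old primary (if still reachable): unmount and demote
umount /mnt/drbd
drbdadm secondary r0

# On the other node: promote the resource and mount the data
drbdadm primary r0
mount /dev/drbd0 /mnt/drbd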