GlusterFS is an open-source distributed parallel file system capable of scaling to several petabytes. It is a cluster/network file system that comes in two components: a server and a client. The storage server (or each server in a cluster) runs glusterfsd, and clients use the mount command or the glusterfs client to mount the exported file systems through FUSE.

The goal here is to run two servers that fully replicate part of a filesystem.

Be careful not to run this kind of architecture over the Internet, as performance will be catastrophic: whenever a node wants read access to a file, it must first contact all the other nodes to check for discrepancies, and only then does it allow the read, which can take a long time depending on the architecture.
Server

Here is the configuration to apply on each server:

```
### file: server-volume.vol.sample

#####################################
###  GlusterFS Server Volume File  ##
#####################################

#### CONFIG FILE RULES:
### "#" is comment character.
### - Config file is case sensitive
### - Options within a volume block can be in any order.
### - Spaces or tabs are used as delimiter within a line.
### - Multiple values to options will be : delimited.
### - Each option should end within a line.
### - Missing or commented fields will assume default values.
### - Blank/commented lines are allowed.
### - Sub-volumes should already be defined above before referring.

volume posix1
  type storage/posix
  option directory /var/www
end-volume

volume locks1
  type features/locks
  subvolumes posix1
end-volume

volume brick1
  type performance/io-threads
  option thread-count 8
  subvolumes locks1
end-volume

volume server-tcp
  type protocol/server
  option transport-type tcp
  option auth.addr.brick1.allow *
  option transport.socket.listen-port 6996
  option transport.socket.nodelay on
  subvolumes brick1
end-volume
```
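Once this file is in place on both nodes, the server daemon just needs to be pointed at it. A minimal sketch, assuming the volfile is stored as /etc/glusterfs/server-volume.vol (the path is an assumption, adjust it to your layout):

```bash
# Start the GlusterFS server daemon with the volume file above
# (the volfile path is an assumption; adjust to where you keep it)
glusterfsd -f /etc/glusterfs/server-volume.vol
```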
Client
For the client part, we tell it that we want "RAID 1" style replication. Here is the configuration to apply on the "ed" node:
```
### file: client-volume.vol.sample

#####################################
###  GlusterFS Client Volume File  ##
#####################################

#### CONFIG FILE RULES:
### "#" is comment character.
### - Config file is case sensitive
### - Options within a volume block can be in any order.
### - Spaces or tabs are used as delimiter within a line.
### - Each option should end within a line.
### - Missing or commented fields will assume default values.
### - Blank/commented lines are allowed.
### - Sub-volumes should already be defined above before referring.

# RAID 1
# TRANSPORT-TYPE tcp
volume ed-1
  type protocol/client
  option transport-type tcp
  option remote-host ed
  option transport.socket.nodelay on
  option transport.remote-port 6996
  option remote-subvolume brick1
end-volume

volume rafiki-1
  type protocol/client
  option transport-type tcp
  option remote-host rafiki
  option transport.socket.nodelay on
  option transport.remote-port 6996
  option remote-subvolume brick1
end-volume

volume mirror-0
  type cluster/replicate
  subvolumes rafiki-1 ed-1
end-volume

volume readahead
  type performance/read-ahead
  option page-count 4
  subvolumes mirror-0
end-volume

volume iocache
  type performance/io-cache
  # cache-size = 1/5 of total RAM (MemTotal is in kB; 5120 = 1024 * 5)
  option cache-size `echo $(($(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120))`MB
  option cache-timeout 1
  subvolumes readahead
end-volume

volume quickread
  type performance/quick-read
  option cache-timeout 1
  option max-file-size 64kB
  subvolumes iocache
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 4MB
  subvolumes quickread
end-volume

volume statprefetch
  type performance/stat-prefetch
  subvolumes writebehind
end-volume
```
You now have access to your GlusterFS mount point in /var/www.
FAQ
Force Client Synchronization
If you want to force data synchronization for a client, it's simple. Just go to the directory where the GlusterFS share is mounted (here /mnt/glusterfs), then perform a full directory traversal, as shown below.
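One common way to do this traversal is the find/stat one-liner below: stat'ing every file makes the replicate translator check each one and heal any discrepancy. The mount point /mnt/glusterfs is the one assumed above:

```bash
# Walk the whole tree and stat every file to trigger self-heal
find /mnt/glusterfs -noleaf -print0 | xargs --null stat > /dev/null
```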
Permission Problems in an OpenVZ VE

If you want to run GlusterFS in a VE (OpenVZ container), you may encounter permission problems with the FUSE mount. In my case, this happened in an OpenVZ container, and the fix has to be applied on the host machine, not in the VE. To work around the problem, we'll create the fuse device from the host in the VE in question and give the VE admin rights (not great in terms of security, but no choice). Warning: this requires stopping the VE, applying the configuration, then restarting it.
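A minimal sketch with vzctl, assuming a container ID of 101 (the VEID is a placeholder; use your own):

```bash
# On the OpenVZ host (not inside the VE); stop the container first
vzctl stop 101
# Create the fuse device node inside the VE
vzctl set 101 --devnodes fuse:rw --save
# Grant admin rights (sys_admin) so the VE can perform FUSE mounts
# (this is the security trade-off mentioned above)
vzctl set 101 --capability sys_admin:on --save
vzctl start 101
```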