The Gluster File System (GlusterFS) is an open source distributed file system that can scale out in building-block fashion to store multiple petabytes of data.
The clustered file system pools storage servers over TCP/IP or InfiniBand Remote Direct Memory Access (RDMA), aggregating disk and memory and facilitating the centralized management of data through a unified global namespace. The software works with low-cost commodity computers.
Use cases for GlusterFS include cloud computing, streaming media and content delivery.
Installation
In my setup I am using the following configuration:
Storage servers:
Node1.linux.vs (192.168.160.1)
Node2.linux.vs (192.168.160.2)
Node3.linux.vs (192.168.160.3)
Node4.linux.vs (192.168.160.4)
Client that will use the Gluster file system:
Client.linux.vs (192.168.160.10)
On each server, create an LVM partition and format it with the XFS file system (another file system such as ext3 or ext4 can be used as well). Name resolution must also be working between all nodes.
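If DNS is not available, name resolution can be handled with static entries in /etc/hosts on every node and the client. A sketch using the hostnames and addresses from this setup (the short aliases are an assumption for convenience):

```
192.168.160.1   node1.linux.vs   node1
192.168.160.2   node2.linux.vs   node2
192.168.160.3   node3.linux.vs   node3
192.168.160.4   node4.linux.vs   node4
192.168.160.10  client.linux.vs  client
```

Verify with a quick ping of each short name from every machine before continuing.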
LVM creation process:
# fdisk /dev/sdb
# pvcreate /dev/sdb1
# vgcreate vg_storage /dev/sdb1
# lvcreate -L 10G -n lv_home vg_storage
# mkfs.xfs -i size=512 /dev/vg_storage/lv_home
# mkdir -p /mnt/data/node1
# mount /dev/vg_storage/lv_home /mnt/data/node1
Note: Follow the same process on the other storage nodes, using /mnt/data/node2, /mnt/data/node3 and /mnt/data/node4 respectively.
Installation of GlusterFS
Enable the EPEL repository
Before installing GlusterFS on the servers, we need to enable the EPEL repository on each system, using the command that matches our operating system version:
CentOS/RHEL 5, 32-bit:
# rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm
CentOS/RHEL 5, 64-bit:
# rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
CentOS/RHEL 6, 32-bit:
# rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
CentOS/RHEL 6, 64-bit:
# rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Now install GlusterFS on all nodes:
# yum install glusterfs*
The following packages were installed:
glusterfs-server-3.3.1-15.el5
glusterfs-geo-replication-3.3.1-15.el5
glusterfs-fuse-3.3.1-15.el5
glusterfs-rdma-3.3.1-15.el5
glusterfs-devel-3.3.1-15.el5
glusterfs-3.3.1-15.el5
Note: Install the Gluster packages on each storage node.
After installation, start the glusterd service on each storage node and enable it at boot:
# /etc/init.d/glusterd start
# chkconfig glusterd on
Setting up the trusted storage pool
# gluster peer status
No peers present
This command shows the status of the trusted storage pool. Since we have not yet added any storage servers, the pool is empty.
Now add the other nodes:
# gluster peer probe node2.linux.vs
# gluster peer probe node3.linux.vs
# gluster peer probe node4.linux.vs
Note: We don't need to probe node1, because we are running these commands from it; it automatically becomes part of the storage cluster.
# gluster peer status
Number of Peers: 3
Hostname: node2
Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5
State: Peer in Cluster (Connected)
Hostname: node3
Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7
State: Peer in Cluster (Connected)
Hostname: node4
Uuid: 3e0caba-9df7-4f66-8e5d-cbc348f29ff7
State: Peer in Cluster (Connected)
Setting up a GlusterFS volume
# gluster volume create Vol1 node1:/mnt/data/node1 node2:/mnt/data/node2 node3:/mnt/data/node3 node4:/mnt/data/node4
# gluster volume info
Volume Name: Vol1
Type: Distribute
Status: Created
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: node1:/mnt/data/node1
Brick2: node2:/mnt/data/node2
Brick3: node3:/mnt/data/node3
Brick4: node4:/mnt/data/node4
# gluster volume start Vol1
Our Gluster storage volume is now ready, and we can mount it on the client side.
Information about peers, nodes and volumes is kept in the glusterd state directory:
# cd /var/lib/glusterd
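The volume created above is a plain distributed volume, which offers no redundancy: if a brick fails, the files on it become unavailable. If redundancy is needed, the same four bricks could instead be combined into a distributed-replicated volume. A hedged sketch (the volume name Vol2 and the brick pairing are assumptions; bricks are taken in order, so consecutive bricks form a replica pair):

```
# Two-way replication across two pairs: each file is distributed to one
# pair and stored on both bricks of that pair.
# gluster volume create Vol2 replica 2 node1:/mnt/data/node1 node2:/mnt/data/node2 node3:/mnt/data/node3 node4:/mnt/data/node4
# gluster volume start Vol2
```

With replica 2 the usable capacity is half the raw brick capacity, but the volume survives the loss of one brick in each pair.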
Configuring a GlusterFS client to use a volume
# yum install glusterfs glusterfs-fuse
# showmount -e node1
This lists the volumes exported by the Gluster storage servers (GlusterFS 3.3 ships with a built-in NFS server, which is what showmount queries).
# mkdir /storage
# mount.glusterfs node1:/Vol1 /storage
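To make the mount persistent across reboots, an entry along these lines can be added to /etc/fstab on the client (the _netdev option is an assumption here; it tells the init scripts to delay the mount until the network is up):

```
node1:/Vol1  /storage  glusterfs  defaults,_netdev  0 0
```

Note that node1 is only used to fetch the volume description at mount time; after that the client talks to all bricks directly.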
# touch /storage/file{1,2,3,4}
The files will be spread across node1, node2, node3 and node4, because this is a distributed volume: each file is hashed to exactly one brick rather than copied to all of them.
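To see where each file actually landed, the brick directories can be inspected directly on the storage nodes. A sketch (the exact placement depends on GlusterFS's elastic hashing of the file names, so the distribution below is illustrative, not guaranteed):

```
# On node1:
# ls /mnt/data/node1
file2
# On node2:
# ls /mnt/data/node2
file4
```

Each of file1 through file4 appears in exactly one brick directory, while all four are visible together in /storage on the client.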
