
Author Topic: How to Install Replicated File System using GlusterFS on CentOS 7  (Read 4129 times)


akhilu

  • Guest



In one of our previous posts, we showed how to install a distributed file system using GlusterFS. You can refer to the link below to read more:


https://admin-ahead.com/forum/general-linux/how-to-install-distributed-file-system-'glusterfs'-and-setup/msg1516/#msg1516

In this post, we will see how to set up a replicated storage volume using GlusterFS on CentOS 7.

A replicated GlusterFS volume is similar to RAID 1: the volume maintains exact copies of the data on all of its bricks. You decide the number of replicas when creating the volume, so you need at least two bricks to create a volume with two replicas, or three bricks for a volume with three replicas.


Terminologies:

Brick: The basic unit of storage, a directory on a server in the trusted storage pool.

Volume: A logical collection of bricks.

Replicated File System: A file system that maintains exact copies of the data on multiple storage nodes and allows clients to access it over the network.

Server: The machine that hosts the actual file system in which the data is stored.

Client: A machine that mounts the volume.

glusterd: The management daemon that runs on all servers in the trusted storage pool.


Requirements:

1) For this demo, I am using five 64-bit CentOS 7 servers. Two of these will act as servers, each maintaining one replica of the volume; the other three will act as clients.


Code: [Select]
68.232.175.206    server1
45.77.110.210     server2
144.202.11.51     client1
149.28.45.206     client2
149.28.41.112     client3

2) Make sure that every server in the cluster has a free disk attached to it for creating the storage volumes.

Code: [Select]
In our case both server1 and server2 have a 10 GB free disk attached:

Disk /dev/vdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

3) GlusterFS components use DNS for name resolution. Since I do not have a DNS server in my environment, I am using the below entries in the /etc/hosts file on all servers:

Code: [Select]
68.232.175.206    server1
45.77.110.210     server2
144.202.11.51     client1
149.28.45.206     client2
149.28.41.112     client3


Install GlusterFS server on server1 and server2

1) First, we need to make sure that the EPEL repository is enabled on both server1 and server2 by running the below command:

Code: [Select]
yum install epel-release -y
If EPEL is already present, the above command will report that it already exists; otherwise it will install the EPEL repo.

2) Create the Gluster repository file on server1 and server2:

Code: [Select]
vi /etc/yum.repos.d/Gluster.repo

Add the below code to the above file:

[gluster38]
name=Gluster 3.8
baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-3.8/
gpgcheck=0
enabled=1

Once done, run the below command to verify that the EPEL and Gluster repositories are enabled:

Code: [Select]
yum repolist
Output:
Code: [Select]
repo id                                      repo name                                                                  status
base/7/x86_64                                CentOS-7 - Base                                                             9,911
epel/x86_64                                  Extra Packages for Enterprise Linux 7 - x86_64                             12,718
extras/7/x86_64                              CentOS-7 - Extras                                                             434
glusterfs                                    Glusterfs5                                                                     40
updates/7/x86_64                             CentOS-7 - Updates                                                          1,614
repolist: 24,717

3) Run the below command on server1 and server2 to install the GlusterFS server:

Code: [Select]
yum install glusterfs-server -y
4) Run the below commands to start glusterd and enable it on boot:

Code: [Select]
systemctl enable glusterd
systemctl start glusterd
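To confirm the daemon actually came up on both nodes, you can query systemd directly (a quick sanity check; `is-active` prints "active" for a running unit and `is-enabled` prints "enabled" for a unit that starts on boot):

```shell
# Quick sanity check on server1 and server2:
# is-active prints "active" once glusterd is running,
# is-enabled prints "enabled" once it will start on boot
systemctl is-active glusterd
systemctl is-enabled glusterd
```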



Creating an LVM volume on server1 and server2


We have a 10 GB disk attached to both server1 and server2 which we want to convert into a storage brick.

Run below commands on server1 and server2

1) Check the disks attached to server1 and server2:

Code: [Select]
[root@server1 ~]# lsblk

NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0     11:0    1 1024M  0 rom 
vda    253:0    0   25G  0 disk
└─vda1 253:1    0   25G  0 part /
vdb    253:16   0   10G  0 disk

You can see that we have two disks attached to the server, vda and vdb. vdb is completely free right now.

2) Now we need to create a new partition on /dev/vdb on server1 and server2:

Code: [Select]
[root@server1 ~]# fdisk /dev/vdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xb3065797.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.


3) Run the partprobe command on server1 and server2 to inform the kernel about the new partition:

Code: [Select]
[root@server1 ~]# partprobe
[root@server1 ~]#

4) Create an LVM logical volume using the full 10 GB on both server1 and server2. You can allocate space as per your needs:

Code: [Select]
[root@server1 ~]# pvcreate /dev/vdb1
  Physical volume "/dev/vdb1" successfully created.
[root@server1 ~]# vgcreate vg1 /dev/vdb1
  Volume group "vg1" successfully created
[root@server1 ~]# lvcreate -l 100%FREE -n lv1 vg1
  Logical volume "lv1" created.


Mounting the LVM volume on server1 and server2 at /bricks/brick1

1) Create the brick directory on server1 and server2:

Code: [Select]
[root@server1 ~]# mkdir -p /bricks/brick1

2) Create an XFS file system on the logical volume:

Code: [Select]
[root@server1 ~]# mkfs.xfs /dev/mapper/vg1-lv1
meta-data=/dev/mapper/vg1-lv1    isize=512    agcount=4, agsize=655104 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2620416, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

3) Define the mount point in the fstab file

Code: [Select]
[root@server1 ~]# vi /etc/fstab
/dev/mapper/vg1-lv1                       /bricks/brick1          xfs     defaults        0 0

4) Mount the LVM to the brick folder

Code: [Select]
[root@server2 ~]# mount -a
[root@server2 ~]#

5) Check if the volume is properly mounted

Code: [Select]
[root@server2 ~]# df -Th
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/vda1           ext4       25G  1.3G   23G   6% /
devtmpfs            devtmpfs  486M     0  486M   0% /dev
tmpfs               tmpfs     496M     0  496M   0% /dev/shm
tmpfs               tmpfs     496M   13M  483M   3% /run
tmpfs               tmpfs     496M     0  496M   0% /sys/fs/cgroup
tmpfs               tmpfs     100M     0  100M   0% /run/user/0
/dev/mapper/vg1-lv1 xfs        10G   33M   10G   1% /bricks/brick1


Create the trusted storage pool in GlusterFS

Configure Firewall:

You need to either disable the firewall or configure it to allow all connections within the cluster.

By default, glusterd listens on tcp/24007, but opening that port alone is not enough on the gluster nodes. Each time you add a brick, it opens a new port (which you can see with "gluster volume status").

Code: [Select]
# Disable FirewallD
systemctl stop firewalld
systemctl disable firewalld

OR

# Run the below command on any node that should accept all traffic coming from the given source IP
firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="<ipaddress>" accept'
firewall-cmd --reload
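If you prefer to keep firewalld enabled without trusting a whole source address, you can instead open just the ports Gluster uses. The ranges below are the usual defaults for the 3.x series (24007-24008 for the management daemon, one port per brick starting at 49152); verify the actual brick ports on your cluster with "gluster volume status":

```shell
# Open the glusterd management ports and the default brick port range
# (assumed GlusterFS 3.x defaults -- check "gluster volume status" for the real brick ports)
firewall-cmd --zone=public --permanent --add-port=24007-24008/tcp
firewall-cmd --zone=public --permanent --add-port=49152-49251/tcp
firewall-cmd --reload
```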

1) Here I will run all GlusterFS commands on the server1 node. Probe server2 to add it to the trusted pool:

Code: [Select]
[root@server1 ~]# gluster peer probe server2
peer probe: success.

2) Verify the status of the trusted storage pool.

Code: [Select]
[root@server1 ~]# gluster peer status
Number of Peers: 1

Hostname: server2
Uuid: 4040a194-a30b-43f2-9e8a-a896cd92c37d
State: Peer in Cluster (Connected)

3) List the storage pool.

Code: [Select]
[root@server1 ~]# gluster pool list
UUID                                  Hostname   State
4040a194-a30b-43f2-9e8a-a896cd92c37d  server2    Connected
22479d4a-d585-4f0c-ad35-cbd5444d3bbd  localhost  Connected

Set up the gluster volume

1) Create a directory for the volume on the mounted file systems on both server1 and server2. In my case I am creating a directory named vol1:

Code: [Select]
[root@server2 ~]# mkdir /bricks/brick1/vol1
[root@server2 ~]#

2) Since we are going to use a replicated volume, create the volume named “vol1” with two replicas from server1:

Code: [Select]
[root@server1 ~]# gluster volume create vol1 replica 2 server1:/bricks/brick1/vol1 server2:/bricks/brick1/vol1
volume create: vol1: success: please start the volume to access data

3) Start the volume.

Code: [Select]
[root@server1 ~]# gluster volume start vol1
volume start: vol1: success

4) Check the status of the created volume on server1 and server2

Code: [Select]
[root@server1 ~]# gluster volume info vol1
 
Volume Name: vol1
Type: Replicate
Volume ID: 9b7939f5-01a8-487a-8c3e-9dcd734553d5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: server1:/bricks/brick1/vol1
Brick2: server2:/bricks/brick1/vol1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
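In addition to "gluster volume info", you can check the runtime state of the volume. "gluster volume status" lists each brick process, its PID, and the TCP port it listens on (the per-brick ports mentioned in the firewall section):

```shell
# Show per-brick process state, PID and listening port for vol1
gluster volume status vol1
```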


Set up the GlusterFS clients

1) On all client servers client1, client2 and client3, run the below command to install the GlusterFS client packages that support mounting GlusterFS file systems (on CentOS 7 the FUSE client is packaged as glusterfs-fuse):

Code: [Select]
yum install -y glusterfs glusterfs-fuse
2) Create a directory to mount the GlusterFS filesystem

Code: [Select]
mkdir -p /mnt/glusterfs
3) Now mount the gluster file system on the above directory on all client servers by adding the below entry to /etc/fstab:

Code: [Select]
[root@client2 ~]# vi /etc/fstab
server1:/vol1                             /mnt/glusterfs          glusterfs defaults,_netdev 0 0

4) Run the below command on all client servers to mount the gluster volume:

Code: [Select]
[root@client1 ~]# mount -a
[root@client1 ~]#

5) Check if the mount point has been properly mounted.

Code: [Select]
[root@client1 ~]# df -Th
Filesystem     Type            Size  Used Avail Use% Mounted on
/dev/vda1      ext4             25G  1.3G   23G   6% /
devtmpfs       devtmpfs        486M     0  486M   0% /dev
tmpfs          tmpfs           496M     0  496M   0% /dev/shm
tmpfs          tmpfs           496M   13M  483M   3% /run
tmpfs          tmpfs           496M     0  496M   0% /sys/fs/cgroup
tmpfs          tmpfs           100M     0  100M   0% /run/user/0
server1:/vol1  fuse.glusterfs   10G  135M  9.9G   2% /mnt/glusterfs


Testing that data replication works across the clients and servers

1) Since we have mounted the gluster volume at /mnt/glusterfs on the client systems, move into /mnt/glusterfs and create some test files:

Code: [Select]
[root@client1 glusterfs]# touch file{1..5}
[root@client1 glusterfs]# ll
total 0
-rw-r--r-- 1 root root 0 Nov 25 06:40 file1
-rw-r--r-- 1 root root 0 Nov 25 06:40 file2
-rw-r--r-- 1 root root 0 Nov 25 06:40 file3
-rw-r--r-- 1 root root 0 Nov 25 06:40 file4
-rw-r--r-- 1 root root 0 Nov 25 06:40 file5


2) Now go to the server nodes and check whether these files are present in the brick directory we created, /bricks/brick1/vol1/:

Code: [Select]
[root@server1 vol1]# ll
total 0
-rw-r--r-- 2 root root 0 Nov 25 06:40 file1
-rw-r--r-- 2 root root 0 Nov 25 06:40 file2
-rw-r--r-- 2 root root 0 Nov 25 06:40 file3
-rw-r--r-- 2 root root 0 Nov 25 06:40 file4
-rw-r--r-- 2 root root 0 Nov 25 06:40 file5

3) The files should also be present on server2 and visible from the client2 and client3 mounts:

Code: [Select]
[root@server2 vol1]# ll
total 0
-rw-r--r-- 2 root root 0 Nov 25 06:40 file1
-rw-r--r-- 2 root root 0 Nov 25 06:40 file2
-rw-r--r-- 2 root root 0 Nov 25 06:40 file3
-rw-r--r-- 2 root root 0 Nov 25 06:40 file4
-rw-r--r-- 2 root root 0 Nov 25 06:40 file5

Code: [Select]
[root@client2 glusterfs]# ll
total 0
-rw-r--r-- 1 root root 0 Nov 25 06:40 file1
-rw-r--r-- 1 root root 0 Nov 25 06:40 file2
-rw-r--r-- 1 root root 0 Nov 25 06:40 file3
-rw-r--r-- 1 root root 0 Nov 25 06:40 file4
-rw-r--r-- 1 root root 0 Nov 25 06:40 file5
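You can also ask Gluster itself whether the two replicas are in sync. "gluster volume heal <volname> info" lists files pending replication on each brick; "Number of entries: 0" for both bricks means the replicas are consistent:

```shell
# Run on server1: list files that still need to be healed (replicated) per brick
gluster volume heal vol1 info
```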

This type of setup is important for data redundancy. It is also useful for websites behind a load balancer, where multiple web nodes need a consistent shared document root.

I hope you find this information useful :) Thank you for reading.
« Last Edit: November 25, 2018, 01:10:16 pm by akhilu »