
Recent Posts

General Linux / SSH notification mail alert in Centos
« Last post by arunlalpr on December 18, 2018, 02:43:06 pm »
SSH notification mail alert in Centos

Open the "~/.bash_profile" file:

vi ~/.bash_profile

Add the following content, replacing the "" with the email address at which you wish to receive alerts.
# Email admin when user logs in as root
rootalert() {
  echo 'ALERT - Root Shell Login'
  echo 'Server: '`hostname`
  echo 'Time: '`date`
  echo 'User: '`who | awk '{ print $1 }'`
  echo 'TTY: '`who | awk '{ print $2 }'`
  echo 'Source: '`who | awk '{ print $6 }' | /bin/cut -d '(' -f 2 | /bin/cut -d ')' -f 1`
  echo 'This email is an alert automatically created by your server telling you that someone, even if it is you, logged into SSH as the root user.  If you or someone you know and trust logged in as root, disregard this email.  If you or someone you know and trust did not login to the server as root, then you may have a hack attempt in progress on your server.'
}

rootalert | mail -s "Alert: Root Login [`hostname`]" ""
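The 'Source:' field parses the remote host out of `who` output; the extraction pipeline can be checked in isolation (the sample line below is fabricated, not real login data):

```shell
# Simulated `who` output line: user, tty, date, time, and source host in parentheses
line='root     pts/0        Dec 18 14:40 (203.0.113.5)'
# Same pipeline as in the rootalert function: take field 6, strip the parentheses
echo "$line" | awk '{ print $6 }' | cut -d '(' -f 2 | cut -d ')' -f 1
# → 203.0.113.5
```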

General Linux / esac in shell scripting
« Last post by arunlalpr on December 18, 2018, 10:29:08 am »
esac in shell scripting

Just as "fi" closes an "if" block, "esac" closes a "case" block: it is "case" spelled backward.




case "$FRUIT" in
   "apple")  echo "Apple pie is quite tasty." ;;
   "banana") echo "I like banana nut bread." ;;
   "kiwi")   echo "New Zealand is famous for kiwi." ;;
esac


Happy scripting.
cPanel / Disable ssl via ssh (unable to login into WHM)
« Last post by nidhinjo on December 18, 2018, 08:19:19 am »
If you are unable to log into WHM because of an SSL error such as "An error occurred during a connection to . Peer's Certificate has been revoked. (Error code: sec_error_revoked_certificate)", you can disable the relevant options from the shell. Please follow the steps below:

1.) SSH to your server as root

2.) Open the configuration file:

# vi /var/cpanel/cpanel.config

and set the following options to 0 (zero):


Code:

3.) Save the file and exit.
Error: Unable to update domain data: MySQL query failed: Unknown column 'syncRecords' in 'field list'

This error is caused by the missing column "syncRecords" in the table "dns_zone" of the "psa" database, which mainly occurs during Plesk updates.

In order to solve the issue, we need to add the missing columns. In our case, we added "syncRecords" and "syncSoa".

Kindly follow the below steps.

1) Access MySQL:

MYSQL_PWD=`cat /etc/psa/.psa.shadow` mysql -u admin psa

2) Select the database psa:

mysql> use psa;

3) Then alter the table "dns_zone" to add the missing columns:

ALTER TABLE dns_zone ADD COLUMN syncRecords enum('true','false','skip');

ALTER TABLE dns_zone ADD COLUMN syncSoa enum('true','false','skip');
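To confirm the columns exist afterwards, a quick check (our addition, not from the original post):

```sql
SHOW COLUMNS FROM dns_zone LIKE 'sync%';
```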

Please try the same from your end and let us know the results :)
General Linux / ssh failed Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
« Last post by nandulalr on December 13, 2018, 12:46:10 am »
ssh failed Permission denied (publickey,gssapi-keyex,gssapi-with-mic)

If you are facing issues with SSH login and seeing the error message "Permission denied (publickey,gssapi-keyex,gssapi-with-mic)", try the below steps:

You can edit /etc/ssh/sshd_config:

Code:
cp -p /etc/ssh/sshd_config /etc/ssh/sshd_config.back
vim /etc/ssh/sshd_config

Change "PasswordAuthentication no" to "PasswordAuthentication yes"

Restart the sshd service:

Code:
service sshd restart

Then try to log in again.
General Linux / Yum gets stuck
« Last post by nandulalr on December 13, 2018, 12:31:05 am »
Yum gets stuck

Sometimes yum gets stuck while installing or updating packages. No error message is displayed; it just freezes after a couple of lines of output.

There could be several reasons why it happens, and the fixes are pretty simple.

1. Check whether DNS resolution works on your server. If you get a response when resolving a hostname, your DNS is working fine. If not, check the /etc/resolv.conf file and add nameserver entries to it (if they do not already exist):
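The resolver entries were lost from the original post; as an illustration only (not necessarily what the author used), Google's public resolvers are a common choice for /etc/resolv.conf:

```
nameserver 8.8.8.8
nameserver 8.8.4.4
```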

Now try again. If it still does not work, move on to step 2.

2. Clean and rebuild the RPM databases. Keep a backup before removing the files.

Code:
rm -f /var/lib/rpm/__*
rpm --rebuilddb -v -v
yum clean all

It should work now. If you’re still facing issues, it’s probably the firewall. Check your firewall rules and make sure that the server can contact the remote repositories.
General Linux / How to check Magento Version
« Last post by arunlalpr on November 30, 2018, 09:41:40 pm »
How to check Magento Version

Go to your Magento installation directory and run the following command.

php -r "require 'app/Mage.php'; echo Mage::getVersion(); "

In our case it's

Kindly check and let us know the result ;)

General Linux / FATAL ERROR: Cannot decode data link type 113
« Last post by arunlalpr on November 29, 2018, 02:37:00 am »
When the interface is not eth0, snort can fail with the error below, and we need to rebuild it with an additional configure option.


#snort -A console -q -u snort -g snort -c /etc/snort/snort.conf -i venet0
FATAL ERROR: Cannot decode data link type 113

Passing "--enable-non-ether-decoders" to configure (so that non-Ethernet link types can be decoded) solves the issue:

cd ~/snort_src
tar -xvzf snort-
cd snort-
./configure --enable-sourcefire --enable-non-ether-decoders
make
sudo make install

That's it :)
Plesk / Error: Client denied by server configuration
« Last post by vaishakp on November 28, 2018, 10:00:51 pm »
The following error can be found in /var/www/vhosts/

[access_compat:error] [pid 13317:tid 140073563543296] [client] AH01797: client denied by server configuration: /var/www/vhosts/


Apache 2.4 is installed, but the .htaccess file contains authorization and authentication entries using syntax from version 2.2.

In Plesk Interface:

1. Log into Plesk.
2. Go to Domains > > File Manager and click on the name of the file .htaccess.
3. Replace the 2.2-style directives:

    Order allow,deny
    Allow from all

with the 2.4 directive:

    Require all granted

Click OK or Apply to save the changes.



Via SSH:

1. Connect to the server using SSH.
2. Open the file /var/www/vhosts/ for editing.
3. Replace the entries:

   Order allow,deny
   Allow from all

with:

   Require all granted

General Linux / How to Install a Replicated File System using GlusterFS on CentOS 7
« Last post by akhilu on November 25, 2018, 12:16:35 pm »

In one of our previous posts, we showed how to install a distributed file system using GlusterFS. You can refer to the link below to read more:'glusterfs'-and-setup/msg1516/#msg1516

In this post, we will see how to set up a replicated storage volume using GlusterFS on CentOS 7.

A replicated GlusterFS volume is like RAID 1: the volume maintains exact copies of the data on all bricks. You decide the number of replicas when creating the volume, so you need at least two bricks for a volume with two replicas, or three bricks for a volume with three replicas.


Brick: the basic unit of storage (a directory) on a server in the trusted storage pool.

Volume: a logical collection of bricks.

Replicated File System: a file system in which data is spread across multiple storage nodes and clients can access it over the network.

Server: the machine that hosts the actual file system in which the data is stored.

Client: the machine that mounts the volume.

glusterd: the daemon that runs on all servers in the trusted storage pool.


1) For this demo, I am using five CentOS 7 servers, all 64-bit. Two of these will act as servers and will each maintain a replica of the volume.

Code:
server1
server2
client1
client2
client3

2) Make sure that all the servers in the cluster have a free disk attached for creating the storage volumes.

In our case, both server1 and server2 have a 10 GB free disk attached:

Code:

Disk /dev/vdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

3) GlusterFS components use DNS for name resolution. Since I do not have a DNS server in my environment, I am using the below entries in the /etc/hosts file on all servers:

Code:
server1
server2
client1
client2

Install GlusterFS server on server1 and server2

1) First, make sure that the EPEL repository is enabled on both server1 and server2 by running the below command:

Code:
yum install epel-release -y
If EPEL is already present, the above command will print a message saying it already exists; otherwise it will install the EPEL repo.

2) Create Gluster repository on server1 and server2

Code:
vi /etc/yum.repos.d/Gluster.repo

Add the below content to the file.

name=Gluster 3.8
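The repo definition above is incomplete as posted; a typical yum repo stanza has this shape (the baseurl below is a placeholder, not the real Gluster repository URL):

```
[gluster]
name=Gluster 3.8
baseurl=<gluster-repo-url>
enabled=1
gpgcheck=0
```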

Once done, run the below command to check whether the EPEL and Gluster repositories are enabled:

Code:
yum repolist
Code:
repo id                                      repo name                                                                  status
base/7/x86_64                                CentOS-7 - Base                                                             9,911
epel/x86_64                                  Extra Packages for Enterprise Linux 7 - x86_64                             12,718
extras/7/x86_64                              CentOS-7 - Extras                                                             434
glusterfs                                    Glusterfs5                                                                     40
updates/7/x86_64                             CentOS-7 - Updates                                                          1,614
repolist: 24,717

3) Run the below command on server1 and server2 to install Gluster:

Code:
yum install glusterfs-server -y
4) Run the below commands to start glusterd and enable it on boot:

Code:
systemctl enable glusterd
systemctl start glusterd

Creating LVM on server1 and server2

We have a 10 GB disk attached on both server1 and server2 which we want to convert to storage brick.

Run below commands on server1 and server2

1) Check the disks attached to server1 and server2:

Code:
[root@server1 ~]# lsblk

sr0     11:0    1 1024M  0 rom 
vda    253:0    0   25G  0 disk
└─vda1 253:1    0   25G  0 part /
vdb    253:16   0   10G  0 disk

You can see that we have two disks attached to the server, vda and vdb; vdb is completely free right now.

2) Now we need to create a new partition on /dev/vdb on server1 and server2:

Code:
[root@server1 ~]# fdisk /dev/vdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xb3065797.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

3) Run the partprobe command on server1 and server2 to inform the kernel about the new partition:

Code:
[root@server1 ~]# partprobe
[root@server1 ~]#

4) Create an LVM logical volume using the 10 GB of space on both server1 and server2. You can allocate space as per your needs:

Code:
[root@server1 ~]# pvcreate /dev/vdb1
  Physical volume "/dev/vdb1" successfully created.
[root@server1 ~]# vgcreate vg1 /dev/vdb1
  Volume group "vg1" successfully created
[root@server1 ~]# lvcreate -l 100%FREE -n lv1 vg1
  Logical volume "lv1" created.

Mounting LVM on server1 and server2 onto the /bricks/brick1 folder

1) Creating brick directory on server1 and server2

Code:
[root@server1 ~]# mkdir -p /bricks/brick1

2) Create an XFS file system on the LVM volume:

Code:
[root@server1 ~]# mkfs.xfs /dev/mapper/vg1-lv1
meta-data=/dev/mapper/vg1-lv1    isize=512    agcount=4, agsize=655104 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2620416, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

3) Define the mount point in the fstab file

Code:
[root@server1 ~]# vi /etc/fstab
/dev/mapper/vg1-lv1                       /bricks/brick1          xfs     defaults        0 0

4) Mount the LVM to the brick folder

Code:
[root@server2 ~]# mount -a
[root@server2 ~]#

5) Check if the volume is properly mounted

Code:
[root@server2 ~]# df -Th
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/vda1           ext4       25G  1.3G   23G   6% /
devtmpfs            devtmpfs  486M     0  486M   0% /dev
tmpfs               tmpfs     496M     0  496M   0% /dev/shm
tmpfs               tmpfs     496M   13M  483M   3% /run
tmpfs               tmpfs     496M     0  496M   0% /sys/fs/cgroup
tmpfs               tmpfs     100M     0  100M   0% /run/user/0
/dev/mapper/vg1-lv1 xfs        10G   33M   10G   1% /bricks/brick1

Create the trusted storage pool in glusterfs

Configure Firewall:

You need to either disable the firewall or configure it to allow all connections within the cluster.

By default, glusterd listens on tcp/24007, but opening that port alone is not enough on the gluster nodes. Each time you add a brick, a new port is opened (which you can see with "gluster volume status").

Code:
# Disable FirewallD
systemctl stop firewalld
systemctl disable firewalld


# Or run the below commands on a node on which you want to accept all traffic coming from a given source IP
firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="<ipaddress>" accept'
firewall-cmd --reload

1) Here, I will run all GlusterFS commands on the server1 node.

Code:
[root@server1 ~]# gluster peer probe server2
peer probe: success.

2) Verify the status of the trusted storage pool.

Code:
[root@server1 ~]# gluster peer status
Number of Peers: 1

Hostname: server2
Uuid: 4040a194-a30b-43f2-9e8a-a896cd92c37d
State: Peer in Cluster (Connected)

3) List the storage pool.

Code:
[root@server1 ~]# gluster pool list
UUID Hostname State
4040a194-a30b-43f2-9e8a-a896cd92c37d server2  Connected
22479d4a-d585-4f0c-ad35-cbd5444d3bbd localhost Connected

Setup gluster volumes

1) Create a brick directory on the mounted file system on both server1 and server2. In my case, I am creating the directory vol1:

Code:
[root@server2 ~]# mkdir /bricks/brick1/vol1
[root@server2 ~]#

2) Since we are going to use a replicated volume, create the volume named "vol1" with two replicas from server1:

Code:
[root@server1 ~]# gluster volume create vol1 replica 2 server1:/bricks/brick1/vol1 server2:/bricks/brick1/vol1
volume create: vol1: success: please start the volume to access data

3) Start the volume.

Code:
[root@server1 ~]# gluster volume start vol1
volume start: vol1: success

4) Check the status of the created volume on server1 and server2

Code:
[root@server1 ~]# gluster volume info vol1
Volume Name: vol1
Type: Replicate
Volume ID: 9b7939f5-01a8-487a-8c3e-9dcd734553d5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: server1:/bricks/brick1/vol1
Brick2: server2:/bricks/brick1/vol1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Setup GlusterFS Client

1) On all client servers (client1, client2 and client3), run the below command to install the GlusterFS client packages needed to mount GlusterFS file systems. (On CentOS, the FUSE client is packaged as glusterfs-fuse; glusterfs-client is the Debian/Ubuntu package name.)

Code:
yum install -y glusterfs glusterfs-fuse
2) Create a directory to mount the GlusterFS filesystem

Code:
mkdir -p /mnt/glusterfs
3) Now mount the gluster file system on the above directory on all client servers by adding the below entry to /etc/fstab:

Code:
[root@client2 ~]# vi /etc/fstab
server1:/vol1                             /mnt/glusterfs          glusterfs defaults,_netdev 0 0

4) Run the below command on all client servers to mount the gluster volume:

Code:
[root@client1 ~]# mount -a
[root@client1 ~]#

5) Check if the mount point has been properly mounted.

Code:
[root@client1 ~]# df -Th
Filesystem     Type            Size  Used Avail Use% Mounted on
/dev/vda1      ext4             25G  1.3G   23G   6% /
devtmpfs       devtmpfs        486M     0  486M   0% /dev
tmpfs          tmpfs           496M     0  496M   0% /dev/shm
tmpfs          tmpfs           496M   13M  483M   3% /run
tmpfs          tmpfs           496M     0  496M   0% /sys/fs/cgroup
tmpfs          tmpfs           100M     0  100M   0% /run/user/0
server1:/vol1  fuse.glusterfs   10G  135M  9.9G   2% /mnt/glusterfs

Testing whether data replication works across clients and servers

1) Since we have mounted the gluster volume on the /mnt/glusterfs folder on the client systems, move into /mnt/glusterfs and create some test files:

Code:
[root@client1 glusterfs]# touch file{1..5}
[root@client1 glusterfs]# ll
total 0
-rw-r--r-- 1 root root 0 Nov 25 06:40 file1
-rw-r--r-- 1 root root 0 Nov 25 06:40 file2
-rw-r--r-- 1 root root 0 Nov 25 06:40 file3
-rw-r--r-- 1 root root 0 Nov 25 06:40 file4
-rw-r--r-- 1 root root 0 Nov 25 06:40 file5

2) Now go to the server nodes and check whether these files are present in the brick directory we created, /bricks/brick1/vol1/:

Code:
[root@server1 vol1]# ll
total 0
-rw-r--r-- 2 root root 0 Nov 25 06:40 file1
-rw-r--r-- 2 root root 0 Nov 25 06:40 file2
-rw-r--r-- 2 root root 0 Nov 25 06:40 file3
-rw-r--r-- 2 root root 0 Nov 25 06:40 file4
-rw-r--r-- 2 root root 0 Nov 25 06:40 file5

3) The files should also be present on server2 and on the client2 and client3 servers:

Code:
[root@server2 vol1]# ll
total 0
-rw-r--r-- 2 root root 0 Nov 25 06:40 file1
-rw-r--r-- 2 root root 0 Nov 25 06:40 file2
-rw-r--r-- 2 root root 0 Nov 25 06:40 file3
-rw-r--r-- 2 root root 0 Nov 25 06:40 file4
-rw-r--r-- 2 root root 0 Nov 25 06:40 file5

Code:
[root@client2 glusterfs]# ll
total 0
-rw-r--r-- 1 root root 0 Nov 25 06:40 file1
-rw-r--r-- 1 root root 0 Nov 25 06:40 file2
-rw-r--r-- 1 root root 0 Nov 25 06:40 file3
-rw-r--r-- 1 root root 0 Nov 25 06:40 file4
-rw-r--r-- 1 root root 0 Nov 25 06:40 file5

This type of setup is important for data redundancy. It is also useful for websites served behind a load balancer.

I hope you find this information useful :) Thank you for reading.