
Recent Posts

1
Error: Unable to update domain data: MySQL query failed: Unknown column 'syncRecords' in 'field list'

This error is caused by the column "syncRecords" missing from the table "dns_zone" in the database "psa", which mainly occurs during Plesk updates.

In order to solve the issue, we need to add the missing columns. In our case, we added "syncRecords" and "syncSoa".


Kindly follow the steps below.


1) Access MySQL

MYSQL_PWD=`cat /etc/psa/.psa.shadow` mysql -u admin psa

2) Access database psa.

mysql> use psa;

3) Then we need to alter the table "dns_zone"

ALTER TABLE dns_zone ADD COLUMN syncRecords enum('true','false','skip');

ALTER TABLE dns_zone ADD COLUMN syncSoa enum('true','false','skip');
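
To confirm the columns were added, you can query the table again using the same admin credentials (a quick check with standard MySQL; it simply filters for the two new fields):

Code: [Select]
MYSQL_PWD=`cat /etc/psa/.psa.shadow` mysql -u admin psa -e "SHOW COLUMNS FROM dns_zone LIKE 'sync%'"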


Please try the same from your end and let us know the results :)
2
General Linux / ssh failed Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
« Last post by nandulalr on December 13, 2018, 12:46:10 am »
ssh failed Permission denied (publickey,gssapi-keyex,gssapi-with-mic)

If you are facing issues with SSH login and seeing the error message “ssh failed Permission denied (publickey,gssapi-keyex,gssapi-with-mic)”, try the steps below:

You may try editing /etc/ssh/sshd_config:

Code: [Select]
cp -p /etc/ssh/sshd_config /etc/ssh/sshd_config.back
vim /etc/ssh/sshd_config

Change "PasswordAuthentication no" to "PasswordAuthentication yes"
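
Before restarting, it is worth validating the edited file; sshd's test mode prints nothing when the configuration is clean and reports any syntax errors otherwise:

Code: [Select]
sshd -t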

Restart the sshd service

Code: [Select]
service sshd restart
Try to log in again.
3
General Linux / Yum gets stuck
« Last post by nandulalr on December 13, 2018, 12:31:05 am »
Yum gets stuck

Sometimes yum gets stuck when we are trying to install or update packages. No error messages are displayed; it simply freezes after a couple of lines of output.

There could be several reasons why it happens, and the fixes are pretty simple.

1. Check if you have the DNS set properly in your server. You can try something like:

Code: [Select]
ping www.google.com
If you are getting a response, your DNS is working fine. If not, check the /etc/resolv.conf file and add the following to it (if it is not already present):

Code: [Select]
nameserver 8.8.8.8
Now try again. If it still does not work, proceed to step 2.

2. Clean and rebuild the RPM databases. Keep a backup before removing the files.

Code: [Select]
cp -a /var/lib/rpm /var/lib/rpm.bak   # keep a backup first
rm -f /var/lib/rpm/__*                # remove the stale Berkeley DB files
rpm --rebuilddb -v -v
yum clean all

It should work now. If you’re still facing issues, it’s probably the firewall: check your firewall rules and make sure that the server can contact the remote repositories.
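
To confirm the server can reach a repository over the network, you can probe a mirror URL directly; mirror.centos.org below is the stock CentOS mirror host, so substitute one of your configured baseurls if needed:

Code: [Select]
curl -I http://mirror.centos.org/centos/7/os/x86_64/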
4
General Linux / How to check Magento Version
« Last post by arunlalpr on November 30, 2018, 09:41:40 pm »
How to check Magento Version

Go to your Magento installation directory and run the following command.

php -r "require 'app/Mage.php'; echo Mage::getVersion(); "


In our case it's 1.7.0.2
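
Note that this command applies to Magento 1 only, since Magento 2 does not ship app/Mage.php. On Magento 2, the bundled CLI reports the version instead (run it from the installation directory):

Code: [Select]
php bin/magento --version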

Kindly check and let us know the result  ;)

5
General Linux / FATAL ERROR: Cannot decode data link type 113
« Last post by arunlalpr on November 29, 2018, 02:37:00 am »
When the capture interface is not eth0 (here, venet0 inside an OpenVZ container), we need to alter the Snort build configuration. Data link type 113 is the Linux “cooked” capture type (LINUX_SLL) used by such interfaces.


Error:

--------------------------------------
#snort -A console -q -u snort -g snort -c /etc/snort/snort.conf -i venet0
FATAL ERROR: Cannot decode data link type 113

--------------------------------------
Recompiling Snort with the "--enable-non-ether-decoders" configure option enables the non-Ethernet decoders and resolves the issue.

--------------------------------------------
cd ~/snort_src
wget https://snort.org/downloads/snort/snort-2.9.8.0.tar.gz
tar -xvzf snort-2.9.8.0.tar.gz
cd snort-2.9.8.0
./configure --enable-sourcefire --enable-non-ether-decoders
make
sudo make install
----------------------------------------------
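
After reinstalling, run the same command from the error above to confirm that Snort now starts cleanly on the interface:

--------------------------------------
snort -A console -q -u snort -g snort -c /etc/snort/snort.conf -i venet0
--------------------------------------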

That's it :)
6
Plesk / Error: Client denied by server configuration
« Last post by vaishakp on November 28, 2018, 10:00:51 pm »
The following error can be found in /var/www/vhosts/example.com/logs/error_log

[access_compat:error] [pid 13317:tid 140073563543296] [client 203.0.113.2:51234] AH01797: client denied by server configuration: /var/www/vhosts/example.com/httpdocs/


Cause:

Apache 2.4 is installed, but the .htaccess file contains authorization and authentication entries written in Apache 2.2 syntax.

Resolution:
In Plesk Interface:

1. Log into Plesk.
2. Go to Domains > example.com > File Manager and click on the name of the file .htaccess.
3. Replace the directives as follows:

 Before:
    Order allow,deny
    Allow from all

 After:
    Require all granted

4. Click OK or Apply to save the changes.

OR

In CLI:

1. Connect to the server using SSH.
2. Open the file /var/www/vhosts/example.com/httpdocs/.htaccess for editing.

Change the following entries:
   Order allow,deny
   Allow from all
 to
   Require all granted
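
After updating .htaccess, you can confirm the fix from the shell; the request should now return HTTP 200 instead of 403 Forbidden (substitute your real domain for example.com):

Code: [Select]
curl -I http://example.com/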

7
General Linux / How to Install Replicated File System using GlusterFS on CentOS 7
« Last post by akhilu on November 25, 2018, 12:16:35 pm »



In one of our previous posts, we showed how to install a distributed file system using GlusterFS. You can refer to the link below to read more:


https://admin-ahead.com/forum/general-linux/how-to-install-distributed-file-system-'glusterfs'-and-setup/msg1516/#msg1516

In this post, we will see how to set up a replicated storage volume using GlusterFS on CentOS 7.

A replicated GlusterFS volume is similar to RAID 1: the volume maintains exact copies of the data on all bricks. You decide the number of replicas when creating the volume, so you need at least two bricks for a volume with two replicas, or three bricks for a volume with three replicas.


Terminologies:

Brick: the basic unit of storage, a directory on a server in the trusted storage pool.

Volume: a logical collection of bricks.

Replicated File System: a file system in which data is spread across multiple storage nodes and which allows clients to access it over the network.

Server: the machine that hosts the actual file system in which the data is stored.

Client: the machine that mounts the volume.

glusterd: the daemon that runs on all servers in the trusted storage pool.


Requirements:

1) For this demo, I am using five CentOS 7 servers, all 64-bit. Two of these will act as servers and will each maintain a replica of the volume.


Code: [Select]
68.232.175.206    server1
45.77.110.210     server2
144.202.11.51     client1
149.28.45.206     client2
149.28.41.112     client3

2) Make sure that all the servers in the cluster have a free disk attached to them to create the storage volumes.

Code: [Select]
In our case, both server1 and server2 have a 10 GB free disk attached:

Disk /dev/vdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

3) GlusterFS components use DNS for name resolution. Since I do not have DNS in my environment, I am using the entries below in the /etc/hosts file on all servers:

Code: [Select]
68.232.175.206    server1
45.77.110.210     server2
144.202.11.51     client1
149.28.45.206     client2
149.28.41.112     client3


Install GlusterFS server on server1 and server2

1) First, we need to make sure that the epel repository is enabled on both server1 and server2 by running the command below:

Code: [Select]
yum install epel-release -y
If epel is already present, the above command will report that it is already installed; otherwise it will install the epel repo.

2) Create Gluster repository on server1 and server2

Code: [Select]
vi /etc/yum.repos.d/Gluster.repo

Add the content below to the file:

[gluster38]
name=Gluster 3.8
baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-3.8/
gpgcheck=0
enabled=1

Once done, run the command below to verify that the epel and gluster repositories are enabled:

Code: [Select]
yum repolist
Output:
Code: [Select]
repo id                                      repo name                                                                  status
base/7/x86_64                                CentOS-7 - Base                                                             9,911
epel/x86_64                                  Extra Packages for Enterprise Linux 7 - x86_64                             12,718
extras/7/x86_64                              CentOS-7 - Extras                                                             434
glusterfs                                    Glusterfs5                                                                     40
updates/7/x86_64                             CentOS-7 - Updates                                                          1,614
repolist: 24,717

3) Run the command below on server1 and server2 to install GlusterFS:

Code: [Select]
yum install glusterfs-server -y
4) Run the commands below to start glusterd and enable it on boot:

Code: [Select]
systemctl enable glusterd
systemctl start glusterd
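
You can confirm the daemon is running before proceeding (a standard systemd check; it should print "active"):

Code: [Select]
systemctl is-active glusterd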



Creating LVM on server1 and server2


We have a 10 GB disk attached on both server1 and server2 which we want to convert to storage brick.

Run below commands on server1 and server2

1) Check the disks attached to server1 and server2:

Code: [Select]
[root@server1 ~]# lsblk

NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0     11:0    1 1024M  0 rom 
vda    253:0    0   25G  0 disk
└─vda1 253:1    0   25G  0 part /
vdb    253:16   0   10G  0 disk

You can see that we have two disks attached to the server, vda and vdb; vdb is completely unused right now.

2) Now we need to create a new partition in /dev/vdb on server1 and server2

Code: [Select]
[root@server1 ~]# fdisk /dev/vdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xb3065797.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.


3) Run the partprobe command on server1 and server2 to update the kernel about the new partition:

Code: [Select]
[root@server1 ~]# partprobe
[root@server1 ~]#

4) Create an LVM volume with the 10 GB of space on both server1 and server2. You can allocate space as per your needs:

Code: [Select]
[root@server1 ~]# pvcreate /dev/vdb1
  Physical volume "/dev/vdb1" successfully created.
[root@server1 ~]# vgcreate vg1 /dev/vdb1
  Volume group "vg1" successfully created
[root@server1 ~]# lvcreate -l 100%FREE -n lv1 vg1
  Logical volume "lv1" created.


Mounting LVM on server1 and server2 onto the /bricks/brick1 folder

1) Creating brick directory on server1 and server2

Code: [Select]
[root@server1 ~]# mkdir -p /bricks/brick1

2) Create an XFS file system on the logical volume:

Code: [Select]
[root@server1 ~]# mkfs.xfs /dev/mapper/vg1-lv1
meta-data=/dev/mapper/vg1-lv1    isize=512    agcount=4, agsize=655104 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2620416, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

3) Define the mount point in the fstab file

Code: [Select]
[root@server1 ~]# vi /etc/fstab
/dev/mapper/vg1-lv1                       /bricks/brick1          xfs     defaults        0 0

4) Mount the LVM to the brick folder

Code: [Select]
[root@server2 ~]# mount -a
[root@server2 ~]#

5) Check if the volume is properly mounted

Code: [Select]
[root@server2 ~]# df -Th
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/vda1           ext4       25G  1.3G   23G   6% /
devtmpfs            devtmpfs  486M     0  486M   0% /dev
tmpfs               tmpfs     496M     0  496M   0% /dev/shm
tmpfs               tmpfs     496M   13M  483M   3% /run
tmpfs               tmpfs     496M     0  496M   0% /sys/fs/cgroup
tmpfs               tmpfs     100M     0  100M   0% /run/user/0
/dev/mapper/vg1-lv1 xfs        10G   33M   10G   1% /bricks/brick1


Create the trusted storage pool in glusterfs

Configure Firewall:

You would need to either disable the firewall or configure the firewall to allow all connections within a cluster.

By default, glusterd listens on tcp/24007, but opening that port alone is not enough on the gluster nodes. Each time you add a brick, it will open a new port (which you’ll be able to see with “gluster volume status”).

Code: [Select]
# Disable FirewallD
systemctl stop firewalld
systemctl disable firewalld

OR

# Run the command below on any node that should accept all traffic coming from the given source IP
firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="<ipaddress>" accept'
firewall-cmd --reload

1) Here I will run all GlusterFS commands on the server1 node.

Code: [Select]
[root@server1 ~]# gluster peer probe server2
peer probe: success.

2) Verify the status of the trusted storage pool.

Code: [Select]
[root@server1 ~]# gluster peer status
Number of Peers: 1

Hostname: server2
Uuid: 4040a194-a30b-43f2-9e8a-a896cd92c37d
State: Peer in Cluster (Connected)

3) List the storage pool.

Code: [Select]
[root@server1 ~]# gluster pool list
UUID Hostname State
4040a194-a30b-43f2-9e8a-a896cd92c37d server2  Connected
22479d4a-d585-4f0c-ad35-cbd5444d3bbd localhost Connected

Setup gluster volumes

1) Create a brick directory on the mounted file systems on both server1 and server2. In my case I am creating the directory vol1:

Code: [Select]
[root@server2 ~]# mkdir /bricks/brick1/vol1
[root@server2 ~]#

2) Since we are using a replicated volume, create the volume named “vol1” with two replicas, running the command on server1:

Code: [Select]
[root@server1 ~]# gluster volume create vol1 replica 2 server1:/bricks/brick1/vol1 server2:/bricks/brick1/vol1
volume create: vol1: success: please start the volume to access data

3) Start the volume.

Code: [Select]
[root@server1 ~]# gluster volume start vol1
volume start: vol1: success

4) Check the status of the created volume on server1 and server2

Code: [Select]
[root@server1 ~]# gluster volume info vol1
 
Volume Name: vol1
Type: Replicate
Volume ID: 9b7939f5-01a8-487a-8c3e-9dcd734553d5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: server1:/bricks/brick1/vol1
Brick2: server2:/bricks/brick1/vol1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
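
You can also list the brick processes and the TCP ports they listen on; this is the “gluster volume status” command mentioned in the firewall note above:

Code: [Select]
gluster volume status vol1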


Setup GlusterFS Client

1) On all client servers (client1, client2 and client3), run the command below to install the glusterfs client package, which supports mounting GlusterFS file systems:

Code: [Select]
yum install -y glusterfs-client
2) Create a directory to mount the GlusterFS filesystem

Code: [Select]
mkdir -p /mnt/glusterfs
3) Now mount the gluster file system on the above directory on all client servers by adding the entry below to /etc/fstab:

Code: [Select]
[root@client2 ~]# vi /etc/fstab
server1:/vol1                             /mnt/glusterfs          glusterfs defaults,_netdev 0 0

4) Run the command below on all client servers to mount the gluster volume:

Code: [Select]
[root@client1 ~]# mount -a
[root@client1 ~]#

5) Check if the mount point has been properly mounted.

Code: [Select]
[root@client1 ~]# df -Th
Filesystem     Type            Size  Used Avail Use% Mounted on
/dev/vda1      ext4             25G  1.3G   23G   6% /
devtmpfs       devtmpfs        486M     0  486M   0% /dev
tmpfs          tmpfs           496M     0  496M   0% /dev/shm
tmpfs          tmpfs           496M   13M  483M   3% /run
tmpfs          tmpfs           496M     0  496M   0% /sys/fs/cgroup
tmpfs          tmpfs           100M     0  100M   0% /run/user/0
server1:/vol1  fuse.glusterfs   10G  135M  9.9G   2% /mnt/glusterfs


Testing if the data replication is working across client and server

1) Since we have mounted the gluster volume on the /mnt/glusterfs folder on the client systems, move into /mnt/glusterfs and create some test files:

Code: [Select]
[root@client1 glusterfs]# touch file{1..5}
[root@client1 glusterfs]# ll
total 0
-rw-r--r-- 1 root root 0 Nov 25 06:40 file1
-rw-r--r-- 1 root root 0 Nov 25 06:40 file2
-rw-r--r-- 1 root root 0 Nov 25 06:40 file3
-rw-r--r-- 1 root root 0 Nov 25 06:40 file4
-rw-r--r-- 1 root root 0 Nov 25 06:40 file5


2) Now go to the server nodes and check that these files exist in the brick directory we created, /bricks/brick1/vol1/:

Code: [Select]
[root@server1 vol1]# ll
total 0
-rw-r--r-- 2 root root 0 Nov 25 06:40 file1
-rw-r--r-- 2 root root 0 Nov 25 06:40 file2
-rw-r--r-- 2 root root 0 Nov 25 06:40 file3
-rw-r--r-- 2 root root 0 Nov 25 06:40 file4
-rw-r--r-- 2 root root 0 Nov 25 06:40 file5

3) The files should also be present on server2 and on the client2 and client3 servers.

Code: [Select]
[root@server2 vol1]# ll
total 0
-rw-r--r-- 2 root root 0 Nov 25 06:40 file1
-rw-r--r-- 2 root root 0 Nov 25 06:40 file2
-rw-r--r-- 2 root root 0 Nov 25 06:40 file3
-rw-r--r-- 2 root root 0 Nov 25 06:40 file4
-rw-r--r-- 2 root root 0 Nov 25 06:40 file5

Code: [Select]
[root@client2 glusterfs]# ll
total 0
-rw-r--r-- 1 root root 0 Nov 25 06:40 file1
-rw-r--r-- 1 root root 0 Nov 25 06:40 file2
-rw-r--r-- 1 root root 0 Nov 25 06:40 file3
-rw-r--r-- 1 root root 0 Nov 25 06:40 file4
-rw-r--r-- 1 root root 0 Nov 25 06:40 file5

This type of setup is important for data redundancy. It is also useful for websites running behind a load balancer.

I hope you find this information useful :) Thank you for reading.
8
General Linux / Some Ways To Get Memcached Stats
« Last post by Vineesh K P on November 23, 2018, 03:23:13 pm »
In this article, we will check out some ways to get stats of Memcached.

For this to work, you need SSH access to the server.

Method 1


  • SSH to the server which has Memcached.
  • Connect to Memcached port as shown below.
    telnet 127.0.0.1 11211
  • Once the connection has been established, type stats and hit enter.

An alternate way to do this is by using the nc command.
echo stats | nc 127.0.0.1 11211

You will get results similar to this.
Code: [Select]
STAT pid 22020
STAT uptime 3689364
STAT time 1227753109
STAT version 1.2.5
STAT pointer_size 64
STAT rusage_user 4543.071348
STAT rusage_system 8568.293421
STAT curr_items 139897
STAT total_items 51710845
STAT bytes 360147055
STAT curr_connections 40
STAT total_connections 66762
STAT connection_structures 327
STAT cmd_get 319992973
STAT cmd_set 51710845
STAT get_hits 280700485
STAT get_misses 39292488
STAT evictions 849165
STAT bytes_read 141320046298
STAT bytes_written 544357801590
STAT limit_maxbytes 402653184
STAT threads 4
END

Method 2

Here’s an easy “top” emulator for Memcached:

watch "echo stats | nc 127.0.0.1 11211"

If you don’t have netcat (nc), you can also use Bash’s built-in /dev/tcp redirection if it’s enabled. Anything that can push a couple of characters to a TCP port and print the result to stdout will work. For example, a minimal Bash-only sketch:
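
Code: [Select]
# Open a TCP connection to Memcached on file descriptor 3,
# send the stats command, then print the reply ("quit" makes the server close the socket)
exec 3<>/dev/tcp/127.0.0.1/11211
printf 'stats\r\nquit\r\n' >&3
cat <&3

Or you can use something like this, if you must do it via PHP: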

watch 'php -r '"'"'$m=new Memcache;$m->connect("127.0.0.1", 11211);print_r($m->getstats());'"'"

Hope you found some value in it.

Until next time, cheers!
9
cPanel / cpanelsolr process and how to disable it on cPanel servers
« Last post by vaishakp on November 22, 2018, 06:50:18 pm »
cpanelsolr was introduced in cPanel version 64. cpanelsolr uses Java to index email messages managed by Dovecot. cPanel describes the service as:

------------------------------------------------------------
Fast Email Searching (IMAP Full-Text Search)
Full-Text Search Indexing (powered by Solr) provides fast search capabilities for IMAP mailboxes. Users of iOS devices, Microsoft® Outlook™, SquirrelMail, Horde, Roundcube, and Mozilla™ Thunderbird will notice significantly improved search speed and convenience.
------------------------------------------------------------

How to disable cpanelsolr

1) Log in to WHM with the root user account
2) Select “Service Manager”
3) Once the page loads, untick the two boxes next to the cpanel-dovecot-solr service.
4) Click Save at the bottom of the page
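
If you prefer the command line, WHM’s API can apply the same change; this is a sketch, so verify the service name and parameters against your cPanel version:

Code: [Select]
whmapi1 configureservice service=cpanel-dovecot-solr enabled=0 monitored=0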

This will disable the cpanel-dovecot-solr service, and you should notice that your memory usage won’t be as high. It’s important to remember that you should only disable this service if you are having problems with it or your system has a low amount of memory. Any system with over 4 GB of RAM running the defaults provided by cPanel should be fine leaving the service enabled.
10
Plesk / Plesk website is loading apache default page (Nginx + Apache setup)
« Last post by Shibu B on November 22, 2018, 10:05:37 am »
This error is noticed on domains hosted on Plesk servers where Nginx runs as the frontend and Apache as the backend. It is mainly caused by Apache not picking up the configurations added by Plesk.


* As a first step, we need to disable Nginx by using the following command

# /usr/local/psa/admin/bin/nginxmng -d

* Now the frontend web server is switched to Apache. Check whether the issue still persists on the websites. If the issue is fixed, enable Nginx again using the following command; this will resolve the issue.

# /usr/local/psa/admin/bin/nginxmng -e

* Make sure that the zz010_psa_httpd.conf file is present in the /etc/httpd/conf.d folder and also that "IncludeOptional conf.d/*.conf" is added in the Apache conf (/etc/httpd/conf/httpd.conf).
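
Both can be verified in one go from the shell (a quick sanity check using the paths above):

# ls -l /etc/httpd/conf.d/zz010_psa_httpd.conf
# grep -n 'IncludeOptional' /etc/httpd/conf/httpd.conf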


* If the error for the website is still not resolved after disabling Nginx, try rebuilding the Apache configuration files for the domain:

# /usr/local/psa/admin/bin/httpdmng --reconfigure-domain <domain_name>

or

# plesk repair web <domain_name>

* If the error occurs for all domains, please run the following command:

# /usr/local/psa/admin/bin/httpdmng --reconfigure-all
