
Recent Posts

1
General Linux / How to automatically dim your screen on Linux
« Last post by akhilt on August 05, 2018, 04:48:31 pm »

When you start spending the majority of your time in front of a computer, natural questions start arising. Is this healthy? How can I diminish the strain on my eyes? Why is the sunlight burning me? Although active research is still going on to answer these questions, a lot of programmers have already adopted a few applications to make their daily habits a little healthier for their eyes. Among those applications, there are two which I found particularly interesting: Calise and Redshift.

Calise

In and out of development limbo, Calise stands for "Camera Light Sensor." In other words, it is an open source program that computes the best backlight level for your screen based on the light intensity received by your webcam. For more precision, Calise is capable of taking into account the weather in your area based on your geographical coordinates. What I like about it is its compatibility with every desktop, even non-X ones.

It comes with a command line interface and a GUI, supports multiple user profiles, and can even export its data to CSV. After installation, you will have to calibrate it quickly before the magic happens.

What is less likeable, unfortunately, is that if you are as paranoid as I am, you have a little piece of tape over your webcam, which greatly affects Calise's precision. That aside, Calise is a great application which deserves our attention and support. As I mentioned earlier, it has gone through some rough patches in its development schedule over the last couple of years, so I really hope that this project will continue.

Find the tool here:
http://calise.sourceforge.net/wordpress/


Redshift

If you have already considered decreasing the strain your screen puts on your eyes, you may have heard of f.lux, a free but proprietary program that modifies the luminosity and color scheme of your display based on the time of day. However, if you prefer open source software, there is an alternative: Redshift. Inspired by f.lux, Redshift also alters the color scheme and luminosity to enhance the experience of sitting in front of your screen at night. On startup, you can configure it with your geographic position as longitude and latitude, and then let it run in the system tray. Redshift will smoothly adjust the color scheme of your screen based on the position of the sun. At night, you will see the screen's color temperature shift towards red, making it a lot less painful for your eyes.

Just like Calise, it offers a command line interface as well as a GUI client. To start Redshift quickly, just use the command:

Code: [Select]
$ redshift -l [LAT]:[LON]
replacing [LAT] and [LON] with your latitude and longitude.

However, it is also possible to have your coordinates supplied via GPS through the gpsd module. For Arch Linux users, I recommend this wiki page.
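If you don't want to pass the coordinates on every launch, Redshift can also read them from a configuration file. A minimal sketch of ~/.config/redshift.conf; the coordinates and color temperatures below are example values to replace with your own:

```ini
; example ~/.config/redshift.conf -- all values are illustrative
[redshift]
temp-day=5700
temp-night=3500
location-provider=manual

[manual]
lat=48.85
lon=2.35
```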

Find the tool here:
http://jonls.dk/redshift/


Conclusion

To conclude, Linux users have no excuse for not taking care of their eyes. Calise and Redshift are both amazing. I really hope that their development will continue and that they get the support they deserve. Of course, there are more than just these two programs out there for protecting your eyes and staying healthy, but I feel that Calise and Redshift are a good start.
2
General Linux / Back up a WordPress website to remote cloud storage
« Last post by akhilt on August 05, 2018, 04:23:39 pm »

There are many different ways to archive the current snapshot of a WordPress site. Some web hosting or VPS companies offer automatic daily backup service for an extra fee. Many web-based hosting control panels (e.g., cPanel, Webmin) come with a full website backup option for you to back things up interactively. There are also WordPress plugins dedicated to firing scheduled WordPress backup cron jobs. Even some third-party online services enable you to back up and version-control a WordPress deployment off-site and restore any previous snapshot at your command.

Yet another option is backing up WordPress from the Linux command line. As you can imagine, this is the most flexible option, and you retain complete control over the entire backup process. This option is applicable only if your hosting provider or VPS allows SSH remote access. Assuming this applies to you, here is how you can back up your WordPress website and store it in offsite cloud storage, all from the command line.

There are two steps to WordPress backup. One is to back up a WordPress database which stores WordPress content (e.g., postings, comments). The other is to back up PHP files or design files hosted on your WordPress site. After going over these steps one by one, I will present full scripts for complete WordPress backup, so you can easily copy and paste them for your own use.

Backing up a WordPress Database

Backing up a WordPress database can easily be done from the command line with a tool called mysqldump. This command line tool comes with the MySQL client package, which you can install easily if you are running a VPS. If your hosting provider offers SSH remote access, the MySQL client is most likely already installed on the server you are on.

The following command will dump the content of your WordPress database into a MySQL dump file called db.sql.

Code: [Select]
$ mysqldump --add-drop-table -h<db-host> -u<db-user> -p<db-password> <db-name> > db.sql
You can find <db-host>, <db-user>, <db-password> and <db-name> in wp-config.php of your WordPress installation.

The "--add-drop-table" option above tells mysqldump to add "DROP TABLE IF EXISTS" statement before each table creation statement in the MySQL dump file. When you import a MySQL dump into a database, if the database already has a table with the same name as the one to import, the import will fail due to duplicate table name. Thus adding "DROP TABLE IF EXISTS" statement is a precaution to prevent this potential import failure.

Note that if MySQL server is running remotely (i.e., "db-host" is not localhost), you have to make sure that the MySQL server allows remote database access. Otherwise, the mysqldump command will fail.
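Rather than typing the credentials by hand, you can also pull them out of wp-config.php with sed. The snippet below is a hypothetical helper, not part of the original scripts; it creates a tiny sample wp-config.php so the sketch is self-contained, whereas on a real site you would point it at your actual wp-config.php:

```shell
#!/bin/sh
# Create a minimal sample wp-config.php (stand-in for a real install)
cat > wp-config.php <<'EOF'
define('DB_NAME', 'wpdb');
define('DB_USER', 'wpuser');
define('DB_PASSWORD', 'secret');
define('DB_HOST', 'localhost');
EOF

# Extract the second argument of each define() call with sed
get() { sed -n "s/^define( *'$1', *'\([^']*\)' *);.*/\1/p" wp-config.php; }

db_name=$(get DB_NAME)
db_user=$(get DB_USER)
db_pass=$(get DB_PASSWORD)
db_host=$(get DB_HOST)

# Show the mysqldump command that would be run with these values
echo "mysqldump --add-drop-table -h$db_host -u$db_user -p$db_pass $db_name > db.sql"
```

This way a backup script needs no hard-coded passwords and keeps working if you rotate the database credentials in wp-config.php.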

Backing up WordPress Files

The next step is to back up all PHP and design files hosted in your WordPress site.

First, go to the root directory (e.g., ~/public_html) of your WordPress site. The exact path may vary depending on your setup.

Code: [Select]
$ cd ~/public_html
Then create a compressed archive which contains all the files in the root directory using the tar command.

One useful option is "--exclude=PATTERN", which allows you to exclude files/directories that you do not want to include in the TAR archive. You can repeat this option as many times as you want. For example, you can exclude the "wp-content/cache" directory, which is used by various plugins to hold temporarily cached files for speedup purposes. You can also exclude any plugins/themes (e.g., wp-content/plugins/unused_plugin) that you do not use and do not plan to use.

The following command creates a bzip-compressed archive of the root directory of your WordPress site, excluding the WordPress cache directory.

Code: [Select]
$ tar -jcvf ../backup.tar.bz2 --exclude='wp-content/cache/*' .
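If you want to convince yourself that --exclude really works before trusting it with a live site, here is a small self-contained demo; the directory and file names are made up for illustration:

```shell
#!/bin/sh
# Build a tiny fake site tree with a cache file and a theme file
mkdir -p demo/wp-content/cache demo/wp-content/themes
echo cached > demo/wp-content/cache/page.html
echo styles > demo/wp-content/themes/style.css

# Archive the site root while excluding cache contents (same flags as the article)
(cd demo && tar -jcf ../backup.tar.bz2 --exclude='wp-content/cache/*' .)

# List the archive: the theme file made it in, the cached page did not
if tar -tjf backup.tar.bz2 | grep -q 'themes/style.css'; then echo "theme kept"; fi
if ! tar -tjf backup.tar.bz2 | grep -q 'cache/page.html'; then echo "cache excluded"; fi
```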
Store a WordPress Backup at Off-site Cloud Storage

Once a backup archive is created, it is best to store the archive at a remote location off of the hosting server to prevent any accidental data loss. An affordable option for offsite backup is cloud storage. For example, Dropbox offers 2GB of free space, while Amazon S3 gives away 5GB via AWS Free Usage Tier.

Here I demonstrate how to upload a WordPress backup archive to both Dropbox and Amazon S3.

Upload to Amazon S3 Cloud Storage

If you prefer AWS S3 as WordPress backup storage, here is the shell script to upload to AWS S3. Replace bucket, s3Key and s3Secret with your own information.

Code: [Select]
file=backup.tar.bz2
bucket="xxxxxxxxxxxxxxxxxxx"
s3Key="XXXXXXXXXXXXXXXXXXX"
s3Secret="YYYYYYYYYYYYYYYYYYYYY"
 
resource="/${bucket}/${file}"
contentType="application/x-compressed-tar"
dateValue=`date -R`
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
signature=`echo -en ${stringToSign} | openssl sha1 -hmac ${s3Secret} -binary | base64`
curl -X PUT -T "${file}" \
  -H "Host: ${bucket}.s3.amazonaws.com" \
  -H "Date: ${dateValue}" \
  -H "Content-Type: ${contentType}" \
  -H "Authorization: AWS ${s3Key}:${signature}" \
  https://${bucket}.s3.amazonaws.com/${file}


Note that this script requires curl and openssl to be installed.

Full Scripts for Remote WordPress Backup

For you to easily apply what I have shown so far, here are the full WordPress backup scripts. Each script creates a bzip-compressed backup of your WordPress site including its database, and uploads the archive to either Dropbox or Amazon AWS S3. For daily backups, the created backup archive is named after the current date.

WordPress Backup to Dropbox

Code: [Select]
#!/bin/sh
 
# MySQL information: replace it with your own
hostname=localhost
username=MY_USER
password=MY_PASS
database=MY_DB
 
# Dropbox information: replace it with your own
dropbox_uploader="$HOME/bin/dropbox_uploader.sh"
dropbox_folder="Backup"
 
cur=`date +"%Y-%m-%d"`
dbfile=db.$cur.sql
wpfile=backup.$cur.tar.bz2
 
cd ~/public_html
 
echo "back up database"
mysqldump --add-drop-table -h$hostname -u$username -p$password $database > db.$cur.sql
 
echo "compress database"
bzip2 $dbfile
 
echo "back up wordpress"
tar -jcvf ../$wpfile --exclude='wp-content/cache/*' .
 
echo "transfer to dropbox"
$dropbox_uploader upload ../$wpfile $dropbox_folder
 
echo "done!"


WordPress Backup to Amazon AWS S3

Code: [Select]
#!/bin/sh
 
# MySQL information: replace it with your own
hostname=MY_HOST
username=MY_USER
password=MY_PASS
database=MY_DB
 
# AWS S3 information: replace it with your own
bucket=MY_BUCKET
s3Key=MY_S3_KEY
s3Secret=MY_S3_SECRET
 
cur=`date +"%Y-%m-%d"`
dbfile=db.$cur.sql
wpfile=backup.$cur.tar.bz2
 
cd ~/public_html
 
echo "back up database"
mysqldump --add-drop-table -h$hostname -u$username -p$password $database > db.$cur.sql
 
echo "compress database"
bzip2 $dbfile
 
echo "back up wordpress"
tar -jcvf ../$wpfile --exclude='wp-content/cache/*' .
 
echo "transfer to s3"
resource="/${bucket}/${wpfile}"
contentType="application/x-compressed-tar"
dateValue=`date -R`
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
signature=`echo -en ${stringToSign} | openssl sha1 -hmac ${s3Secret} -binary | base64`
curl -X PUT -T "../${wpfile}" \
  -H "Host: ${bucket}.s3.amazonaws.com" \
  -H "Date: ${dateValue}" \
  -H "Content-Type: ${contentType}" \
  -H "Authorization: AWS ${s3Key}:${signature}" \
  https://${bucket}.s3.amazonaws.com/${wpfile}
 
echo "done!"

To make WordPress backup a daily routine, you can set up a cron job that executes either script once a day.
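For example, a crontab entry like the following (the script path here is hypothetical; use wherever you saved your copy) runs the backup every night at 3:00 am:

```
# m h dom mon dow command
0 3 * * * /home/user/bin/wordpress-backup.sh > /dev/null 2>&1
```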
Conclusion

Regular website backup goes a long way to protect your long-term investment in your website. In this tutorial, I present a way to back up a WordPress website to remote cloud storage such as Dropbox and AWS S3.

Another approach to WordPress backup is performing a full backup followed by rsync-based incremental backups. During incremental backups, only changes made to your WordPress site (e.g., newly created, modified or deleted files) since the latest backup are archived. Incremental backups can save disk space (and bandwidth for offsite backup) if your website rarely gets updated. The price you pay, however, comes when you want to restore a backup: you need the last full backup and every incremental backup in between, restored one by one in exact sequence. Also, the remote storage would need to speak the rsync protocol, which most existing cloud storage services do not.

I personally prefer full WordPress backup to incremental snapshots due to the simplicity of the former. But everyone is entitled to his or her own opinion so feel free to share yours. Happy backup!
3
General Linux / How to configure networking in CentOS Desktop with command line
« Last post by akhilt on August 05, 2018, 02:51:19 pm »

If you want to configure networking on CentOS Desktop using command line utilities, you should be aware that networking on CentOS Desktop is managed by default by a GUI-based daemon called NetworkManager. This means that whenever you want to change the network configuration, you are supposed to do so via NetworkManager. Any change made otherwise will be lost or overwritten by NetworkManager later.

So the first step to configuring networking from the command line on CentOS Desktop is to disable NetworkManager and enable the network service.

To disable NetworkManager permanently on CentOS, do the following.
Code: [Select]
$ sudo service NetworkManager stop
$ sudo chkconfig NetworkManager off


Then, activate network service instead.

Code: [Select]
$ sudo service network start
$ sudo chkconfig network on

Once NetworkManager is disabled, you can configure networking simply by editing the files in /etc/sysconfig/network-scripts.
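For instance, a static IPv4 configuration for eth0 in /etc/sysconfig/network-scripts/ifcfg-eth0 might look like the sketch below; all addresses are made-up examples to adapt to your own network:

```
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.50
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=8.8.8.8
```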

When you are done configuring all existing interfaces, restart the network service to activate the change.

Code: [Select]
$ sudo /etc/init.d/network restart
4
General Linux / How to block network traffic by country on Linux
« Last post by akhilt on August 05, 2018, 02:13:03 pm »

As a system admin who maintains production Linux servers, you will encounter circumstances where you need to selectively block or allow network traffic based on geographic location. For example, you may be experiencing denial-of-service attacks mostly originating from IP addresses registered in a particular country. In other cases, you may want to block SSH logins from unknown foreign countries for security reasons. Or your company has a distribution right to online videos, which allows it to legally stream to particular countries only. Or you need to prevent any local host from uploading documents to any non-US remote cloud storage due to geo-restriction company policies.

All these scenarios require an ability to set up a firewall which does country-based traffic filtering. There are a couple of ways to do that. For one, you can use TCP wrappers to set up conditional blocking for individual applications (e.g., SSH, NFS, httpd). The downside is that the application you want to protect must be built with TCP wrappers support. Besides, TCP wrappers are not universally available across different platforms (e.g., Arch Linux dropped its support). An alternative approach is to set up ipset with country-based GeoIP information and apply it to iptables rules. The latter approach is more promising as the iptables-based filtering is application-agnostic and easy to set up.

In this tutorial, I am going to present another iptables-based GeoIP filtering which is implemented with xtables-addons. For those unfamiliar with it, xtables-addons is a suite of extensions for netfilter/iptables. Included in xtables-addons is a module called xt_geoip which extends the netfilter/iptables to filter, NAT or mangle packets based on source/destination countries. For you to use xt_geoip, you don't need to recompile the kernel or iptables, but only need to build xtables-addons as modules, using the current kernel build environment (/lib/modules/`uname -r`/build). Reboot is not required either. As soon as you build and install xtables-addons, xt_geoip is immediately usable with iptables.

As for the comparison between xt_geoip and ipset, the official source mentions that xt_geoip is superior to ipset in terms of memory footprint. But in terms of matching speed, hash-based ipset might have an edge.

In the rest of the tutorial, I am going to show how to use iptables/xt_geoip to block network traffic based on its source/destination countries.

Install Xtables-addons on Linux

Here is how you can compile and install xtables-addons on various Linux platforms.

To build xtables-addons, you need to install a couple of dependent packages first.

Install Dependencies on Debian, Ubuntu or Linux Mint
Code: [Select]
$ sudo apt-get install iptables-dev xtables-addons-common libtext-csv-xs-perl pkg-config
Install Dependencies on CentOS, RHEL or Fedora

CentOS/RHEL 6 requires the EPEL repository to be set up first (for perl-Text-CSV_XS).
Code: [Select]
$ sudo yum install gcc-c++ make automake kernel-devel-`uname -r` wget unzip iptables-devel perl-Text-CSV_XS
Compile and Install Xtables-addons

Download the latest xtables-addons source code from the official site, and build/install it as follows.
Code: [Select]
$ wget http://downloads.sourceforge.net/project/xtables-addons/Xtables-addons/xtables-addons-2.10.tar.xz
$ tar xf xtables-addons-2.10.tar.xz
$ cd xtables-addons-2.10
$ ./configure
$ make
$ sudo make install

Note that for Red Hat based systems (CentOS, RHEL, Fedora) which have SELinux enabled by default, it is necessary to adjust the SELinux policy as follows. Otherwise, SELinux will prevent iptables from loading the xt_geoip module.

Code: [Select]
$ sudo chcon -vR --user=system_u /lib/modules/$(uname -r)/extra/*.ko
$ sudo chcon -vR --type=lib_t /lib64/xtables/*.so

Install GeoIP Database for Xtables-addons

The next step is to install the GeoIP database which will be used by xt_geoip for IP-to-country mapping. Conveniently, the xtables-addons source package comes with two helper scripts for downloading the GeoIP database from MaxMind and converting it into a binary form recognized by xt_geoip. These scripts are found in the geoip folder inside the source package. Follow the instructions below to build and install the GeoIP database on your system.

Code: [Select]
$ cd geoip
$ ./xt_geoip_dl
$ ./xt_geoip_build GeoIPCountryWhois.csv
$ sudo mkdir -p /usr/share/xt_geoip
$ sudo cp -r {BE,LE} /usr/share/xt_geoip

According to MaxMind, their GeoIP database is 99.8% accurate on a country level, and the database is updated every month. To keep the locally installed GeoIP database up-to-date, you want to set up a monthly cron job that refreshes it.
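One way to automate the refresh is a monthly cron job that re-runs the two helper scripts. A sketch, assuming you kept the scripts under /usr/local/src/xtables-addons-2.10/geoip (adjust the path to wherever you unpacked the source):

```
# refresh the xt_geoip database on the 5th of every month at 2:00 am
0 2 5 * * cd /usr/local/src/xtables-addons-2.10/geoip && ./xt_geoip_dl && ./xt_geoip_build GeoIPCountryWhois.csv && cp -r BE LE /usr/share/xt_geoip
```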

Block Network Traffic Originating from or Destined to a Country

Once xt_geoip module and GeoIP database are installed, you can immediately use the geoip match options in iptables command.

Code: [Select]
$ sudo iptables -m geoip --src-cc country[,country...] --dst-cc country[,country...]
Countries you want to block are specified using their two-letter ISO 3166 codes (e.g., US (United States), CN (China), IN (India), FR (France)).

For example, if you want to block incoming traffic from Yemen (YE) and Zambia (ZM), the following iptables command will do.
Code: [Select]
$ sudo iptables -I INPUT -m geoip --src-cc YE,ZM -j DROP
If you want to block outgoing traffic destined to China (CN), run the following command.
Code: [Select]
$ sudo iptables -A OUTPUT -m geoip --dst-cc CN -j DROP
The matching condition can also be "negated" by prepending "!" to "--src-cc" or "--dst-cc". For example:

If you want to block all incoming non-US traffic on your server, run this:

Code: [Select]
$ sudo iptables -I INPUT -m geoip ! --src-cc US -j DROP
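The geoip match also combines with ordinary iptables matches, which covers the SSH scenario from the introduction. For instance, to drop SSH connection attempts coming from one specific country (the country code here is just an example), a rule sketch like this would do:

```
$ sudo iptables -I INPUT -p tcp --dport 22 -m geoip --src-cc CN -j DROP
```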
For Firewall-cmd Users

Some distros such as CentOS/RHEL 7 or Fedora have replaced iptables with firewalld as the default firewall service. On such systems, you can use firewall-cmd to block traffic using xt_geoip similarly. The above three examples can be rewritten with firewall-cmd as follows.
Code: [Select]
$ sudo firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -m geoip --src-cc YE,ZM -j DROP
$ sudo firewall-cmd --direct --add-rule ipv4 filter OUTPUT 0 -m geoip --dst-cc CN -j DROP
$ sudo firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -m geoip ! --src-cc US -j DROP
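One caveat: rules added through --direct this way are runtime-only and disappear on reboot. To keep a rule across reboots, add it with the --permanent flag as well and reload, e.g.:

```
$ sudo firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 0 -m geoip --dst-cc CN -j DROP
$ sudo firewall-cmd --reload
```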

Conclusion

In this tutorial, I presented iptables/xt_geoip, which is an easy way to filter network packets based on their source/destination countries. This can be a useful weapon in your firewall arsenal if needed. As a final word of caution, I should mention that GeoIP-based traffic filtering is not a foolproof way to ban certain countries from your server. The GeoIP database is by nature inaccurate/incomplete, and source/destination geography can easily be spoofed using VPN, Tor or any compromised relay host. Geography-based filtering can even block legitimate traffic that should not be banned. Understand this limitation before you decide to deploy it in your production environment.
5
General Linux / Install and Configure PostgreSQL from Source on Linux
« Last post by akhilt on August 05, 2018, 01:36:39 pm »

Similar to MySQL, PostgreSQL is a famous, feature-packed, free and open source database.

In this article, let us review how to install the PostgreSQL database on Linux from source code.

Step 1: Download PostgreSQL source code

From the PostgreSQL download site (https://www.postgresql.org/ftp/source/), choose the mirror site located in your country.

Step 2: Install PostgreSQL
Code: [Select]
# tar xvfz postgresql-8.3.7.tar.gz

# cd postgresql-8.3.7

# ./configure

# make

# make install

PostgreSQL ./configure options

Following are various options that can be passed to ./configure:

    --prefix=PREFIX install architecture-independent files in PREFIX. Default installation location is /usr/local/pgsql
    --enable-integer-datetimes  enable 64-bit integer date/time support
    --enable-nls[=LANGUAGES]  enable Native Language Support
    --disable-shared         do not build shared libraries
    --disable-rpath           do not embed shared library search path in executables
    --disable-spinlocks    do not use spinlocks
    --enable-debug           build with debugging symbols (-g)
    --enable-profiling       build with profiling enabled
    --enable-dtrace           build with DTrace support
    --enable-depend         turn on automatic dependency tracking
    --enable-cassert         enable assertion checks (for debugging)
    --enable-thread-safety  make client libraries thread-safe
    --enable-thread-safety-force  force thread-safety despite thread test failure
    --disable-largefile       omit support for large files
    --with-docdir=DIR      install the documentation in DIR [PREFIX/doc]
    --without-docdir         do not install the documentation
    --with-includes=DIRS  look for additional header files in DIRS
    --with-libraries=DIRS  look for additional libraries in DIRS
    --with-libs=DIRS         alternative spelling of --with-libraries
    --with-pgport=PORTNUM   change default port number [5432]
    --with-tcl                     build Tcl modules (PL/Tcl)
    --with-tclconfig=DIR   tclConfig.sh is in DIR
    --with-perl                   build Perl modules (PL/Perl)
    --with-python              build Python modules (PL/Python)
    --with-gssapi               build with GSSAPI support
    --with-krb5                  build with Kerberos 5 support
    --with-krb-srvnam=NAME  default service principal name in Kerberos [postgres]
    --with-pam                  build with PAM support
    --with-ldap                  build with LDAP support
    --with-bonjour            build with Bonjour support
    --with-openssl            build with OpenSSL support
    --without-readline      do not use GNU Readline nor BSD Libedit for editing
    --with-libedit-preferred  prefer BSD Libedit over GNU Readline
    --with-ossp-uuid        use OSSP UUID library when building contrib/uuid-ossp
    --with-libxml               build with XML support
    --with-libxslt               use XSLT support when building contrib/xml2
    --with-system-tzdata=DIR  use system time zone data in DIR
    --without-zlib              do not use Zlib
    --with-gnu-ld              assume the C compiler uses GNU ld [default=no]

PostgreSQL Installation Issue 1:

You may encounter the following error message while running ./configure during PostgreSQL installation.

Code: [Select]
# ./configure
checking for -lreadline... no
checking for -ledit... no
configure: error: readline library not found
If you have readline already installed, see config.log for details on the
failure.  It is possible the compiler isn't looking in the proper directory.
Use --without-readline to disable readline support.

PostgreSQL Installation Solution 1:

Install the readline-devel and libtermcap-devel packages to solve the above issue.

Code: [Select]
# rpm -ivh libtermcap-devel-2.0.8-46.1.i386.rpm readline-devel-5.1-1.1.i386.rpm
Step 3: Verify the PostgreSQL directory structure

After the installation, make sure bin, doc, include, lib, man and share directories are created under the default /usr/local/pgsql directory as shown below.

Code: [Select]
# ls -l /usr/local/pgsql/

drwxr-xr-x 2 root root 4096 Aug  5 23:25 bin
drwxr-xr-x 3 root root 4096 Aug  5 23:25 doc
drwxr-xr-x 6 root root 4096 Aug  5 23:25 include
drwxr-xr-x 3 root root 4096 Aug 5 23:25 lib
drwxr-xr-x 4 root root 4096 Aug 5 23:25 man
drwxr-xr-x 5 root root 4096 Aug 5 23:25 share

Step 4: Create PostgreSQL user account
Code: [Select]
# adduser postgres

# passwd postgres
Changing password for user postgres.
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully.

Step 5: Create PostgreSQL data directory

Create the postgres data directory and make the postgres user its owner.

Code: [Select]
# mkdir /usr/local/pgsql/data

# chown postgres:postgres /usr/local/pgsql/data

# ls -ld /usr/local/pgsql/data
drwxr-xr-x 2 postgres postgres 4096 Apr  8 23:26 /usr/local/pgsql/data

Step 6: Initialize PostgreSQL data directory

Before you can start creating any PostgreSQL database, the empty data directory created in the above step should be initialized using the initdb command as shown below.

Code: [Select]
# su - postgres

# /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data/
The files belonging to this database system will be owned by user postgres
This user must also own the server process.

The database cluster will be initialized with locale en_US.UTF-8.
The default database encoding has accordingly been set to UTF8.
The default text search configuration will be set to "english".

fixing permissions on existing directory /usr/local/pgsql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers/max_fsm_pages ... 32MB/204800
creating configuration files ... ok
creating template1 database in /usr/local/pgsql/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the -A option the
next time you run initdb.

Success. You can now start the database server using:

    /usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data
or
    /usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start

Step 7: Validate the PostgreSQL data directory

Make sure all postgres DB configuration files (for example, postgresql.conf) are created under the data directory as shown below.

Code: [Select]
$ ls -l /usr/local/pgsql/data
total 64
drwx------ 5 postgres postgres  4096 Aug 5 23:29 base
drwx------ 2 postgres postgres  4096 Aug 5 23:29 global
drwx------ 2 postgres postgres  4096 Aug 5 23:29 pg_clog
-rw------- 1 postgres postgres  3429 Aug 5 23:29 pg_hba.conf
-rw------- 1 postgres postgres  1460 Aug 5 23:29 pg_ident.conf
drwx------ 4 postgres postgres  4096 Aug 5 23:29 pg_multixact
drwx------ 2 postgres postgres  4096 Aug 5 23:29 pg_subtrans
drwx------ 2 postgres postgres  4096 Aug 5 23:29 pg_tblspc
drwx------ 2 postgres postgres  4096 Aug 5 23:29 pg_twophase
-rw------- 1 postgres postgres     4 Aug 5 23:29 PG_VERSION
drwx------ 3 postgres postgres  4096 Aug 5 23:29 pg_xlog
-rw------- 1 postgres postgres 16592 Aug 5 23:29 postgresql.conf

Step 8: Start PostgreSQL database

Use the postmaster command to start the PostgreSQL server in the background as shown below.

Code: [Select]
$ /usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data >logfile 2>&1 &
[1] 2222

$ cat logfile
LOG:  database system was shut down at 2009-04-08 23:29:50 PDT
LOG:  autovacuum launcher started
LOG:  database system is ready to accept connections

Step 9: Create PostgreSQL DB and test the installation

Create a test database and connect to it to make sure the installation was successful as shown below. Once you start using the database, take backups frequently.

Code: [Select]
$ /usr/local/pgsql/bin/createdb test

$ /usr/local/pgsql/bin/psql test
Welcome to psql 8.3.7, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

test=#
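From here, any SQL statement can be executed at the prompt; for example, a quick check of the server version makes a handy smoke test:

```
test=# SELECT version();
```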

Thank you for reading this article.  :D
6
General Linux / Install and Configure MariaDB MySQL on CentOS / RedHat
« Last post by akhilt on August 05, 2018, 12:05:33 pm »

Starting from CentOS 7, you will not see a package called mysql-server in the yum repository.

Now the package is called mariadb-server.

The original MySQL is now owned by Oracle Corporation.

But MariaDB is a fork of the original MySQL database. Just like the original MySQL, MariaDB is also open source, developed by the open source community, and maintained and supported by MariaDB Corporation.

From our point of view, only the package name has changed. MariaDB is still MySQL, and all the mysql command line utilities still have exactly the same names, including the command called mysql.

This tutorial explains step by step how to install and configure MariaDB on CentOS or RedHat based Linux distros.

1. MariaDB MySQL Packages

The following are the three main MariaDB packages:

> mariadb-5.5.52-1.el7.x86_64 – This contains several MySQL client programs and utilities.
> mariadb-server-5.5.52-1.el7.x86_64 – This is the main MariaDB MySQL database server.
> mariadb-libs-5.5.52-1.el7.x86_64 – This contains the shared libraries required by the client programs.

The current version of MariaDB-server that is available on CentOS 7 yum repository is 5.5.52 as shown below.

Code: [Select]
# yum info mariadb-server
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: repos-va.psychz.net
 * extras: linux.cc.lehigh.edu
 * updates: mirror.us.leaseweb.net
Available Packages
Name        : mariadb-server
Arch        : x86_64
Epoch       : 1
Version     : 5.5.52
Release     : 1.el7
Size        : 11 M
Repo        : base/7/x86_64
..

2. Install MariaDB MySQL Server

Install the MariaDB MySQL server package as shown below using yum install.
Code: [Select]
# yum install mariadb-server
On this server, this installed mariadb-server along with the following dependent packages.

    - mariadb-server.x86_64 1:5.5.52-1.el7
    - mariadb-libs.x86_64 1:5.5.52-1.el7
    - mariadb.x86_64 1:5.5.52-1.el7
    - libaio.x86_64 0:0.3.109-13.el7
    - perl-DBD-MySQL.x86_64 0:4.023-5.el7
    - perl-DBI.x86_64 0:1.627-4.el7
    - perl-Compress-Raw-Bzip2.x86_64 0:2.061-3.el7
    - perl-Compress-Raw-Zlib.x86_64 1:2.061-4.el7
    - perl-Data-Dumper.x86_64 0:2.145-3.el7
    - perl-IO-Compress.noarch 0:2.061-2.el7
    - perl-Net-Daemon.noarch 0:0.48-5.el7
    - perl-PlRPC.noarch 0:0.2020-14.el7

Verify that the three main MariaDB MySQL packages are installed.
Code: [Select]
# rpm -qa | grep -i maria
mariadb-5.5.52-1.el7.x86_64
mariadb-server-5.5.52-1.el7.x86_64
mariadb-libs-5.5.52-1.el7.x86_64

3. Startup MariaDB Database

As you can see below, the mariadb database server service is loaded, but not started yet.
Code: [Select]
# systemctl status mariadb
● mariadb.service - MariaDB database server
   Loaded: loaded (/usr/lib/systemd/system/mariadb.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

Start the MariaDB mysql server using systemctl as shown below.
Code: [Select]
# systemctl start mariadb
Verify the systemctl status to make sure the mariadb database server is started successfully.
Code: [Select]
# systemctl status mariadb
● mariadb.service - MariaDB database server
   Loaded: loaded (/usr/lib/systemd/system/mariadb.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2017-06-26 18:26:35 UTC; 13s ago
  Process: 4049 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=0/SUCCESS)
  Process: 3969 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS)
 Main PID: 4048 (mysqld_safe)
   CGroup: /system.slice/mariadb.service
           +-4048 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
           +-4206 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/log/mariadb/ma...

Jun 26 18:26:32 deploy mariadb-prepare-db-dir[3969]: The latest information about MariaDB is available at http://mariadb.org/.
Jun 26 18:26:32 deploy mariadb-prepare-db-dir[3969]: You can find additional information about the MySQL part at:
Jun 26 18:26:32 deploy mariadb-prepare-db-dir[3969]: http://dev.mysql.com
Jun 26 18:26:32 deploy mariadb-prepare-db-dir[3969]: Support MariaDB development by buying support/new features from MariaDB
Jun 26 18:26:32 deploy mariadb-prepare-db-dir[3969]: Corporation Ab. You can contact us about this at sales@mariadb.com.
Jun 26 18:26:32 deploy mariadb-prepare-db-dir[3969]: Alternatively consider joining our community based development effort:
Jun 26 18:26:32 deploy mariadb-prepare-db-dir[3969]: http://mariadb.com/kb/en/contributing-to-the-mariadb-project/
Jun 26 18:26:32 deploy mysqld_safe[4048]: 170601 18:26:32 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.
Jun 26 18:26:32 deploy mysqld_safe[4048]: 170601 18:26:32 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
Jun 26 18:26:35 deploy systemd[1]: Started MariaDB database server.

4. Connect and Verify MariaDB Server

Use the mysql command as shown below to connect to the database as the mysql root user.
Code: [Select]
# mysql -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 5.5.52-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>

The following show databases command displays the default MySQL databases.
Code: [Select]
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
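As a quick sanity check that the server also accepts writes (the database name scratchdb here is just an arbitrary example), you can create and drop a throwaway database from the same prompt:

```sql
-- create a scratch database, confirm it is listed, then remove it
CREATE DATABASE scratchdb;
SHOW DATABASES;
DROP DATABASE scratchdb;
```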

5. Perform MariaDB Post Installation Steps

As you can see above, by default the installation doesn’t assign any password to the MySQL root account.

To set the mysql root user password and perform other security configuration on the database, execute the mysql_secure_installation script as shown below.
Code: [Select]
# /usr/bin/mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Since this is the first time we are running this script, there is no password assigned to the mysql root account yet, so just press enter here.

At this stage, answer “y” to assign a password to the MySQL root account, then enter the new password.

Please note that this mysql root account is different from the Linux root account. Here we are setting the password for the mysql root account, which has nothing to do with the Linux root account.
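For reference, the password step that the script performs can also be done manually from the mysql prompt. A sketch for MariaDB 5.5 ('MySecurePassword' is a placeholder):

```sql
-- set the mysql root password by hand (roughly what the script's first step does)
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('MySecurePassword');
```

Running the full mysql_secure_installation script is still the recommended route, since it also handles the cleanup steps that follow.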

Code: [Select]
Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

As part of the default installation, MySQL creates an anonymous user who can log in to the database without a real user account. We should really remove this user.

Code: [Select]
Remove anonymous users? [Y/n] y
 ... Success!
Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

As you can imagine, the mysql root account has access to all the MySQL databases, so it is important to keep it secure. We should also make sure that remote clients from other servers are not allowed to connect using this root account.

Instead, only localhost (where the mysql server is installed) should be able to connect using the root account, so we should disallow remote root login.

Code: [Select]
Disallow root login remotely? [Y/n] y
 ... Success!

This is the default test database, which should be removed.

Code: [Select]
Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Type y here to make sure all the changes made so far take effect immediately.
Code: [Select]
Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

6. Validate MySQL root access

Now, if you connect to MySQL without the root password, you’ll get the following access-denied error message.

Code: [Select]
# mysql -u root
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)

To specify the password, use the -p option as shown below. This will prompt for the password.

Code: [Select]
# mysql -u root -p
Enter password:

Also, as you see below from the show databases command, the test database is now removed.

Code: [Select]
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+

If you want to pass the password on the mysql command line, specify it immediately after the -p option, with no space in between, as shown below.
Code: [Select]
# mysql -u root -pMySecurePassword
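Note that passing the password this way exposes it to other users via the process list and leaves it in your shell history. A safer alternative is a client option file in your home directory, which the mysql client reads automatically (the password below is a placeholder):

```ini
# ~/.my.cnf -- make it readable only by you: chmod 600 ~/.my.cnf
[client]
user=root
password=MySecurePassword
```

With this in place, a plain mysql command connects without prompting.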
Hope you find this article useful. Thank you!


7
General Linux / Install and configure snort
« Last post by akhilt on August 05, 2018, 11:39:53 am »
Install and configure snort

Snort is a free lightweight network intrusion detection system for both UNIX and Windows.

In this article, let us review how to install snort from source, write rules, and perform basic testing.

1. Download and Extract Snort

Download the latest free version of snort from the snort website (https://www.snort.org/downloads), then extract the source code to the /usr/src directory as shown below.

Code: [Select]
# cd /usr/src

# wget https://www.snort.org/downloads/snort/snort-2.9.11.1.tar.gz

# tar -xvf snort-2.9.11.1.tar.gz

2. Install Snort

Before installing snort, make sure you have the dev packages of libpcap and libpcre.

Code: [Select]
# apt-cache policy libpcap0.8-dev
libpcap0.8-dev:
  Installed: 1.0.0-2ubuntu1
  Candidate: 1.0.0-2ubuntu1

# apt-cache policy libpcre3-dev
libpcre3-dev:
  Installed: 7.8-3
  Candidate: 7.8-3

Follow the steps below to install snort.

Code: [Select]
# cd snort-2.9.11.1

# ./configure

# make

# make install

3. Verify the Snort Installation

Verify the installation as shown below.
Code: [Select]
# snort --version

   ,,_     -*> Snort! <*-
  o"  )~   Version 2.8.6.1 (Build 39) 
   ''''    By Martin Roesch & The Snort Team: http://www.snort.org/snort/snort-team
           Copyright (C) 1998-2010 Sourcefire, Inc., et al.
           Using PCRE version: 7.8 2008-09-05

4. Create the required files and directory

You have to create the configuration file, rule file and the log directory.

Create the following directories:

Code: [Select]
# mkdir /etc/snort

# mkdir /etc/snort/rules

# mkdir /var/log/snort

Create the following snort.conf and icmp.rules files:

Code: [Select]
# cat /etc/snort/snort.conf
include /etc/snort/rules/icmp.rules

# cat /etc/snort/rules/icmp.rules
alert icmp any any -> any any (msg:"ICMP Packet"; sid:477; rev:3;)

The above basic rule generates an alert whenever an ICMP packet (ping) is seen.

The rule follows this structure:
<Rule Actions> <Protocol> <Source IP Address> <Source Port> <Direction Operator> <Destination IP Address> <Destination Port> (rule options)
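Mapping our icmp.rules entry onto that structure, field by field (sid is just an arbitrary rule identifier and rev its revision number; for ICMP the port fields are required by the syntax but effectively ignored):

```text
# <action> <proto> <src IP> <src port> <dir> <dst IP> <dst port> (options)
alert icmp any any -> any any (msg:"ICMP Packet"; sid:477; rev:3;)
#
# alert     - rule action: log the packet and generate an alert
# icmp      - protocol to match
# any any   - any source IP address, any source port
# ->        - direction operator: from source to destination
# any any   - any destination IP address, any destination port
# msg       - text printed with the alert
# sid / rev - rule ID and revision, used to track the rule
```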


5. Execute snort

Execute snort from command line, as mentioned below.
Code: [Select]
# snort -c /etc/snort/snort.conf -l /var/log/snort/
Try pinging some IP from your machine to trigger the ping rule. The following is an example of a snort alert for this ICMP rule.

Code: [Select]
# head /var/log/snort/alert
[**] [1:477:3] ICMP Packet [**]
[Priority: 0]
07/27-20:41:57.230345 > l/l len: 0 l/l type: 0x200 0:0:0:0:0:0
pkt type:0x4 proto: 0x800 len:0x64
209.85.231.102 -> 209.85.231.104 ICMP TTL:64 TOS:0x0 ID:0 IpLen:20 DgmLen:84 DF
Type:8  Code:0  ID:24905   Seq:1  ECHO

Alert Explanation

A couple of lines are added for each alert, which include the following:

    - Message is printed in the first line.
    - Source IP
    - Destination IP
    - Type of packet, and header information.

If your network connection uses a different interface, specify it with the -i option (shown here together with -dev to dump packet data). In this example the network interface is ppp0.

Code: [Select]
# snort -dev -i ppp0 -c /etc/snort/snort.conf -l /var/log/snort/
Execute snort as Daemon

Add -D option to run snort as a daemon.
Code: [Select]
# snort -D -c /etc/snort/snort.conf -l /var/log/snort/
Additional Snort information

    - The default config file is available at snort-2.9.11.1/etc/snort.conf
    - Default rules can be downloaded from: http://www.snort.org/snort-rules
8
General Linux / Three Sysadmin Rules You Should not Break
« Last post by akhilt on August 05, 2018, 11:12:41 am »
Three Sysadmin Rules You Should not Break

While habits are good, sometimes rules might even be better, especially in the sysadmin world, when handling a production environment.

Rule #1: Backup Everything ( and validate the backup regularly )

Experienced sysadmins know that a production system will crash someday, no matter how proactive we are. The best way to be prepared for that situation is to have a valid backup.

If you don’t have a backup of your critical systems, you should start planning for it immediately. While planning for a backup, keep the following factors in your mind:


    - What software (or custom script) would you use to take the backup?
    - Do you have enough disk space to keep the backups?
    - How often would you rotate the backups?
    - Apart from full backups, do you also need regular incremental backups?
    - How would you schedule the backup, e.g. using crontab or another scheduler?

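The checklist above can be sketched as a minimal backup helper. This is an illustration, not a production solution; the paths in the usage comment and the retention default are assumptions to adapt:

```shell
#!/bin/sh
# backup_dir SRC DEST KEEP_DAYS
# Creates a timestamped tar.gz of SRC under DEST, prunes archives older
# than KEEP_DAYS days, and verifies the new archive is readable.
backup_dir() {
    src="$1"; dest="$2"; keep="${3:-14}"
    mkdir -p "$dest" || return 1
    stamp=$(date +%Y%m%d-%H%M%S)
    archive="$dest/backup-$stamp.tar.gz"
    # -C keeps paths in the archive relative to SRC's parent directory
    tar -czf "$archive" -C "$(dirname "$src")" "$(basename "$src")" || return 1
    # Rotate: remove archives older than the retention window
    find "$dest" -name 'backup-*.tar.gz' -mtime +"$keep" -delete
    # Validate: a backup you cannot read back is not a backup
    gzip -t "$archive"
}

# Example (paths are placeholders for your environment):
# backup_dir /var/www /backup 14
```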

Rule #2: Master the Command Line ( and avoid the UI if possible )

There is not a single task on a Unix/Linux server that you cannot perform from the command line. While there are user interfaces available to make some sysadmin tasks easier, you really don’t need them, and you should be using the command line all the time.

So, if you are a Linux sysadmin, you should master the command line.

On any system, if you want to be fluent and productive, you should master the command line. The main difference between a Windows sysadmin and a Linux sysadmin is GUI vs. command line: Windows sysadmins are often not very comfortable with the command line, while a Linux sysadmin should be very comfortable with it.

Even when you have a UI for a certain task, you should still prefer the command line, as doing the task there teaches you how the particular service works. In many production server environments, sysadmins typically uninstall all GUI-related services and tools.

If you are Unix / Linux sysadmin and don’t want to follow this rule, probably there is a deep desire inside you to become a Windows sysadmin.  :)

Rule #3: Automate Everything ( and become lazy )

Lazy sysadmin is the best sysadmin.

There is not a single sysadmin I know of who likes to break this rule. That might have something to do with the lazy part.

Take a few minutes to think about and list all the routine tasks you do daily, weekly, or monthly. Once you have that list, figure out how to automate them. The best sysadmins typically don’t like to be busy; they would rather relax and let the system do the job for them.
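Once a routine task lives in a script, a single cron entry is enough to automate it. A sketch, assuming a hypothetical script installed at /usr/local/bin/backup.sh:

```
# min hour dom mon dow  command        (run nightly at 02:30, keep a log)
30    2    *   *   *    /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1
```

Install it with crontab -e, and review the log occasionally; the advice in rule #1 to validate backups regularly applies to automated jobs too.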
9
General Linux / Instantly Share Terminal Session With Other Linux Users
« Last post by akhilt on August 05, 2018, 10:55:42 am »
Instantly Share Terminal Session With Other Linux Users


If you want to share the SSH session terminal with other users over a secure network, tmate is your friend.

tmate is a terminal multiplexer with instant terminal sharing, i.e., it enables sharing your terminal session with a number of trusted users. It is similar to the concept of multicasting. All the recipients get to view the terminal session over an SSH connection.

tmate is actually a fork of Tmux, a popular terminal multiplexer that lets you use several programs in a single Terminal. It gives you an IDE kind of experience in the terminal window.

How tmate works

When you start tmate, it first establishes an SSH (secure shell) connection to the tmate.io website, which acts as a server on the internet. Once the connection is established, a random SSH URL token is generated for each session and displayed at the bottom of your terminal. The terminal is now ready to be shared.

Trusted teammates can access your terminal session through the URL ID and use it for as long as the connection is active. In my opinion, tmate is best used to assist in group projects, to debug a project with a team of developers, or to get technical support over a remote network.

How to install tmate in Linux

tmate is a popular program, and hence it is available in the default repositories of most Linux distributions. All you have to do is use your Linux distribution’s package manager to install it.

In Debian and Ubuntu-based Linux distributions, use this command:
Code: [Select]
sudo apt install tmate
For Fedora, you can use this command:
Code: [Select]
sudo dnf install tmate
tmate is available in the AUR (Arch User Repository), so you can use your favorite AUR helper on Arch Linux:
Code: [Select]
yaourt -S tmate
In openSUSE, you can use zypper to install tmate.
Code: [Select]
sudo zypper in tmate
How to share terminal with tmate

Step 1: Generate SSH Key-Pair

To use tmate, we need to create an SSH key pair. tmate first establishes a secure SSH connection from the host machine to the tmate.io website using that key pair.

The tmate.io server also authenticates every client machine that tries to connect to the host terminal using the same SSH keys. Hence, every system involved should have an SSH key generated.

Use this command to generate ssh-key:
Code: [Select]
ssh-keygen -t rsa
Step 2: Use tmate on host system

On the system whose terminal session will be shared, open a terminal and enter the tmate command.
Code: [Select]
tmate
In a few seconds, the SSH session ID will appear at the bottom of the screen (and disappear again shortly after). You need this session ID so that others can view your session.

To find the tmate session ID later, use the following command:
Code: [Select]
tmate show-messages
Step 3: Access tmate session

Share the SSH session ID with your trusted teammates and they can access your terminal using this command in their own terminal.
Code: [Select]
ssh <SSH_session_ID>
By default, tmate allows both read and write access to the shared terminal session, which means that anyone connected to your session can run commands in your terminal.

If you don’t want that, you can share the read-only session ID instead. If you look at the output of the show-messages command, you’ll notice there are several session IDs; the read-only session ID is among them.

You can share your terminal not only over SSH but through a web URL as well. You can get the web session URL in the show-messages output.

Step 4: End tmate session

Use “exit” command to exit the tmate session.
Code: [Select]
exit
Since tmate is based on tmux, you can use all tmux commands in tmate terminal sessions. This is very useful for Linux power users.

I hope you liked this quick article on sharing terminal with tmate.
10
The status module allows a server administrator to find out how well their server is performing. An HTML page is presented that gives the current server statistics in an easily readable form. If required, this page can be made to refresh automatically (given a compatible browser). Another page gives a simple machine-readable list of the current server state.

1. Connect to a Plesk server via SSH.
2. Check if the "status_module" is loaded:
 
Code: [Select]
# httpd -M | grep status

It will show "status_module (shared)" if it is installed.

3. If it is not, enable the module by navigating to Tools & Settings > Apache Web Server.

4. To make status reports visible to IP address 1.2.3.4 and localhost, add the code below to the Apache configuration file (create a new file if it does not exist):

Code: [Select]
# vim  /etc/httpd/conf.modules.d/status.conf
for Apache 2.2:

Code: [Select]
<IfModule mod_status.c>
    <Location /server-status>
        SetHandler server-status
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1 localhost ip6-localhost 1.2.3.4
    </Location>
    ExtendedStatus On
</IfModule>

for Apache 2.4:

Code: [Select]
<IfModule mod_status.c>
    <Location /server-status>
        SetHandler server-status
        <RequireAny>
            Require local
            Require ip 1.2.3.4
        </RequireAny>
    </Location>
    ExtendedStatus On
</IfModule>

5. Restart Apache:

Code: [Select]
# systemctl restart httpd