Automating the Backup Process in Proxmox: Practical Crontab Script and Configuration

In today’s world, where data is becoming increasingly valuable, proper backup management is crucial for the security of information systems. In this article, I present an effective way to automate the backup of key configuration files in Proxmox-based systems using a simple bash script and Crontab configuration.

Bash Script for Backup of the /etc Directory

The /etc directory contains critical system configuration files that are essential for the proper functioning of the operating system and various applications. Loss of or damage to these files can lead to serious problems. Below, I present an effective script, backup-etc.sh, that allows for the automated backup of this directory:
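A minimal sketch of such a script (the archive location, compression, and retention follow the steps listed below; the original can be downloaded from soban.pl as described later):

    #!/bin/bash
    # Timestamp embedded in the archive name to identify individual copies
    DATE=$(date +%Y-%m-%d_%H-%M)
    # Create a zstd-compressed tar archive of /etc in the Proxmox dump directory
    tar --zstd -cf "/var/lib/vz/dump/etc-backup-${DATE}.tar.zst" /etc
    # Remove archives older than 100 days to keep disk usage under control
    find /var/lib/vz/dump/ -name 'etc-backup-*.tar.zst' -mtime +100 -delete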

This script performs the following operations:

  1. Generates the current date and time, which are added to the name of the archive to easily identify individual copies.
  2. Uses the tar program with zstd compression to create an archived and compressed copy of the /etc directory.
  3. Removes archives older than 100 days from the /var/lib/vz/dump/ location, thus ensuring optimal disk space management.

Adding Script to Crontab

To automate the backup process, the script should be added to crontab. Below is a sample configuration that runs the script daily at 2:40 AM:
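For example (assuming the script is saved as /root/backup-etc.sh, as in the download step below):

    40 2 * * * /root/backup-etc.sh > /dev/null 2>&1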

Redirecting output to /dev/null ensures that the job runs quietly, without generating any output on standard output.

Download the Script from soban.pl

The backup-etc.sh script is also available for download from the soban.pl website. You can download it using the following wget command and immediately save it as /root/backup-etc.sh:
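The URL below is illustrative; use the exact link published on soban.pl:

    # URL is illustrative -- use the link published on soban.pl
    wget -O /root/backup-etc.sh https://soban.pl/backup-etc.sh
    chmod +x /root/backup-etc.sh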

With these commands, the script is downloaded from the server and granted the appropriate executable permissions.

Benefits and Modifications

The backup-etc.sh script is flexible and can be easily modified to suit different systems. By default, backups are placed in the /var/lib/vz/dump/ folder, which is the standard backup storage location in Proxmox environments. This simplifies backup management and can be easily integrated with existing backup solutions.

By keeping backups for 100 days, we ensure a balance between availability and disk space management. Old copies are automatically deleted, minimizing the risk of disk overflow and reducing data storage costs.

Summary

Automating backups using a bash script and Crontab is an effective method to secure critical system data. The backup-etc.sh script provides simplicity, flexibility, and efficiency, making it an excellent solution for Proxmox system administrators. I encourage you to adapt and modify this script according to your own needs to provide even better protection for your IT environment.

Proxy through an nginx frontend to a second virtual WordPress server

In a situation where we have one public IP address and many domains pointed at it, it is worth considering spreading the traffic across other servers. Proxmox, which lets you run multiple virtual machines, is perfect in such a situation. In my case, each virtual machine is isolated and the traffic is split by nginx, which distributes it to the other servers. The frontend virtual machine will redirect traffic to WordPress at 10.10.11.105 on port 80. On this internal leg no encryption is required, but the frontend itself, which manages the traffic, presents itself with encryption and security on port 443.

Two machines with the following configuration will participate throughout the process:
up-page IP: 10.10.14.200
soban-pl IP: 10.10.11.105

So let’s move on to the frontend that distributes traffic to other machines.
The frontend runs Debian 11 (bullseye); in addition, I have the following entry in the repository list (/etc/apt/sources.list):
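A standard bullseye entry of this shape (the exact mirror and components may differ):

    deb http://deb.debian.org/debian bullseye main contrib non-free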

To install nginx, run the following commands:
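    apt update
    apt install nginx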

You should make sure that the network path from the frontend to the backend on port 80 is open. You can read how to check network connectivity here: Check network connection and open TCP port via netcat.

[Screenshot: a successful telnet connection to 10.10.11.105 on port 80, ended with the 'quit' command.]

The configuration of the frontend that distributes the traffic is as follows (/etc/nginx/conf.d/soban.pl.ssl.conf):
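A sketch of such a vhost (the certificate paths assume Let's Encrypt defaults; adjust the server names and paths to your own setup):

    server {
        listen 443 ssl;
        server_name soban.pl www.soban.pl;

        # Certificates issued by Let's Encrypt (default client paths)
        ssl_certificate     /etc/letsencrypt/live/soban.pl/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/soban.pl/privkey.pem;

        location / {
            # Pass the traffic, unencrypted, to the WordPress virtual machine
            proxy_pass http://10.10.11.105:80;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }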

In the configuration of the WordPress vhost mentioned above, additional authorization is also set when you try to log in to wp-admin; you can read about it here: More security wp-admin in nginx.

In the next step, check if the nginx configuration is correct by:
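    nginx -t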

    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful

If everything is fine, restart nginx:
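    systemctl restart nginx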

nginx should also be installed in the virtual machine that hosts WordPress. It likewise runs Debian 11 (bullseye), so the repository entry should look the same:
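(The same illustrative entry as on the frontend:)

    deb http://deb.debian.org/debian bullseye main contrib non-free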

Installing nginx itself looks the same as on the machine that acts as the proxy.

All configuration is in /etc/nginx/conf.d/soban.pl.conf:
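A sketch of the backend vhost (the document root and PHP-FPM socket path are assumptions; bullseye ships PHP 7.4 by default):

    server {
        listen 80;
        server_name soban.pl www.soban.pl;

        root /var/www/wordpress;
        index index.php;

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            include snippets/fastcgi-php.conf;
            fastcgi_pass unix:/run/php/php7.4-fpm.sock;
        }
    }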

Also in this case, check the correctness of the nginx service configuration:
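    nginx -t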

Everything looks fine, so let’s move on to restarting the service:
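    systemctl restart nginx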

If the whole configuration was done correctly, the page should be served by directing unencrypted traffic to the virtual machine with WordPress. A WordPress service behind nginx is not the only thing that can be hosted or proxied this way: we can direct traffic from nginx to, for example, JBoss, Apache, and any other web service. Of course, this requires a corresponding modification of the configuration presented above, but the general outline of the nginx proxy concept has been presented. You should also remember about the appropriate configuration of keys and certificates; in my case, Let's Encrypt works perfectly for this.

Improving encryption on an old Red Hat 5 via a new Oracle Linux 7 using Apache mod_proxy

There are situations when we need to increase the encryption level on an old system, for example to meet PCI audit requirements. However, if the old system is no longer supported, updating its encryption level is not possible. This is not a recommended approach, because we should try to transfer the application to a new system; still, when we have little time, it is possible to hide the old system and allow only the new machine to connect to it. In this particular example, we will use mod_proxy to redirect traffic to the old machine, while with iptables we will allow communication only from the new machine. It is not a recommended solution, but it works, and I would like to present it here. The systems this example is based on are an old Red Hat 5 and a new Oracle Linux 7. Recently it has become very important, for example for banking transactions, to use a minimum of TLS 1.2 and nothing below. Let's start with the proxy server configuration on Oracle Linux 7.

As of this writing, the addressing is as follows:
new_machine IP: 10.10.14.100
old_machine IP: 10.10.14.101
Traffic will be routed on port 443 from new_machine to old_machine.

Before we go to the proxy configuration, please make sure the network path from new_machine (10.10.14.100) to old_machine (10.10.14.101) on port 443 is open. You can read how to verify network connections here: check network connection and open tcp port via netcat.

Let's proceed to the installation of Apache and mod_proxy:
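On Oracle Linux 7 mod_proxy ships with the httpd package itself, and mod_ssl provides the TLS support:

    yum install httpd mod_ssl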

After installing Apache, let's move on to editing its configuration:
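    # the file name is an example; any vhost file under /etc/httpd/conf.d/ will do
    vi /etc/httpd/conf.d/ssl.conf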

The key things to set are the minimum TLS level accepted from clients and the IP address of the old machine to which the service's traffic is forwarded:
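A sketch of the vhost on new_machine (the server name and certificate paths are placeholders):

    <VirtualHost *:443>
        ServerName example.com

        SSLEngine on
        # Enforce a minimum of TLS 1.2 towards clients
        SSLProtocol -all +TLSv1.2
        SSLCertificateFile /etc/pki/tls/certs/example.com.crt
        SSLCertificateKeyFile /etc/pki/tls/private/example.com.key

        # Forward everything to the old machine over its legacy TLS
        SSLProxyEngine on
        SSLProxyVerify none
        SSLProxyCheckPeerName off
        ProxyPass / https://10.10.14.101/
        ProxyPassReverse / https://10.10.14.101/
    </VirtualHost>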

In order to verify the correctness of apache configuration, you can issue a command that will check it:
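    apachectl configtest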

If the apache configuration is correct, we can proceed to reloading apache:
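    systemctl reload httpd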

At this point, we have a configured proxy connection. Before we move on to limiting traffic with iptables, I suggest you visit the site through the newly configured mod_proxy and test whether everything works properly and whether the application has any problems.

Once everything is working fine and the network paths are in place, we can move on to the iptables configuration on Red Hat 5. Let's start by checking the system version:
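    cat /etc/redhat-release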

Now we are going to prepare iptables so that the network traffic is available on port 443 from the new_machine (10.10.14.100). To do this, edit the file /etc/sysconfig/iptables:
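A rule pair along these lines (a sketch; adapt it to the chain names already used in the file and place it before any broader ACCEPT rules):

    # Allow HTTPS only from new_machine, drop everything else on 443
    -A INPUT -s 10.10.14.100 -p tcp --dport 443 -j ACCEPT
    -A INPUT -p tcp --dport 443 -j DROP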

After iptables settings are correct, we can reload the service:
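    service iptables restart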

In this way, we managed to mask the weak encryption by proxying and diverting traffic through the new machine. This is not a recommended solution, and you should try to transfer the application to an environment compatible with a new system; however, in crisis situations we can use it. Network traffic from other IP addresses is not allowed, so scanners will not be able to detect the weak encryption on the old machine, and users will not be able to reach the old environment directly. This does not change the fact that weak encryption is still configured in the old environment and needs to be corrected. The example I gave is for an old Red Hat 5 and a new Oracle Linux 7, but a similar solution and configuration should be possible for other versions of these systems.

Increasing the security of the ssh service

Nowadays, many bots and hackers look for port 22 on servers and try to log in, usually as the standard Linux root user. In this short article, I will describe how to create a user that can become root and how to change the default ssh port 22 to 2222. Let's go:
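    # create the user 'soban' with /bin/bash as its default shell
    useradd -m -s /bin/bash soban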

This way we created the user ‘soban’ and assigned it the default shell ‘/bin/bash’.

We still need to set a password for the user ‘soban’:
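    passwd soban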

In the next step, let’s add it to ‘/etc/sudoers’ so that it can become root. Keep in mind that once the user can get root, he will be able to do anything on the machine!

Please add this entry below:
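    # grants the user full sudo rights (edit the file with visudo)
    soban ALL=(ALL:ALL) ALL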

How can we test whether the user can become root? Nothing could be easier: first, switch to the user we just created:
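    su - soban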

To list the possible sudo commands, just type the command:
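    sudo -l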

Finally, to confirm that it is possible to become root, issue the command:
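    sudo su -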

Now that we have a user with root privileges ready, let's try disabling direct root login over ssh and change the default port. To do this, go to the default configuration of the ssh service, which is located in '/etc/ssh/sshd_config':
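    vi /etc/ssh/sshd_config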

We are looking for the lines containing 'Port' and 'PermitRootLogin' – they may be commented out, in which case they should be uncommented. Then set them as below:
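    Port 2222
    PermitRootLogin no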

In this way, we changed the default port 22 to 2222 and disallowed logging in directly as the root user. However, the ssh service still needs to be restarted; in Debian or Kali Linux we do it like this:
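    systemctl restart ssh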

In this way, we have managed to create a user who can safely log into the ssh service and become root. In addition, after changing the port, we will not show up in scans of port 22, which is the default port probed by a potential intruder. Installing the fail2ban service is also a very good security improvement.

iftop as a good network traffic monitoring tool

iftop is a command-line tool used for real-time network bandwidth monitoring. It displays a continuously updated list of network connections and the amount of data transferred between them. The connections are listed in a table format and are sorted by either the amount of data transferred or the total number of packets sent or received.

iftop provides a variety of filtering options, allowing you to limit the display to specific hosts, networks, or ports. It also provides support for IPv6, and it can display information about the source and destination IP addresses, port numbers, and protocols.

iftop is particularly useful for monitoring network traffic in real-time and identifying which applications or services are consuming the most bandwidth. It can also help identify network performance issues and can assist in troubleshooting network problems.

Overall, iftop is a powerful and flexible tool for network monitoring and analysis, and it can be a valuable addition to any network administrator’s toolkit.

One of the more useful network traffic monitoring tools I find is iftop. It is especially useful when the link's bandwidth is saturated. In my experience, it makes it easy to catch all kinds of network attacks, especially DoS. In the example below, I will send a large file to a remote machine with its upload speed limited, and in the meantime I will observe the traffic with the iftop tool. Let's start by installing iftop on the local machine, in this case Kali Linux:
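    sudo apt install iftop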

The distribution does not really matter here; iftop installs the same way on other systems, for example Debian.

We will do the same on the remote machine, so let's move on to installing iftop on Debian:
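    apt install iftop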

To start monitoring network traffic, run iftop with parameters: ‘-PpNn’:
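    # -P show ports, -p promiscuous mode, -N/-n skip port and host name resolution
    iftop -PpNn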

As I am connected to the remote machine over ssh, I can see my own network connection.

Now let's go back to the local machine and create a large file:
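One way to create a 1 GB test file:

    dd if=/dev/zero of=bigfile bs=1M count=1024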

Once we have created a 1GB file, let’s try to send it with a transfer limit to the remote machine:
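For example (the host and port are illustrative; -l takes the limit in Kbit/s):

    scp -l 800 -P 2222 bigfile soban@soban.pl:/tmp/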

In this case, I used scp with a limit of 800 to send the file. The limit is given in Kbit/s; to calculate how many KB/s this is, divide by 8, and a simple calculation gives 800/8 = 100 KB/s. To see how to send files with scp, I encourage you to read: Securely Copy Files (scp) tool to copying files by ssh.

When sending the file, the traffic on the local machine looked like this (outgoing traffic):

At the same time, it looked like this on the remote machine (incoming traffic):

As you can see, in this way you can catch both outgoing and incoming traffic. The iftop tool has more parameters; I encourage you to read the manual. It is a simple tool, yet it lets us easily observe network traffic live. In the case of brute force, a significant number of connections will be made; in the case of a DoS attack, the attacker will try to saturate the bandwidth, so the incoming traffic on the machine will be large. There are situations when the machine is naturally overloaded with network traffic; then you should limit the connection speed, and for this iptables works perfectly.

sshfs, a great tool to mount a remote file system

SSHFS (SSH File System) is a secure file transfer system that enables users to remotely access and manage files on a remote server over an encrypted SSH (Secure Shell) connection. SSHFS uses the SSH protocol to establish a secure connection between the local and remote systems, which enables users to securely transfer files between the two systems.

To use SSHFS, the user needs to have SSHFS installed on their local system as well as the remote system that they want to connect to. Once SSHFS is installed, the user can mount the remote system as a local directory on their system, and access the remote files as if they were stored locally.

SSHFS provides a secure and convenient way to access and manage files on remote systems, without the need for additional software or complicated configuration. It also enables users to access files on remote systems using standard file operations, such as copying, moving, and deleting, making it a simple and effective way to manage files on remote systems.

SSH Filesystem (sshfs) is a very useful tool for remotely transferring files over the ssh protocol, with the added advantage of encryption. It is a convenient way to mount a remote folder and work with its files. Below I will briefly show how to install sshfs and how to mount a folder remotely. Additionally, we will make an entry in /etc/fstab at the end, so that the resource is mounted automatically after a system restart. Let's move on to installing the tool itself:
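    sudo apt install sshfs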

In this case, as you can see, the installation was done on Kali Linux; however, the procedure is the same on Debian.

Let's move on to the mounting itself; at this point I will note that the default ssh port is 22. In my case, however, the port has been changed to 2222. For services such as ssh, I try to change the default ports so as not to get caught by bots and not end up in a database such as shodan.io. The command itself is very simple, but first we need to create a folder:
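    mkdir /home/kali/myremotedir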

Let’s try to mount a remote folder:
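    sshfs -p 2222 soban@soban.pl:/home/soban /home/kali/myremotedir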

During mounting, we will be asked whether the fingerprint is correct, and then for the system password. The command breaks down as follows: 'soban' is the username; 'soban.pl' is the domain name (you can also put an IP address here); the next element, '/home/soban', is the remote folder to be mounted; and after the space, '/home/kali/myremotedir' is the local folder where the remote folder should be mounted. If everything went as planned, we can list '/home/kali/myremotedir' and it should show the contents of the remotely mounted folder '/home/soban'. Let's list the contents of the '/home/kali/myremotedir' folder:

Let’s create a remote file:
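    touch /home/kali/myremotedir/example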

Now let’s unmount the remote folder and try listing it again:
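    fusermount -u /home/kali/myremotedir
    ls -l /home/kali/myremotedir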

As expected, the folder is now empty: the file was created on the remotely mounted drive, so after unmounting, '/home/kali/myremotedir/example' is no longer visible locally.

The next step is to create a private key so the folder can be mounted without entering a password. It is very important not to send your private key to anyone. How to generate and add a public key to a remote server can be read here: "Generate SSH key pair in Linux".

Now we will try to add an entry to /etc/fstab that will allow the remote file system to be mounted automatically at startup.
To do this, edit /etc/fstab and add an entry like the one below:
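An entry along these lines (a sketch; the uid/gid values must match your local user, which the id check below verifies):

    soban@soban.pl:/home/soban /home/kali/myremotedir fuse.sshfs defaults,_netdev,port=2222,IdentityFile=/home/kali/.ssh/id_rsa,uid=1000,gid=1000 0 0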

It is important that all the data is correct; to verify the uid/gid parameters, you can use the 'id' command:
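    id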

Now we can move on to mounting the resource:
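    mount /home/kali/myremotedir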

When mounting for the first time, we may be asked to accept and confirm that the fingerprint is correct. After verifying that the remote resource mounts correctly, we can restart the system. One note here: the system may take longer to boot, as it waits for the network mount.

Generate SSH key pair in Linux

A very convenient way to log into remote systems via ssh is without the use of passwords, using keys instead. Here it is very important not to share your private key with anyone. Currently, when trying to connect, I am asked for the password to the server:
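    ssh soban@soban.pl -p2222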

During the connection, we will be asked whether the fingerprint is correct, and then for the user password set on the remote server. In the ssh command 'soban@soban.pl -p2222' I gave the username 'soban', then the domain 'soban.pl', and '-p2222' for port '2222'. The default ssh port is 22, but in this case I changed it so that it does not show up in scans – this increases security, as bots and hackers often look for port 22, the default ssh port.

Let’s move on to generating the key and copying it to the server:
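    ssh-keygen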

This is how the key generation looks; I hit enter for each question:

As a result, a private key was generated (/home/kali/.ssh/id_rsa) along with a public key (/home/kali/.ssh/id_rsa.pub) that we will place on the remote server:
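For example with ssh-copy-id (using the non-default port configured earlier):

    ssh-copy-id -p 2222 soban@soban.pl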

This is the last time we log in to the server by entering the password; from now on, we will not be asked for one. This way we were able to add our public key (.ssh/authorized_keys) to the remote server.

More security wp-admin in nginx

Some time ago I noticed break-in attempts on my WordPress site through the backend login page. A bot or a hacker tries to do this using a set of passwords. I decided to secure the website's backend by requiring additional authentication. In nginx we can set this up as follows:
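A sketch of such a block inside the WordPress server { } section (the PHP handler lines should match the rest of your vhost; wp-login.php serves the backend login):

    location = /wp-login.php {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.4-fpm.sock;
    }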

We still need to provide the username for authorization and save it to the file (/etc/nginx/.htpasswd) that we entered in the nginx configuration. Replace 'my_user_name' with the login of the user who will be authorized:
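    echo -n 'my_user_name:' >> /etc/nginx/.htpasswd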

The encrypted password is then generated and appended by openssl:
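    openssl passwd -apr1 >> /etc/nginx/.htpasswd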

Openssl will ask you to come up with a password and enter it twice:
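    Password:
    Verifying - Password: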

As a result, we will get a file with an encrypted password:
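The file should now contain a line like my_user_name:$apr1$... (the hash itself will differ):

    cat /etc/nginx/.htpasswd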

Before reloading nginx, we do a configuration verification:
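    nginx -t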

If everything is set correctly, we should receive the following message:
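    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful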

Now we can restart the service nginx:
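    systemctl restart nginx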

The final verification is to log in to the backend (e.g. www.example-page-wordpress.pl/wp-admin/); we should then be asked for the login and password that we created above:

This is a simple trick to protect your WordPress from bot attacks. However, remember not to share passwords with anyone; setting default usernames and simple passwords is asking for trouble.

Check network connection and open TCP port via netcat

Netcat, also known as “nc,” is a versatile networking tool that is commonly used in Linux and other Unix-like operating systems. It is a command-line utility that can be used for various network-related tasks, such as port scanning, file transfer, and even as a lightweight web server.

The primary function of Netcat is to create network connections between two hosts, allowing data to be transferred between them. It can establish a connection as a client or a server, and it supports both TCP and UDP protocols. This makes it useful for testing network services, troubleshooting network issues, and performing security assessments.

Netcat can be used to scan for open ports on a remote host, allowing system administrators to identify potential security vulnerabilities. It can also be used to transfer files between hosts, similar to the way that the “cp” command works in Linux. Additionally, it can be used to create a simple web server, allowing files to be served over HTTP.

One of the key features of Netcat is its ability to operate in both interactive and non-interactive modes. In interactive mode, it acts like a chat program, allowing users to communicate with each other in real-time. In non-interactive mode, it can be used as a background process that quietly sends or receives data without any user interaction.

Overall, Netcat is a powerful and flexible tool that can be used for a wide range of networking tasks. Its simplicity and ease of use make it a popular choice among system administrators, network engineers, and security professionals.

Sometimes network connections are blocked by various network devices. To verify a connection over TCP we can use, for example, telnet. And before we start a server-side service like JBoss, we can use a simple utility like netcat to open the port and test it.

In this example we will be using two machines. The first of them is "host-soban-pl" with the IP address 10.10.14.100:

The second is “soban-pl” with the IP address: 10.10.11.105:

Below I will show how to check an already open TCP connection and one that is closed. On the other side, nginx has port 80 open:
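    telnet 10.10.11.105 80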

Nmap below confirms that the port is open, and additionally identifies the service as http:
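    nmap -p 80 10.10.11.105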

The conclusion is that the network path to the service is open and you can correctly connect over TCP. Now let's try to open a connection that does not exist, e.g. on port 81:
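    telnet 10.10.11.105 81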

As you can see, the connection is not possible because the port is closed. It may also happen that the port is open but, for example, a firewall blocks it; then you need to set the appropriate rules on it.

In this case, however, I know that the firewall does not block anything, so let's try to open the port with netcat. First we need to install netcat; in Debian it is done like this:
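    apt install netcat
    # on newer Debian releases the package is netcat-openbsd or netcat-traditional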

Now let’s move on to running netcat on port 81:
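    # traditional netcat syntax; with the OpenBSD variant use: nc -l 81 &
    nc -l -p 81 &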

In this case, I deliberately added '&' at the end of the command to leave the netcat process running in the background. At this point, netcat is listening on port 81.

Now we can proceed to checking the correctness of the connection with the use of telnet:
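    telnet 10.10.11.105 81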

In the meantime, on the server machine, we can use the netstat tool to verify the connection and check from which machine the traffic is coming:
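    # -t TCP, -n numeric addresses, -p show the owning process
    netstat -tnp | grep :81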

As you can see, a correct connection from the 10.10.14.100 host has been established with the server on 10.10.11.105 on port 81.

To end the session, hit '^]' (Ctrl + ]), then type quit and press Enter.

In this way, we can verify that a network connection is correct and whether a firewall or another network problem is preventing it from being established. Netcat is a very powerful and useful tool; you can use it to transfer files and more. Netstat is also very useful in situations where network congestion occurs or one of the hosts is under attack: it then makes it easy to notice that a large number of connections are being made.

Netdiscover, a great tool for scanning and watching the local network

Netdiscover is a popular network discovery tool that is used in Linux to identify live hosts on a network. It sends ARP (Address Resolution Protocol) requests to the network and then listens for replies from active hosts. By analyzing the replies, Netdiscover can build a list of all hosts that are currently active on the network.

Netdiscover is typically used by network administrators to identify all devices on a network and to detect any unauthorized devices that may be connected. It can also be used to identify the IP address of a device on a network that is not responding to conventional network scanning techniques.

Netdiscover is a command-line tool and has a range of options that allow it to be customized for specific network environments. For example, it can be set to scan a particular subnet or to use a specific network interface. Additionally, Netdiscover can output its results in a range of formats, including CSV and XML, making it easy to integrate with other tools and applications.

Overall, Netdiscover is a useful tool for network administrators who need to identify all devices on a network and detect any unauthorized devices that may be connected. Its ability to output results in a range of formats and its customizable options make it a versatile and valuable addition to any network security toolkit.

Netdiscover is a great tool to scan your local network for locally attached devices. It is installed by default in Kali Linux. However, if you want to use it on a raspberry pi, you need to install it. You can do this as follows:
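    sudo apt install netdiscover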

In virtualbox I have this setup of network in Kali Linux:

Using the tool requires specifying the subnet we are on. We can check it like this:
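    ip a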

In this case, we can scan the network 192.168.1.0/24, so in netdiscover we can use:
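    sudo netdiscover -r 192.168.1.0/24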

The screen will show the network scanner:

Netdiscover also gives you the option of directing the result to a file; in this case it refreshes the scan every 2 seconds:
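One way to do this (a sketch; -P prints parseable output and exits after one pass, and watch reruns the scan every 2 seconds; run as root):

    watch -n 2 "netdiscover -r 192.168.1.0/24 -P > /tmp/netdiscover.txt"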

Now we can also use nslookup to get the hostname:
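For example, for one of the discovered addresses (the IP is illustrative):

    nslookup 192.168.1.105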

We can also use nmap:
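    nmap 192.168.1.105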

You can use more parameters in nmap for more information; however, this will significantly increase the scan time. Still, sometimes it's worth the wait.