If you use Fail2Ban with Nginx and WordPress, sooner or later you’ll notice one thing: the same IP addresses keep coming back. They get banned for a few minutes or an hour, disappear… and shortly afterwards they probe /.env, /wp-login.php, /phpmyadmin, or other common attack paths again.
The solution is not to aggressively tighten the filters. The solution is recidive — a second layer of protection in Fail2Ban that analyzes the ban history and blocks repeat offenders long-term.
Reference to the previous configuration
If you don’t yet have a basic Fail2Ban configuration for Nginx and WordPress, I described it here:
In that article, we configure jails such as nginx-exploit, nginx-secure, and sshd. Recidive does not replace that configuration — it strengthens it.
How to find repeat offenders in the logs
First, it’s worth checking whether the issue actually exists. We extract from the Fail2Ban logs the list of IP addresses that were banned most frequently:
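A one-liner along these lines counts bans per IP (the log path assumes the Fail2Ban default; adjust it if yours differs):

```shell
# Count "Ban" events per IP address, most frequent first
LOG=/var/log/fail2ban.log
grep 'Ban ' "$LOG" 2>/dev/null | awk '{print $NF}' | sort | uniq -c | sort -rn | head -20
```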
If you see numbers like 8, 9, or 13 — it means those IPs are coming back after the ban expires. A short bantime is just a technical pause for them.
Why recidive is better than increasing bantime
You don’t have to ban everyone for 24 hours because of a single typo in a URL.
You don’t increase the risk of blocking legitimate users.
The penalty is progressive and applies only to returning addresses.
Recidive analyzes /var/log/fail2ban.log and counts how many times a given IP has been banned by other jails. This way, you only “finish off” those that have already been blocked multiple times before.
Recidive configuration (5 bans in 24h = 7 days ban)
Add the following block to /etc/fail2ban/jail.local:
Edit jail.local file
nano /etc/fail2ban/jail.local
At the end of the file, paste:
Recidive jail configuration
[recidive]
enabled  = true
logpath  = /var/log/fail2ban.log
bantime  = 7d
findtime = 1d
maxretry = 5
Save the file and restart Fail2Ban:
Restart Fail2Ban
systemctl restart fail2ban
Check the jail status:
Recidive jail status
fail2ban-client status recidive
How to check who is close to the recidive threshold
If you want to see IPs that already have several bans and are approaching the recidive threshold:
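One way to sketch this, assuming the default log path (the recidive jail’s own bans are filtered out so they do not inflate the counts):

```shell
LOG=/var/log/fail2ban.log
# IPs with 3 or more bans from the regular jails: candidates for recidive
grep 'Ban ' "$LOG" 2>/dev/null | grep -v '\[recidive\]' \
  | awk '{print $NF}' | sort | uniq -c | awk '$1 >= 3 {print $1, $2}' | sort -rn
```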
Recidive is one of the simplest and most effective ways to limit recurring scanners and bots. Instead of aggressively banning everyone — you block only those who repeatedly come back.
In an environment with multiple domains, Nginx reverse proxy, and WordPress, it’s practically a must-have configuration element: less noise in logs, fewer repeated attacks, and less manual analysis.
The Linux system is a powerful tool that offers users tremendous flexibility and control over their working environment. However, to fully harness its potential, it is worth knowing the key commands that are essential for both beginners and advanced users. In this article, we will present and discuss the most important Linux commands that every user should know.
1. Basic Navigation Commands
pwd – Displays the current directory path you are in:
Example of using pwd
pwd
ls – Lists the contents of a directory. You can use the -l option for a detailed view or -a to show hidden files:
Example of using ls
ls -a
cd – Changes the directory. For example, cd /home/user will move you to the /home/user directory:
Example of using cd
cd ~
mkdir – Creates a new directory:
Example of using mkdir
mkdir projects
rmdir – Removes an empty directory:
Example of using rmdir
rmdir old_files
2. File Management
cp – Copies files or directories:
Example of using cp
cp document.txt new_directory/
mv – Moves or renames files/directories:
Example of using mv
mv file.txt /home/user/new_directory/
rm – Removes files or directories. Use the -r option to remove a directory with its contents:
Example of using rm
rm -r old_data
touch – Creates an empty file or updates the modification time of an existing file:
Example of using touch
touch report.txt
3. Process Management
ps – Displays currently running processes. Use ps aux to see all processes:
Example of using ps
ps aux
top – Displays a dynamic list of processes in real time:
Example of using top
top
kill – Stops a process by its ID:
Example of using kill
kill 1234
bg and fg – Manage background and foreground processes:
Example of using fg
fg
4. User and Permission Management
sudo – Allows a command to be executed with administrator privileges:
Example of using sudo
sudo apt update
chmod – Changes permissions for files/directories:
Example of using chmod
chmod 755 script.sh
chown – Changes the owner of a file/directory:
Example of using chown
chown admin:admin file.txt
useradd and userdel – Adds and removes users:
Example of using useradd
useradd janek
5. Networking and Communication
ping – Checks the connection with another host:
Example of using ping
ping 192.168.1.1
ifconfig – Displays information about network interfaces:
Example of using ifconfig
ifconfig
ssh – Connects remotely to another computer:
Example of using ssh
ssh user@192.168.1.2
scp – Copies files over SSH:
Example of using scp
scp file.txt user@host:/home/user/
6. Command Usage Examples
Below is an example of using several discussed commands:
chmod – Changes permissions for files/directories:
Example of using chmod
chmod 755 script.sh
chown – Changes the owner of a file/directory:
Example of using chown
chown admin:developers logs.txt
useradd and userdel – Adds and removes users:
Example of using useradd
useradd janek
7. Disk and File System Management
df – Displays information about disk space availability:
Example of using df
df -h
du – Shows the size of files and directories:
Example of using du
du -sh documents
mount – Mounts a file system:
Example of using mount
mount /dev/sdb1 /mnt/external
umount – Unmounts a file system:
Example of using umount
umount /mnt/external
8. Searching for Files
find – Searches for files in the system:
locate – Quickly searches for files in the system:
grep – Searches for patterns in files:
which – Finds the full path to an executable file:
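A few illustrative invocations of the search commands above (file names and paths are hypothetical):

```shell
find /tmp -name '*.log'        # search /tmp for files whose names match a pattern
grep -c 'root' /etc/passwd     # count the lines in a file that contain "root"
which ls                       # print the full path of the ls executable
# locate report.txt            # fast indexed search (requires the mlocate/plocate package)
```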
9. Communicating with the System
echo – Displays text on the screen:
cat – Displays the contents of a file:
more – Displays the contents of a file page by page:
less – Similar to more, but offers more navigation options:
man – Displays the user manual for a command:
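For illustration (the pagers and man are interactive, so they are shown as comments):

```shell
echo "Hello, Linux"        # print a line of text
cat /etc/hostname          # print the contents of a file
# more /etc/passwd         # page through a file (Space = next page)
# less /etc/passwd         # like more, plus backward scrolling (q = quit)
# man ls                   # open the manual page for ls
```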
10. Working with Archives
tar – Creates or extracts archives:
zip – Creates a ZIP archive:
unzip – Extracts ZIP files:
tar -xvzf – Extracts a TAR.GZ archive:
gzip – Compresses files in .gz format:
gunzip – Extracts .gz files:
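A small end-to-end sketch in a temporary directory, with hypothetical file names (zip and unzip are commented out because they come from a separate package):

```shell
cd "$(mktemp -d)"
mkdir demo && echo "data" > demo/file.txt
tar -cvzf demo.tar.gz demo        # create a gzip-compressed TAR archive
tar -xvzf demo.tar.gz -C /tmp     # extract a TAR.GZ archive into /tmp
gzip demo/file.txt                # compress a file -> demo/file.txt.gz
gunzip demo/file.txt.gz           # decompress it back
# zip archive.zip demo/file.txt   # create a ZIP archive
# unzip archive.zip               # extract a ZIP archive
```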
11. System Monitoring
uptime – Displays the system uptime and load:
dmesg – Displays system messages related to boot and hardware:
iostat – Shows input/output system statistics:
free – Displays information about RAM:
netstat – Displays information about network connections:
ss – A modern version of netstat, used for monitoring network connections:
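For example (the commented commands need root privileges or the sysstat package):

```shell
uptime             # how long the system has been up, plus load averages
free -h            # RAM and swap usage in human-readable units
# dmesg | tail     # recent kernel messages (may require root)
# iostat 2 3       # I/O statistics every 2 s, 3 samples (sysstat package)
# ss -tuln         # listening TCP/UDP sockets; the modern replacement for netstat -tuln
```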
12. Working with System Logs
journalctl – Reviews system logs:
tail – Displays the last lines of a file:
logrotate – Automatically manages logs:
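tail works on any file, which makes it handy for logs; journalctl and logrotate are shown as comments because they depend on systemd and on configured rotation rules:

```shell
tail -n 3 /etc/passwd                 # last three lines of a file; tail -f <log> follows it live
# journalctl -u ssh --since today     # journal entries for one service
# logrotate -d /etc/logrotate.conf    # dry run: show what would be rotated
```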
13. Advanced File Operations
ln – Creates a link to a file:
xargs – Passes arguments from input to other commands:
chmod – Changes permissions for files/directories:
chattr – Changes file attributes:
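A short sketch in a temporary directory (file names are hypothetical; chattr usually requires root and an ext2/3/4 filesystem, so it is commented out):

```shell
cd "$(mktemp -d)"
echo "payload" > original.txt
ln -s original.txt link.txt            # create a symbolic link
cat link.txt                           # reading the link reads the target
printf 'a.txt\nb.txt\n' | xargs touch  # pass stdin items as arguments to touch
chmod 600 original.txt                 # owner read/write only
# chattr +i original.txt               # make the file immutable (root only)
```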
Linux offers a wide array of commands that allow for complete control over the computer. Key commands such as ls, cd, cp, and rm are used daily to navigate through the file system, manage files, and directories. To effectively master these commands, it’s best to start with those that are most useful in everyday work. For instance, commands for navigating directories and managing files are fundamental and require practice to become intuitive. Other commands, such as ps for monitoring processes, ping for testing network connections, or chmod for changing permissions, are also worth knowing to fully leverage the power of the Linux system.
To learn effectively, it’s advisable to start by experimenting with commands in practice. Creating files, directories, copying, and deleting data allows for familiarity with their operation. Over time, it’s worthwhile to start combining different commands to solve more advanced problems, such as monitoring processes, managing users, or working with system logs. One can also use documentation, such as man or websites, to delve into the details of each command and its options.
Remember, regular use of the terminal allows for learning habits that make handling the Linux system more natural. Frequent use of commands, solving problems, and experimenting with new commands is the best way to master the system and fully utilize it.
Linux is indeed a powerful tool that provides great control over the system… but remember, don’t experiment on production! After all, experimenting on a production server is a bit like playing Russian roulette — only with bigger consequences. If you want to feel like a true Linux wizard, always test your commands in a development environment. Only then will you be able to learn from mistakes instead of searching for the cause of several gigabytes of data disappearance. And if you don’t know what you’re doing, simply summon your trusty weapon: man!
Automation of Disk Space Management in a Linux Environment
In today’s digital world, where data is being accumulated in ever-increasing amounts, managing disk space has become a key aspect of maintaining operational efficiency in systems. In this article, I will present a script that automates the process of managing space on a remote disk mounted via SSHFS, particularly useful for system administrators who regularly deal with filling storage media.
Prerequisites
Before starting, ensure that SSHFS and all necessary packages enabling its proper operation are installed on your system. SSHFS allows remote file systems to be mounted via SSH, which is crucial for our script’s operation. To install SSHFS and the necessary tools, including a package that enables password forwarding (sshpass), use the following command:
Installing SSHFS and sshpass
apt-get update
apt-get install sshfs fuse sshpass -y
Bash Script for Disk Space Management
Our Bash script focuses on monitoring and maintaining a defined percentage of free disk space on a remote disk mounted via SSHFS. Here are the script’s main functions:
Goal Definition:
TARGET_USAGE=70 – the percentage of disk space we want to maintain as occupied. The script will work to keep at least 30% of the disk space free.
Mount Point and Paths:
MOUNT_POINT="/mnt/qnapskorupki" – the local directory where the remote disk is mounted.
TARGET_DIRS="$MOUNT_POINT/up*.soban.pl" – the directories where the script will look for files to delete if needed.
Function check_qnap: This function checks whether the disk is mounted and whether the mount directory is not empty. If there are issues, the script attempts to unmount and remount the disk using sshfs with a password forwarded through sshpass.
File Deletion: The script monitors disk usage and, if TARGET_USAGE is exceeded, it finds and deletes the oldest files in specified directories until the target level of free space is achieved.
Full Bash Script
#!/bin/bash
TARGET_USAGE=70
MOUNT_POINT="/mnt/qnapskorupki"
TARGET_DIRS="$MOUNT_POINT/up*.soban.pl"

# Function to check and mount SSHFS
function check_qnap {
    local remote_path="/share/MD0_DATA/backup_proxmox/"
    local user_remote="remote_user"
    local remote_host="192.168.1.XX"
    local port=22
    local password='XXXXXXXXXXXXXXXXXXXXXXX'

    # Check if the mounting directory exists and is not empty
    if [ ! -d "$MOUNT_POINT" ] || [ -z "$(ls -A $MOUNT_POINT)" ]; then
        echo "Problem: The directory $MOUNT_POINT is missing or empty. Attempting to remount..."
        # Unmount if anything is currently mounted
        if mountpoint -q "$MOUNT_POINT"; then
            echo "Unmounting $MOUNT_POINT..."
            fusermount -u "$MOUNT_POINT"
            sleep 5
        fi
        # Remount
        echo "Mounting SSHFS: $user_remote@$remote_host:$remote_path to $MOUNT_POINT..."
        mkdir -p "$MOUNT_POINT"
        sshpass -p "$password" sshfs -p "$port" \
            "$user_remote@$remote_host:$remote_path" "$MOUNT_POINT"
    fi
}

check_qnap

# Main loop: keep usage at or below TARGET_USAGE by deleting the oldest files.
# (The exact original commands are not shown in the article; this follows its description.)
while true; do
    usage=$(df --output=pcent "$MOUNT_POINT" | tail -1 | tr -dc '0-9')
    if [ "$usage" -le "$TARGET_USAGE" ]; then
        echo "Disk usage is ${usage}%, nothing to do."
        break
    fi
    # TARGET_DIRS is left unquoted on purpose so the glob expands
    oldest=$(find $TARGET_DIRS -type f -printf '%T@ %p\n' 2>/dev/null \
        | sort -n | head -1 | cut -d' ' -f2-)
    if [ -z "$oldest" ]; then
        echo "No more files to delete."
        break
    fi
    echo "Usage ${usage}% > ${TARGET_USAGE}%, deleting oldest file: $oldest"
    rm -f "$oldest"
done
The script uses a password directly in the command line, which can pose a security risk. In practical applications, it is recommended to use more advanced authentication methods, such as SSH keys, which are more secure and do not require a plaintext password in the script. However, in the case of QNAP, we used a password when writing this script.
Conclusion
The presented script is an example of how daily administrative tasks, such as disk space management, can be automated, thus increasing efficiency and reliability. Its implementation in real IT environments can significantly streamline data management processes, especially in situations where quick response to changes in disk usage is critical.
Automatically Shutting Down Your Laptop at Low Battery Levels
Maintaining long battery life and protecting data are crucial for laptop users. In this article, we’ll show you how to create a simple Bash script that automatically shuts down your laptop when the battery level falls below 20%. Additionally, you’ll learn how to set up a crontab to run the script every 10 minutes, ensuring continuous monitoring.
Creating a Bash Script
The Bash script we have prepared will check the current battery level and compare it to a set minimum threshold. If the battery level drops below this threshold, the script initiates a system shutdown, helping to protect your data and hardware.
Battery Check Script
#!/bin/bash
# Define the minimum battery level before shutdown
MIN_BATTERY_LEVEL=20

# Read the current battery level (the BAT0 path may differ on some laptops)
current_level=$(cat /sys/class/power_supply/BAT0/capacity)

if [ "$current_level" -lt "$MIN_BATTERY_LEVEL" ]; then
    echo "Battery level is $current_level%, which is below the threshold of $MIN_BATTERY_LEVEL%. Shutting down..."
    # Shutdown command
    /sbin/shutdown -h now
else
    echo "Battery level is $current_level%, no need to shut down."
fi
You can also download the script:
Download script
wget https://soban.pl/bash/check_battery.sh
Don’t forget to grant permissions to run it:
Permission to run the script
chmod +x check_battery.sh
Crontab Configuration
Crontab is a tool that allows you to schedule tasks in the Linux system. With it, we can set up regular battery checks.
Crontab Configuration
crontab -e
# Add the following line to crontab to run the script every 10 minutes
*/10 * * * * /root/check_battery.sh >/dev/null 2>&1
Summary
With this setup, you can rest assured about the condition of your laptop even during intensive use. Automatic shutdown at low battery levels not only protects the equipment but also helps maintain a longer battery life.
Managing Proxmox clusters can sometimes present technical difficulties, such as inconsistencies in cluster configuration or issues with restoring LXC containers. Finding and resolving these issues is crucial for maintaining the stability and performance of the virtualization environment. In this article, I present a detailed guide on how to diagnose and resolve an issue with an unreachable node and how to successfully restore an LXC container.
Before you begin any actions, make sure you have a current backup of the system.
Diagnosing the State of the Proxmox Cluster
While cleaning up the cluster, the following errors appeared:
pvecm delnode up-page-02
Node/IP: up-page-02 is not a known host of the cluster.
and:
pct restore 107 vzdump-lxc-107-2024_11_12-03_00_01.tar.zst --storage local
CT 107 already exists on node 'up-page-02'
To understand the state of the cluster, execute the following command on the node-up-page-04 node:
pvecm nodes
Expected output:
Membership information
----------------------
    Nodeid      Votes Name
         1          1 node-up-page-01
         2          1 node-up-page-04 (local)
Then check the detailed cluster information with the following command:
pvecm status
Expected output:
Cluster information
-------------------
Name:             soban-proxmox
Config Version:   4
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Wed Nov 13 10:40:12 2024
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000002
Ring ID:          1.e6
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 <masked IP>
0x00000002          1 <masked IP> (local)
Removing the Container Configuration File and Cleaning Data
I discovered that the configuration file for container 107 still exists on the cluster’s file system at the path:
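The exact path is not quoted here; on a standard Proxmox cluster, LXC configurations live under /etc/pve/nodes/&lt;node&gt;/lxc/&lt;vmid&gt;.conf, so removing the stale file would look roughly like this (the node name and VMID are taken from the errors above and are otherwise hypothetical):

```shell
# Hypothetical path following the standard /etc/pve layout
CONF=/etc/pve/nodes/up-page-02/lxc/107.conf
rm -f "$CONF"     # -f: do not fail if the file is already gone
```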
The restoration process was successful, and the container was ready for use. This case illustrates the importance of thorough diagnostics and configuration file management in Proxmox when working with clusters. Regular reviews of configurations are advisable to avoid inconsistencies and operational issues in the future.
Virtualization servers based on Debian family systems, such as Proxmox, are often used in test environments where continuous availability is crucial. Sometimes these servers are installed on laptops, which serve as low-budget or portable solutions. However, the standard power management settings in laptops can lead to undesirable behaviors, such as sleeping or hibernating when the lid is closed. Below, I describe how to change these settings in an operating system based on Debian to ensure uninterrupted server operation.
Step 1: Accessing the Configuration File
Open the terminal and enter the following command to edit the /etc/systemd/logind.conf file using a text editor (e.g., nano):
Editing logind
nano /etc/systemd/logind.conf
Step 2: Modifying logind Settings
Find the line containing HandleLidSwitch and change its value to ignore. If the line is commented out (preceded by a # symbol), remove the #. You can also add this line to the end of the file if it does not exist.
HandleLidSwitch=ignore
Step 3: Applying and Restarting the Service
After making the changes and saving the file, you need to restart the systemd-logind service for the changes to take effect. Use the following command in the terminal:
Restarting systemd-logind
systemctl restart systemd-logind
With these changes, closing the laptop lid will no longer initiate hibernation or sleep, which is especially important when using Debian-based servers, including Proxmox, as server solutions.
Managing SWAP memory is a key element in administering Linux operating systems, especially in virtualization environments like Proxmox. SWAP acts as “virtual memory” that can be used when the system’s physical RAM is full. In this article, we will show how to increase SWAP space on a Proxmox server, using the lvresize tool to free up disk space that can then be allocated to SWAP.
Problem Overview
A user wants to increase SWAP space from 8 GB to 16 GB, but encounters the problem of lacking available space in the LVM volume group, which is required to increase SWAP.
Step 1: Checking Available Space
vgs
The command vgs displays the volume groups along with their sizes and available space.
Step 2: Reducing the Volume
Suppose there is a root volume of 457.26 GB, which can be reduced to free up an additional 8 GB for SWAP. Before reducing the volume, it is necessary to reduce the file system on this volume.
resize2fs /dev/pve/root 449.26G
However, in the case of the XFS file system, reduction must occur offline or from a live CD.
Step 3: Using lvreduce
lvreduce -L -8G /dev/pve/root
This command reduces the root volume by 8 GB, which is confirmed by a message about the volume size change.
Step 4: Deactivating SWAP
swapoff -a
Before starting changes in SWAP size, SWAP must first be turned off using the above command.
Step 5: Expanding SWAP
lvresize -L +8G /dev/pve/swap
mkswap /dev/pve/swap
swapon /dev/pve/swap
The above commands first increase the SWAP space, then format it and reactivate it.
swapon --show
Finally, we verify the active SWAP areas using the above command to ensure everything is configured correctly.
This process shows how you can flexibly manage disk space on Proxmox servers, adjusting the size of SWAP depending on needs. Using lvreduce requires caution, as any operation on partitions and volumes carries the risk of data loss, therefore it is always recommended to make backups before proceeding with changes.
Upgrading Apache Cassandra to a newer version is a significant task that database administrators undertake to ensure their systems benefit from new features, enhanced security measures, and improved performance. This guide provides a detailed walkthrough for upgrading Apache Cassandra from version 3.1.15 and higher to the latest 4.1.x version, specifically on Ubuntu 20.04.5 LTS, with an emphasis on pre-upgrade cleaning operations to manage disk space effectively.
Pre-upgrade Preparation
Backup Configuration Directory:
Before initiating the upgrade, it’s crucial to back up the Cassandra configuration directory. This precaution allows for a swift restoration of the configuration should any issues arise during the upgrade process. Utilize the following command to create a backup, incorporating the current date into the folder name for easy identification:
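The exact command is not reproduced above; a minimal sketch, assuming the default /etc/cassandra configuration directory, could look like this:

```shell
CONF_DIR=/etc/cassandra    # assumed default config location
if [ -d "$CONF_DIR" ]; then
    # -a preserves permissions and ownership; the date suffix identifies the backup
    cp -a "$CONF_DIR" "${CONF_DIR}.backup.$(date +%Y-%m-%d)"
fi
```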
Preparation is key to a smooth upgrade. Begin with maintenance commands to guarantee data integrity and optimize space usage, especially important for systems with limited disk space.
Scrub Data:
Execute nodetool scrub to clean and reorganize data on disk. Given that this operation may be time-consuming, particularly for databases with large amounts of data or limited disk space, it’s a critical step for a healthy upgrade process.
Clear Snapshots:
To further manage disk space, use nodetool clearsnapshot to remove existing snapshots, freeing up space for the upgrade process. To delete all snapshots on the node, simply use this method if you’re running out of space:
clear all snapshot
# nodetool clearsnapshot --all
Cleanup Data:
Perform a nodetool cleanup to purge unnecessary data. In scenarios where disk space is a premium, it’s advisable to execute a scrub operation without generating a snapshot to conserve space:
scrub cassandra
# nodetool scrub --no-snapshot
Draining and Stopping Cassandra
Drain the Node:
Prior to halting the Cassandra service, ensure all data in memory is flushed to disk with nodetool drain.
drain cassandra
# nodetool drain
Stop the Cassandra Service:
Cease the running Cassandra services to proceed with the upgrade safely:
stop cassandra
# systemctl stop cassandra.service
Upgrading Cassandra
Update Source List:
Edit the repository sources to point to the new version of Cassandra by adjusting the cassandra.sources.list file:
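The edit itself is not quoted above; assuming the usual Apache repository line and a 3.11.x starting point, switching the series could be sketched as follows (verify the actual file contents and location on your system first):

```shell
SRC=/etc/apt/sources.list.d/cassandra.sources.list   # assumed location
# e.g. turn "deb https://debian.cassandra.apache.org 311x main" into the 41x series
if [ -f "$SRC" ]; then
    sed -i 's/311x/41x/' "$SRC"
fi
```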
With the repository sources updated, refresh the package list and upgrade the packages. When executing the apt upgrade command, you can keep pressing Enter as the default option is ‘N’ (No):
Upgrade cassandra
# apt update && apt upgrade
Modify Configuration:
Adjust the Cassandra configuration for version 4.1.x by commenting out or deleting deprecated options:
deleting deprecated options in cassandra
# for var in thrift_prepared_statements_cache_size_mb start_rpc rpc_port rpc_server_type thrift_framed_transport_size_in_mb request_scheduler; do sed -i "/$var:/s/^/#/" /etc/cassandra/cassandra.yaml; done
Update JAMM Library:
Ensure the Java Agent Memory Manager (JAMM) library is updated to enhance performance:
change jamm version in cassandra
# sed -i 's|jamm-0.3.0.jar|jamm-0.3.2.jar|g' /etc/cassandra/cassandra-env.sh
Backup and update the JVM options file:
It’s a good practice to back up configuration files before making changes. This step renames the existing jvm-server.options file to jvm-server.options.orig as a backup. Then, it copies the jvm.options file to jvm-server.options to apply the standard JVM options for Cassandra servers.
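A sketch of that backup-and-copy step, assuming the default /etc/cassandra directory:

```shell
CONF=/etc/cassandra    # assumed config directory
if [ -f "$CONF/jvm-server.options" ]; then
    mv "$CONF/jvm-server.options" "$CONF/jvm-server.options.orig"   # keep a backup
    cp "$CONF/jvm.options" "$CONF/jvm-server.options"               # apply standard options
fi
```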
Post-upgrade, it’s beneficial to evaluate and optimize memory usage and swap space to ensure efficient Cassandra operation:
free ram
# swapoff -a && swapon -a
Restart the Cassandra Service:
Apply the new version by restarting the Cassandra service:
start service cassandra
# systemctl start cassandra.service
Verify Upgrade:
Confirm the success of the upgrade by inspecting the cluster’s topology and state, ensuring all nodes are functional:
verify upgrade of cassandra
# nodetool describecluster
# nodetool status
By adhering to this comprehensive guide, database administrators can effectively upgrade Apache Cassandra to version 4.1.x, capitalizing on the latest advancements and optimizations the platform has to offer, while ensuring data integrity and system performance through careful pre-upgrade preparations.
Optimization and Verification
After successfully upgrading Apache Cassandra to version 4.1.x and ensuring the cluster is fully operational, it’s crucial to conduct post-upgrade maintenance to optimize the performance and security of your database system. This section outlines essential steps and considerations to maintain a healthy and efficient Cassandra environment.
Monitor Performance and Logs
In the immediate aftermath of the upgrade, closely monitor the system’s performance, including CPU, memory usage, and disk I/O, to identify any unexpected behavior or bottlenecks. Additionally, review the Cassandra system logs for warnings or errors that may indicate potential issues requiring attention.
Tune and Optimize
Based on the performance monitoring insights, you may need to adjust Cassandra’s configuration settings for optimal performance. Consider tuning parameters related to JVM options, compaction, and read/write performance, keeping in mind the specific workload and data patterns of your application.
Run nodetool upgradesstables
To ensure that all SSTables are updated to the latest format, execute nodetool upgradesstables on each node in the cluster. This operation will rewrite SSTables that are not already in the current format, which is essential for taking full advantage of the improvements and features in Cassandra 4.1.x (Check the space, and if required, delete all snapshots as shown above.):
Upgrade sstables
# time nodetool upgradesstables
This process can be resource-intensive and should be scheduled during off-peak hours to minimize impact on live traffic.
Implement Security Enhancements
Cassandra 4.1.x includes several security enhancements. Review the latest security features and best practices, such as enabling client-to-node encryption, node-to-node encryption, and advanced authentication mechanisms, to enhance the security posture of your Cassandra cluster.
Review and Update Backup Strategies
With the new version in place, reassess your backup strategies to ensure they are still effective and meet your recovery objectives. Verify that your backup and restore procedures are compatible with Cassandra 4.1.x and consider leveraging new tools or features that may have been introduced in this release for more efficient data management.
In a situation where we have one public IP address with many domains pointed at it, it is worth spreading the traffic across several servers. Proxmox, which lets you create separate virtual machines, is perfect here. In my case each virtual machine is isolated, and the traffic is split by nginx, which distributes it to the other servers. The frontend virtual machine redirects traffic to the WordPress backend at IP 10.10.11.105 on port 80. No encryption is required on that leg, but the frontend itself, which manages the traffic, presents encryption and security on port 443.
Two machines take part in the whole process: up-page (IP: 10.10.14.200) and soban-pl (IP: 10.10.11.105).
So let’s move on to the frontend that distributes traffic to the other machines. The frontend runs Debian 11 (bullseye); in addition, I have the following entry in the repository (/etc/apt/sources.list):
cat /etc/apt/sources.list
#...
deb http://nginx.org/packages/debian/ bullseye nginx
Next comes the proxy configuration for the above-mentioned WordPress. Additional authorization is also applied to wp-admin login attempts; you can read about it here: More security wp-admin in nginx.
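As a rough sketch, the frontend’s server block could look like this (the domain name and Let’s Encrypt certificate paths are assumptions based on the text; adjust them to your setup):

```nginx
server {
    listen 443 ssl;
    server_name soban.pl;

    # Let's Encrypt certificates (paths assume certbot defaults)
    ssl_certificate     /etc/letsencrypt/live/soban.pl/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/soban.pl/privkey.pem;

    location / {
        # Forward unencrypted traffic to the WordPress VM
        proxy_pass http://10.10.11.105:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```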
In the next step, check if the nginx configuration is correct by:
checking nginx configuration
# service nginx configtest
If everything is fine, restart nginx:
restart nginx
# service nginx restart
nginx should also be installed on the backend virtual machine. It likewise runs Debian 11 (bullseye), so the repository entry looks the same:
cat /etc/apt/sources.list
#...
deb http://nginx.org/packages/debian/ bullseye nginx
Also in this case, check the correctness of the nginx service configuration:
checking nginx configuration
# service nginx configtest
Everything looks fine, so let’s move on to restarting the service:
restart nginx
# service nginx restart
If the whole configuration was done correctly, the page should be directed, without encrypted traffic, to the virtual machine with WordPress. A WordPress service behind nginx is not the only thing that can be hosted or proxied this way: we can direct traffic from nginx to e.g. JBoss, Apache, and any other web service. Of course, this requires a corresponding modification of the configuration presented above, but the general outline of the nginx proxy concept has been presented. You should also remember about the appropriate configuration of keys and certificates; in my case, Let’s Encrypt works perfectly for this.
There are situations when we need to increase the encryption level on an old system, for example to meet PCI audit requirements. However, the old system is no longer supported, so updating the encryption level is not possible. This is not a recommended solution, because we should try to move the application to a new system; still, when time is short, it is possible to hide the old system and allow access to it only through a new machine. In this particular example, we will use mod_proxy to redirect traffic to the old machine, while iptables will allow communication only from the new machine. It is not a recommended solution, but it works, and I would like to present it here. The systems in this example are an old Red Hat 5 and a new Oracle Linux 7. Recently it has become very important, especially for banking transactions, to use at least TLS 1.2 and nothing below. Let’s start with the proxy server configuration on Oracle Linux 7.
As of this writing, the addressing is as follows: new_machine IP: 10.10.14.100, old_machine IP: 10.10.14.101. Traffic will be routed on port 443 from new_machine to old_machine.
Before we go to proxy configuration, please make sure there are network transitions from new_machine (10.10.14.100) to old_machine (10.10.14.101) to port 443. You can read how to verify network connections here: check network connection and open tcp port via netcat.
We proceed to install Apache (mod_proxy ships with the httpd package itself; mod_ssl is needed for HTTPS):
Installing httpd and mod_ssl
# yum install httpd mod_ssl
After installing apache, go to the edition:
Edit configuration apache
# vi /etc/httpd/conf.d/ssl.conf
In the SSL virtual host we point the proxy at the old machine, so that requests arriving at new_machine are forwarded to old_machine.
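The directives themselves are not quoted above; inside the SSL VirtualHost, a minimal version might look like this (the IP comes from the addressing above; the SSLProxyCheckPeer* relaxations are an assumption, often needed when the old backend presents a certificate that no longer validates):

```apache
SSLProxyEngine On
# Accept the old machine's certificate even if it would not validate
SSLProxyVerify none
SSLProxyCheckPeerCN off
SSLProxyCheckPeerName off

ProxyPass        / https://10.10.14.101/
ProxyPassReverse / https://10.10.14.101/
```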
In order to verify the correctness of apache configuration, you can issue a command that will check it:
checking configuration of apache
# service httpd configtest
If the apache configuration is correct, we can proceed to reloading apache:
Restart apache
# service httpd restart
At this point, we have a configured proxy connection. Before we move on to limiting traffic with iptables, I suggest you go to the site – with the new mod_proxy configured and test if everything is working properly and if there are any problems with the application.
Once everything is working fine, the network transitions are there, we can go to the iptables configuration for red hat 5. Let’s start by checking the system version:
checking version red hat
# cat /etc/redhat-release
Now we are going to prepare iptables so that the network traffic is available on port 443 from the new_machine (10.10.14.100). To do this, edit the file /etc/sysconfig/iptables:
edit /etc/sysconfig/iptables
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp --dport 443 -s 10.10.14.100 -j ACCEPT
-A INPUT -p tcp --dport 443 -j DROP
-A INPUT -p tcp --dport 80 -j DROP
-A INPUT -j ACCEPT
COMMIT
After iptables settings are correct, we can reload the service:
Restart iptables
# /etc/init.d/iptables restart
In this way, we managed to cover up the weak encryption by proxying traffic through the new machine. This is not a recommended solution, and you should try to move the application to an environment based on a supported system. However, in crisis situations we can use it. Network traffic from other IP addresses is not allowed, so scanners will not be able to detect the weak encryption on the old machine, and users cannot reach the old environment directly. This does not change the fact that weak encryption is still configured in the old environment and needs to be corrected. The example I gave uses the old Red Hat 5 and the new Oracle Linux 7, but a similar solution and configuration should be possible for other system versions.