The most important Linux commands that every user should know

The Linux system is a powerful tool that offers users tremendous flexibility and control over their working environment. However, to fully harness its potential, it is worth knowing the key commands that are essential for both beginners and advanced users. In this article, we will present and discuss the most important Linux commands that every user should know.

1. Basic Navigation Commands

  • pwd – Displays the current directory path you are in:
  • ls – Lists the contents of a directory. You can use the -l option for a detailed view or -a to show hidden files:
  • cd – Changes the directory. For example, cd /home/user will move you to the /home/user directory:
  • mkdir – Creates a new directory:
  • rmdir – Removes an empty directory:
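A few typical invocations (directory names are examples):

```bash
pwd                 # print the current working directory
ls -la              # detailed listing, including hidden files
cd /home/user       # change to /home/user
mkdir projects      # create a directory named "projects"
rmdir projects      # remove it again (works only if empty)
```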

2. File Management

  • cp – Copies files or directories:
  • mv – Moves or renames files/directories:
  • rm – Removes files or directories. Use the -r option to remove a directory with its contents:
  • touch – Creates an empty file or updates the modification time of an existing file:
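For instance (file names are placeholders):

```bash
cp notes.txt backup.txt        # copy a file
mv backup.txt old-notes.txt    # rename (or move) a file
rm old-notes.txt               # remove a file
rm -r old-project/             # remove a directory with its contents
touch todo.txt                 # create an empty file or update its timestamp
```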

3. Process Management

  • ps – Displays currently running processes. Use ps aux to see all processes:
  • top – Displays a dynamic list of processes in real time:
  • kill – Stops a process by its ID:
  • bg and fg – Manage background and foreground processes:
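Typical usage (the PID is an example):

```bash
ps aux            # list all running processes
top               # live, dynamic process view (press q to quit)
kill 1234         # send SIGTERM to the process with PID 1234
sleep 300 &       # start a job in the background
fg                # bring the most recent background job to the foreground
```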

4. User and Permission Management

  • sudo – Allows a command to be executed with administrator privileges:
  • chmod – Changes permissions for files/directories:
  • chown – Changes the owner of a file/directory:
  • useradd and userdel – Adds and removes users:
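For example (user and file names are placeholders):

```bash
sudo apt update             # run a command with administrator privileges
chmod 644 notes.txt         # owner: read/write, others: read-only
chown user:user notes.txt   # change the owner (and group) of a file
sudo useradd -m newuser     # add a user with a home directory
sudo userdel -r newuser     # remove the user and their home directory
```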

5. Networking and Communication

  • ping – Checks the connection with another host:
  • ifconfig – Displays information about network interfaces (deprecated on many modern systems in favor of ip addr):
  • ssh – Connects remotely to another computer:
  • scp – Copies files over SSH:
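A few sample invocations (hosts and addresses are examples):

```bash
ping -c 4 example.com                  # send four ICMP echo requests
ifconfig                               # show network interfaces (or: ip addr)
ssh user@192.168.1.10                  # open a remote shell over SSH
scp notes.txt user@192.168.1.10:/tmp/  # copy a file over SSH
```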

6. Command Usage Examples

Below is an example of using several discussed commands:
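A short, hypothetical session combining a few of them (all names are made up for illustration):

```bash
mkdir -p ~/reports && cd ~/reports   # create a working directory and enter it
touch summary.txt                    # create an empty file
echo "disk report" > summary.txt     # write a line into it
cp summary.txt summary.bak           # keep a copy
ls -l                                # verify both files exist
grep -i "disk" summary.txt           # search the file for a pattern
```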


7. Disk and File System Management

  • df – Displays information about disk space availability:
  • du – Shows the size of files and directories:
  • mount – Mounts a file system:
  • umount – Unmounts a file system:
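For example (the device and mount point are placeholders):

```bash
df -h                        # disk usage per file system, human-readable
du -sh /var/log              # total size of a directory
sudo mount /dev/sdb1 /mnt    # mount a file system at /mnt
sudo umount /mnt             # unmount it
```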

8. Searching for Files

  • find – Searches for files in the system:
  • locate – Quickly searches for files in the system:
  • grep – Searches for patterns in files:
  • which – Finds the full path to an executable file:
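Typical usage:

```bash
find /home -name "*.log"      # search for files by name
locate passwd                 # fast lookup using a prebuilt index
grep -r "error" /var/log      # search file contents for a pattern
which python3                 # full path of an executable
```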

9. Communicating with the System

  • echo – Displays text on the screen:
  • cat – Displays the contents of a file:
  • more – Displays the contents of a file page by page:
  • less – Similar to more, but offers more navigation options:
  • man – Displays the user manual for a command:
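For instance:

```bash
echo "Hello, Linux"    # print text to the screen
cat /etc/hostname      # print a whole file
more /var/log/syslog   # page through a file
less /var/log/syslog   # page through with full navigation (q to quit)
man ls                 # read the manual page for ls
```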

10. Working with Archives

  • tar – Creates or extracts archives:
  • zip – Creates a ZIP archive:
  • unzip – Extracts ZIP files:
  • tar -xvzf – Extracts a TAR.GZ archive:
  • gzip – Compresses files in .gz format:
  • gunzip – Extracts .gz files:
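Example invocations (archive names are placeholders):

```bash
tar -cvzf backup.tar.gz ~/projects   # create a compressed TAR.GZ archive
tar -xvzf backup.tar.gz              # extract it
zip -r backup.zip ~/projects         # create a ZIP archive
unzip backup.zip                     # extract a ZIP archive
gzip big.log                         # compress a file to big.log.gz
gunzip big.log.gz                    # decompress it
```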

11. System Monitoring

  • uptime – Displays the system uptime and load:
  • dmesg – Displays system messages related to boot and hardware:
  • iostat – Shows input/output system statistics:
  • free – Displays information about RAM:
  • netstat – Displays information about network connections:
  • ss – A modern version of netstat, used for monitoring network connections:
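For example (some tools come from extra packages, as noted):

```bash
uptime           # time since boot, plus load averages
dmesg | tail     # most recent kernel/hardware messages
iostat           # CPU and disk I/O statistics (package: sysstat)
free -h          # RAM and swap usage, human-readable
netstat -tulpn   # listening sockets (package: net-tools)
ss -tulpn        # the modern equivalent of netstat
```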

12. Working with System Logs

  • journalctl – Reviews system logs:
  • tail – Displays the last lines of a file:
  • logrotate – Automatically manages logs:
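For instance (the unit and file names are examples):

```bash
journalctl -u ssh --since today    # logs of one service from today
tail -f /var/log/syslog            # follow a log file as it grows
logrotate -d /etc/logrotate.conf   # dry-run of the log rotation config
```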

13. Advanced File Operations

  • ln – Creates a link to a file:
  • xargs – Passes arguments from input to other commands:
  • chmod – Changes permissions for files/directories:
  • chattr – Changes file attributes:
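Sample usage (file names are placeholders):

```bash
ln -s /var/log/syslog ~/syslog        # create a symbolic link
find . -name "*.tmp" | xargs rm -f    # feed find's results to rm
chmod u+x script.sh                   # add execute permission for the owner
sudo chattr +i important.conf         # make a file immutable
```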

Linux offers a wide array of commands that give you complete control over the computer. Key commands such as ls, cd, cp, and rm are used daily to navigate the file system and manage files and directories. To master them effectively, it’s best to start with those that are most useful in everyday work. For instance, commands for navigating directories and managing files are fundamental and require practice to become intuitive. Other commands, such as ps for monitoring processes, ping for testing network connections, or chmod for changing permissions, are also worth knowing to fully leverage the power of the Linux system.

To learn effectively, it’s advisable to start by experimenting with commands in practice. Creating files, directories, copying, and deleting data allows for familiarity with their operation. Over time, it’s worthwhile to start combining different commands to solve more advanced problems, such as monitoring processes, managing users, or working with system logs. One can also use documentation, such as man or websites, to delve into the details of each command and its options.

Remember, regular use of the terminal builds habits that make working with Linux feel natural. Frequent use of commands, solving problems, and experimenting with new commands is the best way to master the system and use it to its full potential.

Linux is indeed a powerful tool that provides great control over the system… but remember, don’t experiment on production! After all, experimenting on a production server is a bit like playing Russian roulette — only with bigger consequences. If you want to feel like a true Linux wizard, always test your commands in a development environment. Only then will you be able to learn from mistakes instead of searching for the cause of several gigabytes of data disappearance. And if you don’t know what you’re doing, simply summon your trusty weapon: man!

Automatic deletion of files on QNAP drive via SSHFS


Automation of Disk Space Management in a Linux Environment

In today’s digital world, where data is being accumulated in ever-increasing amounts, managing disk space has become a key aspect of maintaining operational efficiency in systems. In this article, I will present a script that automates the process of managing space on a remote disk mounted via SSHFS, particularly useful for system administrators who regularly deal with filling storage media.

Prerequisites

Before starting, ensure that SSHFS and all necessary packages enabling its proper operation are installed on your system. SSHFS allows remote file systems to be mounted via SSH, which is crucial for our script’s operation. To install SSHFS and the necessary tools, including a package that enables password forwarding (sshpass), use the following command:
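On a Debian-based system this would be, for example:

```bash
sudo apt update
sudo apt install sshfs sshpass
```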

Bash Script for Disk Space Management

Our Bash script focuses on monitoring and maintaining a defined percentage of free disk space on a remote disk mounted via SSHFS. Here are the script’s main functions:

Goal Definition:

TARGET_USAGE=70 – the percentage of disk space we want to maintain as occupied. The script will work to keep at least 30% of the disk space free.

Mount Point and Paths:

MOUNT_POINT="/mnt/qnapskorupki" – the local directory where the remote disk is mounted. TARGET_DIRS="$MOUNT_POINT/up*.soban.pl" – the directories where the script will look for files to delete if needed.

Function check_qnap: This function checks whether the disk is mounted and whether the mount directory is not empty. If there are issues, the script attempts to unmount and remount the disk using sshfs with a password forwarded through sshpass.

File Deletion: The script monitors disk usage and, if TARGET_USAGE is exceeded, it finds and deletes the oldest files in specified directories until the target level of free space is achieved.
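A minimal sketch of such a script, using the variable names described above; the remote share (user@qnap:/share) and the password placeholder are assumptions, and the original script may differ in details:

```bash
#!/bin/bash
TARGET_USAGE=70                          # keep at most 70% of the disk used
MOUNT_POINT="/mnt/qnapskorupki"          # local mount point of the QNAP share
TARGET_DIRS="$MOUNT_POINT"/up*.soban.pl  # directories to prune

check_qnap() {
    # remount if the share is not mounted or the directory is empty
    if ! mountpoint -q "$MOUNT_POINT" || [ -z "$(ls -A "$MOUNT_POINT")" ]; then
        fusermount -u "$MOUNT_POINT" 2>/dev/null
        # password on the command line, as described in the article (placeholder here)
        sshpass -p 'PASSWORD' sshfs user@qnap:/share "$MOUNT_POINT"
    fi
}

check_qnap
# delete the oldest files until usage drops to TARGET_USAGE
while [ "$(df --output=pcent "$MOUNT_POINT" | tail -1 | tr -dc '0-9')" -gt "$TARGET_USAGE" ]; do
    OLDEST=$(find $TARGET_DIRS -type f -printf '%T@ %p\n' | sort -n | head -1 | cut -d' ' -f2-)
    [ -z "$OLDEST" ] && break            # nothing left to delete
    rm -f "$OLDEST"
done
```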

Example Script Execution:

[Screenshot: the script starts working and gradually deletes files.]

The script will run until it reaches 70% usage as planned:

[Screenshot: the script runs until usage drops to the 70% target.]

Downloading the Script and Adding It to Crontab

Of course, the script should be adjusted to meet your specific needs. However, if you want to download it and add it to crontab, follow these steps:

If you want to automate the file removal process, for example, at the end of the day, add the following entry to crontab:
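For example (the script path is a placeholder):

```bash
# open the crontab for editing with: crontab -e
# then add the line:
55 23 * * * /root/scripts/qnap_cleanup.sh
```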

In this case, the script will run every day at 11:55 PM.

Make sure to use the correct path to the script.

Security and Optimization

The script uses a password directly in the command line, which can pose a security risk. In practical applications, it is recommended to use more advanced authentication methods, such as SSH keys, which are more secure and do not require a plaintext password in the script. However, in the case of QNAP, we used a password when writing this script.

Conclusion

The presented script is an example of how daily administrative tasks, such as disk space management, can be automated, thus increasing efficiency and reliability. Its implementation in real IT environments can significantly streamline data management processes, especially in situations where quick response to changes in disk usage is critical.

How to automatically turn off your laptop at a low battery level in Linux


Automatically Shutting Down Your Laptop at Low Battery Levels

Maintaining long battery life and protecting data are crucial for laptop users. In this article, we’ll show you how to create a simple Bash script that automatically shuts down your laptop when the battery level falls below 20%. Additionally, you’ll learn how to set up a crontab to run the script every 10 minutes, ensuring continuous monitoring.

Creating a Bash Script

The Bash script we have prepared will check the current battery level and compare it to a set minimum threshold. If the battery level drops below this threshold, the script initiates a system shutdown, helping to protect your data and hardware.
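A minimal sketch of such a script, assuming the battery is exposed as BAT0 under /sys/class/power_supply (adjust the path if your hardware differs):

```bash
#!/bin/bash
THRESHOLD=20
BATTERY=/sys/class/power_supply/BAT0

LEVEL=$(cat "$BATTERY/capacity")   # current charge in percent
STATUS=$(cat "$BATTERY/status")    # Charging / Discharging / Full

# shut down only when actually discharging below the threshold
if [ "$STATUS" = "Discharging" ] && [ "$LEVEL" -lt "$THRESHOLD" ]; then
    /sbin/shutdown -h now "Battery at ${LEVEL}% - shutting down"
fi
```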

You can also download the script:

Don’t forget to grant permissions to run it:
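That is:

```bash
chmod +x battery_check.sh   # the file name is an example
```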

Crontab Configuration

Crontab is a tool that allows you to schedule tasks in the Linux system. With it, we can set up regular battery checks.
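For example, to run the script every 10 minutes (the path is a placeholder):

```bash
# crontab -e, then add:
*/10 * * * * /root/scripts/battery_check.sh
```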

Summary

With this setup, you can rest assured about the condition of your laptop even during intensive use. Automatic shutdown at low battery levels not only protects the equipment but also helps maintain a longer battery life.

Troubleshooting Proxmox clusters and restoring the LXC container

Managing Proxmox clusters can sometimes present technical difficulties, such as inconsistencies in cluster configuration or issues with restoring LXC containers. Finding and resolving these issues is crucial for maintaining the stability and performance of the virtualization environment. In this article, I present a detailed guide on how to diagnose and resolve an issue with an unreachable node and how to successfully restore an LXC container.

Before you begin any actions, make sure you have a current backup of the system.

Diagnosing the State of the Proxmox Cluster


To understand the state of the cluster, execute the following command on the node-up-page-04 node:
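For example, using the standard Proxmox cluster tool:

```bash
pvecm nodes   # list cluster members as seen by this node
```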

Expected output:

Then check the detailed cluster information with the following command:
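Presumably:

```bash
pvecm status   # quorum, vote, and membership details
```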

Expected output:

Removing the Container Configuration File and Cleaning Data

I discovered that the configuration file for container 107 still exists on the cluster’s file system at the path:
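A typical location for such a file (here <old-node> stands for the name of the detached node):

```bash
ls -la /etc/pve/nodes/<old-node>/lxc/107.conf
```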

Output:

To remove this file and any remaining data associated with the detached node, execute:
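Something along these lines (node name is a placeholder):

```bash
rm /etc/pve/nodes/<old-node>/lxc/107.conf   # remove the stale container config
pvecm delnode <old-node>                    # drop the detached node from the cluster
```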

Restoring the Container

After removing the configuration file, I restored the LXC container on the node-up-page-04 node using the command:
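A likely form of the command (the backup archive name and target storage are placeholders):

```bash
pct restore 107 /var/lib/vz/dump/vzdump-lxc-107.tar.zst --storage local-lvm
```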

Output:

The restoration process was successful, and the container was ready for use. This case illustrates the importance of thorough diagnostics and configuration file management in Proxmox when working with clusters. Regular reviews of configurations are advisable to avoid inconsistencies and operational issues in the future.

How to prevent hibernation and sleep on Debian and Proxmox laptops when the lid is closed

Virtualization servers based on Debian family systems, such as Proxmox, are often used in test environments where continuous availability is crucial. Sometimes these servers are installed on laptops, which serve as low-budget or portable solutions. However, the standard power management settings in laptops can lead to undesirable behaviors, such as sleeping or hibernating when the lid is closed. Below, I describe how to change these settings in an operating system based on Debian to ensure uninterrupted server operation.

Step 1: Accessing the Configuration File

Open the terminal and enter the following command to edit the /etc/systemd/logind.conf file using a text editor (e.g., nano):
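That is:

```bash
sudo nano /etc/systemd/logind.conf
```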

Step 2: Modifying logind Settings

Find the line containing HandleLidSwitch and change its value to ignore. If the line is commented out (preceded by a # symbol), remove the #. You can also add this line to the end of the file if it does not exist.
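The line should read:

```
HandleLidSwitch=ignore
```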

Step 3: Applying and Restarting the Service

After making the changes and saving the file, you need to restart the systemd-logind service for the changes to take effect. Use the following command in the terminal:
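```bash
sudo systemctl restart systemd-logind
```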

With these changes, closing the laptop lid will no longer initiate hibernation or sleep, which is especially important when using Debian-based servers, including Proxmox, as server solutions.

Extending SWAP space on Proxmox using lvreduce

Introduction

Managing SWAP memory is a key element of administering Linux operating systems, especially in virtualization environments like Proxmox. SWAP acts as “virtual memory” that can be used when the system’s physical RAM is full. In this article, we will show how to increase SWAP space on a Proxmox server, using the lvreduce tool to free up disk space that can then be allocated to SWAP.

Problem Overview

A user wants to increase SWAP space from 8 GB to 16 GB, but encounters the problem of lacking available space in the LVM volume group, which is required to increase SWAP.

Step 1: Checking Available Space
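```bash
vgs   # list volume groups with total and free space
```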

The command vgs displays the volume groups along with their sizes and available space.

Step 2: Reducing the Volume

Suppose there is a root volume of 457.26 GB, which can be reduced to free up an additional 8 GB for SWAP. Before reducing the volume, it is necessary to reduce the file system on this volume.

Note that the root file system cannot be shrunk while it is mounted: for ext4 the reduction must occur offline or from a live CD, and XFS cannot be shrunk at all.
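For ext4, a shrink from a rescue/live environment could look like this (the LV path assumes Proxmox's default pve volume group):

```bash
e2fsck -f /dev/pve/root        # force a file-system check first
resize2fs /dev/pve/root 449G   # shrink the FS below the planned LV size
```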

Step 3: Using lvreduce
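Assuming the default Proxmox volume group pve:

```bash
lvreduce -L -8G /dev/pve/root   # shrink the root LV by 8 GB
```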

This command reduces the root volume by 8 GB, which is confirmed by a message about the volume size change.

Step 4: Deactivating SWAP
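To turn all swap off:

```bash
swapoff -a
```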

Before starting changes in SWAP size, SWAP must first be turned off using the above command.

Step 5: Expanding SWAP
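Again assuming the pve volume group:

```bash
lvresize -L +8G /dev/pve/swap   # grow the swap LV by the freed space
mkswap /dev/pve/swap            # re-create the swap signature
swapon /dev/pve/swap            # activate it again
```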

The above commands first increase the SWAP space, then format it and reactivate it.
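To verify:

```bash
swapon --show   # list active swap areas and their sizes
```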

Finally, we verify the active SWAP areas using the above command to ensure everything is configured correctly.

This process shows how you can flexibly manage disk space on Proxmox servers, adjusting the size of SWAP depending on needs. Using lvreduce requires caution, as any operation on partitions and volumes carries the risk of data loss, therefore it is always recommended to make backups before proceeding with changes.

Upgrading Apache Cassandra from Version 3.11.15 and Higher to 4.1.x on Ubuntu 20.04.5 LTS: A Comprehensive Guide

Upgrading Apache Cassandra to a newer version is a significant task that database administrators undertake to ensure their systems benefit from new features, enhanced security measures, and improved performance. This guide provides a detailed walkthrough for upgrading Apache Cassandra from version 3.11.15 and higher to the latest 4.1.x version, specifically on Ubuntu 20.04.5 LTS, with an emphasis on pre-upgrade cleaning operations to manage disk space effectively.

Pre-upgrade Preparation

Backup Configuration Directory:

Before initiating the upgrade, it’s crucial to back up the Cassandra configuration directory. This precaution allows for a swift restoration of the configuration should any issues arise during the upgrade process. Utilize the following command to create a backup, incorporating the current date into the folder name for easy identification:
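For example (assuming the Debian/Ubuntu package layout under /etc/cassandra):

```bash
sudo cp -r /etc/cassandra /etc/cassandra_backup_$(date +%Y-%m-%d)
```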

Pre-Cleanup Operations

Preparation is key to a smooth upgrade. Begin with maintenance commands to guarantee data integrity and optimize space usage, especially important for systems with limited disk space.

Scrub Data:

Execute nodetool scrub to clean and reorganize data on disk. Given that this operation may be time-consuming, particularly for databases with large amounts of data or limited disk space, it’s a critical step for a healthy upgrade process.

Clear Snapshots:

To further manage disk space, use nodetool clearsnapshot to remove existing snapshots, freeing up space for the upgrade process. To delete all snapshots on the node, simply use this method if you’re running out of space:
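```bash
nodetool clearsnapshot   # with no name given, removes all snapshots (on 4.x: nodetool clearsnapshot --all)
```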

Cleanup Data:

Perform a nodetool cleanup to purge unnecessary data. In scenarios where disk space is a premium, it’s advisable to execute a scrub operation without generating a snapshot to conserve space:
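That is:

```bash
nodetool cleanup               # drop data this node no longer owns
nodetool scrub --no-snapshot   # scrub without creating a pre-scrub snapshot
```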

Draining and Stopping Cassandra

Drain the Node:

Prior to halting the Cassandra service, ensure all data in memory is flushed to disk with nodetool drain.

Stop the Cassandra Service:

Cease the running Cassandra services to proceed with the upgrade safely:
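```bash
nodetool drain                  # flush memtables and stop accepting writes
sudo systemctl stop cassandra   # stop the service
```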

Upgrading Cassandra

Update Source List:

Edit the repository sources to point to the new version of Cassandra by adjusting the cassandra.sources.list file:
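For example, switching the repository series from 311x to 41x (the exact file name may differ per installation):

```bash
sudo nano /etc/apt/sources.list.d/cassandra.sources.list
# change the series in the deb line, e.g.:
# deb https://debian.cassandra.apache.org 41x main
```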

Upgrade Packages:

With the repository sources updated, refresh the package list and upgrade the packages. When executing the apt upgrade command, you can keep pressing Enter as the default option is ‘N’ (No):
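```bash
sudo apt update
sudo apt upgrade   # or: sudo apt install cassandra
```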

Modify Configuration:

Adjust the Cassandra configuration for version 4.1.x by commenting out or deleting deprecated options:
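Which options apply depends on your existing file; as one example, Thrift-related settings were removed in 4.0 and must not remain active:

```bash
sudo nano /etc/cassandra/cassandra.yaml
# comment out options that no longer exist in 4.x, e.g.:
# start_rpc: false
# rpc_port: 9160
# thrift_framed_transport_size_in_mb: 15
```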

Update JAMM Library:

Ensure the Java Agent for Memory Measurements (JAMM) library is updated to enhance performance:

Backup and update the JVM options file:

It’s a good practice to back up configuration files before making changes. This step renames the existing jvm-server.options file to jvm-server.options.orig as a backup. Then, it copies the jvm.options file to jvm-server.options to apply the standard JVM options for Cassandra servers.
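That is, assuming the configuration lives in /etc/cassandra:

```bash
sudo mv /etc/cassandra/jvm-server.options /etc/cassandra/jvm-server.options.orig   # keep a backup
sudo cp /etc/cassandra/jvm.options /etc/cassandra/jvm-server.options               # apply the options
```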

Optimization and Verification

Optimize Memory Usage:

Post-upgrade, it’s beneficial to evaluate and optimize memory usage and swap space to ensure efficient Cassandra operation:
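For instance (disabling swap is a common Cassandra recommendation; whether it fits your host is a judgment call):

```bash
free -h           # check RAM and swap usage
sudo swapoff -a   # Cassandra generally performs best with swap disabled
```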

Restart the Cassandra Service:

Apply the new version by restarting the Cassandra service:
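```bash
sudo systemctl restart cassandra
```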

Verify Upgrade:

Confirm the success of the upgrade by inspecting the cluster’s topology and state, ensuring all nodes are functional:
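For example:

```bash
nodetool status            # all nodes should show UN (Up/Normal)
nodetool describecluster   # schema versions should agree across nodes
```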

By adhering to this comprehensive guide, database administrators can effectively upgrade Apache Cassandra to version 4.1.x, capitalizing on the latest advancements and optimizations the platform has to offer, while ensuring data integrity and system performance through careful pre-upgrade preparations.

Post-Upgrade Maintenance

After successfully upgrading Apache Cassandra to version 4.1.x and ensuring the cluster is fully operational, it’s crucial to conduct post-upgrade maintenance to optimize the performance and security of your database system. This section outlines essential steps and considerations to maintain a healthy and efficient Cassandra environment.

Monitor Performance and Logs

In the immediate aftermath of the upgrade, closely monitor the system’s performance, including CPU, memory usage, and disk I/O, to identify any unexpected behavior or bottlenecks. Additionally, review the Cassandra system logs for warnings or errors that may indicate potential issues requiring attention.

Tune and Optimize

Based on the performance monitoring insights, you may need to adjust Cassandra’s configuration settings for optimal performance. Consider tuning parameters related to JVM options, compaction, and read/write performance, keeping in mind the specific workload and data patterns of your application.

Run nodetool upgradesstables

To ensure that all SSTables are updated to the latest format, execute nodetool upgradesstables on each node in the cluster. This operation will rewrite SSTables that are not already in the current format, which is essential for taking full advantage of the improvements and features in Cassandra 4.1.x (Check the space, and if required, delete all snapshots as shown above.):
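```bash
nodetool upgradesstables   # rewrite SSTables into the current format
```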

This process can be resource-intensive and should be scheduled during off-peak hours to minimize impact on live traffic.

Implement Security Enhancements

Cassandra 4.1.x includes several security enhancements. Review the latest security features and best practices, such as enabling client-to-node encryption, node-to-node encryption, and advanced authentication mechanisms, to enhance the security posture of your Cassandra cluster.

Review and Update Backup Strategies

With the new version in place, reassess your backup strategies to ensure they are still effective and meet your recovery objectives. Verify that your backup and restore procedures are compatible with Cassandra 4.1.x and consider leveraging new tools or features that may have been introduced in this release for more efficient data management.

Proxying through an nginx frontend to a second virtual server with WordPress

In a situation where we have one public IP address with many domains pointed at it, it is worth spreading the traffic across several servers. Proxmox, which allows you to run multiple virtual machines, is perfect for this. In my case, each virtual machine is separated, and the traffic is split by nginx, which distributes it to the other servers. The frontend virtual machine redirects traffic to the WordPress machine at 10.10.11.105 on port 80. No encryption is required on that internal leg, but the frontend itself, which manages the traffic, presents itself with encryption and security on port 443.

Two machines with the following configuration will participate throughout the process:
up-page IP: 10.10.14.200
soban-pl IP: 10.10.11.105

So let’s move on to the frontend that distributes traffic to other machines.
The frontend runs Debian 11 (bullseye); in addition, I have the following entry in the repository file (/etc/apt/sources.list):
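A typical bullseye entry (a mirror of your choice works too):

```
deb http://deb.debian.org/debian bullseye main
```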

To install nginx, run the following commands:
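```bash
apt update
apt install nginx
```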

You should make sure that traffic from the frontend can actually reach port 80 on the backend. You can read how to check network connectivity here: Check network connection and open TCP port via netcat.

[Screenshot: a successful telnet connection to 10.10.11.105 on port 80, ended with the 'quit' command.]

The configuration of the frontend that distributes the traffic is as follows (/etc/nginx/conf.d/soban.pl.ssl.conf):
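A sketch of what such a frontend definition could look like, assuming Let's Encrypt certificates under /etc/letsencrypt (the live configuration may differ):

```nginx
server {
    listen 443 ssl;
    server_name soban.pl www.soban.pl;

    ssl_certificate     /etc/letsencrypt/live/soban.pl/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/soban.pl/privkey.pem;

    location / {
        proxy_pass http://10.10.11.105:80;   # WordPress backend VM
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```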

The configuration of the above-mentioned WordPress machine is covered below; additional authorization is also set for logins to wp-admin – you can read about it here: More security wp-admin in nginx.

In the next step, check if the nginx configuration is correct by:
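```bash
nginx -t
```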

[Screenshot: nginx -t reporting 'syntax is ok' and 'test is successful' for /etc/nginx/nginx.conf.]

If everything is fine, restart nginx:
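```bash
systemctl restart nginx
```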

nginx should also be installed on the WordPress virtual machine. It also runs Debian 11 (bullseye), so the repository entry looks the same as on the frontend.

Installing nginx itself also looks the same as on the machine acting as the proxy.

All configuration is in /etc/nginx/conf.d/soban.pl.conf:
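A sketch of the backend server block, assuming WordPress under /var/www/wordpress and PHP-FPM (the socket path depends on your PHP version):

```nginx
server {
    listen 80;
    server_name soban.pl www.soban.pl;

    root /var/www/wordpress;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.4-fpm.sock;   # adjust to your PHP-FPM version
    }
}
```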

Also in this case, check the correctness of the nginx configuration with nginx -t.

Everything looks fine, so let’s restart the service with systemctl restart nginx.

If the whole configuration was done correctly, requests should be passed, as unencrypted traffic, to the virtual machine running WordPress. A WordPress service behind nginx is not the only thing that can be hosted or proxied this way: we can direct traffic from nginx to, e.g., JBoss, Apache, and any other web service. Of course, this requires a corresponding modification of the configuration presented above, but the general outline of the nginx proxy concept has been presented. You should also remember about the proper configuration of keys and certificates; in my case, Let's Encrypt works perfectly for this.

Improving encryption on an old Red Hat 5 through a new Oracle Linux 7 using Apache mod_proxy

There are situations when we need to increase the encryption level on an old system – for example, to meet PCI audit requirements. However, the old system is no longer supported, so raising its encryption level directly is not possible. The recommended approach is to migrate the application to a new system; still, when time is short, it is possible to hide the old system and allow only the new machine to communicate with it. In this example, we will use mod_proxy to redirect traffic to the old machine, while iptables will allow communication only from the new machine. It is not a recommended solution, but it works, and I would like to present it here. The systems used in this example are an old Red Hat 5 and a new Oracle Linux 7. Recently it has become very important, particularly for banking transactions, to use at least TLS 1.2 and nothing below. Let's start with the proxy server configuration on Oracle Linux 7.

As of this writing, the addressing is as follows:
new_machine IP: 10.10.14.100
old_machine IP: 10.10.14.101
Traffic will be routed on port 443 from new_machine to old_machine.

Before we move on to the proxy configuration, please make sure there is network connectivity from new_machine (10.10.14.100) to old_machine (10.10.14.101) on port 443. You can read how to verify network connections here: check network connection and open tcp port via netcat.

Let's move on to installing Apache and mod_proxy:
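On Oracle Linux 7 this means (mod_proxy ships with the base httpd package; mod_ssl adds TLS support):

```bash
yum install -y httpd mod_ssl
```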

After installing Apache, open its configuration for editing (for example /etc/httpd/conf.d/ssl.conf – the exact file may differ):

The essential part of the configuration enables the SSL proxy and points traffic at the old machine's IP:
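A sketch of such a proxy vhost, under the assumption that TLS 1.2+ is terminated on the new machine and requests are forwarded to the old one (the ServerName and certificate paths are placeholders):

```apache
<VirtualHost *:443>
    ServerName old-app.example.com

    SSLEngine on
    SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1      # enforce TLS 1.2+
    SSLCertificateFile    /etc/pki/tls/certs/localhost.crt
    SSLCertificateKeyFile /etc/pki/tls/private/localhost.key

    SSLProxyEngine on
    SSLProxyVerify none                         # the old box has a weak/self-signed cert
    SSLProxyCheckPeerName off
    ProxyPass        / https://10.10.14.101/
    ProxyPassReverse / https://10.10.14.101/
</VirtualHost>
```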

To verify the correctness of the Apache configuration, you can issue a command that will check it:
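```bash
apachectl configtest   # or: httpd -t
```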

If the Apache configuration is correct, we can proceed to reloading the service:
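```bash
systemctl reload httpd
```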

At this point, we have a configured proxy connection. Before we move on to limiting traffic with iptables, I suggest opening the site through the newly configured mod_proxy and testing whether everything works properly and the application has no issues.

Once everything is working fine and the network connectivity is in place, we can move on to the iptables configuration on Red Hat 5. Let's start by checking the system version:
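```bash
cat /etc/redhat-release
```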

Now we will prepare iptables so that network traffic on port 443 is accepted only from new_machine (10.10.14.100). To do this, edit the file /etc/sysconfig/iptables:
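The relevant rules, placed before any final REJECT rule, might look like this:

```
-A INPUT -p tcp -s 10.10.14.100 --dport 443 -j ACCEPT
-A INPUT -p tcp --dport 443 -j DROP
```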

Once the iptables settings are correct, we can reload the service:
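```bash
service iptables restart   # RHEL 5 uses the classic init scripts
```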

In this way, we managed to mask the weak encryption by proxying and diverting traffic through the new machine. This is not a recommended solution, and you should try to move the application to an environment compatible with a supported system; however, in crisis situations, this approach can be used. Network traffic is not allowed from other IP addresses, so scanners will not be able to detect the weak encryption on the old machine, and users can reach the old environment only through the proxy. This does not change the fact that weak encryption is still configured in the old environment and needs to be corrected. The example I gave uses an old Red Hat 5 and a new Oracle Linux 7, but a similar solution and configuration should be possible for other system versions.

Increasing the security of the ssh service

Nowadays, many bots and hackers look for port 22 on servers and try to log in, usually as the standard Linux root user. In this short article, I will describe how to create a user that can become root, and how to change the default SSH port 22 to 2222. Let's go:
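```bash
useradd -m -s /bin/bash soban   # -m also creates the home directory
```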

This way we created the user ‘soban’ and assigned it the default shell ‘/bin/bash’.

We still need to set a password for the user ‘soban’:
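```bash
passwd soban
```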

In the next step, let’s add it to ‘/etc/sudoers’ so that it can become root. Keep in mind that once the user can get root, he will be able to do anything on the machine!

Please add this entry below:
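```bash
# edit safely with: visudo
soban   ALL=(ALL:ALL) ALL
```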

How can we test whether the user has the ability to log in as root? Nothing easier, first we’ll switch to the user we just created:
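```bash
su - soban
```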

To list the possible sudo commands, just type the command:
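```bash
sudo -l
```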

Finally, to confirm whether it is possible to log in as root, you should issue the command:
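```bash
sudo su -   # or: sudo -i
```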

Now that we have a root user ready, let’s try disabling ssh logon directly and change the default port. To do this, go to the default configuration of the ssh service, which is located in ‘/etc/ssh/sshd_config’:
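```bash
sudo nano /etc/ssh/sshd_config
```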

We are looking for the lines containing 'Port' and 'PermitRootLogin' – they may be commented out (preceded by a # symbol), in which case remove the #. Then set them as below:
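```
Port 2222
PermitRootLogin no
```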

In this way, we changed the default port 22 to 2222 and disallowed logging in directly as root. However, the SSH service still needs to be restarted; in Debian or Kali Linux we do it like this:
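```bash
sudo systemctl restart ssh   # on some systems the unit is called sshd
```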

In this way, we have managed to create a user who can safely log in over SSH and become root. In addition, after changing the port, we will no longer appear in scans of port 22, the default port probed by potential intruders. Installing the fail2ban service is another very good security improvement.