Ever wondered how to streamline the management and monitoring of virtual machines in your Proxmox environment? QEMU Guest Agent is a game-changer, offering tools that significantly enhance the way you interact with virtual systems. Let’s dive into how this tool can transform your setup.
What Makes QEMU Guest Agent Indispensable?
Time Synchronization: Keeping time consistent across your virtual machines and the host can be tricky, but QEMU Guest Agent automates this, ensuring that time-sensitive operations run smoothly.
Power Management: Imagine being able to shut down or reboot your virtual machines right from the Proxmox panel — no need to log in to each VM. It’s not only convenient but also a time saver.
System Monitoring: Get detailed insights into file systems, network activities, and other operational parameters directly from your host (see the example after this list). This level of monitoring allows for timely diagnostics and adjustments.
Disk Management: Handling disk operations without having to intervene directly on the VM makes backing up and restoring data more straightforward than ever.
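As an illustration of the monitoring capability above: once the agent is enabled and installed (both covered below), the host can query a guest directly. A minimal sketch, assuming a VM with ID 100 (the VMID is illustrative):

# List the guest's filesystems as reported by the agent
qm agent 100 get-fsinfo
# List the guest's network interfaces and IP addresses
qm agent 100 network-get-interfaces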
Setting Up QEMU Guest Agent on Your Proxmox Server
Getting started with QEMU Guest Agent involves a few simple steps:
Enable the Agent: Log in to your Proxmox panel, go to the ‘Options’ section of your desired VM, and make sure the ‘QEMU Guest Agent’ option is checked.
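If you prefer the command line, the same option can be set from the host shell; a sketch, again with an illustrative VMID of 100:

# Enable the QEMU Guest Agent option for VM 100 (applies on the next VM start)
qm set 100 --agent enabled=1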
Next up, installing it on an Ubuntu VM:
Install QEMU Guest Agent
sudo apt-get install qemu-guest-agent
sudo systemctl start qemu-guest-agent
sudo systemctl enable qemu-guest-agent
To check whether the qemu-guest-agent service is running properly inside the VM:
Check agent
systemctl status qemu-guest-agent
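You can also confirm from the Proxmox host that the agent responds; assuming VMID 100:

# Ping the guest agent; silent success (exit code 0) means it is reachable
qm agent 100 ping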
The QEMU Guest Agent doesn’t just make life easier by automating the mundane tasks — it also enhances the security and efficiency of your virtual environment. Whether you’re managing a single VM or a whole fleet, it’s an invaluable addition to your toolkit.
Managing Proxmox clusters can sometimes present technical difficulties, such as inconsistencies in cluster configuration or issues with restoring LXC containers. Finding and resolving these issues is crucial for maintaining the stability and performance of the virtualization environment. In this article, I present a detailed guide on how to diagnose and resolve an issue with an unreachable node and how to successfully restore an LXC container.
Before you begin any actions, make sure you have a current backup of the system.
Diagnosing the State of the Proxmox Cluster
While removing the unreachable node and attempting the container restore, the following errors appeared:

pvecm delnode up-page-02
Node/IP: up-page-02 is not a known host of the cluster.
and:
pct restore 107 vzdump-lxc-107-2024_11_12-03_00_01.tar.zst --storage local
CT 107 already exists on node 'up-page-02'
To understand the state of the cluster, execute the following command on the node-up-page-04 node:
pvecm nodes
Expected output:
Membership information
----------------------
    Nodeid      Votes Name
         1          1 node-up-page-01
         2          1 node-up-page-04 (local)
Then check the detailed cluster information with the following command:
pvecm status
Expected output:
Cluster information
-------------------
Name:             soban-proxmox
Config Version:   4
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Wed Nov 13 10:40:12 2024
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000002
Ring ID:          1.e6
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 <masked IP>
0x00000002          1 <masked IP> (local)
Removing the Container Configuration File and Cleaning Data
I discovered that the configuration file for container 107 still exists on the cluster’s file system at the path:
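The path itself was lost in the original formatting. In the standard Proxmox cluster filesystem (pmxcfs) layout, the configuration for CT 107 on node up-page-02 would live at /etc/pve/nodes/up-page-02/lxc/107.conf; the sketch below assumes that layout:

# Confirm the stale configuration file is still there
ls -l /etc/pve/nodes/up-page-02/lxc/107.conf
# Remove the leftover configuration so the VMID is free again
rm /etc/pve/nodes/up-page-02/lxc/107.conf
# Retry the restore
pct restore 107 vzdump-lxc-107-2024_11_12-03_00_01.tar.zst --storage local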
The restoration process was successful, and the container was ready for use. This case illustrates the importance of thorough diagnostics and configuration file management in Proxmox when working with clusters. Regular reviews of configurations are advisable to avoid inconsistencies and operational issues in the future.
Managing SWAP memory is a key element in administering Linux operating systems, especially in virtualization environments like Proxmox. SWAP acts as “virtual memory” that can be used when the system’s physical RAM is full. In this article, we will show how to increase SWAP space on a Proxmox server, using the lvresize tool to free up disk space that can then be allocated to SWAP.
Problem Overview
A user wants to increase SWAP space from 8 GB to 16 GB but runs into a lack of free space in the LVM volume group, which is needed before SWAP can be grown.
Step 1: Checking Available Space
vgs
The command vgs displays the volume groups along with their sizes and available space.
Step 2: Reducing the Volume
Suppose there is a root volume of 457.26 GB, which can be reduced to free up an additional 8 GB for SWAP. Before reducing the volume, it is necessary to reduce the file system on this volume.
resize2fs /dev/pve/root 449.26G
Note that ext4 can only be shrunk while unmounted (for the root filesystem this means booting a rescue or live system), and XFS cannot be shrunk at all; an XFS volume would have to be recreated at a smaller size and restored from backup.
Step 3: Using lvreduce
lvreduce -L -8G /dev/pve/root
This command reduces the root volume by 8 GB, which is confirmed by a message about the volume size change.
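As a side note (not the method used above), lvreduce can shrink the filesystem and the logical volume together in one step via its --resizefs flag, which drives fsadm under the hood:

# Shrink the ext4 filesystem and the LV together in a single operation
lvreduce --resizefs -L -8G /dev/pve/root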
Step 4: Deactivating SWAP
swapoff -a
Before resizing SWAP, it must first be deactivated with the command above.
Step 5: Expanding SWAP
lvresize -L +8G /dev/pve/swap
mkswap /dev/pve/swap
swapon /dev/pve/swap
The above commands first increase the SWAP space, then format it and reactivate it.
swapon --show
Finally, we verify the active SWAP areas using the above command to ensure everything is configured correctly.
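For an additional sanity check, free also reports the new SWAP total:

# Human-readable memory summary; the Swap line should now show the enlarged size
free -h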
This process shows how flexibly disk space can be managed on Proxmox servers, adjusting the size of SWAP as needed. Use lvreduce with caution: any operation on partitions and volumes carries a risk of data loss, so always make a backup before proceeding with changes.
In today’s world, where data is becoming increasingly valuable, proper backup management is crucial for the security of information systems. In this article, I present an effective way to automate the backup of key configuration files in Proxmox-based systems using a simple bash script and Crontab configuration.
Bash Script for Backup of the /etc Directory
The /etc directory contains critical system configuration files that are essential for the proper functioning of the operating system and various applications. Loss of or damage to these files can lead to serious problems. The backup-etc.sh script automates the backup of this directory; it does the following (a reconstructed sketch follows the list):
Generates the current date and time, which are added to the name of the archive to easily identify individual copies.
Uses the tar program with zstd compression to create an archived and compressed copy of the /etc directory.
Removes archives older than 100 days from the /var/lib/vz/dump/ location, thus ensuring optimal disk space management.
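The script body itself did not survive the original formatting. Here is a minimal sketch consistent with the three points above; the archive name pattern and date format are assumptions, while the /var/lib/vz/dump/ location and 100-day retention come from the description:

#!/bin/bash
# Current date and time, embedded in the archive name for easy identification
DATE=$(date +%Y-%m-%d_%H-%M-%S)

# Create a zstd-compressed archive of /etc in the standard Proxmox dump directory
tar --zstd -cf "/var/lib/vz/dump/etc-backup-${DATE}.tar.zst" /etc

# Remove archives older than 100 days to keep disk usage in check
find /var/lib/vz/dump/ -name "etc-backup-*.tar.zst" -mtime +100 -delete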
Adding Script to Crontab
To automate the backup process, the script should be added to crontab. Below is a sample configuration that runs the script daily at 2:40 AM:
Editing crontab
# crontab -e
40 2 * * * /root/backup-etc.sh > /dev/null 2>&1
Redirecting output to /dev/null keeps the job quiet: no standard output is produced, so cron does not send mail about routine runs.
Download the Script from soban.pl
The backup-etc.sh script is also available for download from the soban.pl website. You can download it using the following wget command and immediately save it as /root/backup-etc.sh:
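The exact wget invocation was not preserved in this copy; the pattern, with a hypothetical URL you should adjust to the script's actual location, would be:

# Download the script (hypothetical URL) straight to its target path
wget -O /root/backup-etc.sh https://soban.pl/scripts/backup-etc.sh
# Grant executable permissions
chmod +x /root/backup-etc.sh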
With this simple command, the script is downloaded from the server and granted appropriate executable permissions.
Benefits and Modifications
The backup-etc.sh script is flexible and can easily be modified to suit different systems. By default, it writes to the /var/lib/vz/dump/ folder, the standard backup storage location in Proxmox environments. This simplifies backup management and integrates easily with existing backup solutions.
By keeping backups for 100 days, we ensure a balance between availability and disk space management. Old copies are automatically deleted, minimizing the risk of disk overflow and reducing data storage costs.
Summary
Automating backups using a bash script and Crontab is an effective method to secure critical system data. The backup-etc.sh script provides simplicity, flexibility, and efficiency, making it an excellent solution for Proxmox system administrators. I encourage you to adapt and modify this script according to your own needs to provide even better protection for your IT environment.
Proxmox VE is a comprehensive, open-source server management platform that seamlessly integrates KVM hypervisor and LXC containers. Today, we present a streamlined process for installing Proxmox VE 8 on Debian 12 Bookworm, based on the official guidance from the Proxmox VE Installation Guide.
Prerequisites
A fresh Debian 12 Bookworm installation.
A user with sudo privileges.
Internet connectivity.
Installation Scripts
We’ve divided the installation into two scripts. The first script prepares your system and installs the Proxmox VE kernel. The second script continues the process after a system reboot, installing the remaining Proxmox VE packages.
Remember, all these commands need to be executed from the root user level, so:
Become root:
# sudo su -
First Part: System Preparation and Kernel Installation
Start by downloading the first script which prepares your system and installs the Proxmox VE kernel:
Downloading and changing permissions in the first script:
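The download command itself did not survive the original formatting; the pattern, with a hypothetical URL you should adjust to the script's actual location, would be:

# Download the first-part script (hypothetical URL) and make it executable
wget -O /root/proxmox-install-part1.sh https://soban.pl/scripts/proxmox-install-part1.sh
chmod +x /root/proxmox-install-part1.sh
/root/proxmox-install-part1.sh

The script body was likewise not reproduced here. Judging by the official Proxmox VE 8 installation guide for Debian 12 Bookworm (which also stresses that /etc/hosts must resolve the machine's hostname to its network IP), the first part performs steps along these lines before the closing lines shown below:

# Add the Proxmox VE no-subscription repository for Bookworm
echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
# Fetch the Proxmox release signing key
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
# Refresh package lists and fully upgrade the base system
apt update && apt full-upgrade -y
# Install the Proxmox VE kernel
apt install proxmox-default-kernel -y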
echo"Kernel installation completed. The system will now reboot. After rebooting, continue with the second part of the script."
reboot
After running the first script, your system will reboot. At this stage, you may encounter a few dialogs from the system, which are part of the normal package configuration steps. For this simplified installation, you can accept the default options by pressing Enter.
Screenshots during Installation
GRUB Configuration – A new version of the GRUB bootloader configuration file is available. It’s recommended to keep the local version currently installed unless you are aware of the changes. As with the previous dialogs, pressing Enter will select the default action.
Postfix Configuration – This dialog appears when installing the postfix package, which is a mail transport agent. The default option “Internet Site” is suitable for most cases. Pressing Enter accepts this configuration.
System Mail Name – Here you specify the FQDN (Fully Qualified Domain Name) for the system mail. The default value is usually adequate unless you have a specific domain name for your server. Again, pressing Enter will continue with the default configuration.
There might be issues encountered towards the end of the first script installation, such as:
Errors at the end of the first script's run:
Errors were encountered while processing:
 ifupdown2
 pve-manager
 proxmox-ve
E: Sub-process /usr/bin/dpkg returned an error code (1)
However, the second part of the script, executed after the reboot, addresses these problems. After a successful reboot of the machine, log into the system and proceed to the second script.
Second Part: Completing Proxmox VE Installation
After your system has rebooted, proceed with downloading the second script:
Downloading and changing permissions in the second script:
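As with the first part, the download command was lost in formatting; the pattern (again with a hypothetical URL) would be the following, after which the published script body continues below:

# Download the second-part script (hypothetical URL) and make it executable
wget -O /root/proxmox-install-part2.sh https://soban.pl/scripts/proxmox-install-part2.sh
chmod +x /root/proxmox-install-part2.sh
/root/proxmox-install-part2.sh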
echo"Continuing Proxmox VE installation after reboot..."
# Install upgrade
apt upgrade-y
# Optional: Remove the Debian default kernel
apt remove linux-image-amd64'linux-image-6.1*'-y
update-grub
# Optionally remove the os-prober package
apt remove os-prober-y
# Clean up installation repository entry
rm/etc/apt/sources.list.d/pve-install-repo.list
# Retrieve the server's IP address for the Proxmox web interface link
IP_ADDRESS=$(hostname-I|awk'{print $1}')
echo"Proxmox VE installation completed."
echo"You can now connect to the Proxmox VE web interface using:"
echo"https://$IP_ADDRESS:8006"
echo"Please log in using the 'root' username and your root password."
Once the second script completes, you will be able to access the Proxmox VE web interface using the URL displayed at the script’s conclusion. Log in with the ‘root’ username and your root password.
Upon loading the page, you may encounter a certificate trust warning; this is normal at this stage, and you can safely accept the risk and proceed to the Proxmox management page. If you don't know the root password, you can reset it by executing 'passwd' as root. Good luck!
This script notifies me by e-mail about the condition of a disk. Remember to point it at the right disk, in this case "/dev/sda", and change the e-mail address from "soban@soban.pl" to your own. Save the script as "/root/checkbadsector.sh":
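The script itself was not preserved in this copy. A minimal sketch consistent with the description, assuming smartmontools is installed and a working mail command is configured, could look like this:

#!/bin/bash
# Disk to monitor and recipient address - adjust both to your setup
DISK="/dev/sda"
EMAIL="soban@soban.pl"

# Overall SMART health verdict (e.g. "... test result: PASSED")
HEALTH=$(smartctl -H "$DISK")
# Raw count of reallocated sectors, a common bad-sector indicator
REALLOC=$(smartctl -A "$DISK" | awk '/Reallocated_Sector_Ct/ {print $10}')

# Alert if the health check did not pass or any sectors were reallocated
if ! echo "$HEALTH" | grep -q "PASSED" || [ "${REALLOC:-0}" -gt 0 ]; then
    smartctl -a "$DISK" | mail -s "Disk warning on $(hostname): $DISK" "$EMAIL"
fi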