Living With Proxmox

My Proxmox VE 9.0 upgrade went smoothly. I ran the following commands from a shell:

apt update; apt upgrade; pve8to9

Then I updated my /etc/apt/sources.list to point to the “Trixie” repositories and ran:

apt dist-upgrade

to run the upgrade process. Admittedly, I’m not running Ceph or Proxmox Backup Server; if you are, it’s definitely worth checking out the upgrade documentation at https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#In-place_upgrade.
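For reference, the “Trixie” entries in the classic one-line format look roughly like this – a sketch assuming the main Debian mirrors and the free pve-no-subscription repository (adjust to match the repositories you actually use):

deb http://deb.debian.org/debian trixie main contrib
deb http://deb.debian.org/debian trixie-updates main contrib
deb http://security.debian.org/debian-security trixie-security main contrib
deb http://download.proxmox.com/debian/pve trixie pve-no-subscription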

A quick reboot, just because, and everything was up and running.

A week later, unrelated to the upgrade, I noticed that my secondary PVE server was unresponsive. I saw lots of “read error on drive /dev/sda” messages and rebooted the server. The BIOS complained the boot drive was unavailable. I checked the cables, and all seemed fine. Still no boot.

Unfortunately, I’d moved the BBS from the primary server to the secondary server when I did maintenance on the primary, and forgotten to move it back. The two VMs running on the secondary were the BBS and Proxmox Backup Server.

I installed a fresh copy of Windows 11, copied my daily BBS backup (a file backup to my NAS) to it, and got the BBS working.

I took another look at the secondary, reseated everything, and now it booted. The PVE GUI came up, but the VMs were unavailable.

smartctl -a /dev/sda

didn’t turn up anything out of the ordinary: no remapped sectors and a moderate power-on hours count.

I saw the message:

TASK ERROR: activating LV 'pve/data' failed: Check of pool pve/data failed (status:64). Manual repair required!

Looking at the output of the usual LVM commands (pvs, vgs, lvs), the logical volumes all looked OK. Searching on the web revealed the command:

lvconvert --repair pve/data

After I ran the lvconvert command, the VMs appeared in PVE just fine. I copied the data files from the new BBS VM to the old BBS, and all is back up and running.
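For anyone who hits the same error, this is roughly the sequence – a sketch assuming the default pve volume group and data thin pool (your names may differ):

lvs -a pve                    # inspect the thin pool and its hidden _tmeta/_tdata volumes
lvconvert --repair pve/data   # rebuild the thin pool metadata
lvchange -ay pve/data         # reactivate the repaired pool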

This brought up an issue with Proxmox Backup Server – since I run it in a VM, if the host running PBS crashes, how do you restore it? I wasn’t sure whether PBS stores its backup metadata inside the VM or on the backup media. Hopefully the latter. While the deduplication is nice (the BBS file area is 11 gigabytes and rarely changes), being able to restore a VM directly from any Proxmox VE server has real appeal. I’ll have to think about what to do in the future.

I suppose I could use the Proxmox built-in backup tool to back up PBS, and PBS to back up everything else. Then I could restore PBS from backup (a two-click process) and restore everything else from PBS.
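A minimal sketch of that first step with the built-in vzdump tool, assuming the PBS guest is VM 100 and a directory storage named local (both placeholders – substitute your own):

vzdump 100 --storage local --mode snapshot --compress zstd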

I run a two-node cluster without Ceph or HA. One possibility is to add a third node to create a proper quorum and run Proxmox Backup Server on that node. If backup metadata is stored on the target media, then an occasional drive clone would suffice.
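If I go that route, joining the third node is a one-liner run on the new machine – a sketch assuming an existing cluster node is reachable at 192.168.1.10 (a placeholder address):

pvecm add 192.168.1.10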


Proxmox 9.0 released

Proxmox VE 9.0 is out, and it brings a solid set of updates. Built on Debian 13 “Trixie” with Linux kernel 6.14.8, it improves hardware support and overall performance.

Core Updates

The virtualization stack includes:

  • QEMU 10.0.2 for better VM performance
  • LXC 6.0.4 for improved container stability
  • ZFS 2.3.3 with RAID-Z expansion
  • Ceph Squid 19.2.3 for distributed storage (useful in clusters)

Storage & Networking

You can now take VM snapshots on thick-provisioned LVM storage, which is helpful if you use Fibre Channel or iSCSI. Proxmox also adds Software-Defined Networking (SDN) features like OpenFabric and OSPF routing—ideal for more complex setups.

High Availability & Monitoring

New HA affinity rules let you control where VMs run in a cluster. Plus, real-time node metrics give better visibility into system performance.

Interface Improvements

The mobile UI has been redesigned, making it easier to manage your lab from a phone or tablet.

Upgrade Notes

If you’re running Proxmox VE 8.4, the upgrade path is well-documented. You’ll want to run the pve8to9 checklist script before upgrading to catch any issues.
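For example, the full checklist can be run from a root shell with:

pve8to9 --full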

For homelabbers, Proxmox VE 9.0 offers meaningful improvements without adding complexity. It’s a worthwhile upgrade if you want better performance, more control, and cleaner management tools.

Thinkpad Homelab Upgrades

You can spend a lot of money building a homelab that competes with small office networks. Or, you can do what I did and build a network of cast-off, unwanted hardware.

A thrift-store Synology NAS, a “parts-only” ThinkPad laptop with a cracked screen and broken keyboard, and a $5 Goodwill router flashed with OpenWrt form the basis of my home network. Proxmox, a free hypervisor, has allowed me to test LXC and Docker containers, block ads on my network with Pi-hole, run a test Windows Active Directory environment, run Windows 95 as a client VM, and host my BBS on this collection of cast-offs.

I’m happy with it, and am always looking for new ways to upgrade on the cheap.

I’ve wondered if I could add another hard drive to the system, or speed up the storage. I found this post after a web search – apparently, some ThinkPads support SATA Express, an older technology meant to bridge the gap between SATA and NVMe drives. The interface is backwards compatible with SATA, but provides two PCIe lanes (instead of the four used by native NVMe).

While a compromise, it appears to be quite a bit faster in testing.

And I’ve found adapters that support two NVMe drives on one SATA Express port. Add two drives, set up a ZFS pool on a laptop – the mind boggles.
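If I ever do that, creating the pool would be a one-liner – a sketch assuming the two drives show up as /dev/nvme0n1 and /dev/nvme1n1 (hypothetical device names), a pool named tank, and a mirror layout:

zpool create -o ashift=12 tank mirror /dev/nvme0n1 /dev/nvme1n1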


Homelab Maintenance

I work out of my home office full-time. I spend a lot of time here, and so I’m used to the way things look – and sound. I was on a video call this week when something felt off. I took off my headphones and heard it.

clunk.

clunk.

One of the drives in my homelab was beginning to fail.

My Proxmox server hosts an Active Directory domain, Windows test environment, LXC containers and Docker containers. It hosts media services, ad blocking and backs up data from my family’s computers.

This “homelab” isn’t one of those half-racks full of industrial-grade servers in closets you see on YouTube. I assembled mine over the years from end-of-life, unwanted and discounted hardware. My primary server is a laptop purchased on eBay for parts, with screen burn-in and missing keys. It did, however, come with 20 GB of RAM. My firewall and NAS came from thrift shops. I’d thought about upgrading it, but it serves my needs well and cost less than a used Dell desktop.

Looking at the NAS logs, I saw one drive was logging an I/O error every 30 seconds. I deactivated the drive (it turned out to be one of the newer white-label drives) and replaced it with a spare I had laying around. Once the consistency check finished, all was good.

I would have set up a hot spare, but I needed all of the bays in my NAS.

clunk.

While the NAS drive was beginning to fail, the clunk was coming from an external USB drive used to back up the NAS. The drive was sitting vertically, as it was designed to. I turned it around so the drive lay horizontally, and the noise went away. When I was starting out in IT, we had a superstition about running spinning drives on their side, thinking it made a head crash more likely. Turns out that superstition still lives in the back of my head.

I spent the rest of the afternoon pruning backups, putting a replacement external drive on my Amazon wishlist, and re-routing cables, like you do when you run a homelab.


A great tiny homelab server – with multiple expansion options!

I’ve been looking for low-power, small-footprint homelab servers; servethehome.com’s YouTube channel has a great comparison of “tinyminimicro” servers – ultra-small form factor (USFF) desktops that make great mini servers.

I’ve run into problems with USFF servers only supporting 16 GB of memory – it’s why I paid less for a desktop form-factor server that supports 64 GB. It’s not a problem if you run multiple USFF desktops in a cluster, but I wanted one system supporting all of my workloads.

ServeTheHome reviews an HP Elite Mini 600 G9 that supports 96 GB of RAM, two NVMe drives, and 10Gb Ethernet in a low-power, quiet USFF form factor. Too much? He also creates a mid-level configuration with 2.5Gb Ethernet, 48 GB of RAM, and smaller, cheaper storage.

Upgrading to Proxmox VE 8

I’ve used Proxmox for two years in a homelab that serves as a sandbox for work projects, a testbed Active Directory network, and a host for home automation tools. It combines the familiarity of F/OSS tools like Debian Linux, QEMU, and KVM with a graphical interface that makes managing virtual servers easy – with a community-supported free tier and paid support options.

Changes in Proxmox VE 8:

  1. Updated Kernel and Linux Base: The underlying Debian base has been updated to Debian 12 (Bookworm).
  2. Container Improvements: Proxmox VE 8 introduces numerous improvements for container deployments. It uses cgroups v2, which offers more fine-grained resource management and isolation. Additionally, the LXC version has been upgraded to LXC 5.0, bringing performance optimizations and improved compatibility.
  3. Ceph and Storage: The Ceph storage cluster integration has been enhanced with new features, making it easier to deploy and manage distributed storage resources. Proxmox VE 8 includes an updated version of Ceph (Quincy) with improved performance, stability, and monitoring capabilities.
  4. Networking Enhancements: Network configuration has been simplified with the addition of a new Network Configuration panel in the web interface. It provides a centralized location to manage network interfaces, bridges, VLANs, and bonds, making it easier to configure and monitor network connectivity.
  5. Improved Backup and Restore: Proxmox VE 8 introduces significant improvements to its backup and restore functionality. The new backup mechanism is faster and more efficient, allowing for reduced backup times and optimized storage usage. The restore process has also been streamlined, simplifying the recovery of VMs and containers.
  6. Security and Authentication: Proxmox VE 8 continues to improve support for two-factor authentication (2FA), adding an extra layer of security to the management interface. It helps protect against unauthorized access and enhances the overall security posture of the Proxmox VE environment.

More information is available at the following link: https://www.proxmox.com/en/news/press-releases/proxmox-virtual-environment-8-0

Upgrading from Proxmox VE 7 to Proxmox VE 8:

I upgraded my home lab to Proxmox VE 8 in approximately an hour, with limited downtime. Most of the time was spent moving my production workloads from one server to another while the upgrade took place. I have a two-node cluster, and I wanted to keep two of my VMs running and unaffected by the upgrade, so I moved them off of Server1 to Server2, upgraded Server1, moved the VMs back to Server1, and upgraded Server2.

One wrinkle I ran into was not being able to migrate the guest VMs back to Server1 – I received a host key verification failure. Running the following command on each server resolved the issue:

/usr/bin/ssh -e none -o 'HostKeyAlias=server-b-name' root@server-b-ip-address /bin/true

Run this command on each server, substituting the host alias and IP address of the OTHER server.
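If that doesn’t clear it up, another step often suggested for stale SSH known-hosts entries in a cluster (not something I needed here) is:

pvecm updatecerts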

Upgrading was simple. You can run the upgrade from the command line or the GUI. As with any upgrade, you’ll want to back up copies of any configuration files on your system. Backing up /etc and /var wouldn’t hurt.

From the command line:

  1. Run the pve7to8 command, looking for any errors/warnings in the output. I had one service that was stopped that I restarted before running the upgrade.
  2. Run apt update and apt upgrade to bring all of your 7.x packages up to the latest versions.
  3. Run apt dist-upgrade to start the upgrade process (a combined command sketch follows this list).
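Putting those steps together, a minimal sketch of the full sequence – note that before the dist-upgrade you also switch your APT sources from “bullseye” to “bookworm” (including any Proxmox entries under /etc/apt/sources.list.d/), which the official upgrade guide covers in detail:

pve7to8 --full                          # pre-upgrade checks; clear any warnings before continuing
apt update && apt upgrade               # bring the existing 7.x install fully up to date
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list    # repeat for files under /etc/apt/sources.list.d/
apt update && apt dist-upgrade          # pull the Proxmox VE 8 packages and upgrade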

During the upgrade, you’ll be asked whether you want to automatically stop/restart processes. Since I had moved my production workloads to my other server, I selected Yes. If you’ve changed any configuration files that are being replaced, you’ll be asked to review the changes and either accept the new version or keep the old one. I opted to accept the new versions. At the end of the upgrade, you’ll want to reboot as soon as possible to use the new kernel.

From the GUI:

  1. Update Packages: First, bring the Proxmox VE 7 installation up to the latest available packages. Log in to the Proxmox web interface, select the node, open the “Updates” panel, click “Refresh” to reload the package lists, and then “Upgrade” to install any available updates.
  2. Upgrade Repository: Switch the repositories to the Proxmox VE 8 (Debian “Bookworm”) versions. Each node’s “Updates” > “Repositories” panel shows the configured repositories; the actual suite change from “bullseye” to “bookworm” is the same sources edit described in the command-line steps above.
  3. Perform Upgrade: Once the repositories point at Bookworm, go back to the “Updates” panel, click “Refresh” to retrieve the Proxmox VE 8 package lists, and then “Upgrade” to open a console that runs the upgrade.
  4. Follow the prompts in the upgrade console, then reboot into the new kernel.

By combining robust virtualization, extensive hardware support, clustering, backup, and ease of use, Proxmox has built a great virtualization platform for enterprise and home use. I’m very happy with the platform and the level of improvement I see in Proxmox.

Proxmox helper scripts

I use Proxmox as the basis for my homelab. Proxmox is an open source bare-metal hypervisor with support for a wide array of hardware (if Linux runs on it, it’ll probably run Proxmox…)

Proxmox offers many enterprise-level features, like clustering, backup, advanced networking, certificate management, application support, and support for ZFS, NFS, and CIFS shares. Proxmox supports KVM virtualization for full emulation of Windows and Linux/BSD guests, as well as LXC containers with broad distribution support.

Proxmox also offers an open-standard email gateway and backup system.

I use Nutanix and VMware vSphere at work. Configuring another hypervisor is a learning experience, helped by many tutorials on YouTube and other online resources. One resource I want to call out is a GitHub page created by user tteckster called Proxmox Helper Scripts. This page gathers in one place many of the tweaks I searched for when setting up Proxmox the first time.

From this page, you can find scripts to perform post-installation steps, maintain the kernel list, choose dark mode, and install one of dozens of LXC containers – including running Docker in a container!

I’m running an Ubuntu KVM instance in my homelab that uses Docker to host many of my services. I’m planning to move those services to dedicated LXC containers, and these scripts will make that easy to do.
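For any containers I end up building by hand rather than through the scripts, creating one with pct looks roughly like this – a sketch assuming a Debian 12 template already downloaded to the local storage and VMID 200 being free (both assumptions, as is the template filename):

# create an unprivileged container from a downloaded template (filename is an example)
pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst --hostname docker-lxc --cores 2 --memory 2048 --net0 name=eth0,bridge=vmbr0,ip=dhcp --rootfs local-lvm:8 --unprivileged 1
pct start 200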