Proxmox Datacenter Manager: Centralized Management for Proxmox VE

Proxmox Server Solutions announced the first Alpha release of Proxmox Datacenter Manager, open-source server management software designed to provide a unified overview of all nodes and clusters in Proxmox VE environments. This new tool aims to simplify the management of virtualized environments by offering a modern user interface and centralized control.

Key Features of Proxmox Datacenter Manager

  1. Centralized Overview: The Datacenter Manager offers a centralized view of all individual nodes and clusters, making it easier to monitor and manage resources.
  2. Basic Management: Users can perform basic operations such as shutdown, reboot, start, and remote migration of virtual guests between different data centers.
  3. Modern User Interface: The tool features a redesigned front end, optimized for accessibility, speed, and compatibility.
  4. Resource Management: It allows for better organization of resources, including hierarchical groups or resource pools, and simplifies adding remotes.
  5. Integration with Proxmox VE: For more complex configurations, the tool links directly to the full web interface of Proxmox VE.
  6. Future Enhancements: The roadmap includes plans for improved health state overview, support for multiple VRFs across clusters, and off-site replication copies for manual recovery.

The Alpha version of Proxmox Datacenter Manager is available for testing and collaboration. Installation is similar to that of Proxmox VE, with a straightforward process that includes selecting the target disk, configuring network settings, and setting up user credentials.

Proxmox Datacenter Manager is a promising tool for administrators managing multiple standalone nodes or clusters. While still in the Alpha stage, it provides valuable features that streamline administrative tasks and improve resource management. The Proxmox community is encouraged to test and provide feedback to help shape the future of this project.

More information and a download link are available on their wiki at https://pve.proxmox.com/wiki/Proxmox_Datacenter_Manager_Roadmap

Homelab Maintenance

I work out of my home office full-time. I spend a lot of time here, and so I’m used to the way things look – and sound. I was on a video call this week when something felt off. I took off my headphones and heard it.

clunk.

clunk.

One of the drives in my homelab was beginning to fail.

My Proxmox server hosts an Active Directory domain, a Windows test environment, LXC containers and Docker containers. It hosts media services and ad blocking, and backs up data from my family’s computers.

This “homelab” isn’t one of those half-racks full of industrial-grade servers in closets you see on YouTube. I assembled mine over the years from end-of-life, unwanted and discounted hardware. My primary server is a laptop purchased on eBay for parts, with screen burn-in and missing keys. It did, however, come with 20 GB of RAM. My firewall and NAS came from thrift shops. I’ve thought about upgrading the setup, but it serves my needs well and cost less than a used Dell desktop.

Looking at the NAS logs, I saw one drive was logging an I/O error every 30 seconds; it looked like it was failing. I deactivated the drive (it turned out to be one of the newer white-label drives) and replaced it with a spare I had laying around. I would have set up a hot spare, but I needed all of the bays in my NAS. Once the consistency check finished, all was good.
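
For the record, the triage from a shell looked something like the sketch below; smartmontools provides smartctl, and /dev/sdb here is a placeholder for the suspect drive:

sudo dmesg --level=err | grep -i sdb       # recurring I/O errors in the kernel log
sudo smartctl -H /dev/sdb                  # overall SMART health verdict
sudo smartctl -a /dev/sdb | grep -iE 'reallocated|pending|uncorrectable'   # key error counters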

clunk.

While the NAS drive was indeed failing, the clunk was coming from an external USB drive used to back up the NAS. The drive was sitting vertically, as it was designed to. I turned it so the drive lay horizontally, and the noise went away. When I was starting out in IT, we had a superstition that running spinning drives sideways made a head crash more likely. Turns out that superstition still lives in the back of my head.

I spent the rest of the afternoon pruning backups, putting a replacement external drive on my Amazon wishlist, and re-routing cables, like you do when you run a homelab.


Ceph and Proxmox VE

I’ve wanted to add high availability to my Proxmox cluster, but I’ve got some work to do first.

Ceph is a distributed storage system that can be used as a storage backend in Proxmox VE. It provides highly available, fault-tolerant storage by distributing data across multiple storage nodes in a cluster.

In Proxmox VE, Ceph can be used as a storage backend for virtual machine disks and containers. This allows for the creation of highly available virtualized environments that can easily scale up or down as needed.
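
For a flavor of the setup, here’s a rough sketch of the CLI steps on each Proxmox VE node; the subnet, device, and pool names are placeholders, and the web UI can do the same:

pveceph install                            # install the Ceph packages
pveceph init --network 10.10.10.0/24       # one-time cluster config on a dedicated network
pveceph mon create                         # create a monitor; repeat across the nodes
pveceph osd create /dev/sdb                # turn a blank disk into an OSD on each node
pveceph pool create vmpool --add_storages  # create a pool and register it as VM/CT storage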

With Ceph providing shared storage, VMs can automatically migrate to another node on a server failure, providing high availability.
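
As a rough sketch of the HA side, assuming shared storage is already in place (VM ID 100 is just an example):

ha-manager add vm:100 --state started   # have the HA stack keep VM 100 running
ha-manager status                       # show what the HA manager is doing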

It looks like Ceph wants a minimum of three nodes, each contributing a storage device, to form a healthy cluster. I’m considering upgrading to 3 USFF systems (see this link for lots of information about USFF desktops as servers). Three i5 desktops with 16 GB of RAM and NVMe drives could make a nice, inexpensive cluster.

Why not use a NUC? Because the second-hand market is full of off-lease Dell, HP and Lenovo USFF desktops.

Proxmox VE 7.4 released

Proxmox is an open-source bare-metal virtualization system I use in my homelab. Proxmox supports clustering, high availability and backup using industry-standard tools, running on relatively unmodified Debian Linux with QEMU and KVM. It supports any hardware supported by Debian, which makes use in a lab environment practical – after running VMware’s vSphere and Nutanix CE and dealing with stringent hardware compatibility lists, I can appreciate a hypervisor that I can throw at any hardware I have in my collection.

Proxmox VE version 7.4 has been released, and as minor releases go, the upgrade from 7.3 to 7.4 went flawlessly, requiring only a reboot, when convenient, to load the new kernel. There are the usual upgrades to the kernel (now at 5.15), QEMU, KVM, and Ceph.
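
For reference, the minor-version bump is just the standard apt flow on each node, run as root with the appropriate Proxmox repository configured:

apt update                # refresh package lists
apt dist-upgrade          # pull in the 7.4 packages
pveversion                # should now report pve-manager/7.4.x
# reboot when convenient to load the new kernel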

Proxmox VE’s UI now lets you sort guest resources by name, which makes organizing VMs much cleaner – even in a small homelab like mine, with a handful of Linux containers, a Docker host, and a small AD test environment.

There’s also a dark mode switch in the UI now, much handier than applying a mode setting that gets reset every time you reboot.

The open RISC-V architectures riscv32 and riscv64 can now be used for LXC containers. I’m interested in trying these out to expand my homelab to architectures other than i386 and x86_64.
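
I haven’t tried this yet, but creating one should look like an ordinary pct create with the arch option set; the template name below is a placeholder, and an x86 host needs QEMU user-mode (binfmt) emulation to execute RISC-V binaries:

pct create 200 local:vztmpl/riscv64-rootfs.tar.xz \
    --arch riscv64 --hostname riscv-test --memory 512 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 200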

If you’re thinking of installing (or upgrading) Proxmox, I’d recommend taking a look at my earlier posts, Proxmox First Steps and Proxmox Helper Scripts, for helpful tips on setup and on streamlining ongoing maintenance of your Proxmox system.

Proxmox First Steps

TechnoTim has a great homelab how-to channel on YouTube. This video shows all the steps he’d do when creating a Proxmox server for the first time. Setting update sources, reconfiguring storage, setting up networking and VLANs, updating ISOs, preparing for clustering, and more – all the things I wish I knew after my Proxmox server install was complete and before putting the system into production.

Proxmox helper scripts

I use Proxmox as the basis for my homelab. Proxmox is an open source bare-metal hypervisor with support for a wide array of hardware (if Linux runs on it, it’ll probably run Proxmox…)

Proxmox offers many enterprise-level features, like clustering, backup, advanced networking, certificate management, application support, and support for ZFS, NFS and CIFS shares. Proxmox supports KVM virtualization for full emulation of Windows and Linux/BSD hosts, as well as LXC containers with broad distribution support.
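
As a flavor of the two guest types, both are driven by simple CLI tools; the IDs, names, and template path below are placeholders:

qm create 101 --name test-vm --memory 4096 --cores 2 \
    --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:32     # a KVM virtual machine with a 32 GB disk
pct create 102 local:vztmpl/debian-template.tar.zst \
    --hostname web --memory 1024                        # an LXC container built from a template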

Proxmox also offers an open-source email gateway and backup system.

I use Nutanix and VMware vSphere at work. Configuring another hypervisor is a learning experience, helped by many tutorials on YouTube and other online resources. One resource I want to call out is a GitHub page created by user tteckster called Proxmox Helper Scripts. This page gathers in one place many of the tweaks I searched for when setting up Proxmox the first time.

From this page, you can find scripts to perform post-installation steps, maintain the kernel list, choose dark mode, and install one of dozens of LXC containers – including running Docker in a container!
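
The scripts are typically pasted into the Proxmox node’s shell as a one-liner; copy the exact command from the page itself, but the general shape is this (the URL below is illustrative only, not a real script location):

bash -c "$(wget -qLO - https://example.com/helper-script.sh)"   # illustrative URL only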

I’m running an Ubuntu KVM instance in my homelab that runs Docker to host many of my services. I’m planning to move those services to dedicated LXC containers, and these scripts will make that easy to do.

Kasm – Docker containers in your browser!

I just discovered Kasm, an amazing little tool that lets you run Docker containers on a remote server in a client browser. From their web site:

Streaming containerized apps and desktops to end-users. The Workspaces platform provides enterprise-class orchestration, data loss prevention, and web streaming technology to enable the delivery of containerized workloads to your browser.


Want to load a suspicious web site in a sandboxed browser? Fire up a Chromium, Edge or Brave window in your desktop browser. Need a desktop environment? CentOS and Ubuntu come loaded out of the box. Want to run GIMP, Teams, Zoom or other apps in an isolated container? Done.

Kasm Workspaces is available in a community edition with limitations (perfectly usable in a home lab environment) or in professional and enterprise editions that include support and additional features, at $5 per user per month and $10 per user per month, respectively.

Installation was simple: SSH into my Docker app host, a virtual machine in my home lab running on a Proxmox VE host.

I needed to create a swap file first, since I didn’t have one in place; then I downloaded and ran the Kasm installer.

# Create and enable a 1 GiB swap file
sudo dd if=/dev/zero bs=1M count=1024 of=/mnt/1GiB.swap
sudo chmod 600 /mnt/1GiB.swap
sudo mkswap /mnt/1GiB.swap
sudo swapon /mnt/1GiB.swap
cat /proc/swaps               # confirm the swap file is active
echo '/mnt/1GiB.swap swap swap defaults 0 0' | sudo tee -a /etc/fstab   # persist across reboots

# Download and run the Kasm installer on the chosen port
cd /tmp
wget https://kasm-static-content.s3.amazonaws.com/kasm_release_1.10.0.238225.tar.gz
tar -xf kasm_release*.tar.gz
sudo bash kasm_release/install.sh -L <Port Number>

You’ll want to capture the list of usernames and passwords the installer prints, then point a browser at:
https://<your ip address>:<your port>

This loads the admin page by default. Go to the “Workspaces” tab to see the available environments. More are available from their GitHub page.


I’m looking forward to playing with Kasm and using it to create sandbox environments in a browser without a lot of effort.