Proxmox 9 Upgrade - HowTo And What To Expect
In the video below, we show how to upgrade Proxmox VE 8 to 9
Prerequisites
- Basic Linux knowledge
- Virtualization fundamentals
What You'll Learn
- Configure and manage Proxmox VE
- Understand Proxmox VE architecture and features
Proxmox VE Guide
Feb 16, 2026 · 28 mins read
Proxmox VE version 9 has been available for official use for some time now
And it was actually released just days before the official release of Debian version 13, on which Proxmox VE is based
Now ideally, a fresh installation of an OS like this is preferred
But typically that’s only done when hardware is being retired
So instead, an existing deployment would be upgraded
Now, a major upgrade like this has to be done from the command line
But if you’re not careful you could run into major problems
Useful links:
https://pve.proxmox.com/wiki/Upgrade_from_8_to_9
https://pve.proxmox.com/wiki/Roadmap#9.0-known-issues
https://pve.proxmox.com/wiki/Kernel_Samepage_Merging_(KSM)
https://pve1.home.lcl:8006/pve-docs/chapter-qm.html#qm_memory
Update Proxmox VE:
The first thing we need to do is to check the documentation, especially to see if we might run into issues
https://pve.proxmox.com/wiki/Upgrade_from_8_to_9
https://pve.proxmox.com/wiki/Roadmap#9.0-known-issues
If you don’t do this, it can result in things no longer working after the upgrade
In some cases you might have to hold back on the upgrade
But if not then we need to bring the existing deployment up to date
As far as I’m aware, we don’t have a maintenance mode in Proxmox VE like in, say, ESXi
So, if you have a standalone server then shutdown any virtual machines that are running
If you have a cluster you should disable High Availability rules, otherwise virtual machines might be migrated during the process; Personally I set the State to ignored
Bear in mind, version 9 will be using Affinity Rules going forward
If you decide to shut down non-essential virtual machines, make sure they are not set to start at boot
If you’re using replication you’ll want to disable those rules as well to avoid potential problems or delays
And prior to upgrading a server, you should migrate essential virtual machines to another server
NOTE: I did see some warnings about a missing config file while migrating some VMs recently. It didn’t cause any issues so it may be a cosmetic bug
Now with some clusters, you might be able to have a small window of say a day or two where servers in the cluster are running different major versions
And this is commonly done for safety reasons, in case something goes wrong after a while and the entire cluster stops working
Personally, I’d prefer to upgrade all of the servers during one maintenance window
While recently upgrading a live cluster, for instance, I sometimes had to keep refreshing the browser on the newer server
In addition, after migrating a VM to the newer server, the older server didn’t show a green play icon against that VM for some time
Things just felt out of sync, possibly due to the version mismatch
And this is why lab testing is so important
As with every major upgrade, I’d be inclined to let stakeholders know that applications may be offline during the maintenance window
Even though I experienced no real issues myself with live migrations during the upgrade, existing connections may be lost whilst the upgrade takes place and VMs are being moved around
In addition, as you’ll see later, I did have to take services offline after the upgrade
Now we want to make sure that Debian and Proxmox VE are fully up to date before we carry out a major upgrade
Navigate to the server you’ll upgrade, then to Updates and click Refresh
If any updates are found, click Upgrade
This opens a shell and you’ll be prompted to agree with the upgrades
You can also do this from the shell, or an SSH session
apt update
apt dist-upgrade -y
Once this has been completed, we’ll want to reboot the server
In the GUI click Reboot, or from the command line run the following command
reboot now
Once the server is back up, connect to it using SSH or the console and run the following command
pve8to9 --full
This allows you to check for known potential problems before you go ahead with the upgrade
If you get a warning of a lack of disk space, I found it useful to clean out unused packages and the cache
apt autoremove
apt autoclean
Now, the results of the check will likely vary and you’ll want to follow whatever advice you’re given
Intel Microcode:
One of the warnings I was given told me to install the intel-microcode package
Basically, CPUs can have security vulnerabilities and although this is best resolved in firmware, it may either be a while before a BIOS update is released or the motherboard is no longer supported
In which case, adding this package can help deal with these security problems by having the OS take care of them
If you see this warning, then update the sources.list file and add non-free-firmware to the three Debian lines
For example,
nano /etc/apt/sources.list
deb http://ftp.uk.debian.org/debian bookworm main contrib non-free-firmware
deb http://ftp.uk.debian.org/debian bookworm-updates main contrib non-free-firmware

# security updates
deb http://security.debian.org bookworm-security main contrib non-free-firmware

Now save and exit
We’ll then install the package
apt update
apt install intel-microcode
With this installed, the computer should be better secured against security risks both now and going forward, as long as Intel keeps releasing updates that is
Bootloader for BIOS computers:
The computers I run typically use a BIOS and so I also get this warning
WARN: systemd-boot package installed on legacy-boot system is not necessary, consider removing it
So if you get this warning it relates to a potential conflict you could run into regarding the bootloader whilst upgrading the computer
And unless you’ve manually installed systemd-boot for your system it should be safe to remove this package and avoid the problem
First we’ll double check what the computer is using
[ -d /sys/firmware/efi ] && echo "UEFI" || echo "Legacy"
If this returns an answer of Legacy, we’ve confirmed the computer is using a BIOS
But we’ll check that the grub-pc package is installed and managing the boot process
dpkg -l | grep grub-pc
If we do get a response then we’ll remove the systemd-boot package
apt purge systemd-boot
Then update GRUB
update-grub

Bootloader for UEFI computers:
Now if the computer is using UEFI, you’ll see a more strongly worded warning telling you the upgrade will fail
FAIL: systemd-boot meta-package installed. This will cause problems on upgrades of other boot-related packages. Remove ‘systemd-boot’ See https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#sd-boot-warning for more information.
Now I actually tested this out in a video for Lab Members of the channel, and the upgrade didn’t run into any issues even without removing it
HOWEVER, the safer option is to heed the advice and remove this meta-package to make sure your upgrade goes smoothly
First we’ll double check we’re using UEFI
[ -d /sys/firmware/efi ] && echo "UEFI" || echo "Legacy"
If this returns an answer of UEFI, we’ll check that the grub-efi packages are installed and managing the boot process
dpkg -l | grep grub-efi
If that’s confirmed, we’ll remove the systemd-boot package
apt purge systemd-boot
Then update GRUB
update-grub
For one additional check, we’ll make sure the EFI partition is still mounted
mount | grep /boot/efi
If it doesn’t return anything, you can have fstab try to re-mount all mount entries using
mount -a
Then check again
mount | grep /boot/efi
If this still doesn’t return anything, check the /etc/fstab file before you consider rebooting the computer, as UEFI relies on the EFI partition to boot the OS stored on the hard drive
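For reference, the EFI system partition entry in /etc/fstab typically looks something like the sketch below; the UUID shown is only a placeholder, yours will differ (you can find the real one with blkid)

```
# /etc/fstab - EFI system partition (UUID is illustrative)
UUID=1234-ABCD  /boot/efi  vfat  defaults  0  1
```

If the entry is missing or the UUID no longer matches the partition, that would explain why mount -a didn’t bring it back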
Removable Bootloader:
On one computer that uses UEFI I got the following warning
WARN: Removable bootloader found at ‘/boot/efi/EFI/BOOT/BOOTX64.efi’, but GRUB packages not set up to update it!
Run the following command:
echo 'grub-efi-amd64 grub2/force_efi_extra_removable boolean true' | debconf-set-selections -v -u
Then reinstall GRUB with 'apt install --reinstall grub-efi-amd64'
This indicates that the removable path isn’t being kept up to date and a configuration update is needed to resolve this
So as advised, first we’ll update the setting
echo 'grub-efi-amd64 grub2/force_efi_extra_removable boolean true' | debconf-set-selections -v -u
Then we’ll re-install GRUB
apt install --reinstall grub-efi-amd64

Final Precheck:
Assuming you have addressed all the warnings, we’ll now run one more upgrade check
pve8to9 --full
Assuming no further issues are reported, we should be good to upgrade at this point
However, if you had to make any changes, especially involving the bootloader it would be good to reboot the computer first
reboot now
Should anything go wrong at this stage, we’ll know it relates to the changes we’ve just made
Whereas if a problem is introduced now but we only run into it after the upgrade, it might be more difficult to pinpoint the cause
Deb822 Style:
Now as I’ve mentioned in another video about upgrading to Debian 13, we’re being guided towards using the deb822 style
It’s been around for a while, but now they seem more keen for us to use this going forward and Proxmox’s upgrade notes cover it as well
Now although this isn’t mandatory, what I’ve noticed and shown Lab Members of this channel, is even if you do nothing with the source lists, a new Enterprise subscription file will be created as part of the upgrade process
And if you don’t use Enterprise subscriptions, you’ll be left with an older style list which is disabled and still referencing Bookworm
Now if you don’t use Ceph either, and do nothing, you’ll have an older disabled list for that as well which still references Bookworm
It’s not a good idea to mix different code versions, and in that situation it would be easy to do that, even if by accident, at a later date
So migrating to this deb822 style does seem to be the better option and that’s what we’ll do
Now Proxmox does show us how to do this migration, but there is one thing I’d like to point out; Most of the repository servers being referenced do not support HTTPS
Now I deliberately made sure my other Debian computers are using HTTPS, but for Proxmox VE I’m going to leave them using HTTP
Granted these are Debian servers, and it would be tempting to try changing to different servers that support HTTPS, but it’s entirely feasible that a future update from Proxmox would just roll any changes back to the default settings
So for me, it is what it is
Now there are different ways of going about this but I’ve come up with a mixed strategy given what I’ve experienced in my testing
Interface Name Changes:
One of the biggest gotchas you might overlook in the official notes is the potential for network interface names to change
Proxmox VE 9 brings a newer kernel and when Linux re-scans your hardware it might decide to change the name of your network interface(s)
Something along the lines of enp3s0 becoming enp3s0p0
If that should happen, you’ll no longer have direct access to your server using SSH or HTTPS because it won’t have an IP address
And that’s because the configuration is still looking for the old name
Now there is mention in the notes that you can use network interface pinning, prior to the upgrade, so that the name remains consistent
At first, that sounded like a great idea, but then I thought this is too niche and stores up problems for later down the road
If the name is tied to the MAC address, imagine the situation when that card needs replacing at 2 AM
What are the chances that the person looking into this will know anything about network interface pinning?
They may not even have access to the computer to make any changes!
But the expectation is, if you swap out a broken network adapter with an identical one, it should just work…right?
So from my own perspective, I’d rather fix the issue, IF it breaks, during the upgrade itself
And this is why out-of-band management is so important
There are many ways you can do this, and it doesn’t even cost much
So before you start the upgrade, make sure remote access is working, and make sure that if a server does go offline, it won’t impact this remote access
That way, if a network interface name does change, you’ll be able to get console access, update the config and get the server back online quickly
Upgrade Proxmox VE:
Now another reason you should have out-of-band management is because it’s also quite possible the OS will halt after it’s rebooted
We’re not just updating a few packages, we’re migrating everything, including the kernel to newer versions
Backup any VMs or containers before proceeding, even if you are running a cluster
Like any other computer, backing up Proxmox VE itself can be a bit tricky without additional software installed
Should things go wrong, the preference seems to be to install a new OS and just rebuild it
But one thing you can still do is to backup the crucial folders
tar czf /root/pve-host-backup-$(date +%F).tar.gz /etc/pve /etc/network/interfaces /etc/hosts /etc/fstab
And also this database file
cp /var/lib/pve-cluster/config.db /root/pve-config-db-backup-$(date +%F).db
Then copy these files to an external drive
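Before copying the archive off-host, it’s worth listing its contents to confirm the expected paths made it in. Here’s a self-contained sketch of that idea using /tmp stand-ins for /etc/pve, so it can be tried safely anywhere; on a real server you’d run tar tzf against the backup file created above

```shell
# Stand-in directory structure so the example is safe to run anywhere;
# on a real host the archive would contain /etc/pve and friends
mkdir -p /tmp/pve-demo/etc/pve
echo "dir: local" > /tmp/pve-demo/etc/pve/storage.cfg

# Create a dated archive, then list its contents to verify it
tar czf "/tmp/pve-demo/backup-$(date +%F).tar.gz" -C /tmp/pve-demo etc/pve
tar tzf "/tmp/pve-demo/backup-$(date +%F).tar.gz"
```

A quick listing like this is cheap insurance against discovering an empty or partial archive only when you need to restore from it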
Now if you use Ceph, you need to be careful before upgrading Proxmox VE
The documentation says that you first need to make sure your computers are running Ceph 19.2 Squid before you upgrade to Proxmox VE 9 and you can check the version in the Ceph panel
I don’t use it, so I can’t provide an example for that. But do check the notes as to what you need to do for Ceph as a prerequisite https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#Prerequisites
Now to upgrade Proxmox VE and Debian, we need to update the repository information
Debian 12 is known as Bookworm whilst Debian 13 was given the name Trixie
But first we’ll make a backup of the existing sources.list file
cp /etc/apt/sources.list /etc/apt/sources.list.bak
Whilst we could edit the file with a text editor, instead we’ll use the stream editor command
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
This basically replaces all instances of the word bookworm in /etc/apt/sources.list with trixie
The -i parameter is to overwrite the contents of the file, otherwise we would just have the result output to the screen
The s/bookworm/trixie/g part is an instruction to match the regular-expression bookworm and replace it with trixie, and to have this done for all instances
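If you’d like to see the effect before committing, you can preview the substitution by omitting -i, in which case sed prints the result instead of modifying the file. A small self-contained sketch, using a sample file with illustrative content rather than your real sources.list

```shell
# Create a sample line similar to a Debian repository entry
printf 'deb http://ftp.uk.debian.org/debian bookworm main contrib\n' > /tmp/sources.sample

# Without -i, sed writes the substituted text to stdout and leaves the file alone
sed 's/bookworm/trixie/g' /tmp/sources.sample
# prints: deb http://ftp.uk.debian.org/debian trixie main contrib
```

Once the preview looks right, re-running the command with -i against the real file makes the change in place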
Now that takes care of the Debian repositories and even Proxmox’s non-subscription updates if you use them
For other 3rd party repositories we’ll take a slightly different approach
Third party repositories are found in the /etc/apt/sources.list.d folder and that should be checked for other files you need to update
ls -l /etc/apt/sources.list.d
But while we’re here we’ll also migrate them to the newer deb822 style
Proxmox’s Enterprise subscription for instance will be a file called pve-enterprise.list
Even if you don’t use this, you should still migrate it, although as you’ll see later, the upgrade will still want to create its own version
First we’ll take a backup copy of the existing file
mv /etc/apt/sources.list.d/pve-enterprise.list /etc/apt/sources.list.d/pve-enterprise.list.bak
Then we’ll create a new version of this, but the contents depend on your setup
If you have an Enterprise subscription you’ll want to do this
cat > /etc/apt/sources.list.d/pve-enterprise.sources << EOF
Types: deb
URIs: https://enterprise.proxmox.com/debian/pve
Suites: trixie
Components: pve-enterprise
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOF
If not, we’ll migrate it, but leave it disabled
cat > /etc/apt/sources.list.d/pve-enterprise.sources << EOF
Types: deb
URIs: https://enterprise.proxmox.com/debian/pve
Suites: trixie
Components: pve-enterprise
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
Enabled: false
EOF
Now we’ll do the same for Ceph, by first backing up its list file
mv /etc/apt/sources.list.d/ceph.list /etc/apt/sources.list.d/ceph.list.bak
Now if you have a subscription for Ceph, you’ll want to create a new source file for it like this
cat > /etc/apt/sources.list.d/ceph.sources << EOF
Types: deb
URIs: https://enterprise.proxmox.com/debian/ceph-squid
Suites: trixie
Components: enterprise
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOF
If you don’t use it then we’ll migrate it, but leave it disabled
cat > /etc/apt/sources.list.d/ceph.sources << EOF
Types: deb
URIs: https://enterprise.proxmox.com/debian/ceph-squid
Suites: trixie
Components: enterprise
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
Enabled: false
EOF
Having said that, Proxmox provides access to Ceph through its own repositories and there is a non-subscription option available for that as well
cat > /etc/apt/sources.list.d/ceph.sources << EOF
Types: deb
URIs: http://download.proxmox.com/debian/ceph-squid
Suites: trixie
Components: no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOF
If you’ve installed other 3rd party repositories you’ll want to do something similar for them, but those two lists are usually on all servers as part of the default installation
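After writing the new files, a quick sanity check of the key fields can catch typos before apt ever reads them. A self-contained sketch, writing a deb822 stanza to a /tmp path rather than the real /etc/apt/sources.list.d location

```shell
# Write a sample deb822 stanza to a temp path (real files live in
# /etc/apt/sources.list.d); the quoted EOF prevents any shell expansion
cat > /tmp/ceph.sources << 'EOF'
Types: deb
URIs: http://download.proxmox.com/debian/ceph-squid
Suites: trixie
Components: no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOF

# Each stanza should name Trixie and the component you expect
grep -E '^(Suites|Components):' /tmp/ceph.sources
```

Anything still referencing Bookworm, or a component you don’t intend to use, should stand out immediately in the grep output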
We’ll now update the repository cache again, but this time it will be for Trixie packages
apt update
NOTE: If you get a 401 error, then as the documentation suggests, you might need to refresh your subscription(s) either through the GUI or by running this command
pvesubscription update --force
Next we’ll check the new policies with this command
apt policy
Look through the lines to make sure nothing seems out of place, such as references to Bookworm, or to updates you don’t want
Now we’ll upgrade to Debian 13 and Proxmox VE 9
apt dist-upgrade
At some point the upgrade will pause with information about the updates
You can read through this or press q to quit and continue the upgrade
After that, questions may arise about changes being made and your results may vary
For instance, for me it halted to ask about the keyboard configuration
I didn’t need to change that so I left it as is
It then asked about what to do with /etc/issue, and Proxmox suggests sticking with the default answer of N, as we’re told Proxmox autogenerates this file on boot
Now it depends on your circumstances but when I do a major upgrade, the server will be out of commission i.e. it isn’t running any VMs or containers
So when asked if I want to restart services without being prompted I opt for Yes
For /etc/lvm/lvm.conf we’re told it would be best to opt for the maintainer’s version, so change the answer to Y
Now because I created a new source file for the PVE Enterprise subscription earlier, and it’s disabled, I also get prompted to ask if I want to change /etc/apt/sources.list.d/pve-enterprise.sources
For that I’ll opt to choose the default answer of N as I want to keep the file I’d already created
Now how you deal with other questions like these really depends
For instance, I change the settings for Chrony, and when I was asked about replacing Chrony’s config file, I opted to replace it with the maintainer’s version
That’s because I’d rather have the latest version of a config file, because settings can change, features can be added or even deprecated, so I think it’s best to keep files like this up to date
Then after the upgrade completes, I’ll update that config file and change the NTP server being used
Those are the main queries I got for a basic installation of PVE, but if you get others I suggest checking the documentation
You can also use the D option when asked to compare the differences, then press q to quit after you’ve read the feedback and make your decision
But if you’re in any doubt it’s best to keep the existing file, hence why it’s the default option
In any case, once the upgrade is done, reboot the server from the GUI or the terminal for this major upgrade to take effect
reboot now

Post Update:
Now if a network interface name does change, you’ll need to access the server through the console
You can check what interfaces the server has by running this command
ip -br a
You can then update the configuration file, for example
nano /etc/network/interfaces
Depending on the situation, you might have to update the interface and/or bridge settings
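As an illustration, if enp3s0 were renamed to enp3s0p0 by the new kernel, the bridge configuration would need its bridge-ports entry updating to match the new name; the interface names and addresses below are examples, not taken from a real server

```
auto lo
iface lo inet loopback

# The physical NIC, under its new name
iface enp3s0p0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp3s0p0
        bridge-stp off
        bridge-fd 0
```

After saving the file, running ifreload -a (from ifupdown2, which Proxmox VE uses) or rebooting should bring the IP address back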
But assuming the server has come back up it should now be showing version 9
A refresh of the browser may be needed, but you can also look for more information by running this command in the CLI
pveversion
It’s also suggested to run the checks once again, so we’ll do that
pve8to9 --full
If anything is flagged, you should take action to resolve it
One thing we haven’t done is to migrate the Debian and Proxmox non-subscription repositories to the deb822 style
I deliberately left that until now because the computer will be running the newer APT 3 which makes this a bit easier, so we’ll run this command
apt modernize-sources
I suggest answering the prompt with n then hitting return, as it will simulate the process and you can check the expected results
As long as you’re comfortable with the output, then run the command again
apt modernize-sources
But this time just hit return, as the default option is to proceed
What this command will do is rename the existing list file, then it creates a new source file for Debian repositories and another one for the Proxmox non-subscription repository, if you’ve enabled these updates that is
So it’s a bit easier than manually creating the files yourself
Bear in mind, it will also migrate list files in the /etc/apt/sources.list.d folder as well, but I’ve noticed it won’t create source files if a repository is commented out
So if you’ve disabled the Ceph repository for instance, the list file will be renamed but an equivalent source file won’t be created
In any case, I found the command doesn’t work well with the Enterprise or Ceph lists because it doesn’t know what keyring to use, leaving you with incomplete source files
Hence, why I suggested manually creating those source files earlier
You could have left that step till now, but the upgrade will create a new source list for Enterprise subscriptions anyway and it will be enabled whether you like it or not
You’d also have to delete or rename the existing list files, as they’re no longer relevant and still reference Bookworm, and you’d have to disable the Enterprise subscription if you don’t use it
To make sure there are no issues though, we’ll update the repository cache again
apt update
A quick tidy-up would now be helpful to free up disk space
apt autoremove
apt autoclean
As a final check, navigate to Updates | Repositories and check the repository details are correct; these should all reference Trixie, with only the necessary ones enabled
Cluster Upgrade:
If you’re running a cluster you should now repeat the same process with other servers, upgrading them one at a time
Now you’ll likely want to migrate live VMs prior to upgrading another server
During my testing with Lab Members, when I tried that from the new server it raised a “conntrack_state” warning, which basically means existing sessions, especially those using TCP, will likely be dropped
And that could mean you have to log back into an application for instance being run on a VM
Now I didn’t get that warning when I was logged into the older server and telling it to migrate from there
Live migrations still worked though, no matter which server I was sending instructions from, and very few packets were lost
Now when I upgraded a live cluster later on I didn’t notice this warning, and I even missed it again when making my video. Which goes to show how subtle it is
Now, if you’re using a QDevice you’ll want to upgrade that to Trixie as well so corosync is brought up to date
Once everything is up to date you’ll want to migrate VMs back to their normal server, start any VMs that were shutdown and enable start on boot for VMs
Replication:
If you’re using ZFS replication then you’ll need to re-enable your rules
Although you can do this at the Datacenter level I prefer doing this at the server level because you’ll get feedback
Shortly after enabling a replication rule it should then start and you’ll quickly learn if there is an issue
I didn’t run into any problems on my live server, but when I went over this with Lab Members, problems did crop up
If that happens, it would be easier to delete the rule and create a new one
HA Rules:
Version 9 of Proxmox VE brings a change in High Availability, we now have Affinity Rules, which I’ll cover in more detail in a future video
But fortunately, any existing HA groups are automatically migrated as part of the upgrade
Now if you navigate to HA | Affinity Rules things can get a bit confusing as you’ll see Node Affinity Rules which are enabled
However, HA still needs to be re-enabled
NOTE: If you connect to a server that’s still running version 8, it will show HA groups, which can be confusing. All the more reason I think to upgrade all nodes in the cluster as soon as possible
So navigate to HA and you’ll see a series of Resources and it’s these we need to update
Edit each one in turn and change the State to started to re-enable HA
While we’re here though, one thing to note is that instead of having a setting of nofailback we now have an option of failback, which to me is much easier to understand
The idea is, do you want HA to always migrate a VM back to the preferred node when it’s back online and personally I’d rather this didn’t happen because if the server isn’t stable it would cause all sorts of problems
MTU Setting:
The next thing to mention, is bear in mind the MTU setting for vNICs has changed
If you use Jumbo Frames, you need to set the MTU on the physical NIC and on the bridge to 9000 for instance
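As a sketch, a Jumbo Frames setup in /etc/network/interfaces might look like this, with the MTU set on both the physical NIC and the bridge; the interface names and addresses are illustrative

```
iface enp3s0 inet manual
        mtu 9000

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0
        mtu 9000
```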
In Proxmox VE version 8 and earlier, if you leave the MTU on a vNIC blank, it will default to 1500
And if you wanted a VM to inherit the MTU from the bridge, you would set the vNIC MTU to 1
In version 9 though, if you leave this blank, it will now inherit the MTU from the bridge
So for any new VM you create, that’s actually better
Now, I’ve found that for existing VMs, a value of 1 still works, but I’d be inclined to update these VMs otherwise future Administrators might get confused over the difference
It does still show the message about this special value, but it’s always better to be consistent
Admin Beware!:
Now in previous videos I’ve mentioned how Ballooning is a bad idea, at least from my perspective, particularly for production environments
Well, now we have Kernel Samepage Merging (KSM) and that’s even worse, particularly from a security perspective and yet it too is enabled by default
Now that might be great if you’re using Proxmox VE in a lab as it can make better use of RAM
But if you’ve ever chased security compliance, you’ll know physical separation is what you aim for in the most secure environment, with virtual separation being the next best thing
Basically, KSM aims to save memory usage through deduplication of identical memory pages across processes
This means a VM running Debian 13 for instance, in a less secure segment, could effectively be using the same host RAM as another Debian 13 VM in a more secure segment
In other words, that clear virtual boundary we had before where a firewall blocked access between the two virtual network segments is now muddy; Theoretically, this opens up the potential for side channel attacks
https://download.vusec.net/papers/dedup-est-machina_sp16.pdf
And of most concern to me, is after you’ve upgraded to PVE 9, you’ll find all of your hosts will now have this feature enabled
Oh dear!
Now aside from the security concern, you can also expect to see more CPU usage, so if this feature doesn’t appeal to you, according to the documentation, it can be disabled at the host level by running this command
systemctl disable --now ksmtuned
Then you can instruct it to unmerge any currently merged pages
echo 2 > /sys/kernel/mm/ksm/run
And this will have to be repeated on all hosts
You can also do this on a per VM basis by editing its memory settings in the GUI or in the CLI with this command
qm set <vmid> --allow-ksm 0
And I’d prefer to do that, just in case this service ever gets reinstated
Just bear in mind, this type of hardware change for a VM requires it to be shutdown and then started back up
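If you have a lot of VMs, that per-VM change could be scripted by looping over the output of qm list. The ID extraction can be sketched against sample output, so the parsing is testable without a Proxmox host; the VM names and the commented loop below are hypothetical

```shell
# Sample `qm list`-style output, used here so the ID extraction can be
# checked anywhere; on a real node you'd pipe `qm list` instead
qm_output='      VMID NAME        STATUS     MEM(MB)    BOOTDISK(GB) PID
       100 webserver   running    2048              32.00 1234
       101 database    stopped    4096              64.00 0'

# Skip the header row and print the first column (the VM IDs)
echo "$qm_output" | awk 'NR>1 {print $1}'
# prints 100 and 101, one per line

# On a real host this would become something like (hypothetical loop):
#   for vmid in $(qm list | awk 'NR>1 {print $1}'); do
#       qm set "$vmid" --allow-ksm 0
#   done
```

Remember each VM still needs a shutdown and start for the hardware change to apply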
Another thing to point out is take a look at the summary page for each VM after you’ve upgraded to version 9
Hmm, for me they’re all reporting over 100% of RAM usage, even though the QEMU guest agent is installed and the VM isn’t using much vRAM
Now this wasn’t an issue before, and this has had me scratching my head for quite a while
If I reboot a VM, the problem will go away but then return after a while
Now in previous versions, I understand the guest agent helped the host know how much RAM a VM was actually using
And although the documentation seems to imply that enabling ballooning will also give the host more accuracy for RAM usage, I found this makes little difference
You can set the minimum value to match the allocated RAM, and it should avoid the potential latency issues that made me avoid ballooning in the first place
But the dashboard still reports a very high RAM usage, albeit below 100%
Now the Linux VMs I run do show a large amount of cached memory in use. So it seems to me that the newer version of Proxmox VE is including this in the calculation of a VM’s RAM usage
So whereas previous versions might have discounted the cached memory, because it was available, the newer version seems to only factor in free memory
Whatever is actually going on, to me that means that monitoring Proxmox VE for VM RAM usage is no longer reliable
Instead, each individual VM will need monitoring for potential memory leaks
Sharing is caring!