Automating Fedora CoreOS - Unattended Installation via iPXE
In the video below, we show how to install Fedora CoreOS using iPXE
Prerequisites
- Linux experience
- Container fundamentals
What You'll Learn
- Deploy and manage Fedora CoreOS
- Work with container-optimized Linux systems
Feb 7, 2026 · 16 mins read
Now I spent most of 2025 migrating my computers over to containers, automating the process along the way with Ansible
So much so that I even got a year-end award from ChatGPT: “The Container Whisperer”
But while Alpine is lightweight and gives me access to more up-to-date versions of Podman, the problem is that it uses OpenRC, so I can’t take advantage of Quadlets
Setting up services in OpenRC to run rootless containers took a long time to get working, let alone fine-tune
Granted my automated solution now works, and I can easily roll out more containers with a playbook, but it’s a very complicated solution
And then I stumbled on Fedora CoreOS
Not only does it use Systemd, but it will regularly keep itself up to date as well, leaving you to focus on managing containers
Useful links:
https://fedoraproject.org/coreos/download?stream=stable
https://coreos.github.io/ignition/getting-started/
https://coreos.github.io/butane/getting-started/
https://docs.fedoraproject.org/en-US/fedora-coreos/platforms/
Assumptions:
Because this video is specifically about the installation process, I’m going to assume you’re already familiar with how to create Ignition files
That’s something I felt worthy of a video in itself because some people will already be familiar with Ignition and CoreOS, but not necessarily Butane
But if you don’t know how to create Ignition files, then I suggest you check out my other video first, which shows how you can convert a simpler Butane file to an Ignition file
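In brief, that conversion looks something like this; a minimal sketch, where the user name, SSH key and the variant/version pairing are placeholder examples rather than the exact file from that video:

```shell
# Minimal Butane file (a sketch; the user name and SSH key are placeholders)
cat > srv1.bu <<'EOF'
variant: fcos
version: 1.6.0
passwd:
  users:
    - name: ansible
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... ansible@homelab.lan
EOF

# Convert it to Ignition with the butane CLI (also available as a container,
# e.g. quay.io/coreos/butane) - shown as a comment in case butane isn't installed:
#   butane --pretty --strict srv1.bu > srv1.ign
```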
OS Files:
The preferred way to install CoreOS involves automation
The goal is to install the OS over the network and for that we’ll use a web server
I’ll be using Caddy, which I run as a container, but you can use any web server for this
We need to download 3 files to install Fedora CoreOS
https://fedoraproject.org/coreos/download?stream=stable
Netboot kernel
Netboot initramfs
Netboot rootfs
These are much smaller files and it’s more practical than trying to pull an entire CD-ROM image over the network
But first we’ll create a folder on the container host, which will be where Caddy will serve all the files from
mkdir -p caddy/var/www/html/fcos
Now I can download these files directly to the computer running Caddy. But unfortunately, Fedora renames the files for newer versions, and so the links change over time
The commands I’ll use will look something like this
wget -O caddy/var/www/html/fcos/fcos-kernel
wget -O caddy/var/www/html/fcos/fcos-initramfs.img
wget -O caddy/var/www/html/fcos/fcos-rootfs.img
For each line, you’ll want to copy the relevant URL from the webpage and append it to the matching command
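Because the links embed the build version, another option is a small helper that derives all three URLs from that version; a sketch, assuming the file naming pattern used at the time of recording still holds:

```shell
# Derive the three netboot download commands from a build version
# (the naming pattern is taken from the current download links and
# may change in future releases - check the download page if in doubt).
ver="43.20260119.3.1"
base="https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/${ver}/x86_64"
dest="caddy/var/www/html/fcos"

echo "wget -O ${dest}/fcos-kernel ${base}/fedora-coreos-${ver}-live-kernel.x86_64"
echo "wget -O ${dest}/fcos-initramfs.img ${base}/fedora-coreos-${ver}-live-initramfs.x86_64.img"
echo "wget -O ${dest}/fcos-rootfs.img ${base}/fedora-coreos-${ver}-live-rootfs.x86_64.img"
```

Pipe the output to `sh` if you want it to actually run the downloads.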
At the time of recording the commands I’ll use are
wget -O caddy/var/www/html/fcos/fcos-kernel https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/43.20260119.3.1/x86_64/fedora-coreos-43.20260119.3.1-live-kernel.x86_64
wget -O caddy/var/www/html/fcos/fcos-initramfs.img https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/43.20260119.3.1/x86_64/fedora-coreos-43.20260119.3.1-live-initramfs.x86_64.img
wget -O caddy/var/www/html/fcos/fcos-rootfs.img https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/43.20260119.3.1/x86_64/fedora-coreos-43.20260119.3.1-live-rootfs.x86_64.img
VM Setup:
We’re going to install CoreOS on a VM in this video
So configure this with the hardware settings you see fit
But when it first powers up, we want it to download various files over the network, so that the OS is installed on the hard drive and configured the way we want
After it reboots we want it to boot from the hard drive going forward
So make sure the boot order is the hard drive then the network card
In the case of Proxmox VE, I would suggest leaving the VM with the default BIOS setting of SeaBIOS if you want a simple automation solution
If you select UEFI instead, the VM will need an EFI disk to load the firmware first and then it will load the OS from the hard drive
But with an EFI disk attached, it won’t boot over the network
So you’d have to detach the EFI disk, power it on and install the OS over the network, then power the VM down immediately after the installation. After that, you’ll need to re-attach the EFI disk before powering it back up
While it might be feasible to automate something like that, it’s something else that can go wrong in the process
And when it comes to IT, it’s best to keep things simple
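For reference, creating such a VM from the Proxmox VE shell might look something like this; a sketch only, where the VM ID, storage name, bridge and sizes are examples for my setup and will differ on yours:

```shell
# Create a SeaBIOS VM that boots from the hard drive first, then the NIC
# (IDs, storage and bridge names are examples - adjust to your host)
qm create 101 \
  --name srv1 \
  --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:32 \
  --boot order='scsi0;net0'
```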
iPXE File:
As part of the installation process, we’ll be using iPXE
And in the case of Proxmox VE, when the VM boots up, it will load the iPXE firmware
This allows us to point it to a web server, from which it will download an iPXE instruction file telling it where to get various files and how to install the OS
Now CoreOS needs to be given what’s known as an Ignition file
If you’ve used answer files or cloud-init, the premise is the same; Install an OS but automate its configuration by providing information from a separate file
This means each computer needs its own Ignition file
Now I tried various strategies to get this working, but I kept running into memory limitation problems
To solve this I settled for supplying each computer with its own iPXE file
For example, we’ll create a file for a server I’ll call srv1 on the web server
nano caddy/var/www/html/fcos/srv1.ipxe
#!ipxe
kernel http://fcos.homelab.lan/fcos-kernel initrd=main coreos.live.rootfs_url=http://fcos.homelab.lan/fcos-rootfs.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://fcos.homelab.lan/srv1.ign coreos.inst.platform_id=metal
initrd --name main http://fcos.homelab.lan/fcos-initramfs.img
boot
Now save and exit
NOTE: We aren’t using TLS in this iPXE process as it would require additional work to build custom firmware that trusted a private Root CA
NOTE: Make sure the drive name is correct, for my computers it will be /dev/sda
NOTE: You can make the “coreos.inst.platform_id” setting platform specific if you like, for example qemu, aws, azure, etc.
I didn’t notice any difference installing this on Proxmox VE, which uses QEMU, and this minimal installation doesn’t include a QEMU guest agent anyway
While you can use IP addresses rather than an FQDN, it’s easier to change a single A record in DNS than to update an IP address in multiple locations
In addition, my Caddy instance is running multiple websites and an IP address isn’t as practical
Now the memory limitations I referred to before are about the size of this iPXE file
If you try to make decisions in this type of file, for instance, the file gets larger, and iPXE fails to work when the file gets too large, so you have to keep this lean
In this file we issue the necessary instructions to install the OS and point iPXE to the CoreOS files we downloaded earlier
As part of that installation, we tell it where to find the Ignition file, srv1.ign
And this is a file I showed how to create in a previous video about Butane
Once the installation completes, bear in mind the computer will automatically reboot
Web Server:
Caddy can serve static files, like any other web server, so we’ll edit the Caddyfile to do that
nano caddy/Caddyfile
http://fcos.homelab.lan {
root * /var/www/html/fcos
file_server
}
Now save and exit
So here we define the FQDN for the website, the root folder and tell it this will be serving static files
Because I’m running Caddy as a container, I’ll also need to update its configuration, and for me that means editing a Docker Compose file to add a new mapping
nano docker-compose.yml
caddy:
…
ports:
- "80:80"
volumes:
- ./caddy/var/www/html:/var/www/html
Now save and exit
What I’ve done is to map the host folder where the web server files are, but I’ve also allowed access to TCP port 80 as the server will connect using HTTP
For the changes to take effect we need to rebuild the container
docker compose up -d
To check Caddy sees the files we’ll run a quick test
docker exec caddy ls -R /var/www/html/fcos
You’ll now need to update your DNS server with a new A record to resolve this FQDN
Once that’s done, you can check the web service is working
curl http://fcos.homelab.lan/srv1.ipxe
Assuming that returns the contents of the file, we know the web server is working
DHCP Server:
This whole process relies on a DHCP server that will not only provide the computer with an IP address, gateway and DNS server, but also tell it where to find the iPXE file
Now how you configure your DHCP server depends on what that server is, but I’m using the Kea DHCP server
For me, this is also a container, but the config file is the same as for a standalone server, just in a different folder
This is something else I had to test various strategies to get working, but I settled on reservations
For example,
nano kea/kea-dhcp4.conf
...
"subnet4": [
{
"subnet": "192.168.102.0/24",
"pools": [ { "pool": "192.168.102.100 - 192.168.102.199" } ],
"option-data": [
{
"name": "routers",
"data": "192.168.102.254"
},
{
"name": "domain-name",
"data": "homelab.lan"
}
], "reservations": [
// MAC address reservation
{
"hw-address": "bc:24:11:d3:bc:61",
"option-data": [
{
"name": "boot-file-name",
"data": "http://fcos.homelab.lan/srv1.ipxe"
}
]
}
]
}
]
Now save and exit
NOTE: You have to be very careful, as the formatting for Kea’s configuration is very strict. Because this is the first reservation in the subnet, for instance, I also have to add a comma after the preceding option-data section
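Given how strict the parser is, it’s worth validating the file before restarting anything. For a containerised Kea, that might look like this; an assumption on my part that your container is named kea and the config is mounted at Kea’s default path:

```shell
# Dry-run the Kea configuration; a non-zero exit code means a syntax problem
docker exec kea kea-dhcp4 -t /etc/kea/kea-dhcp4.conf
```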
My final aim is to configure each server with a static IP address, but it first has to start with one from DHCP
It doesn’t matter to me what IP address the server gives it, as it will be temporary, so I don’t reserve one
However, I still need to identify the server, and as usual we identify it by its MAC address
For hypervisors like Proxmox VE, you can find this in the hardware settings of the NIC
Otherwise check when the DHCP server assigns an IP address to the computer, as it will log the MAC address the request came from
Then we provide it with an option for the Boot File Name, also known as DHCP Option 67
As you’ll see, this is a URL pointing to the iPXE file for the server to download from our web server
Also, make sure your DHCP server is handing out domain-search and domain-name options
Personally I set a domain-search as a global option, and a domain-name as a subnet option
The domain-name is the DNS Domain that a computer belongs to and I might change that on a per subnet basis
On the other hand the domain-search is to tell a computer what domain to append when a DNS query is made using just a host name
Whenever you connect to a computer it’s best to use the FQDN
For example
ping fcos.homelab.lan
But if the computer has a domain-search setting of homelab.lan, then just using the hostname works
ping fcos
I bring this up because iPXE ran into DNS issues for me and supplying both settings resolved this
Now there are different ways to apply changes in Kea, but it depends on how you’ve configured and deployed it
As this is a Lab I’ll just restart the container
docker compose restart kea
Then check that Kea doesn’t have any issues, due to a typo for instance
docker ps
Assuming that’s been configured correctly, when the computer boots up it should now get the IP information plus the option
Ignition File:
When you first boot the computer, Fedora CoreOS needs to perform an initial configuration and this requires an Ignition file
This is a machine-readable file, and it’s advised to create it using the Butane tool
I’ve covered this in a previous video, but this is what I’ll add to the web server
nano caddy/var/www/html/fcos/srv1.ign
{"ignition":{"version":"3.5.0"},"passwd":{"users":[{"name":"ansible","sshAuthorizedKeys":["ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILsB45m+t/o5SFTCx7Rxe4wAPiYN0hqaobruaPb0jZpw ansible@homelab.lan"]}]},"storage":{"files":[{"path":"/etc/hostname","contents":{"compression":"","source":"data:,srv1"},"mode":420},{"path":"/etc/NetworkManager/system-connections/ens18.nmconnection","contents":{"compression":"gzip","source":"data:;base64,H4sIAAAAAAAC/1TMwarCMBBG4f08y721ibVWJE9Suhg7vzTQTEoyCr69CxV0e/g445xVMVvMOlGUAK1uIHtsCLAFRWEU1VCuPONfOeFNaIzbvZuIRQpqdcGdfOP6oXGtb4473/19B3/oSLT+oH17pgRbsoTEeuP19eynT5VY+bJC6BkAAP//FadDfKUAAAA="},"mode":384},{"path":"/etc/sysctl.d/60-disable-ipv6.conf","contents":{"compression":"","source":"data:,net.ipv6.conf.all.disable_ipv6%20%3D%201%0Anet.ipv6.conf.default.disable_ipv6%20%3D%201%0A"}},{"path":"/etc/sudoers.d/ansible","contents":{"compression":"","source":"data:,ansible%20ALL%3D(ALL)%20NOPASSWD%3A%20ALL"},"mode":288},{"overwrite":true,"path":"/etc/chrony.conf","contents":{"compression":"","source":"data:,server%20192.168.100.12%20iburst%0Adriftfile%20%2Fvar%2Flib%2Fchrony%2Fdrift%0Amakestep%201.0%203%0Artcsync%0A"}},{"path":"/etc/zincati/config.d/50-reboot-strategy.toml","contents":{"compression":"gzip","source":"data:;base64,H4sIAAAAAAAC/2TNMQrDMAyF4V2nED5AMBQ6FHKKjsEYE4tU4MjBlgnp6YuGTF1/Pukt48hJqQfo2pLSduGM7qDGNfPqAG4w3S2A8k7xW4WMlrqmYsXsH55OllzPECCnq+OMC7r3EIe2l5pGu7Q3/vHy3kEh2fQTd5ahZP7p4RcAAP//CiAq/qMAAAA="},"mode":420},{"path":"/etc/vconsole.conf","contents":{"compression":"","source":"data:,KEYMAP%3Duk"},"mode":420}],"links":[{"path":"/etc/localtime","target":"../usr/share/zoneinfo/Europe/London"}]}}Although as I showed in that previous video this JSON file is a bit easier to read when formatted
{
"ignition": {
"version": "3.5.0"
},
"passwd": {
"users": [
{
"name": "ansible",
"sshAuthorizedKeys": [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILsB45m+t/o5SFTCx7Rxe4wAPiYN0hqaobruaPb0jZpw ansible@homelab.lan"
]
}
]
},
"storage": {
"files": [
{
"path": "/etc/hostname",
"contents": {
"compression": "",
"source": "data:,srv1"
},
"mode": 420
},
{
"path": "/etc/NetworkManager/system-connections/ens18.nmconnection",
"contents": {
"compression": "gzip",
"source": "data:;base64,H4sIAAAAAAAC/1TMwarCMBBG4f08y721ibVWJE9Suhg7vzTQTEoyCr69CxV0e/g445xVMVvMOlGUAK1uIHtsCLAFRWEU1VCuPONfOeFNaIzbvZuIRQpqdcGdfOP6oXGtb4473/19B3/oSLT+oH17pgRbsoTEeuP19eynT5VY+bJC6BkAAP//FadDfKUAAAA="
},
"mode": 384
},
{
"path": "/etc/sysctl.d/60-disable-ipv6.conf",
"contents": {
"compression": "",
"source": "data:,net.ipv6.conf.all.disable_ipv6%20%3D%201%0Anet.ipv6.conf.default.disable_ipv6%20%3D%201%0A"
}
},
{
"path": "/etc/sudoers.d/ansible",
"contents": {
"compression": "",
"source": "data:,ansible%20ALL%3D(ALL)%20NOPASSWD%3A%20ALL"
},
"mode": 288
},
{
"overwrite": true,
"path": "/etc/chrony.conf",
"contents": {
"compression": "",
"source": "data:,server%20192.168.100.12%20iburst%0Adriftfile%20%2Fvar%2Flib%2Fchrony%2Fdrift%0Amakestep%201.0%203%0Artcsync%0A"
}
},
{
"path": "/etc/zincati/config.d/50-reboot-strategy.toml",
"contents": {
"compression": "gzip",
"source": "data:;base64,H4sIAAAAAAAC/2TNMQrDMAyF4V2nED5AMBQ6FHKKjsEYE4tU4MjBlgnp6YuGTF1/Pukt48hJqQfo2pLSduGM7qDGNfPqAG4w3S2A8k7xW4WMlrqmYsXsH55OllzPECCnq+OMC7r3EIe2l5pGu7Q3/vHy3kEh2fQTd5ahZP7p4RcAAP//CiAq/qMAAAA="
},
"mode": 420
},
{
"path": "/etc/vconsole.conf",
"contents": {
"compression": "",
"source": "data:,KEYMAP%3Duk"
},
"mode": 420
}
],
"links": [
{
"path": "/etc/localtime",
"target": "../usr/share/zoneinfo/Europe/London"
}
]
}
}
For me the goal is to have a minimal OS that Ansible can SSH into and authenticate using SSH key authentication, so this creates a user that has no password and has full sudo rights
Granted, it’s advisable to add a super user to the wheel group, but I’m using sudo on other platforms and I want to keep my Ansible playbooks consistent
Every server needs a unique hostname, so we’ve set one
The server should have a static IP address, otherwise the server will go offline at some point if the DHCP service isn’t restored in time
But in any case, I’ll also be running a DHCP server in a container, so it wouldn’t be practical for me
I don’t use IPv6, so I’ve disabled it on the NIC and system wide
NTP is important, but for security reasons, I restrict access to Internet time servers
In which case, Chrony is configured to use an internal NTP Server
By default, Zincati will reboot the server after an upgrade is ready, so I limit when automatic reboots are allowed to early Sunday morning only
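Decoded, that reboot-strategy file contains something along these lines; a sketch of Zincati’s periodic strategy rather than a byte-for-byte decode of the gzipped payload above, so adjust the window to suit:

```shell
# Write a Zincati periodic reboot window: Sundays at 02:00, for one hour
# (written to a relative path here purely for demonstration)
mkdir -p etc/zincati/config.d
cat > etc/zincati/config.d/50-reboot-strategy.toml <<'EOF'
[updates]
strategy = "periodic"

[[updates.periodic.window]]
days = [ "Sun" ]
start_time = "02:00"
length_minutes = 60
EOF
```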
I’m in the UK, so I set the keyboard layout to UK, otherwise I’d have issues with special characters when logged in
Finally I set a local timezone
This installation is intended to be a one-off process, and once ready, Ansible will take over and manage the containers through a separate non-root account
The one key lesson I’ll point out is this…Don’t try to install ANY software on Fedora CoreOS
That’s not the way Fedora CoreOS is meant to work and it will likely break automated updates being handled by Zincati
If you want to use Ansible for instance, so far I’ve found it better to use a standalone binary for Python 3
This doesn’t rely on any other software, and by storing it in /usr/local/bin/ it survives an upgrade and is easily accessible
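To tie that in with Ansible, the inventory can point at the standalone interpreter; a sketch, where the group name, hostname and user are examples from this video:

```shell
# Inventory snippet telling Ansible which Python to use on CoreOS hosts
# (group, hostname and user are examples - adjust to your environment)
cat > inventory.ini <<'EOF'
[coreos]
srv1.homelab.lan

[coreos:vars]
ansible_user=ansible
ansible_python_interpreter=/usr/local/bin/python3
EOF
```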
Installation:
Everything should now be in place, so we’ll connect to the console of the VM, then start it so we can follow along
Depending on your network, this may take a while to complete
But once the installation has finished, the computer should automatically reboot itself
Because there’s now an OS on the hard drive, it will boot from that and once it settles at the login prompt, we can SSH into the server
And in my case it won’t then automatically upgrade itself and reboot again, because I’ve placed a restriction for that
But at this stage, it’s a matter of setting up the containers that you want
And the OS will take care of itself
Troubleshooting:
If things don’t go to plan, chances are the computer will keep rebooting
If that happens then as the computer boots up, press Ctrl-B when given the opportunity
From that shell you can test if DHCP is assigning an IP address to the computer with this command
dhcp
If it doesn’t return OK then check your DHCP server to see what’s going wrong, correct that and reboot the computer to try again
TIP: If the computer is in a different VLAN, make sure a DHCP relay is setup
Still not working?
In that case we’ll check the config being applied
Press Ctrl-B, if you’re not already in the shell, then run these commands
dhcp
config
TIP: To exit hit Ctrl-X
Scroll through this using the down arrow key and make sure gateway, ip, netmask and dns have the correct entries
Also check dnssl and domain have the correct domain name information
If any of these are wrong, update the DHCP server
While you’re here, check the filename setting
If that’s not correct you’ll need to update the boot file option set by the DHCP server
Once any DHCP server settings are corrected, reboot the computer and try again
If the installation stalls soon after an IP address is assigned, we need to go back into the shell and check access to the HTTP server
Again, we’ll press Ctrl-B and this time run these commands
dhcp
imgfetch http://fcos.homelab.lan/srv1.ipxe
imgstat
The second line will download the iPXE file; if that doesn’t return OK, check your web server, and access to it, because something is blocking the download
The last line should return a file size higher than zero, if not check the web server as it may have issues serving this file
If the OS does get installed, but the iPXE process then keeps repeating, make sure the boot order is set to hard drive then network
That way, iPXE should only run on the first boot, but after that the OS from the hard drive is loaded
Final Thoughts:
Well, this iPXE method has greatly simplified my automation goal because it’s quick, repeatable and much simpler to deploy
And Proxmox VE is ideal to run this on because, by default, remote access isn’t restricted in CoreOS
But I can take care of that in a simpler, more centralised way by using the hypervisor’s firewall
So now that the OS installation automation is working, I’ll turn my attention to having Ansible handle the automation of the containers themselves
It’s already working with Alpine, but it needs porting across to CoreOS and there’s a different way in which Python 3 will have to be deployed