Scroll down and select your Talos version (v1.9.2 for example)
Then tick the box for siderolabs/qemu-guest-agent and submit
This will provide you with a link to the bare metal ISO
The lines we’re interested in are as follows:
text
Metal ISO
amd64 ISO
https://factory.talos.dev/image/ce4c980550dd2ab1b17bbf2b08801c7eb59418eafe8f279833297925d67c7515/v1.9.2/metal-amd64.iso
arm64 ISO
https://factory.talos.dev/image/ce4c980550dd2ab1b17bbf2b08801c7eb59418eafe8f279833297925d67c7515/v1.9.2/metal-arm64.iso
Installer Image
For the initial Talos install or upgrade use the following installer image:
factory.talos.dev/installer/ce4c980550dd2ab1b17bbf2b08801c7eb59418eafe8f279833297925d67c7515:v1.9.2
Download the above ISO (this will most likely be amd64 for you).
Take note of the factory.talos.dev/installer URL, as you’ll need it later.
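For example, you can fetch the amd64 ISO with curl:

bash
curl -LO https://factory.talos.dev/image/ce4c980550dd2ab1b17bbf2b08801c7eb59418eafe8f279833297925d67c7515/v1.9.2/metal-amd64.iso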
Upload ISO
From the Proxmox UI, select the “local” storage and enter the “Content” section.
Click the “Upload” button:
Select the ISO you downloaded previously, then hit “Upload”
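Alternatively, you can copy the ISO straight into the storage’s ISO directory on the Proxmox host from the command line; the path below assumes the default “local” storage:

bash
scp metal-amd64.iso root@<proxmox-host>:/var/lib/vz/template/iso/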
Create VMs
Before starting, familiarise yourself with the system requirements for Talos and assign VM resources accordingly.
Create a new VM by clicking the “Create VM” button in the Proxmox UI:
Fill out a name for the new VM:
In the OS tab, select the ISO we uploaded earlier:
Keep the defaults set in the “System” tab.
Keep the defaults in the “Hard Disk” tab as well, only changing the size if desired.
In the “CPU” section, give at least 2 cores to the VM:
Note: As of Talos v1.0 (which requires the x86-64-v2 microarchitecture), booting with the default Processor Type kvm64 will not work on Proxmox versions prior to 8.0.
You can enable the required CPU features after creating the VM by adding the following line to the corresponding /etc/pve/qemu-server/<vmid>.conf file:
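text
args: -cpu kvm64,+cx16,+lahf_lm,+popcnt,+sse3,+ssse3,+sse4.1,+sse4.2

This adds the x86-64-v2 feature flags on top of the kvm64 base model.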
Alternatively, you can set the Processor Type to host if your Proxmox host supports these CPU features; note, however, that this prevents using live VM migration.
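If you manage VMs from the command line, the same change can be made with qm (the VM ID 100 here is an example):

bash
qm set 100 --cpu host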
Verify that the RAM is set to at least 2GB:
Keep the default values for networking, verifying that the VM is set to come up on the bridge interface:
Finish creating the VM by clicking through the “Confirm” tab and then “Finish”.
Repeat this process for a second VM to use as a worker node.
You can also repeat this for as many additional nodes as desired.
Note: Talos doesn’t support memory hot plugging; if creating the VM programmatically, don’t enable memory hotplug on your Talos VMs.
Doing so will cause Talos to be unable to see all available memory, leaving it with insufficient memory to complete installation of the cluster.
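If you do create VMs programmatically, a minimal sketch using Proxmox’s qm CLI might look like the following; the VM ID, storage names, and ISO filename are assumptions to adjust for your environment:

bash
# 2 cores, 2 GB RAM, a 32 GB disk, booting from the uploaded Talos ISO;
# memory hotplug is simply never enabled, as Talos requires.
qm create 100 --name talos-control-plane-1 \
  --cores 2 --memory 2048 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:32 \
  --ide2 local:iso/metal-amd64.iso,media=cdrom \
  --boot order='ide2;scsi0' \
  --agent enabled=1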
Start Control Plane Node
Once the VMs have been created and updated, start the VM that will be the first control plane node.
This VM will boot the ISO image specified earlier and enter “maintenance mode”.
With DHCP server
Once the machine has entered maintenance mode, there will be a console log that details the IP address that the node received.
Take note of this IP address, which will be referred to as $CONTROL_PLANE_IP for the rest of this guide.
If you wish to export this IP as a bash variable, simply issue a command like export CONTROL_PLANE_IP=1.2.3.4.
Without DHCP server
To apply a machine configuration in maintenance mode, the VM must have an IP address on the network, so without a DHCP server you need to set one manually at boot time.
Press e at the boot menu to edit the kernel command line, then append the IP parameters for the VM.
The format is:
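text
ip=<client-ip>:<server-ip>:<gateway-ip>:<netmask>:<hostname>:<device>:<autoconf>

This is the standard Linux kernel ip= boot parameter. For example, to give the node the address 192.168.0.100 with gateway 192.168.0.1 on eth0:

text
ip=192.168.0.100::192.168.0.1:255.255.255.0::eth0:off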
With the IP address above, you can now generate the machine configurations to use for installing Talos and Kubernetes.
Issue the following command, updating the output directory, cluster name, and control plane IP as you see fit:
bash
talosctl gen config talos-proxmox-cluster https://$CONTROL_PLANE_IP:6443 --output-dir _out
This will create several files in the _out directory: controlplane.yaml, worker.yaml, and talosconfig.
Note: The Talos config by default will install to /dev/sda.
Depending on your setup, the virtual disk may appear under a different name, e.g. /dev/vda.
You can check the available disks by running the following command:
bash
talosctl get disks --insecure --nodes $CONTROL_PLANE_IP
Update the controlplane.yaml and worker.yaml config files to point to the correct disk location.
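For example, if the disk shows up as /dev/vda, the relevant section of each config file would look like this (a minimal sketch; the rest of the file is unchanged):

yaml
machine:
  install:
    disk: /dev/vda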
QEMU guest agent support
For QEMU guest agent support, you can generate the config with the custom install image:
bash
talosctl gen config talos-proxmox-cluster https://$CONTROL_PLANE_IP:6443 --output-dir _out --install-image factory.talos.dev/installer/ce4c980550dd2ab1b17bbf2b08801c7eb59418eafe8f279833297925d67c7515:v1.9.2
In Proxmox, go to your VM -> Options and ensure that QEMU Guest Agent is Enabled.
The QEMU agent is now configured.
Create Control Plane Node
Using the controlplane.yaml generated above, you can now apply this config using talosctl.
Issue:
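bash
talosctl apply-config --insecure --nodes $CONTROL_PLANE_IP --file _out/controlplane.yaml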
You should now see some action in the Proxmox console for this VM.
Talos will be installed to disk, the VM will reboot, and then Talos will configure the Kubernetes control plane on this VM.
The VM will remain in the Booting stage until the bootstrap is completed in a later step.
Note: This process can be repeated multiple times to create an HA control plane.
Create Worker Node
Create at least one worker node using a process similar to the control plane creation above.
Start the worker node VM and wait for it to enter “maintenance mode”.
Take note of the worker node’s IP address, which will be referred to as $WORKER_IP.
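You can then apply worker.yaml in the same way as the control plane config:

bash
talosctl apply-config --insecure --nodes $WORKER_IP --file _out/worker.yaml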
Note: This process can be repeated multiple times to add additional workers.
Using the Cluster
Once the cluster is available, you can make use of talosctl and kubectl to interact with the cluster.
For example, to view current running containers, run talosctl containers for a list of containers in the system namespace, or talosctl containers -k for the k8s.io namespace.
To view the logs of a container, use talosctl logs <container> or talosctl logs -k <container>.
First, configure talosctl to talk to your control plane node by issuing the following, updating paths and IPs as necessary:
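bash
export TALOSCONFIG="_out/talosconfig"
talosctl config endpoint $CONTROL_PLANE_IP
talosctl config node $CONTROL_PLANE_IP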