Automating Proxmox VE with Ansible: VMs, Containers, and Rolling Updates
Use Ansible and the community.general.proxmox modules to automate VM and container creation, manage inventory, and perform rolling updates across your Proxmox cluster.
Ansible brings agentless automation to Proxmox VE. Unlike Terraform, which focuses on provisioning, Ansible excels at both provisioning and ongoing configuration management — creating VMs, installing packages inside guests, and performing rolling updates across a cluster. This guide covers practical Ansible playbooks for managing Proxmox environments using the community.general.proxmox collection.
Prerequisites
Install Ansible and the required collection on your control node (your workstation or a dedicated management server):
# Install Ansible
pip install ansible
# Install the community.general collection (includes Proxmox modules)
ansible-galaxy collection install community.general
# Install the proxmoxer Python library (required by the modules)
pip install proxmoxer requests
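If you prefer pinning dependencies per project, the collection can also be declared in a requirements file and installed with ansible-galaxy collection install -r requirements.yml. A minimal sketch (the version floor is an assumption; adjust to your environment):

```yaml
# requirements.yml - declare the collection alongside your playbooks
collections:
  - name: community.general
    version: ">=8.0.0"  # assumed floor; pick the release your modules need
```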
Setting Up Inventory
Define your Proxmox nodes in an inventory file. The API variables sit at the all level, and localhost is declared explicitly, so that plays running on the control node (which target localhost) can resolve them:
# inventory/hosts.yml
all:
  hosts:
    localhost:
      ansible_connection: local
  vars:
    proxmox_api_host: 192.168.1.100
    proxmox_api_user: automation@pve
    proxmox_api_token_id: ansible-token
    proxmox_api_token_secret: "{{ vault_proxmox_token_secret }}"
  children:
    proxmox_nodes:
      hosts:
        pve1:
          ansible_host: 192.168.1.100
          ansible_user: root
        pve2:
          ansible_host: 192.168.1.101
          ansible_user: root
        pve3:
          ansible_host: 192.168.1.102
          ansible_user: root
Store the API token secret in an Ansible Vault file:
# Create an encrypted vault file
ansible-vault create inventory/vault.yml
# Contents:
# vault_proxmox_token_secret: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
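Before creating guests, it is worth sanity-checking the token. This sketch assumes the community.general.proxmox_node_info module (present in recent collection releases) and the inventory variables defined above; verify the return key against your collection version:

```yaml
# playbooks/check-api.yml - quick connectivity test (illustrative)
---
- name: Verify Proxmox API connectivity
  hosts: localhost
  gather_facts: false
  vars_files:
    - ../inventory/vault.yml
  tasks:
    - name: List cluster nodes via the API
      community.general.proxmox_node_info:
        api_host: "{{ proxmox_api_host }}"
        api_user: "{{ proxmox_api_user }}"
        api_token_id: "{{ proxmox_api_token_id }}"
        api_token_secret: "{{ vault_proxmox_token_secret }}"
      register: node_info

    - name: Show node names
      debug:
        # proxmox_nodes is the documented return key; confirm for your version
        msg: "{{ node_info.proxmox_nodes | map(attribute='node') | list }}"
```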
Creating LXC Containers
The community.general.proxmox module manages LXC containers:
# playbooks/create-containers.yml
---
- name: Create LXC containers on Proxmox
  hosts: localhost
  gather_facts: false
  vars_files:
    - ../inventory/vault.yml
  tasks:
    - name: Create web server containers
      community.general.proxmox:
        api_host: "{{ proxmox_api_host }}"
        api_user: "{{ proxmox_api_user }}"
        api_token_id: "{{ proxmox_api_token_id }}"
        api_token_secret: "{{ vault_proxmox_token_secret }}"
        node: pve1
        hostname: "web-{{ item }}"
        vmid: "{{ 200 + item }}"
        ostemplate: "local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst"
        storage: local-lvm
        disk: 8
        cores: 2
        memory: 1024
        swap: 512
        netif: '{"net0":"name=eth0,bridge=vmbr0,ip=192.168.1.{{ 50 + item }}/24,gw=192.168.1.1"}'
        password: "{{ vault_container_password }}"
        onboot: true
        state: present
      loop: [1, 2, 3]

    - name: Start the containers
      community.general.proxmox:
        api_host: "{{ proxmox_api_host }}"
        api_user: "{{ proxmox_api_user }}"
        api_token_id: "{{ proxmox_api_token_id }}"
        api_token_secret: "{{ vault_proxmox_token_secret }}"
        vmid: "{{ 200 + item }}"
        state: started
      loop: [1, 2, 3]
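The ostemplate referenced above must already exist on the node. If you have fetched the archive onto the control machine (for example from the Proxmox template mirror, or via pveam on a node), community.general.proxmox_template can upload it to a storage; a sketch under that assumption:

```yaml
# Illustrative task: upload the Debian template before creating containers
- name: Upload the container template to pve1
  community.general.proxmox_template:
    api_host: "{{ proxmox_api_host }}"
    api_user: "{{ proxmox_api_user }}"
    api_token_id: "{{ proxmox_api_token_id }}"
    api_token_secret: "{{ vault_proxmox_token_secret }}"
    node: pve1
    storage: local
    content_type: vztmpl
    src: /tmp/debian-12-standard_12.2-1_amd64.tar.zst  # assumed local path
    state: present
```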
Creating QEMU/KVM Virtual Machines
For full VMs, use the community.general.proxmox_kvm module:
# playbooks/create-vms.yml
---
- name: Create VMs from template
  hosts: localhost
  gather_facts: false
  vars_files:
    - ../inventory/vault.yml
  tasks:
    - name: Clone VM from template
      community.general.proxmox_kvm:
        api_host: "{{ proxmox_api_host }}"
        api_user: "{{ proxmox_api_user }}"
        api_token_id: "{{ proxmox_api_token_id }}"
        api_token_secret: "{{ vault_proxmox_token_secret }}"
        node: pve1
        name: "app-server-{{ item }}"
        newid: "{{ 300 + item }}"
        clone: ubuntu-template
        full: true
        storage: local-lvm
        cores: 4
        memory: 4096
        net:
          net0: "virtio,bridge=vmbr0"
        ipconfig:
          ipconfig0: "ip=192.168.1.{{ 60 + item }}/24,gw=192.168.1.1"
        ciuser: admin
        sshkeys: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
        state: present
      loop: [1, 2, 3]
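Cloning leaves the new VMs stopped. A follow-up task in the same play can boot them, mirroring the container example:

```yaml
- name: Start the cloned VMs
  community.general.proxmox_kvm:
    api_host: "{{ proxmox_api_host }}"
    api_user: "{{ proxmox_api_user }}"
    api_token_id: "{{ proxmox_api_token_id }}"
    api_token_secret: "{{ vault_proxmox_token_secret }}"
    node: pve1
    vmid: "{{ 300 + item }}"
    state: started
  loop: [1, 2, 3]
```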
Rolling Updates Across the Cluster
Ansible's serial keyword lets you update Proxmox nodes one at a time, ensuring the cluster stays available:
# playbooks/rolling-update.yml
---
- name: Rolling update of Proxmox cluster nodes
  hosts: proxmox_nodes
  serial: 1  # Update one node at a time
  max_fail_percentage: 0
  tasks:
    - name: Update package cache
      apt:
        update_cache: yes
        cache_valid_time: 3600

    - name: Upgrade all packages
      apt:
        upgrade: dist
        autoremove: yes
      register: upgrade_result

    - name: Check if reboot is required
      stat:
        path: /var/run/reboot-required
      register: reboot_required

    - name: Migrate running VMs off this node before reboot
      shell: |
        for vmid in $(qm list | awk 'NR>1 && $3=="running" {print $1}'); do
          echo "Migrating VM $vmid..."
          qm migrate $vmid {{ groups['proxmox_nodes'] | reject('eq', inventory_hostname) | first }} --online
        done
      when: reboot_required.stat.exists
      ignore_errors: true

    - name: Reboot if required
      reboot:
        msg: "Ansible rolling update reboot"
        reboot_timeout: 300
      when: reboot_required.stat.exists

    - name: Wait for the node to rejoin a quorate cluster
      shell: pvecm status | grep -q '^Quorate:.*Yes'
      register: cluster_status
      until: cluster_status.rc == 0
      retries: 30
      delay: 10
      changed_when: false
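As a final safety net, an optional play appended to the same file can assert that every node rejoined once the serial loop completes. This is a sketch: the parsing assumes the standard "Nodes:" line in pvecm status output:

```yaml
# Optional follow-up play for rolling-update.yml
- name: Verify full cluster membership after the update
  hosts: proxmox_nodes[0]
  tasks:
    - name: Read the live member count from corosync
      shell: pvecm status | awk '/^Nodes:/ {print $2}'
      register: member_count
      changed_when: false

    - name: Assert every node is back in the cluster
      assert:
        that:
          - member_count.stdout | int == groups['proxmox_nodes'] | length
```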
Running the Playbooks
# Create containers
ansible-playbook -i inventory/hosts.yml playbooks/create-containers.yml --ask-vault-pass
# Create VMs
ansible-playbook -i inventory/hosts.yml playbooks/create-vms.yml --ask-vault-pass
# Rolling update (with dry-run first)
ansible-playbook -i inventory/hosts.yml playbooks/rolling-update.yml --check
ansible-playbook -i inventory/hosts.yml playbooks/rolling-update.yml --ask-vault-pass
Tips for Production Use
- Use Ansible Vault for all credentials; never store tokens in plain text
- Test with --check (dry run) before applying changes to production
- Set serial: 1 for cluster operations to maintain quorum
- Monitor during updates: use ProxmoxR on your phone to watch node and VM status in real time while rolling updates are running
- Combine with Terraform: use Terraform for initial provisioning and Ansible for ongoing configuration management
Ansible brings consistency and repeatability to Proxmox management. Whether you are managing three containers or thirty nodes, codifying your operations in playbooks eliminates manual errors and saves time.
Take Proxmox management mobile
All the features discussed in this guide — accessible from your phone with ProxmoxR. Real-time monitoring, power control, firewall management, and more.