mirror of
https://github.com/Telmate/proxmox-api-go.git
synced 2026-04-25 15:35:51 +03:00
[PR #490] Fix HA VM Migration Race Condition #505
📋 Pull Request Information
Original PR: https://github.com/Telmate/proxmox-api-go/pull/490
Author: @pavel-z1
Created: 10/20/2025
Status: 🔄 Open
Base: master ← Head: fix/ha-migration-race-condition
📝 Commits (1)
0e03c15 fix: wait for migration lock release on HA VMs
📊 Changes
2 files changed (+77 additions, -0 deletions)
📝 proxmox/client.go (+30 -0)
📝 proxmox/config__qemu.go (+47 -0)
📄 Description
Fix HA VM Migration Race Condition
This pull request resolves a race condition that occurs when migrating High Availability (HA) virtual machines.
The Problem
When a Terraform plan modifies the `target_node` of a `proxmox_vm_qemu` resource with HA enabled, the provider initiates a migration. However, it would then immediately attempt to apply further configuration updates to the VM on the new node. Due to cluster synchronization delays, the VM's configuration file might not be immediately available on the destination node, or the VM might still be locked by the migration process. This resulted in intermittent errors such as:

500 Configuration file 'nodes/...' does not exist
500 VM is locked (migrate)

This pull request addresses issue #1343.
The Solution
To ensure the provider waits until the migration is fully complete, this change introduces a robust polling mechanism. After initiating a migration, the provider now polls the VM's status until the migration lock (`lock: migrate`) is released. This ensures that the provider only proceeds with subsequent configuration updates after the Proxmox cluster has fully finalized the migration and the VM is ready for new commands. A generous 10-minute timeout accommodates large or slow migrations.
🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.