[GH-ISSUE #465] Bug: Proxmox 9.0: Not HA managed VM error not correctly detected #118

Open
opened 2026-02-28 00:40:39 +03:00 by kerem · 7 comments

Originally created by @TobiPeterG on GitHub (Oct 2, 2025).
Original GitHub issue: https://github.com/Telmate/proxmox-api-go/issues/465

This is related to https://github.com/Telmate/terraform-provider-proxmox/issues/1416

While testing https://github.com/Telmate/terraform-provider-proxmox/pull/1415, I noticed that VM deletion fails with the current master version of the Proxmox Terraform provider. I ran this command:
TF_LOG=TRACE TF_LOG_PATH=tofu.log PM_LOG=1 PM_DEBUG=1 tofu destroy

which created this log:

2025-10-02T14:17:19.403+0200 [INFO]  provider.terraform-provider-proxmox_v1.0.0: configuring server automatic mTLS: timestamp="2025-10-02T14:17:19.403+0200"
2025-10-02T14:17:19.428+0200 [DEBUG] provider.terraform-provider-proxmox_v1.0.0: plugin address: address=/tmp/plugin1027897418 network=unix timestamp="2025-10-02T14:17:19.428+0200"
2025-10-02T14:17:19.428+0200 [DEBUG] provider: using plugin: version=5
2025-10-02T14:17:19.442+0200 [TRACE] BuiltinEvalContext: Initialized "provider[\"registry.opentofu.org/hashicorp/proxmox\"]"%!s(<nil>) provider for provider["registry.opentofu.org/hashicorp/proxmox"]
2025-10-02T14:17:19.442+0200 [TRACE] provider.stdio: waiting for stdio data
2025-10-02T14:17:19.442+0200 [TRACE] tofu.contextPlugins: Serving provider "registry.opentofu.org/hashicorp/proxmox" schema from global schema cache
2025-10-02T14:17:19.442+0200 [TRACE] NodeApplyableProvider: configuring provider["registry.opentofu.org/hashicorp/proxmox"]
2025-10-02T14:17:19.442+0200 [TRACE] buildProviderConfig for provider["registry.opentofu.org/hashicorp/proxmox"]: using explicit config only
2025-10-02T14:17:19.442+0200 [TRACE] GRPCProvider: GetProviderSchema
2025-10-02T14:17:19.442+0200 [TRACE] GRPCProvider: GetProviderSchema: serving from global schema cache: address=registry.opentofu.org/hashicorp/proxmox
2025-10-02T14:17:19.442+0200 [TRACE] GRPCProvider: ValidateProviderConfig
2025-10-02T14:17:19.442+0200 [TRACE] GRPCProvider: GetProviderSchema
2025-10-02T14:17:19.442+0200 [TRACE] GRPCProvider: GetProviderSchema: serving from global schema cache: address=registry.opentofu.org/hashicorp/proxmox
2025-10-02T14:17:19.443+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Received request: @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.28.0/tfprotov5/tf5server/server.go:594 @module=sdk.proto tf_proto_version=5.9 tf_provider_addr=registry.terraform.io/telmate/proxmox tf_req_id=2b632741-8e8f-40f8-7d54-7edd2d962fc0 tf_rpc=PrepareProviderConfig timestamp="2025-10-02T14:17:19.443+0200"
2025-10-02T14:17:19.443+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Skipping protocol data file writing because no data directory is set. Use the TF_LOG_SDK_PROTO_DATA_DIR environment variable to enable this functionality.: tf_provider_addr=registry.terraform.io/telmate/proxmox tf_rpc=PrepareProviderConfig @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.28.0/internal/logging/protocol_data.go:41 @module=sdk.proto tf_proto_version=5.9 tf_req_id=2b632741-8e8f-40f8-7d54-7edd2d962fc0 timestamp="2025-10-02T14:17:19.443+0200"
2025-10-02T14:17:19.443+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Sending request downstream: @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.28.0/tfprotov5/internal/tf5serverlogging/downstream_request.go:22 @module=sdk.proto tf_proto_version=5.9 tf_provider_addr=registry.terraform.io/telmate/proxmox tf_req_id=2b632741-8e8f-40f8-7d54-7edd2d962fc0 tf_rpc=PrepareProviderConfig timestamp="2025-10-02T14:17:19.443+0200"
2025-10-02T14:17:19.443+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Preparing provider configuration: @module=sdk.helper_schema tf_provider_addr=registry.terraform.io/telmate/proxmox tf_req_id=2b632741-8e8f-40f8-7d54-7edd2d962fc0 tf_rpc=PrepareProviderConfig @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.37.0/helper/schema/grpc_provider.go:286 timestamp="2025-10-02T14:17:19.443+0200"
2025-10-02T14:17:19.443+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Calling downstream: @module=sdk.helper_schema tf_provider_addr=registry.terraform.io/telmate/proxmox tf_req_id=2b632741-8e8f-40f8-7d54-7edd2d962fc0 tf_rpc=PrepareProviderConfig @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.37.0/helper/schema/grpc_provider.go:374 timestamp="2025-10-02T14:17:19.443+0200"
2025-10-02T14:17:19.447+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Called downstream: @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.37.0/helper/schema/grpc_provider.go:376 @module=sdk.helper_schema tf_provider_addr=registry.terraform.io/telmate/proxmox tf_req_id=2b632741-8e8f-40f8-7d54-7edd2d962fc0 tf_rpc=PrepareProviderConfig timestamp="2025-10-02T14:17:19.447+0200"
2025-10-02T14:17:19.447+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Received downstream response: diagnostic_warning_count=0 tf_proto_version=5.9 tf_req_id=2b632741-8e8f-40f8-7d54-7edd2d962fc0 tf_rpc=PrepareProviderConfig diagnostic_error_count=0 tf_provider_addr=registry.terraform.io/telmate/proxmox tf_req_duration_ms=4 @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.28.0/tfprotov5/internal/tf5serverlogging/downstream_request.go:42 @module=sdk.proto timestamp="2025-10-02T14:17:19.447+0200"
2025-10-02T14:17:19.447+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Served request: tf_provider_addr=registry.terraform.io/telmate/proxmox tf_rpc=PrepareProviderConfig @module=sdk.proto tf_proto_version=5.9 tf_req_id=2b632741-8e8f-40f8-7d54-7edd2d962fc0 @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.28.0/tfprotov5/tf5server/server.go:615 timestamp="2025-10-02T14:17:19.447+0200"
2025-10-02T14:17:19.448+0200 [WARN]  ValidateProviderConfig from "provider[\"registry.opentofu.org/hashicorp/proxmox\"]" changed the config value, but that value is unused
2025-10-02T14:17:19.448+0200 [TRACE] GRPCProvider: ConfigureProvider
2025-10-02T14:17:19.448+0200 [TRACE] GRPCProvider: GetProviderSchema
2025-10-02T14:17:19.448+0200 [TRACE] GRPCProvider: GetProviderSchema: serving from global schema cache: address=registry.opentofu.org/hashicorp/proxmox
2025-10-02T14:17:19.448+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Received request: @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.28.0/tfprotov5/tf5server/server.go:623 @module=sdk.proto tf_proto_version=5.9 tf_rpc=Configure tf_provider_addr=registry.terraform.io/telmate/proxmox tf_req_id=6442b6ca-e1a5-dccc-7eb2-d6dfae63001d timestamp="2025-10-02T14:17:19.448+0200"
2025-10-02T14:17:19.448+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: No announced client capabilities: tf_req_id=6442b6ca-e1a5-dccc-7eb2-d6dfae63001d tf_rpc=Configure @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.28.0/tfprotov5/internal/tf5serverlogging/client_capabilities.go:30 @module=sdk.proto tf_proto_version=5.9 tf_provider_addr=registry.terraform.io/telmate/proxmox timestamp="2025-10-02T14:17:19.448+0200"
2025-10-02T14:17:19.448+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Sending request downstream: @module=sdk.proto tf_proto_version=5.9 tf_provider_addr=registry.terraform.io/telmate/proxmox tf_rpc=Configure tf_req_id=6442b6ca-e1a5-dccc-7eb2-d6dfae63001d @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.28.0/tfprotov5/internal/tf5serverlogging/downstream_request.go:22 timestamp="2025-10-02T14:17:19.448+0200"
2025-10-02T14:17:19.448+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Calling downstream: @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.37.0/helper/schema/grpc_provider.go:760 @module=sdk.helper_schema tf_provider_addr=registry.terraform.io/telmate/proxmox tf_req_id=6442b6ca-e1a5-dccc-7eb2-d6dfae63001d tf_rpc=Configure timestamp="2025-10-02T14:17:19.448+0200"
2025-10-02T14:17:19.448+0200 [INFO]  provider.terraform-provider-proxmox_v1.0.0: 2025/10/02 14:17:19 >>>>>>>>>> REQUEST:
GET /api2/json/access/users?full=1 HTTP/1.1
Host: pcloud.mgmt.sci.hpi.de
User-Agent: Go-http-client/1.1
Accept: application/json
Authorization: PVEAPIToken=OpenTofu@pve!OpenTofu=OURTOKEN
Accept-Encoding: gzip

: timestamp="2025-10-02T14:17:19.448+0200"
2025-10-02T14:17:19.581+0200 [INFO]  provider.terraform-provider-proxmox_v1.0.0: 2025/10/02 14:17:19 <<<<<<<<<< RESULT:
HTTP/1.1 200 OK
Content-Length: 322
Alt-Svc: h3=":443"; ma=2592000
Cache-Control: max-age=0
Content-Type: application/json;charset=UTF-8
Date: Thu, 02 Oct 2025 12:17:19 GMT
Expires: Thu, 02 Oct 2025 12:17:19 GMT
Pragma: no-cache
Server: Caddy
Server: pve-api-daemon/3.0

{"data":[{"tokens":[{"privsep":0,"expire":0,"tokenid":"OpenTofu"}],"expire":0,"realm-type":"pve","enable":1,"userid":"OpenTofu@pve","groups":"provisioning"},{"email":"scientific-compute@hpi.de","tokens":[{"expire":0,"tokenid":"maas","privsep":1}],"expire":0,"groups":"","realm-type":"pam","userid":"root@pam","enable":1}]}: timestamp="2025-10-02T14:17:19.581+0200"
2025-10-02T14:17:19.581+0200 [INFO]  provider.terraform-provider-proxmox_v1.0.0: 2025/10/02 14:17:19 >>>>>>>>>> REQUEST:
GET /api2/json/access/permissions?userid=OpenTofu@pve&path=/ HTTP/1.1
Host: pcloud.mgmt.sci.hpi.de
User-Agent: Go-http-client/1.1
Accept: application/json
Authorization: PVEAPIToken=OpenTofu@pve!OpenTofu=OURTOKEN
Accept-Encoding: gzip

: timestamp="2025-10-02T14:17:19.581+0200"
2025-10-02T14:17:19.615+0200 [INFO]  provider.terraform-provider-proxmox_v1.0.0: 2025/10/02 14:17:19 <<<<<<<<<< RESULT:
HTTP/1.1 200 OK
Content-Length: 935
Alt-Svc: h3=":443"; ma=2592000
Cache-Control: max-age=0
Content-Type: application/json;charset=UTF-8
Date: Thu, 02 Oct 2025 12:17:19 GMT
Expires: Thu, 02 Oct 2025 12:17:19 GMT
Pragma: no-cache
Server: Caddy
Server: pve-api-daemon/3.0

{"data":{"/":{"VM.Audit":1,"VM.Config.Options":1,"VM.Snapshot.Rollback":1,"Sys.PowerMgmt":1,"VM.Config.Network":1,"Datastore.Audit":1,"VM.Config.HWType":1,"Pool.Allocate":1,"VM.Config.CDROM":1,"Realm.AllocateUser":1,"Sys.Incoming":1,"SDN.Audit":1,"Mapping.Audit":1,"SDN.Allocate":1,"VM.Allocate":1,"VM.GuestAgent.FileRead":1,"Sys.Audit":1,"Pool.Audit":1,"VM.GuestAgent.Unrestricted":1,"VM.GuestAgent.Audit":1,"User.Modify":1,"Sys.Console":1,"VM.Config.Memory":1,"VM.Console":1,"VM.Replicate":1,"Datastore.AllocateTemplate":1,"VM.Config.CPU":1,"VM.Backup":1,"VM.Snapshot":1,"Mapping.Modify":1,"Datastore.Allocate":1,"Sys.AccessNetwork":1,"VM.Clone":1,"Sys.Modify":1,"Sys.Syslog":1,"VM.Config.Disk":1,"Realm.Allocate":1,"Datastore.AllocateSpace":1,"SDN.Use":1,"Group.Allocate":1,"Permissions.Modify":1,"VM.GuestAgent.FileWrite":1,"VM.PowerMgmt":1,"Mapping.Use":1,"VM.Migrate":1,"VM.GuestAgent.FileSystemMgmt":1,"VM.Config.Cloudinit":1}}}: timestamp="2025-10-02T14:17:19.615+0200"
2025-10-02T14:17:19.615+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Called downstream: tf_rpc=Configure @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.37.0/helper/schema/grpc_provider.go:762 @module=sdk.helper_schema tf_provider_addr=registry.terraform.io/telmate/proxmox tf_req_id=6442b6ca-e1a5-dccc-7eb2-d6dfae63001d timestamp="2025-10-02T14:17:19.615+0200"
2025-10-02T14:17:19.615+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Received downstream response: diagnostic_error_count=0 diagnostic_warning_count=0 tf_provider_addr=registry.terraform.io/telmate/proxmox tf_proto_version=5.9 tf_req_duration_ms=166 tf_req_id=6442b6ca-e1a5-dccc-7eb2-d6dfae63001d tf_rpc=Configure @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.28.0/tfprotov5/internal/tf5serverlogging/downstream_request.go:42 @module=sdk.proto timestamp="2025-10-02T14:17:19.615+0200"
2025-10-02T14:17:19.615+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Served request: @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.28.0/tfprotov5/tf5server/server.go:643 @module=sdk.proto tf_proto_version=5.9 tf_provider_addr=registry.terraform.io/telmate/proxmox tf_req_id=6442b6ca-e1a5-dccc-7eb2-d6dfae63001d tf_rpc=Configure timestamp="2025-10-02T14:17:19.615+0200"
2025-10-02T14:17:19.615+0200 [TRACE] vertex "provider[\"registry.opentofu.org/hashicorp/proxmox\"]": visit complete
2025-10-02T14:17:19.615+0200 [TRACE] vertex "proxmox_vm_qemu.vm (destroy)": starting visit (*tofu.NodeDestroyResourceInstance)
2025-10-02T14:17:19.616+0200 [TRACE] Resolving provider key for proxmox_vm_qemu.vm
2025-10-02T14:17:19.616+0200 [TRACE] Resolved provider key for proxmox_vm_qemu.vm as %!s(<nil>)
2025-10-02T14:17:19.616+0200 [TRACE] tofu.contextPlugins: Serving provider "registry.opentofu.org/hashicorp/proxmox" schema from global schema cache
2025-10-02T14:17:19.619+0200 [TRACE] readDiff: Read Delete change from plan for proxmox_vm_qemu.vm
2025-10-02T14:17:19.619+0200 [TRACE] tofu.contextPlugins: Serving provider "registry.opentofu.org/hashicorp/proxmox" schema from global schema cache
2025-10-02T14:17:19.619+0200 [TRACE] readResourceInstanceState: reading state for proxmox_vm_qemu.vm
2025-10-02T14:17:19.619+0200 [TRACE] upgradeResourceStateTransform: address: proxmox_vm_qemu.vm
2025-10-02T14:17:19.621+0200 [TRACE] transformResourceState: schema version of proxmox_vm_qemu.vm is still 0; calling provider "proxmox" for any other minor fixups
2025-10-02T14:17:19.621+0200 [TRACE] GRPCProvider: UpgradeResourceState
2025-10-02T14:17:19.621+0200 [TRACE] GRPCProvider: GetProviderSchema
2025-10-02T14:17:19.621+0200 [TRACE] GRPCProvider: GetProviderSchema: serving from global schema cache: address=registry.opentofu.org/hashicorp/proxmox
2025-10-02T14:17:19.622+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Received request: tf_proto_version=5.9 tf_req_id=8c8ac50b-5b04-c6a7-4f8b-aac0d1633d7c tf_resource_type=proxmox_vm_qemu @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.28.0/tfprotov5/tf5server/server.go:789 tf_provider_addr=registry.terraform.io/telmate/proxmox tf_rpc=UpgradeResourceState @module=sdk.proto timestamp="2025-10-02T14:17:19.621+0200"
2025-10-02T14:17:19.622+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Sending request downstream: tf_provider_addr=registry.terraform.io/telmate/proxmox tf_req_id=8c8ac50b-5b04-c6a7-4f8b-aac0d1633d7c tf_resource_type=proxmox_vm_qemu tf_rpc=UpgradeResourceState @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.28.0/tfprotov5/internal/tf5serverlogging/downstream_request.go:22 tf_proto_version=5.9 @module=sdk.proto timestamp="2025-10-02T14:17:19.622+0200"
2025-10-02T14:17:19.623+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Upgrading JSON state: @module=sdk.helper_schema tf_provider_addr=registry.terraform.io/telmate/proxmox tf_req_id=8c8ac50b-5b04-c6a7-4f8b-aac0d1633d7c tf_resource_type=proxmox_vm_qemu tf_rpc=UpgradeResourceState @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.37.0/helper/schema/grpc_provider.go:507 timestamp="2025-10-02T14:17:19.623+0200"
2025-10-02T14:17:19.639+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Received downstream response: tf_provider_addr=registry.terraform.io/telmate/proxmox tf_req_duration_ms=17 tf_req_id=8c8ac50b-5b04-c6a7-4f8b-aac0d1633d7c tf_rpc=UpgradeResourceState @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.28.0/tfprotov5/internal/tf5serverlogging/downstream_request.go:42 @module=sdk.proto tf_proto_version=5.9 tf_resource_type=proxmox_vm_qemu diagnostic_error_count=0 diagnostic_warning_count=0 timestamp="2025-10-02T14:17:19.639+0200"
2025-10-02T14:17:19.639+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Served request: tf_provider_addr=registry.terraform.io/telmate/proxmox tf_req_id=8c8ac50b-5b04-c6a7-4f8b-aac0d1633d7c tf_resource_type=proxmox_vm_qemu @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.28.0/tfprotov5/tf5server/server.go:808 @module=sdk.proto tf_proto_version=5.9 tf_rpc=UpgradeResourceState timestamp="2025-10-02T14:17:19.639+0200"
2025-10-02T14:17:19.651+0200 [TRACE] tofu.contextPlugins: Serving provider "registry.opentofu.org/hashicorp/proxmox" schema from global schema cache
2025-10-02T14:17:19.651+0200 [INFO]  Starting apply for proxmox_vm_qemu.vm
2025-10-02T14:17:19.651+0200 [TRACE] tofu.contextPlugins: Serving provider "registry.opentofu.org/hashicorp/proxmox" schema from global schema cache
2025-10-02T14:17:19.651+0200 [DEBUG] proxmox_vm_qemu.vm: applying the planned Delete change
2025-10-02T14:17:19.651+0200 [TRACE] GRPCProvider: ApplyResourceChange
2025-10-02T14:17:19.651+0200 [TRACE] GRPCProvider: GetProviderSchema
2025-10-02T14:17:19.651+0200 [TRACE] GRPCProvider: GetProviderSchema: serving from global schema cache: address=registry.opentofu.org/hashicorp/proxmox
2025-10-02T14:17:19.659+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Received request: @module=sdk.proto tf_proto_version=5.9 tf_provider_addr=registry.terraform.io/telmate/proxmox tf_req_id=ee16cd3a-e5d9-d35e-cbf5-9600a0072128 tf_resource_type=proxmox_vm_qemu @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.28.0/tfprotov5/tf5server/server.go:928 tf_rpc=ApplyResourceChange timestamp="2025-10-02T14:17:19.658+0200"
2025-10-02T14:17:19.659+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Sending request downstream: @module=sdk.proto tf_req_id=ee16cd3a-e5d9-d35e-cbf5-9600a0072128 tf_resource_type=proxmox_vm_qemu @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.28.0/tfprotov5/internal/tf5serverlogging/downstream_request.go:22 tf_proto_version=5.9 tf_provider_addr=registry.terraform.io/telmate/proxmox tf_rpc=ApplyResourceChange timestamp="2025-10-02T14:17:19.659+0200"
2025-10-02T14:17:19.664+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Calling downstream: tf_req_id=ee16cd3a-e5d9-d35e-cbf5-9600a0072128 tf_resource_type=proxmox_vm_qemu tf_rpc=ApplyResourceChange @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.37.0/helper/schema/resource.go:948 @module=sdk.helper_schema tf_provider_addr=registry.terraform.io/telmate/proxmox timestamp="2025-10-02T14:17:19.664+0200"
2025-10-02T14:17:19.665+0200 [INFO]  provider.terraform-provider-proxmox_v1.0.0: 2025/10/02 14:17:19 >>>>>>>>>> REQUEST:
DELETE /api2/json/cluster/ha/resources/198 HTTP/1.1
Host: pcloud.mgmt.sci.hpi.de
User-Agent: Go-http-client/1.1
Accept: application/json
Authorization: PVEAPIToken=OpenTofu@pve!OpenTofu=OURTOKEN
Accept-Encoding: gzip

: timestamp="2025-10-02T14:17:19.664+0200"
2025-10-02T14:17:19.697+0200 [INFO]  provider.terraform-provider-proxmox_v1.0.0: 2025/10/02 14:17:19 <<<<<<<<<< RESULT:
HTTP/1.1 500 Internal Server Error
Content-Length: 75
Alt-Svc: h3=":443"; ma=2592000
Cache-Control: max-age=0
Content-Type: application/json;charset=UTF-8
Date: Thu, 02 Oct 2025 12:17:19 GMT
Expires: Thu, 02 Oct 2025 12:17:19 GMT
Pragma: no-cache
Server: Caddy
Server: pve-api-daemon/3.0

{"data":null,"message":"cannot delete service 'vm:198', not HA managed!\n"}: timestamp="2025-10-02T14:17:19.697+0200"
2025-10-02T14:17:19.698+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Called downstream: @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.37.0/helper/schema/resource.go:950 @module=sdk.helper_schema tf_provider_addr=registry.terraform.io/telmate/proxmox tf_resource_type=proxmox_vm_qemu tf_rpc=ApplyResourceChange tf_req_id=ee16cd3a-e5d9-d35e-cbf5-9600a0072128 timestamp="2025-10-02T14:17:19.697+0200"
2025-10-02T14:17:19.705+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Received downstream response: diagnostic_warning_count=0 tf_proto_version=5.9 tf_req_duration_ms=45 tf_req_id=ee16cd3a-e5d9-d35e-cbf5-9600a0072128 tf_resource_type=proxmox_vm_qemu @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.28.0/tfprotov5/internal/tf5serverlogging/downstream_request.go:42 tf_provider_addr=registry.terraform.io/telmate/proxmox tf_rpc=ApplyResourceChange @module=sdk.proto diagnostic_error_count=1 timestamp="2025-10-02T14:17:19.704+0200"
2025-10-02T14:17:19.705+0200 [ERROR] provider.terraform-provider-proxmox_v1.0.0: Response contains error diagnostic: @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.28.0/tfprotov5/internal/diag/diagnostics.go:58 @module=sdk.proto diagnostic_summary="500 Internal Server Error" tf_provider_addr=registry.terraform.io/telmate/proxmox tf_resource_type=proxmox_vm_qemu tf_rpc=ApplyResourceChange diagnostic_detail="" diagnostic_severity=ERROR tf_proto_version=5.9 tf_req_id=ee16cd3a-e5d9-d35e-cbf5-9600a0072128 timestamp="2025-10-02T14:17:19.704+0200"
2025-10-02T14:17:19.705+0200 [TRACE] provider.terraform-provider-proxmox_v1.0.0: Served request: @module=sdk.proto tf_provider_addr=registry.terraform.io/telmate/proxmox tf_resource_type=proxmox_vm_qemu @caller=/home/deck/go/pkg/mod/github.com/hashicorp/terraform-plugin-go@v0.28.0/tfprotov5/tf5server/server.go:954 tf_proto_version=5.9 tf_req_id=ee16cd3a-e5d9-d35e-cbf5-9600a0072128 tf_rpc=ApplyResourceChange timestamp="2025-10-02T14:17:19.704+0200"
2025-10-02T14:17:19.709+0200 [TRACE] tofu.contextPlugins: Serving provider "registry.opentofu.org/hashicorp/proxmox" schema from global schema cache
2025-10-02T14:17:19.709+0200 [TRACE] NodeAbstractResourceInstance.writeResourceInstanceState to workingState for proxmox_vm_qemu.vm
2025-10-02T14:17:19.709+0200 [TRACE] NodeAbstractResourceInstance.writeResourceInstanceState: writing state object for proxmox_vm_qemu.vm
2025-10-02T14:17:19.712+0200 [DEBUG] State storage *statemgr.Filesystem declined to persist a state snapshot
2025-10-02T14:17:19.712+0200 [ERROR] vertex "proxmox_vm_qemu.vm (destroy)" error: 500 Internal Server Error
2025-10-02T14:17:19.712+0200 [TRACE] vertex "proxmox_vm_qemu.vm (destroy)": visit complete, with errors
2025-10-02T14:17:19.712+0200 [TRACE] dag/walk: upstream of "provider[\"registry.opentofu.org/hashicorp/proxmox\"] (close)" errored, so skipping
2025-10-02T14:17:19.712+0200 [TRACE] dag/walk: upstream of "root" errored, so skipping
2025-10-02T14:17:19.713+0200 [TRACE] statemgr.Filesystem: not making a backup, because the new snapshot is identical to the old
2025-10-02T14:17:19.713+0200 [TRACE] statemgr.Filesystem: no state changes since last snapshot
2025-10-02T14:17:19.713+0200 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate
2025-10-02T14:17:19.723+0200 [TRACE] statemgr.Filesystem: removed lock metadata file .terraform.tfstate.lock.info
2025-10-02T14:17:19.723+0200 [TRACE] statemgr.Filesystem: unlocking terraform.tfstate
2025-10-02T14:17:19.723+0200 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2025-10-02T14:17:19.727+0200 [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.opentofu.org/hashicorp/proxmox/1.0.0/linux_amd64/terraform-provider-proxmox_v1.0.0 pid=214088
2025-10-02T14:17:19.727+0200 [DEBUG] provider: plugin exited

The server only shows two API calls:

tail -100 /var/log/pveproxy/access.log
::ffff:10.124.31.44 - OpenTofu@pve!OpenTofu [02/10/2025:12:17:09 +0000] "GET /api2/json/nodes/cx39/qemu/198/pending HTTP/1.1" 200 543
::ffff:10.124.31.44 - OpenTofu@pve!OpenTofu [02/10/2025:12:17:09 +0000] "GET /api2/json/nodes/cx39/qemu/198/status/current HTTP/1.1" 200 842

I created the VM with this Terraform config, using the latest master provider version:

provider "proxmox" {
  pm_api_url      = "OURAPIURL"
  pm_api_token_id = "OpenTofu@pve!OpenTofu"
  pm_api_token_secret = "OURTOKEN"
  pm_tls_insecure = true
  pm_debug = true
}

resource "proxmox_vm_qemu" "vm" {
  name         = "local-test"
  protection   = false
  target_nodes = ["OURNODENAME"]
  memory       = 4096
  scsihw       = "virtio-scsi-pci"
  clone        = "OURTEMPLATE"
  os_type      = "cloud-init"
  pool         = "OURPOOL"
  agent        = 1

  cpu {
    cores   = 4
    sockets = 1
    type    = "host"
  }

  network {
      id      = 0
      model   = "virtio"
      bridge  = "vmbr0"
      tag     = OURVLAN
  }

  disks {
    scsi {
      scsi0 {
        disk {
          size    = 40
          storage = "OURSTORAGE"
        }
      }
    }
    ide {
      ide2 {
        cdrom {
          iso = "OURSTORAGEPOOL:iso/OURISONAME.iso"
        }
      }
    }
  }
  lifecycle {
    ignore_changes = [
      agent_timeout,
      disks[0].ide[0].ide2[0].cdrom[0].iso,
      target_node,
      target_nodes,
      agent
    ]
  }
}

This happens because the current destroy path always tries to delete the VM's HA resource first and then tries to detect whether the error means "VM not HA managed" (which by itself is not very clean; it should check whether the VM is HA managed before making the call).
However, this error isn't correctly detected, so instead of being ignored it halts the entire destroy process.
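
As a minimal illustration of the fragility (this is a sketch, not the library's actual code; `isNotHAManaged` is a hypothetical helper), a check that keys on the stable part of the message instead of the `500 ` status prefix would tolerate both shapes of the error seen in this issue:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// isNotHAManaged is a hypothetical helper that reports whether a PVE error
// means the guest has no HA resource. It keys on the stable part of the
// message ("not HA managed") rather than on the "500 cannot delete service"
// prefix, so it matches whether or not the HTTP status code is prepended.
func isNotHAManaged(err error) bool {
	if err == nil {
		return false
	}
	msg := err.Error()
	return strings.Contains(msg, "cannot delete service") &&
		strings.Contains(msg, "not HA managed")
}

func main() {
	// The two message shapes discussed in this issue: with and without the
	// "500 " prefix that a HasPrefix-based check depends on.
	withPrefix := errors.New("500 cannot delete service 'vm:198', not HA managed!")
	withoutPrefix := errors.New("cannot delete service 'vm:198', not HA managed!\n")

	fmt.Println(isNotHAManaged(withPrefix))    // true
	fmt.Println(isNotHAManaged(withoutPrefix)) // true
}
```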


@TobiPeterG commented on GitHub (Oct 2, 2025):

Related to https://github.com/Telmate/proxmox-api-go/pull/464, where this check was introduced but is not working in our case.
Copilot also says that the string matching is fragile, lol.


@Tinyblargon commented on GitHub (Oct 6, 2025):

@TobiPeterG Under which circumstances did this issue occur? Which version of PVE?

This should catch it when you try to delete a nonexistent HA config, maybe not in every version of PVE.
https://github.com/Telmate/proxmox-api-go/blob/b7a8fcf873e075fd61f517029b01c4ed050bcdf4/proxmox/config__guest.go#L251-L253


@Tinyblargon commented on GitHub (Oct 6, 2025):

> This happens because the current destroy path always tries to delete the VM's HA resource first and then tries to detect whether the error means "VM not HA managed" (which by itself is not very clean; it should check whether the VM is HA managed before making the call).
> However, this error isn't correctly detected, so instead of being ignored it halts the entire destroy process.

I originally opted to detect the error, as checking whether the guest has HA would require another API call. Most PVE error messages rarely, if ever, change.
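
For illustration, the pre-check being discussed would look roughly like the sketch below. It is not proxmox-api-go code: the endpoint and the `sid` field follow the PVE API documentation for `GET /cluster/ha/resources`, and `guestIsHAManaged` is a hypothetical helper. The trade-off is exactly the one mentioned above: one extra API call per destroy.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// haResource holds the one field we need from GET /cluster/ha/resources;
// per the PVE API docs each entry carries an "sid" such as "vm:198".
type haResource struct {
	SID string `json:"sid"`
}

// guestIsHAManaged (hypothetical helper) lists the cluster's HA resources
// and reports whether the given guest appears among them, so a caller could
// skip the HA delete entirely for guests that were never added to HA.
func guestIsHAManaged(apiURL, token string, vmID int) (bool, error) {
	req, err := http.NewRequest(http.MethodGet, apiURL+"/api2/json/cluster/ha/resources", nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Authorization", "PVEAPIToken="+token)
	req.Header.Set("Accept", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	var body struct {
		Data []haResource `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return false, err
	}

	want := fmt.Sprintf("vm:%d", vmID)
	for _, r := range body.Data {
		if r.SID == want {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Placeholder values; in the provider these would come from its configuration.
	managed, err := guestIsHAManaged("https://pve.example.com:8006", "OpenTofu@pve!OpenTofu=OURTOKEN", 198)
	fmt.Println(managed, err)
}
```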


@TobiPeterG commented on GitHub (Oct 7, 2025):

> @TobiPeterG Under which circumstances did this issue occur? Which version of PVE?
>
> This should catch it when you try to delete a nonexistent HA config, maybe not in every version of PVE.
>
> proxmox-api-go/proxmox/config__guest.go, lines 251 to 253 in b7a8fcf:
>
>     if strings.HasPrefix(err.Error(), "500 cannot delete service") {
>         return Error.haResourceDoesNotExist(id)
>     }

We currently run PVE 9.0.10


@TobiPeterG commented on GitHub (Oct 7, 2025):

> > This happens because the current destroy path always tries to delete the VM's HA resource first and then tries to detect whether the error means "VM not HA managed" (which by itself is not very clean; it should check whether the VM is HA managed before making the call).
> > However, this error isn't correctly detected, so instead of being ignored it halts the entire destroy process.
>
> I originally opted to detect the error, as checking whether the guest has HA would require another API call. Most PVE error messages rarely, if ever, change.

What would the issue be of making 1 additional API call?


@Tinyblargon commented on GitHub (Oct 8, 2025):

@TobiPeterG If I understand correctly, you got the error 500 Internal Server Error when it tried to delete the nonexistent HA group?


@Tinyblargon commented on GitHub (Oct 8, 2025):

{"data":null,"message":"cannot delete service 'vm:198', not HA managed!\n"}: timestamp="2025-10-02T14:17:19.697+0200"
Confused why this says cannot delete service 'vm:198', not HA managed!, when I run it the error is 500 cannot delete service 'vm:198', not HA managed!

I am quite concerned about the `500 ` prefix getting removed.
I am testing with Terraform; maybe OpenTofu gives different results?
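
As a small standalone illustration of what's at stake with the prefix (not the library's code, just the two message shapes quoted in this thread run against the same HasPrefix condition):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// The check quoted from config__guest.go only fires when the error
	// string still carries the "500 " status prefix.
	matches := func(msg string) bool {
		return strings.HasPrefix(msg, "500 cannot delete service")
	}

	fmt.Println(matches("500 cannot delete service 'vm:198', not HA managed!")) // true
	fmt.Println(matches("cannot delete service 'vm:198', not HA managed!\n"))   // false
}
```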
