[GH-ISSUE #55] [Bug]: Affinity Rules #65

Closed
opened 2026-03-07 19:27:07 +03:00 by kerem · 7 comments

Originally created by @nothing-fr on GitHub (Mar 2, 2026).
Original GitHub issue: https://github.com/adminsyspro/proxcenter-ui/issues/55

Bug Description

Hello,
We are currently testing the solution and our initial impressions are very positive; however, we are encountering some issues with affinity rules.

Perhaps the problems we are encountering are simply due to a misunderstanding on our part. Is there any documentation available to provide Proxmox administrators with more guidance?

What's the difference between these two?

  • Orchestration > DRS > Affinity
  • CLUSTER NAME > HA

What is the best method to create affinity rules?

Steps to Reproduce

  1. Go to Orchestration > DRS > Affinity, then create affinity rules.
  2. Go to Orchestration > DRS > Affinity, then disable an existing affinity rule (it disappears).
  3. Go through CLUSTER NAME > HA; an error occurs when creating a rule.

Expected Behavior

Working affinity rules within ProxCenter

Actual Behavior

  • Affinity rules (Orchestration > DRS > Affinity) do not seem to be taken into account and disappear when they are disabled...
  • When going through CLUSTER NAME > HA and creating a resource affinity rule, we get the following error:
PVE 400 /cluster/ha/rules: {"data":null,"errors":{"state":"property is not defined in schema and the schema does not allow additional properties"},"message":"Parameter verification failed.\n"}

ProxCenter Version

acc3864

Proxmox VE Version

9.1.5

Browser

Google Chrome Version 145.0.7632.116 (Official Build) (64-bit)

Logs / Screenshots

No response

kerem 2026-03-07 19:27:07 +03:00
  • closed this issue
  • added the
    bug
    label

@adminsyspro commented on GitHub (Mar 2, 2026):

Hi, thanks for the detailed report and for testing ProxCenter!

Both bugs are now fixed

Bug 1: DRS Affinity rules disappear when disabled

This was a backend issue — toggling a rule's enabled/disabled state was sending a partial update that overwrote the entire rule record in the database, causing the rule to lose its connection ID and effectively vanish. The backend now properly merges only the changed fields onto the existing rule before saving.
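The fix can be illustrated with a minimal sketch (hypothetical field names, not the actual ProxCenter schema): a partial update must be overlaid on the stored record rather than replacing it.

```python
def apply_rule_update(existing: dict, patch: dict) -> dict:
    """Merge only the submitted fields onto the stored rule record.

    Replacing the record with the patch wholesale would drop every
    field the patch omits (e.g. the connection ID), which is why a
    disabled rule appeared to vanish.
    """
    merged = dict(existing)   # start from the full stored record
    merged.update(patch)      # overlay only the changed fields
    return merged

rule = {"id": "r1", "connection_id": "cmm3hdw1b", "enabled": True}
toggled = apply_rule_update(rule, {"enabled": False})
# toggled still carries "connection_id", so the rule remains listable
```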

Bug 2: HA Rules creation fails with state property error

The PVE 9 API expects a disable parameter (0 or 1), not a state property (enabled/disabled). This has been corrected in both the creation and update flows.
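A sketch of the corrected request body (the helper name is hypothetical; the `disable` parameter is the one the PVE 9 schema accepts in place of `state`):

```python
def ha_rule_payload(enabled: bool, **fields) -> dict:
    """Build the body for the /cluster/ha/rules endpoint.

    PVE 9 has no 'state' property in its schema; a rule's activation
    is expressed as disable=0 (active) or disable=1 (disabled).
    """
    payload = dict(fields)
    payload["disable"] = 0 if enabled else 1  # not state="enabled"/"disabled"
    return payload
```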

About the difference between DRS Affinity and HA Rules

Good question — these serve different purposes:

  • Orchestration > DRS > Affinity: These are ProxCenter-managed rules. The DRS engine evaluates them and generates migration recommendations to keep your VMs placed according to your constraints (affinity, anti-affinity, node pinning). They work across all PVE versions.
  • Cluster > HA > Affinity Rules: These are native PVE 9+ HA affinity rules, managed directly by Proxmox's HA stack. ProxCenter provides a UI to create/edit them, but enforcement is handled by PVE itself.

For most users, DRS Affinity rules are the recommended approach as they offer more flexibility and work with ProxCenter's intelligent scheduling. HA rules are useful if you want PVE-native enforcement at the HA level.

Documentation covering all of this is currently being written.


@nothing-fr commented on GitHub (Mar 2, 2026):

Orchestration > DRS > Affinity: disabling a rule no longer makes it disappear:

Image Image

but it still does nothing. My VMs in DRS Full Auto mode are on the same host:

Image

Cluster > HA > Affinity Rules: now everything works fine! I can manage Proxmox native rules.


Another thing I don't understand is the use of Proxmox tags for affinity rules:

Image

Those names are not allowed in Proxmox:

Image

@adminsyspro commented on GitHub (Mar 2, 2026):

Thanks for the follow-up and the screenshots.

DRS Anti-Affinity not triggering migrations

You're right: in the previous version, affinity rules were stored in the database but not wired into the DRS evaluation engine. The rules were displayed in the UI, but the DRS had no awareness of them when generating recommendations.

This is now fixed: the DRS engine loads all affinity rules (both manual and tag-based) before each evaluation cycle, detects violations, and generates migration recommendations accordingly. Additionally, enforce_affinity is now enabled by default, meaning the load-balancer will also respect your rules and never move a VM to a node that would violate an affinity or anti-affinity constraint.
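As a rough sketch of the violation check (illustrative only; function and field names are assumptions, not the orchestrator's actual code):

```python
from collections import defaultdict

def anti_affinity_violations(placements: dict, groups: dict) -> list:
    """Find anti-affinity groups with two or more members on one node.

    placements maps vmid -> node; groups maps group name -> [vmids].
    Each violation is a candidate for a migration recommendation.
    """
    violations = []
    for group, vmids in groups.items():
        by_node = defaultdict(list)
        for vmid in vmids:
            by_node[placements[vmid]].append(vmid)  # bucket members by host
        for node, members in by_node.items():
            if len(members) > 1:  # two+ members sharing a node breaks the rule
                violations.append({"group": group, "node": node, "vmids": members})
    return violations
```

With VMs 110 and 111 both on hyp01, an anti-affinity group containing them would report exactly one violation.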

This fix will be included in the next release.

Tag names in Proxmox

Good catch on the confusion: what you see in ProxCenter's UI (e.g. Tag: affinity/web) is the display name of the auto-generated rule, not the actual tag to set in Proxmox.

The tags to apply on your VMs in PVE are:

  • pxc_affinity_groupname: VMs with the same group are kept together
  • pxc_anti_affinity_groupname: VMs with the same group are kept on separate nodes
  • pxc_pin_nodename: the VM is pinned to the specified node
  • pxc_ignore: the VM is excluded from DRS entirely

For example, to keep two web servers apart, add the tag pxc_anti_affinity_web to both VMs in the Proxmox UI. ProxCenter will automatically detect them and create the corresponding rule.

These tags use only lowercase letters, digits, and underscores — which are all valid characters in PVE 8 and PVE 9.
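The naming scheme above can be recognized with a simple pattern (a sketch; the regex and helper are illustrative, not ProxCenter's actual parser):

```python
import re

# Match pxc_<rule type>_<name>; anti_affinity must be tried before affinity.
_PXC_TAG = re.compile(r"^pxc_(anti_affinity|affinity|pin)_([a-z0-9_]+)$")

def parse_pxc_tag(tag: str):
    """Return (rule_type, name) for a recognized pxc_* tag, else None.

    Only lowercase letters, digits, and underscores are accepted,
    matching the characters valid in PVE tag names.
    """
    if tag == "pxc_ignore":
        return ("ignore", None)
    match = _PXC_TAG.match(tag)
    return (match.group(1), match.group(2)) if match else None
```

A display name like `affinity/web` therefore parses to nothing, while the real tag `pxc_anti_affinity_web` parses cleanly.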

We'll improve the UI to make this clearer (show the actual PVE tag alongside the rule name). Documentation is also on the way.


@nothing-fr commented on GitHub (Mar 3, 2026):

There was a little misunderstanding about that. Actually, I was copying and pasting tags, and there must have been some strange characters in them. When I retyped the tags ("pxc_anti_affinity_test", for example), it worked and appeared correctly in ProxCenter:

Image

And it's now working:

proxcenter-orchestrator  | 2026-03-03T08:54:43+01:00 INF Affinity rule violations detected connection=cmm3hdw1b000001n0ig1xrqc6 violations=1
proxcenter-orchestrator  | 2026-03-03T08:54:43+01:00 INF DRS evaluation: recommendations after merge clusters=1 recommendations=1
proxcenter-orchestrator  | 2026-03-03T08:55:22+01:00 INF Migration started source=proxmox-php-hyp01 target=proxmox-php-hyp03 task_id=UPID:proxmox-php-hyp01:0026C08E:01FB6513:69A693EB:hamigrate:110:root@pam!proxcenter: type=qemu vmid=110
proxcenter-orchestrator  | 2026-03-03T08:55:24+01:00 INF Migration self-healed via progress endpoint migration=20260303085522-vy94zu status=completed
proxcenter-orchestrator  | 2026-03-03T08:55:27+01:00 INF Migration completed successfully migration=20260303085522-vy94zu

But it was only a recommendation; it was not automatic, even with DRS in Full Auto. Like this example:

Image

Or maybe I'm missing something? Do I have to wait an hour to check if DRS does it by itself?

FYI, I also noticed that adding tags on VMs works (1), but deleting them does not (2):

Image

@adminsyspro commented on GitHub (Mar 3, 2026):

Tag deletion fix

The tag deletion issue is now fixed. The bug occurred when removing the last tag from a VM: the frontend was sending tags= (empty string) to Proxmox, which silently ignores an empty value. Proxmox requires delete=tags to actually clear the field. This is corrected in the latest release.
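In terms of request parameters, the fix amounts to something like this (a sketch with a hypothetical helper name; PVE's VM config endpoint accepts a `delete` parameter naming the options to clear):

```python
def tags_update_params(tags: list) -> dict:
    """Parameters for updating a VM's tags via the config endpoint.

    An empty tags= value is silently ignored by Proxmox, so removing
    the last tag must be sent as delete=tags instead.
    """
    if tags:
        return {"tags": ";".join(tags)}  # PVE separates tags with ';'
    return {"delete": "tags"}            # actually clears the field
```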


DRS Full Auto — how it works

Looking at your orchestrator logs, DRS Full Auto is working correctly:

08:54:43  Affinity rule violations detected  violations=1
08:54:43  DRS evaluation: recommendations after merge  recommendations=1
08:55:22  Migration started  source=proxmox-php-hyp01  target=proxmox-php-hyp03
08:55:27  Migration completed successfully

The migration was automatically triggered ~39 seconds after the evaluation. The recommendation you saw in the UI was briefly in pending state during that window — in Full Auto mode, the DRS engine approves and executes it on the next evaluation cycle (default: every few minutes), not instantly.

So to answer your question: you don't need to wait 1 hour — in Full Auto mode, violations are resolved within one evaluation cycle after detection. The UI recommendation disappears once the migration completes and the next refresh runs.
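The timing can be pictured with a simplified cycle (hypothetical names, not the real orchestrator loop):

```python
def drs_cycle(evaluate, execute, mode="full_auto"):
    """One simplified DRS evaluation cycle.

    evaluate() returns migration recommendations for detected
    violations; in full-auto mode they are auto-approved and run,
    so a violation found on one cycle is resolved within roughly
    one evaluation interval, not instantly when the rule is saved.
    """
    recommendations = evaluate()
    executed = []
    if mode == "full_auto":
        for rec in recommendations:
            execute(rec)          # e.g. start the live migration
            executed.append(rec)
    return recommendations, executed
```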


@nothing-fr commented on GitHub (Mar 3, 2026):

The logs are OK because I clicked on the execute all button and it was a simple case.

However, if I let DRS do its thing, it doesn't seem to migrate anything... plus I'm seeing some strange behavior:

I have an anti-rule on three machines:

Image

but I have four nodes... it seems to get confused and loop in the recommendations:

Image

Clicking on one of the two gives:

Image

then disappears before reappearing...

Logs:

proxcenter-orchestrator  | 2026-03-03T10:16:24+01:00 INF Connections loaded from ProxCenter database count=1
proxcenter-orchestrator  | 2026-03-03T10:16:24+01:00 INF Affinity rule violations detected connection=cmm3hdw1b000001n0ig1xrqc6 violations=1
proxcenter-orchestrator  | 2026-03-03T10:16:24+01:00 INF DRS evaluation: recommendations after merge clusters=1 recommendations=2
proxcenter-orchestrator  | 2026-03-03T10:18:07+01:00 ERR Failed to analyze VM storage error="API error (status 500): {\"data\":null,\"message\":\"Configuration file 'nodes/proxmox-php-hyp03/qemu-server/112.conf' does not exist\\n\"}" connection_id=cmm3hdw1b000001n0ig1xrqc6 node=proxmox-php-hyp03 type=qemu vmid=112
proxcenter-frontend      | Error checking migration: Error: Orchestrator 500: {"error":"internal server error"}
proxcenter-frontend      | 
proxcenter-frontend      |     at n (.next/server/chunks/[root-of-the-server]__fe1f16d0._.js:39:7350)
proxcenter-frontend      |     at async Object.get (.next/server/chunks/[root-of-the-server]__fe1f16d0._.js:39:7556)
proxcenter-frontend      |     at async A (.next/server/chunks/_809a99ae._.js:39:7382)
proxcenter-frontend      |     at async u (.next/server/chunks/_809a99ae._.js:39:10814)
proxcenter-frontend      |     at async l (.next/server/chunks/_809a99ae._.js:39:11855)
proxcenter-frontend      |     at async Module.I (.next/server/chunks/_809a99ae._.js:39:12933)
proxcenter-orchestrator  | 2026-03-03T10:21:24+01:00 INF Connections loaded from ProxCenter database count=1
proxcenter-orchestrator  | 2026-03-03T10:21:24+01:00 INF Affinity rule violations detected connection=cmm3hdw1b000001n0ig1xrqc6 violations=1
proxcenter-orchestrator  | 2026-03-03T10:21:24+01:00 INF DRS evaluation: recommendations after merge clusters=1 recommendations=2
proxcenter-orchestrator  | 2026-03-03T10:26:24+01:00 INF Connections loaded from ProxCenter database count=1
proxcenter-orchestrator  | 2026-03-03T10:26:24+01:00 INF Affinity rule violations detected connection=cmm3hdw1b000001n0ig1xrqc6 violations=1
proxcenter-orchestrator  | 2026-03-03T10:26:24+01:00 INF DRS evaluation: recommendations after merge clusters=1 recommendations=1

The ERR logs seem to appear when I click on one of the two recommendations (screenshot above).

DRS still does not move virtual machines on its own... no operation is triggered on the Proxmox side:

Image

@nothing-fr commented on GitHub (Mar 3, 2026):

I saw the latest update, it seems to fix the problem... now everything seems to be working correctly on our end, with the automatic movement of VMs!

Nice job! 🥇
