[GH-ISSUE #80] [Bug]: Incorrect information about CPU and memory in inventory #102

Closed
opened 2026-03-07 19:27:39 +03:00 by kerem · 3 comments

Originally created by @flpmonstro on GitHub (Mar 6, 2026).
Original GitHub issue: https://github.com/adminsyspro/proxcenter-ui/issues/80

Bug Description

Hi everyone,

The CPU and memory information for the cluster is being collected incorrectly, as it appears to be checking the consumption of the VMs on each node and calculating an average.

Steps to Reproduce

  1. Inventory
  2. PROXCENTER
  3. Choose your Cluster

Expected Behavior

The inventory view should show the same consumption values as the nodes on the cluster's dashboard.

![Image](https://github.com/user-attachments/assets/310a432b-ab71-4328-bea9-ffd6ce4bc616)

Actual Behavior

![Image](https://github.com/user-attachments/assets/eefd374a-d29e-4c4f-aa41-79f2744a90b8)

ProxCenter Version

db96361

Proxmox VE Version

8.4.16

Browser

Chrome, Firefox

Logs / Screenshots

No response

kerem closed this issue and added the `bug` label (2026-03-07 19:27:39 +03:00).

@adminsyspro commented on GitHub (Mar 6, 2026):

Hi @flpmonstro, good catch! Fixed in 254dbf4.

Root cause: The cluster CPU percentage was computed as a simple average of each node's CPU ratio (totalCpu / nodeCount). This is incorrect when nodes have different core counts — a 128-core node at 10% and a 4-core node at 90% should not average to 50%.

Fix: Now uses a weighted average based on each node's core count (maxcpu):

```
weighted = Σ(node.cpu × node.maxcpu) / Σ(node.maxcpu)
```

This matches how the dashboard computes CPU usage and should now show the same values in both views.

RAM and storage were already correctly computed using absolute values (bytes used / bytes total), so no change needed there.
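The weighted average above can be sketched in TypeScript. This is a minimal illustration, not the actual `proxcenter-ui` code; the `NodeCpu` shape mirrors the PVE `/nodes` listing, where `cpu` is a 0–1 usage ratio and `maxcpu` is the core count:

```typescript
// Minimal sketch of the weighted-average cluster CPU calculation.
// Field names follow the PVE /nodes listing; the function name is illustrative.
interface NodeCpu {
  cpu: number;    // current usage ratio, 0–1
  maxcpu: number; // number of cores on the node
}

// weighted = Σ(node.cpu × node.maxcpu) / Σ(node.maxcpu)
function clusterCpuRatio(nodes: NodeCpu[]): number {
  const totalCores = nodes.reduce((sum, n) => sum + n.maxcpu, 0);
  if (totalCores === 0) return 0; // empty or offline cluster
  const busyCores = nodes.reduce((sum, n) => sum + n.cpu * n.maxcpu, 0);
  return busyCores / totalCores;
}
```

For the example from the comment, a 128-core node at 10% and a 4-core node at 90% yield (0.10 × 128 + 0.90 × 4) / 132 ≈ 0.124, rather than the naive simple-average result of 0.50.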


@flpmonstro commented on GitHub (Mar 7, 2026):

Hi @adminsyspro,

For me, this data does not make sense:

![Image](https://github.com/user-attachments/assets/53a0a774-0bda-48fe-ae60-839d956d07f9)

It should be:

![Image](https://github.com/user-attachments/assets/fc695945-f011-42ea-80ae-20eaec561db2)

Could you explain the metric?


@adminsyspro commented on GitHub (Mar 7, 2026):

Hi @flpmonstro,

You're right, the memory metrics were indeed incorrect in the cluster/inventory view. Good catch again!

Root cause: The cluster detail view was using the PVE /nodes list endpoint which returns mem values that include kernel caches (ZFS ARC, buffers) in "used" memory. The dashboard widget was already using the per-node /nodes/{node}/status endpoint which returns properly adjusted memory.used values — that's why the dashboard showed correct values (~40%) while the cluster view showed inflated values (~80-95%).

Fix: Both the inventory API and the nodes API now fetch /nodes/{node}/status for each online node and use the accurate memory.used / memory.total values, matching what the Proxmox web UI displays.

Fix will be deployed shortly.
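The per-node fetch described above can be sketched as follows. This is a hedged illustration, not the deployed fix: the endpoint path `/api2/json/nodes/{node}/status` and its `memory: { used, total, free }` payload follow the Proxmox VE API, while `pveFetch` (the injected HTTP helper) and both function names are assumptions for the example:

```typescript
// Sketch: read cache-adjusted memory from /nodes/{node}/status instead of
// the /nodes list, whose `mem` field counts kernel caches (e.g. ZFS ARC).
interface NodeMemory {
  used: number;  // bytes actually used, cache-adjusted
  total: number; // bytes installed
}

async function fetchNodeMemory(
  baseUrl: string,
  node: string,
  pveFetch: (url: string) => Promise<any> // injected HTTP helper (assumption)
): Promise<NodeMemory> {
  const res = await pveFetch(`${baseUrl}/api2/json/nodes/${node}/status`);
  const mem = res.data.memory; // { used, total, free } per the PVE API
  return { used: mem.used, total: mem.total };
}

// Cluster memory is a ratio of summed absolute values, not an average of
// per-node percentages — this is why no weighting fix was needed for RAM.
function clusterMemRatio(nodes: NodeMemory[]): number {
  const total = nodes.reduce((s, n) => s + n.total, 0);
  return total === 0 ? 0 : nodes.reduce((s, n) => s + n.used, 0) / total;
}
```

Summing bytes before dividing keeps the cluster figure consistent with what the Proxmox web UI shows per node, since each node's `memory.used` already excludes reclaimable caches.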
