[PR #360] [MERGED] storage: correct usage calculation for rbd and add pool status #638

Closed
opened 2026-02-27 16:39:58 +03:00 by kerem · 0 comments

📋 Pull Request Information

Original PR: https://github.com/retspen/webvirtmgr/pull/360
Author: @EmbeddedAndroid
Created: 7/6/2014
Status: Merged
Merged: 7/6/2014
Merged by: @retspen

Base: master ← Head: master


📝 Commits (2)

  • abec216 storage: correct usage calculation for rbd
  • 3754f1d storage: add pool status

📊 Changes

3 files changed (+15 additions, -4 deletions)

View changed files

📝 storages/views.py (+4 -2)
📝 templates/storage.html (+3 -1)
📝 vrtManager/storage.py (+8 -1)

📄 Description

I noticed a discrepancy in the reported used space for my ceph (rbd) cluster. The usage being reported was off by approximately a factor of two.

```
tyler@compute01:~$ sudo ceph -s
[sudo] password for tyler:
cluster 31845ace-9e1e-462e-bf15-76ed8a8dd851
health HEALTH_OK
monmap e8: 3 mons at {compute01=192.168.1.2:6789/0,compute02=192.168.1.3:6789/0,compute03=192.168.1.4:6789/0}, election epoch 976, quorum 0,1,2 compute01,compute02,compute03
mdsmap e1570: 1/1/1 up {0=compute01=up:active}
osdmap e1530: 14 osds: 14 up, 14 in
pgmap v6444614: 384 pgs, 3 pools, 4708 GB data, 1183 kobjects
9409 GB used, 27818 GB / 37228 GB avail
384 active+clean
client io 3396 B/s wr, 1 op/s
```

(4708 GB * 100) / 37228 GB = 12%
it should be
(9409 GB * 100) / 37228 GB = 25%

After some investigation I can conclude the following. get_size() returns size, free, usage. 'size' is the capacity of the pool, 'free' represents the allocation (4708 GB), but due to replication with rbd 'usage' actually reflects the available space of the pool (37228 GB - 9409 GB). This is why I'm seeing the usage percentage discrepancy.

To fix this issue, I've modified get_size to return size (capacity) and free (amount of unused space). Then I simply calculate the used space by subtracting the amount of unused space from the capacity. This yields the proper amount of used space for rbd, lvm, and dir storage pools.
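The corrected arithmetic can be sketched as follows. This is a minimal illustration of the calculation described above, not the actual webvirtmgr code; `usage_percent` is a hypothetical helper name:

```python
def usage_percent(capacity_gb, free_gb):
    """Compute pool usage from capacity and free space.

    With rbd, the pool's reported allocation is skewed by replication,
    so used space is derived as capacity minus free instead.
    """
    used_gb = capacity_gb - free_gb
    return round(used_gb * 100 / capacity_gb)

# Using the cluster figures from the `ceph -s` output above:
# 37228 GB total, 27818 GB avail -> 9410 GB used -> 25%
print(usage_percent(37228, 27818))
```

Plugging in the numbers from the report reproduces the expected 25% rather than the incorrect 12% that the allocation-based formula gave.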

I also added pool status for quickly identifying errors on storage clusters.
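For context on the status column: libvirt exposes a pool's state as an integer (the `virStoragePoolState` enum, where 0 is inactive and 2 is running). A sketch of mapping that code to a display label might look like the following; the labels and the `pool_status` name are illustrative, not the PR's actual code:

```python
# Sketch only: labels for libvirt virStoragePoolState codes.
POOL_STATES = (
    "Inactive",      # VIR_STORAGE_POOL_INACTIVE (0)
    "Building",      # VIR_STORAGE_POOL_BUILDING (1)
    "Running",       # VIR_STORAGE_POOL_RUNNING (2)
    "Degraded",      # VIR_STORAGE_POOL_DEGRADED (3)
    "Inaccessible",  # VIR_STORAGE_POOL_INACCESSIBLE (4)
)

def pool_status(state_code):
    """Return a human-readable label for a libvirt pool state code."""
    if 0 <= state_code < len(POOL_STATES):
        return POOL_STATES[state_code]
    return "Unknown"
```

Surfacing this label in the storage template is what lets errors on a cluster (a degraded or inaccessible pool) be spotted at a glance.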

I've tested this patch with DIR, LVM, and RBD storage pools, and the reported used space / usage percentage is correct for all variants.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
