mirror of
https://github.com/cvk98/Proxmox-load-balancer.git
synced 2026-04-25 12:35:52 +03:00
[GH-ISSUE #8] HA priority & hardware passthrough #5
Originally created by @VibroAxe on GitHub (Aug 18, 2022).
Original GitHub issue: https://github.com/cvk98/Proxmox-load-balancer/issues/8
Does this script take into account HA priority (if a machine has a node affinity set, migrating it will only cause it to bounce back to the original node)? And will it ignore (/pin) machines with hardware passthrough, since they cannot be migrated?
@cvk98 commented on GitHub (Aug 18, 2022):
No. It focuses only on loading the RAM of the node.
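For context, a VM with PCI passthrough can usually be recognized from its Proxmox config, which exposes passthrough devices under `hostpci0`, `hostpci1`, and so on. A minimal sketch of how such VMs could be filtered out before migration; the helper name is illustrative and the input is a config dict of the shape returned by the Proxmox API (`/nodes/{node}/qemu/{vmid}/config`):

```python
def is_migratable(vm_config: dict) -> bool:
    """Return False if the VM config contains PCI passthrough devices
    (hostpci0..N), which prevent the VM from being live-migrated."""
    return not any(key.startswith("hostpci") for key in vm_config)


# Example: a plain VM is migratable, a passthrough VM is not.
print(is_migratable({"memory": "4096", "net0": "virtio"}))        # True
print(is_migratable({"hostpci0": "0000:01:00.0", "memory": "8192"}))  # False
```

A balancer could call such a check per VM and simply skip non-migratable candidates instead of attempting a migration that would fail.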
@VibroAxe commented on GitHub (Aug 19, 2022):
Might be worth a note in the readme, as running the script in this situation would end up with the server flapping :D
Shame. Any chance of getting this accounted for? It looks like an awesome script but won't work in my environment! I'd PR it, but having looked at the code, it's way beyond my Python experience :D
@cvk98 commented on GitHub (Aug 19, 2022):
Our cluster does not use HA, so I wrote the script without taking its needs into account. I didn't even know that HA had node-selection priorities. But I can tell you how you could use this script in your setup: in the config, you can specify a list of excluded nodes, and list the nodes that do not participate in HA there. Then your virtual machines from the HA part of the cluster will not migrate to a random node. Alongside it, you can run a second instance whose config excludes the HA nodes instead. You end up with two load balancers for the two parts of the cluster. Maybe (not 100% sure) this is what you need.
And in the readme I will add the warning you recommended. Thanks.
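The two-instance setup described above might look roughly like this. This is a hedged sketch: the `exclusions: nodes` key is mentioned later in this thread, but the node names and overall layout here are illustrative, so check the project's `config.yaml` for the exact schema:

```yaml
# Instance 1 (config.yaml): balances the non-HA nodes,
# so the HA nodes are excluded.
exclusions:
  nodes:
    - ha-node1   # illustrative node names
    - ha-node2

# Instance 2 (separate config.yaml): the mirror image,
# excluding the non-HA nodes instead.
# exclusions:
#   nodes:
#     - plain-node1
#     - plain-node2
```

Each instance then only ever migrates VMs among the nodes it does not exclude, keeping HA-managed VMs inside their own half of the cluster.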
@VibroAxe commented on GitHub (Aug 19, 2022):
Oh that's cool, didn't realise it had the exemptions added. Looking at the config, I assume use the
Perfect thanks!
@cvk98 commented on GitHub (Aug 19, 2022):
It is worth adding that all virtual machines running on nodes listed under "exclusions: nodes" will be ignored.
If the script works for you, I'm waiting for a screenshot in issue #7))