mirror of
https://github.com/cvk98/Proxmox-load-balancer.git
synced 2026-04-25 04:25:50 +03:00
[GH-ISSUE #1] Logger shows "Need to balance: True" but nothing happens #3
Originally created by @mattv8 on GitHub (May 4, 2022).
Original GitHub issue: https://github.com/cvk98/Proxmox-load-balancer/issues/1
Is this expected behavior?
I have two nodes that are already nearly balanced, so that could be the reason. See my screenshot below:

@cvk98 commented on GitHub (May 4, 2022):
It depends on many factors.
It may turn out that you have only one migration option, but that VM has a CD-ROM connected or its HDD resides on the node's local storage. The balancer then finds one option to improve the situation but cannot carry it out at the migration-feasibility check. Running with the "DEBUG" log level will tell you more about what is happening.
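To illustrate the feasibility check described above, here is a minimal sketch of filtering out VMs that cannot live-migrate because of an attached CD-ROM ISO or a disk on node-local storage. The config-dict shape mimics what the Proxmox API returns from `/nodes/{node}/qemu/{vmid}/config`; the function name and `shared_storages` set are illustrative, not the balancer's actual code.

```python
# Drive keys in a Proxmox VM config start with one of these prefixes.
DISK_KEYS = ("ide", "sata", "scsi", "virtio")

def migration_blockers(config: dict, shared_storages: set) -> list:
    """Return human-readable reasons why this VM cannot live-migrate."""
    reasons = []
    for key, value in config.items():
        if not key.startswith(DISK_KEYS):
            continue  # skip net0, memory, etc.
        if "media=cdrom" in value:
            # "none,media=cdrom" is an empty drive and does not block migration
            if not value.startswith("none"):
                reasons.append(f"{key}: CD-ROM with ISO attached")
            continue
        storage = value.split(":", 1)[0]
        if storage not in shared_storages:
            reasons.append(f"{key}: disk on local storage '{storage}'")
    return reasons
```

A VM whose disks all live on shared storage and whose CD-ROM is empty yields an empty list, meaning nothing blocks the move.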
@cvk98 commented on GitHub (May 5, 2022):
I have added to the README the requirement that all nodes share common storage.
@mattv8 commented on GitHub (May 16, 2022):
Sorry for the delay. I do have common storage between all nodes. In fact, they are all identical: same number of CPUs, same RAM, and same storage. However, something strange is still happening. The algorithm sees that it needs to balance and finds an option, but the migration never happens and the algorithm gets stuck in an infinite loop:
What do you think is holding it up? This is Proxmox Virtual Environment 7.2-3 with the latest pull of this repo.
@cvk98 commented on GitHub (May 17, 2022):
In theory: such a cluster cannot be balanced by strict improvements alone. We need to make the balance temporarily worse so that new options open up.
That requires a second algorithm that picks a bad (but not critical) option; after that move, the balancer can resume working in its normal mode.
It is not difficult to implement, but I have nowhere to test it. Maybe I will add this as an option.
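The fallback described above can be sketched in a few lines: normally pick the best strictly-improving migration, and only when none exists accept the least-harmful worsening move, refusing anything past a "critical" threshold. The scoring tuples and the threshold value are illustrative assumptions, not the balancer's actual code.

```python
def pick_migration(candidates, critical_penalty=0.10):
    """candidates: list of (vmid, target_node, delta) tuples, where
    delta > 0 means the move improves cluster balance.
    Returns the chosen tuple, or None if every option is critically bad."""
    if not candidates:
        return None
    best = max(candidates, key=lambda c: c[2])
    if best[2] > 0:
        return best   # normal mode: a strict improvement exists
    if best[2] > -critical_penalty:
        return best   # fallback: tolerate a small worsening to unblock
    return None       # every option would make things critically worse
```

After a tolerated worsening move, the next balancing pass should again find strict improvements, which is exactly the "then it will start working in the same mode" behavior.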
@mattv8 commented on GitHub (May 17, 2022):
Ah ha! Interesting, thanks for the explanation. I am sure this is somewhat difficult to test and implement since you must iteratively migrate and check, and migration takes time and compute resources.
I will look more into the algorithm when I have time to see if I can contribute. For now, I need to figure out why the API isn't starting the migration when it hits the vm_migration() function. It's as if the API call isn't responding properly.
@cvk98 commented on GitHub (May 18, 2022):
pvesh get /nodes/PVE2/qemu/202/migrate will show the local resources that prevent migration.
pvesh create /nodes/PVE2/qemu/200/migrate --target PVE1 --online 1 is the CLI analog of the HTTP request that the script makes.
If this command does not start the migration, then the script will not be able to do it either.
Using this link, you can view the migration options and change them in the script to suit your needs: https://pve.proxmox.com/pve-docs/api-viewer/#/nodes/{node}/qemu/{vmid}/migrate
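For reference, here is a hedged sketch of the HTTP request behind that pvesh create command, built with the Python standard library. The endpoint path and the target/online parameters come from the Proxmox API viewer linked above; the host and API token are placeholders, and the function names are illustrative rather than the script's actual code.

```python
import json
import urllib.request

API_HOST = "https://pve.example:8006"            # placeholder host
TOKEN = "PVEAPIToken=root@pam!balancer=<uuid>"   # placeholder API token

def build_migrate_request(node: str, vmid: int, target: str, online: bool = True):
    """URL and form body for POST /api2/json/nodes/{node}/qemu/{vmid}/migrate."""
    url = f"{API_HOST}/api2/json/nodes/{node}/qemu/{vmid}/migrate"
    body = f"target={target}&online={int(online)}"
    return url, body

def start_migration(node: str, vmid: int, target: str) -> str:
    url, body = build_migrate_request(node, vmid, target)
    req = urllib.request.Request(
        url, data=body.encode(),
        headers={"Authorization": TOKEN}, method="POST")
    with urllib.request.urlopen(req) as resp:    # network call, not run here
        return json.load(resp)["data"]           # task UPID on success
```

If this request returns an error body instead of a task UPID, the same local-resource blockers that pvesh reports are the most likely cause.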
@cvk98 commented on GitHub (May 18, 2022):
Changes will need to be made in this block

@cvk98 commented on GitHub (May 22, 2022):
I hope I was able to help you.
@mattv8 commented on GitHub (May 23, 2022):
Thank you, yes, very helpful! Feel free to close this, as it is not an issue. I'm still testing in my environment; I'll report back if I run into any more problems.