mirror of
https://github.com/cvk98/Proxmox-load-balancer.git
synced 2026-04-25 04:25:50 +03:00
[GH-ISSUE #17] no migration started in proxmox #11
Originally created by @JonathanN1203 on GitHub (Nov 15, 2023).
Original GitHub issue: https://github.com/cvk98/Proxmox-load-balancer/issues/17
Hi,
I have the issue that the script is running, but no migration task is started on Proxmox. What could be the issue?
root@pve2fra:~/Proxmox-load-balancer# python3 plb.py
/usr/local/lib/python3.11/dist-packages/requests/__init__.py:102: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (5.1.0)/charset_normalizer (2.0.12) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({})/charset_normalizer ({}) doesn't match a supported "
False
INFO | START Load-balancer!
DEBUG | Authorization attempt...
DEBUG | Successful authentication. Response code: 200
DEBUG | init when creating a Cluster object
DEBUG | Starting Cluster.cluster_name
DEBUG | Information about the cluster name has been received. Response code: 200
DEBUG | Launching Cluster.cluster_items
DEBUG | Attempt to get information about the cluster...
DEBUG | Information about the cluster has been received. Response code: 200
DEBUG | Launching Cluster.cluster_hosts
DEBUG | Attempt to get information about the cluster HA manager...
DEBUG | Information about the cluster HA Manager has been received. Response code: 200
DEBUG | Launching Cluster.cluster_vms
DEBUG | Launching Cluster.cluster_membership
DEBUG | Launching Cluster.cluster_cpu
DEBUG | Starting cluster_load_verification
DEBUG | Starting need_to_balance_checking
INFO | Need to balance: True
DEBUG | Running temporary_dict
DEBUG | Starting calculating
INFO | Number of options = 24
DEBUG | Starting vm_migration
DEBUG | VM:131 migration from pve2fra to pve1fra
DEBUG | The VM:131 has [{'volid': 'SSD1:vm-131-disk-0', 'shared': 0, 'is_vmstate': 0, 'size': 53687091200, 'drivename': 'virtio0', 'is_tpmstate': 0, 'cdrom': 0, 'is_attached': 1, 'is_unused': 0, 'replicate': 1}, {'size': 4194304, 'volid': 'SSD1:vm-131-cloudinit', 'is_vmstate': 0, 'shared': 0, 'is_attached': 1, 'is_unused': 0, 'cdrom': 1, 'replicate': 1, 'drivename': 'ide2', 'is_tpmstate': 0}]
@cvk98 commented on GitHub (Nov 15, 2023):
Hi,
The problem is that the disk of the virtual machine is not on shared storage, but on local storage.
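The condition can be illustrated with a small check over the disk structure shown in the log above (a sketch only; the helper name is hypothetical and `vm_131_disks` is trimmed to the relevant keys):

```python
def has_local_disks(vm_disks):
    """Return True if any attached, non-CD-ROM volume is NOT on shared storage."""
    return any(
        d.get("shared", 0) == 0 and d.get("is_attached") and not d.get("cdrom")
        for d in vm_disks
    )

# The volumes from the log: both report 'shared': 0, so the balancer
# treats VM:131 as unmovable and never submits a migration task.
vm_131_disks = [
    {"volid": "SSD1:vm-131-disk-0", "shared": 0, "is_attached": 1, "cdrom": 0},
    {"volid": "SSD1:vm-131-cloudinit", "shared": 0, "is_attached": 1, "cdrom": 1},
]
print(has_local_disks(vm_131_disks))  # True
```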
@JonathanN1203 commented on GitHub (Nov 15, 2023):
Hi,
The storage is replicated; can the script easily be modified so that it works with that?
@cvk98 commented on GitHub (Nov 16, 2023):
Hello,
You can delete this block in your copy (lines 384-390):

if local_disk or local_resources:
    logger.debug(f'The VM:{vm} has {local_disk if local_disk else local_resources if local_resources else ""}')
    # local_disk and local_resources need to be reset after the check (if we start with an unmovable VM, the rest are never tested)
    local_disk = None
    local_resources = None
    continue  # for variant in variants:
else:

Maybe this will solve the problem.
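Instead of removing the check entirely, a narrower change would skip only volumes that are neither shared nor replicated, using the `replicate` flag visible in the log. This is a sketch of the idea, not the actual plb.py code; the function name and signature are hypothetical:

```python
def movable_with_replication(vm_disks, local_resources=None):
    """Treat a VM as movable when every local volume is replicated.

    Sketch only: relaxes the local-disk check so that disks with
    'replicate': 1 no longer block migration. Passthrough devices
    (local_resources) still block it.
    """
    if local_resources:
        return False
    blocking = [
        d for d in vm_disks
        if d.get("shared", 0) == 0 and d.get("replicate", 0) == 0
    ]
    return not blocking

# VM:131's disk is local ('shared': 0) but replicated ('replicate': 1),
# so under this relaxed rule it would be considered movable.
disks = [{"volid": "SSD1:vm-131-disk-0", "shared": 0, "replicate": 1}]
print(movable_with_replication(disks))  # True
```

Note that for replicated (e.g. ZFS) volumes the migration itself must still be an online/offline migration that Proxmox accepts for local storage; this sketch only changes which VMs the balancer considers.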