mirror of
https://github.com/cvk98/Proxmox-load-balancer.git
synced 2026-04-25 04:25:50 +03:00
[GH-ISSUE #5] rr.json KeyError: 'master_node' #1
Originally created by @BerKer999 on GitHub (Jun 21, 2022).
Original GitHub issue: https://github.com/cvk98/Proxmox-load-balancer/issues/5
Is rr.json missing?
Regards,
root@pve:~/Proxmox-load-balancer# ./plb.py
INFO | START Load-balancer!
DEBUG | Authorization attempt...
DEBUG | Successful authentication. Response code: 200
DEBUG | __init__ when creating a Cluster object
DEBUG | Starting Cluster.cluster_name
DEBUG | Information about the cluster name has been received. Response code: 200
DEBUG | Launching Cluster.cluster_items
DEBUG | Attempt to get information about the cluster...
DEBUG | Information about the cluster has been received. Response code: 200
DEBUG | Launching Cluster.cluster_hosts
DEBUG | Attempt to get information about the cluster HA manager...
DEBUG | Information about the cluster HA Manager has been received. Response code: 200
Traceback (most recent call last):
  File "/root/Proxmox-load-balancer/./plb.py", line 468, in <module>
    main()
  File "/root/Proxmox-load-balancer/./plb.py", line 439, in main
    cluster = Cluster(server_url)
  File "/root/Proxmox-load-balancer/./plb.py", line 84, in __init__
    self.cl_nodes: dict = self.cluster_hosts()  # All cluster nodes
  File "/root/Proxmox-load-balancer/./plb.py", line 156, in cluster_hosts
    self.master_node = rr.json()['data']['manager_status']['master_node']
KeyError: 'master_node'
root@pve:~/Proxmox-load-balancer# updatedb
root@pve:~/Proxmox-load-balancer# locate rr.json
root@pve:~/Proxmox-load-balancer#
@cvk98 commented on GitHub (Jun 21, 2022):
I'll try to figure it out. Maybe Cylindrical (Mark Sanford) will help. This is his code.
@cvk98 commented on GitHub (Jun 21, 2022):
Perhaps you are not using HA and the received json does not have the "master_node" key. I'll take a look later.
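cvk98's hypothesis above (no HA configured, so the response JSON lacks the "master_node" key) can be guarded against with defensive key access. A minimal sketch, assuming the decoded response shape seen in the traceback; `get_master_node` is a hypothetical helper, not the project's actual code:

```python
def get_master_node(ha_status: dict):
    """Return the HA master node name, or None when HA is not configured.

    `ha_status` is the decoded JSON body of GET /cluster/ha/status/manager_status,
    as accessed in Cluster.cluster_hosts (shape assumed from the traceback).
    """
    manager = ha_status.get("data", {}).get("manager_status") or {}
    return manager.get("master_node")

# Cluster with HA configured:
print(get_master_node({"data": {"manager_status": {"master_node": "pve3"}}}))  # pve3
# Cluster without HA -- no KeyError, just None:
print(get_master_node({"data": {"manager_status": {}}}))  # None
```

With this pattern the script could detect the missing key and report "HA is not configured" instead of crashing.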
@BerKer999 commented on GitHub (Jun 22, 2022):
The HA settings were at their defaults (conditional) on Proxmox 7.2-4, no subscription, with 3 nodes.
@cvk98 commented on GitHub (Jun 22, 2022):
Please attach the following screenshots:
(two example screenshots were attached in the original GitHub issue but are not preserved in this mirror)
@BerKer999 commented on GitHub (Jun 22, 2022):
The HA manager has now been configured, thanks.
pvesh is returning the message "error resource '/cluster/ha/status/manager_status' does not define child links".
root@pve:~/Proxmox-load-balancer# ./plb.py
INFO | START Load-balancer!
DEBUG | Authorization attempt...
DEBUG | Successful authentication. Response code: 200
DEBUG | __init__ when creating a Cluster object
DEBUG | Starting Cluster.cluster_name
DEBUG | Information about the cluster name has been received. Response code: 200
DEBUG | Launching Cluster.cluster_items
DEBUG | Attempt to get information about the cluster...
DEBUG | Information about the cluster has been received. Response code: 200
DEBUG | Launching Cluster.cluster_hosts
DEBUG | Attempt to get information about the cluster HA manager...
DEBUG | Information about the cluster HA Manager has been received. Response code: 200
DEBUG | Launching Cluster.cluster_vms
DEBUG | Launching Cluster.cluster_membership
DEBUG | Launching Cluster.cluster_cpu
INFO | This server (pve) is not the current cluster master, pve3 is. Waiting 300 seconds.
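The final log line shows the master-node gate in action: the instance on pve sees that pve3 is the current master and waits instead of balancing. A rough sketch of that behavior (hypothetical helper names and signature; the real plb.py logic may differ):

```python
import time

def master_gate(this_node: str, master_node: str, wait_seconds: int = 300) -> bool:
    """Return True when this node is the current HA master.

    Otherwise print a message like the one in the log above and sleep
    before the caller retries, since mastership can move between nodes.
    """
    if this_node == master_node:
        return True
    print(f"INFO | This server ({this_node}) is not the current cluster master, "
          f"{master_node} is. Waiting {wait_seconds} seconds.")
    time.sleep(wait_seconds)
    return False
```

Each deployed instance calls such a gate in its main loop, so only one copy of the balancer is active at a time.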
@cvk98 commented on GitHub (Jun 22, 2022):
The master-node-only mode was added so that the balancer can be rolled out to every node of the cluster without all instances working at once: the config check makes each instance verify whether its node is the current master of the HA cluster. If you run the script on a single node or in a virtual machine, set "only_on_master: OFF" in the config; this should solve your problem. Running a single instance with "only_on_master: ON" only on the HA cluster's master node is pointless, since mastership is not permanent and the master may change over time.
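Based on the comment above, the relevant config entry would look something like this (the key name is taken from the comment; the file name and comment text are assumptions):

```yaml
# config.yaml (hypothetical excerpt)
only_on_master: OFF   # single-node / VM deployment: run regardless of HA mastership
```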
@cvk98 commented on GitHub (Jun 24, 2022):
Can I close the issue?
@BerKer999 commented on GitHub (Jun 24, 2022):
Yes, thanks for your help.