mirror of
https://github.com/Corsinvest/cv4pve-autosnap.git
synced 2026-04-26 01:15:48 +03:00
[GH-ISSUE #52] Process all VMs from a pool #49
Originally created by @michabbs on GitHub (Sep 11, 2021).
Original GitHub issue: https://github.com/Corsinvest/cv4pve-autosnap/issues/52
It would be good to process all VMs from a specified pool.
For example something like this:
cv4pve-autosnap --vmid="@Poolname"

@franklupo commented on GitHub (Sep 14, 2021):
Hi,
I was thinking of adding in the vmid parameter the ability to specify the pool name using the 'pool-' prefix.
What do you think?
Best regards
@michabbs commented on GitHub (Sep 14, 2021):
Well, actually it is possible to have a VM named "pool-1", so such a prefix does not seem like a good idea.
"@" as prefix looks much nicer and is not ambiguous. :-)
@franklupo commented on GitHub (Sep 14, 2021):
Hi,
actually, if you read the documentation, prefixes are already used.
@franklupo commented on GitHub (Sep 14, 2021):
For compatibility, the old format will be maintained and a new format introduced:
@michabbs commented on GitHub (Sep 14, 2021):
Looks nice. :-)
...and what about "all vm's in a given pool on a specific node"? :-) :-) :-)
@franklupo commented on GitHub (Sep 14, 2021):
The pool is unique in the cluster, so it is not necessary to specify the host.
@michabbs commented on GitHub (Sep 14, 2021):
...but a node is not unique to a pool. You might want to run the snapshots on one node only (all VMs on that node in a particular pool).
Actually this is reasonable. If you run cv4pve-autosnap from a cron job on one node, everything goes fine until that node fails. Then your snapshots are not created anymore, even though the other nodes are still up. It is a better idea to run cv4pve-autosnap separately on every node, so that each node takes care of its own VMs only. For that you need to "snapshot all VMs in a given pool, on a specific node only".
@franklupo commented on GitHub (Sep 14, 2021):
It is not necessary to install cv4pve-autosnap inside a node; it can run externally, because it uses the Proxmox VE API.
The VMs in a cluster are unique, just like the pools, so even if a node dies everything still works. In the --host parameter you can specify all the nodes of the cluster in the form "host[:port],host1[:port],host2[:port]".
Installation outside the cluster is preferred.
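A sketch of the external-API setup described above. The hostnames, credentials file path, VM IDs, label, and retention count are all illustrative assumptions, not values from this thread; the multi-host failover form of --host is the one quoted above.

```shell
# Run from a machine outside the cluster; cv4pve-autosnap talks to the
# Proxmox VE API and can fall back across the listed nodes if one is down.
# All hostnames, credentials, and IDs below are hypothetical examples.
cv4pve-autosnap \
  --host="node1:8006,node2:8006,node3:8006" \
  --username="root@pam" \
  --password="file:/etc/cv4pve/pve-password" \
  --vmid="100,101" \
  snap --label=daily --keep=7
```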
@michabbs commented on GitHub (Sep 14, 2021):
Good point. :-)
But... anyway it is still possible to install and use cv4pve-autosnap directly on a node, and I am sure people do it. And when they do, they might want to separate the processing by node.
@franklupo commented on GitHub (Sep 14, 2021):
When you execute cv4pve-autosnap it does not matter which node you run it on, because it will snapshot the VMs specified in the --vmid parameter wherever they are in the cluster (even if there is only one node). What you want is perhaps something different. Give me some examples.
Best regards
@michabbs commented on GitHub (Sep 14, 2021):
Execution is successful only as long as it actually happens at all.
Imagine: there are 2 nodes (node1, node2). cv4pve-autosnap is installed on node1 and automatically creates snapshots of all vms in a pool. The snapshots are created on all nodes. Everything works.
Now: node1 goes down. Node2 still works, but snapshots are not created anymore.
Solution: install cv4pve-autosnap on all nodes, each of them creating snapshots on its own node only. This way snapshots on node2 are not affected by a failure of node1.
@franklupo commented on GitHub (Sep 14, 2021):
If cv4pve-autosnap is installed outside the cluster, the problem does not exist. However, if under HA the VMs are moved from one node to another, what happens? You would no longer snapshot them.
@michabbs commented on GitHub (Sep 14, 2021):
Yes, but that requires "the outside" not to fail, so we come back to the initial problem: a single failure stops snapshots in the whole cluster.
Why would HA migration be a problem? After migration the VM stays in the same pool, so snapshots will automatically be made on the new node. (If each node makes snapshots of "all my own VMs in the pool", then a newly migrated VM will also be snapshotted. And this is the crux of the idea.)
@franklupo commented on GitHub (Dec 3, 2021):
In the latest version you can specify the pool using:
'@pool-???' for all VM/CT in specific pool (e.g. @pool-customer1),
Best regards
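A usage sketch of the @pool- syntax announced above. The host, credentials path, pool name "customer1", label, and retention count are hypothetical examples; only the --vmid="@pool-???" form is confirmed by the comment above.

```shell
# Snapshot every VM/CT in the (hypothetical) pool "customer1",
# keeping the last 3 snapshots labelled "daily".
cv4pve-autosnap \
  --host="pve1:8006" \
  --username="root@pam" \
  --password="file:/etc/cv4pve/pve-password" \
  --vmid="@pool-customer1" \
  snap --label=daily --keep=3
```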