mirror of
https://github.com/Corsinvest/cv4pve-autosnap.git
synced 2026-04-25 17:05:48 +03:00
[GH-ISSUE #46] VM sometimes remain locked "snapshot-delete" status #43
Originally created by @paolo-pf on GitHub (Jan 5, 2021).
Original GitHub issue: https://github.com/Corsinvest/cv4pve-autosnap/issues/46
Hi there,
I'm using your utility, running this command from crontab every night:
cv4pve-autosnap --host=localhost --username=xxxxxxx --password=yyyyyyyy --timeout=8600 --vmid="all" snap --label='daily' --keep=7
Everything has worked for many months, except that sometimes, on VMs with large disks (for example a VM with a 2 TB HDD), when it should purge old snapshots it fails and the VM remains locked in "snapshot-delete" status. Also, the old snapshot remains in "delete" status: it does not get deleted and I have to remove it manually with the force switch.
In this state the VM is still running, but obviously we cannot take the next snapshot the following night, or even back up the VM.
So we have to monitor VM status every morning and fix things manually when this happens.
To fix this problem I use these commands:
qm unlock
qm delsnapshot "snapshot-name" -f
Our Proxmox storage is the old, but still reliable, directory filesystem on physical hard disks, using qcow2 as the VM disk format.
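The manual recovery above could be wrapped in a small check, for example run from the same crontab. This is only a sketch under assumptions: the config path follows the usual /etc/pve/qemu-server/<vmid>.conf layout, and the snapshot name is left as a placeholder rather than detected automatically.

```shell
#!/bin/sh
# Sketch: detect a VM config stuck with a snapshot lock and print the
# recovery commands. The config path and VMID are illustrative; the
# snapshot name is intentionally left as a placeholder.
check_snapshot_lock() {
    conf="$1"   # e.g. /etc/pve/qemu-server/300.conf
    vmid="$2"
    lock=$(sed -n 's/^lock: *//p' "$conf")
    case "$lock" in
        snapshot|snapshot-delete)
            echo "VM $vmid locked ($lock): qm unlock $vmid && qm delsnapshot $vmid <snapname> --force"
            ;;
    esac
}
```

Running it against each VM config would flag locked guests in the morning instead of requiring a manual check.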
@franklupo commented on GitHub (Jan 5, 2021):
Hi,
Run with --debug and send me the logs.
The problem is most likely a timeout. Try using --timeout and set a longer time.
Best regards
@paolo-pf commented on GitHub (Jan 6, 2021):
Hi there,
I'm already using --timeout=8600, which should be enough (more than 2 hours!). I've even tried --timeout=86400, but nothing changed.
Attached is a detailed debug log from a snapshot of a large VM that fails very often (I've masked personal information!).
When I get that error, cv4pve-autosnap obviously terminates immediately with it, but the snapshot procedure in the Proxmox console keeps running to the end.
Then the VM remains locked in "pending-delete" status and I have to run qm unlock.
Then, if I try to run a clean command to purge the old snapshots, I get this error because the snapshot is in "delete" status:
VM 300 qmp command 'blockdev-snapshot-delete-internal-sync' failed - Snapshot with id 'null' and name 'autodaily201225031308' does not exist on device 'drive-scsi0'
log-snap.txt
Thanks for helping...
Best regards
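The error above suggests a snapshot that is still recorded in the VM config but no longer exists inside the disk image. One way to confirm that is to compare the two listings; the VMID and disk path below are assumptions, not taken from the log:

```shell
# Snapshots Proxmox believes exist for VM 300
qm listsnapshot 300

# Internal snapshots actually present inside the qcow2 image
qemu-img snapshot -l /var/lib/vz/images/300/vm-300-disk-0.qcow2
```

A name that appears in the first listing but not in the second matches the "does not exist on device" failure.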
@franklupo commented on GitHub (Jan 7, 2021):
Hi,
How long does the operation take before it stops?
I looked at the error, but it refers to a cluster/host configuration problem.
See "proxmox too many redirects 599".
Best regards
@paolo-pf commented on GitHub (Jan 8, 2021):
Hi there,
When it snapshots some heavy VMs, it stops after 20-30 seconds, but the snapshot creation still continues and completes.
I've read about those errors and I'm trying to get them fixed, but we've never had cluster errors and cluster health is fine.
Anyway, I've tried adding all the host names to the /etc/hosts file of each node. We'll see how it goes tomorrow night.
Regards
@franklupo commented on GitHub (Jan 8, 2021):
Hi,
if you use Proxmox VE version 6.2 or higher, consider using --api-token instead of username/password.
The session does not expire if the process runs long.
Best regards
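An invocation with a token might look like this; the token id and secret are placeholders, and user@realm!tokenid=secret is the standard Proxmox VE API-token syntax:

```shell
cv4pve-autosnap --host=localhost \
  --api-token 'root@pam!autosnap=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' \
  --timeout=8600 --vmid="all" snap --label='daily' --keep=7
```

Unlike a ticket obtained from username/password, the token does not expire mid-run, which matters for long snapshot or purge operations.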
@franklupo commented on GitHub (Jan 11, 2021):
News?
@rootbdfy commented on GitHub (Nov 26, 2021):
Hello, I have the same issue with Proxmox 7.1-6 (cv4pve-autosnap 1.12.0). Using an API token instead of username/password didn't help me.
It reproduces on VMs with many snapshots.
@franklupo commented on GitHub (Nov 26, 2021):
Hi,
does snapshot creation and deletion work manually from the Proxmox VE web GUI?
best regards
@rootbdfy commented on GitHub (Nov 26, 2021):
No, it gets stuck in the delete state.

@franklupo commented on GitHub (Nov 26, 2021):
Ok, if it doesn't work from the GUI, it's a ZFS problem.
@rootbdfy commented on GitHub (Nov 26, 2021):
This snapshot is absent on ZFS. I manually deleted the snapshot section from the configs. We'll see what happens next.
Thx !
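For context, the "snapshot section" deleted here is presumably a stale block in the VM's config file. Assuming the usual /etc/pve/qemu-server/<vmid>.conf layout, it looks roughly like this (all names and values are illustrative):

```
# /etc/pve/qemu-server/<vmid>.conf (illustrative fragment)
scsi0: local-zfs:vm-300-disk-0,size=2T
lock: snapshot-delete

[autodaily201225031308]
parent: autodaily201224031307
scsi0: local-zfs:vm-300-disk-0,size=2T
snaptime: 1608865988
```

Removing the bracketed section (and any stale lock: line) makes Proxmox forget the snapshot; this only makes sense when the snapshot is already gone from the underlying storage, as it was here.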
@wstraszak-xtrf commented on GitHub (Dec 31, 2021):
@rootbdfy how did it end up?
@rootbdfy commented on GitHub (Jan 10, 2022):
Hi!
All fine.
@duven87 commented on GitHub (Sep 24, 2024):
I still often have this problem with PVE 7, HW RAID and ext4... any solution?
@franklupo commented on GitHub (Sep 24, 2024):
ext4 is not a good choice for snapshots.