[GH-ISSUE #46] VM sometimes remains locked in "snapshot-delete" status #43

Closed
opened 2026-02-26 17:44:07 +03:00 by kerem · 15 comments
Owner

Originally created by @paolo-pf on GitHub (Jan 5, 2021).
Original GitHub issue: https://github.com/Corsinvest/cv4pve-autosnap/issues/46

Hi there,
I'm using your utility, running this command from crontab every night:
cv4pve-autosnap --host=localhost --username=xxxxxxx --password=yyyyyyyy --timeout=8600 --vmid="all" snap --label='daily' --keep=7

Everything has worked for many months, except that sometimes, on VMs with large disks (for example a VM with a 2 TB HDD), when it should purge old snapshots it goes into error and the VM remains locked in "snapshot-delete" status. In addition, the old snapshot remains in "delete" status; it doesn't get deleted and I have to remove it manually with the force switch.

In this state the VM is still running, but obviously we cannot take the next snapshot the following night, or even run a VM backup.
So we must monitor the VM status every morning and fix things manually when this happens.

To fix this problem I use these commands:
qm unlock <vmid>
qm delsnapshot <vmid> "<snapshot-name>" --force
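The manual morning inspection can be scripted. A minimal sketch, assuming it runs from cron directly on a Proxmox VE node where `qm` is available; the detection trick is the `lock:` line that `qm config` prints while a VM is locked (the script only reports, it does not auto-unlock):

```shell
#!/bin/sh
# Hypothetical watchdog: report VMs left locked after a failed snapshot
# delete, instead of discovering them by hand every morning.
# Assumes it runs locally on a Proxmox VE node where `qm` is available.
for vmid in $(qm list 2>/dev/null | awk 'NR > 1 { print $1 }'); do
    # `qm config` prints a "lock: <reason>" line while the VM is locked.
    lock=$(qm config "$vmid" 2>/dev/null | awk -F': ' '/^lock:/ { print $2 }')
    case "$lock" in
        snapshot|snapshot-delete)
            echo "VM $vmid is locked: $lock" >&2
            # Recovery stays manual, as in the commands above:
            #   qm unlock $vmid
            #   qm delsnapshot $vmid <snapshot-name> --force
            ;;
    esac
done
```

Wired into cron, this at least turns the silent overnight failure into an email from the cron daemon.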

Our Proxmox storage is the old, but still reliable, directory filesystem over physical hard disks, using qcow2 as the VM disk format.

kerem closed this issue 2026-02-26 17:44:07 +03:00

@franklupo commented on GitHub (Jan 5, 2021):

Hi,
Run with --debug and send me the logs.
The problem is most likely the timeout. Try setting --timeout to a longer value.

Best regards


@paolo-pf commented on GitHub (Jan 6, 2021):

Hi there,
I'm already using --timeout=8600, which should be enough (more than 2 hours!). I've even tried --timeout=86400, but nothing changed.

Attached is a detailed debug log of a snapshot attempt on a large VM that fails very often (I've masked personal information).

When I get that error, cv4pve-autosnap obviously terminates immediately with that error, but the snapshot procedure in the Proxmox console keeps running to the end.
Then the VM remains locked in "pending-delete" status and I have to run qm unlock <vmid>.

Then, if I try to run a clean command to purge the old snapshots, I get this error because the snapshot is in "delete" status:
VM 300 qmp command 'blockdev-snapshot-delete-internal-sync' failed - Snapshot with id 'null' and name 'autodaily201225031308' does not exist on device 'drive-scsi0'

[log-snap.txt](https://github.com/Corsinvest/cv4pve-autosnap/files/5777444/log-snap.txt)

Thanks for helping...
Best regards
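On qcow2 over directory storage these are internal snapshots (that is what the `blockdev-snapshot-delete-internal-sync` error above refers to), so it can help to check whether the snapshot actually still exists inside the image. A sketch; the path is a guess based on the default directory storage layout and must be adjusted:

```shell
# List the internal snapshots stored inside the qcow2 image itself.
# The path is illustrative; adjust it to your storage and disk name.
qemu-img snapshot -l /var/lib/vz/images/300/vm-300-disk-0.qcow2
```

If 'autodaily201225031308' is missing from this list but still present in the VM config, the two have gone out of sync, which matches the "does not exist on device" message.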


@franklupo commented on GitHub (Jan 7, 2021):

Hi,
how long does the operation run before it stops?
I looked at the error, but it points to a cluster/host configuration problem.
See "proxmox too many redirects 599"

Best regards


@paolo-pf commented on GitHub (Jan 8, 2021):

Hi there,
When it takes the snapshot of some heavy VMs, it stops after 20-30 seconds... but the snapshot creation still continues and completes.
I've looked into those errors and I'm trying to get them fixed, but we've never had cluster errors and the cluster health is fine.

Anyway, I've tried adding all the host names to the /etc/hosts file of each node. We'll see how it goes tomorrow night.

Regards


@franklupo commented on GitHub (Jan 8, 2021):

Hi,
if you use Proxmox VE 6.2 or higher, consider using --api-token instead of username/password.
The session does not expire if the process runs long.

Best regards
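As a concrete example, the nightly crontab entry from the first post could be switched to a token like this. A sketch only: the token id `autosnap` and its secret are placeholders for a token created under Datacenter → Permissions → API Tokens, and the binary path may differ on your system:

```shell
# /etc/cron.d/cv4pve-autosnap (sketch; token value is a placeholder)
30 2 * * * root /usr/local/bin/cv4pve-autosnap --host=localhost \
    --api-token='root@pam!autosnap=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' \
    --timeout=86400 --vmid="all" snap --label='daily' --keep=7
```

A token avoids both storing the password in the crontab and the ticket expiry that can bite long-running jobs.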


@franklupo commented on GitHub (Jan 11, 2021):

Any news?


@rootbdfy commented on GitHub (Nov 26, 2021):

Hello, I have the same issue with Proxmox 7.1-6 (cv4pve-autosnap 1.12.0). Using an API token instead of username/password didn't help me.
It is reproducible on VMs with many snapshots.

![image](https://user-images.githubusercontent.com/18612349/143551288-59fb9954-f3ce-4b32-a890-644efc065c40.png)


@franklupo commented on GitHub (Nov 26, 2021):

Hi,
does snapshot creation and deletion work manually from the Proxmox VE web GUI?

best regards


@rootbdfy commented on GitHub (Nov 26, 2021):

> Hi,
> does snapshot creation and deletion work manually from the Proxmox VE web GUI?
>
> best regards

No, it is stuck in the delete state.
![image](https://user-images.githubusercontent.com/18612349/143554987-9303d339-8e54-4595-8f89-a91a4dbd4f79.png)


@franklupo commented on GitHub (Nov 26, 2021):

OK, if it doesn't work from the GUI either, it's a ZFS problem.


@rootbdfy commented on GitHub (Nov 26, 2021):

This snapshot is absent on ZFS. I manually deleted the snapshot section from the config. We'll see what happens next.
Thanks!
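For anyone hitting the same state: the stale entry lives in the VM's config file, /etc/pve/qemu-server/<vmid>.conf, where each snapshot is a bracketed section containing a full copy of the config at snapshot time plus a few snapshot keys. An illustrative excerpt (not from this report; names and values are made up):

```
# current (running) configuration
...

[autodaily201225031308]
# stale section: remove everything from this header down to the next
# section header or end of file
parent: autodaily201224031307
snaptime: 1608865988
...
```

Also remove or adjust any `parent:` line in other snapshot sections that still points at the deleted name.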


@wstraszak-xtrf commented on GitHub (Dec 31, 2021):

@rootbdfy how did it end up?


@rootbdfy commented on GitHub (Jan 10, 2022):

> @rootbdfy how did it end up?

Hi!
All fine.


@duven87 commented on GitHub (Sep 24, 2024):

I still often have this problem with PVE 7, HW RAID, ext4... any solution?


@franklupo commented on GitHub (Sep 24, 2024):

> I still often have this problem with PVE 7, HW RAID, ext4... any solution?

ext4 is not a good choice for snapshots.
