[GH-ISSUE #108] [BUG] Skip VM problem storage space out of 100% after Proxmox 8 -> 9 #95
Originally created by @tonyblue2 on GitHub (Dec 22, 2025).
Original GitHub issue: https://github.com/Corsinvest/cv4pve-autosnap/issues/108
What happened?
Today I updated my Proxmox from version 8 to 9.
After that, cv4pve-autosnap skips every VM with this message:
cv4pve-autosnap --host=192.168.1.200 --username=snapy@pve --password=snapy --max-perc-storage=100 --vmid=3205,3206,3207 snap --label='cv4pve_hourly' --keep=24
ACTION Snap
PVE Version: 9.1.2
VMs: 3201,3202,3210
Label: cv4pve_hourly
Keep: 24
State: False
Only running: False
Timeout: 30 sec.
Timestamp format: yyMMddHHmmss
Max % Storage : 100%
----- POSSIBLE PROBLEM PERMISSION 'Datastore.Audit' -----
----- VM 3205 lxc running -----
Skip VM problem storage space out of 100%
----- VM 3206 lxc running -----
Skip VM problem storage space out of 100%
----- VM 3207 lxc running -----
Skip VM problem storage space out of 100%
Total execution 00:00:00.1922174
I get the same result with 10%, 95%, or 100% max-perc-storage.
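The --max-perc-storage check presumably compares each guest's storage usage, taken from the cluster resources API, against the given limit; if the storage list comes back empty, there is no figure to compare and every VM is skipped no matter which percentage is passed. A minimal shell sketch of that comparison, assuming pvesh and jq are available on the node and that local-zfs (shown below) is the pool in question:

# Approximate the storage check by hand (a sketch, not the tool's actual code).
pvesh get /cluster/resources --type storage --output-format json \
  | jq -r '.[] | select(.storage == "local-zfs")
           | "\(.storage): \(100 * .disk / .maxdisk | floor)% used"'
# An empty array from this endpoint leaves nothing to compare against
# --max-perc-storage, so the tool skips each VM.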
But if I run pvesm status:
user@myhost:# pvesm status
Name       Type     Status     Total (KiB)  Used (KiB)  Available (KiB)  %
local      dir      active     372782976    18179072    354603904        4.88%
local-zfs  zfspool  active     2404554018   2049950069  354603949        85.25%
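Note that pvesm status runs locally as root, while cv4pve-autosnap goes through the HTTP API as the restricted user. What that user actually sees can be reproduced directly; a sketch using curl and jq with the host and credentials from the command above:

# Log in as the API user and query the endpoint the tool uses.
TICKET=$(curl -sk -d 'username=snapy@pve' -d 'password=snapy' \
  https://192.168.1.200:8006/api2/json/access/ticket | jq -r .data.ticket)
curl -sk -b "PVEAuthCookie=$TICKET" \
  'https://192.168.1.200:8006/api2/json/cluster/resources?type=storage'
# "data": [] here, despite pvesm showing the pools, points at a missing
# Datastore.Audit privilege for this user.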
What can I do to make cv4pve-autosnap create regularly scheduled snapshots via cron again?
Thank you!
Tony
Expected behavior
I expected to get snapshots.
Command used
Log output
cv4pve-autosnap Version
latest
Proxmox VE Version
9.1.2
Last working version
No response
Operating System
Linux
Pull Request
@franklupo commented on GitHub (Dec 22, 2025):
Hi @tonyblue2,
What version of cv4pve-autosnap do you use?
@franklupo commented on GitHub (Dec 22, 2025):
Run with --debug and attach logs.
Best regards
@tonyblue2 commented on GitHub (Dec 22, 2025):
./cv4pve-autosnap --version
1.17.0+8c1dc56c442f3dbd403ff2ba7ca6495ba407d52d
@franklupo commented on GitHub (Dec 22, 2025):
Does the previous version of autosnap v1.16.0 work?
@tonyblue2 commented on GitHub (Dec 22, 2025):
dbug: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
Login: snappy@pve
dbug: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
Method: POST, Url: https://192.168.13.230:8006/api2/json/access/ticket
dbug: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
Parameters: password : ****
username : snapshot
realm : pve
trce: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
{
"data": {
"CSRFPreventionToken": "69488EED:SeK9oo/wBQIB1XmbAcjChl4N9X69wUTfV4NPB0SvzL8",
"username": "snappy@pve",
"ticket": "PVE:snappy@pve:69488EED::GqYiGszxFW8ciB8OGPg7pwYNwweNvj8ZyMJXpPlzcpmFMDzZQeuomkUpcp9sZvWz+jl45xE2WDYP+W5mYoktXvim3EW4u5/cRApCgmVHRmgx2VLGlG4LxCuWVWKn0IXrSBVtVK7kpWueng4dUApr7wgxQZgN8uUYbPHWpBAVvLJaBVmMEGkLq/Arv07O6uM1arSYw5fE+V3BjfKq6xNPlRX2J+1zwWGeg0NVN0aihaYq+c1xCgq/FH04klvMtymR/PqvKufY5TKd0zu/Gq8+ZHIJnRCq3WqmqoUP7lReRnQDSD/ya+tJBXlgPUZEaXSMLSFs5ItBWy/SquZTnIOlqw==",
"cap": {
"dc": {},
"sdn": {},
"access": {},
"storage": {},
"vms": {
"VM.Clone": 1,
"VM.Snapshot": 1,
"VM.Backup": 1,
"VM.Audit": 1
},
"mapping": {},
"nodes": {}
}
}
}
dbug: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
StatusCode: OK ReasonPhrase: OK IsSuccessStatusCode: True
dbug: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
Method: GET, Url: https://192.168.13.230:8006/api2/json/version
trce: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
{
"data": {
"version": "9.1.2",
"release": "9.1",
"repoid": "9d436f37a0ac4172"
}
}
dbug: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
StatusCode: OK ReasonPhrase: OK IsSuccessStatusCode: True
ACTION Snap
PVE Version: 9.1.2
VMs: 3205,3206,3206
Label: cv4pve_hourly
Keep: 24
State: False
Only running: False
Timeout: 30 sec.
Timestamp format: yyMMddHHmmss
Max % Storage : 100%
dbug: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
Method: GET, Url: https://192.168.13.230:8006/api2/json/cluster/resources?type=vm
trce: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
{
"data": [
{
"diskread": 131514368,
"vmid": 1190,
"maxcpu": 1,
"netout": 273620,
"name": "vserver",
"disk": 3893231616,
"uptime": 6089,
"status": "running",
"maxdisk": 8589934592,
"netin": 118588,
"memhost": 0,
"cpu": 0.00534336859199907,
"template": 0,
"mem": 256786432,
"type": "lxc",
"diskwrite": 0,
"maxmem": 1073741824,
"id": "lxc/1190",
"node": "myproxmox"
},
{
"status": "running",
"maxdisk": 21474836480,
"disk": 8646950912,
"name": "ndata",
"uptime": 6119,
"netout": 14662298,
"vmid": 1195,
"diskread": 158232576,
"maxcpu": 2,
"type": "lxc",
"diskwrite": 0,
"maxmem": 4244635648,
"id": "lxc/1195",
"node": "myproxmox",
"template": 0,
"mem": 171331584,
"cpu": 0.0291213588263949,
"netin": 40442345,
"memhost": 0
},
{
"netout": 34726731,
"vmid": 1196,
"diskread": 191889408,
"maxcpu": 2,
"maxdisk": 8589934592,
"status": "running",
"uptime": 6124,
"name": "ddata",
"disk": 1376649216,
"cpu": 0.000213734743679963,
"memhost": 0,
"netin": 9676206,
"type": "lxc",
"id": "lxc/1196",
"node": "myproxmox",
"maxmem": 1073741824,
"diskwrite": 0,
"mem": 258215936,
"template": 0
},
{
"type": "qemu",
"id": "qemu/1200",
"node": "myproxmox",
"maxmem": 6442450944,
"diskwrite": 111547392,
"template": 0,
"mem": 965705728,
"cpu": 0.0221980489306811,
"memhost": 1297285120,
"netin": 268802,
"maxdisk": 53687091200,
"status": "running",
"uptime": 6066,
"disk": 0,
"name": "gv",
"netout": 175390,
"vmid": 1200,
"diskread": 620045358,
"maxcpu": 4
},
{
"node": "myproxmox",
"id": "lxc/1201",
"diskwrite": 0,
"maxmem": 1073741824,
"type": "lxc",
"mem": 192507904,
"template": 0,
"cpu": 0.00897685923455843,
"netin": 98742171,
"memhost": 0,
"maxdisk": 8589934592,
"status": "running",
"uptime": 6119,
"disk": 2256797696,
"name": "nsec",
"netout": 47012860,
"maxcpu": 2,
"diskread": 171311104,
"vmid": 1201
},
{
"cpu": 0.000374035801439935,
"netin": 31598930,
"memhost": 0,
"type": "lxc",
"diskwrite": 0,
"maxmem": 1073741824,
"node": "myproxmox",
"id": "lxc/1202",
"template": 0,
"mem": 263024640,
"netout": 82864341,
"diskread": 176926720,
"vmid": 1202,
"maxcpu": 2,
"status": "running",
"maxdisk": 8589934592,
"disk": 1373110272,
"name": "dlocal",
"uptime": 6124
},
{
"uptime": 6119,
"name": "nmedia",
"disk": 2174615552,
"maxdisk": 8589934592,
"status": "running",
"maxcpu": 2,
"diskread": 142561280,
"vmid": 1203,
"netout": 5756570,
"template": 0,
"mem": 143998976,
"node": "myproxmox",
"id": "lxc/1203",
"maxmem": 1073741824,
"diskwrite": 0,
"type": "lxc",
"memhost": 0,
"netin": 16315580,
"cpu": 0.017419381609917
},
{
"uptime": 6124,
"disk": 1265631232,
"name": "dmedia",
"maxdisk": 8589934592,
"status": "running",
"maxcpu": 2,
"vmid": 1204,
"diskread": 167366656,
"netout": 15346271,
"template": 0,
"mem": 234872832,
"id": "lxc/1204",
"node": "myproxmox",
"diskwrite": 0,
"maxmem": 1073741824,
"type": "lxc",
"memhost": 0,
"netin": 4453278,
"cpu": 0.000320602115519944
},
{
"mem": 58650624,
"template": 0,
"id": "lxc/1220",
"node": "myproxmox",
"maxmem": 2147483648,
"diskwrite": 0,
"type": "lxc",
"memhost": 0,
"netin": 15155426,
"cpu": 0.000106867371839981,
"uptime": 6170,
"name": "ferv",
"disk": 2565210112,
"maxdisk": 8589934592,
"status": "running",
"maxcpu": 2,
"vmid": 1220,
"diskread": 148656128,
"netout": 49652977
},
{
"type": "lxc",
"id": "lxc/1231",
"node": "myproxmox",
"maxmem": 1073741824,
"diskwrite": 0,
"template": 0,
"mem": 218361856,
"cpu": 0.00598457282303896,
"memhost": 0,
"netin": 5376687,
"maxdisk": 21474836480,
"status": "running",
"uptime": 6086,
"disk": 3116105728,
"name": "UKu",
"netout": 878086,
"vmid": 1231,
"diskread": 136282112,
"maxcpu": 2
},
{
"name": "cfb",
"disk": 2044854272,
"uptime": 6115,
"status": "running",
"maxdisk": 21474836480,
"maxcpu": 1,
"diskread": 85987328,
"vmid": 1232,
"netout": 46786,
"template": 0,
"mem": 53608448,
"maxmem": 1073741824,
"diskwrite": 0,
"id": "lxc/1232",
"node": "myproxmox",
"type": "lxc",
"netin": 142002,
"memhost": 0,
"cpu": 0
},
{
"maxdisk": 53687091200,
"status": "running",
"uptime": 6070,
"name": "spdf",
"disk": 51642630144,
"netout": 80025,
"vmid": 1233,
"diskread": 17801216,
"maxcpu": 2,
"type": "lxc",
"node": "myproxmox",
"id": "lxc/1233",
"maxmem": 1073741824,
"diskwrite": 0,
"mem": 134455296,
"template": 0,
"cpu": 0.000534336859199907,
"memhost": 0,
"netin": 103348
},
{
"disk": 2028863488,
"name": "fr",
"uptime": 6145,
"status": "running",
"maxdisk": 8589934592,
"vmid": 1240,
"diskread": 113348608,
"maxcpu": 2,
"netout": 154524,
"mem": 68362240,
"template": 0,
"type": "lxc",
"diskwrite": 0,
"maxmem": 1073741824,
"id": "lxc/1240",
"node": "myproxmox",
"memhost": 0,
"netin": 172254,
"cpu": 0.000587770545119897
},
{
"uptime": 6082,
"disk": 59820343296,
"name": "ps",
"maxdisk": 107374182400,
"status": "running",
"maxcpu": 8,
"vmid": 1246,
"diskread": 18132992,
"netout": 3924401,
"template": 0,
"mem": 1214304256,
"node": "myproxmox",
"id": "lxc/1246",
"diskwrite": 0,
"maxmem": 6442450944,
"type": "lxc",
"netin": 8237255,
"memhost": 0,
"cpu": 0.000895014239159844
},
{
"template": 0,
"mem": 1100095488,
"maxmem": 4294967296,
"diskwrite": 0,
"node": "myproxmox",
"id": "lxc/1247",
"type": "lxc",
"netin": 1525807,
"memhost": 0,
"cpu": 0.000997428803839826,
"disk": 41908436992,
"name": "p-m",
"uptime": 6078,
"status": "running",
"maxdisk": 42949672960,
"maxcpu": 6,
"vmid": 1247,
"diskread": 17883136,
"netout": 509981
},
{
"netout": 90243,
"maxcpu": 6,
"diskread": 17981440,
"vmid": 1248,
"status": "running",
"maxdisk": 42949672960,
"name": "p-a",
"disk": 40493645824,
"uptime": 6074,
"cpu": 0.0011399186329598,
"netin": 128664,
"memhost": 0,
"maxmem": 4294967296,
"diskwrite": 0,
"node": "myproxmox",
"id": "lxc/1248",
"type": "lxc",
"template": 0,
"mem": 1082875904
},
{
"diskread": 1970724864,
"vmid": 1250,
"maxcpu": 4,
"netout": 3583035,
"name": "Pe",
"disk": 3853385728,
"uptime": 6094,
"status": "running",
"maxdisk": 21474836480,
"netin": 11174868,
"memhost": 0,
"cpu": 0.000854938974719851,
"template": 0,
"mem": 524410880,
"type": "lxc",
"maxmem": 1073741824,
"diskwrite": 0,
"id": "lxc/1250",
"node": "myproxmox"
},
{
"maxdisk": 107374182400,
"status": "stopped",
"uptime": 0,
"disk": 0,
"name": "a1",
"netout": 0,
"maxcpu": 2,
"vmid": 3101,
"diskread": 0,
"node": "myproxmox",
"id": "lxc/3101",
"diskwrite": 0,
"maxmem": 8589934592,
"type": "lxc",
"template": 0,
"mem": 0,
"cpu": 0,
"memhost": 0,
"netin": 0
},
{
"status": "stopped",
"maxdisk": 107374182400,
"name": "a3",
"disk": 0,
"uptime": 0,
"netout": 0,
"maxcpu": 2,
"diskread": 0,
"vmid": 3103,
"diskwrite": 0,
"maxmem": 8589934592,
"node": "myproxmox",
"id": "lxc/3103",
"type": "lxc",
"template": 0,
"mem": 0,
"cpu": 0,
"netin": 0,
"memhost": 0
},
{
"maxdisk": 107374182400,
"status": "stopped",
"uptime": 0,
"disk": 0,
"name": "a4",
"netout": 0,
"maxcpu": 2,
"vmid": 3104,
"diskread": 0,
"id": "lxc/3104",
"node": "myproxmox",
"maxmem": 8589934592,
"diskwrite": 0,
"type": "lxc",
"template": 0,
"mem": 0,
"cpu": 0,
"netin": 0,
"memhost": 0
},
{
"netout": 0,
"maxcpu": 2,
"diskread": 0,
"vmid": 3105,
"status": "stopped",
"maxdisk": 107374182400,
"disk": 0,
"name": "a2",
"uptime": 0,
"cpu": 0,
"netin": 0,
"memhost": 0,
"diskwrite": 0,
"maxmem": 8589934592,
"node": "myproxmox",
"id": "lxc/3105",
"type": "lxc",
"mem": 0,
"template": 0
},
{
"maxdisk": 21474836480,
"status": "running",
"uptime": 6127,
"disk": 2596798464,
"name": "rproxy",
"netout": 11609762,
"maxcpu": 2,
"diskread": 86781952,
"vmid": 3200,
"id": "lxc/3200",
"node": "myproxmox",
"maxmem": 1073741824,
"diskwrite": 0,
"type": "lxc",
"mem": 61845504,
"template": 0,
"cpu": 0.000854938974719851,
"memhost": 0,
"netin": 9246672
},
{
"diskread": 192536576,
"vmid": 3205,
"maxcpu": 2,
"netout": 32248279,
"uptime": 6131,
"disk": 3323854848,
"name": "nd",
"maxdisk": 53687091200,
"status": "running",
"memhost": 0,
"netin": 103645506,
"cpu": 0.0110073392995181,
"template": 0,
"mem": 443256832,
"type": "lxc",
"id": "lxc/3205",
"node": "myproxmox",
"diskwrite": 0,
"maxmem": 2147483648
},
{
"netout": 83949641,
"vmid": 3206,
"diskread": 177905664,
"maxcpu": 2,
"status": "running",
"maxdisk": 21474836480,
"disk": 1928593408,
"name": "dweb",
"uptime": 6142,
"cpu": 0.000267168429599953,
"memhost": 0,
"netin": 13816712,
"type": "lxc",
"diskwrite": 0,
"maxmem": 2147483648,
"node": "myproxmox",
"id": "lxc/3206",
"template": 0,
"mem": 282599424
},
{
"cpu": 0,
"memhost": 0,
"netin": 0,
"id": "lxc/3204",
"node": "myproxmox",
"diskwrite": 0,
"maxmem": 2147483648,
"type": "lxc",
"mem": 0,
"template": 0,
"netout": 0,
"maxcpu": 2,
"vmid": 3204,
"diskread": 0,
"maxdisk": 32212254720,
"status": "stopped",
"uptime": 0,
"disk": 0,
"name": "st"
},
{
"status": "stopped",
"maxdisk": 32212254720,
"name": "dd",
"disk": 0,
"uptime": 0,
"netout": 0,
"diskread": 0,
"vmid": 3205,
"maxcpu": 2,
"type": "lxc",
"maxmem": 2147483648,
"diskwrite": 0,
"node": "myproxmox",
"id": "lxc/3205",
"mem": 0,
"template": 0,
"cpu": 0,
"memhost": 0,
"netin": 0
},
{
"vmid": 3206,
"diskread": 0,
"maxcpu": 2,
"netout": 0,
"name": "bn",
"disk": 0,
"uptime": 0,
"status": "stopped",
"maxdisk": 32212254720,
"netin": 0,
"memhost": 0,
"cpu": 0,
"mem": 0,
"template": 0,
"type": "lxc",
"diskwrite": 0,
"maxmem": 2147483648,
"id": "lxc/3206",
"node": "myproxmox"
},
{
"template": 0,
"mem": 0,
"id": "lxc/3207",
"node": "myproxmox",
"maxmem": 2147483648,
"diskwrite": 0,
"type": "lxc",
"netin": 0,
"memhost": 0,
"cpu": 0,
"uptime": 0,
"disk": 0,
"name": "sr",
"maxdisk": 32212254720,
"status": "stopped",
"maxcpu": 2,
"vmid": 3207,
"diskread": 0,
"netout": 0
},
{
"cpu": 0.00101524003247982,
"netin": 43778104,
"memhost": 0,
"diskwrite": 0,
"maxmem": 4294967296,
"id": "lxc/3206",
"node": "myproxmox",
"type": "lxc",
"template": 0,
"mem": 1246269440,
"netout": 3306660,
"maxcpu": 4,
"diskread": 444866560,
"vmid": 3206,
"status": "running",
"maxdisk": 21474836480,
"name": "fl",
"disk": 2775449600,
"uptime": 6159
},
{
"maxcpu": 2,
"diskread": 135847936,
"vmid": 3211,
"netout": 58082,
"uptime": 6097,
"disk": 2070020096,
"name": "wl",
"maxdisk": 21474836480,
"status": "running",
"memhost": 0,
"netin": 20131,
"cpu": 0.000587770545119897,
"mem": 168194048,
"template": 0,
"node": "myproxmox",
"id": "lxc/3211",
"diskwrite": 0,
"maxmem": 2122317824,
"type": "lxc"
},
{
"status": "running",
"maxdisk": 8589934592,
"name": "fz",
"disk": 2374500352,
"uptime": 6156,
"netout": 17112270,
"vmid": 3220,
"diskread": 120397824,
"maxcpu": 2,
"type": "lxc",
"diskwrite": 0,
"maxmem": 536870912,
"node": "myproxmox",
"id": "lxc/3220",
"mem": 56598528,
"template": 0,
"cpu": 0,
"netin": 12265457,
"memhost": 0
},
{
"mem": 205414400,
"template": 0,
"type": "lxc",
"diskwrite": 0,
"maxmem": 1073741824,
"node": "myproxmox",
"id": "lxc/4208",
"netin": 130659259,
"memhost": 0,
"cpu": 0.00213734743679963,
"name": "i0",
"disk": 3009675264,
"uptime": 6061,
"status": "running",
"maxdisk": 8589934592,
"vmid": 4208,
"diskread": 108961792,
"maxcpu": 1,
"netout": 61118056
},
{
"cpu": 0.0647260048777487,
"netin": 242219882,
"memhost": 0,
"type": "lxc",
"node": "myproxmox",
"id": "lxc/4209",
"diskwrite": 0,
"maxmem": 4244635648,
"mem": 2000261120,
"template": 0,
"netout": 191044162,
"vmid": 4209,
"diskread": 409378816,
"maxcpu": 3,
"maxdisk": 18253611008,
"status": "running",
"uptime": 6061,
"disk": 8598454272,
"name": "ir"
},
{
"template": 0,
"mem": 55681024,
"maxmem": 536870912,
"diskwrite": 0,
"node": "myproxmox",
"id": "lxc/4220",
"type": "lxc",
"memhost": 0,
"netin": 6820451,
"cpu": 0,
"name": "fn",
"disk": 2347892736,
"uptime": 6110,
"status": "running",
"maxdisk": 8589934592,
"maxcpu": 2,
"diskread": 94498816,
"vmid": 4220,
"netout": 84754
},
{
"status": "running",
"maxdisk": 21474836480,
"name": "pn",
"disk": 3445489664,
"uptime": 6100,
"netout": 197448,
"vmid": 4225,
"diskread": 17653760,
"maxcpu": 1,
"type": "lxc",
"maxmem": 536870912,
"diskwrite": 0,
"node": "myproxmox",
"id": "lxc/4225",
"template": 0,
"mem": 70057984,
"cpu": 0.00128240846207978,
"memhost": 0,
"netin": 6711234
}
]
}
dbug: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
StatusCode: OK ReasonPhrase: OK IsSuccessStatusCode: True
dbug: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
Method: GET, Url: https://192.168.13.230:8006/api2/json/cluster/resources?type=storage
trce: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
{
"data": []
}
dbug: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
StatusCode: OK ReasonPhrase: OK IsSuccessStatusCode: True
----- POSSIBLE PROBLEM PERMISSION 'Datastore.Audit' -----
----- VM 3205 lxc running -----
dbug: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
Method: GET, Url: https://192.168.13.230:8006/api2/json/nodes/myproxmox/lxc/3205/config
trce: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
{
"data": {
"hostname": "nd",
"features": "nesting=1",
"description": "Ubuntu 24.04 LTS \n* vim + editor=vim in /etc/environment\n* sudo berechtigte User in Gruppe admin und sudo in /etc/group aufnehmen\n* apt-listchanges \n* apt-file\n* unattended-upgrades \n* language-pack-de language-pack-de-base\n* dpkg-reconfigure locales -> de_DE.UTF-8\n* update-locale LANG=de_DE.UTF-8\n* postfix\n* logcheck\n* /etc/aliasses -> root an administrator@firstmail.duck ; newaliases\n* timedatectl set-timezone Europe/Berlin \n* Bug in Logrotate durch neue Version beseitigen: apt install software-properties-common; add-apt-repository ppa:adiscon/v8-stable; apt install -y rsyslog\n* mailutils\n* fail2ban\n* upgrade von Ubuntu 18.04 LTS -> Ubuntu 20.04 LTS\n* upgrade von Ubuntu 20.04 LTS -> Ubuntu 22.04 LTS\n* Upgrade von Ubuntu 22.04 LTS -> Ubuntu 24.04 LTS\n",
"swap": 512,
"cores": 2,
"lxc": [
[
"lxc.mount.entry",
"/rpool/home/na srv/na none bind 0 0"
]
],
"arch": "amd64",
"onboot": 1,
"net0": "name=eth0,bridge=vmbr3,gw=192.168.3.254,hwaddr=86:04:1B:17:C0:48,ip=192.168.3.201/24,type=veth",
"digest": "e6727d995bcae26a18705691d933f4554ba95104",
"memory": 2048,
"rootfs": "local-zfs:subvol-3205-disk-0,size=50G",
"ostype": "ubuntu",
"parent": "auto_cv4pve_hourly_251220011702",
"startup": "order=11"
}
}
dbug: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
StatusCode: OK ReasonPhrase: OK IsSuccessStatusCode: True
Skip VM problem storage space out of 100%
----- VM 3206 lxc running -----
dbug: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
Method: GET, Url: https://192.168.13.230:8006/api2/json/nodes/myproxmox/lxc/3206/config
trce: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
{
"data": {
"memory": 2048,
"startup": "order=10,up=10",
"rootfs": "local-zfs:subvol-3206-disk-0,size=20G",
"ostype": "ubuntu",
"parent": "auto_cv4pve_hourly_251220011706",
"digest": "b8e180a574f3d578522504da89db0bc06579622e",
"arch": "amd64",
"onboot": 1,
"net0": "name=eth0,bridge=vmbr3,gw=192.168.3.254,hwaddr=5A:1B:2F:6E:7B:F5,ip=192.168.3.202/24,type=veth",
"hostname": "db",
"features": "nesting=1",
"swap": 512,
"description": "Ubuntu 24.04 LTS \n* vim + editor=vim in /etc/environment\n* sudo berechtigte User in Gruppe admin und sudo in /etc/group aufnehmen\n* apt-listchanges \n* apt-file\n* unattended-upgrades \n* language-pack-de language-pack-de-base\n* dpkg-reconfigure locales -> de_DE.UTF-8\n* update-locale LANG=de_DE.UTF-8\n* postfix\n* logcheck\n* /etc/aliasses -> root an administrator@firstmail.duck ; newaliases\n* timedatectl set-timezone Europe/Berlin \n* dselect\n* Bug in Logrotate durch neue Version beseitigen: apt install software-properties-common; add-apt-repository ppa:adiscon/v8-stable; apt install -y rsyslog\n* mailutils\n* upgrade von Ubuntu 18.04 LTS -> Ubuntu 20.04 LTS\n* upgrade von Ubuntu 20.04 LTS -> Ubuntu 22.04 LTS\n* Upgrade von Ubuntu 22.04 LTS -> Ubuntu 24.04 LTS\n",
"cores": 2
}
}
dbug: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
StatusCode: OK ReasonPhrase: OK IsSuccessStatusCode: True
Skip VM problem storage space out of 100%
----- VM 3206 lxc running -----
dbug: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
Method: GET, Url: https://192.168.13.230:8006/api2/json/nodes/myproxmox/lxc/3206/config
trce: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
{
"data": {
"net0": "name=eth0,bridge=vmbr3,gw=192.168.3.254,hwaddr=36:AF:5E:AA:D3:DB,ip=192.168.3.210/24,type=veth",
"onboot": 1,
"unprivileged": 1,
"lxc": [
[
"lxc.mount.entry",
"/rpool/home/vl var/vl none bind 0 0"
]
],
"arch": "amd64",
"cores": 4,
"description": "Ubuntu 18.04 LTS \n* vim + editor=vim in /etc/environment\n* sudo berechtigte User in Gruppe admin und sudo in /etc/group aufnehmen\n* apt-listchanges \n* apt-file\n* unattended-upgrades \n* language-pack-de language-pack-de-base\n* dpkg-reconfigure locales -> de_DE.UTF-8\n* update-locale LANG=de_DE.UTF-8\n* postfix\n* logcheck\n* /etc/aliasses -> root an administrator@firstmail.duck ; newaliases\n* timedatectl set-timezone Europe/Berlin \n* postfix\n* dovecot\n* fail2ban\n* rspamd\n\nnoch offen:\n* Bug in Logrotate durch neue Version beseitigen: apt install software-properties-common; add-apt-repository ppa:adiscon/v8-stable; apt install -y rsyslog\n",
"swap": 512,
"hostname": "fl",
"parent": "auto_cv4pve_hourly_251220011712",
"rootfs": "local-zfs:subvol-3206-disk-0,size=20G",
"ostype": "ubuntu",
"startup": "order=5",
"memory": 4096,
"digest": "65c8e27c605ca1ac4f0fb0cdd26378e934494e79"
}
}
dbug: Corsinvest.ProxmoxVE.Api.PveClientBase[0]
StatusCode: OK ReasonPhrase: OK IsSuccessStatusCode: True
Skip VM problem storage space out of 100%
Total execution 00:00:00.1451442
@tonyblue2 commented on GitHub (Dec 22, 2025):
Before the update to Debian trixie I used this version:
./cv4pve-autosnap.old --version
[cv4pve-autosnap ASCII art banner]
Automatic snapshot VM/CT with retention (Made in Italy)
1.7.1
But it doesn't work:
root@virtualhost:/home/scripts# /home/scripts/cv4pve-autosnap.old --host=192.168.13.230 --username=snapy@pve --password=xxx --vmid=3205,3206,3207 snap --label='cv4pve_hourly' --debug --keep=24
Method: POST, Url: https://192.168.1.230:8006/api2/json/access/ticket
Parameters:
password : snapshot
username : snapshot
realm : pve
Problem connection!
@franklupo commented on GitHub (Dec 22, 2025):
It looks like you don't have permission to read the storage. See:
https://github.com/Corsinvest/cv4pve-autosnap?tab=readme-ov-file#security--permissions
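A sketch of what the README's permission setup amounts to, assuming snapshot@pve is the API user in question and that the built-in PVEAuditor role (which includes Datastore.Audit) is acceptable:

# Grant read access to storage information on the whole /storage path.
# A narrower custom role containing Datastore.Audit would also work.
pveum acl modify /storage --users snapshot@pve --roles PVEAuditor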
@tonyblue2 commented on GitHub (Dec 22, 2025):
User snapshot has the following permissions:
VM.Snapshot, VM.Audit, Pool.Audit, Datastore.Audit, VM.Clone, VM.Backup
@franklupo commented on GitHub (Dec 22, 2025):
Does the previous version of autosnap v1.16.0 work?
Thanks
@tonyblue2 commented on GitHub (Dec 22, 2025):
I tried v1.16.0, but I get the same error.
@franklupo commented on GitHub (Dec 22, 2025):
Try running as root; if there are no problems, it means it is a permissions problem.
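For example, the same invocation authenticated as root@pam (a sketch; the password is a placeholder):

# If this succeeds where the restricted user fails, it is an ACL problem.
cv4pve-autosnap --host=192.168.1.200 --username=root@pam --password='***' \
  --vmid=3205,3206,3207 snap --label='cv4pve_hourly' --keep=24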
@tonyblue2 commented on GitHub (Dec 22, 2025):
Thank you very much for the hint. It works when using the root user. Which permission might additionally be required after the update to Debian trixie / Proxmox 9.1.2?
@franklupo commented on GitHub (Dec 22, 2025):
maybe Sys.Audit
@tonyblue2 commented on GitHub (Dec 22, 2025):
Unfortunately, it still does not work with the Sys.Audit permission. I have now assigned all available permissions to the snapshot role. Nevertheless, it still does not work. Do the permissions in Proxmox need to be reloaded, or does the server need to be restarted first?
@franklupo commented on GitHub (Dec 22, 2025):
Do you have any specific ACLs?
@tonyblue2 commented on GitHub (Dec 22, 2025):
pvesh get /access/acl
┌───────────┬───────────────┬───────┬──────────────┬───────────┐
│ path │ roleid │ type │ ugid │ propagate │
╞═══════════╪═══════════════╪═══════╪══════════════╪═══════════╡
...
├───────────┼───────────────┼───────┼──────────────┼───────────┤
│ /vms/3201 │ snapshot │ user │ snapshot@pve │ 1 │
├───────────┼───────────────┼───────┼──────────────┼───────────┤
│ /vms/3202 │ snapshot │ user │ snapshot@pve │ 1 │
├───────────┼───────────────┼───────┼──────────────┼───────────┤
│ /vms/3210 │ snapshot │ user │ snapshot@pve │ 1 │
├───────────┼───────────────┼───────┼──────────────┼───────────┤
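These entries grant the snapshot role only on /vms/* paths; nothing covers /storage, which matches the empty storage list in the debug log above. The effective rights on that path can be checked directly, e.g.:

# Show the effective permissions of the API user on the storage path.
pveum user permissions snapshot@pve --path /storage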
@Lxeon commented on GitHub (Dec 24, 2025):
I got the same problem here, but I found out it's because I mount an mp1 like this:
Is this a bug or is it expected behavior? @franklupo
@franklupo commented on GitHub (Dec 24, 2025):
Execute with --debug and attach the log.
@Lxeon commented on GitHub (Dec 24, 2025):