[GH-ISSUE #403] PVE-Scripts-Local: Add checks when removing a script (remove LXC) #116

Closed
opened 2026-02-26 12:40:13 +03:00 by kerem · 10 comments
Owner

Originally created by @Syxpi on GitHub (Dec 13, 2025).
Original GitHub issue: https://github.com/community-scripts/ProxmoxVE-Local/issues/403

Originally assigned to: @michelroegl-brunner on GitHub.

### 🌟 Briefly describe the feature

Ensure that PVE-Script-Local does not delete an LXC/VM without verifying it is the right guest

### 📝 Detailed description

Hello, after testing PVE-Script-Local, I wanted to delete an LXC created by PVE-Script-Local (a Prometheus LXC). But when I deleted this LXC, it destroyed my NAS (instead of deleting the LXC, it literally deleted my VM that contained my NAS).

Basically, my Prometheus had VMID 100, my TrueNAS had VMID 104. But, there's a but. The script said that Prometheus was VMID 104, when it wasn't.

As a result, when I deleted the LXC on the PVE-Local side, it just... destroyed my NAS. Since it thought it was the one, when it wasn't at all.

As far as I can tell, no checks were performed on the script side, which I find extremely dangerous.

Imagine trusting this script, only to have it destroy VMs that didn't ask for it. It completely destroys trust.

The worst part is that the VM that was deleted wasn't even linked to PVE-Script-Local. That's the worst part ;-;

I put it here, but I don't know if I should put it in another repository. If I need to put it somewhere else, please let me know.

### 💡 Why is this useful?

Before deleting, check that the guest carries the "community-script" tag; or have PVE-Script-Local keep a record, in a file it creates, of the LXCs it has created, with a specific ID that can also be found in the LXC description, for example.

This avoids destroying VMs/LXCs that are not related to PVE-Script-Local at all, while maintaining a high level of trust. Currently, that trust is compromised (on my side).
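The tag check proposed above could be sketched like this. The "community-script" tag name comes from this issue; the helper names and the wrapper around `pct destroy` are illustrative, not PVE-Script-Local's actual code:

```shell
#!/bin/sh
# Hypothetical pre-destroy guard: only destroy an LXC if its Proxmox
# config lists the "community-script" tag.

# Reads "pct config <id>" output on stdin; succeeds if the tags line
# (e.g. "tags: community-script;monitoring") contains community-script.
has_community_tag() {
    sed -n 's/^tags:[[:space:]]*//p' | tr ';' '\n' | grep -qx 'community-script'
}

# Wrapper around the real destroy command (requires a PVE node to run).
safe_destroy_lxc() {
    ctid="$1"
    if pct config "$ctid" | has_community_tag; then
        pct destroy "$ctid"
    else
        echo "Refusing to destroy CT $ctid: no community-script tag" >&2
        return 1
    fi
}
```

A guard like this fails closed: anything the tool did not tag (such as an unrelated NAS VM) is refused by default.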

kerem 2026-02-26 12:40:13 +03:00
Author
Owner

@MickLesk commented on GitHub (Dec 13, 2025):

Wrong repo


@Syxpi commented on GitHub (Dec 14, 2025):

Update: My trust in PVE-Script-Local is completely destroyed.

After more than 48 hours of trying hard to recover my data because of this crap, the final conclusion was that it is definitely unrecoverable.

I clearly do not thank you.
Really, I don't.


@MickLesk commented on GitHub (Dec 14, 2025):

You know that's an unstable Alpha/Beta state?


@Syxpi commented on GitHub (Dec 14, 2025):

> You know that's an unstable Alpha/Beta state?

But if it's unstable/beta, what's it doing in the release scripts then? (the ProxmoxVE repo instead of ProxmoxVED)

There's no logic there.
After verification: why did you put "Beta" at the bottom of the readme? Why not at the top?
Because at NO moment do we know that it's a beta.
I repeat,
AT NO FUCKING MOMENT.


@michelroegl-brunner commented on GitHub (Dec 14, 2025):

So I need to make my point here as well.

The app did what it is supposed to do: it destroyed the container you told it to. You had to reconfirm that you wanted to destroy this container. You entered the ID in the form and pressed OK. The form also says to make sure this is the right container and that all its data will be gone.

As for the wrong ID, I don't buy that. I developed this thing and used it for countless hours during testing, and not once did it fail to detect the correct LXC ID for me. No one else has reported that either.

My question to you, so I can investigate: was your NAS an LXC or a VM? Can you share the logs, so I can check whether it was my fault or maybe a misclick on your side?

And you learned a lesson here as well: always have backups. Shit can happen anytime.


@michelroegl-brunner commented on GitHub (Dec 14, 2025):

> > You know that's an unstable Alpha/Beta state?
>
> But if it's unstable/beta, what's it doing in the release scripts then? (the ProxmoxVE repo instead of ProxmoxVED)
>
> There's no logic there.
> After verification: why did you put "Beta" at the bottom of the readme? Why not at the top?
> Because at NO moment do we know that it's a beta.
> I repeat,
> AT NO FUCKING MOMENT.

The 0.x.x version alone tells you it is not a finished product.


@Syxpi commented on GitHub (Dec 14, 2025):

> The app did what it is supposed to do: it destroyed the container you told it to. You had to reconfirm that you wanted to destroy this container. You entered the ID in the form and pressed OK. The form also says to make sure this is the right container and that all its data will be gone.

It never asked me for the VMID, so... that's weird.

> My question to you, so I can investigate: was your NAS an LXC or a VM? Can you share the logs, so I can check whether it was my fault or maybe a misclick on your side?

My NAS was a VM; TrueNAS in an LXC is something I've never seen in my life.

What about the logs? Where can I find them? I deleted the LXC after this incident, as I don't keep dangerous tools on my infrastructure.

> And you learned a lesson here as well: always have backups. Shit can happen anytime.

And you're right. But it's hard to back up some data when... I don't have any storage available anywhere on my infrastructure ;-;

> The 0.x.x version alone tells you it is not a finished product.

That doesn't mean anything, because some programs I was using at version 0.x.x were fully released and finished. So... that really doesn't mean anything.


@michelroegl-brunner commented on GitHub (Dec 14, 2025):

So if your TrueNAS was a VM and Prometheus an LXC, then things don't add up here. If the app thought Prometheus had ID 104 instead of 100 and tried to destroy it, it would fail, since LXCs and VMs use different commands here: pct destroy and qm destroy.

So, to be totally honest, I don't think the app can be the cause. Are you sure you didn't misclick by accident? If you are sure you didn't, then please share the logs, so we can understand what went wrong.

To add to the versioning topic: this is called semantic versioning, Major.Minor.Patch.
Software is normally considered finished when it reaches a 1.0.0 release. Everything before that you should consider alpha/beta unstable. But I will make that clearer in the readme.
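The pct/qm distinction described above can be made explicit by checking where the guest's config lives on the node: on a PVE host, container configs sit under /etc/pve/lxc/ and VM configs under /etc/pve/qemu-server/. The helper below is a sketch of that check, with the directory parameterized only so it can be exercised off-node:

```shell
#!/bin/sh
# Sketch: decide between pct and qm by config file location.
# On a real PVE node PVE_DIR is /etc/pve; it is overridable here
# purely so the sketch can be tested outside a cluster.
PVE_DIR="${PVE_DIR:-/etc/pve}"

guest_type() {
    id="$1"
    if [ -f "$PVE_DIR/lxc/$id.conf" ]; then
        echo lxc        # would be destroyed with: pct destroy "$id"
    elif [ -f "$PVE_DIR/qemu-server/$id.conf" ]; then
        echo vm         # would be destroyed with: qm destroy "$id"
    else
        echo unknown    # refuse to destroy anything
        return 1
    fi
}
```

A caller that resolves the type this way before destroying cannot run qm destroy against an ID that belongs to an LXC, or vice versa.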


@Syxpi commented on GitHub (Dec 14, 2025):

> So if your TrueNAS was a VM and Prometheus an LXC, then things don't add up here. If the app thought Prometheus had ID 104 instead of 100 and tried to destroy it, it would fail, since LXCs and VMs use different commands here: pct destroy and qm destroy.

That's the thing I don't understand. How could it have destroyed it, when the commands are completely different? Prometheus was in an LXC, TrueNAS in a KVM. The two commands are different, but it still managed to destroy the VM. I don't understand anything anymore.

And even if I misclicked, that's impossible, because Prometheus was created from the WebUI. So in theory, it knows that it's an LXC.

And I checked my PVE logs: for VMID 100, they say "qmdestroy" from when I destroyed it manually after that. So... that's really weird.


@Syxpi commented on GitHub (Dec 14, 2025):

I checked my PVE logs again:

![Image](https://img.syxpi.fr/u/uYBvEt.png)

We have proof that VMID 100 really was an LXC.
And I don't think I had precisely 12 seconds to shut down the VM and destroy it right after, knowing that my TrueNAS VM takes about 25 seconds to shut down.

And I can confirm it again, because I have proof of it:

![Image](https://img.syxpi.fr/u/oiOAKU.png)

With Zipline proof:

![Image](https://img.syxpi.fr/u/j9lSKv.png)

And I can say that's good proof, because I even have the screenshots on my computer.
