[GH-ISSUE #362] ### Follow-up: auto-detect still missing tagged LXCs, incorrectly identifying VMs/CTs #110

Closed
opened 2026-02-26 12:40:12 +03:00 by kerem · 3 comments

Originally created by @CANTI-BOT on GitHub (Nov 29, 2025).
Original GitHub issue: https://github.com/community-scripts/ProxmoxVE-Local/issues/362

Have you read and understood the above guidelines?

yes

📝 Provide a clear and concise description of the issue.

PVE Scripts Local: v0.5.1 (fresh install in LXC)
Deployment: 3-node PVE cluster

  • <NODE_A>: runs the pvescriptslocal LXC and several Community Scripts LXCs
  • <NODE_B>: runs one Community Scripts LXC
  • <NODE_C>: only VMs (no LXCs)

Symptom

  • Auto-detect finds only the LXC on <NODE_B>.
  • It does not detect any of the LXCs on <NODE_A>, even though they all have community-script tags.
  • <NODE_C> is expected to show nothing (no LXCs, only VMs).

🔄 Steps to reproduce the issue.

Steps to Reproduce (LXC autodetect issue)

  1. In a 3-node PVE cluster, deploy PVE Scripts Local v0.5.1 in an LXC on <NODE_A> using the official helper script.
  2. On <NODE_A>, create several LXCs using Community Scripts, or manually ensure they have tags including community-script, for example:
    • tags: community-script;<ROLE_1>
    • tags: community-script;<ROLE_2>
    • tags: backup;community-script;<ROLE_3>
  3. On <NODE_B>, create at least one LXC with a tag including community-script, e.g.:
    • tags: community-script;<ROLE_B1>
  4. On <NODE_C>, have only VMs (no LXCs).
  5. From inside the pvescriptslocal container, verify SSH + pct access to each node works:
    ssh root@<NODE_A> 'pct list'
    ssh root@<NODE_B> 'pct list'
    ssh root@<NODE_C> 'pct list'
    
  6. In the PVE Scripts Local web UI:
    • Go to Manage PVE Servers.
    • Add <NODE_A>, <NODE_B>, and <NODE_C> with the same root/SSH settings used above.
    • Click Test Host Connection for each server and confirm all succeed.
  7. Go to the Installed Scripts tab and click
    “Auto-Detect LXC Containers (Must contain a tag with 'community-script')”.
  8. Observe that:
    • Only the LXC on <NODE_B> appears in the list.
    • None of the tagged LXCs on <NODE_A> are detected.
    • No LXCs from <NODE_C> appear (expected, since there are none).

Paste the full error output (if available).

Verification from inside the pve-scripts-local container

ssh root@<NODE_A> '
  echo "=== <NODE_A> ==="
  pct list
  for id in $(pct list | awk "NR>1 {print \$1}"); do
    echo "--- CT $id tags:"
    pct config "$id" | grep -i "^tags" || echo "no tags set on $id"
  done
'

Output (shortened, host/IPs anonymized):

=== <NODE_A> ===
VMID       Status     Lock         Name
<CTID_1>   running                 <LXC_NAME_1>
<CTID_2>   stopped                 <LXC_NAME_2>
<CTID_3>   running                 <LXC_NAME_3>

--- CT <CTID_1> tags:
tags: community-script;<ROLE_1>
--- CT <CTID_2> tags:
tags: community-script;<ROLE_2>
--- CT <CTID_3> tags:
tags: backup;community-script;<ROLE_3>

<NODE_B>:

ssh root@<NODE_B> '
  pct list
  for id in $(pct list | awk "NR>1 {print \$1}"); do
    echo "--- CT $id tags:"
    pct config "$id" | grep -i "^tags" || echo "no tags set on $id"
  done
'
VMID       Status     Lock         Name
<CTID_B1>  stopped                 <LXC_NAME_B1>

--- CT <CTID_B1> tags:
tags: community-script;<ROLE_B1>

<NODE_C>:

ssh root@<NODE_C> 'pct list'
VMID  Status  Lock  Name
# (no LXC containers here, only VMs)

Despite this, the Installed Scripts → Auto-Detect LXC Containers button only creates a record for the LXC on <NODE_B>. Nothing from <NODE_A> shows up, even though:

  • SSH connectivity from the pve-scripts-local LXC to <NODE_A> works
  • pct list and pct config work
  • Tags clearly contain community-script
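For reference, the per-container filter auto-detect is presumably applying can be sketched as follows. This is a minimal sketch of the *expected* behavior, not the project's actual code; `has_community_tag` is a hypothetical helper that checks the `tags:` line of a `pct config <id>` output:

```shell
#!/bin/sh
# Hypothetical sketch of the tag filter auto-detect is expected to apply.
# Takes the full "pct config <id>" output and succeeds if the tags line
# contains "community-script". The real PVE Scripts Local logic may differ.
has_community_tag() {
  printf '%s\n' "$1" | grep -q '^tags:.*community-script'
}
```

In practice this check would run once per VMID from `pct list`, over SSH, on every configured node; all three `<NODE_A>` containers above would pass it.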

🖼️ Additional context (optional).

Additional issue: VM shell uses LXC path and autodetect doesn’t rediscover the VM

I also noticed a separate problem with a VM created via a Community Scripts VM installer (Docker VM):

  1. I deployed a VM on <NODE_C> using one of the Community Scripts VM installers (e.g. a Docker VM script).

  2. The installation completed successfully and an entry was created in Installed Scripts:

    • Type badge shows VM
    • Script path looks like scripts/vm/<SCRIPT_NAME>.sh
    • Server column correctly shows <NODE_C>
  3. When I click Start → Shell for this VM entry from the PVE Scripts Local UI, the shell session fails with:

    [START] Starting shell session for container <VMID_VM1>...
    Configuration file 'nodes/<NODE_C>/lxc/<VMID_VM1>.conf' does not exist
    shell session ended with exit code: 2
    

This suggests the Shell feature is trying to open a container shell for <VMID_VM1> by looking under lxc/<VMID_VM1>.conf, even though this is a VM, not an LXC (its config lives under qemu-server/<VMID_VM1>.conf on the node).
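A type-aware dispatch would avoid this. The sketch below is hypothetical (the helper name and structure are mine, not the project's code); the key point is that LXC configs live under `nodes/<node>/lxc/` and VM configs under `nodes/<node>/qemu-server/`, so the entry type has to pick the command:

```shell
#!/bin/sh
# Hypothetical type-aware console dispatch; not the project's actual code.
# LXCs get an interactive shell via pct; for VMs the closest equivalent is
# a serial console via qm, which only works if a serial port is configured.
console_cmd() {
  type="$1"; vmid="$2"
  case "$type" in
    lxc) echo "pct enter $vmid" ;;     # interactive container shell
    vm)  echo "qm terminal $vmid" ;;   # serial console, if available
    *)   echo "unknown type: $type" >&2; return 1 ;;
  esac
}
```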

  4. As an experiment, I removed this VM entry from the PVE Scripts Local database (via the UI “Delete” / cleanup action).
  5. I then tried to get it back by running the Auto-Detect LXC Containers button again. The VM is not rediscovered, so there is currently no way to automatically re-associate that VM script installation with the existing VM on <NODE_C>.

So there seem to be two related issues:

  • The VM Shell action is using the LXC path (lxc/<VMID_VM1>.conf) instead of VM/qemu logic for a VM-type script installation.
  • Once the VM’s Installed Script entry is removed, there is no mechanism (auto-detect or otherwise) to detect that a VM created via a VM script already exists on <NODE_C> and re-add it.

A screenshot is attached showing the shell failure and the Installed Scripts view for reference.


If you’d like additional logs, I can capture journalctl -u pvescriptslocal -n 200 and /var/log/pve-scripts.log right after hitting the auto-detect button and/or attempting to start a VM shell session.

kerem closed this issue and added the bug label (2026-02-26).

@michelroegl-brunner commented on GitHub (Dec 1, 2025):

First I want to say thank you for that detailed report. I need to dig a bit deeper into why it does not detect all VMs and LXCs for you. The additional logs would be very helpful to see where it falls over. Logs from when you try to start/stop a VM and it fails would also be great, as this is something I cannot reproduce at the moment.

For the VM shell problem: the Shell button is not supposed to appear for VMs at all, as such a feature does not exist on Proxmox. This was an oversight on my side; I just did not catch it. The button will be removed with the next update.


@CANTI-BOT commented on GitHub (Dec 7, 2025):

Additional finding: autodetect ignores 5-digit VMIDs

I did some controlled tests on a single node with the same helper script, same node, same network, same tags:

  1. Ran the Grocy LXC helper script 3 times on <NODE_X> with these VMIDs:

    • 999
    • 9999
    • 99999
      All three containers have tags including community-script.
  2. Hit Installed Scripts → Auto-Detect LXC Containers.

Result in the UI:

  • 999 is detected (shows up as an LXC entry).
  • 9999 is detected.
  • 99999 is not detected.

Screenshots attached: one from the PVE node showing the three LXCs (999, 9999, 99999), and one from PVE Scripts Local showing only 999 and 9999 present as installed scripts.


This also matches my earlier experience:

  • LXCs with IDs like <15021> and <50111> (both with community-script tags) are not detected.
  • LXCs with IDs 100, 101, 102, 999, 9999 are detected as expected.

So it looks like the autodetect logic currently only considers VMIDs up to 4 digits (<= 9999).
5-digit IDs appear to be ignored entirely during the scan, even though Proxmox itself allows them.
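The observed 999/9999-detected, 99999-ignored pattern is consistent with a VMID regex capped at four digits. The exact pattern used by PVE Scripts Local is an assumption here; this sketch only illustrates the suspected behavior against the test IDs above (Proxmox itself accepts VMIDs from 100 up to 999999999):

```shell
#!/bin/sh
# Illustration of the suspected bug: a four-digit VMID pattern silently
# drops 5-digit IDs. Both patterns below are assumptions for illustration.
matches() { echo "$2" | grep -Eq "$1"; }

buggy='^[0-9]{3,4}$'   # assumed buggy pattern: 999 and 9999 match, 99999 does not
fixed='^[0-9]{3,9}$'   # covers the full Proxmox VMID range

for id in 999 9999 99999; do
  if matches "$buggy" "$id"; then echo "buggy: $id detected"; else echo "buggy: $id ignored"; fi
  if matches "$fixed" "$id"; then echo "fixed: $id detected"; fi
done
```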


@michelroegl-brunner commented on GitHub (Dec 7, 2025):

That is a good find; I only have 3-digit IDs on my test node. I'll look into that.
