[GH-ISSUE #86] [Bug]: invalid bootorder: device 'scsi0' does not exist' when migrating a VM from ESXi #131

Closed
opened 2026-03-13 17:17:23 +03:00 by kerem · 9 comments

Originally created by @nothing-fr on GitHub (Mar 9, 2026).
Original GitHub issue: https://github.com/adminsyspro/proxcenter-ui/issues/86

Bug Description

I tried the new VM migration feature and the migration of my VM failed with the following error:

![Image](https://github.com/user-attachments/assets/e349b900-85a2-4cf9-849f-4777bb00690b)

Steps to Reproduce

  • Migrate a VM from ESXi to Proxmox

Expected Behavior

  • No errors

Actual Behavior

  • PVE 500 /nodes/proxmox-php-hyp01/qemu/121/config: {"message":"invalid bootorder: device 'scsi0' does not exist'\n","data":null}

ProxCenter Version

62eebd9

Proxmox VE Version

9.1.6

Browser

Version 145.0.7632.159 (Official Build) (64-bit)

Logs / Screenshots

No response

kerem 2026-03-13 17:17:23 +03:00
  • closed this issue
  • added the bug label

@adminsyspro commented on GitHub (Mar 9, 2026):

Hi, thanks for the report!

Root cause: When migrating an EFI-based VM, Proxmox allocates disk-0 for the efidisk0. The first imported data disk then becomes disk-1, but the migration code was hardcoding disk-0 for the attach command — causing the SCSI attach to fail silently. The subsequent boot=order=scsi0 then fails because scsi0 was never attached.

Fix: The migration pipeline now parses the actual volume name from the qm disk import output instead of guessing the disk index. This has been fixed for both ESXi and XCP-ng pipelines.

Deployed in commit bedd4f5. Please update and retry the migration — it should work correctly now for both BIOS and EFI VMs.


@nothing-fr commented on GitHub (Mar 10, 2026):

FYI: the VM is not EFI, just BIOS.
I still have the same error. Here are more logs:

proxcenter-orchestrator  | 2026-03-10T10:33:26+01:00 INF Executing SSH command command="echo ok" host=192.168.1.25 port=22 user=root
proxcenter-orchestrator  | 2026-03-10T10:33:26+01:00 INF SSH command executed successfully command="echo ok" exitCode=0 host=192.168.1.25
proxcenter-frontend      | [ssh] executed via orchestrator on 192.168.1.25
proxcenter-orchestrator  | 2026-03-10T10:33:29+01:00 INF Executing SSH command command="nohup bash -c 'curl -sk -b \"vmware_soap_session=\"f24e9b28067f4db0253043e323aa77fc6074d450\"\" -o \"/tmp/proxcenter-mig-cmmkex5b8000g01rtccjm7njp-disk0.vmdk\" -w '\"'\"'{\"speed\":%{speed_download},\"size\":%{size_download},\"time\":%{time_total}}'\"'\"' \"https://192.168.1.51/folder/gitlab-test-02/gitlab-test-02-flat.vmdk?dcPath=ha-datacenter&dsName=testAFF_nfs_b5_01\" > \"/tmp/proxcenter-mig-cmmkex5b8000g01rtccjm7njp-disk0.stats\" 2>&1; echo $? > \"/tmp/proxcenter-mig-cmmkex5b8000g01rtccjm7njp-disk0.pid.exit\"' > /dev/null 2>&1 & echo $!" host=192.168.1.25 port=22 user=root
proxcenter-orchestrator  | 2026-03-10T10:33:30+01:00 INF SSH command executed successfully command="nohup bash -c 'curl -sk -b \"vmware_soap_session=\"f24e9b28067f4db0253043e323aa77fc6074d450\"\" -o \"/tmp/proxcenter-mig-cmmkex5b8000g01rtccjm7njp-disk0.vmdk\" -w '\"'\"'{\"speed\":%{speed_download},\"size\":%{size_download},\"time\":%{time_total}}'\"'\"' \"https://192.168.1.51/folder/gitlab-test-02/gitlab-test-02-flat.vmdk?dcPath=ha-datacenter&dsName=testAFF_nfs_b5_01\" > \"/tmp/proxcenter-mig-cmmkex5b8000g01rtccjm7njp-disk0.stats\" 2>&1; echo $? > \"/tmp/proxcenter-mig-cmmkex5b8000g01rtccjm7njp-disk0.pid.exit\"' > /dev/null 2>&1 & echo $!" exitCode=0 host=192.168.1.25
proxcenter-frontend      | [ssh] executed via orchestrator on 192.168.1.25
proxcenter-orchestrator  | 2026-03-10T10:33:30+01:00 INF Executing SSH command command="echo 3042079 > \"/tmp/proxcenter-mig-cmmkex5b8000g01rtccjm7njp-disk0.pid\"" host=192.168.1.25 port=22 user=root
proxcenter-orchestrator  | 2026-03-10T10:33:30+01:00 INF SSH command executed successfully command="echo 3042079 > \"/tmp/proxcenter-mig-cmmkex5b8000g01rtccjm7njp-disk0.pid\"" exitCode=0 host=192.168.1.25
proxcenter-frontend      | [ssh] executed via orchestrator on 192.168.1.25
proxcenter-orchestrator  | 2026-03-10T10:33:33+01:00 WRN SSH command not in allowlist command="cat \"/tmp/proxcenter-mig-cmmkex5b8000g01rtccjm7njp-disk0.pid.exit\" 2>/dev/null || echo RUNNING"
proxcenter-frontend      | [ssh] orchestrator rejected command, falling back to ssh2 for 192.168.1.25
proxcenter-frontend      | [ssh] executed via ssh2 on 192.168.1.25
proxcenter-orchestrator  | 2026-03-10T10:33:33+01:00 WRN SSH command not in allowlist command="stat -c %s \"/tmp/proxcenter-mig-cmmkex5b8000g01rtccjm7njp-disk0.vmdk\" 2>/dev/null || echo 0"
proxcenter-frontend      | [ssh] orchestrator rejected command, falling back to ssh2 for 192.168.1.25
proxcenter-frontend      | [ssh] executed via ssh2 on 192.168.1.25

<... polling loop repeats during copy ...>

proxcenter-orchestrator  | 2026-03-10T10:43:01+01:00 WRN SSH command not in allowlist command="rm -f \"/tmp/proxcenter-mig-cmmkex5b8000g01rtccjm7njp-disk0.pid\" \"/tmp/proxcenter-mig-cmmkex5b8000g01rtccjm7njp-disk0.pid.exit\" \"/tmp/proxcenter-mig-cmmkex5b8000g01rtccjm7njp-disk0.stats\""
proxcenter-frontend      | [ssh] orchestrator rejected command, falling back to ssh2 for 192.168.1.25
proxcenter-frontend      | [ssh] executed via ssh2 on 192.168.1.25
proxcenter-orchestrator  | 2026-03-10T10:43:14+01:00 WRN SSH command not in allowlist command="rm -f \"/tmp/proxcenter-mig-cmmkex5b8000g01rtccjm7njp-disk0.vmdk\""
proxcenter-frontend      | [ssh] orchestrator rejected command, falling back to ssh2 for 192.168.1.25
proxcenter-frontend      | [ssh] executed via ssh2 on 192.168.1.25
proxcenter-orchestrator  | 2026-03-10T10:44:16+01:00 WRN SSH command not in allowlist command="rm -f \"/tmp/proxcenter-mig-cmmkex5b8000g01rtccjm7njp-disk0.qcow2\""
proxcenter-frontend      | [ssh] orchestrator rejected command, falling back to ssh2 for 192.168.1.25
proxcenter-frontend      | [ssh] executed via ssh2 on 192.168.1.25
proxcenter-orchestrator  | 2026-03-10T10:44:18+01:00 INF Executing SSH command command="qm set 121 --scsi0 proxmox_prod_nfs_b5_01:vm-121-disk-0,discard=on" host=192.168.1.25 port=22 user=root
proxcenter-orchestrator  | 2026-03-10T10:44:18+01:00 ERR SSH command failed error="command failed: Process exited with status 255" command="qm set 121 --scsi0 proxmox_prod_nfs_b5_01:vm-121-disk-0,discard=on" host=192.168.1.25
proxcenter-frontend      | [ssh] executed via orchestrator on 192.168.1.25

@adminsyspro commented on GitHub (Mar 10, 2026):

Thanks for the detailed logs!

Looking at the trace, the attach command is: qm set 121 --scsi0 proxmox_prod_nfs_b5_01:vm-121-disk-0,discard=on

The issue is that it's trying to attach disk-0, but if your source VM uses UEFI/EFI boot, Proxmox allocates disk-0 for the EFI disk during VM creation. The imported data disk then becomes disk-1, but the code is falling back to disk-0.

Could you confirm a few things so we can pinpoint the exact cause?

  1. Is the source VM EFI/UEFI-based? (or BIOS?)
  2. How many disks does the source VM have?
  3. Can you share the full migration log from the ProxCenter UI (the migration dialog shows a log panel) — we need to see the output of the qm disk import step specifically, to check if the volume name parsing is working correctly.
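In the meantime, a quick way to see where the imported disk actually landed is to list the unattached volumes in the VM config. A rough sketch, assuming plain `qm config <vmid>` text output (the helper name is hypothetical, not ProxCenter code):

```python
def find_unused_volumes(qm_config_output: str) -> list[str]:
    # `qm config <vmid>` prints one "key: value" pair per line; disks that
    # were imported but never attached show up as unused0, unused1, ...
    volumes = []
    for line in qm_config_output.splitlines():
        key, sep, value = line.partition(":")
        if sep and key.strip().startswith("unused"):
            volumes.append(value.strip())
    return volumes
```

If the output lists an `unusedN` volume, the import itself succeeded and only the attach step failed.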

@nothing-fr commented on GitHub (Mar 10, 2026):

  1. The VM is BIOS:
     ![Image](https://github.com/user-attachments/assets/5f89ffea-2ee6-4dad-a14b-838ecb47546b)

  2. Only 1 disk:
     ![Image](https://github.com/user-attachments/assets/d00a35b9-3ec9-4a19-84ec-c10617299414)

  3. Here it is:

[10:33:26] Starting pre-flight checks...
[10:33:26] Connecting to ESXi host https://192.168.1.51...
[10:33:26] ✓ Authenticated as root
[10:33:26] Retrieving VM configuration for "304"...
[10:33:26] ✓ VM config: 1 vCPU, 2.0 GB RAM, 1 disk(s), firmware=bios
[10:33:26] Testing SSH to Proxmox node proxmox-php-hyp01 (192.168.1.25)...
[10:33:26] ✓ SSH connectivity OK
[10:33:26] Target storage "proxmox_prod_nfs_b5_01": 274.2 GB free, need 60.0 GB
[10:33:26] Allocating VMID on Proxmox cluster...
[10:33:26] Allocated VMID 121
[10:33:26] Creating VM: gitlab-test-02 (l26, seabios, virtio-scsi-single)...
[10:33:29] ✓ VM 121 created on proxmox-php-hyp01
[10:33:29] [Disk 1/1] Transferring "Hard disk 1" (60.0 GB, thin)...
[10:33:29] Downloading VMDK from ESXi (60.0 GB)...
[10:43:01] ✓ Download complete: 60.0 GB in 570s (107.9 MB/s)
[10:43:01] Converting to qcow2 format...
[10:43:14] ✓ Conversion to qcow2 complete
[10:43:16] Importing disk into storage "proxmox_prod_nfs_b5_01"...
[10:44:18] ✓ Disk 1 imported and attached as scsi0
[10:44:18] Configuring VM (boot order, agent)...
[10:44:19] ✗ Migration failed: PVE 500 /nodes/proxmox-php-hyp01/qemu/121/config: {"data":null,"message":"invalid bootorder: device 'scsi0' does not exist'\n"}

@adminsyspro commented on GitHub (Mar 10, 2026):

Thanks for the logs, very helpful!

We found two bugs:

  1. Silent failure masking: The SSH execution layer was ignoring command failures from the orchestrator (HTTP 200 with success: false was treated as success). So qm set --scsi0 was failing with exit 255 but the migration log showed it as successful.

  2. qm set via SSH unreliable: On some cluster configurations, qm set via SSH returns exit 255. We've replaced the disk attach step with the PVE REST API (PUT /config), which is more reliable.

Both fixes are deployed in commit 7ee935d. Please update and retry the migration — it should work correctly now. If it still fails, you'll get a proper error message this time instead of a silent failure.
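The first fix boils down to treating the orchestrator's payload, not the HTTP status, as the source of truth. A minimal sketch (the `success` and `exitCode` field names mirror the log output above, but the schema shown here is otherwise illustrative):

```python
def ensure_remote_success(response: dict) -> None:
    # An HTTP 200 from the orchestrator only means the request was delivered;
    # the remote command itself may still have failed (e.g. qm set exiting 255).
    if not response.get("success") or response.get("exitCode", 1) != 0:
        raise RuntimeError(f"remote command failed: {response}")
```

Raising here makes the migration abort at the real failure point (the attach), instead of carrying on to the boot-order step with a misleading error.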


@nothing-fr commented on GitHub (Mar 10, 2026):

OK, after the update, it still doesn't work, but with new logs:

[12:37:38] Importing disk into storage "proxmox_prod_nfs_b5_01"...
[12:38:42] ⚠ Warning: Could not auto-attach scsi0: PVE 500 /nodes/proxmox-php-hyp01/qemu/121/config: {"message":"unable to parse directory volume name 'vm-121-disk-0'\n","data":null}
[12:38:42] Configuring VM (boot order, agent)...
[12:38:42] ✗ Migration failed: PVE 500 /nodes/proxmox-php-hyp01/qemu/121/config: {"data":null,"message":"invalid bootorder: device 'scsi0' does not exist'\n"}

It looks like it's trying to attach a disk that doesn't exist or is in the wrong place?

If you have a procedure that I could perform manually to see where the problem lies, that's no problem for me.

The storage is NFS shared storage between the Proxmox servers. This may be an important point to note.

After the failure, the VM still exists, but I think the qcow2 disk has been deleted, because I can't find it...


@adminsyspro commented on GitHub (Mar 10, 2026):

Thanks for the logs and the patience debugging this!

Root cause: On NFS directory storage, the disk volume name format is different (storage:VMID/vm-XXX-disk-N.qcow2) compared to block-based storage (storage:vm-XXX-disk-N). The import output parsing was failing silently, and the hardcoded fallback was using the wrong format for your NFS setup — causing the unable to parse directory volume name error.

Fix (commit 0eef8e1): When the import output parsing fails, we now read the VM config directly from the PVE API to find the actual volume name of the imported disk. This should work reliably regardless of storage type (NFS, LVM, Ceph, ZFS, etc.).

Please update and retry. One heads-up though: the migration feature has been tested on several storage types but not exhaustively on every possible configuration. If you hit another edge case, please share the logs and we'll fix it as we go — your feedback has been incredibly helpful so far!
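For anyone curious, the two volume-ID shapes look like this (a sketch only; the storage names come from this thread and the helper is hypothetical):

```python
def expected_volid(storage: str, vmid: int, index: int,
                   dir_backed: bool, fmt: str = "qcow2") -> str:
    # Directory-backed storages (NFS, dir) reference volumes as
    # "storage:VMID/vm-VMID-disk-N.qcow2", while block-backed storages
    # (LVM-thin, ZFS, Ceph RBD) use "storage:vm-VMID-disk-N". Passing the
    # block-style name to an NFS storage is what produced the
    # "unable to parse directory volume name" error above.
    if dir_backed:
        return f"{storage}:{vmid}/vm-{vmid}-disk-{index}.{fmt}"
    return f"{storage}:vm-{vmid}-disk-{index}"
```

Reading the volume name back from the VM config sidesteps this entirely, since PVE has already stored it in the correct format for the storage type.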


@nothing-fr commented on GitHub (Mar 10, 2026):

![Image](https://github.com/user-attachments/assets/bc7cf92a-c6d5-4b59-9f15-7df5620ea579)

It's working now... Do you plan to add a batch import feature (selecting multiple VMs at once)?


@adminsyspro commented on GitHub (Mar 10, 2026):

Yes, it's on the roadmap: bulk migration of multiple VMs at once.

I'm actually working on it right now.
