Mirror of https://github.com/adminsyspro/proxcenter-ui.git, synced 2026-04-25 14:35:56 +03:00
[GH-ISSUE #86] [Bug]: `invalid bootorder: device 'scsi0' does not exist` when migrating a VM from ESXi #131
Originally created by @nothing-fr on GitHub (Mar 9, 2026).
Original GitHub issue: https://github.com/adminsyspro/proxcenter-ui/issues/86
Bug Description
I tried the new VM migration feature and the migration of my VM failed with the following error:
Steps to Reproduce
Expected Behavior
Actual Behavior
ProxCenter Version
62eebd9
Proxmox VE Version
9.1.6
Browser
Version 145.0.7632.159 (Official Build) (64-bit)
Logs / Screenshots
No response
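For background on the error in the title: in a Proxmox VE config, a boot line such as `boot: order=scsi0;net0` must only reference device slots that exist as keys in the same config. A minimal sketch of that invariant (hypothetical helper, not ProxCenter code):

```python
def validate_boot_order(config: dict) -> list[str]:
    """Return boot-order entries that reference device slots missing from the config.

    E.g. `boot: order=scsi0` is invalid if the config has no `scsi0` key,
    which is exactly the state left behind by a silently failed disk attach.
    """
    boot = config.get("boot", "")
    if not boot.startswith("order="):
        return []
    order = boot[len("order="):]
    return [dev for dev in order.split(";") if dev and dev not in config]
```

A VM whose `scsi0` attach failed would report `["scsi0"]` here, matching the `invalid bootorder` error above.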
@adminsyspro commented on GitHub (Mar 9, 2026):
Hi, thanks for the report!
Root cause: When migrating an EFI-based VM, Proxmox allocates `disk-0` for the `efidisk0`. The first imported data disk then becomes `disk-1`, but the migration code was hardcoding `disk-0` in the attach command, so the SCSI attach failed silently. The subsequent `boot=order=scsi0` then fails because `scsi0` was never attached.

Fix: The migration pipeline now parses the actual volume name from the `qm disk import` output instead of guessing the disk index. This has been fixed for both the ESXi and XCP-ng pipelines.

Deployed in commit `bedd4f5`. Please update and retry the migration; it should now work correctly for both BIOS and EFI VMs.

@nothing-fr commented on GitHub (Mar 10, 2026):
FYI: the VM is not EFI, just BIOS.

I still have the same error. Here are more logs:
@adminsyspro commented on GitHub (Mar 10, 2026):
Thanks for the detailed logs!
Looking at the trace, the attach command is:

`qm set 121 --scsi0 proxmox_prod_nfs_b5_01:vm-121-disk-0,discard=on`

The issue is that it's trying to attach `disk-0`, but if your source VM uses UEFI/EFI boot, Proxmox allocates `disk-0` for the EFI disk during VM creation. The imported data disk then becomes `disk-1`, but the code is falling back to `disk-0`.

Could you confirm a few things so we can pinpoint the exact cause?

- The logs from the `qm disk import` step specifically, to check whether the volume name parsing is working correctly.

@nothing-fr commented on GitHub (Mar 10, 2026):
The VM is BIOS:

Only 1:

Here it is:
@adminsyspro commented on GitHub (Mar 10, 2026):
Thanks for the logs, very helpful!
We found two bugs:
1. Silent failure masking: the SSH execution layer was ignoring command failures from the orchestrator (HTTP 200 with `success: false` was treated as success). So `qm set --scsi0` was failing with exit 255, but the migration log showed it as successful.

2. `qm set` via SSH is unreliable: on some cluster configurations, `qm set` via SSH returns exit 255. We've replaced the disk attach step with the PVE REST API (`PUT /config`), which is more reliable.

Both fixes are deployed in commit `7ee935d`. Please update and retry the migration; it should work correctly now. If it still fails, you'll get a proper error message this time instead of a silent failure.

@nothing-fr commented on GitHub (Mar 10, 2026):
OK, after the update, it still doesn't work, but with new logs:
It looks like it's trying to attach a disk that doesn't exist or is in the wrong place?
If you have a procedure that I could perform manually to see where the problem lies, that's no problem for me.
Storage is shared NFS storage between the Proxmox servers; this may be an important point to note.
After the failure, the VM still exists, but I think the QCOW disk has been deleted, because I can't find it...
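The two fixes described a few comments above can be sketched as follows. This is a hypothetical illustration, not ProxCenter's actual code: it shows (1) building the disk-attach body for PVE's `PUT /api2/json/nodes/{node}/qemu/{vmid}/config` endpoint instead of running `qm set` over SSH, and (2) refusing to treat an HTTP 200 reply that carries `success: false` as a success.

```python
def attach_disk_body(slot: str, storage: str, volname: str, discard: bool = True) -> dict:
    """Build the PUT .../qemu/{vmid}/config body for a disk attach.

    e.g. {"scsi0": "proxmox_prod_nfs_b5_01:121/vm-121-disk-1.qcow2,discard=on"}
    """
    value = f"{storage}:{volname}"
    if discard:
        value += ",discard=on"
    return {slot: value}


def raise_on_failure(status_code: int, payload: dict) -> None:
    """Surface orchestrator-level failures instead of masking them.

    The old behavior treated HTTP 200 as success even when the body said
    success: false, which is what hid the failed `qm set --scsi0` step.
    """
    if status_code != 200 or payload.get("success") is False:
        raise RuntimeError(f"disk attach failed: HTTP {status_code}, payload={payload}")
```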
@adminsyspro commented on GitHub (Mar 10, 2026):
Thanks for the logs and the patience debugging this!
Root cause: On NFS directory storage, the disk volume name format (`storage:VMID/vm-XXX-disk-N.qcow2`) is different from block-based storage (`storage:vm-XXX-disk-N`). The import output parsing was failing silently, and the hardcoded fallback was using the wrong format for your NFS setup, causing the `unable to parse directory volume name` error.

Fix (commit `0eef8e1`): When the import output parsing fails, we now read the VM config directly from the PVE API to find the actual volume name of the imported disk. This should work reliably regardless of storage type (NFS, LVM, Ceph, ZFS, etc.).

Please update and retry. One heads-up though: the migration feature has been tested on several storage types but not exhaustively on every possible configuration. If you hit another edge case, please share the logs and we'll fix it as we go; your feedback has been incredibly helpful so far!
@nothing-fr commented on GitHub (Mar 10, 2026):
It's working now... Do you plan to add a batch import feature (multi-select VMs)?
@adminsyspro commented on GitHub (Mar 10, 2026):
Yes, bulk VM migration (multi-select) is on the roadmap.
I'm currently working on it.