[GH-ISSUE #69] Cannot remove volume/pool after creating #214

Closed
opened 2026-03-12 18:08:59 +03:00 by kerem · 4 comments

Originally created by @brg3466 on GitHub (Jul 9, 2023).
Original GitHub issue: https://github.com/007revad/Synology_M2_volume/issues/69

Hi, first I would like to thank you so much for the script! I created a RAID 1 pool with 2x 2TB non-Synology NVMe SSDs. But after creating it, I cannot remove the volume or the pool. (I changed my mind and wanted to create a RAID 0 pool with these 2 M.2 SSDs instead. I ran the script again, but it said the drive is already being used by DSM.)
How do I remove the M.2 SSDs from the volume/pool?
How to remove the m.2 SSD from the volume/pool ?
Thank you!

kerem closed this issue 2026-03-12 18:09:04 +03:00

@007revad commented on GitHub (Jul 9, 2023):

If you run the script with the --all option it will allow you to select any detected NVMe drives, even if they already have a RAID array on them.

`sudo -i /volume1/scripts/syno_create_m2_volume.sh --all`

Note: Replace /volume1/scripts/ with the path to where the script is located.
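Before re-running the script with --all, it can help to confirm what is currently holding the drives (which is why DSM reports them as "already being used"). A minimal sketch, assuming the standard Linux tools (`lsblk`, `/proc/mdstat`) that ship with DSM, and using the device names from the log below (nvme0n1, nvme1n1) — not part of the script itself:

```shell
#!/bin/sh
# Hedged sketch: show partitions and md arrays still holding each NVMe
# drive, so you can see why DSM considers it "in use".
check_m2() {
    for dev in "$@"; do
        if [ -b "/dev/$dev" ]; then
            echo "--- /dev/$dev ---"
            # Partition layout of the drive
            lsblk "/dev/$dev" 2>/dev/null || true
            # Any md RAID arrays that still reference it
            grep "$dev" /proc/mdstat 2>/dev/null || true
        else
            echo "/dev/$dev not present"
        fi
    done
}

check_m2 nvme0n1 nvme1n1
```

If `/proc/mdstat` still lists an array on the drive, that array is what the script's --all mode offers to overwrite.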


@brg3466 commented on GitHub (Jul 9, 2023):

It threw an error, see below:
Using options: --all
Type yes to continue. Type anything else to do a dry run test.
yes

NVMe M.2 nvme0n1 is WD Blue SN570 2TB

NVMe M.2 nvme1n1 is WD Blue SN570 2TB
WARNING Drive has a volume partition

Unused M.2 drives found: 2

 1) Single
 2) RAID 0
 3) RAID 1
Select the RAID type: 2
You selected RAID 0

 1) nvme0n1
 2) nvme1n1
Select the 1st M.2 drive: 2
You selected nvme1n1

 1) nvme0n1
Select the 2nd M.2 drive: 1
You selected nvme0n1

Ready to create RAID 0 volume group using nvme1n1 and nvme0n1

WARNING Everything on the selected M.2 drive(s) will be deleted.
Type yes to continue. Type anything else to quit.
yes
You chose to continue. You are brave! :)

Using md4 as it's the next available.

Creating Synology partitions on nvme1n1

    Device   Sectors (Version7: SupportRaid)

/dev/nvme1n11 4980480 (2431 MB)
/dev/nvme1n12 4194304 (2048 MB)
Reserved size: 262144 ( 128 MB)
Primary data partition will be created.

WARNING: This action will erase all data on '/dev/nvme1n1' and repart it, are you sure to continue? [y/N] y
Cleaning all partitions...
Creating sys partitions...
[/sbin/sfdisk -N1 -uS -q -f -j256 -z4980480 -tfd -F /dev/nvme1n1] failed. err=255
Create system partitions failed.


@brg3466 commented on GitHub (Jul 9, 2023):

I tried again and it worked the 2nd time! You are a genius! Thank you!


@007revad commented on GitHub (Jul 9, 2023):

> Cleaning all partitions...
> Creating sys partitions...
> [/sbin/sfdisk -N1 -uS -q -f -j256 -z4980480 -tfd -F /dev/nvme1n1] failed. err=255
> Create system partitions failed.

Did the script stop there, or did it also show the `ERROR 5 Failed to create syno partitions!` error?

If it showed the `ERROR 5 Failed to create syno partitions!` error, I could change it to suggest running the script again.
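Since the same command succeeded on the second run, the failure looks transient (e.g. the kernel still releasing the old array). A minimal sketch of a retry wrapper for such a case — the retry count and the commented-out example command are illustrative assumptions, not the script's actual logic:

```shell
#!/bin/sh
# Hedged sketch: retry a command a fixed number of times, pausing
# between attempts, for failures that clear up on a re-run.
retry() {
    attempts=$1; shift
    i=1
    while [ "$i" -le "$attempts" ]; do
        # Run the command; return success as soon as it works
        "$@" && return 0
        echo "Attempt $i of $attempts failed, retrying..." >&2
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Illustrative only (do not run blindly on real drives):
# retry 2 /sbin/sfdisk -N1 -uS -q -f -j256 -z4980480 -tfd -F /dev/nvme1n1
```

A wrapper like this is only appropriate when the wrapped command is safe to repeat, which repartitioning an already-wiped drive generally is.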
