[GH-ISSUE #13] Compatible with DSM 6.2 or 7.1 (7.2)? #1

Closed
opened 2026-03-07 19:19:10 +03:00 by kerem · 20 comments

Originally created by @rrmt23 on GitHub (Mar 24, 2023).
Original GitHub issue: https://github.com/007revad/Synology_M2_volume/issues/13

Hello friend!

Thanks for the script.
Which DSM versions can the script be used on?
And what happens after a DSM update, e.g. from 7.1 to 7.2, or from 6.2 to 7.x?

Thank you!

kerem closed this issue 2026-03-07 19:19:10 +03:00

@007revad commented on GitHub (Mar 24, 2023):

People have confirmed that it works on:

  • DS1821+ DSM 7.2 Beta
  • DS1821+ DSM 7.1 Update 4
  • DS1621+ DSM 7.1.1-42962 Update 4
  • DS920+ DSM 7.1.1-42962 Update 1
  • DS720+ DSM 7.2 Beta
  • DS918+ DSM 7.1.1

I'm still waiting for someone to confirm whether it works with DSM 6.2.4.

If it doesn't work with DSM 6.2.4, I have a new version in the works that will run on both DSM 6 and DSM 7.
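
A script supporting both generations needs some way to detect the DSM major version. Here is a minimal sketch of one way to do that, assuming the standard /etc.defaults/VERSION file and its majorversion key (an illustration only, not the script's actual code):

#!/usr/bin/env bash
# Minimal sketch (assumption): detect the DSM generation from
# /etc.defaults/VERSION, which holds lines like: majorversion="7"
dsm_major=$(awk -F'"' '/^majorversion=/ {print $2}' /etc.defaults/VERSION)

if [[ "${dsm_major:-0}" -ge 7 ]]; then
    echo "DSM 7: use Storage Manager's Online Assemble after the script runs."
else
    echo "DSM 6: no Online Assemble option, so a reboot is required."
fi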

@andreacampanella commented on GitHub (Mar 28, 2023):

DSM 7.1 Update 4

Doesn't seem to work for me; no matter how much I try, it always shows up as a cache drive.

@legnagyobb commented on GitHub (Mar 28, 2023):

Dear 007revad!
It works on my DS723+ DSM 7.1.1-42962 Update 4.
Very nice job, thank you!

@crisfd commented on GitHub (Mar 29, 2023):

DSM 6.2.4 DS918
It ends up with this error:
mdadm: /dev/md is an invalid name for an md device. Try /dev/md/md
ERROR 5 Failed to create RAID!

@SSerhy commented on GitHub (Mar 29, 2023):

It works on my DS3622xs+ DSM 7.1.1-42962 Update 1,
3 m.2 in raid5
Comment: the SSD TRIM option is not visible in the settings.
Question: what will happen to the volume if you upgrade to version 7.2?

Very nice job, thank you!

@007revad commented on GitHub (Mar 29, 2023):

3 m.2 in raid5

@SSerhy Which model M2 card are you using?

@007revad commented on GitHub (Mar 29, 2023):

Doesn't seem to work for me; no matter how much I try, it always shows up as a cache drive.

@andreacampanella Did you accidentally run it in dry run mode?

@007revad commented on GitHub (Mar 29, 2023):

DSM 6.2.4 DS918
It ends up with this error:
mdadm: /dev/md is an invalid name for an md device. Try /dev/md/md
ERROR 5 Failed to create RAID!

@crisfd I'll finish the changes to make it support DSM 6.2.4 and let you know when it's done.

@007revad commented on GitHub (Mar 30, 2023):

@crisfd Can you try v1.2.11?

https://github.com/007revad/Synology_M2_volume/releases/tag/v1.2.11

Let me know if DSM 6.2.4 does not have the "assemble storage pool" option after running the script and rebooting.

@crisfd commented on GitHub (Mar 30, 2023):

@007revad Hi. This time the script ran fine on my DS918 with DSM 6.2.4-25556:

mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
Creating a physical volume (PV) on md5 partition
Wiping btrfs signature on /dev/md5.
Physical volume "/dev/md5" successfully created
Creating a volume group (VG) on md5 partition
Volume group "vg5" successfully created

root@diskstation:~# fdisk -l |grep nvme*
Disk /dev/nvme0n1: 477 GiB, 512110190592 bytes, 1000215216 sectors
/dev/nvme0n1p1 256 4980735 4980480 2.4G fd Linux raid autodetect
/dev/nvme0n1p2 4980736 9175039 4194304 2G fd Linux raid autodetect
/dev/nvme0n1p3 9437184 1000206899 990769716 472.4G fd Linux raid autodetect
GPT PMBR size mismatch (102399 != 15133247) will be corrected by w(rite).

DSM 6.2.4 does not have the "assemble storage pool" option at all; the NVMe drive is not shown in Storage Manager.
Only through SSH can I mount and access the partition, make folders, and copy data, but that's it.
Thank you for your work.

@007revad commented on GitHub (Apr 6, 2023):

@crisfd I haven't forgotten about you. I've been busy with other scripts.

@007revad commented on GitHub (Apr 6, 2023):

@crisfd
Do you want to try this development version to confirm that it now works on DSM 6?
https://github.com/007revad/Synology_M2_volume/archive/refs/tags/v1.3.12.tar.gz

@crisfd commented on GitHub (Apr 7, 2023):

@007revad I've tested version 1.3.12. It works fine, but DSM still won't mount the NVMe drive.

root@Diskstation:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md4 : active linear sde3[0]
3902196544 blocks super 1.2 64k rounding [1/1] [U]
md2 : active linear sdd3[0]
1948692544 blocks super 1.2 64k rounding [1/1] [U]
md3 : active linear sdf3[0]
5855700544 blocks super 1.2 64k rounding [1/1] [U]
md1 : active raid1 sdf2[2] sde2[1] sdd2[0]
2097088 blocks [16/3] [UUU_____________]
md0 : active raid1 sdd1[0] sde1[1] sdf1[2]
2490176 blocks [16/3] [UUU_____________]

root@Diskstation:~# mdadm --assemble --scan
mdadm: /dev/md/5 has been started with 1 drive.

root@Diskstation:~# mdadm --detail /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Thu Apr 6 19:45:25 2023
Raid Level : raid1
Array Size : 495383808 (472.43 GiB 507.27 GB)
Used Dev Size : 495383808 (472.43 GiB 507.27 GB)
Raid Devices : 1
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Thu Apr 6 20:02:09 2023
State : clean
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Name : Diskstation:5 (local to host Diskstation)
UUID : 80f3d237:74c8e69e:39d79681:15980a75
Events : 3

Number   Major   Minor   RaidDevice State
   0     259        3        0      active sync   /dev/nvme0n1p3

root@Diskstation:~# mount /dev/md5 /volume9

root@Diskstation:~# df -h /volume9/
Filesystem Size Used Avail Use% Mounted on
/dev/md5 463G 5.1M 463G 1% /volume9

root@Diskstation:~# synospace --meta -e
[/dev/md4]
---------------------
     Descriptions=[]
     Reuse Space ID=[reuse_2]

[/dev/md3]
---------------------
     Descriptions=[]
     Reuse Space ID=[reuse_8]

[/dev/md2]
---------------------
     Descriptions=[]
     Reuse Space ID=[reuse_1]
@007revad commented on GitHub (Apr 12, 2023):

@crisfd Can you try the old v1.0.3 to see if it works in DSM 6.2.4?
https://github.com/007revad/Synology_M2_volume/releases/tag/v1.0.3

@crisfd commented on GitHub (Apr 14, 2023):

@crisfd Can you try the old v1.0.3 to see if it works in DSM 6.2.4? https://github.com/007revad/Synology_M2_volume/releases/tag/v1.0.3

Synology_M2_volume v1.0.3
github.com/007revad/Synology_M2_volume

Type yes to continue. Type anything else to do a dry run test.
yes
There is a newer version of this script available.
Current version: v1.0.3
Latest version: v1.2.12
Do you want to download v1.2.12 now? {y/n]
n
NVMe M.2 nvme0n1 is INTEL SSDPEKNW512G8
WARNING Drive has a cache partition

Unused M.2 drives found: 1

1) nvme0n1
2) Quit
Select the M.2 drive: 1

1) btrfs
2) ext4
Select the file system: 1

Ready to create btrfs volume on nvme0n1

WARNING Everything on the selected M.2 drive(s) will be deleted.
Type yes to continue. Type anything else to quit.
yes
You chose to continue. You are brave! :)

./syno_create_m2_volume.sh: line 441: "4" +1: syntax error: operand expected (error token is ""4" +1")
Using md as it's the next available.

Creating Synology partitions on nvme0n1

    Device   Sectors (Version7: SupportRaid)

/dev/nvme0n11 4980480 (2431 MB)
/dev/nvme0n12 4194304 (2048 MB)
Reserved size: 262144 ( 128 MB)
Primary data partition will be created.

WARNING: This action will erase all data on '/dev/nvme0n1' and repart it, are you sure to continue? [y/N] y
Cleaning all partitions...
Creating sys partitions...
Creating primary data partition...
Please remember to mdadm and mkfs new partitions.

Creating single drive device.
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: /dev/md is an invalid name for an md device. Try /dev/md/md
./syno_create_m2_volume.sh: line 532: /sys/block/md/queue/rotational: No such file or directory
btrfs-progs v4.0
See http://btrfs.wiki.kernel.org for more information.

Failed to check size for '/dev/md': No such file or directory

After the restart go to Storage Manager and select online assemble:
Storage Pool > Available Pool > Online Assemble
Then, optionally, enable TRIM:
Storage Pool > ... > Settings > SSD TRIM

The Synology needs to restart.
Type yes to reboot now.
Type anything else to quit (if you will restart it yourself).

The main issue remains that DSM 6.2.4-25556 won't see the NVMe drive.

@007revad commented on GitHub (Apr 15, 2023):

Are you rebooting the Synology after running the script? DSM 6.2.4 has no online assemble option so it needs to reboot.

I see the v1.0.3 script has an error when getting the next available md number. That line works in DSM 7 with bash 4.4 but causes an error in DSM 6 with bash 4.3.

You'd need to change line 420 from this:
nextmd=$(("${lastmd:2}" +1))

to this:
nextmd=$((${lastmd:2} +1))

i.e., remove the double quotes around ${lastmd:2}.
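
A runnable illustration of the difference described above; the broken form is left commented out so the snippet itself runs cleanly under either bash:

lastmd="md4"                   # e.g. the highest existing md device name
# Broken under DSM 6's bash 4.3, where the quotes survive into the
# arithmetic evaluation:
#   nextmd=$(("${lastmd:2}" +1))   # -> "4" +1: syntax error: operand expected
# Unquoted form, fine in both bash 4.3 and 4.4:
nextmd=$((${lastmd:2} + 1))
echo "$nextmd"                 # prints 5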

@crisfd commented on GitHub (Apr 15, 2023):

@007revad I've changed the line and it didn't throw any errors this time. I always ran the script 2-3 times and rebooted, just to make sure I hadn't done something wrong. However, the outcome is the same: I can see the drive, but DSM can't mount it.
I searched a little and found commands that helped in another way.
After running the script, I rebooted and ran:
mdadm --assemble --scan
mount /dev/md5 /volume9
Then I made a new shared folder (NVME) on volume1 and ran:
mount --bind /volume9 /volume1/NVME/
I still don't see the drive in Storage Manager, but so far this is the only way, and I take that as a win.
Thank you so much
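
To make that workaround survive a reboot, the same commands could be wrapped into a boot-time script, e.g. run as a Task Scheduler triggered task. A sketch, assuming the md5, /volume9, and /volume1/NVME names from the comment above:

#!/usr/bin/env bash
# Sketch: re-apply the manual workaround at boot (names assumed as above).
mdadm --assemble --scan                 # start the array DSM 6 won't auto-assemble
mkdir -p /volume9                       # mount point for the NVMe volume
mount /dev/md5 /volume9
mount --bind /volume9 /volume1/NVME/    # expose it through an existing share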

@SSerhy commented on GitHub (May 1, 2023):

3 m.2 in raid5

@SSerhy Which model M2 card are you using?

Sorry, I didn't see the message earlier.
NVMe M.2 CeaMere Model CMSSDG

@007revad commented on GitHub (May 1, 2023):

3 m.2 in raid5

@SSerhy Which model M2 card are you using?

sorry, I didn't see the message earlier, NVMe M.2 CeaMere Model CMSSDG

I can't remember why I asked which M2 card you have. But it's nice to see that non-Synology M2 cards can work. EDIT: I just realised the CeaMere Model CMSSDG is an NVMe drive and not a PCIe M2 card.

I wanted to know how you were running 3 NVMe drives.

The volume will survive upgrading to 7.2 (but you should back it up just in case).

I've learnt that DSM 7.2 Beta only shows the SSD TRIM option for RAID 1. I'm curious whether you will see the TRIM option for RAID 5 after updating to 7.2.
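
As an untested aside: where the Storage Manager UI hides the TRIM toggle, a manual trim over SSH may still work, assuming util-linux's fstrim is available on the NAS and the md layer passes discards through:

fstrim -v /volume2    # placeholder path; use the M.2 volume's actual mount point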

@SSerhy commented on GitHub (May 3, 2023):

I wanted to know how you were running 3 NVMe drives.
I use this PCI-E RAID card (signal-splitting expansion) for that:
https://aliexpress.ru/item/1005004290859228.html?spm=a2g2w.detail.pers_rcmd.6.db061484AFYK7M&sku_id=12000028647841936&afterSave=true

I've learnt that DSM 7.2 Beta only shows the SSD TRIM option for RAID 1. I'm curious whether you will see the TRIM option for RAID 5 after updating to 7.2.
OK, I can do it a little later.
