[GH-ISSUE #78] RAID5 without official M.2 NVMe adapter card #19

Closed
opened 2026-03-07 19:19:30 +03:00 by kerem · 38 comments

Originally created by @hawie on GitHub (Jul 31, 2023).
Original GitHub issue: https://github.com/007revad/Synology_M2_volume/issues/78

> Unfortunately if DSM isn't seeing the PCIe card with synonvme the drive won't show up in storage manager.
>
> That card is very cheap. I might buy one myself to see if I can get it working in my DS1821+

Is it possible to enable it? Create RAID 5 without the official M.2 NVMe adapter card.

Originally posted by @hawie in https://github.com/007revad/Synology_M2_volume/issues/76#issuecomment-1657420785

kerem closed this issue 2026-03-07 19:19:30 +03:00

@hawie commented on GitHub (Jul 31, 2023):

1. Basic information

cmd:

```
udevadm info /dev/nvme0n1 | head -n 1
udevadm info /dev/nvme1n1 | head -n 1
udevadm info /dev/nvme2n1 | head -n 1
udevadm info /dev/nvme3n1 | head -n 1
cat /etc.defaults/extensionPorts
cat /etc/extensionPorts
synonvme --m2-card-model-get /dev/nvme0n1
synonvme --m2-card-model-get /dev/nvme1n1
synonvme --m2-card-model-get /dev/nvme2n1
synonvme --m2-card-model-get /dev/nvme3n1
cat /run/synostorage/disks/nvme0n1/m2_pool_support
cat /run/synostorage/disks/nvme1n1/m2_pool_support
cat /run/synostorage/disks/nvme2n1/m2_pool_support
cat /run/synostorage/disks/nvme3n1/m2_pool_support
```

screen:

```
ash-4.4# udevadm info /dev/nvme0n1 | head -n 1
P: /devices/pci0000:00/0000:00:1c.0/0000:01:00.0/nvme/nvme0/nvme0n1
ash-4.4# udevadm info /dev/nvme1n1 | head -n 1
P: /devices/pci0000:00/0000:00:1c.1/0000:02:00.0/nvme/nvme1/nvme1n1
ash-4.4# udevadm info /dev/nvme2n1 | head -n 1
P: /devices/pci0000:00/0000:00:1c.2/0000:03:00.0/nvme/nvme2/nvme2n1
ash-4.4# udevadm info /dev/nvme3n1 | head -n 1
P: /devices/pci0000:00/0000:00:1c.3/0000:04:00.0/nvme/nvme3/nvme3n1
ash-4.4# cat /etc.defaults/extensionPorts
[pci]
pci1="0000:00:1c.0"
pci2="0000:00:1c.1"
pci3="0000:00:1c.2"
pci4="0000:00:1c.3"
ash-4.4# cat /etc/extensionPorts
[pci]
pci1="0000:00:1c.0"
pci2="0000:00:1c.1"
pci3="0000:00:1c.2"
pci4="0000:00:1c.3"
ash-4.4# synonvme --m2-card-model-get /dev/nvme0n1
Not M.2 adapter card
ash-4.4# synonvme --m2-card-model-get /dev/nvme1n1
Not M.2 adapter card
ash-4.4# synonvme --m2-card-model-get /dev/nvme2n1
Not M.2 adapter card
ash-4.4# synonvme --m2-card-model-get /dev/nvme3n1
Not M.2 adapter card
ash-4.4# cat /run/synostorage/disks/nvme0n1/m2_pool_support
0ash-4.4# cat /run/synostorage/disks/nvme1n1/m2_pool_support
0ash-4.4# cat /run/synostorage/disks/nvme2n1/m2_pool_support
0ash-4.4# cat /run/synostorage/disks/nvme3n1/m2_pool_support
0ash-4.4#
```
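The per-device checks above can be generated with a single loop. This is only a sketch that prints the commands rather than running them, since `synonvme` and the `/run/synostorage` paths exist only on DSM:

```shell
# Print the per-device check commands for each of the four NVMe namespaces.
# Pipe the output to sh on the NAS to actually run them; this dry-run form
# works anywhere.
for n in 0 1 2 3; do
    dev="/dev/nvme${n}n1"
    echo "udevadm info $dev | head -n 1"
    echo "synonvme --m2-card-model-get $dev"
    echo "cat /run/synostorage/disks/nvme${n}n1/m2_pool_support"
done
```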

@hawie commented on GitHub (Jul 31, 2023):

2. Try

Use RDM in PVE to map the 4 NVMe disks to 4 SATA disks, then use Synology Storage Manager to create the storage pool and volume.
Extract the RAID 5 information, generate a RAID 5 configuration file, and change the sda references in it to the corresponding NVMe devices.
Then reboot, switch to NVMe passthrough, and run:

```
/sbin/mdadm --assemble /dev/md2 --scan --no-degraded --config=/root/mdadm.conf
/sbin/vgchange -ay /dev/vg1
mount /dev/mapper/vg1-volume_1 /volume1/
```

RAID 5 configuration file `/root/mdadm.conf`:

```
ARRAY /dev/md2 level=raid5 num-devices=4 metadata=1.2 name=N100:2 UUID=82adf9da:a918ef39:96ca8d1e:b60467e4
/dev/nvme0n1p5,/dev/nvme1n1p5,/dev/nvme2n1p5,/dev/nvme3n1p5
```

`mount | grep volume`

```
/dev/mapper/vg1-volume_1 on /volume1 type btrfs (rw,relatime,space_cache=v2,metadata_ratio=50,block_group_cache_tree,syno_allocator,subvolid=256,subvol=/@syno)
```
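As an aside, instead of hand-editing sda device names into the config, mdadm can regenerate the ARRAY definition from the members' on-disk superblocks with `mdadm --examine --scan`. A sketch of that sequence, printed for review rather than executed (untested on DSM):

```shell
# Emit the assemble sequence for review before running it on the NAS.
# `mdadm --examine --scan` rebuilds the ARRAY line from the member
# superblocks, avoiding the manual sda -> nvme renaming step.
cat <<'EOF'
mdadm --examine --scan > /root/mdadm.conf
mdadm --assemble /dev/md2 --scan --no-degraded --config=/root/mdadm.conf
vgchange -ay /dev/vg1
mount /dev/mapper/vg1-volume_1 /volume1/
EOF
```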


@hawie commented on GitHub (Jul 31, 2023):

3. Info

```
ash-4.4# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1
  PV Size               5.56 TiB / not usable 960.00 KiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              1457280
  Free PE               125
  Allocated PE          1457155
  PV UUID               204rri-YEqb-r2Q8-lp9r-k1i6-zyuL-QLGzjp

ash-4.4# vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               5.56 TiB
  PE Size               4.00 MiB
  Total PE              1457280
  Alloc PE / Size       1457155 / 5.56 TiB
  Free  PE / Size       125 / 500.00 MiB
  VG UUID               xyIfCl-qMV4-Hb2V-IpUM-8xe0-IMYg-tye9uL
```

volume1 can be accessed normally through ssh.


@hawie commented on GitHub (Jul 31, 2023):

(screenshots: screen1, screen2, screen3, screen4)

4. Problem

**How can I create what the system thinks is a storage pool?**

volume1 can be accessed normally through SSH, but no valid storage pool appears.
The Online Assemble in the default Storage Manager GUI cannot succeed because it does not detect all four NVMe disks.


@hawie commented on GitHub (Jul 31, 2023):

Hoping that @007revad can solve this problem without using the official adapter. Only then can any number of NVMe disks be added.


@007revad commented on GitHub (Aug 6, 2023):

I'm surprised that you got as far as you did.

It is possible to create a RAID 5 storage pool using 2 internal NVMe drives and 2 NVMe drives in a Synology M.2 PCIe card.

I do have some questions about your setup:

  1. What NVMe PCIe card do you have?
  2. Do you not have a volume 1 on HDDs?
  3. Are M.2 Drive 1-1 and M.2 Drive 2-1 in the PCIe card?
  4. What happens if you create the volume on vg1 via SSH?

@hawie commented on GitHub (Aug 6, 2023):

(screenshot: raid5-ok)

It worked. I tried repeatedly following your script. After a certain reboot, Volume1 appeared and I didn't even click Online Assemble. I don't know how to make it happen again. There is no NVMe PCIe card; it was simulated in a virtual machine before.

@007revad commented on GitHub (Aug 6, 2023):

You're the 3rd person who has had to run the script multiple times before it worked. But the other 2 had to run the script multiple times after a DSM update to get their NVMe volume back. I have no idea why... but I'd love to figure it out.


@007revad commented on GitHub (Aug 6, 2023):

How did you simulate 4 NVMe drives in a virtual machine?

Was it a DSM virtual machine or XPEnology in a virtual machine?


@hawie commented on GitHub (Aug 6, 2023):

Proxmox Virtual Environment, with PCIe passthrough and XPEnology.


@jdpdata commented on GitHub (Sep 10, 2023):

Sorry to bring up a closed thread, but I just wanted to let you know that I'm able to create a RAID 0 volume with 4x 2TB NVMe drives on an Asus HyperX M.2 card, with x4x4x4x4 bifurcation, on a Lenovo P520 workstation. I'll continue testing to see if it is stable.

test


@jdpdata commented on GitHub (Sep 10, 2023):

now with Healthy Volume 2 :)

test


@007revad commented on GitHub (Sep 10, 2023):

@jdpdata

I'm about to upload an updated version of the script that supports up to 32 NVMe drives :o) It also supports RAID 6 and RAID 10.


@jdpdata commented on GitHub (Sep 10, 2023):

@007revad
Sweet! I want to try Raid10. I'll test it out for you.


@jdpdata commented on GitHub (Sep 10, 2023):

So, I want to mount an iSCSI share of this super fast RAID 0 volume, but I'm getting only 1200 MB/s R/W on my Windows machine. Both machines are on 10GbE. Any ideas how to get faster R/W speed?


@007revad commented on GitHub (Sep 10, 2023):

> @007revad Sweet! I want to try Raid10. I'll test it out for you.

https://github.com/007revad/Synology_M2_volume/blob/develop/syno_create_m2_volume.sh

Can you reply back with shell output? I'd like to check that it's not outputting anything strange.


@007revad commented on GitHub (Sep 10, 2023):

> So, I want to mount an iSCSI share of this super fast RAID 0 volume, but I'm getting only 1200 MB/s R/W on my Windows machine. Both machines are on 10GbE. Any ideas how to get faster R/W speed?

1200 MB/s is impressive. 1250 MB/s is the theoretical maximum for 10GbE.

iSCSI Multipathing with 2 physical 10GbE ports on both machines, or a single 25GbE port on each machine should get you double the speed.
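That 1250 MB/s ceiling is just the 10 Gbit/s line rate divided by 8 bits per byte, before protocol overhead:

```shell
# 10GbE line rate in MB/s: 10000 Mbit/s divided by 8 bits per byte,
# before Ethernet/TCP/iSCSI overhead is subtracted.
echo "$(( 10000 / 8 )) MB/s"   # 1250 MB/s
```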


@jdpdata commented on GitHub (Sep 10, 2023):

Oh man, I'm out of available ports on my 10G switch. May need to dismantle one of my NAS boxes to steal the dual fiber modules to test.
Testing your new scripts now with Raid10...


@jdpdata commented on GitHub (Sep 10, 2023):

image


@jdpdata commented on GitHub (Sep 10, 2023):

Not working with Raid10. Can't select any of my NVMe drives. Do you want me to try Raid6?


@jdpdata commented on GitHub (Sep 10, 2023):

same issue with Raid6

image


@jdpdata commented on GitHub (Sep 10, 2023):

Do I need to erase my drives first?


@jdpdata commented on GitHub (Sep 10, 2023):

erased my drives. I still can't select them

image


@007revad commented on GitHub (Sep 10, 2023):

I've made a change to the script. Can you try it again?

https://github.com/007revad/Synology_M2_volume/blob/develop/syno_create_m2_volume.sh

And reply with a screenshot.


@jdpdata commented on GitHub (Sep 10, 2023):

I swapped fiber modules with another NAS. Looks like I have to rebuild the ARC loader to accept the new NIC. Give me a few moments...


@jdpdata commented on GitHub (Sep 10, 2023):

image


@jdpdata commented on GitHub (Sep 10, 2023):

image


@jdpdata commented on GitHub (Sep 10, 2023):

creating array is going to take a very long time


@007revad commented on GitHub (Sep 10, 2023):

> creating array is going to take a very long time

I'm running it now to create RAID 1 with two 500GB NVMe drives and it looks like the resync will take about 35 minutes. I imagine with four 4TB drives it could take 9 hours!
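The 9-hour figure follows from scaling the observed rate linearly, if the four 4TB drives mean roughly 8 TB of mirrored data to write (a back-of-envelope estimate only; real resync speed varies with RAID level and drive throughput):

```shell
# RAID 1 resync: 500 GB in ~35 min. Scale linearly to ~8 TB (8000 GB)
# written at the same rate -- a rough guess, not a measurement.
echo "$(( 35 * 8000 / 500 )) minutes"   # 560 minutes, just over 9 hours
```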

I'm going to add a timer that shows how long the resync took. And get rid of the "Done" option when there's no drives left to select.


@jdpdata commented on GitHub (Sep 10, 2023):

It's 16% done so far. I'll let it finish. Will report back in the morning.


@007revad commented on GitHub (Sep 10, 2023):

If it's up to 16% it will only take 2 hours.


@jdpdata commented on GitHub (Sep 10, 2023):

Ok, 31.9% now. Probably good time to take a break. I've been at this all day since 10AM! Almost 12 hrs already. I'll run some benchmarks tomorrow.


@jdpdata commented on GitHub (Sep 10, 2023):

Thank you btw for the awesome scripts! I wanted to stay with XPEnology. Was very tempted to go to the dark side with TrueNAS Scale. It supports NVMe RAID out of the box, no problem, but I know nothing about managing TrueNAS.


@jdpdata commented on GitHub (Sep 10, 2023):

yay! It's done.
image


@007revad commented on GitHub (Sep 10, 2023):

Nice. Only 110 minutes. Thanks for testing the script.


@jdpdata commented on GitHub (Sep 10, 2023):

raid10 is up

image


@jdpdata commented on GitHub (Sep 10, 2023):

Some benchmarks: CrystalDiskMark on the iSCSI-mounted disk. Was expecting an r/w hit with RAID10, but none at all. Still maxing out my 10GbE. I think this is a keeper!!

image


@jdpdata commented on GitHub (Sep 10, 2023):

Fully saturated 10GbE on SMB transfers as well

image

<!-- gh-comment-id:1712728541 --> @jdpdata commented on GitHub (Sep 10, 2023): Fully saturated 10GbE on SMB transfers as well ![image](https://github.com/007revad/Synology_M2_volume/assets/109316311/fcf54342-ee74-41b4-b599-765dad9c616b)