mirror of
https://github.com/007revad/Synology_M2_volume.git
synced 2026-04-26 00:06:14 +03:00
[GH-ISSUE #78] RAID5 without official M.2 NVMe adapter card #19
Originally created by @hawie on GitHub (Jul 31, 2023).
Original GitHub issue: https://github.com/007revad/Synology_M2_volume/issues/78
Is it possible to enable this, i.e. create a RAID 5 volume without the official M.2 NVMe adapter card?
Originally posted by @hawie in https://github.com/007revad/Synology_M2_volume/issues/76#issuecomment-1657420785
@hawie commented on GitHub (Jul 31, 2023):
(command output and screenshot not preserved in this mirror.)
@hawie commented on GitHub (Jul 31, 2023):
Using RDM in PVE, map the 4 NVMe disks as 4 SATA disks, then use Synology Storage Manager to create the storage pool and volume.
Extract the RAID 5 information, generate a RAID 5 configuration file, and replace the sda entries in it with the corresponding NVMe device information.
Then reboot, switch to NVMe passthrough, and reassemble.
RAID 5 configuration file: /root/mdadm.conf

mount | grep volume
/dev/mapper/vg1-volume_1 on /volume1 type btrfs (rw,relatime,space_cache=v2,metadata_ratio=50,block_group_cache_tree,syno_allocator,subvolid=256,subvol=/@syno)

@hawie commented on GitHub (Jul 31, 2023):
volume1 can be accessed normally through ssh.
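The reassembly step described above can be sketched roughly as follows. This is a hedged outline, not the exact commands hawie used: the md layout, device names, and partition numbers are assumptions and will differ per system.

```shell
# Sketch of preserving and reusing the array config after switching from
# RDM/SATA to NVMe passthrough. Device names (/dev/nvme0n1 etc.) and the
# config path are assumptions; adjust to what your system actually shows.

# 1. Record the array layout while it still assembles (or from a backup):
mdadm --detail --scan > /root/mdadm.conf

# 2. After rebooting with NVMe passthrough, check which members exist:
ls /dev/nvme*n1*

# 3. Assemble using the saved config, then verify:
mdadm --assemble --scan --config=/root/mdadm.conf
cat /proc/mdstat
```

The saved config matters because mdadm identifies members by array UUID, so the sda-to-nvme device rename is mostly cosmetic as long as the superblocks are intact.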
@hawie commented on GitHub (Jul 31, 2023):
How can I make the system recognize this as a valid storage pool?
volume1 can be accessed normally through ssh, but no valid storage pool appears in Storage Manager.
The Online Assemble in the default Storage Manager GUI cannot succeed because it does not detect all four NVMe disks.
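A quick way to compare what the kernel sees against what Storage Manager sees, sketched under the assumption that the drives enumerate as standard /dev/nvmeXn1 devices with the data partition on p3 (a common DSM layout, but an assumption here):

```shell
# List NVMe namespaces the kernel can see; if a drive is missing here,
# Online Assemble has no chance of finding it either.
ls -l /dev/nvme*n1 2>/dev/null

# Show each member's view of the array (slot, event count, array UUID).
# Partition number p3 is an assumption -- check your own layout first.
for d in /dev/nvme0n1p3 /dev/nvme1n1p3 /dev/nvme2n1p3 /dev/nvme3n1p3; do
    mdadm --examine "$d"
done
```

If `--examine` shows all four members with matching array UUIDs but the GUI still refuses, the gap is in DSM's drive-eligibility checks rather than in the RAID metadata itself.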
@hawie commented on GitHub (Jul 31, 2023):
Hoping the great @007revad can solve this problem without the official adapter. Only then could any number of NVMe disks be added.
@007revad commented on GitHub (Aug 6, 2023):
I'm surprised that you got as far as you did.
It is possible to create a RAID 5 storage pool using 2 internal NVMe drives plus 2 NVMe drives in a Synology M.2 PCIe card.
I do have some questions about your setup:
@hawie commented on GitHub (Aug 6, 2023):
@007revad commented on GitHub (Aug 6, 2023):
You're the third person who has had to run the script multiple times before it worked. But the other two had to run the script multiple times after a DSM update to get their NVMe volume back. I have no idea why... but I'd love to figure it out.
@007revad commented on GitHub (Aug 6, 2023):
How did you simulate 4 NVMe drives in a virtual machine?
Was it DSM virtual machine or XPEnology in a virtual machine?
@hawie commented on GitHub (Aug 6, 2023):
Proxmox Virtual Environment, with PCIe passthrough and XPEnology.
@jdpdata commented on GitHub (Sep 10, 2023):
Sorry to bring up a closed thread, but I just wanted to let you know that I'm able to create a RAID 0 volume with 4x 2TB NVMe drives on an Asus HyperX M.2 card. Bifurcation x4x4x4x4 on a Lenovo P520 workstation. Will continue testing to see if it is stable.
@jdpdata commented on GitHub (Sep 10, 2023):
now with Healthy Volume 2 :)
@007revad commented on GitHub (Sep 10, 2023):
@jdpdata
I'm about to upload an updated version of the script that supports up to 32 NVMe drives :o) It also supports RAID 6 and RAID 10.
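Under the hood, a RAID 10 array of the kind the script builds can be created with mdadm directly. A hedged sketch only — the md number, device paths, and partition number are assumptions, not necessarily what the script uses:

```shell
# Create a 4-drive RAID 10 array. DESTRUCTIVE: wipes the listed partitions.
# /dev/md3 and the nvme*p3 partitions are placeholder assumptions.
mdadm --create /dev/md3 --level=10 --raid-devices=4 \
    /dev/nvme0n1p3 /dev/nvme1n1p3 /dev/nvme2n1p3 /dev/nvme3n1p3

# Watch the initial resync progress:
cat /proc/mdstat
```

RAID 10 stripes across mirrored pairs, which is why (as the benchmarks later in the thread show) it keeps most of RAID 0's throughput while tolerating a drive failure per mirror.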
@jdpdata commented on GitHub (Sep 10, 2023):
@007revad
Sweet! I want to try Raid10. I'll test it out for you.
@jdpdata commented on GitHub (Sep 10, 2023):
So, I want to mount an iSCSI share of this super fast RAID 0 volume, but I'm getting only 1200 MB/s R/W on my Windows machine. Both machines are on 10GbE. Any ideas how to get faster R/W speeds?
@007revad commented on GitHub (Sep 10, 2023):
https://github.com/007revad/Synology_M2_volume/blob/develop/syno_create_m2_volume.sh
Can you reply back with shell output? I'd like to check that it's not outputting anything strange.
@007revad commented on GitHub (Sep 10, 2023):
1200 MB/s is impressive. 1250 MB/s is the theoretical maximum for 10GbE.
iSCSI Multipathing with 2 physical 10GbE ports on both machines, or a single 25GbE port on each machine should get you double the speed.
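The numbers above follow directly from line rate: a 10GbE link carries 10 Gbit/s, i.e. 1250 MB/s before iSCSI/TCP overhead, so the measured 1200 MB/s is essentially wire speed. A small sketch of the arithmetic:

```shell
# 10 Gbit/s divided by 8 bits-per-byte = 1250 MB/s ceiling per link,
# before protocol overhead. Two multipathed links double the ceiling.
awk 'BEGIN {
    link = 10 * 1000 / 8                 # MB/s for one 10GbE link
    print "single 10GbE link ceiling: " link " MB/s"
    print "dual-path MPIO ceiling:    " link * 2 " MB/s"
}'
```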
@jdpdata commented on GitHub (Sep 10, 2023):
Oh man, I'm out of available ports on my 10G switch. I may need to dismantle one of my NAS units to steal the dual fiber channels to test.
Testing your new scripts now with Raid10...
@jdpdata commented on GitHub (Sep 10, 2023):
(screenshot not preserved in this mirror.)
@jdpdata commented on GitHub (Sep 10, 2023):
Not working with Raid10. I can't select any of my NVMe drives. Do you want me to try Raid6?
@jdpdata commented on GitHub (Sep 10, 2023):
same issue with Raid6
@jdpdata commented on GitHub (Sep 10, 2023):
Do I need to erase my drives first?
@jdpdata commented on GitHub (Sep 10, 2023):
erased my drives. I still can't select them
@007revad commented on GitHub (Sep 10, 2023):
I've made a change to the script. Can you try it again?
https://github.com/007revad/Synology_M2_volume/blob/develop/syno_create_m2_volume.sh
And reply with a screenshot.
@jdpdata commented on GitHub (Sep 10, 2023):
I swapped fiber modules with another NAS. Looks like I have to rebuild the ARC loader to accept the new nic card. Give me a few moments...
@jdpdata commented on GitHub (Sep 10, 2023):
(screenshots not preserved in this mirror.)
@jdpdata commented on GitHub (Sep 10, 2023):
Creating the array is going to take a very long time.
@007revad commented on GitHub (Sep 10, 2023):
I'm running it now to create RAID 1 with two 500GB NVMe drives and it looks like the resync will take about 35 minutes. I imagine with four 4TB drives it could take 9 hours!
I'm going to add a timer that shows how long the resync took, and get rid of the "Done" option when there are no drives left to select.
@jdpdata commented on GitHub (Sep 10, 2023):
It's 16% done so far. I'll let it finish. Will report back in the morning.
@007revad commented on GitHub (Sep 10, 2023):
If it's up to 16% it will only take 2 hours.
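The ETA here is simple linear extrapolation from the resync percentage. A sketch with hypothetical numbers, since the actual elapsed time wasn't posted in the thread:

```shell
# Project total resync time from current progress, assuming a constant rate.
# Both input values below are hypothetical examples.
pct_done=16        # percent complete (as reported in /proc/mdstat)
elapsed_min=20     # minutes elapsed so far
awk -v p="$pct_done" -v t="$elapsed_min" 'BEGIN {
    total = t * 100 / p
    printf "projected total: %.0f min, remaining: %.0f min\n", total, total - t
}'
```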
@jdpdata commented on GitHub (Sep 10, 2023):
Ok, 31.9% now. Probably a good time to take a break. I've been at this all day since 10AM! Almost 12 hrs already. I'll run some benchmarks tomorrow.
@jdpdata commented on GitHub (Sep 10, 2023):
Thank you btw for the awesome scripts! I wanted to stay with XPEnology. I was very tempted to go to the dark side with TrueNAS SCALE. It supports NVMe RAID out of the box, no problems, but I know nothing about managing TrueNAS.
@jdpdata commented on GitHub (Sep 10, 2023):
yay! It's done.

@007revad commented on GitHub (Sep 10, 2023):
Nice. Only 110 minutes. Thanks for testing the script.
@jdpdata commented on GitHub (Sep 10, 2023):
raid10 is up
@jdpdata commented on GitHub (Sep 10, 2023):
Some benchmarks: CrystalDiskMark on an iSCSI-mounted disk. I was expecting an R/W hit with RAID 10, but there's none at all. Still maxing out my 10GbE. I think this is a keeper!!
@jdpdata commented on GitHub (Sep 10, 2023):
Fully saturated 10GbE on SMB transfers as well
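Since both iSCSI and SMB max out the link, the bottleneck is almost certainly the network rather than the array. A local benchmark on the NAS itself would confirm that; a hedged sketch using fio, where the mount point and test size are assumptions:

```shell
# Sequential read test directly against the NVMe volume, bypassing the
# network entirely. /volume2 is a placeholder for the RAID 10 volume.
fio --name=seqread --filename=/volume2/fio.test --rw=read \
    --bs=1M --size=4G --direct=1 --numjobs=1 --ioengine=libaio

# Clean up the test file afterwards:
rm /volume2/fio.test
```

If this local run reports several GB/s while network clients top out near 1200 MB/s, the 10GbE link (not the array) is the limit, and multipathing or 25GbE is the next step.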