mirror of
https://github.com/007revad/Synology_M2_volume.git
synced 2026-04-25 15:56:06 +03:00
[GH-ISSUE #133] how can I install e10m20t1 on ds1821 #30
Originally created by @loonyd on GitHub (Mar 19, 2024).
Original GitHub issue: https://github.com/007revad/Synology_M2_volume/issues/133
I had already installed two M.2 SSDs in my DS1821 and used the script to create a storage pool. I have now added an E10M20-T1 and inserted two more M.2 SSDs. When I run the script it discovers the newly added SSDs, but I cannot create a storage pool: it reports that md10 already exists, and the network card does not appear. I want to use both the 10 Gigabit network card and all four SSDs at the same time. I found this post: https://blog.mitsea.com/b3d944ff0c02498c9c341a620b323f1a/ Can this be achieved on the 1821?
@007revad commented on GitHub (Mar 19, 2024):
I've got a DS1821+ with an E10M20-T1 with 10GbE working and 3 NVMe drives in RAID-F1.
With a DS1821+ running DSM 7.2.1 you don't need Synology_M2_volume to be able to create NVMe volumes in Storage Manager.
You only need https://github.com/007revad/Synology_HDD_db. Run it with the -n option:
syno_hdd_db.sh -n
For the E10M20-T1 you also need https://github.com/007revad/Synology_enable_M2_card
If you want to run 3 or 4 NVMe drives in RAID-F1 you'll need https://github.com/007revad/Synology_SHR_switch
@loonyd commented on GitHub (Mar 19, 2024):
Thanks very much. Now I want to use the 3 SSDs installed in the 1821: SSD1 and SSD2 (installed earlier) as RAID 0, and the new one on the E10M20-T1 as a single drive. Can that work?
@007revad commented on GitHub (Mar 19, 2024):
Yes.
And you won't need syno_shr_switch.
@loonyd commented on GitHub (Mar 19, 2024):
Thanks. After I try it, I will reply here.
@loonyd commented on GitHub (Mar 19, 2024):
By the way, does the NIC on the E10M20-T1 work well? I only want to use the SSDs for storage and use the 10G NIC.
@007revad commented on GitHub (Mar 19, 2024):
I've found that if a DS1821+ has 32GB of memory, SSD caches don't make a difference, so I prefer to use my NVMe drives for volumes.
With 4GB of memory transferring a 7GB file starts at around 500MB/s but then drops to around 300MB/s.
With 32GB of memory transferring a 7GB file I get over 900MB/s.
I get the same speeds for 1 NVMe drive as a volume in the internal M.2 slots as I do for the 1 NVMe drive in E10M20-T1.
@loonyd commented on GitHub (Mar 19, 2024):
Running from: /volume1/web/Synology_M2_volume/syno_create_m2_volume.sh
Type yes to continue. Type anything else to do a dry run test.
yes
NVMe M.2 nvme0n1 is KINGSTON SA2000M81000G
WARNING Drive has a volume partition
NVMe M.2 nvme1n1 is INTEL MEMPEK1J016GAL
Skipping drive as it is being used by DSM
NVMe M.2 nvme2n1 is INTEL MEMPEK1J016GAL
Skipping drive as it is being used by DSM
Unused M.2 drives found: 1
Select the M.2 drive: 1
You selected nvme0n1
Ready to create volume group using nvme0n1
WARNING Everything on the selected M.2 drive(s) will be deleted.
Type yes to continue. Type anything else to quit.
yes
You chose to continue. You are brave! :)
Using md10 as it's the next available.
Creating Synology partitions on nvme0n1
/dev/nvme0n11 4980480 (2431 MB)
/dev/nvme0n12 4194304 (2048 MB)
Reserved size: 262144 ( 128 MB)
Primary data partition will be created.
WARNING: This action will erase all data on '/dev/nvme0n1' and repart it, are you sure to continue? [y/N] y
Cleaning all partitions...
Creating sys partitions...
Creating primary data partition...
Please remember to mdadm and mkfs new partitions.
Creating the RAID array. This will take a while...
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: /dev/md10 is already in use.
ERROR 5 Failed to create RAID!
@007revad commented on GitHub (Mar 19, 2024):
Why are you using syno_create_m2_volume?
@loonyd commented on GitHub (Mar 19, 2024):
I just want to use the M.2 SSDs for storage... I had already created a RAID 0 from the internal M.2 drives. Now I've added an M.2 SSD on the E10M20 and I want to add one more storage pool. How can I do that?
@007revad commented on GitHub (Mar 19, 2024):
I already gave you the 3 steps.
@loonyd commented on GitHub (Mar 20, 2024):
What I mean is, I have 3 or 4 M.2 SSDs: 2 in the built-in slots and 2 in the E10M20.
I only used your script to make the two SSDs in the built-in slots a RAID 0. Now I have added one or two SSDs on the E10M20, and I want separate storage pools: the built-in slot SSDs as one storage pool, and the E10M20 SSDs as another.
While keeping the original built-in storage pool, running Synology_M2_volume and selecting the newly added SSD on the E10M20 gives the prompt that md10 already exists. Can I delete the storage pool originally created on the built-in SSDs in the web management page and create the new storage pools entirely with Synology_M2_volume? Can I choose the two built-in SSDs as one storage pool and then the SSD on the E10M20 as a second storage pool?
@007revad commented on GitHub (Mar 20, 2024):
Do NOT use this script on a DS1821+. This script is only needed for models older than '20 series.
Just create the storage pool and volume in storage manager like you would when creating any other storage pool and volume.
@loonyd commented on GitHub (Mar 20, 2024):
You mean syno_create_m2_volume.sh --restore?
@loonyd commented on GitHub (Mar 20, 2024):
No, no. If I don't use the script, DSM can't create a volume on third-party SSDs!
@007revad commented on GitHub (Mar 20, 2024):
I actually meant syno_enable_m2_volume.sh --restore. So ignore my previous comment about using the --restore option.
@007revad commented on GitHub (Mar 20, 2024):
https://github.com/007revad/Synology_HDD_db makes DSM allow you to use storage manager to create volumes on 3rd party NVMe drives.
So you need https://github.com/007revad/Synology_enable_M2_card to enable the M.2 slots in the E10M20-T1
Then you need https://github.com/007revad/Synology_HDD_db so you can use storage manager to create volumes on 3rd party NVMe drives.
@loonyd commented on GitHub (Mar 20, 2024):
The Intel MEMPEK1J016GAL (SSD) is not supported? I have run Synology_HDD_db and rebooted, and it still says not supported!
@loonyd commented on GitHub (Mar 20, 2024):
this
(image failed to upload: iShot_2024-03-20_09.13.09.png)
@007revad commented on GitHub (Mar 20, 2024):
I can't see your image.
Did you run synology_HDD_db with the -n option?
@loonyd commented on GitHub (Mar 20, 2024):
ok, thanks very very very much!
it works well now!
@007revad commented on GitHub (Mar 20, 2024):
We got there in the end. 😃
@loonyd commented on GitHub (Mar 20, 2024):
But... I have created a storage pool in the web UI. When I try to create a volume, it says no more storage pools can be used...

@007revad commented on GitHub (Mar 20, 2024):
Is that Japanese or Chinese? I can't read it.
It looks like storage pool 7 already has a volume, using the 2 internal NVMe drives.
Can you do the following:
@loonyd commented on GitHub (Mar 20, 2024):
pic below
@007revad commented on GitHub (Mar 20, 2024):
Storage Pool 7 is using the NVMe drives in the DS1821+ built-in M.2 slots.
M.2 Drive 1 and M.2 Drive 2 are in the built-in M.2 slots.
M.2 Drive 1-1 and M.2 Drive 1-2 will be the NVMe drives in the E10M20-T1.
Does "Storage Manager > Overview" show the E10M20-T1 and its NVMe drives, like this?

@loonyd commented on GitHub (Mar 20, 2024):
Yes. The E10M20 is not plugged in right now, so there are only the 2 SSDs in the built-in slots.
@loonyd commented on GitHub (Mar 20, 2024):
like this

@loonyd commented on GitHub (Mar 20, 2024):
I want to create 2 storage pools and 2 volumes:
Storage pool 7: use the built-in slots, create a volume.
Storage pool 8: use the E10M20-T1 slots (not in use now), create a volume.
But I can't create a volume on the storage pool now.
@007revad commented on GitHub (Mar 20, 2024):
Storage Manager is showing your 2 NVMe drives as already being used. Are they currently set up as a cache?
@loonyd commented on GitHub (Mar 20, 2024):
No, but I created a RAID 0 with your syno_create_m2_volume before, then deleted the storage pool in the Synology web UI.
@007revad commented on GitHub (Mar 20, 2024):
Make sure syno_hdd_db.sh is scheduled to run at boot with the -n option: https://github.com/007revad/Synology_HDD_db/blob/main/how_to_schedule.md
Then delete storage pool 7, and reboot.
Next do this:
@loonyd commented on GitHub (Mar 20, 2024):
Not working; it's the same as before...
@007revad commented on GitHub (Mar 20, 2024):
What does the following return?
@loonyd commented on GitHub (Mar 20, 2024):
After I create the storage pool, it returns the output below; md9 is the two SSDs.
root@SynologyNAS:~# cat /proc/mdstat
Personalities : [raid1] [linear]
md9 : active linear nvme1n1p3[1] nvme0n1p3[0]
6682624 blocks super 1.2 64k rounding [2/2] [UU]
md3 : active linear sata5p3[0] sata6p3[1]
31230310400 blocks super 1.2 64k rounding [2/2] [UU]
md2 : active linear sata7p3[0] sata8p3[1]
31230310400 blocks super 1.2 64k rounding [2/2] [UU]
md5 : active raid1 sata4p3[0]
3896294208 blocks super 1.2 [1/1] [U]
md4 : active raid1 sata3p3[0]
3896294208 blocks super 1.2 [1/1] [U]
md10 : active raid1 sata2p3[0]
15615155200 blocks super 1.2 [1/1] [U]
md8 : active raid1 sata1p3[0]
15615155200 blocks super 1.2 [1/1] [U]
md1 : active raid1 sata1p2[0] sata5p2[7] sata7p2[6] sata8p2[5] sata6p2[4] sata2p2[3] sata4p2[2] sata3p2[1]
2097088 blocks [8/8] [UUUUUUUU]
md0 : active raid1 sata1p1[0] sata5p1[7] sata8p1[6] sata7p1[5] sata6p1[4] sata2p1[3] sata4p1[2] sata3p1[1]
8388544 blocks [8/8] [UUUUUUUU]
@007revad commented on GitHub (Mar 20, 2024):
I notice there's no md6 or md7. Now I'm curious why syno_create_m2_volume.sh tried to use md10.
What does this command return?
And this command?
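[Editor's note: the mdstat output above shows md0 through md5, md8, md9, and md10 in use, leaving md6 and md7 free. A minimal shell sketch, as an illustration only and not the script's actual code, of finding the first unused md number from such a list:]

```shell
# md devices in use, taken from the /proc/mdstat output above.
used='md0 md1 md2 md3 md4 md5 md8 md9 md10'

# Walk upward from 0 until we reach a number not in the list.
n=0
while printf '%s\n' $used | grep -qx "md$n"; do
    n=$((n + 1))
done
echo "md$n"    # first free device: md6
```

A "highest used + 1" strategy would instead pick md11 here; scanning for the first gap reuses md6, which matches the missing numbers noted above.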
@loonyd commented on GitHub (Mar 20, 2024):
root@SynologyNAS:~# grep -oP "md[0-9]{1,2}" "/proc/mdstat" | sort
md0
md1
md10
md2
md3
md4
md5
md8
root@SynologyNAS:~# grep -oP "md[0-9]{1,2}" "/proc/mdstat" | sort | tail -1
md8
root@SynologyNAS:~#
@loonyd commented on GitHub (Mar 20, 2024):
Because I replaced several hard drives over the past few years, some of which may have been used in other NAS devices and were used directly for storage pool repair.
@007revad commented on GitHub (Mar 20, 2024):
What about this one:
@loonyd commented on GitHub (Mar 20, 2024):
root@SynologyNAS:~# grep -oP "md[0-9]{1,2}" "/proc/mdstat" | sort -n
md0
md1
md10
md2
md3
md4
md5
md8
@007revad commented on GitHub (Mar 20, 2024):
One more try:
@007revad commented on GitHub (Mar 20, 2024):
I've released a new version of syno_create_m2_volume that fixes the "md10 already exists" error.
https://github.com/007revad/Synology_M2_volume/releases/tag/v1.3.25
I know I told you not to use syno_create_m2_volume but it seems like it may be the easiest solution for you.
@loonyd commented on GitHub (Mar 20, 2024):
root@SynologyNAS:~# grep -oP "md[0-9]{1,2}" "/proc/mdstat" | sort -n -k1.3
md0
md1
md2
md3
md4
md5
md8
md10
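[Editor's note: this ordering is consistent with the likely cause of the earlier "md10 already exists" error. A plain lexicographic sort places md10 between md1 and md2, so taking the last line returns md8 even though md10 is in use, whereas a numeric key on the digits (character 3 onward) orders by device number. A quick reproduction of both behaviors:]

```shell
# The md names from the earlier grep output, one per line.
names='md0
md1
md2
md3
md4
md5
md8
md10'

# Lexicographic sort puts md10 right after md1, so tail -1 misses it:
printf '%s\n' "$names" | sort | tail -1          # -> md8

# Sorting numerically on the digits starting at character 3 fixes it:
printf '%s\n' "$names" | sort -n -k1.3 | tail -1 # -> md10
```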
@loonyd commented on GitHub (Mar 20, 2024):
If this script can handle splitting several SSDs into multiple separate storage pools and volumes, I can also use this script for management.
@007revad commented on GitHub (Mar 20, 2024):
You choose which NVMe drive, or drives, to create the storage pool on.
@loonyd commented on GitHub (Mar 20, 2024):
I want to create 2 storage pools and 2 volumes:
Storage pool 7: use the built-in slots, create a volume.
Storage pool 8: use the E10M20-T1 slots (not in use now), create a volume.
@loonyd commented on GitHub (Mar 20, 2024):
It seems to work now. I will try inserting the E10M20 later when I'm back home.
@loonyd commented on GitHub (Mar 22, 2024):
Hi, it's working normally now with your Synology_M2_volume v1.3.25. Thanks very, very much!