[GH-ISSUE #133] how can I install e10m20t1 on ds1821 #228

Closed
opened 2026-03-12 18:15:55 +03:00 by kerem · 47 comments
Owner

Originally created by @loonyd on GitHub (Mar 19, 2024).
Original GitHub issue: https://github.com/007revad/Synology_M2_volume/issues/133

I have already installed two M.2 SSDs in my DS1821 and used a script to create storage space. I have now added an E10M20-T1 and inserted the other two M.2 SSDs. When I run the script, it discovers the newly added SSDs, but I cannot create storage space: the prompt says MD10 already exists, and the network card does not appear. I want to use both the 10 Gigabit network card and all four SSDs at the same time. I found this post: https://blog.mitsea.com/b3d944ff0c02498c9c341a620b323f1a/ Can this be achieved on the 1821?

kerem closed this issue 2026-03-12 18:16:01 +03:00

@007revad commented on GitHub (Mar 19, 2024):

I've got a DS1821+ with an E10M20-T1 with 10GbE working and 3 NVMe drives in RAID-F1.

ds1821+_e10m20-t1_small

ds1821_3nvme_raidf1-2

With a DS1821+ running DSM 7.2.1 you don't need Synology_M2_volume to be able to create NVMe volumes in storage manager.

You only need https://github.com/007revad/Synology_HDD_db. Run it with the -n option: `syno_hdd_db.sh -n`

For the E10M20-T1 you also need https://github.com/007revad/Synology_enable_M2_card

If you want to run 3 or 4 NVMe drives in RAID-F1 you'll need https://github.com/007revad/Synology_SHR_switch

@loonyd commented on GitHub (Mar 19, 2024):

> I've got a DS1821+ with a E10M20-T1 with 10GbE working and 3 NVMe drives in RAID-F1. […] With a DS1821+ running DSM 7.2.1 you don't need Synology_M2_volume to be able to create NVMe volumes in storage manager.

Thanks very much. Now I want to use the 3 SSDs installed in the 1821: SSD 1 and SSD 2 (installed earlier) as RAID 0, and the new one on the E10M20-T1 as a single drive. Can that work?

@007revad commented on GitHub (Mar 19, 2024):

> Now I want to use the 3 SSDs in the 1821: ssd1 & ssd2 as RAID 0, and the new one on the E10M20-T1 as a single drive. Can that work?

Yes.

And you won't need syno_shr_switch.

@loonyd commented on GitHub (Mar 19, 2024):

> Yes.
>
> And you won't need syno_shr_switch.

Thanks, I'll report back after I try it.

@loonyd commented on GitHub (Mar 19, 2024):

> I've got a DS1821+ with a E10M20-T1 with 10GbE working and 3 NVMe drives in RAID-F1. […]

By the way, does the NIC on the E10M20-T1 work well? I only want to use the SSDs for storage and use the 10G NIC.

@007revad commented on GitHub (Mar 19, 2024):

I've found that if a DS1821+ has 32GB of memory, SSD caches don't make a difference, so I prefer to use my NVMe drives for volumes.

With 4GB of memory transferring a 7GB file starts at around 500MB/s but then drops to around 300MB/s.

With 32GB of memory transferring a 7GB file I get over 900MB/s.

I get the same speeds with 1 NVMe drive as a volume in the internal M.2 slots as I do with 1 NVMe drive in the E10M20-T1.

@loonyd commented on GitHub (Mar 19, 2024):

> Yes.
>
> And you won't need syno_shr_switch.

Running from: /volume1/web/Synology_M2_volume/syno_create_m2_volume.sh
Type yes to continue. Type anything else to do a dry run test.
yes

NVMe M.2 nvme0n1 is KINGSTON SA2000M81000G
WARNING Drive has a volume partition

NVMe M.2 nvme1n1 is INTEL MEMPEK1J016GAL
Skipping drive as it is being used by DSM

NVMe M.2 nvme2n1 is INTEL MEMPEK1J016GAL
Skipping drive as it is being used by DSM

Unused M.2 drives found: 1

  1) nvme0n1
Select the M.2 drive: 1
You selected nvme0n1

Ready to create volume group using nvme0n1
WARNING Everything on the selected M.2 drive(s) will be deleted.
Type yes to continue. Type anything else to quit.
yes
You chose to continue. You are brave! :)

Using md10 as it's the next available.

Creating Synology partitions on nvme0n1

    Device        Sectors (Version7: SupportRaid)
/dev/nvme0n1p1    4980480 (2431 MB)
/dev/nvme0n1p2    4194304 (2048 MB)
Reserved size:     262144 ( 128 MB)
Primary data partition will be created.

WARNING: This action will erase all data on '/dev/nvme0n1' and repart it, are you sure to continue? [y/N] y
Cleaning all partitions...
Creating sys partitions...
Creating primary data partition...
Please remember to mdadm and mkfs new partitions.

Creating the RAID array. This will take a while...
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: /dev/md10 is already in use.

ERROR 5 Failed to create RAID!
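The `mdadm: /dev/md10 is already in use` line means the script picked an md device number that an existing array (here, the earlier RAID 0 pool) already occupies. A minimal sketch of checking which numbers are taken — the `mdstat` sample below is hypothetical; on the NAS you would read `/proc/mdstat` itself:

```shell
# Hypothetical sample of /proc/mdstat content (on the NAS: cat /proc/mdstat).
mdstat='md10 : active raid0 nvme1n1p3[0] nvme2n1p3[1]
md2 : active raid1 sda3[0] sdb3[1]'

# Pull out the md numbers that are already in use, sorted numerically.
used=$(printf '%s\n' "$mdstat" | sed -n 's/^md\([0-9][0-9]*\) .*/\1/p' | sort -n)

# Walk upward from 2 (md0/md1 are DSM's system arrays) to the first free
# number. Simplified sketch only; Synology's own numbering rules differ.
next=2
for n in $used; do
  if [ "$n" -eq "$next" ]; then next=$((next + 1)); fi
done
echo "next free md device: md$next"
```

In this thread the fix was not to pick another md number but to stop using the script on a DS1821+ entirely and create the pool from Storage Manager instead (see the next comment).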

@007revad commented on GitHub (Mar 19, 2024):

Why are you using syno_create_m2_volume.sh?

  1. Run https://github.com/007revad/Synology_enable_M2_card to enable the E10M20-T1
  2. Then run https://github.com/007revad/Synology_HDD_db with the -n option
  3. Then create your storage pool and volume from storage manager.
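The three steps above can be summed up as an SSH session plan. This is a sketch only: the script filenames are my assumption based on the repo names — check each repo's README for the exact names and options.

```shell
# Sketch of the order of operations; filenames assumed from the repo names
# (Synology_enable_M2_card -> syno_enable_m2_card.sh, Synology_HDD_db ->
# syno_hdd_db.sh). Verify against each README before running anything.
plan() {
  echo 'step 1: sudo -s ./syno_enable_m2_card.sh   # enable the E10M20-T1 M.2 slots'
  echo 'step 2: sudo -s ./syno_hdd_db.sh -n        # allow volumes on NVMe drives'
  echo 'step 3: reboot, then create the pool and volume in Storage Manager'
}
plan
```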

@loonyd commented on GitHub (Mar 19, 2024):

> Why are you using syno_create_m2_volume.sh?
>
> 1. Run https://github.com/007revad/Synology_enable_M2_card to enable the E10M20-T1
> 2. Then run https://github.com/007revad/Synology_HDD_db with the -n option
> 3. Then create your storage pool and volume from storage manager.

I just want to use the M.2 SSDs for storage... I already created a RAID 0 on the internal M.2 drives. Now I've added an M.2 SSD on the E10M20 and want to add one more storage pool. How do I do that?

@007revad commented on GitHub (Mar 19, 2024):

I already gave you the 3 steps.

@loonyd commented on GitHub (Mar 20, 2024):

> I already gave you the 3 steps.

What I mean is: I have 3 or 4 M.2 SSDs, 2 in the built-in slots and 2 in the E10M20.
I only used your script to put the two SSDs in the built-in slots into RAID 0. Now I have added one or two SSDs on the E10M20 and want a separate storage pool for them, i.e. the built-in-slot SSDs as one storage pool and the E10M20 SSDs as another.
While keeping the original built-in pool, running Synology_M2_volume and selecting the newly added SSD on the E10M20 prompts me that md10 already exists. Can I delete the pool that was originally created on the built-in SSDs from the web management page and create everything again with Synology_M2_volume? Could I then choose the two built-in SSDs as one storage pool and the SSD on the E10M20 as a second storage pool?

@007revad commented on GitHub (Mar 20, 2024):

Do NOT use this script on a DS1821+. This script is only needed for models older than '20 series.

Just create the storage pool and volume in storage manager like you would when creating any other storage pool and volume.

  1. Open Storage Manager.
  2. Click Storage.
  3. Click Create.
  4. Click Create Volume.
  5. For "Storage Pool" select "Create a new storage pool".
  6. Click Next.
  7. Select SHR, Basic or JBOD. I would select SHR.
  8. Give it a description if you want (maybe E10M20-T).
  9. Click Next then OK.
  10. Tick the E10M20-T1 M.2 drive (drives in the E10M20-T1 show as "M.2 Drive 1-1" and "M.2 Drive 1-2").
    • Drives in the internal M.2 slots show as "M.2 Drive 1" and "M.2 Drive 2".
  11. Click Next.
  12. Select "perform drive check" if it's a new drive that hasn't been used before (otherwise select "Skip drive check").
  13. Click Next.
  14. Click Apply.

@loonyd commented on GitHub (Mar 20, 2024):

> I've got a DS1821+ with a E10M20-T1 with 10GbE working and 3 NVMe drives in RAID-F1. […] With a DS1821+ running DSM 7.2.1 you don't need Synology_M2_volume to be able to create NVMe volumes in storage manager. You should run syno_m2_volume.sh --restore to undo the changes it made, and make sure it is not scheduled.

You mean syno_create_m2_volume.sh --restore?

@loonyd commented on GitHub (Mar 20, 2024):

> Do NOT use this script on a DS1821+. This script is only needed for models older than '20 series.
>
> Just create the storage pool and volume in storage manager like you would when creating any other storage pool and volume. […]

No, no: if I don't use the script, it can't create a volume with a third-party SSD!

@007revad commented on GitHub (Mar 20, 2024):

> You mean syno_create_m2_volume.sh --restore?

I actually meant syno_enable_m2_volume.sh --restore. So ignore my previous comment about using the --restore option.

@007revad commented on GitHub (Mar 20, 2024):

> If I don't use the script, it can't create a volume with a third-party SSD!

https://github.com/007revad/Synology_HDD_db makes DSM allow you to use storage manager to create volumes on 3rd party NVMe drives.

So you need https://github.com/007revad/Synology_enable_M2_card to enable the M.2 slots in the E10M20-T1

Then you need https://github.com/007revad/Synology_HDD_db so you can use storage manager to create volumes on 3rd party NVMe drives.

@loonyd commented on GitHub (Mar 20, 2024):

The INTEL MEMPEK1J016GAL (SSD) is not supported? I have run Synology_HDD_db and rebooted, and it still says not supported!

@loonyd commented on GitHub (Mar 20, 2024):

This:
[image failed to upload: iShot_2024-03-20_09.13.09.png]

@007revad commented on GitHub (Mar 20, 2024):

I can't see your image.

Did you run synology_HDD_db with the -n option?

@loonyd commented on GitHub (Mar 20, 2024):

ok, thanks very very very much!
it works well now!

@007revad commented on GitHub (Mar 20, 2024):

We got there in the end. 😃

@loonyd commented on GitHub (Mar 20, 2024):

> We got there in the end. 😃

But... I have created a storage pool in the web UI, and when I try to create a volume it says no more storage pools can be used...
iShot_2024-03-20_10 29 50

@007revad commented on GitHub (Mar 20, 2024):

Is that Japanese or Chinese? I can't read it.

It looks like storage pool 7 already has a volume, using the 2 internal NVMe drives.

Can you do the following:

  1. Go to "Control Panel > Regional Options > Language".
  2. Change "Display language" to English then click Apply.
  3. Take the same screenshot in storage manager.

@loonyd commented on GitHub (Mar 20, 2024):

pic below

iShot_2024-03-20_11 07 14 iShot_2024-03-20_11 10 05 iShot_2024-03-20_11 10 13 iShot_2024-03-20_11 10 26

@007revad commented on GitHub (Mar 20, 2024):

Storage Pool 7 is using the NVMe drives in the DS1821+ built-in M.2 slots.

M.2 Drive 1 and M.2 Drive 2 are in the built-in M.2 slots.

M.2 Drive 1-1 and M.2 Drive 1-2 will be the NVMe drives in the E10M20-T1.

Does "Storage Manager > Overview" show the E10M20-T1 and its NVMe drives, like this?
ds1821+_e10m20-t1_small

@loonyd commented on GitHub (Mar 20, 2024):

> Does "Storage Manager > Overview" show the E10M20-T1 and its NVMe drives, like this?

Yes. The E10M20 is not plugged in right now, so there are only the 2 SSDs in the built-in slots.


@loonyd commented on GitHub (Mar 20, 2024):

like this
![iShot_2024-03-20_11 31 48](https://github.com/007revad/Synology_M2_volume/assets/33145562/467d9937-d586-4c61-824f-a60b0df69134)

@loonyd commented on GitHub (Mar 20, 2024):

I want to create 2 storage pools and 2 volumes:
storage pool 7 --- use the built-in slots, create a volume
storage pool 8 --- use the E10M20-T1 slots (not in use now), create a volume
But I can't create a volume on the storage pool now.


@007revad commented on GitHub (Mar 20, 2024):

Storage Manager is showing your 2 NVMe drives as already being used. Are they currently set up as a cache?


@loonyd commented on GitHub (Mar 20, 2024):

> Storage Manager is showing your 2 NVMe drives as already being used. Are they currently set up as a cache?

No, but I set up a RAID 0 with your syno_create_m2_volume before, then deleted the storage pool in the Synology web UI.


@007revad commented on GitHub (Mar 20, 2024):

Make sure syno_hdd_db.sh is scheduled to run at boot with the -n option.
https://github.com/007revad/Synology_HDD_db/blob/main/how_to_schedule.md

Then delete storage pool 7, and reboot.

Next do this:

  1. Open Storage Manager.
  2. Click Storage.
  3. Click Create.
  4. Click Create Volume.
  5. For "Storage Pool" select "Create a new storage pool".
  6. Click Next.
  7. Select the RAID type you want.
  8. Click Next then OK.
  9. Tick "M.2 Drive 1" and "M.2 Drive 2".
  10. Click Next.
  11. Select "Skip drive check".
  12. Click Next.
  13. Click Apply.
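For reference, the boot-time schedule above is created in DSM under Control Panel > Task Scheduler > Create > Triggered Task > User-defined script, with User set to root and Event set to "Boot-up". The task body is just the single line below (the path is an example only; use wherever you saved the script):

```shell
# Triggered task body, run as root at boot-up.
# /volume1/scripts/ is an example location -- adjust to your copy.
/volume1/scripts/syno_hdd_db.sh -n
```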

@loonyd commented on GitHub (Mar 20, 2024):

> Make sure syno_hdd_db.sh -n is scheduled to run at boot with the -n option. https://github.com/007revad/Synology_HDD_db/blob/main/how_to_schedule.md
>
> Then delete storage pool 7, and reboot.
>
> Next do this:
>
> 1. Open Storage Manager.
> 2. Click Storage.
> 3. Click Create.
> 4. Click Create Volume.
> 5. For "Storage Pool" select "Create a new storage pool".
> 6. Click Next.
> 7. Select the RAID type you want.
> 8. Click Next then OK.
> 9. Tick "M.2 Drive 1" and "M.2 Drive 2".
> 10. Click Next.
> 11. Select "Skip drive check".
> 12. Click Next.
> 13. Click Apply.

Not working, same as before...


@007revad commented on GitHub (Mar 20, 2024):

What does the following return?

cat /proc/mdstat


@loonyd commented on GitHub (Mar 20, 2024):

> What does the following return?
>
> cat /proc/mdstat

After I create the storage pool, it returns the output below; md9 is the two SSDs.
root@SynologyNAS:~# cat /proc/mdstat
Personalities : [raid1] [linear]
md9 : active linear nvme1n1p3[1] nvme0n1p3[0]
6682624 blocks super 1.2 64k rounding [2/2] [UU]

md3 : active linear sata5p3[0] sata6p3[1]
31230310400 blocks super 1.2 64k rounding [2/2] [UU]

md2 : active linear sata7p3[0] sata8p3[1]
31230310400 blocks super 1.2 64k rounding [2/2] [UU]

md5 : active raid1 sata4p3[0]
3896294208 blocks super 1.2 [1/1] [U]

md4 : active raid1 sata3p3[0]
3896294208 blocks super 1.2 [1/1] [U]

md10 : active raid1 sata2p3[0]
15615155200 blocks super 1.2 [1/1] [U]

md8 : active raid1 sata1p3[0]
15615155200 blocks super 1.2 [1/1] [U]

md1 : active raid1 sata1p2[0] sata5p2[7] sata7p2[6] sata8p2[5] sata6p2[4] sata2p2[3] sata4p2[2] sata3p2[1]
2097088 blocks [8/8] [UUUUUUUU]

md0 : active raid1 sata1p1[0] sata5p1[7] sata8p1[6] sata7p1[5] sata6p1[4] sata2p1[3] sata4p1[2] sata3p1[1]
8388544 blocks [8/8] [UUUUUUUU]


@007revad commented on GitHub (Mar 20, 2024):

I notice there's no md6 or md7. Now I'm curious why syno_create_m2_volume.sh tried to use md10.

What does this command return?

grep -oP "md[0-9]{1,2}" "/proc/mdstat" | sort

And this command?

grep -oP "md[0-9]{1,2}" "/proc/mdstat" | sort | tail -1

@loonyd commented on GitHub (Mar 20, 2024):

root@SynologyNAS:~# grep -oP "md[0-9]{1,2}" "/proc/mdstat" | sort
md0
md1
md10
md2
md3
md4
md5
md8
root@SynologyNAS:~# grep -oP "md[0-9]{1,2}" "/proc/mdstat" | sort | tail -1
md8
root@SynologyNAS:~#


@loonyd commented on GitHub (Mar 20, 2024):

Because I replaced several hard drives in the past few years, some of which may have been used in other NAS devices and were directly used for storage space repair.


@007revad commented on GitHub (Mar 20, 2024):

What about this one:

grep -oP "md[0-9]{1,2}" "/proc/mdstat" | sort -n

@loonyd commented on GitHub (Mar 20, 2024):

root@SynologyNAS:~# grep -oP "md[0-9]{1,2}" "/proc/mdstat" | sort -n
md0
md1
md10
md2
md3
md4
md5
md8


@007revad commented on GitHub (Mar 20, 2024):

One more try:

grep -oP "md[0-9]{1,2}" "/proc/mdstat" | sort -n -k1.3
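The difference between the three sort invocations in this thread can be reproduced with a small standalone sample (assuming GNU sort, as shipped with DSM):

```shell
# The md names as grep extracts them from /proc/mdstat
sample='md0
md1
md10
md2
md8'

# Plain sort is lexicographic: "md10" sorts before "md2", so the
# "highest" array appears to be md8
printf '%s\n' "$sample" | sort | tail -1          # md8

# -n alone doesn't help: "md0" has no leading digits, so every line
# compares numerically as 0 and sort falls back to byte order
printf '%s\n' "$sample" | sort -n | tail -1       # md8

# -n -k1.3 compares numerically starting at character 3 (just past
# the "md" prefix), so md10 correctly sorts last
printf '%s\n' "$sample" | sort -n -k1.3 | tail -1 # md10
```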

@007revad commented on GitHub (Mar 20, 2024):

I've released a new version of syno_create_m2_volume that fixes the "md10 already exists" error.
https://github.com/007revad/Synology_M2_volume/releases/tag/v1.3.25

I know I told you not to use syno_create_m2_volume but it seems like it may be the easiest solution for you.

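As a sketch of the kind of fix involved (an assumption about the approach; the actual v1.3.25 code may differ), the next device number can be derived from the highest existing array using the corrected sort:

```shell
# Sketch: derive the next md device name from the md arrays listed
# on stdin. This is an assumed approach, not necessarily what
# syno_create_m2_volume v1.3.25 actually does.
next_md() {
    # numeric sort on the digits after the "md" prefix (-k1.3)
    last=$(grep -oP 'md[0-9]{1,2}' | sort -n -k1.3 | tail -1)
    echo "md$(( ${last#md} + 1 ))"
}

# On a live NAS: next_md < /proc/mdstat
# With this thread's arrays (md0-md5, md8, md10) the result is md11:
printf 'md0\nmd1\nmd2\nmd3\nmd4\nmd5\nmd8\nmd10\n' | next_md
```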

@loonyd commented on GitHub (Mar 20, 2024):

> One more try:
>
> grep -oP "md[0-9]{1,2}" "/proc/mdstat" | sort -n -k1.3

root@SynologyNAS:~# grep -oP "md[0-9]{1,2}" "/proc/mdstat" | sort -n -k1.3
md0
md1
md2
md3
md4
md5
md8
md10


@loonyd commented on GitHub (Mar 20, 2024):

> I've released a new version of syno_create_m2_volume that fixes the "md10 already exists" error. https://github.com/007revad/Synology_M2_volume/releases/tag/v1.3.25
>
> I know I told you not to use syno_create_m2_volume but it seems like it may be the easiest solution for you.

If this script can handle partitioning several SSDs into multiple storage pools and volumes, I can also use it for management.


@007revad commented on GitHub (Mar 20, 2024):

You choose which NVMe drive, or drives, to create the storage pool on.


@loonyd commented on GitHub (Mar 20, 2024):

> You choose which NVMe drive, or drives, to create the storage pool.

I want to create 2 storage pools and 2 volumes:
storage pool 7 --- use the built-in slots, create a volume
storage pool 8 --- use the E10M20-T1 slots (not in use now), create a volume


@loonyd commented on GitHub (Mar 20, 2024):

> I've released a new version of syno_create_m2_volume that fixes the "md10 already exists" error. https://github.com/007revad/Synology_M2_volume/releases/tag/v1.3.25
>
> I know I told you not to use syno_create_m2_volume but it seems like it may be the easiest solution for you.

It seems to work now. I will try inserting the E10M20 when I'm back home.


@loonyd commented on GitHub (Mar 22, 2024):

> I've released a new version of syno_create_m2_volume that fixes the "md10 already exists" error. https://github.com/007revad/Synology_M2_volume/releases/tag/v1.3.25
>
> I know I told you not to use syno_create_m2_volume but it seems like it may be the easiest solution for you.

Hi, it's working normally now with your Synology_M2_volume v1.3.25. Thanks very much!

![iShot_2024-03-22_08 29 49](https://github.com/007revad/Synology_M2_volume/assets/33145562/a94c5ff3-4a94-4bbe-aa3f-2d6773efa7fc)
![iShot_2024-03-22_08 35 05](https://github.com/007revad/Synology_M2_volume/assets/33145562/3e44b7cc-b2b2-4315-a0a4-63420fceaa7a)