[GH-ISSUE #7] Samsung 970 listed as incompatible on 1621+ when trying to create storage pool #508

Closed
opened 2026-03-11 11:31:04 +03:00 by kerem · 25 comments
Owner

Originally created by @MarkErik on GitHub (Mar 9, 2023).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/7

[Screenshot: Storage Manager listing the Samsung SSD 970 EVO Plus as incompatible]

I've run the script (and tried running it again):

```
Samsung SSD 970 EVO Plus 2TB already exists in ds1621+_host_v7.db
Samsung SSD 970 EVO Plus 2TB already exists in ds1621+_host_v7.db.new
```

Is there a different database that gets checked for the new feature in DSM 7.2, where creating storage pools from NVMe drives is supported on the 1621+?
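For anyone wanting to poke at this themselves: on DSM 7 the drive compatibility databases appear to live under /var/lib/disk-compatibility (that path is my assumption, not something stated in this thread). A rough sketch for listing which db files already mention a given model:

```shell
#!/bin/sh
# Sketch: list every compatibility db file that mentions a drive model.
# find_model_dbs <dbdir> <model>  -> prints matching db filenames
find_model_dbs() {
  dbdir="$1"; model="$2"
  # -l prints only the filenames; stderr is silenced in case the
  # directory is absent on this DSM version
  grep -l "$model" "$dbdir"/*.db 2>/dev/null || true
}

# On a real DS1621+ this would presumably be:
# find_model_dbs /var/lib/disk-compatibility "Samsung SSD 970 EVO Plus"
```

This only shows where a model string occurs; it doesn't answer which db DSM actually consults for the storage-pool feature.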

kerem 2026-03-11 11:31:04 +03:00
Author
Owner

@MarkErik commented on GitHub (Mar 9, 2023):

I wanted to add that I know I can create a storage pool on this drive from the command line; the only drawback is that when the pool is created that way, DSM won't let me turn on whole-volume encryption for the drive (it says it's not available).

That's why I was hoping that, now that the 1621+ is officially one of the models that can natively create M.2 NVMe storage pools in DSM, I could use the GUI to create the storage pool and hopefully have encryption enabled for the volume.

I wonder if the M.2 NVMe support is baked into DSM, since at the moment it only covers Synology's own two SSDs.


@007revad commented on GitHub (Mar 9, 2023):

This is actually something I can test myself as I have a DS1821+. I was reluctant to try the DSM 7.2 beta, but now I have a reason to, to see how Synology is blocking 3rd-party NVMe drives. It might take me a few days to figure out.


@linguowei commented on GitHub (Mar 10, 2023):

@007revad Looking forward to your test results


@MarkErik commented on GitHub (Mar 13, 2023):

@007revad I wanted to ask whether in the ReadMe when it says:
Bypass unsupported M.2 drives for use as volumes in DSM 7.2 (for models that supported M.2 volumes).
Do you mean "Allow"?


@007revad commented on GitHub (Mar 13, 2023):

Yes.


@ctrlaltdelete007 commented on GitHub (Mar 15, 2023):

Maybe this should be added: in /etc.defaults/synoinfo.conf the following parameter has to be changed to "yes":

```
support_m2_pool="no"
```


@007revad commented on GitHub (Mar 17, 2023):

`support_m2_pool="yes"` is already set on models that support M.2 volumes. And on models that don't officially support M.2 volumes, my script sets it to yes, or adds the line if it's missing.
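The "set it to yes, or add the line if it's missing" behaviour can be sketched as a small shell function (my own sketch of the idea, not the actual code from syno_hdd_db.sh):

```shell
#!/bin/sh
# Sketch: ensure key="value" exists in a synoinfo-style conf file.
# Rewrites the line in place if the key is present, appends it otherwise.
set_conf_key() {
  file="$1"; key="$2"; value="$3"
  if grep -q "^${key}=" "$file"; then
    # key present: rewrite its value
    sed -i "s|^${key}=.*|${key}=\"${value}\"|" "$file"
  else
    # key absent: append it
    echo "${key}=\"${value}\"" >> "$file"
  fi
}

# e.g. set_conf_key /etc.defaults/synoinfo.conf support_m2_pool yes
```

Using `grep -q` first keeps the function idempotent: running it twice leaves a single line with the desired value.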


@007revad commented on GitHub (Mar 23, 2023):

I've written another script that creates the storage pool for you; you can then go into DSM, select Online Assemble, and create the volume (without any stupid warnings from DSM). This method allows full-volume encryption to be enabled.

https://github.com/007revad/Synology_M2_volume


@prt1999 commented on GitHub (Mar 24, 2023):

Enabling storage pools on non-Synology SSDs:
https://xpenology.com/forum/topic/67961-use-nvmem2-hard-drives-as-storage-pools-in-synology/


@007revad commented on GitHub (Mar 25, 2023):

> enabling storage pool is not synlogy with ssd: https://xpenology.com/forum/topic/67961-use-nvmem2-hard-drives-as-storage-pools-in-synology/

Unfortunately that didn't work on my DS1821+ with DSM 7.2 beta.


@inkpool commented on GitHub (Apr 4, 2023):

Do you still plan to support creating volumes on 3rd-party NVMe SSDs from the GUI?


@007revad commented on GitHub (Apr 6, 2023):

@inkpool Sorry, I didn't see your comment until just now.

I've written another script that enables creating M.2 storage pools and volumes all from within Storage Manager.
https://github.com/007revad/Synology_enable_M2_volume

And also another script that does exactly the same as Synology_enable_M2_volume, but also enables Data Deduplication on any brand of SSD, even on unsupported models.
https://github.com/007revad/Synology_enable_Deduplication

Using Data Deduplication from the GUI is very cool. I hope to be able to make it work on HDDs as well.


@nicolerenee commented on GitHub (Apr 8, 2023):

I was able to create a new 3rd-party NVMe volume in the GUI after running the script in this repo and then running the following command for my NVMe drive:

```
echo 1 > /run/synostorage/disks/nvme0n1/m2_pool_support
```

I only have one drive right now, so I only ran it for that drive; once I get my other one I'll test what, if anything, is required for it.
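With more than one NVMe drive installed, the flag presumably needs setting per drive; a loop over the detected disks might look like this (a sketch only — the /run/synostorage path comes from the comment above, and taking the directory as an argument is my addition so it can be exercised outside DSM):

```shell
#!/bin/sh
# Sketch: set the m2_pool_support flag for every detected NVMe drive.
# On DSM the disks directory would be /run/synostorage/disks.
enable_m2_pool_flags() {
  disks_dir="$1"
  for disk in "$disks_dir"/nvme*; do
    [ -d "$disk" ] || continue        # skip when no NVMe drives are detected
    echo 1 > "$disk/m2_pool_support"  # mark the drive as pool-capable
  done
}

# On the NAS: enable_m2_pool_flags /run/synostorage/disks
```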


@007revad commented on GitHub (Apr 8, 2023):

@nicolerenee I've added you to the credits at the bottom of the readme.

```
echo 1 > /run/synostorage/disks/nvme0n1/m2_pool_support
```

I've seen this mentioned a month ago on both synology-forum.de and xpenology.com, and again a week ago on reddit, but the comments were always short and vague.

Your comment has made it clear to me now. Thank you.


@007revad commented on GitHub (Apr 8, 2023):

@inkpool @ctrlaltdelete007 @linguowei @MarkErik

v2.0.35 has just been released and, thanks to @nicolerenee, now allows creating the storage pool and volume from Storage Manager for any M.2 drive(s).


@nicolerenee commented on GitHub (Apr 8, 2023):

I just tested the new version since my second NVMe showed up today and it worked perfectly.


@007revad commented on GitHub (Apr 9, 2023):

@nicolerenee What Synology model do you have, and which DSM version are you running?


@hawie commented on GitHub (Apr 11, 2023):

/usr/syno/sbin/synostgdisk is missing, any idea?

Output:

```
Synology_HDD_db v2.0.35
DS918+ DSM 6.2.3-25426-3

HDD/SSD models found: 1
CT2000MX500SSD1,033

M.2 drive models found: 1
SanDisk Ultra 3D NVMe,21705000

No M.2 cards found

No Expansion Units found

Backed up ds918+_host.db
Added CT2000MX500SSD1 to ds918+_host.db
Added CT2000MX500SSD1 to ds918+_host.db.new
Added SanDisk Ultra 3D NVMe to ds918+_host.db
Added SanDisk Ultra 3D NVMe to ds918+_host.db.new

Backed up synoinfo.conf

Re-enabled support disk compatibility.

Enabled M.2 volume support.

Disabled drive db auto updates.
./syno_hdd_db.sh: line 1016: /usr/syno/sbin/synostgdisk: No such file or directory

You may need to reboot the Synology to see the changes.
```
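The error suggests the script called a binary that this DSM version doesn't ship. A defensive pattern for that (my own sketch, not the actual fix in syno_hdd_db.sh) is to check for the binary before invoking it:

```shell
#!/bin/sh
# Sketch: run a command only if its binary exists and is executable,
# e.g. run_if_present /usr/syno/sbin/synostgdisk ... on newer DSM.
run_if_present() {
  bin="$1"; shift
  if [ -x "$bin" ]; then
    "$bin" "$@"
  else
    echo "skipping: $bin not found on this DSM version" >&2
    return 0   # treat a missing binary as non-fatal
  fi
}
```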


@hawie commented on GitHub (Apr 11, 2023):

> /usr/syno/sbin/synostgdisk missing, any idea? Msg: Synology_HDD_db v2.0.35 DS918+ DSM 6.2.3-25426-3
>
> HDD/SSD models found: 1 CT2000MX500SSD1,033
>
> M.2 drive models found: 1 SanDisk Ultra 3D NVMe,21705000
>
> No M.2 cards found
>
> No Expansion Units found
>
> Backed up ds918+_host.db Added CT2000MX500SSD1 to ds918+_host.db Added CT2000MX500SSD1 to ds918+_host.db.new Added SanDisk Ultra 3D NVMe to ds918+_host.db Added SanDisk Ultra 3D NVMe to ds918+_host.db.new
>
> Backed up synoinfo.conf
>
> Re-enabled support disk compatibility.
>
> Enabled M.2 volume support.
>
> Disabled drive db auto updates. ./syno_hdd_db.sh: line 1016: /usr/syno/sbin/synostgdisk: No such file or directory
>
> You may need to reboot the Synology to see the changes.

Works fine with 7.2 beta.


@MarkErik commented on GitHub (Apr 14, 2023):

I wanted to share that using the script I was able to create a Storage Pool in the Synology GUI for my Samsung 970 EVO PLUS in the 7.2 Beta. Being able to have it recognized as an officially supported drive is great because then I could also enable file encryption for the Volume.

However...

For anyone else who has also created an SSD volume in 7.2: does it seem less performant than you would expect?

In my 1621+ (32GB RAM) I have a 2TB Samsung 970 EVO PLUS as an encrypted volume, and a 6x14TB WD RAID10 with a 20TB encrypted volume.

If I run the ATTO benchmark, the RAID10 gets higher IOPS (almost 2x) than the SSD at small file sizes. Maybe I'm mistaken, but I thought the SSD would be much better.

Here are the charts (note the scale on the left).

For the SSD:
[Screenshot: ATTO benchmark, SSD volume]

For the RAID10:
[Screenshot: ATTO benchmark, RAID10]

Also, I am seeing some strange dips in the disk writes, and the reads look messy in the AJA 64GB test for the SSD:
[Screenshot: AJA 64GB test, SSD volume]

Compared to the more consistent writes on the RAID10:
[Screenshot: AJA 64GB test, RAID10]


@007revad commented on GitHub (Apr 16, 2023):

@MarkErik I've not seen any real benefit from using NVMe drives as a volume or a cache.

But I'd actually be really happy if I were getting your read and write speeds on my DS1821+. With 32G of ECC memory and an E10G18-T1 10G card I only get 410 MB/s writes and 280 MB/s reads.

DS1821+ AJA 64GB test for a WD Black SN770 NVMe (no encryption or data checksums):
[Screenshot: AJA 64GB test, SN770 NVMe]

DS1821+ AJA 64GB test for a 4x 16TB Ironwolf SHR array (no encryption, and each drive's write cache disabled):
[Screenshot: AJA 64GB test, Ironwolf SHR array]


@MarkErik commented on GitHub (Apr 16, 2023):

@007revad Interesting about your speeds, especially the NVMe, since we have nearly the same NAS hardware configuration (same amount of RAM, same 10G card).

A few points about my current config: the RAID10 (6x shucked 14TB WD USB drives, which turned out to be WD140EDGZ) is empty, so my read and write speeds are ideal. At the moment I also don't have any other services running on the NAS.

Your SHR write speed seems quite good, given the many comments indicating that SHR write speed is typically that of a single drive. But in the same vein, reads were meant to be N-1 times single-drive speed, so something looks a bit off there for you.

What I've noticed is that the SSD volume seems to perform best right after a reboot, whereas the RAID10 performs similar regardless.

I found some benchmark numbers from testing on the previous version of DSM (7.1) using Blackmagic, comparing the command-line-created SSD volume to the RAID10. Using a 3GB file (so the entire operation would supposedly be cached in RAM), the SSD volume performed about 100 MB/s slower than the RAID10. After a reboot, though, the SSD performed better.

So across different DSM versions, with and without encryption, the SSD volume seems to be more temperamental.

I have a 500GB Samsung 970 EVO PLUS in the second slot that I was going to use as a cache drive for the RAID10, but I'll test it today as a volume, to see how it behaves performance-wise.


@inkpool commented on GitHub (Apr 17, 2023):

I did some tests; the performance of the NVMe volume is not significantly better than the HDD volume. I have an SN570 NVMe RAID0 and an HDD RAID5. So disappointed.


@inkpool commented on GitHub (Apr 17, 2023):

The single-drive, RAID1, and RAID0 NVMe volumes did not show much difference.

One good thing is that we can achieve nearly 8-drive HDD RAID5 performance with a single NVMe drive.


@MarkErik commented on GitHub (Apr 25, 2023):

FYI: no problems upgrading from the 7.2 Beta to the 7.2 RC. All the SSD volumes I had created were still there and working. Edit: however, the inconsistent speed/performance persists.
