[GH-ISSUE #108] M2D20 or FireCuda 530 not detected #543

Closed
opened 2026-03-11 11:51:53 +03:00 by kerem · 5 comments
Owner

Originally created by @7Uppers on GitHub (Jul 1, 2023).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/108

Inserted an M2D20 cache card and 2x Seagate FireCuda 530 ZP500GM30013.
The FireCudas seem undetected by the syno_hdd_db script:

```
myadm@ds1621xs:~$ sudo -i /volume1/apps/mac/drv_osx/synology/Synology_HDD_db-main/syno_hdd_db.sh -rm
Password: 
Synology_HDD_db v3.0.56
DS1621xs+ DSM 7.2-64570-1 
Using options: -rm

HDD/SSD models found: 1
IronWolf ZA2000NM10002-2,SU3SC011

No Expansion Units found

IronWolf ZA2000NM10002-2 already exists in ds1621xs+_host_v7.db
IronWolf ZA2000NM10002-2 already exists in ds1621xs+_host.db
IronWolf ZA2000NM10002-2 already exists in ds1621xs+_host.db.new

Support disk compatibility already enabled.

Support memory compatibility already disabled.

Max memory already set to 32 GB.

Drive db auto updates already enabled.

DSM successfully checked disk compatibility.

You may need to reboot the Synology to see the changes.
```

However, they are detected by the syno_create_m2_volume script:

```
myadm@ds1621xs:/$ sudo -i /volume1/apps/mac/drv_osx/synology/Synology_M2_volume-main/syno_create_m2_volume.sh
Password: 
Synology_M2_volume v1.2.14
DS1621xs+ DSM 7.2-64570-1 

Using options: 
Type yes to continue. Type anything else to do a dry run test.
yes

NVMe M.2 nvme0n1 is Seagate FireCuda 530 ZP500GM30013
WARNING Drive has a volume partition

NVMe M.2 nvme1n1 is Seagate FireCuda 530 ZP500GM30013
WARNING Drive has a volume partition

Unused M.2 drives found: 2
```

The end result is that I cannot use them.

```
myadm@ds1621xs:~$ ls /run/synostorage/disks
sda  sdb  sdc  sdd  sde  sdf

myadm@ds1621xs:~$ ls /sys/block | grep nvme
nvme0n1
nvme1n1

myadm@ds1621xs:~$ sudo synodisk --enum -t cache


myadm@ds1621xs:~$ sudo synonvme --model-get /dev/nvme1
Model name: Seagate FireCuda 530 ZP500GM300

myadm@ds1621xs:~$ sudo synonvme --model-get /dev/nvme0
Model name: Seagate FireCuda 530 ZP500GM300

myadm@ds1621xs:~$ sudo ls -I ram -I loop* -I dm-* /sys/block
Password: 
md0  md2      nvme1n1  ram1   ram11  ram13  ram15  ram3  ram5  ram7  ram9  sdb  sdd  sdf       zram0  zram2
md1  nvme0n1  ram0     ram10  ram12  ram14  ram2   ram4  ram6  ram8  sda   sdc  sde  synoboot  zram1  zram3

myadm@ds1621xs:~$ cat /sys/block/nvme0n1/device/model
Seagate FireCuda 530 ZP500GM30013       
myadm@ds1621xs:~$ cat /sys/block/nvme0n1/device/firmware_rev
SU6SM003
myadm@ds1621xs:~$ cat /sys/block/nvme1n1/device/model
Seagate FireCuda 530 ZP500GM30013       
myadm@ds1621xs:~$ cat /sys/block/nvme1n1/device/firmware_rev
SU6SM003
```
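For anyone following along, the sysfs reads above can be collapsed into one small loop. This is a sketch, not part of either script: `list_nvme` is a hypothetical helper name, and the sysfs layout (`/sys/block/nvme*n1/device/{model,firmware_rev}`) is assumed from the output shown above.

```shell
# Sketch: enumerate NVMe model and firmware from sysfs, as the commands above do.
# Pass an alternate sysfs root as $1 for testing; defaults to /sys/block.
list_nvme() {
    sysblock="${1:-/sys/block}"
    for dev in "$sysblock"/nvme*n1; do
        [ -d "$dev" ] || continue                     # glob did not match
        model=$(sed 's/ *$//' "$dev/device/model")    # strip padded spaces
        fw=$(cat "$dev/device/firmware_rev")
        printf '%s: %s (fw %s)\n' "${dev##*/}" "$model" "$fw"
    done
}
list_nvme
```

On the DS1621xs+ above this would print both FireCuda entries, matching what syno_create_m2_volume sees even while syno_hdd_db reports only the SATA IronWolfs.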
kerem closed this issue 2026-03-11 11:51:59 +03:00

@7Uppers commented on GitHub (Jul 1, 2023):

The latest RC doesn't help:

```
sudo -i /volume1/apps/mac/drv_osx/synology/Synology_HDD_db-3.1.59-RC/syno_hdd_db.sh -rm
Password: 
Synology_HDD_db v3.1.59
DS1621xs+ DSM 7.2-64570-1 
Using options: -rm

HDD/SSD models found: 1
IronWolf ZA2000NM10002-2,SU3SC011

No Expansion Units found

IronWolf ZA2000NM10002-2 already exists in ds1621xs+_host_v7.db
IronWolf ZA2000NM10002-2 already exists in ds1621xs+_host.db
IronWolf ZA2000NM10002-2 already exists in ds1621xs+_host.db.new

Support disk compatibility already enabled.

Support memory compatibility already disabled.

Max memory already set to 32 GB.

Drive db auto updates already enabled.

DSM successfully checked disk compatibility.

You may need to reboot the Synology to see the changes.


sudo -i fdisk -l /dev/nvme0n1
Password: 
Disk /dev/nvme0n1: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Disk model: Seagate FireCuda 530 ZP500GM30013       
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x1db5e014

Device         Boot   Start       End   Sectors   Size Id Type
/dev/nvme0n1p1          256   4980735   4980480   2.4G fd Linux raid autodetect
/dev/nvme0n1p2      4980736   9175039   4194304     2G fd Linux raid autodetect
/dev/nvme0n1p3      9437184 976768064 967330881 461.3G fd Linux raid autodetect


sudo syno_hdd_util --ssd_detect
Model                Firmware     SN                   Dev        is SSD?
IronWolf ZA2000NM10002-2 SU3SC011     7TD002AX             /dev/sdf   yes   
IronWolf ZA2000NM10002-2 SU3SC011     7TD00248             /dev/sde   yes   
IronWolf ZA2000NM10002-2 SU3SC011     7TD00242             /dev/sdd   yes   
IronWolf ZA2000NM10002-2 SU3SC011     7TD0024Q             /dev/sdc   yes   
IronWolf ZA2000NM10002-2 SU3SC011     7TD0024H             /dev/sdb   yes   
IronWolf ZA2000NM10002-2 SU3SC011     7TD0024N             /dev/sda   yes   
If this is not right, please kindly report this to us
```

@007revad commented on GitHub (Jul 1, 2023):

DSM only lets you use NVMe drives in an M.2 PCIe card as a cache, so to use them as a volume you will need the syno_create_m2_volume.sh script. But first we need to get the DS1621xs+ to use the M2D20 card.

What does the following command return?

`cat /proc/sys/kernel/syno_hw_version`


@007revad commented on GitHub (Jul 2, 2023):

If you're feeling brave, you could try extracting the model.dtb from the attached zip file, copying it to /etc.defaults/model.dtb, running `chmod 644 "/etc.defaults/model.dtb"`, and rebooting. Finally, check whether Storage Manager can now see the M2D20.
[DS1621xs+_model.zip](https://github.com/007revad/Synology_HDD_db/files/11928686/DS1621xs%2B_model.zip)

I can think of 3 possible outcomes:

  1. It works and storage manager can now see the M2D20.
  2. Nothing changes and storage manager still cannot see the M2D20.
  3. Bad things happen.

To mitigate outcome 3, I would suggest you:

  1. Shut down the NAS.
  2. Remove the existing drives (and any NVMe drives installed in the NAS' internal M.2 slots).
  3. Install a test drive and initialise the NAS.
  4. Add the model.dtb file, reboot then check if storage manager can see the M2D20.

Assuming step 4 went okay, shut down the NAS, remove the test drive and reinstall your original drives.
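The model.dtb swap in step 4 can be sketched as a few shell lines. This is only an illustration of the comment above: `install_model_dtb` is a hypothetical helper name, and the source path assumes you extracted the file from DS1621xs+_model.zip into the current directory.

```shell
# Sketch of the model.dtb replacement described above (reboot afterwards,
# then check whether Storage Manager can see the M2D20).
install_model_dtb() {
    src="$1"                            # model.dtb extracted from the zip
    dst="${2:-/etc.defaults/model.dtb}" # default target from the comment
    cp "$src" "$dst" || return 1
    chmod 644 "$dst"
}
# Usage (as root):
#   install_model_dtb ./model.dtb && reboot
```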


@7Uppers commented on GitHub (Jul 2, 2023):

It works now, unexpectedly.
This morning, after the nightly shutdown, the M2D20, the M.2 drives and the storage pool became visible.
I think I rebooted after almost every script run, so I am surprised it works.
It may be something with the chronological order, where one script should run after another, or maybe I did miss a reboot; I don't know.
The only extra command I tried is:

`sudo set_section_key_value /usr/syno/etc.defaults/adapter_cards.conf M2D20_sup_nvme DS1621xs+ yes`
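For reference, I believe `set_section_key_value` simply writes an INI-style entry, so assuming the stock layout of adapter_cards.conf, the command above would leave a section roughly like this (section and key names inferred from the command's arguments, not verified against a live file):

```ini
; /usr/syno/etc.defaults/adapter_cards.conf (sketch)
[M2D20_sup_nvme]
DS1621xs+=yes
```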

```
cat /proc/sys/kernel/syno_hw_version
DS1621xs+

sudo synodisk --enum -t cache
************ Disk Info ***************
>> Disk id: 2
>> Slot id: 1
>> Disk path: /dev/nvme0n1
>> Disk model: Seagate FireCuda 530 ZP500GM30013       
>> Total capacity: 465.76 GB
>> Tempeture: 37 C
************ Disk Info ***************
>> Disk id: 1
>> Slot id: 1
>> Disk path: /dev/nvme1n1
>> Disk model: Seagate FireCuda 530 ZP500GM30013       
>> Total capacity: 465.76 GB
>> Tempeture: 38 C
```

I created a new volume and defined new storage in VMM, migrated the virtual machines to the volume on the M2D20 card, and ran some speed tests in Windows:

```
winsat disk -drive c
Windows System Assessment Tool
> Running: Feature Enumeration ''
> Run Time 00:00:00.00
> Running: Storage Assessment '-drive c -ran -read'
> Run Time 00:00:00.45
> Running: Storage Assessment '-drive c -seq -read'
> Run Time 00:00:00.81
> Running: Storage Assessment '-drive c -seq -write'
> Run Time 00:00:00.97
> Running: Storage Assessment '-drive c -flush -seq'
> Run Time 00:00:00.47
> Running: Storage Assessment '-drive c -flush -ran'
> Run Time 00:00:00.47
> Dshow Video Encode Time                      0.00000 s
> Dshow Video Decode Time                      0.00000 s
> Media Foundation Decode Time                 0.00000 s
> Disk  Random 16.0 Read                       744.01 MB/s          8.5
> Disk  Sequential 64.0 Read                   3054.97 MB/s          9.2
> Disk  Sequential 64.0 Write                  2299.53 MB/s          9.1
> Average Read Time with Sequential Writes     0.054 ms          8.9
> Latency: 95th Percentile                     0.104 ms          8.9
> Latency: Maximum                             2.573 ms          8.7
> Average Read Time with Random Writes         0.055 ms          8.9
> Total Run Time 00:00:03.44
```

The result is good, and in line with what I expected.
When the M.2 drives were installed in the internal cache slots, they only ran at PCIe 2.0 speed, so no better than SATA-600 and far short of what NVMe is capable of.
Now that the M.2 drives are installed on the M2D20 card, they can run at the much higher PCIe 3.0 speed.
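A back-of-envelope check of those link speeds (per lane: transfer rate times encoding efficiency, divided by 8 bits per byte). The x1 lane count for the internal slots is my assumption, based on the "no better than SATA-600" observation above; `pcie_mbps` is just an illustrative helper.

```shell
# Theoretical PCIe throughput in MB/s:
#   GT/s x (encoding num/den) / 8 bits-per-byte x lanes x 1000
pcie_mbps() { awk -v gt="$1" -v num="$2" -v den="$3" -v lanes="$4" \
    'BEGIN { printf "%.0f\n", gt * num / den / 8 * lanes * 1000 }'; }

pcie_mbps 5 8 10 1     # PCIe 2.0 x1 (8b/10b)    -> 500 (SATA-600 territory)
pcie_mbps 8 128 130 4  # PCIe 3.0 x4 (128b/130b) -> 3938
```

The measured 3054.97 MB/s sequential read sits comfortably inside the ~3938 MB/s ceiling of a PCIe 3.0 x4 link, which is consistent with the card now running at full speed.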

I ran a quick scan in Windows Defender: previously it took 8 minutes; in the new setup it takes 4 minutes.
I think it is safe to say that storage speed has doubled with this setup, which is a big deal for a virtual machine.

Not everything works: running a benchmark action in Storage Manager takes forever and gives no result.

Thanks for the scripts, and for all your help


@007revad commented on GitHub (Jul 2, 2023):

Nice. I wish my DS1821+ had a built-in 10GbE port so I could use the PCIe slot for an M2D20.
