[GH-ISSUE #148] DS1821+ with 2x NVMEs internal and 2x NVMEs on E10M20-T1, no show in Storage Manager after script #773

Closed
opened 2026-03-12 16:38:01 +03:00 by kerem · 157 comments
Owner

Originally created by @RozzNL on GitHub (Sep 30, 2023).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/148

Hi all,
I have a DS1821+ and am running DSM 7.2-64570 Update 3.
I installed 2 Samsung NVMe drives in the Syno's internal M.2 slots; after running the syno_hdd_db.sh script from [u/daveR007](https://www.reddit.com/u/daveR007/), the SSDs showed up and I could use them as cache.
I ran it like that for a couple of years.
Just recently I found the E10M20-T1 card, installed it with 2 more NVMe drives, and ran the script again:

root@DS1821:/volume1/homes/admin/Scripts# ./syno_hdd_db.sh -nfr

Synology_HDD_db v3.1.64
DS1821+ DSM 7.2-64570-3
Using options: -nfr
Running from: /volume1/homes/admin/Scripts/syno_hdd_db.sh

HDD/SSD models found: 2
ST14000NM001G-2KJ103,SN03
ST16000NM001G-2KK103,SN03

M.2 drive models found: 2
Samsung SSD 970 EVO 1TB,2B2QEXE7
Samsung SSD 970 EVO Plus 2TB,2B2QEXM7

M.2 PCIe card models found: 1
E10M20-T1

No Expansion Units found

ST14000NM001G-2KJ103 already exists in ds1821+_host_v7.db
ST16000NM001G-2KK103 already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO 1TB already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO 1TB already exists in ds1821+_e10m20-t1_v7.db
Samsung SSD 970 EVO Plus 2TB already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO Plus 2TB already exists in ds1821+_e10m20-t1_v7.db

E10M20-T1 NIC already enabled for DS1821+
E10M20-T1 NVMe already enabled for DS1821+
E10M20-T1 SATA already enabled for DS1821+
E10M20-T1 already enabled in model.dtb

Disabled support disk compatibility.
Disabled support memory compatibility.
Max memory already set to 64 GB.

M.2 volume support already enabled.
Disabled drive db auto updates.
DSM successfully checked disk compatibility.
You may need to reboot the Synology to see the changes.

So the script sees all 4 SSDs, but they do not show up in the Synology GUI. I ran syno_create_m2_volume.sh and created 2 RAID 1 volumes: one on the onboard slots and one on the E10M20-T1 card.

But they still do not show up in the GUI, and there is no Online Assemble option either.

Answer from private chat with Dave:
This is caused by DSM 7.2 Update 3 adding a power_limit for NVMe drives
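Dave's diagnosis points at a power_limit property in the device tree. A quick way to inspect it is sketched below; the sysfs path is the one quoted later in this thread, and device-tree property values are NUL-terminated strings, so the helper strips the terminator before counting entries.

```shell
# Count the comma-separated entries in a device-tree power_limit property.
# On DSM 7.2u2+ the live property appears at
# /sys/firmware/devicetree/base/power_limit (per later comments in this thread).
count_power_limits() {
    # Strip the trailing NUL terminator, then count comma-separated fields.
    tr -d '\000' < "$1" | awk -F, '{print NF}'
}

# Example against a live device-tree DSM model:
# count_power_limits /sys/firmware/devicetree/base/power_limit
```

The "Incorrect power limit number 4!=2" errors further down suggest DSM expects the entry count to match the number of recognised NVMe slots.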

kerem closed this issue 2026-03-12 16:38:07 +03:00
Author
Owner

@007revad commented on GitHub (Sep 30, 2023):

I need to get some information from you. Can you reply with what the following commands return:

synodisk --enum -t cache

cat /sys/block/nvme0n1/device/syno_block_info

cat /sys/block/nvme1n1/device/syno_block_info

cat /sys/block/nvme2n1/device/syno_block_info

cat /sys/block/nvme3n1/device/syno_block_info

<!-- gh-comment-id:1741719591 -->
Author
Owner

@007revad commented on GitHub (Sep 30, 2023):

And 2 more:

for d in /sys/devices/pci0000:00/0000:00:01.2/0000:* ; do echo "$d"; done

Assuming the last line of that command ended in 0000:07:00.0 then run this command:

for d in /sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:* ; do echo "$d"; done

<!-- gh-comment-id:1741722709 -->
Author
Owner

@RozzNL commented on GitHub (Sep 30, 2023):

I used the create-M.2-volume script again and created 4 single volumes; I hope this doesn't mess up the information you need.

synodisk --enum -t cache
No info returned

cat /sys/block/nvme0n1/device/syno_block_info
pciepath=00:01.2,00.0,04.0,00.0

cat /sys/block/nvme1n1/device/syno_block_info
pciepath=00:01.2,00.0,08.0,00.0

cat /sys/block/nvme2n1/device/syno_block_info
pciepath=00:01.3,00.0

cat /sys/block/nvme3n1/device/syno_block_info
pciepath=00:01.4,00.0

for d in /sys/devices/pci0000:00/0000:00:01.2/0000:* ; do echo "$d"; done
/sys/devices/pci0000:00/0000:00:01.2/0000:00:01.2:pcie01
/sys/devices/pci0000:00/0000:00:01.2/0000:00:01.2:pcie02
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0
Yes, it indeed returned the path you assumed.

for d in /sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:* ; do echo "$d"; done
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:07:00.0:pcie12
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:00.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:02.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:03.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:0c.0

<!-- gh-comment-id:1741724899 -->
Author
Owner

@007revad commented on GitHub (Sep 30, 2023):

I have enough to create a model.dtb file for your DS1821+ to make the E10M20-T1 and its NVMe drives appear in Storage Manager.

But the result of the last command is a little confusing, though it doesn't matter for what we're doing.

  1. 0000:08:00.0
  2. 0000:08:02.0 is one of the M.2 slots in the E10M20-T1 for a SATA M.2 drive.
  3. 0000:08:03.0 is one of the M.2 slots in the E10M20-T1 for a SATA M.2 drive.
  4. 0000:08:04.0 is the M.2 slot 2 in the E10M20-T1 for an NVMe drive.
  5. 0000:08:08.0 is the M.2 slot 1 in the E10M20-T1 for an NVMe drive.
  6. 0000:08:0c.0

I don't know what 0000:08:00.0 and 0000:08:0c.0 are for. One of them could be for the 10G in the E10M20-T1.

<!-- gh-comment-id:1741731768 -->
Author
Owner

@RozzNL commented on GitHub (Sep 30, 2023):

Great!
I don't mind testing some more for you if you need the info in the future?

<!-- gh-comment-id:1741732951 -->
Author
Owner

@007revad commented on GitHub (Sep 30, 2023):

Can you download this zip file:
[ds1821+_model_with_e10m20-t1.zip](https://github.com/007revad/Synology_HDD_db/files/12774402/ds1821%2B_model_with_e10m20-t1.zip)

Then

  1. Unzip it to a directory on the DS1821+
  2. cd to that directory.
  3. chmod 644 model.dtb
  4. cp -p /etc.defaults/model.dtb /etc.defaults/model.dtb.bak
  5. cp -pu model.dtb /etc.defaults/model.dtb
  6. cp -pu model.dtb /etc/model.dtb
  7. Reboot
  8. Check Storage Manager now shows the E10M20-T1 and its NVMe drives.
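Steps 3-6 above, wrapped as a small function for anyone scripting this (a convenience sketch of my own, not part of the linked zip; the directory arguments exist only so the logic can be tried safely outside /etc):

```shell
# Back up the stock model.dtb and install the replacement into both
# locations DSM reads from. Defaults match the live DSM paths.
install_model_dtb() {
    src="$1"
    defaults="${2:-/etc.defaults}"
    etc_dir="${3:-/etc}"
    chmod 644 "$src" || return 1                                       # step 3
    cp -p "$defaults/model.dtb" "$defaults/model.dtb.bak" || return 1  # step 4
    cp -pu "$src" "$defaults/model.dtb" || return 1                    # step 5
    cp -pu "$src" "$etc_dir/model.dtb" || return 1                     # step 6
    echo "model.dtb installed; reboot to apply"                        # step 7
}
```

The `-u` flag only copies when the source is newer, matching the original commands, and `-p` preserves the file's mode and timestamps.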
<!-- gh-comment-id:1741733516 -->
Author
Owner

@007revad commented on GitHub (Sep 30, 2023):

> I don't mind testing some more for you if you need the info in the future?

I will take you up on that.

<!-- gh-comment-id:1741733891 -->
Author
Owner

@RozzNL commented on GitHub (Sep 30, 2023):

Nope, nothing changed in Storage Manager

<!-- gh-comment-id:1741736884 -->
Author
Owner

@007revad commented on GitHub (Sep 30, 2023):

That's disappointing and unexpected.

It's 9pm here and it's been a busy day. I'll get back to you tomorrow.

<!-- gh-comment-id:1741740138 -->
Author
Owner

@RozzNL commented on GitHub (Sep 30, 2023):

No probs Dave, thanks for everything so far.

<!-- gh-comment-id:1741741311 -->
Author
Owner

@zcpnate commented on GitHub (Sep 30, 2023):

This appears to be the same as my open issue #132. Reverting to 7.2u1 does consistently fix it, but I am now stuck on that DSM version.

<!-- gh-comment-id:1741836141 -->
Author
Owner

@007revad commented on GitHub (Sep 30, 2023):

**EDIT** Don't worry about these commands. See [my later comment here](https://github.com/007revad/Synology_HDD_db/issues/148#issuecomment-1741874000).

What do the following commands return:

grep "e10m20-t1" /run/model.dtb

grep "power_limit" /run/model.dtb

grep "100,100,100,100" /run/model.dtb

get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_nvme DS1821+

get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_sata DS1821+

<!-- gh-comment-id:1741865336 -->
Author
Owner

@007revad commented on GitHub (Sep 30, 2023):

@zcpnate @cfsnate

Yes, this is the same problem. I was going to reply to issue #132 once @RozzNL had confirmed the fix is working.

<!-- gh-comment-id:1741866979 -->
Author
Owner

@007revad commented on GitHub (Sep 30, 2023):

@RozzNL @zcpnate @cfsnate

I just realised why the edited model.dtb file didn't work. If you have syno_hdd_db scheduled to run at start-up or shutdown, it will replace the edited model.dtb file with the one for 7.2 Update 1... which is not what we want.

The syno_hdd_db.sh that I have scheduled to run at boot-up has the check_modeldtb "$c" lines commented out. For the E10M20-T1 you want to change line 1335 from check_modeldtb "$c" to #check_modeldtb "$c"

After editing syno_hdd_db.sh, redo the steps in [this comment](https://github.com/007revad/Synology_HDD_db/issues/148#issuecomment-1741733516).
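Since line 1335 applies to v3.1.64 specifically and shifts between releases, matching on the call itself is more robust. A sed one-liner for that (my own convenience, not from the official repo):

```shell
# Comment out every check_modeldtb call, keeping a .orig backup of the script.
# Matching the call name avoids depending on a release-specific line number.
sed -i.orig 's/^\([[:space:]]*\)check_modeldtb /\1#check_modeldtb /' syno_hdd_db.sh
```

Compare `syno_hdd_db.sh` against `syno_hdd_db.sh.orig` afterwards to confirm only the intended lines changed.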

<!-- gh-comment-id:1741874000 -->
Author
Owner

@RozzNL commented on GitHub (Oct 1, 2023):

Will try that later today Dave

<!-- gh-comment-id:1742021612 -->
Author
Owner

@RozzNL commented on GitHub (Oct 1, 2023):

Just for the sake of testing, I ran your commands before editing the script:

> What do the following commands return:
>
> grep "e10m20-t1" /run/model.dtb

Binary file /run/model.dtb matches

> grep "power_limit" /run/model.dtb

Binary file /run/model.dtb matches

> grep "100,100,100,100" /run/model.dtb

Binary file /run/model.dtb matches

> get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_nvme DS1821+

yes

> get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_sata DS1821+

yes

> @RozzNL @zcpnate @cfsnate
>
> I just realised why the edited model.dtb file didn't work. If you have syno_hdd_db scheduled to run at start-up or shutdown, it will replace the edited model.dtb file with the one for 7.2 Update 1... which is not what we want.
>
> The syno_hdd_db.sh that I have scheduled to run at boot-up has the check_modeldtb "$c" lines commented out. For the E10M20-T1 you want to change line 1335 from check_modeldtb "$c" to #check_modeldtb "$c"
>
> After editing syno_hdd_db.sh, redo the steps in [this comment](https://github.com/007revad/Synology_HDD_db/issues/148#issuecomment-1741733516).

Commented out, reapplied the model.dtb and applicable commands, rebooted.
The modified script with the commented-out check runs at shutdown; after boot-up, still no drives in Storage Manager. 👎

EDIT:
I double-checked that I am using the modified model.dtb file you gave me; date + size are the same as your modified file.

EDIT2:
I do run syno_hdd_db.sh with the -nfr option, btw.

<!-- gh-comment-id:1742071140 -->
Author
Owner

@007revad commented on GitHub (Oct 1, 2023):

Try disabling the schedules for syno_hdd_db and leaving it disabled, then run this command
set_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_sata DS1821+ no

<!-- gh-comment-id:1742111196 -->
Author
Owner

@007revad commented on GitHub (Oct 1, 2023):

Can you tell me what these commands return:

synodisk --enum -t cache

udevadm info --query path --name nvme0

udevadm info --query path --name nvme1

udevadm info --query path --name nvme2

udevadm info --query path --name nvme3

<!-- gh-comment-id:1742112974 -->
Author
Owner

@RozzNL commented on GitHub (Oct 1, 2023):

Disabled schedule, ran command, rebooted, nothing changed in Storage Manager.

Can you tell me what these commands return:

synodisk --enum -t cache
Nothing returned
udevadm info --query path --name nvme0
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0
udevadm info --query path --name nvme1
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1
udevadm info --query path --name nvme2
/devices/pci0000:00/0000:00:01.3/0000:0f:00.0/nvme/nvme2
udevadm info --query path --name nvme3
/devices/pci0000:00/0000:00:01.4/0000:10:00.0/nvme/nvme3

EDIT:
Looking at your command, and looking in the file adapter_cards.conf, I see:
[E10M20-T1_sup_nic] and
[E10M20-T1_sup_nvme] and
[E10M20-T1_sup_sata] and
DS1821+=yes, but also lower in the list
DS1821+=no

There are multiple references to the same DS... not only the DS1821+ but other models too.

<!-- gh-comment-id:1742115544 -->
Author
Owner

@007revad commented on GitHub (Oct 1, 2023):

I don't understand why synodisk --enum -t cache is not returning anything.

Are there any NVMe errors when you run:
sudo grep synostgd-disk /var/log/messages | tail -10

<!-- gh-comment-id:1742126089 -->
Author
Owner

@RozzNL commented on GitHub (Oct 1, 2023):

> I don't understand why synodisk --enum -t cache is not returning anything.
>
> Are there any NVMe errors when you run:
> sudo grep synostgd-disk /var/log/messages | tail -10

2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme1n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:90 Incorrect power limit number 4!=2
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:164 Fail to get power limit of nvme2n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme2n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:90 Incorrect power limit number 4!=2
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:164 Fail to get power limit of nvme3n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme3n1

EDIT:
But running the script (with line 1335 modified) gives:
./syno_hdd_db.sh -nfr
Synology_HDD_db v3.1.64
DS1821+ DSM 7.2-64570-3
Using options: -nfr
Running from: /volume1/homes/admin/Scripts/syno_hdd_db.sh

HDD/SSD models found: 2
ST14000NM001G-2KJ103,SN03
ST16000NM001G-2KK103,SN03

M.2 drive models found: 2
Samsung SSD 970 EVO 1TB,2B2QEXE7
Samsung SSD 970 EVO Plus 2TB,2B2QEXM7

M.2 PCIe card models found: 1
E10M20-T1

No Expansion Units found

ST14000NM001G-2KJ103 already exists in ds1821+_host_v7.db
ST16000NM001G-2KK103 already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO 1TB already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO 1TB already exists in ds1821+_e10m20-t1_v7.db
Samsung SSD 970 EVO Plus 2TB already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO Plus 2TB already exists in ds1821+_e10m20-t1_v7.db

E10M20-T1 NIC already enabled for DS1821+
E10M20-T1 NVMe already enabled for DS1821+
E10M20-T1 SATA already enabled for DS1821+

Disabled support disk compatibility.
Disabled support memory compatibility.
Max memory already set to 64 GB.
M.2 volume support already enabled.
Disabled drive db auto updates.
DSM successfully checked disk compatibility.
You may need to reboot the Synology to see the changes.

<!-- gh-comment-id:1742127317 -->
Author
Owner

@007revad commented on GitHub (Oct 1, 2023):

Synology uses the same adapter_cards.conf on every Synology NAS model (even models without a PCIe slot). It lists which PCIe adapter cards each model supports.

Can you try deleting the line that says "DS1821+=no"?

I also just noticed that every model that officially supports the E10M20-T1 is listed as yes in the [E10M20-T1_sup_sata] section. Even though Synology's information says the E10M20-T1 does not support SATA M.2 drives on any NAS model.

The Xpenology people just add the NAS model = yes under every section in adapter_cards.conf
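For anyone else reading along, the layout being described looks roughly like this (an illustrative excerpt I've sketched, not a verbatim copy of adapter_cards.conf):

```ini
[E10M20-T1_sup_nic]
DS1821+=yes
[E10M20-T1_sup_nvme]
DS1821+=yes
[E10M20-T1_sup_sata]
DS1821+=yes
```

A `DS1821+=no` appearing further down the file belongs to whichever `[section]` header precedes it, not to the E10M20-T1 sections above, which is why the same model can legitimately appear with both values.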

<!-- gh-comment-id:1742129363 -->
Author
Owner

@007revad commented on GitHub (Oct 1, 2023):

> Incorrect power limit number 4!=2

Okay, so it's not happy with the "100,100,100,100" power limit I added. Which is what the Xpenology people use. They add an extra 100 for each NVMe drive found.

What does the following command return?
cat /sys/firmware/devicetree/base/power_limit && echo

The only Synology models I own that have M.2 slots have:

  • DS720+ has a "11.55,5.775" power limit.
  • DS1821+ has a "14.85,9.075" power limit.

I tried "14.85,9.075,14.85,9.075" but it didn't work and I was getting errors in the log:

  • nvme_model_spec_get.c:81 Fail to get fdt property of power_limit
  • nvme_model_spec_get.c:359 Fail to get power limit of nvme0n1

Changing the power limit to "100,100,100,100" worked perfectly on my DS1821+ with 2 NVMe drives in an M2D18 adapter card, and none in the internal M.2 slots.

Before I changed the power limit to "100,100,100,100" the fans in my DS1821+ were going full speed and my NVMe drives and the M2D18 adapter card did not show in storage manager.

<!-- gh-comment-id:1742140794 -->
Author
Owner

@zcpnate commented on GitHub (Oct 1, 2023):

I can assist with my working setup on 7.2u1 if there's some way to check existing working power limits?

<!-- gh-comment-id:1742143126 -->
Author
Owner

@007revad commented on GitHub (Oct 1, 2023):

> I can assist with my working setup on 7.2u1 if there's some way to check existing working power limits?

7.2u1 didn't have a power limit. Synology added the power limit in 7.2u2

<!-- gh-comment-id:1742143752 -->
Author
Owner

@RozzNL commented on GitHub (Oct 1, 2023):

Ah... the [ ] are separate sections, gotcha.
EDIT:
All 3 sections regarding E10M20-T1 are already set to yes for the DS1821+ and I can't find the DS1821+=no anymore... I wonder if running your script changed this?

> > Incorrect power limit number 4!=2
>
> Okay, so it's not happy with the "100,100,100,100" power limit I added. Which is what the Xpenology people use. They add an extra 100 for each NVMe drive found.
>
> What does the following command return?
> cat /sys/firmware/devicetree/base/power_limit && echo

14.85,9.075

> The only Synology models I own that have M.2 slots have:
>
>   • DS720+ has a "11.55,5.775" power limit.
>   • DS1821+ has a "14.85,9.075" power limit.
>
> I tried "14.85,9.075,14.85,9.075" but it didn't work and I was getting errors in the log:
>
>   • nvme_model_spec_get.c:81 Fail to get fdt property of power_limit
>   • nvme_model_spec_get.c:359 Fail to get power limit of nvme0n1
>
> Changing the power limit to "100,100,100,100" worked perfectly on my DS1821+ with 2 NVMe drives in a M2D18 adapter card, and none in the internal M.2 slots.
>
> Before I changed the power limit to "100,100,100,100" the fans in my DS1821+ were going full speed and my NVMe drives and the M2D18 adapter card did not show in storage manager.
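The "Incorrect power limit number 4!=2" error reads like an entry-count mismatch: 4 comma-separated power_limit values where DSM expected 2. A minimal sketch for counting the entries in a power_limit string (the devicetree path is the one quoted above; running this on the NAS itself is an assumption):

```shell
# Count the comma-separated entries in a power_limit string.
count_power_limits() {
    printf '%s' "$1" | awk -F, '{print NF}'
}

# On the NAS (devicetree values are NUL-terminated, hence tr -d '\0'):
#   count_power_limits "$(tr -d '\0' </sys/firmware/devicetree/base/power_limit)"
```

With the values from this thread, "14.85,9.075" counts as 2 entries and the Xpenology-style "100,100,100,100" as 4, matching the 4!=2 in the log.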

@007revad commented on GitHub (Oct 1, 2023):

@zcpnate

Can you check if smartctl --info /dev/nvme0 works for NVMe drives in 7.2u1

@007revad commented on GitHub (Oct 1, 2023):

> All 3 sections regarding E10M20-T1 are already set to yes for the DS1821+ and I can't find the DS1821+=no anymore... I wonder if running your script changed this?

Yes, running syno_hdd_db would have set it back to yes. But I don't think it matters.

@zcpnate commented on GitHub (Oct 1, 2023):

> @zcpnate
>
> Can you check if smartctl --info /dev/nvme0 works for NVMe drives in 7.2u1

ash-4.4# smartctl --info /dev/nvme0
smartctl 6.5 (build date Sep 26 2022) [x86_64-linux-4.4.302+] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

Read NVMe Identify Controller failed: NVMe Status 0x400b

@007revad commented on GitHub (Oct 1, 2023):

> Read NVMe Identify Controller failed: NVMe Status 0x400b

On 7.2u3 I get Read NVMe Identify Controller failed: NVMe Status 0x4002

Someone else on 7.2.1 gets Read NVMe Identify Controller failed: NVMe Status 0x200b

The only thing that's consistent is that smartctl --info for nvme drives doesn't work in DSM 7.2

@zcpnate commented on GitHub (Oct 1, 2023):

I tested a few other nvme drives and got 200b for my internally mounted nvme drives acting as a volume.

@RozzNL commented on GitHub (Oct 1, 2023):

I too get the 0x200b

@007revad commented on GitHub (Oct 1, 2023):

Can you try:

synodiskport -cache

synonvme --m2-card-model-get /dev/nvme0n1; synonvme --get-location /dev/nvme0n1

synonvme --m2-card-model-get /dev/nvme1n1; synonvme --get-location /dev/nvme1n1

synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme2n1

synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1
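The four checks above can also be run as one loop; a sketch only (it assumes synonvme, normally /usr/syno/bin/synonvme on DSM, is on the PATH):

```shell
# Run the adapter-model and slot-location checks for each given NVMe device.
check_nvme_slots() {
    for d in "$@"; do
        echo "== $d =="
        synonvme --m2-card-model-get "$d"
        synonvme --get-location "$d"
    done
}

# Usage: check_nvme_slots /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
```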

@zcpnate commented on GitHub (Oct 1, 2023):

> Can you try:
>
> synodiskport -cache

ash-4.4# synodiskport -cache
nvme0n1 nvme1n1 nvme2n1 nvme3n1

> synonvme --m2-card-model-get /dev/nvme0n1; synonvme --get-location /dev/nvme0n1

ash-4.4# synonvme --m2-card-model-get /dev/nvme0n1; synonvme --get-location /dev/nvme0n1
E10M20-T1
Device: /dev/nvme0n1, PCI Slot: 1, Card Slot: 2

> synonvme --m2-card-model-get /dev/nvme1n1; synonvme --get-location /dev/nvme1n1

ash-4.4# synonvme --m2-card-model-get /dev/nvme1n1; synonvme --get-location /dev/nvme1n1
E10M20-T1
Device: /dev/nvme1n1, PCI Slot: 1, Card Slot: 1

> synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme2n1

ash-4.4# synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme2n1
Not M.2 adapter card
Device: /dev/nvme2n1, PCI Slot: 0, Card Slot: 1

> synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme3n1

ash-4.4# synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme3n1
Not M.2 adapter card
Device: /dev/nvme3n1, PCI Slot: 0, Card Slot: 2

@007revad commented on GitHub (Oct 1, 2023):

I had a typo in the last command. It should return the same result, but the command should have been:
synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1

@zcpnate commented on GitHub (Oct 1, 2023):

> I had a typo in the last command. It should return the same result, but the command should have been: synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1

ash-4.4# synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1
Not M.2 adapter card
Device: /dev/nvme3n1, PCI Slot: 0, Card Slot: 2

blind copy paste haha didn't catch that

@007revad commented on GitHub (Oct 2, 2023):

While searching for what causes the "nvme_model_spec_get.c:90 Incorrect power limit number 4!=2" log entry I found 7.2-U3 has 2 scripts related to nvme power. I need to check if 7.2.1 still has those scripts.

syno_nvme_power_limit_set.service runs /usr/syno/lib/systemd/scripts/syno_nvme_set_power_limit.sh

/usr/syno/lib/systemd/scripts/syno_nvme_set_power_limit.sh then runs /usr/syno/lib/systemd/scripts/nvme_power_state.sh -d $dev_name -p $pwr_limit which sets the power limit to $pwr_limit for nvme drive $dev_name

It can also list the power states of the specified nvme drive. Strangely both my DS720+ and DS1821+ return the exact same power states even though both have different power_limits set in model.dtb

For me /usr/syno/lib/systemd/scripts/nvme_power_state.sh --list -d nvme0 returns:

========== list all power states of nvme0 ==========
ps 0:   max_power 4.70W operational enlat:0 exlat:0 rrt:0 rrl:0 rwt:0 rwl:0 idle_power:0.3000W active_power:4.02 W      operational     rrt 0   rrl 0   rwt 0  rwl 0
ps 1:   max_power 3.00W operational enlat:0 exlat:0 rrt:0 rrl:0 rwt:0 rwl:0 idle_power:0.3000W active_power:3.02 W      operational     rrt 0   rrl 0   rwt 0  rwl 0
ps 2:   max_power 2.20W operational enlat:0 exlat:0 rrt:0 rrl:0 rwt:0 rwl:0 idle_power:0.3000W active_power:2.02 W      operational     rrt 0   rrl 0   rwt 0  rwl 0
ps 3:   max_power 0.0150W non-operational enlat:1500 exlat:2500 rrt:3 rrl:3 rwt:3 rwl:3 idle_power:0.0150 W     non-operational rrt 3   rrl 3   rwt 3   rwl 3
ps 4:   max_power 0.0050W non-operational enlat:10000 exlat:6000 rrt:4 rrl:4 rwt:4 rwl:4 idle_power:0.0050 W    non-operational rrt 4   rrl 4   rwt 4   rwl 4
ps 5:   max_power 0.0033W non-operational enlat:176000 exlat:25000 rrt:5 rrl:5 rwt:5 rwl:5 idle_power:0.0033 W  non-operational rrt 5   rrl 5   rwt 5   rwl 5


========== nvme0 result ==========
ps 0:   max_power 4.70W operational enlat:0 exlat:0 rrt:0 rrl:0 rwt:0 rwl:0 idle_power:0.3000W active_power:4.02 W      operational     rrt 0   rrl 0   rwt 0  rwl 0

add to task schedule? false

@RozzNL commented on GitHub (Oct 2, 2023):

/usr/syno/lib/systemd/scripts/nvme_power_state.sh --list -d nvme0
For me it returns:

========== list all power states of nvme0 ==========
ps 0:   max_power 7.50 W        operational     rrt 0   rrl 0   rwt 0   rwl 0
ps 1:   max_power 5.90 W        operational     rrt 1   rrl 1   rwt 1   rwl 1
ps 2:   max_power 3.60 W        operational     rrt 2   rrl 2   rwt 2   rwl 2
ps 3:   max_power 0.0700 W      non-operational rrt 3   rrl 3   rwt 3   rwl 3
ps 4:   max_power 0.0050 W      non-operational rrt 4   rrl 4   rwt 4   rwl 4


========== nvme0 result ==========
ps 0:   max_power 7.50 W        operational     rrt 0   rrl 0   rwt 0   rwl 0

add to task schedule? false
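The two outputs above use slightly different layouts ("max_power 4.70W" versus "max_power 7.50 W"). A small awk sketch that pulls the ps 0 max_power out of either format:

```shell
# Print the max_power value of power state 0 from nvme_power_state.sh output.
# Handles both "max_power 4.70W" and "max_power 7.50 W" layouts.
ps0_max_power() {
    awk '/^ps 0:/ {sub(/W$/, "", $4); print $4; exit}'
}

# Usage on the NAS (script path as quoted in the thread):
#   /usr/syno/lib/systemd/scripts/nvme_power_state.sh --list -d nvme0 | ps0_max_power
```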

@007revad commented on GitHub (Oct 2, 2023):

Yours looks more like what I'd expect the output of a Synology command or script to look like.

Does this return an error? Or a list of nvme drives and power limits?

nvme_list=$(synodiskport -cache)
output=$(/usr/syno/bin/synonvme --get-power-limit $nvme_list)
echo ${output[@]}

@RozzNL commented on GitHub (Oct 2, 2023):

Nope, it doesn't return anything...

@007revad commented on GitHub (Oct 2, 2023):

So what about these:

nvme_list=$(synodiskport -cache) && echo ${nvme_list[@]}

output=$(synonvme --get-power-limit $nvme_list) && echo ${output[@]}

synonvme --get-power-limit nvme0n1

synonvme --get-power-limit nvme1n1

synonvme --get-power-limit nvme2n1

synonvme --get-power-limit nvme3n1
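The same queries as a loop, printing a marker when a command produces no output; a sketch only (synonvme assumed on the PATH, as on DSM):

```shell
# Query the power limit of each given NVMe device, flagging empty results.
get_power_limits() {
    for d in "$@"; do
        out=$(synonvme --get-power-limit "$d" 2>/dev/null)
        printf '%s: %s\n' "$d" "${out:-<nothing>}"
    done
}

# Usage: get_power_limits nvme0n1 nvme1n1 nvme2n1 nvme3n1
```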

@RozzNL commented on GitHub (Oct 2, 2023):

All return with nothing 👎

@007revad commented on GitHub (Oct 2, 2023):

Does synodiskport -cache

return:
nvme0n1 nvme1n1 nvme2n1 nvme3n1

@RozzNL commented on GitHub (Oct 2, 2023):

Nope, still returns nothing... and I still have the same errors, btw:

2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme1n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:90 Incorrect power limit number 4!=2
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:164 Fail to get power limit of nvme2n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme2n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:90 Incorrect power limit number 4!=2
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:164 Fail to get power limit of nvme3n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme3n1

@zcpnate commented on GitHub (Oct 2, 2023):

FYI these power limit scripts do not exist on 7.2u1

@007revad commented on GitHub (Oct 3, 2023):

I just installed a 3rd NVMe drive in one of the internal M.2 slots to see if I got 4!=2 in logs but I didn't.

@RozzNL
Can you do the following:

  1. Edit line 1334 in syno_hdd_db.sh to change this:
    • enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA"
    • to this:
    • #enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA"
  2. Delete the DS1821+=yes line under the [E10M20-T1_sup_sata] section in /usr/syno/etc.defaults/adapter_cards.conf

Also, have you ever run the syno_enable_m2_volume script? In DSM 7.2 update 3 or an earlier DSM version? I have NOT run it since updating to DSM 7.2 update 3.

@007revad commented on GitHub (Oct 3, 2023):

If anyone wants a quick solution (instead of waiting for more trial and error testing) you can replace /usr/lib/libsynonvme.so.1 with the one from DSM 7.2-64570. I know this works in 7.2 update 2 and update 3. But I have no idea if it works in 7.2.1

  1. Download DS1821+_64570_libsynonvme.so.1.zip and unzip it.
  2. Backup existing libsynonvme.so.1 and append build and update version:
    • build=$(get_key_value /etc.defaults/VERSION buildnumber)
    • nano=$(get_key_value /etc.defaults/VERSION nano)
    • cp -p /usr/lib/libsynonvme.so.1 /usr/lib/libsynonvme.so.1.${build}-${nano}.bak
  3. cd to the folder where you unzipped the downloaded libsynonvme.so.1
  4. mv -f libsynonvme.so.1 /usr/lib/libsynonvme.so.1 && chmod a+r /usr/lib/libsynonvme.so.1
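Steps 2 to 4 above can be combined into one function; a sketch only (get_key_value is DSM's own helper; run as root from the folder containing the unzipped replacement file):

```shell
# Back up the current library with a build/update suffix, then install the
# replacement libsynonvme.so.1 from the current directory.
replace_libsynonvme() {
    build=$(get_key_value /etc.defaults/VERSION buildnumber)
    nano=$(get_key_value /etc.defaults/VERSION nano)
    bak="/usr/lib/libsynonvme.so.1.${build}-${nano}.bak"
    cp -p /usr/lib/libsynonvme.so.1 "$bak" &&
        echo "backed up to $bak" &&
        mv -f libsynonvme.so.1 /usr/lib/libsynonvme.so.1 &&
        chmod a+r /usr/lib/libsynonvme.so.1
}
```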

@RozzNL commented on GitHub (Oct 3, 2023):

> I just installed a 3rd NVMe drive in one of the internal M.2 slots to see if I got 4!=2 in logs but I didn't.
>
> @RozzNL Can you do the following:
>
>   1. Edit line 1334 in syno_hdd_db.sh to change this:
>     • enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA"
>     • to this:
>     • #enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA"
>   2. Delete the DS1821+=yes line under the [E10M20-T1_sup_sata] section in /usr/syno/etc.defaults/adapter_cards.conf
>
> Also, have you ever run the syno_enable_m2_volume script? In DSM 7.2 update 3 or an earlier DSM version? I have NOT run it since updating to DSM 7.2 update 3.

Good morning all,
Performed the comment-out, removed the DS1821+=yes line, rebooted: no change.

I have indeed run the enable_m2_volume script before, so I restored that by running the script again and rebooted, but I could not get back into the GUI and had to reboot twice more. After a successful reboot, still no change.

Checked that the comment-out and the removed line were still in place (just to be sure the m2_volume script had not interfered). I had also forgotten to run the hdd_db script after editing it, duh... so I reran everything again to check: still no change.

EDIT:
Checking some of the commands you sent previously.
synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1
Not M.2 adapter card
Can't get the location of /dev/nvme3n1
synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme2n1
Not M.2 adapter card
Can't get the location of /dev/nvme2n1
synonvme --m2-card-model-get /dev/nvme0n1; synonvme --get-location /dev/nvme0n1
E10M20-T1
Can't get the location of /dev/nvme0n1
synonvme --m2-card-model-get /dev/nvme1n1; synonvme --get-location /dev/nvme1n1
E10M20-T1
Can't get the location of /dev/nvme1n1

@007revad commented on GitHub (Oct 4, 2023):

I'm curious if the issues @RozzNL is having are the same for everyone.

@zcpnate what does synodisk --enum -t cache return for you?

Are you willing to try 7.2 update 3 again, but this time:

  1. Disable any scheduled scripts first (and leave them disabled).
  2. Update to 7.2 update 3.
  3. Check if synodisk --enum -t cache returns something.
  4. Download the following test script and model.dtb file and put them both in the same directory:
    • https://github.com/007revad/Synology_HDD_db/blob/test/hdd_db_test.sh
    • https://github.com/007revad/Synology_HDD_db/blob/test/dtb/DS1821%2B_model.dtb
  5. Run the script and check storage manager.
  6. Reboot and check storage manager.

@zcpnate commented on GitHub (Oct 4, 2023):

Can get you this info tomorrow. I'd be willing to upgrade to u3 for testing, as I'm pretty sure I can reliably downgrade to u1 in the event of no success. Also totally willing to jump on a Zoom and we can debug in real time.

@007revad commented on GitHub (Oct 7, 2023):

@zcpnate
Did you get a chance to try 7.2 update 3 with the model.dtb file from https://github.com/007revad/Synology_HDD_db/issues/148#issuecomment-1741733516

and line 1335 in syno_hdd_db.sh changed from this:
check_modeldtb "$c"

to this:
#check_modeldtb "$c"

Then reboot.

@RozzNL
There seems to be something really wrong with your DSM installation. Can you reinstall DSM 7.2 update 3 following the steps here: https://github.com/007revad/Synology_DSM_reinstall

Note: skip steps 6 and 9, because you want DSM 7.2 update 1 to update itself to update 3.

Then do the same steps I outlined above for zcpnate.

@RozzNL commented on GitHub (Oct 7, 2023):

> @zcpnate Did you get a chance to try 7.2 update 3 with the model.dtb file from #148 (comment)
>
> and line 1335 in syno_hdd_db.sh changed from this: check_modeldtb "$c"
>
> to this: #check_modeldtb "$c"
>
> Then reboot.
>
> @RozzNL There seems to be something really wrong with your DSM installation. Can you reinstall DSM 7.2 update 3 following the steps here: https://github.com/007revad/Synology_DSM_reinstall
>
> Note: skip steps 6 and 9, because you want DSM 7.2 update 1 to update itself to update 3.
>
> Then do the same steps I outlined above for zcpnate.

I downgraded to the full release DSM_DS1821+_64570.pat, rebooted, and was auto-upgraded to the latest release of DSM 7.2-64570 U3 after the reboot.
As expected, I saw the 2 internal NVMEs but not the E10M20-T1 card (so no 2x NVMEs and no 10GbE).
Ran syno_hdd_db.sh with the 2 lines commented out from your earlier request (line 1334: #enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA" and line 1335: #check_modeldtb "$c").

After reboot, I saw both internal NVMEs in Storage Manager, could online assemble them, and got the 10GbE back. Still no NVMEs on the E10M20-T1 card, but I think this is expected due to not running your script syno_create_m2_volume.sh, right?

So awaiting your further orders :-)

I did run the following commands for you:
synodisk --enum -t cache
************ Disk Info ***************

Disk id: 1
Disk path: /dev/nvme2n1
Disk model: Samsung SSD 970 EVO 1TB
Total capacity: 931.51 GB
Tempeture: 41 C
************ Disk Info ***************
Disk id: 2
Disk path: /dev/nvme3n1
Disk model: Samsung SSD 970 EVO 1TB
Total capacity: 931.51 GB
Tempeture: 41 C

grep "e10m20-t1" /run/model.dtb
returns nothing
grep "power_limit" /run/model.dtb
Binary file /run/model.dtb matches
grep "100,100,100,100" /run/model.dtb
returns nothing
get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_nvme DS1821+
yes
get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_sata DS1821+
returns nothing, but that line is commented out in syno_hdd_db.sh

udevadm info --query path --name nvme0
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0
udevadm info --query path --name nvme1
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1
udevadm info --query path --name nvme2
/devices/pci0000:00/0000:00:01.3/0000:0f:00.0/nvme/nvme2
udevadm info --query path --name nvme3
/devices/pci0000:00/0000:00:01.4/0000:10:00.0/nvme/nvme3
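For what it's worth, the four udev paths above already separate the two groups: the card's controllers sit behind PCIe bridge 0000:00:01.2 (the E10M20-T1's switch), while the onboard slots hang directly off 0000:00:01.3 and 0000:00:01.4. A minimal sketch of that check, assuming the bridge address taken from this machine's output (it is not a general rule):

```shell
# Classify an NVMe controller by its udev device path. On this DS1821+,
# paths through bridge 0000:00:01.2 belong to the E10M20-T1; anything
# else is an onboard M.2 slot. The bridge address is an assumption taken
# from the udevadm output above.
classify_nvme() {  # usage: classify_nvme <udevadm device path>
    case "$1" in
        */0000:00:01.2/*) echo "E10M20-T1" ;;
        *)                echo "onboard"   ;;
    esac
}

classify_nvme "/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0"  # → E10M20-T1
classify_nvme "/devices/pci0000:00/0000:00:01.3/0000:0f:00.0/nvme/nvme2"                            # → onboard
```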

sudo grep synostgd-disk /var/log/messages | tail -10
2023-10-07T11:36:18+02:00 DS1821 synostgd-disk[15521]: nvme_slot_info_get.c:53 Failed to find slot info
2023-10-07T11:36:18+02:00 DS1821 synostgd-disk[15521]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme1n1
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_slot_info_get.c:53 Failed to find slot info
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme0n1
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_slot_info_get.c:53 Failed to find slot info
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme1n1
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_slot_info_get.c:53 Failed to find slot info
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme0n1
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_slot_info_get.c:53 Failed to find slot info
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme1n1

/sys/firmware/devicetree/base/power_limit && echo
-ash: /sys/firmware/devicetree/base/power_limit: Permission denied

smartctl --info /dev/nvme0
smartctl 6.5 (build date Sep 26 2022) [x86_64-linux-4.4.302+] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

Read NVMe Identify Controller failed: NVMe Status 0x200b

synodiskport -cache
nvme2n1 nvme3n1

synonvme --m2-card-model-get /dev/nvme0n1; synonvme --get-location /dev/nvme0n1
E10M20-T1
Can't get the location of /dev/nvme0n1
synonvme --m2-card-model-get /dev/nvme1n1; synonvme --get-location /dev/nvme1n1
E10M20-T1
Can't get the location of /dev/nvme1n1
synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme2n1
Not M.2 adapter card
Device: /dev/nvme2n1, PCI Slot: 0, Card Slot: 1
synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1
Not M.2 adapter card
Device: /dev/nvme3n1, PCI Slot: 0, Card Slot: 2

nvme_list=$(synodiskport -cache) && echo ${nvme_list[@]}
nvme2n1 nvme3n1
output=$(synonvme --get-power-limit $nvme_list) && echo ${output[@]}
nvme2n1:9.075 nvme3n1:9.075
synonvme --get-power-limit nvme0n1
returns nothing
synonvme --get-power-limit nvme1n1
returns nothing
synonvme --get-power-limit nvme2n1
nvme2n1:14.85
synonvme --get-power-limit nvme3n1
nvme3n1:14.85

EDIT:
After uncommenting line 1334 in syno_hdd_db.sh back to `enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA"`:
get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_sata DS1821+
yes

udevadm info --query path --name nvme0
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0
udevadm info --query path --name nvme1
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1

Maybe try your modified modeldtb as a next step?


@007revad commented on GitHub (Oct 9, 2023):

> `/sys/firmware/devicetree/base/power_limit && echo`
> -ash: /sys/firmware/devicetree/base/power_limit: Permission denied

Sorry, that command should have been:
`cat /sys/firmware/devicetree/base/power_limit && echo`

> `udevadm info --query path --name nvme0`
> /devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0
> `udevadm info --query path --name nvme1`
> /devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1

This appears to suggest I have the ports back to front. What does this command return:
`syno_slot_mapping`


@RozzNL commented on GitHub (Oct 9, 2023):

> `/sys/firmware/devicetree/base/power_limit && echo`
> -ash: /sys/firmware/devicetree/base/power_limit: Permission denied
>
> Sorry, that command should have been: `cat /sys/firmware/devicetree/base/power_limit && echo`

14.85,9.075

> `udevadm info --query path --name nvme0`
> /devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0
> `udevadm info --query path --name nvme1`
> /devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1
>
> This appears to suggest I have the ports back to front. What does this command return: `syno_slot_mapping`

System Disk
Internal Disk
01: /dev/sata1
02: /dev/sata2
03: /dev/sata3
04: /dev/sata4
05: /dev/sata5
06: /dev/sata6
07: /dev/sata7
08: /dev/sata8

Esata port count: 2
Esata port 1
01:

Esata port 2
01:

USB Device
01:
02:
03:
04:

Internal SSD Cache:
01: /dev/nvme2n1
02: /dev/nvme3n1

PCIe Slot 1: E10M20-T1


@007revad commented on GitHub (Oct 9, 2023):

Try these:

grep "e10m20-t1" /etc.defaults/model.dtb

grep "power_limit" /etc.defaults/model.dtb

grep "100,100,100,100" /etc.defaults/model.dtb

If all 3 of the above commands return "Binary file /etc.defaults/model.dtb matches" then run these commands:

chmod 644 /etc.defaults/model.dtb

cp -pu /etc.defaults/model.dtb /etc/model.dtb

cp -pu /etc.defaults/model.dtb /run/model.dtb
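The three greps plus the conditional copy above can be folded into one guard. A sketch under the assumptions of this thread: `dtb_ready` is a hypothetical helper name, and the three marker strings are the ones being checked here, not anything from official DSM tooling.

```shell
# Only propagate a model.dtb that contains all three expected markers.
# dtb_ready is a hypothetical helper; the patterns come from the checks
# discussed in this thread.
dtb_ready() {  # usage: dtb_ready <path/to/model.dtb>
    for pat in "e10m20-t1" "power_limit" "100,100,100,100"; do
        grep -q "$pat" "$1" || return 1
    done
}

if dtb_ready /etc.defaults/model.dtb; then
    chmod 644 /etc.defaults/model.dtb
    cp -pu /etc.defaults/model.dtb /etc/model.dtb
    cp -pu /etc.defaults/model.dtb /run/model.dtb
fi
```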


@RozzNL commented on GitHub (Oct 9, 2023):

> Try these:
> `grep "e10m20-t1" /etc.defaults/model.dtb`

Returns nothing

> `grep "power_limit" /etc.defaults/model.dtb`

Binary file /etc.defaults/model.dtb matches

> `grep "100,100,100,100" /etc.defaults/model.dtb`

Returns nothing

Did not run the commands below.

> If all 3 of the above commands return "Binary file /etc.defaults/model.dtb matches" then run these commands:
>
> `chmod 644 /etc.defaults/model.dtb`
>
> `cp -pu /etc.defaults/model.dtb /etc/model.dtb`
>
> `cp -pu /etc.defaults/model.dtb /run/model.dtb`


@007revad commented on GitHub (Oct 9, 2023):

Download this zip file:
[ds1821+_model_with_e10m20-t1.zip](https://github.com/007revad/Synology_HDD_db/files/12774402/ds1821%2B_model_with_e10m20-t1.zip)

Then

  1. Unzip it to a directory on the DS1821+
  2. cd to that directory.
  3. chmod 644 model.dtb
  4. cp -p /etc.defaults/model.dtb /etc.defaults/model.dtb.bak
  5. cp -pu model.dtb /etc.defaults/model.dtb
  6. cp -pu model.dtb /etc/model.dtb
  7. cp -pu model.dtb /run/model.dtb
  8. Reboot
  9. Check that Storage Manager now shows the E10M20-T1 and its NVMe drives.
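Steps 3–7 above can be sketched as a small function. `install_dtb` is a hypothetical helper name, the target directories in the commented example call are the DSM paths from the list, and note it uses plain `cp -p` rather than `cp -pu` so the copy always happens:

```shell
# Back up and replace model.dtb in each target directory, preserving
# permissions. install_dtb is a hypothetical helper sketching steps 3-7;
# it copies unconditionally (cp -p) instead of only-if-newer (cp -pu).
install_dtb() {  # usage: install_dtb <new model.dtb> <target dir>...
    src="$1"; shift
    chmod 644 "$src"
    for dir in "$@"; do
        tgt="$dir/model.dtb"
        if [ -f "$tgt" ]; then
            cp -p "$tgt" "$tgt.bak"   # step 4: keep a backup of the original
        fi
        cp -p "$src" "$tgt"
    done
}

# install_dtb ./model.dtb /etc.defaults /etc /run   # then reboot (step 8)
```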

@RozzNL commented on GitHub (Oct 9, 2023):

Will do the above when I get back from work.

EDIT:
Unzipped and copied model.dtb 3x as per the instructions.
I noticed that /run/model.dtb gets rewritten at boot, am I correct? The timestamp on that file had changed, while the other 2 still had their original timestamps.
Again no NVMEs in Storage Manager (all of them are gone now).

But:
grep "e10m20-t1" /etc.defaults/model.dtb
Binary file /etc.defaults/model.dtb matches
grep "power_limit" /etc.defaults/model.dtb
Binary file /etc.defaults/model.dtb matches
grep "100,100,100,100" /etc.defaults/model.dtb
Binary file /etc.defaults/model.dtb matches

syno_slot_mapping
System Disk
Internal Disk
01: /dev/sata1
02: /dev/sata2
03: /dev/sata3
04: /dev/sata4
05: /dev/sata5
06: /dev/sata6
07: /dev/sata7
08: /dev/sata8

Esata port count: 2
Esata port 1
01:

Esata port 2
01:

USB Device
01:
02:
03:
04:

Internal SSD Cache:
01:
02:

PCIe Slot 1: E10M20-T1
01:
02:

synodiskport -cache
Returns blank


@007revad commented on GitHub (Oct 9, 2023):

Yes, DSM does overwrite /run/model.dtb during boot.

Can you restore the backed-up model.dtb file until I create a new model.dtb for you to try.

  1. cp -p /etc.defaults/model.dtb.bak /etc.defaults/model.dtb
  2. cp -pu /etc.defaults/model.dtb.bak /etc/model.dtb
  3. Reboot

@007revad commented on GitHub (Oct 10, 2023):

I've been thinking this would be a lot easier if I had an E10M20-T1, not just for DSM 7.2 update 3 and 7.2.1 but for when new versions of the Storage Manager package are released. Where I live an E10M20-T1 costs a third of the price of a DS1821+!?!?

So I have a question for those who have an E10M20-T1. Do the included M.2 heatsinks come with sticky single-use thermal pads? It looks like it would be hard to temporarily remove an M.2 drive for testing.


@RozzNL commented on GitHub (Oct 11, 2023):

Dave, I have swapped my internal NVMEs multiple times with the ones on the E10M20-T1; totally no problems removing the heatsink.
And because I like to tinker and tweak as much as possible, I also installed 2 coolers on the internal NVMEs (Gelid Solutions Icecap M.2 SSD Cooler).


@007revad commented on GitHub (Oct 11, 2023):

> totally no problems removing the heatsink

Thanks. I just bought a E10M20-T1 online and paid for express shipping. The online store's distribution center is only a few suburbs away from me so hopefully it will arrive quickly (they don't allow pick-up).


@007revad commented on GitHub (Oct 11, 2023):

I just unpacked DSM 7.2 update 3 for all 113 Synology models that can use DSM 7.2:

  1. 36 of them use a devicetree (model.dtb).
  2. 14 of those support E10M20-T1 and M2D20.
  3. All 14 use the same pcie_postfix = "00.0,08.0,00.0" and pcie_postfix = "00.0,04.0,00.0" (for both E10M20-T1 and M2D20).

This confirms that the pcie_postfix values that I used in the model.dtb file were correct.

I also noticed that those 14 models that support the E10M20-T1 do not have SATA M.2 support enabled in model.dtb for the E10M20-T1, even though they all have `E10M20-T1_sup_sata` enabled in adapter_cards.conf.

This confirms that adding entries for SATA M.2 support in model.dtb won't make any difference.

I've compiled 2 new model.dtb files for you to try:

  1. model.dtb with `power_limit = "100,100";` [ds1821+_100x2.zip](https://github.com/007revad/Synology_HDD_db/files/12868090/ds1821%2B_100x2.zip)
  2. model.dtb with `power_limit = "14.85,9.075";` [ds1821+_14.85.zip](https://github.com/007revad/Synology_HDD_db/files/12868092/ds1821%2B_14.85.zip)

Unzip it to a directory on the DS1821+ then

  1. cd to that directory.
  2. chmod 644 model.dtb
  3. cp -pu model.dtb /etc.defaults/model.dtb
  4. cp -pu model.dtb /etc/model.dtb
  5. Check storage manager.
  6. Reboot.
  7. Check storage manager again.

@RozzNL commented on GitHub (Oct 11, 2023):

> I've compiled 2 new model.dtb files for you to try:
>
> 1. model.dtb with `power_limit = "100,100";` ds1821+_100x2.zip

Still only the internal NVMEs in Storage Manager, before and after the reboot.

> 2. model.dtb with `power_limit = "14.85,9.075";` ds1821+_14.85.zip

Still only the internal NVMEs in Storage Manager, before and after the reboot.

Both files show same info below:
synodiskport -cache
nvme2n1 nvme3n1
grep "e10m20-t1" /etc.defaults/model.dtb
Binary file /etc.defaults/model.dtb matches
grep "power_limit" /etc.defaults/model.dtb
Binary file /etc.defaults/model.dtb matches
grep "100,100,100,100" /etc.defaults/model.dtb
Returns nothing

syno_slot_mapping
Internal SSD Cache:
01: /dev/nvme2n1
02: /dev/nvme3n1

PCIe Slot 1: E10M20-T1
01:
02:


@007revad commented on GitHub (Oct 11, 2023):

> `grep "100,100,100,100" /etc.defaults/model.dtb`
> Returns nothing

Depending on which model.dtb file you were using you'd need to run either:
`grep "100,100" /etc.defaults/model.dtb`
or
`grep "14.85,9.075" /etc.defaults/model.dtb`

What do these 3 commands return:
ls -l /etc.defaults/model.dtb
ls -l /etc/model.dtb
ls -l /run/model.dtb
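Since the real question is whether the three copies are still identical after a boot, a byte-for-byte comparison is more conclusive than `ls -l` timestamps. A sketch (`same_file` is a hypothetical helper; the three paths in the example are the ones listed above):

```shell
# Succeed only if every listed file is byte-identical to the first one.
# same_file is a hypothetical helper; cmp -s compares files silently.
same_file() {  # usage: same_file <file> <file>...
    first="$1"; shift
    for f in "$@"; do
        cmp -s "$first" "$f" || return 1
    done
}

if same_file /etc.defaults/model.dtb /etc/model.dtb /run/model.dtb; then
    echo "all three model.dtb copies match"
else
    echo "copies differ (or a file is missing)"
fi
```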


@RozzNL commented on GitHub (Oct 12, 2023):

> > `grep "100,100,100,100" /etc.defaults/model.dtb`
> > Returns nothing
>
> Depending on which model.dtb file you were using you'd need to run either: `grep "100,100" /etc.defaults/model.dtb` or `grep "14.85,9.075" /etc.defaults/model.dtb`

Gotcha.

> What do these 3 commands return:
> `ls -l /etc.defaults/model.dtb`
> `ls -l /etc/model.dtb`
> `ls -l /run/model.dtb`

ls -l /etc.defaults/model.dtb
-rw-r--r-- 1 Rozz users 3848 Oct 11 12:09 /etc.defaults/model.dtb
ls -l /etc/model.dtb
-rw-r--r-- 1 Rozz users 3848 Oct 11 12:09 /etc/model.dtb
ls -l /run/model.dtb
-rw-r--r-- 1 root root 3848 Oct 11 17:13 /run/model.dtb


@007revad commented on GitHub (Oct 12, 2023):

Can you download this zip file:
[ds1821+_model_with_e10m20-t1.zip](https://github.com/007revad/Synology_HDD_db/files/12774402/ds1821%2B_model_with_e10m20-t1.zip)

Unzip it to a directory on the DS1821+ then

  1. cd to that directory.
  2. chmod 644 model.dtb
  3. sudo chown root:root model.dtb
  4. cp -pu model.dtb /etc.defaults/model.dtb
  5. cp -pu model.dtb /etc/model.dtb
  6. Check storage manager.
  7. Reboot.
  8. Check storage manager again.

@RozzNL commented on GitHub (Oct 12, 2023):

Will do when I get back home from work.

EDIT:
Internal NVMEs are gone again in Storage Manager after the reboot.


@007revad commented on GitHub (Oct 13, 2023):

My E10M20-T1 arrived 30 minutes ago and I now have both 10GbE and NVMe drives working in DSM 7.2-64570 Update 3 :o)

The solution was simple once I realized LAN 5 was missing as well as the NVMe drives.

  1. sudo set_section_key_value /usr/syno/etc/adapter_cards.conf E10M20-T1_sup_nvme DS1821+ yes
  2. sudo set_section_key_value /usr/syno/etc/adapter_cards.conf E10M20-T1_sup_nic DS1821+ yes
  3. Reboot.

If I'd thought of checking that "/usr/syno/etc/adapter_cards.conf" matched "/usr/syno/etc.defaults/adapter_cards.conf" and they both contained DS1821+ in the correct places, I could have saved $300.
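That check generalizes: before touching model.dtb at all, confirm the runtime adapter_cards.conf has not drifted from the defaults copy. A sketch of that pre-flight check (`confs_match` is a hypothetical helper; the two paths are the ones named above):

```shell
# Report whether the runtime adapter_cards.conf still matches the
# defaults copy. confs_match is a hypothetical helper; a mismatch
# between these two files was the root cause in this thread.
confs_match() {  # usage: confs_match <file a> <file b>
    cmp -s "$1" "$2"
}

if confs_match /usr/syno/etc/adapter_cards.conf \
               /usr/syno/etc.defaults/adapter_cards.conf; then
    echo "adapter_cards.conf matches defaults"
else
    echo "adapter_cards.conf differs from defaults (or is missing) -- inspect with diff"
fi
```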

![Image_E10M20-T1_working](https://github.com/007revad/Synology_HDD_db/assets/39733752/e7cb1a3c-a563-4e39-bc28-8c376d93c77e)


@007revad commented on GitHub (Oct 13, 2023):

FYI Do NOT update to 7.2.1 yet.

I just updated to 7.2.1 and:

  1. My single internal NVMe drive was showing as critical because DSM thought there should be 2 NVMe drives.
    • This single NVMe drive was previously migrated from a DS720+ and "online assembled".
    • This time the online assemble was grayed out and the only option was Remove (which I did as there was no data on it).
  2. My E10M20-T1 and its NVMe drives are missing.
  3. 7.2.1 updated itself to 7.2.1 Update 1 (so I don't know if this is a 7.2.1 or 7.2.1 Update 1 issue).

@RozzNL commented on GitHub (Oct 13, 2023):

Which model.dtb file did you use? (Original? 100? 14.85?)
Because I still do not see the drives in Storage Manager after applying your adapter_cards.conf changes in /etc and /etc.defaults.
My LAN 5 never went away after the first time I used your syno_hdd_db script.

So I'm glad you got it working on your side!!! Now my turn ;)
Still, we need to keep this working after an update, so we're not out of the woods yet.

EDIT:
I restored the model.dtb.bak from a couple of steps back and I now see the internal disks again, but still no E10M20-T1; I do however still have LAN 5.
So I'm guessing I still have a different setup than yours?!?


@zcpnate commented on GitHub (Oct 13, 2023):

> If I'd thought of checking that "/usr/syno/etc/adapter_cards.conf" matched "/usr/syno/etc.defaults/adapter_cards.conf" and they both contained DS1821+ in the correct places I could have saved $300

I haven't attempted this yet but I did just shoot over a paypal donation to help with the cost of the card. Thanks for all your hard work!


@RozzNL commented on GitHub (Oct 13, 2023):

I also just did a donation...totally forgot about it...
Dave, thank you very much for the work you have already done.


@007revad commented on GitHub (Oct 13, 2023):

I used a model.dtb with "100,100,100,100" like the one in the first zip file [ds1821+_model_with_e10m20-t1.zip](https://github.com/007revad/Synology_HDD_db/files/12774402/ds1821%2B_model_with_e10m20-t1.zip)

But the one I actually used also contains E10M20-T1, M2D20 and M2D18 in this file [model.zip](https://github.com/007revad/Synology_HDD_db/files/12897698/model.zip)

  1. `cd` to the directory you extracted the downloaded model.dtb to.
  2. `chmod 644 model.dtb`
  3. `sudo chown root:root model.dtb`
  4. `cp -pu model.dtb /etc.defaults/model.dtb`
  5. `cp -pu model.dtb /etc/model.dtb`
  6. Reboot.
  7. Check Storage Manager again.
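The numbered steps above can be consolidated into one script. This is a sketch: the paths are parameterized only so the copy logic can be dry-run off the NAS (it stages files into a scratch directory by default); on the DS1821+ you would run it as root with `SRC=model.dtb DEST=/etc/model.dtb DEST_DEFAULTS=/etc.defaults/model.dtb`.

```shell
set -eu
# Dry-run-friendly defaults; on the NAS override SRC/DEST/DEST_DEFAULTS
# with the real locations (/etc/model.dtb and /etc.defaults/model.dtb).
WORK=$(mktemp -d)
SRC="${SRC:-$WORK/model.dtb}"
DEST="${DEST:-$WORK/etc/model.dtb}"
DEST_DEFAULTS="${DEST_DEFAULTS:-$WORK/etc.defaults/model.dtb}"

[ -f "$SRC" ] || printf 'dtb-stand-in' > "$SRC"   # placeholder for the extracted file
mkdir -p "$(dirname "$DEST")" "$(dirname "$DEST_DEFAULTS")"

chmod 644 "$SRC"
# chown root:root "$SRC"         # required on the NAS; skipped in the dry run
cp -pu "$SRC" "$DEST_DEFAULTS"   # -p preserves mode/times, -u copies only if newer
cp -pu "$SRC" "$DEST"
echo "staged: $DEST and $DEST_DEFAULTS (on the NAS, reboot after the real copy)"
```

After the real copy, reboot and re-check Storage Manager, as in steps 6 and 7.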

@RozzNL commented on GitHub (Oct 14, 2023):

OK Dave, I really don't know where it is going wrong for me.

  • Copied model.dtb (the 100,100,100,100 version from the above comment) into /etc and /etc.defaults, both owned by root.
  • Checked adapter_cards.conf in /usr/syno/etc and /usr/syno/etc.defaults; both have E10M20-T1_sup_nvme, E10M20-T1_sup_nic and even E10M20-T1_sup_sata all set to 1821+=yes, both owned by root.
  • Did not run the syno_hdd_db.sh script; that was only performed after the reinstallation of Update 3 a few days ago.
  • No other scripts of yours.
  • LAN 5 never went away after the first run of syno_hdd_db.sh.
  • Only the internal NVMe disks are shown and I can perform an online assemble.
  • Still no PCIe card with disks.

So where is it going wrong?

EDIT:

> But the one I actually used also contains E10M20-T1, M2D20 and M2D18 in this file [model.zip](https://github.com/007revad/Synology_HDD_db/files/12897698/model.zip)

For me this removes my internal NVMe drives.


@007revad commented on GitHub (Oct 14, 2023):

> > But the one I actually used also contains E10M20-T1, M2D20 and M2D18 in this file [model.zip](https://github.com/007revad/Synology_HDD_db/files/12897698/model.zip)
>
> For me this removes my internal nvme drives.

Do the fans run at full speed?

I'm not sure why it's different for you. I'll downgrade DSM to 7.2 Update 3 and try it again and document the exact steps I do.

I did notice today that the values in /run/adapter_cards.conf did not match those in /usr/syno/etc.defaults/adapter_cards.conf

What does the following command return:
`cat /run/adapter_cards.conf`

I've spent the last few hours creating a test version of syno_hdd_db to do all the required steps, so we'll all be doing the exact same steps. But I'm momentarily stuck at trying to insert the power_limit into the model.dtb file.


@RozzNL commented on GitHub (Oct 14, 2023):

No, the fans run normally; I use cool mode btw.

> cat /run/adapter_cards.conf

M2D20_sup_nvme=no
E10M20-T1_sup_sata=yes
E10M20-T1_sup_nic=yes
M2D17_sup_sata=no
E10M20-T1_sup_nvme=yes
M2D18_sup_sata=no
M2D17_sup_nic=no
M2D18_sup_nic=no
M2D20_sup_sata=no
M2D17_sup_nvme=no
M2D18_sup_nvme=no
FX2422N_sup_nic=no
FX2422N_sup_nvme=no
FX2422N_sup_sata=no
M2D20_sup_nic=no

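Since three separate keys have to be flipped in two files, the single `set_section_key_value` commands Dave posted earlier can be wrapped in a loop. This is a sketch: `set_section_key_value` is Synology's own tool and only exists on DSM, so a stub printing what would happen is defined here for dry-running elsewhere; on the NAS the loop must run as root.

```shell
if ! command -v set_section_key_value >/dev/null 2>&1; then
    # Stub for dry-running off the NAS; the real Synology tool takes
    # FILE SECTION KEY VALUE, as in the commands earlier in the thread.
    set_section_key_value() { echo "would set [$2] $3=$4 in $1"; }
fi

for f in /usr/syno/etc/adapter_cards.conf /usr/syno/etc.defaults/adapter_cards.conf; do
    for key in E10M20-T1_sup_nvme E10M20-T1_sup_nic E10M20-T1_sup_sata; do
        set_section_key_value "$f" "$key" DS1821+ yes
    done
done
# Reboot afterwards; DSM regenerates /run/adapter_cards.conf at boot.
```

This only covers the two persistent copies; as noted below, `/run/adapter_cards.conf` is rebuilt from them at boot rather than edited directly.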

@007revad commented on GitHub (Oct 16, 2023):

I haven't forgotten you guys.

I've done a lot of testing, while documenting every change, and been running around in circles. At one point I replaced the E10M20-T1 with the M2D18 and spent half a day trying to get it working again, then I noticed the M2D18 was not fully plugged into the PCIe slot!?!?

I also downgraded DSM from 7.2.1 Update 1 to 7.2 Update 3, which caused its own issues, so I was not sure whether the problems were caused by parts of DSM being broken (Synology account, File Station, Schedules, packages etc). I solved that by downgrading to DSM 7.2 Update 1.

My plan is to get both the M2D18 and E10M20-T1 working in DSM 7.2 Update 1,

  • Then update to DSM 7.2 Update 3 and get them working.
  • Then update to DSM 7.2.1 and get them working.
  • Then update to DSM 7.2.1 Update 1 and get them working.

I want to get both cards working as Synology intended (for a cache) without running any of my scripts.

Because I got tired of copying and pasting dozens of commands every time I made a change and rebooted I've written a script that runs all the commands and outputs the results in a readable format.

https://github.com/007revad/Synology_HDD_db/blob/test/m2_card_check.sh

FYI this is from immediately after reinstalling DSM and not running any scripts or editing anything:

root@DISKSTATION:~# /volume1/scripts/m2_card_check.sh

 Checking permissions and owner on model.dtb files
-rw-r--r-- 1 root root 3583 Jul 20 02:17 /etc.defaults/model.dtb
-rw-r--r-- 1 root root 4460 Oct 16 12:01 /etc/model.dtb
-rw-r--r-- 1 root root 3583 Oct 16 20:21 /run/model.dtb

 Checking power_limit="100,100,100,100" is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /run/model.dtb

 Checking E10M20-T1 is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /run/model.dtb

 Checking M2D20 is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /run/model.dtb

 Checking M2D18 is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /run/model.dtb

 Checking /run/adapter_cards.conf
E10M20-T1_sup_nic NOT set to yes
E10M20-T1_sup_nvme NOT set to yes
E10M20-T1_sup_sata NOT set to yes
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes

 Checking /usr/syno/etc.defaults/adapter_cards.conf
E10M20-T1_sup_nic NOT set to yes
E10M20-T1_sup_nvme NOT set to yes
E10M20-T1_sup_sata NOT set to yes
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes

 Checking /usr/syno/etc/adapter_cards.conf
All OK

 Checking synodisk --enum -t cache
************ Disk Info ***************
>> Disk id: 1
>> Disk path: /dev/nvme0n1
>> Disk model: WD_BLACK SN770 500GB
>> Total capacity: 465.76 GB
>> Tempeture: 30 C

 Checking syno_slot_mapping
System Disk
Internal Disk
01:
02:
03: /dev/sata1
04: /dev/sata2
05:
06: /dev/sata3
07: /dev/sata4
08:

Esata port count: 2
Esata port 1
01:

Esata port 2
01:

USB Device
01:
02:
03:
04:

Internal SSD Cache:
01: /dev/nvme0n1
02:


 Checking udevadm nvme paths
nvme0: /devices/pci0000:00/0000:00:01.3/0000:0d:00.0/nvme/nvme0
nvme1: device node not found
nvme2: device node not found
nvme3: device node not found

 Checking devicetree Power_limit
14.85,9.075

 Checking if nvme drives in PCIe card with synonvme
nvme0: Not M.2 adapter card
nvme1: Not M.2 adapter card
nvme2: Not M.2 adapter card
nvme3: Not M.2 adapter card

 Checking if nvme drives in PCIe card with synodisk
nvme0: Not M.2 adapter card
nvme1: Not M.2 adapter card
nvme2: Not M.2 adapter card
nvme3: Not M.2 adapter card

 Checking PCIe slot path(s)
[pci]
pci1="0000:00:01.2"

 Checking nvme drives in /run/synostorage/disks
nvme0n1

 Checking nvme block devices in /sys/block
nvme0n1
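The "is X in model.dtb" checks above can be reproduced with a plain string search: a .dtb is a compiled (flattened) device tree, but node and property name strings survive verbatim in the binary. The sketch below is an assumption-laden stand-in, not the thread's actual m2_card_check.sh: `DTB` defaults to a generated dummy blob so it can be tried off the NAS, whereas on the DS1821+ you would set it to `/etc/model.dtb`, `/etc.defaults/model.dtb` or `/run/model.dtb`.

```shell
DTB="${DTB:-}"
if [ -z "$DTB" ]; then
    # No real file given: build a stand-in blob so the check can be dry-run.
    DTB=$(mktemp)
    printf 'DS1821+\0E10M20-T1\0power_limit\0100,100,100,100\0' > "$DTB"
fi

for token in E10M20-T1 M2D20 M2D18 '100,100,100,100'; do
    if grep -aq "$token" "$DTB"; then   # -a: search the binary as if it were text
        echo "$token: present in $DTB"
    else
        echo "$token: missing in $DTB"
    fi
done
```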

@RozzNL commented on GitHub (Oct 16, 2023):

Sounds like a plan Dave, just do your thing.
Personally not in any hurry.
My idea was to use the internals as cache and the PCIe card as storage.
I am away from home for a few days but I should be able to test some settings remotely if needed.


@007revad commented on GitHub (Oct 18, 2023):

Synology made changes that added a power limit for NVMe drives in DSM 7.2 Update 2. There were no NVMe-related changes in Update 3, so what works in Update 2 also works in Update 3.

Getting the M2D18 and E10M20-T1 working in DSM 7.2 Update 1 was easy.

But getting them working in DSM 7.2 Update 2 was a lot harder. I'm actually wondering if my DS1821+ was running Update 1 when I previously got the E10M20-T1 working. I wish I'd taken a screenshot of the DSM version together with storage manager.

The good news is I have my M2D18 and E10M20-T1 both working in DSM 7.2 Update 3. Note: I have not run any of my scripts yet because I didn't want to introduce any extra variables to the testing.

I also have not tested DSM 7.2.1 yet because rolling back to 7.2 update 1 was difficult.

M2D18 working in DSM 7.2 Update 3
![m2d18_20231018-114916](https://github.com/007revad/Synology_HDD_db/assets/39733752/5afe7573-82fc-4524-88e2-aaa539bd4245)

E10M20-T1 working in DSM 7.2 Update 3
![e10m20-t1_and_nic_20231018-122342](https://github.com/007revad/Synology_HDD_db/assets/39733752/69dd7d42-1e46-4c5c-a788-66e244aa1c76)

Can you do the following to test it:

  1. Download and run [m2_card_fix.sh](https://github.com/007revad/Synology_HDD_db/blob/test/m2_card_fix.sh)
  2. Reboot.
  3. If it didn't work, reboot again. See note.

Note: When I first ran m2_card_fix.sh and rebooted I found /run/adapter_cards.conf was missing. I created the missing file by hand but when I rebooted DSM replaced it... so rebooting a 2nd time should restore /run/adapter_cards.conf if it is missing.

I only noticed /run/adapter_cards.conf was missing when I ran [m2_card_check.sh](https://github.com/007revad/Synology_HDD_db/blob/test/m2_card_check.sh) and saw
`ls: cannot access '/run/adapter_cards.conf': No such file or directory`

If you have any issues please run m2_card_check.sh and reply with the output.


@007revad commented on GitHub (Oct 18, 2023):

FYI I noticed that my NVMe drives sometimes changed their number.

The nvme drive in the internal slot 1 was nvme0 when the M.2 card was not being detected.

Then when the M.2 card was detected, the nvme drive in the internal slot 1 had changed to nvme1. And the nvme drive in slot 1 of the M.2 card was now nvme0.

When I had 3 NVMe drives installed the drive in the internal slot was nvme2. After removing one of the drives from the M.2 card the drive in the internal slot became nvme1.

So if you have 4 of the same model NVMe drives and run syno_m2_volume.sh to create a volume on the drives in the M.2 card it will be difficult to tell which drives are installed where. I will update syno_m2_volume.sh to show if the drive is in an M.2 card.

In the meantime you can see where each NVMe drive is located with:
`syno_slot_mapping | grep -A 7 'SSD'`

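Besides `syno_slot_mapping`, the renumbering can be untangled from sysfs alone: each `/sys/class/nvme/nvmeX` entry is a symlink into the full PCIe device path, so resolving it shows which controller sits where. A sketch (not from the thread) follows; `SYSFS` is parameterized only so the logic can be exercised on a mock tree, and on the NAS it is simply `/sys/class/nvme`. In the udevadm output earlier, paths under `0000:00:01.2` (the PCIe slot) belong to the E10M20-T1, while `0000:00:01.3` and `0000:00:01.4` are the internal M.2 slots.

```shell
SYSFS="${SYSFS:-/sys/class/nvme}"

# Print "nvmeX: /sys/devices/pci..." for every NVMe controller found.
list_nvme_paths() {
    for n in "$1"/nvme*; do
        [ -e "$n" ] || continue
        # Each entry is a symlink into the PCIe device tree; resolve it
        printf '%s: %s\n' "$(basename "$n")" "$(readlink -f "$n")"
    done
}

list_nvme_paths "$SYSFS"
```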

@RozzNL commented on GitHub (Oct 18, 2023):

> Synology made changes that added a power limit for NVMe drives in DSM 7.2 Update 2. There were no NVMe related changes in Update 3. So what works in update 2 also works Update 3.
>
> Getting the M2D18 and E10M20-T1 working in DSM 7.2 Update 1 was easy.
>
> But getting them working in DSM 7.2 Update 2 was a lot harder. I'm actually wondering if my DS1821+ was running Update 1 when I previously got the E10M20-T1 working. I wish I'd taken a screenshot of the DSM version together with storage manager.
>
> The good news is I have my M2D18 and E10M20-T1 both working in DSM 7.2 Update 3. Note: I have not run any of my scripts yet because I didn't want to introduce any extra variables to the testing.
>
> I also have not tested DSM 7.2.1 yet because rolling back to 7.2 update 1 was difficult.
>
> M2D18 working in DSM 7.2 Update 3 ![m2d18_20231018-114916]
>
> E10M20-T1 working in DSM 7.2 Update 3 ![e10m20-t1_and_nic_20231018-122342]
>
> Can you do the following to test it:
>
>   1. Download and run m2_card_fix.sh
>   2. Reboot.
>   3. If it didn't work, reboot again. See note.
>
> Note: When I first ran m2_card_fix.sh and rebooted I found /run/adapter_cards.conf was missing. I created the missing file by hand but when I rebooted DSM replaced it... so rebooting a 2nd time should restore /run/adapter_cards.conf if it is missing.
>
> I only noticed /run/adapter_cards.conf was missing when I ran m2_card_check.sh and saw `ls: cannot access '/run/adapter_cards.conf': No such file or directory`
>
> If you have any issues please run m2_card_check.sh and reply with the output.

WHOOOHOOO....

![Screen_DS1821+_Ro](https://github.com/007revad/Synology_HDD_db/assets/76004465/3350c8cc-b166-4ce1-82ec-0db633dd3d00)

Now we're getting somewhere, Dave!
This was after running your fix script and only 1x reboot.
Since I am not at home right now, I am not going to create storage pools just yet, but a question: do I need your other scripts to create a storage pool on the PCIe card? I want to run the internal NVMes as cache (not yet decided if I want to use read/write or only read), and the NVMes on the PCIe card will be a RAID 1 storage pool with 1x volume.

EDIT:
For your info,

./m2_card_check.sh
DSM 7.2-64570 Update 3
2023-10-18 21:17:56

Checking support_m2_pool setting
/etc.defaults/synoinfo.conf: yes
/etc/synoinfo.conf: yes

Checking supportnvme setting
/etc.defaults/synoinfo.conf: yes
/etc/synoinfo.conf: yes

Checking permissions and owner on model.dtb files
-rw-r--r-- 1 root root 4460 Oct 14 13:17 /etc.defaults/model.dtb
-rw-r--r-- 1 root root 4460 Oct 14 13:17 /etc/model.dtb
-rw-r--r-- 1 root root 4460 Oct 18 21:00 /run/model.dtb

Checking power_limit="100,100,100,100" is in model.dtb files
All OK

Checking E10M20-T1 is in model.dtb files
All OK

Checking M2D20 is in model.dtb files
All OK

Checking M2D18 is in model.dtb files
All OK

Checking permissions and owner on adapter_cards.conf files
-rw-r--r-- 1 root root 3170 Oct 13 11:58 /usr/syno/etc.defaults/adapter_cards.conf
-rw-r--r-- 1 root root 3170 Oct 14 12:58 /usr/syno/etc/adapter_cards.conf
-rw-r--r-- 1 root root 286 Oct 18 21:00 /run/adapter_cards.conf

Checking /usr/syno/etc.defaults/adapter_cards.conf
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes

Checking /usr/syno/etc/adapter_cards.conf
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes

Checking /run/adapter_cards.conf
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes

Checking synodisk --enum -t cache
************ Disk Info ***************
>> Disk id: 2
>> Slot id: 1
>> Disk path: /dev/nvme0n1
>> Disk model: Samsung SSD 970 EVO Plus 2TB
>> Total capacity: 1863.02 GB
>> Tempeture: 47 C
************ Disk Info ***************
>> Disk id: 1
>> Slot id: 1
>> Disk path: /dev/nvme1n1
>> Disk model: Samsung SSD 970 EVO Plus 2TB
>> Total capacity: 1863.02 GB
>> Tempeture: 47 C
************ Disk Info ***************
>> Disk id: 1
>> Disk path: /dev/nvme2n1
>> Disk model: Samsung SSD 970 EVO 1TB
>> Total capacity: 931.51 GB
>> Tempeture: 38 C
************ Disk Info ***************
>> Disk id: 2
>> Disk path: /dev/nvme3n1
>> Disk model: Samsung SSD 970 EVO 1TB
>> Total capacity: 931.51 GB
>> Tempeture: 40 C

Checking syno_slot_mapping

System Disk
Internal Disk
01: /dev/sata1
02: /dev/sata2
03: /dev/sata3
04: /dev/sata4
05: /dev/sata5
06: /dev/sata6
07: /dev/sata7
08: /dev/sata8

Esata port count: 2
Esata port 1
01:

Esata port 2
01:

USB Device
01:
02:
03:
04:

Internal SSD Cache:
01: /dev/nvme2n1
02: /dev/nvme3n1

PCIe Slot 1: E10M20-T1
01: /dev/nvme1n1
02: /dev/nvme0n1


Checking udevadm nvme paths
nvme0: /devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0
nvme1: /devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1
nvme2: /devices/pci0000:00/0000:00:01.3/0000:0f:00.0/nvme/nvme2
nvme3: /devices/pci0000:00/0000:00:01.4/0000:10:00.0/nvme/nvme3

Checking devicetree Power_limit
14.85,9.075

Checking if nvme drives in PCIe card with synonvme
nvme0: Not M.2 adapter card
nvme1: Not M.2 adapter card
nvme2: Not M.2 adapter card
nvme3: Not M.2 adapter card

Checking if nvme drives in PCIe card with synodisk
nvme0: E10M20-T1
nvme1: E10M20-T1
nvme2: Not M.2 adapter card
nvme3: Not M.2 adapter card

Checking PCIe slot path(s)
[pci]
pci1="0000:00:01.2"

Checking nvme drives in /run/synostorage/disks
nvme0n1
nvme1n1
nvme2n1
nvme3n1

Checking nvme block devices in /sys/block
nvme0n1
nvme1n1
nvme2n1
nvme3n1

Checking synostgd-disk log

Current date/time: 2023-10-18 21:17:57
Last boot date/time: 2023-10-18 21:17:00

No synostgd-disk logs since last boot

Author
Owner

@007revad commented on GitHub (Oct 19, 2023):

Excellent.

Interesting that you didn't need a 2nd reboot (as /run/adapter_cards.conf still existed).

do i need your other scripts to create a storage pool on the pcie card?

For NVMe drives in a PCIe card you need [Synology_M2_volume](https://github.com/007revad/Synology_M2_volume) to create the storage pool, and then do an online assemble in Storage Manager. This is because Storage Manager won't let you create a storage pool on NVMe drives in a PCIe card. I should see if I can get around that.

Checking synodisk --enum -t cache
************ Disk Info ***************
Disk id: 2
Slot id: 1
Disk path: /dev/nvme0n1
Disk model: Samsung SSD 970 EVO Plus 2TB
Total capacity: 1863.02 GB
Tempeture: 47 C
************ Disk Info ***************
Disk id: 1
Slot id: 1
Disk path: /dev/nvme1n1
Disk model: Samsung SSD 970 EVO Plus 2TB
Total capacity: 1863.02 GB
Tempeture: 47 C
************ Disk Info ***************
Disk id: 1
Disk path: /dev/nvme2n1
Disk model: Samsung SSD 970 EVO 1TB
Total capacity: 931.51 GB
Tempeture: 38 C
************ Disk Info ***************
Disk id: 2
Disk path: /dev/nvme3n1
Disk model: Samsung SSD 970 EVO 1TB
Total capacity: 931.51 GB
Tempeture: 40 C

Synology really needs to learn how to spell Temperature.

Your NVMe drives are a lot warmer than my little 500GB NVMe drives. My internal NVMe is 28 C and the one in the E10M20-T1 is 33 C (without the heatsink installed). Though I do have 2 empty bays next to the internal M.2 slots, and I currently have the cover off the NAS.

Checking syno_slot_mapping
PCIe Slot 1: E10M20-T1
01: /dev/nvme1n1
02: /dev/nvme0n1

I notice that nvme1 is in the E10M20-T1 M.2 slot-1 and nvme0 is in M.2 slot-2. I should have tested with 2 nvme drives in the pcie card as I'd expect nvme0 to be in the E10M20-T1 M.2 slot-1, like this:

01: /dev/nvme0n1
02: /dev/nvme1n1

I wonder if Synology screwed that up because all the NAS models that have E10M20-T1 in model.dtb have 08.0 for slot-1 and 04.0 for slot-2. I can switch them around but I'm not sure if I should.
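As an aside, the udevadm paths in the check output show which E10M20-T1 M.2 slot each nvmeN device sits behind: the PCI bridge device in the path is 08.0 for slot-1 and 04.0 for slot-2, per the model.dtb assignments mentioned above. A small illustrative sketch using the two paths from that output (the `e10m20_slot` helper name is mine):

```shell
# Paths as reported by "udevadm info" in the m2_card_check.sh output above
nvme0_path="/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0"
nvme1_path="/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1"

# Map the PCI bridge device (08.0 or 04.0) in a udev path to an
# E10M20-T1 M.2 slot, per the slot assignments in model.dtb.
e10m20_slot() {
    case "$1" in
        *:08.0/*) echo "M.2 slot-1" ;;
        *:04.0/*) echo "M.2 slot-2" ;;
        *)        echo "not behind the card" ;;
    esac
}

e10m20_slot "$nvme1_path"   # M.2 slot-1
e10m20_slot "$nvme0_path"   # M.2 slot-2
```

Which matches the syno_slot_mapping output: nvme1 is in the card's slot-1 and nvme0 in slot-2.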

Checking if nvme drives in PCIe card with synonvme
nvme0: Not M.2 adapter card
nvme1: Not M.2 adapter card
nvme2: Not M.2 adapter card
nvme3: Not M.2 adapter card

I've never seen synonvme correctly report that an nvme drive was in a pcie card. But it did alert me to the fact I had the wrong permissions set on /usr/syno/bin/synonvme
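Since wrong permissions on /usr/syno/bin/synonvme came up, here is a quick way to verify file modes from SSH. The expected values match what m2_card_check.sh reports (755 root:root for synonvme, 644 root:root for libsynonvme.so.1); the `check_mode` helper is my own, just a sketch:

```shell
# Compare a file's octal mode against what it should be.
# Expected on stock DSM: 755 for /usr/syno/bin/synonvme,
# 644 for /usr/lib/libsynonvme.so.1 (both owned root:root).
check_mode() {
    # $1 = file, $2 = expected octal mode (e.g. 755)
    [ "$(stat -c '%a' "$1" 2>/dev/null)" = "$2" ]
}

# check_mode /usr/syno/bin/synonvme 755 || echo "synonvme mode is wrong"
# check_mode /usr/lib/libsynonvme.so.1 644 || echo "libsynonvme.so.1 mode is wrong"
```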

<!-- gh-comment-id:1769755645 -->

@007revad commented on GitHub (Oct 19, 2023):

@RozzNL I just noticed your screenshot shows 3 available pools. You should be able to click on ... and select "Online Assemble".

<!-- gh-comment-id:1770066792 -->

@RozzNL commented on GitHub (Oct 19, 2023):

@RozzNL I just noticed your screenshot shows 3 available pools. You should be able to click on ... and select "Online Assemble".

I deleted all 3 available pools because that was me testing and changing everything before I reached out to you.
I've already created a read cache on the internal drives, and will use the m2_volume script for the PCIe card.
As for the temps, my cover is also off but the Syno is placed in a relatively warm spot, which does not help; temps are well within operating range though, so I'm not worried.

Do you already know if this will survive a dsm update?

<!-- gh-comment-id:1770293780 -->

@007revad commented on GitHub (Oct 19, 2023):

Do you already know if this will survive a dsm update?

I assume you mean the M2 volume? After a DSM update you'll need to run m2_card_fix then maybe do an online assemble.

Once I update syno_hdd_db you won't need m2_card_fix.

<!-- gh-comment-id:1770380602 -->

@MirHekmat commented on GitHub (Oct 25, 2023):

Hi @007revad, I have a DS1821+ as well and I am having the same issue. For me it just shows the E10M20-T1 in the Info Center but doesn't show LAN 5 at all, nor the drives.

I am good with computer hardware installation etc. but bad with coding. I see you have managed to help @RozzNL and fix the issue. Could you kindly summarise the correct and necessary steps to get this up and running? I have contacted Synology and they say to return the card as it is not on the Synology compatibility list. I would hate to return it, as the two extra cache/storage drives would be extremely helpful for my 4K video editing.
![image](https://github.com/007revad/Synology_HDD_db/assets/148930802/9df317cf-8aa2-43fb-ade1-af493c3ebfdc)
![image](https://github.com/007revad/Synology_HDD_db/assets/148930802/43c4e160-7e7e-4a57-9b9f-eb3f6f923a1c)

<!-- gh-comment-id:1778440760 -->

@007revad commented on GitHub (Oct 25, 2023):

@MirHekmat

Which DSM version is your DS1821+ using?

I assume you've already run syno_hdd_db.sh since installing the E10M20-T1.

  1. Go to [m2_card_fix.sh](https://github.com/007revad/Synology_HDD_db/blob/test/m2_card_fix.sh)
  2. Download m2_card_fix.sh (see image below).
  3. Run m2_card_fix.sh with sudo -i
  4. Reboot.

![download_raw](https://github.com/007revad/Synology_HDD_db/assets/39733752/49b63bf7-36f5-4f83-be25-1ba98fc45348)
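If you'd rather do steps 2 and 3 from an SSH session, one way is to fetch the raw file with curl (present on stock DSM) and sanity-check it before running it. The `fetch_script` helper and the sanity check are my own habit, not part of the instructions:

```shell
# Fetch a raw script from GitHub and confirm it actually looks like a
# shell script (and not an HTML error page) before you run it.
fetch_script() {
    # $1 = raw file URL, $2 = destination path
    curl -fsSL "$1" -o "$2" && head -1 "$2" | grep -q '^#!'
}

# fetch_script "https://raw.githubusercontent.com/007revad/Synology_HDD_db/test/m2_card_fix.sh" m2_card_fix.sh \
#     && sudo -i bash "$PWD/m2_card_fix.sh"
```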

<!-- gh-comment-id:1778459001 -->

@MirHekmat commented on GitHub (Oct 25, 2023):

Hey Dave,

The DSM is: DSM 7.2-64570 Update 1

I actually haven't run syno_hdd_db.sh

I read this post from top to bottom and saw there were a few things that were done; some worked and some didn't, as the other person mentions. So I just wanted to start from where it actually mattered (maybe it all matters, I am not sure).

So would you like me to start from here, steps 1 to 4, and it should work?
1. Go to [m2_card_fix.sh](https://github.com/007revad/Synology_HDD_db/blob/test/m2_card_fix.sh)
2. Download m2_card_fix.sh (see image below).
3. Run m2_card_fix.sh with sudo -i
4. Reboot.

Or first run syno_hdd_db.sh (where is this located? kindly advise), then steps 1 to 4.
Sorry, I am a noob. Thank you so much for all your help!

<!-- gh-comment-id:1778597547 -->

@007revad commented on GitHub (Oct 25, 2023):

The DSM is: DSM 7.2-64570 Update 1

I actually haven't run syno_hdd_db.sh

For a DS1821+ with DSM 7.2-64570 Update 1 you only need Synology_HDD_db and the E10M20-T1 will work.

If you update to DSM 7.2-64570 Update 2 or Update 3 you'd also need the following steps.

  1. Go to [m2_card_fix.sh](https://github.com/007revad/Synology_HDD_db/blob/test/m2_card_fix.sh)
  2. Download m2_card_fix.sh (see image below).
  3. Run m2_card_fix.sh with sudo -i
  4. Reboot.

I will integrate m2_card_fix.sh into Synology_HDD_db soon so it will do it all.

I've also got to test DSM 7.2.1-69057 Update 1

<!-- gh-comment-id:1778609854 -->

@MirHekmat commented on GitHub (Oct 25, 2023):

Thank you for clear instructions,
Also, I had one delivery for the whole lot: received 2x 16TB IronWolf drives at the same time as the E10M20-T1, so I installed everything.
The NAS is in the process of adding the 2x 16TB drives to my SHR raid (current progress: 41.98%).

Do you suggest I wait for the NAS to finish this rebuild to 100%? ETA is 2 days remaining. Or is it safe to run the code and reboot? Do you reckon it'll start back from where it left off, or might I lose some progress?
![image](https://github.com/007revad/Synology_HDD_db/assets/148930802/17b3d40f-c243-4d8b-ac45-44bc8871bc19)

<!-- gh-comment-id:1778740175 -->

@007revad commented on GitHub (Oct 25, 2023):

Do you suggest I wait for NAS to finish this building to 100%.

I would wait until it's finished.

<!-- gh-comment-id:1778875739 -->

@MirHekmat commented on GitHub (Oct 28, 2023):

Hey Mate, it worked great, thank you for your hard work. Chipped in a bit through PayPal.

Do I need to now maintain this through a task scheduler, as you have mentioned?

Also Is this the correct guide for creating m.2 storage volumes? https://github.com/007revad/Synology_M2_volume

<!-- gh-comment-id:1783664766 -->

@MirHekmat commented on GitHub (Oct 28, 2023):

There is also this one: https://github.com/007revad/Synology_enable_M2_volume

Not sure of the difference; which one would work best? I have 4x Samsung M.2 drives: 2 installed internally and 2 installed in the E10M20-T1. I would like to use the 2x in the E10M20-T1 as storage if possible.

<!-- gh-comment-id:1783672484 -->

@007revad commented on GitHub (Oct 28, 2023):

Chipped in a bit through PayPal.

Thanks.

Anytime you update DSM you'll need to run syno_hdd_db again. So it's easier to schedule it to run at boot-up.

Synology_enable_M2_volume isn't needed on a DS1821+ if you've run syno_hdd_db.

You will need [Synology_M2_volume](https://github.com/007revad/Synology_M2_volume) if you want to use the NVMe drives in the E10M20-T1 as a volume. This is because DSM won't allow creating a volume on NVMe drives in an M.2 adaptor card (not even for their own Synology branded NVMe drives).
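The boot-up schedule suggested above is just a root "Triggered: Boot-up" task in DSM's Task Scheduler. A minimal sketch of a task body, assuming the script lives in /volume1/scripts (both the path and the wrapper function are my own, not part of the repo):

```shell
# Run syno_hdd_db.sh at boot if it exists and is executable, passing
# through whatever options you would normally use interactively.
run_hdd_db() {
    # $1 = path to syno_hdd_db.sh, remaining args = its options
    script=$1; shift
    if [ -x "$script" ]; then
        "$script" "$@"
    else
        echo "not found or not executable: $script" >&2
        return 1
    fi
}

# In the scheduled task body, e.g.:
# run_hdd_db /volume1/scripts/syno_hdd_db.sh -nr
```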

<!-- gh-comment-id:1783692792 -->

@MirHekmat commented on GitHub (Oct 28, 2023):

Thanks all worked out now!

<!-- gh-comment-id:1783721844 -->

@MirHekmat commented on GitHub (Nov 1, 2023):

@007revad Hey mate,

I think you are in Australia too. I have a 2nd DS1821+ coming. Do you have any cheaper/good third-party 10G card alternative? I bought the TP-Link TX401 but I couldn't get it to work. Dummy me didn't know back then that Synology is very hardware restricted.

<!-- gh-comment-id:1788428808 -->

@007revad commented on GitHub (Nov 1, 2023):

I think you are in Australia too,

I figured you were also in Australia.

I have a 2nd DS1821+ coming. Do you have any cheaper/good thirdparty 10G card alternate I bought the TP-Link TX401 but I couldnt get that to work. dummy me didn't know back then Synology is very hardware restricted.

Apparently Synology's E10G18-T1 uses the Aquantia AQN107 controller but with custom firmware... so other Aquantia AQN107 controller based 10G cards (like the Asus-XG-C100C) don't work. With DSM 6 you could download the Linux driver source and compile it on your Synology.

https://servicemax.com.au/tips/synology-10gigabit-ethernet-on-the-cheap/
https://www.reddit.com/r/synology/comments/k4a5px/how_i_got_a_generic_cheap_aqc107_card_working_on/

There's comments on the Xpenology forum saying that doesn't work in DSM 7 (but that may just be because they didn't use the latest driver source).

The Xpenology people do have drivers for the DS1621+ (same CPU as the DS1821+) up to DSM 7.1.1 but nothing for DSM 7.2 (unless the DSM 7.1.1 driver still works). It seems like a lot of work to keep the driver up to date with each DSM update.

I actually have an Asus XG-C100C in my PC, so I could install my E10G18-T1 in the PC and try the Asus XG-C100C in my DS1821+ to test it. Which seems like a lot of work to save $100 AU.

10G cards that do work by just plugging them in are usually 2nd hand 10G SFP cards, or some 10GbE cards, but they only support 10G and 1G (no 2.5G or 5G).

  • Mellanox Connect X2 or X3
  • Intel based cards.

See:
https://www.reddit.com/r/synology/comments/ssjoi6/thirdparty_10g_nic_compatibility_for_ds_1821_only/
https://www.reddit.com/r/synology/comments/kcd3d6/cost_effective_3rd_party_10gbe_nic_for_synology/

<!-- gh-comment-id:1788552579 -->

@MirHekmat commented on GitHub (Nov 1, 2023):

Sounds like a lot of work. I did read some of those Reddit posts as well and it seems like a lot of mucking around, and since I am not invested in SFP at all, I completely agree it's not worth saving $100. I think I'll just buy a Synology DS1821+ compatible one from Amazon. Also good to know the DS1621+ and DS1821+ use the same CPU.

Thank you OZ Fellow!

<!-- gh-comment-id:1788570432 -->

@bitcinnamon commented on GitHub (Nov 6, 2023):

Hi Dave @007revad

I really appreciate your kind support. I have read from top to the end and tried all of these scripts, but unfortunately none of them work on my rig.

RS1221+, 
OS: DSM 7.2.1-69057 Update 1
RAM: Samsung 32g *2 // compatible 
HDD: 8*12TB SHR-2 //  mixed with Seagate, WD and Toshiba, they all work fine
Expansion Card: E10M20-T1
M.2 Drives: CFD Gaming 2TB *2  // recognized as: [Unknown CSSD-M2B2TPG3VNF]

In the initial state, DSM can setup my M.2 Drives as cache, without any warnings or errors.

  1. Ran [Synology_HDD_db] script, M.2 drives disappeared, 10GbE NIC still works.
  2. Then I tried [m2_card_fix.sh] mentioned above, nothing happens.

After this I fully reset the rig (7.2.1u1) and the M.2 drives appeared back in the list.

  1. Ran [Synology_M2_volume] to create pools and reboot, gone again.
  2. Ran [Synology_HDD_db], nothing happens.
  3. Ran [m2_card_fix.sh], still.

Tried another full reset (7.2.1u1), can see M.2 drives with "unsupported" in the list.
When I click on 'Reset Drive', it turns green OK and is able to make cache.

  1. Ran [Synology_HDD_db], disappeared again.
  2. Ran [m2_card_fix.sh], still.

So I downgraded to DSM 7.2 U1-64570, prevented updating to 7.2u3.

  1. Ran [Synology_HDD_db], M.2 drives disappeared again.

Factory reset (7.2u1) and use [Synology_M2_volume] to create pools and reboot, also got disappeared.

  1. Ran [Synology_HDD_db], nothing happens.
  2. Ran [m2_card_fix.sh], nothing happens.

Another factory reset; used mdadm to create the RAIDs manually over SSH, and after reboot they disappear again.

  1. Ran [Synology_HDD_db], nothing happens.
  2. Ran [m2_card_fix.sh], nothing happens.

I posted my RS1221+'s synonvme and libsynonvme.so.1, hope it helps.
[rs1221.zip](https://github.com/007revad/Synology_HDD_db/files/13269832/rs1221.zip)
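For anyone following along, the manual mdadm route mentioned above boils down to one create command per pool, which is roughly what Synology_M2_volume automates. A sketch that only prints the command instead of running it (the device names and md number are assumptions; on a real system check /proc/mdstat for a free md number first):

```shell
# Print (not execute) the mdadm command for a RAID 1 array over two
# NVMe partitions.
mdadm_raid1_cmd() {
    # $1/$2 = member devices, $3 = md device name (e.g. md3)
    echo "mdadm --create /dev/$3 --level=1 --raid-devices=2 $1 $2"
}

mdadm_raid1_cmd /dev/nvme2n1p3 /dev/nvme3n1p3 md3
# prints: mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/nvme2n1p3 /dev/nvme3n1p3
```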

<!-- gh-comment-id:1795346846 -->

@007revad commented on GitHub (Nov 6, 2023):

@bitcinnamon

Is this a real RS1221+ or Xpenology?

I see a few issues:

  • The m2_card_fix.sh was just a test script only meant for the DS1821+. Running it on any other model could potentially break things.
  • The RS1221+ already supports the E10M20-T1.
  • Even with Synology_HDD_db, DSM won't allow you to create a volume on M.2 drives that are in an M.2 adapter card.
  • Your NVMe drives showing as Unknown could be causing your issues.

When the drives disappear after using Synology_M2_volume, or after creating them with mdadm and rebooting, are you sure there isn't an "Online Assemble" option in Storage Manager?

Did you run Synology_HDD_db with the -n option? Is it scheduled to run at start-up?

A few people have reported that they needed to run "Synology_HDD_db and reboot" 2 or 3 times to stop their NVMe drives vanishing. One person even scheduled Synology_HDD_db to run at shutdown and boot-up.

Can you run [m2_card_check.sh](https://github.com/007revad/Synology_HDD_db/blob/test/m2_card_check.sh) and reply with its output?

<!-- gh-comment-id:1796447652 -->

@007revad commented on GitHub (Nov 7, 2023):

@bitcinnamon

I hashed synonvme and libsynonvme.so.1 for all NAS models that have them.

For the 70 Synology NAS models that have synonvme and libsynonvme.so.1 (i.e. the models that support M.2 drives) I've found:

  1. They all have the exact same synonvme file.
  2. There are 2 different libsynonvme.so.1 files.
  3. The 13 models that have full size PCIe slots and don't officially support M.2 adapter cards all have the same libsynonvme.so.1 file.

I've updated m2_card_fix.sh and it now supports DS1821+ DS1621+ DS1520+ RS822+ RS822rp+ RS1221+ RS1221rp+

The output should look like this:

# /volume1/scripts/m2_card_fix.sh
RS1221+

Downloading 64570_libsynonvme.so.1
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 54154  100 54154    0     0   557k      0 --:--:-- --:--:-- --:--:--  562k

Downloading 64570_synonvme
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 17241  100 17241    0     0   313k      0 --:--:-- --:--:-- --:--:--  317k
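The file comparison described above (the same synonvme everywhere, two variants of libsynonvme.so.1) needs nothing more than md5sum. A tiny sketch; the `hash_of` helper and the example file names are mine, for illustration only:

```shell
# Hash a file and print only the digest, so copies extracted from
# different models' DSM packages can be compared side by side.
hash_of() {
    md5sum "$1" | cut -d' ' -f1
}

# Example: compare the copies pulled from two models' system partitions
# [ "$(hash_of ds1821_synonvme)" = "$(hash_of rs1221_synonvme)" ] \
#     && echo "identical synonvme"
```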
<!-- gh-comment-id:1797122848 -->

@bitcinnamon commented on GitHub (Nov 7, 2023):

Thank you for your kind reply.

@bitcinnamon
Is this a real RS1221+ or Xpenology?

Yes it is; all brand new, just got it from Amazon.co.jp.

When the drives disappear after using Synology_M2_volume or creating them with mdadm and rebooting are you sure there isn't a "Online assemble" option in storage manager?

No, I never saw pools being created under the Storage Pools menu, nor an "Online Assemble" option.
Each time I need to go into HDD/SSD and scroll down to find the M.2 drives vanished.

Did you run Synology_HDD_db with the -n option?
Yes I ran Synology_HDD_db with -nr.

Is it scheduled to run at start-up?

My bad. I never tried rebooting twice or more, nor scheduled it to run at start-up.
I just ran it -> drives vanished -> ran another script -> still gone -> factory reset.
I will try adding it to the task scheduler again.

Can you run m2_card_check.sh and reply with it's output?

Certainly, I will update here when I get the output.

Also, I will try changing my M.2 drives to something like Samsung or Toshiba ones instead of the Unknown ones.

Thank you very much, Dave.

<!-- gh-comment-id:1797831216 -->

@007revad commented on GitHub (Nov 7, 2023):

@bitcinnamon

Did you see the last half of this comment where I've updated m2_card-fix.sh and it now supports DS1821+ DS1621+ DS1520+ RS822+ RS822rp+ RS1221+ RS1221rp+

<!-- gh-comment-id:1798101182 -->

@bitcinnamon commented on GitHub (Nov 7, 2023):

@bitcinnamon

Did you see the last half of this comment where I've updated m2_card-fix.sh and it now supports DS1821+ DS1621+ DS1520+ RS822+ RS822rp+ RS1221+ RS1221rp+

Yes I see them and gonna try it again! Thank you so much.
And here is my m2_card_check.sh output:

2023-11-07 23:46:58

 Checking support_m2_pool setting
/etc.defaults/synoinfo.conf: no
/etc/synoinfo.conf:          no

 Checking supportnvme setting
/etc.defaults/synoinfo.conf: yes
/etc/synoinfo.conf:          yes

 Checking permissions and owner of libsynonvme.so.1
 Which should be -rw-r--r-- 1 root root
-rw-r--r-- 1 root root 58314 Jun 21 18:28 /usr/lib/libsynonvme.so.1

 Checking permissions and owner of synonvme
 Which should be -rwxr-xr-x 1 root root
-rwxr-xr-x 1 root root 17273 Jun 21 18:28 /usr/syno/bin/synonvme

 Checking permissions and owner of model.dtb files
 Which should be -rw-r--r-- 1 root root
-rw-r--r-- 1 root root 3980 Sep 23 23:17 /etc.defaults/model.dtb
-rw-r--r-- 1 root root 3980 Sep 23 23:17 /etc/model.dtb
-rw-r--r-- 1 root root 3980 Nov  7 23:10 /run/model.dtb

 Checking power_limit="100,100,100,100" is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /etc/model.dtb
Missing in /run/model.dtb

 Checking E10M20-T1 is in model.dtb files
All OK

 Checking M2D20 is in model.dtb files
All OK

 Checking M2D18 is in model.dtb files
All OK

 Checking permissions and owner of adapter_cards.conf files
 Which should be -rw-r--r-- 1 root root
-rw-r--r-- 1 root root 3376 Sep  7 18:18 /usr/syno/etc.defaults/adapter_cards.conf
-rw-r--r-- 1 root root 3376 Sep  7 18:18 /usr/syno/etc/adapter_cards.conf
-rw-r--r-- 1 root root 288 Nov  7 23:10 /run/adapter_cards.conf

 Checking /usr/syno/etc.defaults/adapter_cards.conf
E10M20-T1_sup_nic NOT set to yes
E10M20-T1_sup_nvme NOT set to yes
E10M20-T1_sup_sata NOT set to yes
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes

 Checking /usr/syno/etc/adapter_cards.conf
E10M20-T1_sup_nic NOT set to yes
E10M20-T1_sup_nvme NOT set to yes
E10M20-T1_sup_sata NOT set to yes
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes

 Checking /run/adapter_cards.conf
M2D18_sup_sata NOT set to yes

 Checking synodisk --enum -t cache
************ Disk Info ***************
>> Disk id: 2
>> Slot id: 1
>> Disk path: /dev/nvme0n1
>> Disk model: CSSD-M2B2TPG3VNF
>> Total capacity: 1863.02 GB
>> Tempeture: 32 C
************ Disk Info ***************
>> Disk id: 1
>> Slot id: 1
>> Disk path: /dev/nvme1n1
>> Disk model: CSSD-M2B2TPG3VNF
>> Total capacity: 1863.02 GB
>> Tempeture: 32 C

 Checking syno_slot_mapping
----------------------------------------
System Disk
Internal Disk
01: /dev/sata1
02: /dev/sata3
03: /dev/sata6
04: /dev/sata8
05: /dev/sata2
06: /dev/sata4
07: /dev/sata5
08: /dev/sata7

Esata port count: 1
Esata port 1
01:

USB Device
01:
02:

PCIe Slot 1: E10M20-T1
01: /dev/nvme1n1
02: /dev/nvme0n1

----------------------------------------

 Checking udevadm nvme paths
nvme0: /devices/pci0000:00/0000:00:01.1/0000:01:00.0/0000:02:04.0/0000:06:00.0/nvme/nvme0
nvme1: /devices/pci0000:00/0000:00:01.1/0000:01:00.0/0000:02:08.0/0000:07:00.0/nvme/nvme1
nvme2: device node not found
nvme3: device node not found

 Checking devicetree Power_limit
cat: /sys/firmware/devicetree/base/power_limit: No such file or directory

 Checking if nvme drives in PCIe card are detected with synonvme
nvme0: Not M.2 adapter card
nvme1: Not M.2 adapter card
nvme2: Not M.2 adapter card
nvme3: Not M.2 adapter card

 Checking if nvme drives in PCIe card are detected with synodisk
nvme0: E10M20-T1
nvme1: E10M20-T1
nvme2: Not M.2 adapter card
nvme3: Not M.2 adapter card

 Checking PCIe slot path(s)
[pci]
pci1="0000:00:01.1"

 Checking nvme drives in /run/synostorage/disks
nvme0n1
nvme1n1

 Checking nvme block devices in /sys/block
nvme0n1
nvme1n1

 Checking synostgd-disk log
----------------------------------------
Current date/time:   2023-11-07 23:46:58
date: invalid date ‘@’
Last boot date/time:
date: invalid date ‘@’
----------------------------------------
No synostgd-disk logs since last boot
----------------------------------------
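The model.dtb checks in the output above can be reproduced by hand: the .dtb is a binary device-tree blob, but card names and power_limit strings inside it are plain text, so `grep -a` (treat binary as text) finds them. A sketch, with the file paths taken from the check output:

```shell
#!/bin/sh
# Return success if a plain-text string occurs inside a (binary) dtb file.
dtb_contains() {
    grep -aq "$1" "$2" 2>/dev/null
}

for f in /etc.defaults/model.dtb /etc/model.dtb /run/model.dtb; do
    if dtb_contains "E10M20-T1" "$f"; then
        echo "E10M20-T1 present in $f"
    else
        echo "E10M20-T1 missing in $f"
    fi
done
```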
<!-- gh-comment-id:1798985243 -->

@bitcinnamon commented on GitHub (Nov 7, 2023):

Issue solved.

Your NVMe drives having an Unknown could be causing your issues.

That's absolutely correct! I just made it with these scripts.
Thank you so much Dave for your kind support!

Here is what I did today:

  1. Downloaded the Synology_HDD_db script and ran it with -fnr, with the output below.
Synology_HDD_db v3.1.65
RS1221+ DSM 7.2.1-69057-1
Using options: -fnr
Running from: /volume1/scripts/Synology_HDD_db-3.1.65/syno_hdd_db.sh

HDD/SSD models found: 3
MG07ACA12TE,4003
ST12000VN0008-2PH103,SC61
WD120EFBX-68B0EN0,85.00A85

M.2 drive models found: 1
CSSD-M2B2TPG3VNF,EGFM13.0

M.2 PCIe card models found: 1
E10M20-T1

No Expansion Units found

Backed up rs1221+_host_v7.db
Added MG07ACA12TE to rs1221+_host_v7.db
ST12000VN0008-2PH103 already exists in rs1221+_host_v7.db
Added WD120EFBX-68B0EN0 to rs1221+_host_v7.db
Added CSSD-M2B2TPG3VNF to rs1221+_host_v7.db
Backed up rs1221+_e10m20-t1_v7.db
Added CSSD-M2B2TPG3VNF to rs1221+_e10m20-t1_v7.db
Backed up adapter_cards.conf

E10M20-T1 NIC already enabled for RS1221+
E10M20-T1 NVMe already enabled for RS1221+
E10M20-T1 SATA already enabled for RS1221+
E10M20-T1 already enabled in model.dtb

Backed up synoinfo.conf

Disabled support disk compatibility.

Disabled support memory compatibility.

Set max memory to 65536 GB.

Enabled M.2 volume support.

Disabled drive db auto updates.

DSM successfully checked disk compatibility.

You may need to reboot the Synology to see the changes.
  2. Add the script above to Task Scheduler, set to run at each start-up.
  3. Download and run [m2_card_fix.sh] // I deleted the version-check segment in the script, since my build number is 69057.
  4. Use [Synology_M2_volume] to create 2 Single pools on my M.2 drives.
  5. Reboot, and the SSDs vanished as before.
  6. Reboot again; nothing changes.

Your NVMe drives having an Unknown could be causing your issues.

Remembering this, I changed my two CFD Gaming (a Japanese local brand) SSDs to a Seagate FireCuda and a Sabrent Rocket (another tiny Japanese brand?).

  7. Boot, and the new SSDs were recognized; the Seagate shows correctly, but the Sabrent comes up as an Unknown Sabrent ...
  8. Use [Synology_M2_volume] to create 2 Single pools on my M.2 drives.
  9. Reboot; 1 pool (Seagate FireCuda) is available, clicked "Online Assemble", and created storage successfully. The Unknown Sabrent remains vanished.

I realized that the SSD recognized as an 'Unknown' brand doesn't work, so I changed it to a Plextor 1TB SSD.

  10. Boot, and the Plextor shows in the HDD/SSD section; used [Synology_M2_volume] to create a pool.
  11. The other pool became available; online assemble and storage creation succeeded.

Conclusion: NEVER PUT SSDs FROM OBSCURE BRANDS IN Xpenology.
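One way to double-check what the script actually did: the drive databases named in the output above (e.g. rs1221+_host_v7.db) are JSON files, conventionally under /var/lib/disk-compatibility (path assumed here), so a quoted model string can be grepped directly. A sketch:

```shell
#!/bin/sh
# Return success if a drive model string appears (quoted) in a db file.
model_in_db() {
    grep -q "\"$1\"" "$2" 2>/dev/null
}

MODEL="CSSD-M2B2TPG3VNF"
for db in /var/lib/disk-compatibility/rs1221+_host_v7.db \
          /var/lib/disk-compatibility/rs1221+_e10m20-t1_v7.db; do
    if model_in_db "$MODEL" "$db"; then
        echo "$MODEL present in $(basename "$db")"
    else
        echo "$MODEL missing in $(basename "$db")"
    fi
done
```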

<!-- gh-comment-id:1799303072 -->

@007revad commented on GitHub (Nov 7, 2023):

@bitcinnamon

Issue solved.

RS1221+ DSM 7.2.1-69057-1
Download and run [m2_card_fix.sh] // I deleted the version-check segment in the script, since my build number is 69057.

Nice that it also works in DSM 7.2.1.

Set max memory to 65536 GB.

That should be 65536 MB. I hope it's just the script showing "GB" as the unit in the output.

What does the following command return?
get_key_value /etc.defaults/synoinfo.conf mem_max_mb
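get_key_value is a Synology built-in that reads key="value" entries from files like synoinfo.conf. Off-NAS, the same lookup can be approximated with sed (a sketch; the pattern matches synoinfo.conf's key="value" quoting style):

```shell
#!/bin/sh
# Approximate Synology's get_key_value: print the value of key=... or
# key="..." from a conf file (first match wins).
get_conf_value() {
    file="$1"; key="$2"
    sed -n "s/^${key}=\"\{0,1\}\([^\"]*\)\"\{0,1\}\$/\1/p" "$file" | head -n 1
}

# e.g. on DSM: get_conf_value /etc.defaults/synoinfo.conf mem_max_mb
```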

I'm wondering if I can get the CFD and Sabrent brand NVMe drives working.

  1. What does storage manager show their brand as? Can you reply with a screenshot?
  2. What do the following commands return for the CFD and Sabrent NVMe drives?
cat "/sys/block/nvme0n1/device/model"
cat "/sys/block/nvme0n1/device/firmware_rev"
cat "/sys/block/nvme0n1/device/rev"
cat "/sys/block/nvme1n1/device/model"
cat "/sys/block/nvme1n1/device/firmware_rev"
cat "/sys/block/nvme1n1/device/rev"
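The six cat commands above can be generalized to whatever NVMe devices are present (a sketch; the /sys/block layout is standard Linux, and the directory argument exists only for testing):

```shell
#!/bin/sh
# Print model and firmware revision for every NVMe namespace in /sys/block.
print_nvme_info() {
    sysdir="${1:-/sys/block}"
    for dev in "$sysdir"/nvme*n1; do
        [ -e "$dev" ] || continue        # glob matched nothing
        name=$(basename "$dev")
        model=$(cat "$dev/device/model" 2>/dev/null)
        fw=$(cat "$dev/device/firmware_rev" 2>/dev/null)
        printf '%s: model=%s firmware=%s\n' "$name" "$model" "$fw"
    done
}

print_nvme_info
```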
<!-- gh-comment-id:1800043680 -->

@bitcinnamon commented on GitHub (Nov 8, 2023):

What does the following command return?

$ get_key_value /etc.defaults/synoinfo.conf mem_max_mb
65536

What does storage manager show their brand as? Can you reply with a screenshot?

Unknown CSSD-M2B2TPG3VNF
Unknown Sabrent

As for the screenshots and the outputs of those commands, I will upload them in 3-4 days.

<!-- gh-comment-id:1802179799 -->

@RozzNL commented on GitHub (Nov 16, 2023):

EDIT:
Running syno_hdd_db.sh v3.2.66-RC corrects everything....
All NVMe drives are back online and all storage pools are back! My DS1821+ is running DSM 7.2.1-69057 Update 2

Hi Dave,
Updating my DS1821+ to DSM 7.2.1-69057 Update 1 and Update 2 breaks the internal NVMe slots.
Running syno_hdd_db.sh and m2_card_fix.sh did not resolve the issue.
The NVMe drives on the E10M20-T1 do work.

Running m2_card_fix.sh gives output:
DS1821+
69057 not supported

Running m2_card_check gives output:
DS1821+
DSM 7.2.1-69057 Update 2
2023-11-16 14:18:24

Checking support_m2_pool setting
/etc.defaults/synoinfo.conf: yes
/etc/synoinfo.conf: yes

Checking supportnvme setting
/etc.defaults/synoinfo.conf: yes
/etc/synoinfo.conf: yes

Checking md5 hash of libsynonvme.so.1
libsynonvme.so.1 is 7.2-64570 version

Checking md5 hash of synonvme
synonvme is 7.2-64570 version

Checking permissions and owner of libsynonvme.so.1
Which should be -rw-r--r-- 1 root root
-rw-r--r-- 1 root root 54154 Nov 16 13:53 /usr/lib/libsynonvme.so.1

Checking permissions and owner of synonvme
Which should be -rwxr-xr-x 1 root root
-rwxr-xr-x 1 root root 17241 Nov 16 13:53 /usr/syno/bin/synonvme

Checking permissions and owner of model.dtb files
Which should be -rw-r--r-- 1 root root
-rw-r--r-- 1 root root 3583 Sep 23 17:11 /etc.defaults/model.dtb
-rw-r--r-- 1 root root 4460 Oct 14 13:17 /etc/model.dtb
-rw-r--r-- 1 root root 3583 Nov 16 14:11 /run/model.dtb

Checking if default power_limit="14.85,9.075" is in model.dtb files
Missing in /etc/model.dtb

Checking power_limit="14.85,14.85,14.85" is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /etc/model.dtb
Missing in /run/model.dtb

Checking power_limit="14.85,14.85,14.85,14.85" is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /etc/model.dtb
Missing in /run/model.dtb

Checking power_limit="100,100,100" is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /run/model.dtb

Checking power_limit="100,100,100,100" is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /run/model.dtb

Checking E10M20-T1 is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /run/model.dtb

Checking M2D20 is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /run/model.dtb

Checking M2D18 is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /run/model.dtb

Checking permissions and owner of adapter_cards.conf files
Which should be -rw-r--r-- 1 root root
-rw-r--r-- 1 root root 3412 Nov 16 13:43 /usr/syno/etc.defaults/adapter_cards.conf
-rw-r--r-- 1 root root 3170 Oct 14 12:58 /usr/syno/etc/adapter_cards.conf
-rw-r--r-- 1 root root 286 Nov 16 14:11 /run/adapter_cards.conf

Checking /usr/syno/etc.defaults/adapter_cards.conf
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes

Checking /usr/syno/etc/adapter_cards.conf
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes

Checking /run/adapter_cards.conf
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes

Checking synodisk --enum -t cache
************ Disk Info ***************
>> Disk id: 1
>> Disk path: /dev/nvme2n1
>> Disk model: Samsung SSD 970 EVO 1TB
>> Total capacity: 931.51 GB
>> Tempeture: 33 C
************ Disk Info ***************
>> Disk id: 2
>> Disk path: /dev/nvme3n1
>> Disk model: Samsung SSD 970 EVO 1TB
>> Total capacity: 931.51 GB
>> Tempeture: 37 C

Checking syno_slot_mapping

System Disk
Internal Disk
01: /dev/sata1
02: /dev/sata2
03: /dev/sata3
04: /dev/sata4
05: /dev/sata5
06: /dev/sata6
07: /dev/sata7
08: /dev/sata8

Esata port count: 2
Esata port 1
01:

Esata port 2
01:

USB Device
01:
02:
03:
04:

Internal SSD Cache:
01: /dev/nvme2n1
02: /dev/nvme3n1

PCIe Slot 1: E10M20-T1


Checking udevadm nvme paths
nvme0: /devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0
nvme1: /devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1
nvme2: /devices/pci0000:00/0000:00:01.3/0000:0f:00.0/nvme/nvme2
nvme3: /devices/pci0000:00/0000:00:01.4/0000:10:00.0/nvme/nvme3

Checking devicetree Power_limit
14.85,9.075

Checking if nvme drives in PCIe card are detected with synonvme
nvme0: Not M.2 adapter card
nvme1: Not M.2 adapter card
nvme2: Not M.2 adapter card
nvme3: Not M.2 adapter card

Checking if nvme drives in PCIe card are detected with synodisk
nvme0: E10M20-T1
nvme1: E10M20-T1
nvme2: Not M.2 adapter card
nvme3: Not M.2 adapter card

Checking PCIe slot path(s)
[pci]
pci1="0000:00:01.2"

Checking nvme drives in /run/synostorage/disks
nvme2n1
nvme3n1

Checking nvme block devices in /sys/block
nvme0n1
nvme1n1
nvme2n1
nvme3n1

Checking synoscgi log

Current date/time: 2023-11-16 14:18:25
Last boot date/time: 2023-11-16 14:11:14

2023-11-16T14:13:38+01:00 DS1821 synoscgi_SYNO.Storage.CGI.Storage_1_load_info[7271]: nvme_dev_is_u2_slot.c:31 Failed to get slot informtion of /dev/nvme1n1
2023-11-16T14:13:45+01:00 DS1821 synoscgi_SYNO.Storage.CGI.Storage_1_load_info[8076]: nvme_slot_info_get.c:53 Failed to find slot info
2023-11-16T14:13:45+01:00 DS1821 synoscgi_SYNO.Storage.CGI.Storage_1_load_info[8076]: nvme_dev_is_u2_slot.c:31 Failed to get slot informtion of /dev/nvme0n1
2023-11-16T14:13:45+01:00 DS1821 synoscgi_SYNO.Storage.CGI.Storage_1_load_info[8076]: nvme_slot_info_get.c:53 Failed to find slot info
2023-11-16T14:13:45+01:00 DS1821 synoscgi_SYNO.Storage.CGI.Storage_1_load_info[8076]: nvme_dev_is_u2_slot.c:31 Failed to get slot informtion of /dev/nvme1n1
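The "Checking udevadm nvme paths" section of the output above can be reproduced with a short loop (a sketch; udevadm comes with systemd/eudev and is present on DSM 7):

```shell
#!/bin/sh
# Print the kernel device path for each possible NVMe controller,
# mirroring the check script's output format.
nvme_udev_paths() {
    for n in nvme0 nvme1 nvme2 nvme3; do
        path=$(udevadm info --query=path --name="/dev/$n" 2>/dev/null)
        echo "$n: ${path:-device node not found}"
    done
}

nvme_udev_paths
```

Internal slots appear directly under a root-complex port (e.g. 0000:00:01.3), while drives behind the E10M20-T1 sit under the card's PCIe switch, which is how synodisk tells them apart.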

<!-- gh-comment-id:1814425612 -->

@MirHekmat commented on GitHub (Nov 17, 2023):

@007revad
Hey mate,

I am under a lot of stress at the moment, hoping nothing is lost yet. I have so many photos. I had migrated everything to the new NAS over the past few months and it was working fine.

So I followed RozzNL's last comment:
EDIT:
Running syno_hdd_db.sh v3.2.66-RC corrects everything....
All NVMe drives back online and all storage pools back! My DS1821+ running DSM 7.2.1-69057 Update 2)

As I updated my DSM to DSM 7.2.1-69057 Update 1

After updating and running the new script v3.2.66, I restarted the system and am hearing continuous beeping, with the following error message. Really hoping you can suggest what to do from here. I have restarted a few times. I did get "Drive crashed" on drive 2. I clicked Repair when the option appeared. The 2nd drive and all the other drives are showing healthy.

![image](https://github.com/007revad/Synology_HDD_db/assets/148930802/4f8166bb-d23d-4893-885b-a3a4196cadbf)

This is a copy paste as I still have that screen up and what happened when I run the script:
Synology_HDD_db v3.2.66
DS1821+ DSM 7.2.1-69057-1
Using options:
Running from: /volume1/homes/Mir/Scripts2/syno_hdd_db.sh

HDD/SSD models found: 6
ST12000NE0008-2PK103,EN02
ST12000VN0008-2YS101,SC60
ST16000NE000-2RW103,SB30
ST16000NE000-2RW103,SN02
ST16000NE000-2RW103,SN03
ST2000DM001-1CH164,CC26

M.2 drive models found: 3
Samsung SSD 960 EVO 1TB,3B7QCXE7
Samsung SSD 970 EVO Plus 1TB,2B2QEXM7
Samsung SSD 970 EVO Plus 500GB,2B2QEXM7

M.2 PCIe card models found: 1
E10M20-T1

No Expansion Units found

ST12000NE0008-2PK103 already exists in ds1821+_host_v7.db
ST12000NE0008-2PK103 already exists in ds1821+_host_v7.db.new
ST12000VN0008-2YS101 already exists in ds1821+_host_v7.db
ST12000VN0008-2YS101 already exists in ds1821+_host_v7.db.new
Updated ST16000NE000-2RW103 to ds1821+_host_v7.db
Updated ST16000NE000-2RW103 to ds1821+_host_v7.db.new
Updated ST16000NE000-2RW103 to ds1821+_host_v7.db
Updated ST16000NE000-2RW103 to ds1821+_host_v7.db.new
Updated ST16000NE000-2RW103 to ds1821+_host_v7.db
Updated ST16000NE000-2RW103 to ds1821+_host_v7.db.new
ST2000DM001-1CH164 already exists in ds1821+_host_v7.db
ST2000DM001-1CH164 already exists in ds1821+_host_v7.db.new
Samsung SSD 960 EVO 1TB already exists in ds1821+_host_v7.db
Samsung SSD 960 EVO 1TB already exists in ds1821+_host_v7.db.new
Samsung SSD 960 EVO 1TB already exists in ds1821+_e10m20-t1_v7.db
Samsung SSD 970 EVO Plus 1TB already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO Plus 1TB already exists in ds1821+_host_v7.db.new
Samsung SSD 970 EVO Plus 1TB already exists in ds1821+_e10m20-t1_v7.db
Samsung SSD 970 EVO Plus 500GB already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO Plus 500GB already exists in ds1821+_host_v7.db.new
Samsung SSD 970 EVO Plus 500GB already exists in ds1821+_e10m20-t1_v7.db

E10M20-T1 NIC already enabled for DS1821+
E10M20-T1 NVMe already enabled for DS1821+
E10M20-T1 already exists in model.dtb

Support disk compatibility already enabled.

Support memory compatibility already enabled.

NVMe support already enabled.

M.2 volume support already enabled.

Drive db auto updates already enabled.

DSM successfully checked disk compatibility.

You may need to reboot the Synology to see the changes.

Please tell me if I can relax, as I have about 30 TB of data. I was just preparing to transfer a full backup of all this data to a 2nd DS1821+ (the new 1821+ is simply connected through the built-in Ethernet ports LAN1+LAN2+LAN3+LAN4; I was trying to get faster data transfer through SMB3, all on 1 Gb links). I was hoping I could get 4 Gb on the new backup NAS, and obviously the other one already had 10 GbE enabled through the E10M20-T1.

Nothing out of the ordinary installed on the new DS1821+ (empty back).

The OLD DS1821+ shows 50.9 TB as you can see.

Please help!

@RozzNL commented on GitHub (Nov 17, 2023):

@MirHekmat
I also used m2_card_fix.sh first, before I ran the syno_hdd_db.sh RC.
Maybe you can try that?

@007revad commented on GitHub (Nov 17, 2023):

@RozzNL
Thanks for providing the solution for MirHekmat

@MirHekmat
When you were getting the continuous beeps, were the fans also running at full speed?

The DS1821+ and DS1621+ are like problem children: they need extra things done when using an unsupported M.2 adapter card, or they throw a very scary tantrum. I am still working on integrating m2_card_fix into syno_hdd_db, or on finding a better solution.

@MirHekmat commented on GitHub (Nov 18, 2023):

@RozzNL @007revad Fans were running fine. I checked the alert in DSM and it was on orange alert with a missing SSD error. All sorts of things were going on. The beeping sound makes you panic; I wish it was a more subtle buzz.

One more thing: after the DSM update and the script run through SSH, the DSM sort of reset itself. When I logged in I couldn't see those desktop icons etc. that I had placed as shortcuts (possibly because Volume 1 is missing a drive).

However, my user logins still work as I had defined them. No idea why this is happening or what exactly is happening. I just regret that I thought I should get the 10G E10M20-T1 up and running before doing the entire backup through Hyper Backup, just to speed things up. The new NAS was on the new DSM 69057 and it wasn't showing the drive to save everything to, hence the reason for updating to match the firmware, hoping that would fix it.

So last night after it crashed, and as reported by Synology, I did lots of googling and checked the Synology forums for solutions. One of the options was: after you get "your volume has degraded", in order to fix it you need to repair if you see the option available. So I hit that 3-dot menu in the Storage section and hit Repair. The first drive, a 12TB IronWolf that was part of the original SHR, seems to have disconnected somehow, and I had a spare 16TB (set as hot spare); these two were showing as available drives. It told me I had two drives available to use for the repair. I assumed I shouldn't touch the 12TB, as it was originally part of the SHR when everything was working, figuring it may still hold the parity information in case this repair method doesn't work (it was saying all the data on the chosen drive will be erased). So I chose the unused hot-spare 16TB IronWolf HDD as the replacement drive.

So from last night till now, the repair progress is about 66.68%:
![image](https://github.com/007revad/Synology_HDD_db/assets/148930802/7c3625dc-6a7e-47a0-9322-0b70f4fad8c0)

I am waiting for this to finish and I am kinda hoping it will bring back the data for Volume 1. I think it's sorting out the parity for the missing drive in this SHR, though I am not 100% sure what exactly is happening, or what caused the chaos last night.

Once I get my data back, I'll just transfer it all to the new NAS without worrying about 10G speed, M.2, the extra hacks etc.
Only then am I thinking of running the new scripts ^ on the primary NAS, as that's affordable data-wise in case something worse happens.

To you experienced people: what are the chances I have lost my data, or do you reckon I should be able to get it back?
It seems like I have another 8-10 hours before I'll know the answer. The suspense is absolutely killing me and I feel very anxious; it's limbo. I can't make up my mind whether I should give up on my data or whether it will probably come back. No idea what's going to happen :)

One more question to you experienced people: do I still need to sort out the cache error it's giving, or once the repair finishes should it just bring the data back? I don't care about the cache at the moment, I just hope I have a chance of getting my data back :)

Please check this:
![image](https://github.com/007revad/Synology_HDD_db/assets/148930802/a37249cf-4d12-4dfe-bb0d-ff08938aed82)

@RozzNL commented on GitHub (Nov 18, 2023):

You have a 1-drive fault-tolerant setup/RAID, so you can lose 1 hard disk entirely and still get all your data back.
It doesn't matter which hard drive fails! (So yes, you could even have reused the "original" hard disk that failed, as long as it's not a hardware failure of course.)

Your cache, on the other hand: you have a read/write cache. Data-protection wise this is a no-no. If the data on the write cache fails, you can/could lose data on the volume. But I am not sure what happens when the read/write cache comes back online; I haven't had that fault before.

I use the 3-2-1 backup method: 3 copies of the data, of which 2 are at different locations/NASes at home and 1 is online.

My recommendation to all: do not use a read/write cache on a volume you really need backups of; only use a read cache.
Using a RAID 1 NVMe storage pool/volume is much faster and easier!

@MirHekmat commented on GitHub (Nov 18, 2023):

Edit 2: Sorry, in this screenshot just taken now: in Shared Folder in Control Panel I still see the folders as I had structured them. The grey ones are the ones on Volume 1; the yellow ones are a different storage pool that is still accessible. Volume 3 is two Samsung M.2s that were on the E10M20-T1.
Could this mean that once the repair process is done I'll have access to those files? Please, if anyone could shed some light!? I am very new to Synology, RAID, SHR etc.

Edit 1: So you can see from this screenshot that I posted here three weeks ago, it was showing 11.7 TB used.
![image](https://github.com/007revad/Synology_HDD_db/assets/148930802/e11f32ab-40e3-4b2c-9d97-c83f93489607)

Also, I bought the 2nd DS1821+ for backing up, so these were my steps towards a 3-2-1 backup. But I guess I made a mistake along the way.

Oh crap. So am I screwed here because of the cache? I really thought a cache was just proxy storage for faster reads and writes: instead of accessing Volume 1 all the time for the data, the cache would save a copy of some data. I really was treating it as separate data storage. Are you saying it's part of the same chain as the Volume 1 SHR?

Also one more thing: as you can see in the screenshot from before all the chaos, it was 29.1 TB used out of 50 TB; now it's saying 50.9 TB allocated. Shouldn't it still say 29.1 TB used but volume degraded etc.? Why isn't it showing used space?

@007revad commented on GitHub (Nov 18, 2023):

The 50.9 TB used is normal while the NAS is repairing that storage pool. Once it's finished (in 8.5 days!) it should again show 29.1 TB.

With a read/write cache, data is written to the write cache and only later saved to the HDDs. If something bad happens before the data is written to the HDDs, that data is lost.

@MirHekmat commented on GitHub (Nov 18, 2023):

Mate, you are giving me hope, thank you! I changed the sync speed to custom so it's going fast at the moment, currently sitting at 78% on step 1. I started the repair at around 10ish PM Perth time and it's now 3:19 PM. I guess your calculation might still include time for step 2? Hoping I don't have to die every day for 8 days :D before I find out. Although I don't mind at all if it takes that long, as long as it gives me my data back.

This is a screenshot of the current performance of the drives during the repair process; drive 6 is a separate volume in itself called Volume 2 (still fully accessible):
![image](https://github.com/007revad/Synology_HDD_db/assets/148930802/5bbf8fcb-b505-456d-8db5-ac984fc946a5)

Is there anything else I can do to verify if my data is still on those drives, or would you suggest I relax and take a chill pill?

@007revad commented on GitHub (Nov 18, 2023):

The 8.5 days was from your screenshot that showed "Adding drives... 41.9% (Time left: 8 days 8 hours)". But I see it's now up to 78% already. In my experience step 2 is faster.

Is this SHR-2 or RAID 6?

Basically there's nothing you can do until it's finished. Just relax and let it do what it needs to do. And be glad we don't live in a country with daily rolling blackouts. If you don't see any warning messages popping up, you can assume it's all good.

@MirHekmat commented on GitHub (Nov 18, 2023):

This is SHR-1.

". In my experience step is faster." You mean to say step 2 is faster?

Totally agree <3, really hoping so.
If I do see any issue, or my data doesn't appear after the repair process, are there 3rd-party services, or Synology themselves, that could recover the data?

@RozzNL commented on GitHub (Nov 18, 2023):

Degraded means repairable and no data loss. Take another chill pill ;)

https://kb.synology.com/en-ca/DSM/help/DSM/StorageManager/storage_pool_repair?version=7

@MirHekmat commented on GitHub (Nov 18, 2023):

RozzNL, hey mate!!! Thank you!!!! I'll definitely post an update on how it goes after the steps are completed.

Also, I guess I'll ignore the cache error for the moment, hoping the SSD cache will not associate itself with Volume 1.

@007revad commented on GitHub (Nov 18, 2023):

". In my experience step is faster." you mean to say step 2 is faster?

Yep, Step 2.

@RozzNL commented on GitHub (Nov 18, 2023):

In your screenshot ("time left: 8 days") you can see the SSD cache has a green dot.

So it looks like the SSD cache is also being repaired as part of the repair of storage pool 1.
In my opinion it all looks OK, but that's just going from your screenshot and Synology showing a green dot.

@MirHekmat commented on GitHub (Nov 18, 2023):

I now see where you guys read the 8 days. That was a screenshot from three weeks ago when I was asking for help running the script (I had just started inquiring with Dave on how to run this script and getting some guidance); he suggested waiting for that to finish (three weeks ago, before all this new stuff happened). I just copied that screenshot as it was already posted here on GitHub, and used it to show how it used to report data available versus how there is no data now.

Discard the 8-days screenshot, as it has nothing to do with now :) I am still hopeful!
![image](https://github.com/007revad/Synology_HDD_db/assets/148930802/cefd51dc-9d40-409e-8b04-4d7d00bab8ad)

This is the current screenshot:
![image](https://github.com/007revad/Synology_HDD_db/assets/148930802/45b921ce-2895-4d8c-b4b8-534d80c3dc99)

Storage pool 2, which you see as missing right at the bottom, is the E10M20-T1 with 2x M.2 Samsung SSDs. I don't really care about that either, as they were just some project duplicates for quick access in DaVinci Resolve.

@MirHekmat commented on GitHub (Nov 19, 2023):

![image](https://github.com/007revad/Synology_HDD_db/assets/148930802/b252c0cc-32de-406c-858c-3bc31e7784fd)

Okay, that single HDD seems to have been fixed, but I guess the internal 2x SSD cache is still showing as missing. Because Storage pool 2 was the E10M20-T1.

@007revad So do you reckon running m2_card_fix.sh would bring the internal M.2s back up and running? No idea why they would stop working; any ideas?

@007revad commented on GitHub (Nov 19, 2023):

> So do you reckon running m2_card_fix.sh would bring the internal m.2 back up and running

Yes. After running m2_card_fix.sh and rebooting, your internal M.2 drives and the M.2 drives on the E10M20-T1 should be back up and running.

> no idea why that would stop running any ideas?

Because updating DSM would have restored the 2 files that m2_card_fix.sh replaces.
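
One way to catch this early after future updates (a sketch only; the two DSM files m2_card_fix.sh replaces are not named in this thread, so the path below is a hypothetical stand-in):

```shell
# Record a checksum of the patched file right after running the fix script,
# then re-check it after any DSM update. "/tmp/patched_example.conf" is a
# stand-in path for illustration, not a real DSM file.
patched_file=/tmp/patched_example.conf
echo "patched content" > "$patched_file"
md5sum "$patched_file" > "$patched_file.md5"

# ...later, e.g. after a DSM update: "md5sum -c" exits non-zero if the
# file no longer matches the recorded checksum.
if md5sum -c "$patched_file.md5" >/dev/null 2>&1; then
    echo "file unchanged"
else
    echo "file was replaced - re-run the fix script"
fi
```

If the check fails after an update, that's the cue to re-run m2_card_fix.sh before rebooting.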

@MirHekmat commented on GitHub (Nov 19, 2023):

Thanks Dave, I have created a ticket with Synology as well, as I wasn't sure. Apparently Synology might be able to give you read-only access to your crashed volume, which is all that matters to me at this time.

So my question is: do you suggest I run m2_card_fix.sh given that I don't have any backup of the data? It wouldn't break anything on a permanent basis, would it? Trying to be really careful here.

Can you also please tell me where I can find m2_card_fix.sh, and a bit of a guide on how to run it?

@007revad commented on GitHub (Nov 19, 2023):

Running m2_card_fix.sh won't affect your HDD volumes or make anything worse. It just replaces 2 NVMe-related DSM files with 7.2 versions.

  1. Go to [m2_card_fix.sh](https://github.com/007revad/Synology_HDD_db/blob/test/m2_card_fix.sh)
  2. Download m2_card_fix.sh (see image below).
  3. Run m2_card_fix.sh with `sudo -i`
  4. Reboot.

![download_raw](https://github.com/007revad/Synology_HDD_db/assets/39733752/49b63bf7-36f5-4f83-be25-1ba98fc45348)
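
The steps above can also be done entirely from SSH. This is a sketch, not from the thread: the raw-file URL is derived from the GitHub page URL linked in step 1 (the rewrite pattern is an assumption about how GitHub serves raw files), and the download commands are shown commented out since they need network access:

```shell
# Derive the raw-download URL from the GitHub "blob" page URL in step 1.
# Assumed pattern: github.com -> raw.githubusercontent.com, drop "/blob".
blob_url="https://github.com/007revad/Synology_HDD_db/blob/test/m2_card_fix.sh"
raw_url=$(printf '%s\n' "$blob_url" \
  | sed -e 's#//github\.com#//raw.githubusercontent.com#' -e 's#/blob/#/#')
echo "$raw_url"

# On the NAS (network access assumed) the remaining steps would be:
#   curl -fsSL -o /volume2/Scripts/m2_card_fix.sh "$raw_url"
#   sudo -i /volume2/Scripts/m2_card_fix.sh
#   sudo reboot
```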

@MirHekmat commented on GitHub (Nov 19, 2023):

Can't see any images?

@007revad commented on GitHub (Nov 19, 2023):

> Can't see any images?

I can see your images. I also see the images in the emails GitHub send me.

@MirHekmat commented on GitHub (Nov 19, 2023):

I really have a good feeling this will resolve the issue, however I am having trouble running the script. What I meant by "can't see any images" was step 2 you mentioned ("see image below"): I couldn't see any image attached. However, I think I managed to download it using the 3-dot menu in the right corner.

So when trying to run the script, this is what it says:
![image](https://github.com/007revad/Synology_HDD_db/assets/148930802/56407e5e-15cb-4fb4-a740-9c5e73dd0ab5)

@007revad commented on GitHub (Nov 19, 2023):

It should be sudo with a lowercase s, not Sudo.

And there needs to be a space between -i and /volume:

`sudo -i /volume2/Scripts/m2_card_fix.sh`
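
Unix shells look up names case-sensitively, which is why `Sudo` was not found while `sudo` works. A quick illustration using a throwaway shell function (the name `greet` is hypothetical, for demonstration only):

```shell
# Shell names are case-sensitive: define a lowercase function...
greet() { echo "hello"; }

greet                             # resolves: prints "hello"
command -v Greet >/dev/null \
  || echo "Greet: not found"      # different case: does not resolve
```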

@MirHekmat commented on GitHub (Nov 19, 2023):

Oh I see, I didn't think it was case-sensitive.
It says this:

![image](https://github.com/007revad/Synology_HDD_db/assets/148930802/5bd0c62d-a8de-4c23-be6c-3d4f81b4731f)

sudo -i /volume2/Scripts/m2_card_fix.sh
DS1821+
69057 not supported

@007revad commented on GitHub (Nov 19, 2023):

I just updated m2_card_fix.sh to allow running it on 7.2.1-69057.

Can you download it again and run this new version?

@MirHekmat commented on GitHub (Nov 19, 2023):

Okay, sure.

@MirHekmat commented on GitHub (Nov 19, 2023):

This is what I got:

> /volume2/Scripts/m2_card_fix.sh: line 23: syntax error near unexpected token `;'
> /volume2/Scripts/m2_card_fix.sh: line 23: ` [[ $modelname == "RS1221+" ]] || [[ $modelname == "RS1221rp+" ]] ||;'

@007revad commented on GitHub (Nov 19, 2023):

Oops. I deleted a model and left the last || in there.

Can you download it again.
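
For anyone who hits the same message: a dangling `||` left behind after deleting the last test in a chain is a reproducible bash syntax error. A minimal sketch of the bug class (the model names and paths here are just illustrative, not the actual script):

```shell
# Reproduce the class of bug described above: deleting the last model
# from an || chain but leaving its || behind creates a dangling operator.
cat > /tmp/broken_check.sh <<'EOF'
modelname="DS1821+"
if [[ $modelname == "RS1221+" ]] || [[ $modelname == "RS1221rp+" ]] ||; then
    echo "matched"
fi
EOF
# bash -n parses without executing; the broken script fails the parse.
bash -n /tmp/broken_check.sh 2>/dev/null || echo "broken: syntax error"

# The fix is simply to drop the trailing ||.
cat > /tmp/fixed_check.sh <<'EOF'
modelname="DS1821+"
if [[ $modelname == "RS1221+" ]] || [[ $modelname == "RS1221rp+" ]]; then
    echo "matched"
fi
EOF
bash -n /tmp/fixed_check.sh && echo "fixed: parses cleanly"
```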

@MirHekmat commented on GitHub (Nov 19, 2023):

Still the same:

/volume2/Scripts/m2_card_fix.sh: line 23: syntax error near unexpected token `;'
/volume2/Scripts/m2_card_fix.sh: line 23: ` [[ $modelname == "RS1221+" ]] || [[ $modelname == "RS1221rp+" ]] ||;'

@007revad commented on GitHub (Nov 19, 2023):

You didn't download the new version. Or you downloaded it to a different folder.

@MirHekmat commented on GitHub (Nov 19, 2023):

I am deleting the older version as it's not working. I have now refreshed the page, deleted all the old downloads of the file and re-did all the steps.
Now it asks for the password, but still shows the previous error:

DS1821+
69057 not supported

![image](https://github.com/007revad/Synology_HDD_db/assets/148930802/60b3d95e-884e-4585-9426-2e72c0837fad)

@007revad commented on GitHub (Nov 19, 2023):

I shouldn't multi-task at midnight :(

I've fixed it again. Can you download it... again.

@MirHekmat commented on GitHub (Nov 19, 2023):

All good mate, I truly appreciate all the help. I am still getting the same error. I did restart the terminal and start a fresh session just in case something was lingering there; maybe it wasn't needed but I did it anyhow.

![image](https://github.com/007revad/Synology_HDD_db/assets/148930802/48c06830-a20d-4c04-b1d2-50199784856d)

I know it's late at night for you; if you would like to take a break that's all fine. If you want to revisit this tomorrow I'm happy to wait.

@007revad commented on GitHub (Nov 19, 2023):

I've fixed it again.

@MirHekmat commented on GitHub (Nov 19, 2023):

MATTEEEEEE its fixeeeeedddddd!!!

@MirHekmat commented on GitHub (Nov 19, 2023):

I have access to everything!!!!

@007revad commented on GitHub (Nov 19, 2023):

NICE!!!

@007revad commented on GitHub (Nov 19, 2023):

@MirHekmat Thank you very much for your donation

@MirHekmat commented on GitHub (Nov 19, 2023):

No worries mate, you helped late at night; thanks for your efforts.
I had another question. I've read on many forums that people complain that if you leave those SSD caches as read/write it'll most likely cause a Volume 1 crash.

I do lots of video editing. I was leaving it on as read/write; after my backup is complete, should I change those internal SSDs to read-only as suggested by many users?

@007revad commented on GitHub (Nov 19, 2023):

Read/write caches are dangerous if you enable "Pin all Btrfs metadata to SSD cache". If the cache drive dies, gets removed or isn't mounted your volume may crash. Which sounds like what happened to you.

I had a read-only cache, and after updating DSM the NVMe cache was missing but my HDD volumes were okay. After running m2_card_fix.sh and rebooting my read-only cache was back.

If you have plenty of RAM in the DS1821+ you won't see an improvement with a read/write or read-only cache. RAM is faster than the NVMe drives. I have 32GB of RAM in my DS1821+ and don't see any difference when running a read/write cache, or read cache.

As of DSM 7.2, DSM only caches small files so I don't think there'd be any benefit when editing videos.

A read cache only helps when you frequently access the same small files, and the combined size of all the cached small files exceeds the amount of RAM available to use for caching. Databases and web servers really benefit from a cache.

@MirHekmat commented on GitHub (Nov 19, 2023):

That's great info. I also have 32GB of RAM, so I guess it'll be great.

Yes, that's exactly what happened.

I'll do the backup now, hoping it'll finish by tomorrow morning, and then make the changes to read-only.

@007revad commented on GitHub (Nov 25, 2023):

@zcpnate @RozzNL @MirHekmat

There's a new release candidate version of the script that now correctly enables M.2 cards like the E10M20-T1 for the DS1821+ and DS1621+. https://github.com/007revad/Synology_HDD_db/releases/tag/v3.2.67-RC

If you previously ran m2_card_fix.sh then you should undo some of the changes it made. Those changes will automatically get undone when you update to the next full version of DSM (7.2.2 or 7.3).

If you want to undo those changes now:

  1. Run https://github.com/007revad/Synology_DSM_reinstall and install [DSM 7.2.1 (with Update 1)](https://archive.synology.com/download/Os/DSM/7.2.1-69057-1-NanoPacked).
  2. Run [Synology_HDD_db v3.2.67-RC](https://github.com/007revad/Synology_HDD_db/releases/tag/v3.2.67-RC).
  3. Reboot.

Either way you should also run [Synology_Cleanup_Coredumps](https://github.com/007revad/Synology_Cleanup_Coredumps), because m2_card_fix.sh caused a core dump which would have left coredump files on the root of volume1.
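
For reference, the cleanup described above amounts to finding and removing leftover core files from the top level of a volume. A hedged sketch only — the `*.core*` glob and the simulated paths below are assumptions for illustration, not what Synology_Cleanup_Coredumps actually does, so check the real filenames before deleting anything:

```shell
# Simulate a volume root containing a leftover coredump next to real data.
vol=/tmp/fake_volume1
mkdir -p "$vol"
touch "$vol/myapp.core.gz" "$vol/important.txt"

# List, then delete, core files at the top level only (-maxdepth 1),
# so nothing inside shared folders is touched.
find "$vol" -maxdepth 1 -type f -name '*.core*' -print -delete

ls "$vol"    # only important.txt remains
```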

@MirHekmat commented on GitHub (Nov 26, 2023):

Hey Dave,
Thanks!
Do I really need to do all this, as everything is running fine at the moment? The 300 MB the dump files occupy is not that big a deal for me. If it's nothing major I'll leave everything as it is?

Also, DSM is suggesting a new update, please see the image below. The DSM 7.2.1 you are suggesting is an older DSM??

After the issues I experienced last time, I would wait until experienced users who also use your scripts have tried it before I update mine. This way at least there will be some fix available.

![image](https://github.com/007revad/Synology_HDD_db/assets/148930802/7711450e-bd69-4582-94b4-6394545e1e9e)

@007revad commented on GitHub (Nov 26, 2023):

@MirHekmat
You can certainly leave your DS1821+ as it is and it'll be fine. It's what I would have done if I didn't need to test the script on a clean DSM install.

The reason I said to downgrade to "DSM 7.2.1 (with Update 1)" is because to reinstall the "same" version you have to install a full version. For example:

  • 7.2.1 is a full version
  • 7.2.1 update 1 is an incremental update.
  • 7.2.1 (with Update 1) is a full version that includes update 1.
  • 7.2.1 update 2 or 3 etc are incremental updates.

The full versions are around 300 MB. The incremental are around 3 MB.

@MirHekmat commented on GitHub (Nov 26, 2023):

I see. thanks mate for your efforts!

@WanpengQian commented on GitHub (Nov 30, 2023):

As far as I'm aware, starting from DSM 7.2, Synology officially supports the use of M.2 SSDs as a storage pool on the DS1821+. You can find more information on supported models at
https://kb.synology.com/en-us/DSM/tutorial/Which_models_support_M_2_SSD_storage_pool.

If I intend to use internal M.2 drives for the pool, is it necessary to run this script? Or can any M.2 SSD be used, or does it need to be a certified SSD according to Synology's recommendations?
For a non-certified SSD, we need to run this script.

@007revad commented on GitHub (Nov 30, 2023):

@WanpengQian

If you own one of those supported models AND have Synology brand M.2 drives you don't need the syno_hdd_db script.

If you want to use other brand M.2 drives as a volume you will need the syno_hdd_db script.

@007revad commented on GitHub (Dec 2, 2023):

[Synology_HDD_db v3.2.68](https://github.com/007revad/Synology_HDD_db/releases/tag/v3.2.68) released, which now correctly, and simply, enables E10M20-T1, M2D20, M2D18 and M2D17 in models that use device tree and are using DSM 7.2 Update 2 and 3, 7.2.1, 7.2.1 Update 1, 2 and 3.

I also updated [Synology_enable_M2_card](https://github.com/007revad/Synology_enable_M2_card), which does the same but allows you to choose which M.2 card to enable, or to enable all of them.
