mirror of
https://github.com/007revad/Synology_HDD_db.git
synced 2026-04-25 13:45:59 +03:00
[GH-ISSUE #148] DS1821+ with 2x NVMEs internal and 2x NVMEs on E10M20-T1, no show in Storage Manager after script #773
Originally created by @RozzNL on GitHub (Sep 30, 2023).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/148
Hi all,
I have a DS1821+ and am running DSM 7.2-64570 Update 3.
I installed 2 Samsung NVMe SSDs in the internal M.2 slots on the Syno. After running the syno_hdd_db.sh script from [u/daveR007](https://www.reddit.com/u/daveR007/), the SSDs showed up and I could use them as cache. That ran fine for a couple of years.
Just recently I found the E10M20-T1 card and installed it with 2 more NVMe SSDs, then ran the script again:
So the script sees the 4 SSDs, but DSM does not show them in the GUI. I ran syno_create_m2_volume.sh and created 2 RAID 1 volumes, one on the onboard slots and one on the E10M20-T1 card.
But they still do not show up in the GUI, and there is no online assemble option either.
Answer from private chat with Dave:
This is caused by DSM 7.2 Update 3 adding a power_limit for NVMe drives
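A later log line in this thread ("Incorrect power limit number 4!=2") suggests DSM compares the number of comma-separated values in the device-tree power_limit property against the number of NVMe slots it knows about. A minimal sketch of that counting, assuming the property is a plain comma-separated string; power_limit_count is a hypothetical helper, not a DSM function:

```shell
# Hypothetical helper: count the comma-separated fields in a power_limit
# string such as "14.85,9.075" (the format seen later in this thread).
power_limit_count() {
    echo "$1" | awk -F',' '{print NF}'
}

# On a real NAS the current value could be read with (path from the thread):
#   cat /sys/firmware/devicetree/base/power_limit && echo
power_limit_count "14.85,9.075"       # prints 2 (matches 2 internal M.2 slots)
power_limit_count "100,100,100,100"   # prints 4 (would trip a "4!=2" style check)
```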
@007revad commented on GitHub (Sep 30, 2023):
I need to get some information from you. Can you reply with what the following commands return:
synodisk --enum -t cache
cat /sys/block/nvme0n1/device/syno_block_info
cat /sys/block/nvme1n1/device/syno_block_info
cat /sys/block/nvme2n1/device/syno_block_info
cat /sys/block/nvme3n1/device/syno_block_info
@007revad commented on GitHub (Sep 30, 2023):
And 2 more:
for d in /sys/devices/pci0000:00/0000:00:01.2/0000:* ; do echo "$d"; done
Assuming the last line of that command ended in 0000:07:00.0 then run this command:
for d in /sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:* ; do echo "$d"; done
@RozzNL commented on GitHub (Sep 30, 2023):
I did use the create m2 volume script again and created 4x single volumes, hope this does not mess up your needed information.
synodisk --enum -t cache
No info returned
cat /sys/block/nvme0n1/device/syno_block_info
pciepath=00:01.2,00.0,04.0,00.0
cat /sys/block/nvme1n1/device/syno_block_info
pciepath=00:01.2,00.0,08.0,00.0
cat /sys/block/nvme2n1/device/syno_block_info
pciepath=00:01.3,00.0
cat /sys/block/nvme3n1/device/syno_block_info
pciepath=00:01.4,00.0
for d in /sys/devices/pci0000:00/0000:00:01.2/0000:* ; do echo "$d"; done
/sys/devices/pci0000:00/0000:00:01.2/0000:00:01.2:pcie01
/sys/devices/pci0000:00/0000:00:01.2/0000:00:01.2:pcie02
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0
Yes, it indeed returned your assumed info.
for d in /sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:* ; do echo "$d"; done
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:07:00.0:pcie12
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:00.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:02.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:03.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:0c.0
@007revad commented on GitHub (Sep 30, 2023):
I have enough to create a model.dtb file for your DS1821+ to make the E10M20-T1 and its NVMe drives appear in Storage Manager.
But the result of the last command is a little confusing. Though it doesn't matter for what we're doing.
I don't know what 0000:08:00.0 and 0000:08:0c.0 are for. One of them could be for the 10G in the E10M20-T1.
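Swapping in an edited model.dtb, as is done several times later in this thread, always follows the same pattern: chmod, back up the stock file, copy the new one over /etc.defaults and /etc. A hedged wrapper for those steps, with the paths as parameters so it can be rehearsed on copies first; install_model_dtb is my name, not part of any Synology tooling:

```shell
# install_model_dtb NEW_DTB TARGET_DIR
# Backs up TARGET_DIR/model.dtb (once) and installs NEW_DTB in its place.
install_model_dtb() {
    new_dtb="$1"
    target_dir="$2"
    [ -f "$new_dtb" ] || { echo "missing $new_dtb" >&2; return 1; }
    chmod 644 "$new_dtb"
    # Keep a backup of the stock file, but never overwrite an existing backup
    [ -f "$target_dir/model.dtb.bak" ] || cp -p "$target_dir/model.dtb" "$target_dir/model.dtb.bak"
    cp -p "$new_dtb" "$target_dir/model.dtb"
}

# On the NAS this would mirror the commands used in this thread:
#   install_model_dtb ./model.dtb /etc.defaults
#   cp -pu ./model.dtb /etc/model.dtb
```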
@RozzNL commented on GitHub (Sep 30, 2023):
Great!
I don't mind testing some more for you if you need the info in the future.
@007revad commented on GitHub (Sep 30, 2023):
Can you download this zip file:
ds1821+_model_with_e10m20-t1.zip
Then
chmod 644 model.dtb
cp -p /etc.defaults/model.dtb /etc.defaults/model.dtb.bak
cp -pu model.dtb /etc.defaults/model.dtb
cp -pu model.dtb /etc/model.dtb
@007revad commented on GitHub (Sep 30, 2023):
I will take you up on that.
@RozzNL commented on GitHub (Sep 30, 2023):
Nope, nothing changed in Storage Manager
@007revad commented on GitHub (Sep 30, 2023):
That's disappointing and unexpected.
It's 9pm here and it's been a busy day. I'll get back to you tomorrow.
@RozzNL commented on GitHub (Sep 30, 2023):
No probs Dave, thanks for everything so far.
@zcpnate commented on GitHub (Sep 30, 2023):
This appears to be the same as my open issue #132. Reverting to 7.2u1 does consistently fix it but am now stuck on that dsm version.
@007revad commented on GitHub (Sep 30, 2023):
EDIT Don't worry about these commands. See my later comment here.
What do the following commands return:
grep "e10m20-t1" /run/model.dtb
grep "power_limit" /run/model.dtb
grep "100,100,100,100" /run/model.dtb
get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_nvme DS1821+
get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_sata DS1821+
@007revad commented on GitHub (Sep 30, 2023):
@zcpnate @cfsnate
Yes, this is the same problem. I was going to reply to issue #132 once @RozzNL had confirmed the fix is working.
@007revad commented on GitHub (Sep 30, 2023):
@RozzNL @zcpnate @cfsnate
I just realised why the edited model.dtb file didn't work. If you have syno_hdd_db scheduled to run at start-up or shutdown it will replace the edited model.dtb file with the one for 7.2 update 1... which is not what we want.
The syno_hdd_db.sh that I have scheduled to run at boot-up has the check_modeldtb "$c" lines commented out. For the E10M20-T1 you want to change line 1335 from
check_modeldtb "$c"
to
#check_modeldtb "$c"
After editing syno_hdd_db.sh, redo the steps in this comment.
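For anyone who would rather not count lines in an editor, the same edit can be sketched with sed; this matches the call by content rather than by line number, since line 1335 will drift between script versions. The stand-in file below is only for demonstration:

```shell
# Demonstrate the edit on a stand-in file rather than the real script.
printf '    check_modeldtb "$c"\n' > /tmp/syno_hdd_db.sh

# Back up first, then prefix the call with '#' while keeping its indentation.
cp -p /tmp/syno_hdd_db.sh /tmp/syno_hdd_db.sh.bak
sed -i 's/^\([[:space:]]*\)check_modeldtb "\$c"/\1#check_modeldtb "$c"/' /tmp/syno_hdd_db.sh

cat /tmp/syno_hdd_db.sh   # the call is now commented out
```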
@RozzNL commented on GitHub (Oct 1, 2023):
Will try that later today Dave
@RozzNL commented on GitHub (Oct 1, 2023):
Just for the sake of testing, I ran your commands before editing the script.
Then I commented the line out, reapplied the model.dtb and the applicable commands, and rebooted.
The modified script with the commented-out check runs at shutdown; after boot-up, still no drives in Storage Manager. 👎
EDIT:
I double-checked that I am using the modified model.dtb file you gave me; date and size are the same as your modified file.
EDIT2:
I do run the syno_hdd_db.sh with the -nfr option btw
@007revad commented on GitHub (Oct 1, 2023):
Try disabling the schedules for syno_hdd_db and leaving it disabled, then run this command
set_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_sata DS1821+ no
@007revad commented on GitHub (Oct 1, 2023):
Can you tell me what these commands return:
synodisk --enum -t cache
udevadm info --query path --name nvme0
udevadm info --query path --name nvme1
udevadm info --query path --name nvme2
udevadm info --query path --name nvme3
@RozzNL commented on GitHub (Oct 1, 2023):
Disabled schedule, ran command, rebooted, nothing changed in Storage Manager.
synodisk --enum -t cache
Nothing returned
udevadm info --query path --name nvme0
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0
udevadm info --query path --name nvme1
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1
udevadm info --query path --name nvme2
/devices/pci0000:00/0000:00:01.3/0000:0f:00.0/nvme/nvme2
udevadm info --query path --name nvme3
/devices/pci0000:00/0000:00:01.4/0000:10:00.0/nvme/nvme3
EDIT:
Looking at your command, and looking in the file "adapter_cards.conf", I see:
[E10M20-T1_sup_nic] and
[E10M20-T1_sup_nvme] and
[E10M20-T1_sup_sata] and
DS1821+=yes, but also lower in the list
DS1821+=no
There are multiple entries for the same model... not only for the DS1821+ but for other models as well.
@007revad commented on GitHub (Oct 1, 2023):
I don't understand why
synodisk --enum -t cache
is not returning anything. Are there any NVMe errors if you run:
sudo grep synostgd-disk /var/log/messages | tail -10
@RozzNL commented on GitHub (Oct 1, 2023):
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme1n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:90 Incorrect power limit number 4!=2
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:164 Fail to get power limit of nvme2n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme2n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:90 Incorrect power limit number 4!=2
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:164 Fail to get power limit of nvme3n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme3n1
EDIT:
But running the (modified 1335 line) script
./syno_hdd_db.sh -nfr
Synology_HDD_db v3.1.64
DS1821+ DSM 7.2-64570-3
Using options: -nfr
Running from: /volume1/homes/admin/Scripts/syno_hdd_db.sh
HDD/SSD models found: 2
ST14000NM001G-2KJ103,SN03
ST16000NM001G-2KK103,SN03
M.2 drive models found: 2
Samsung SSD 970 EVO 1TB,2B2QEXE7
Samsung SSD 970 EVO Plus 2TB,2B2QEXM7
M.2 PCIe card models found: 1
E10M20-T1
No Expansion Units found
ST14000NM001G-2KJ103 already exists in ds1821+_host_v7.db
ST16000NM001G-2KK103 already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO 1TB already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO 1TB already exists in ds1821+_e10m20-t1_v7.db
Samsung SSD 970 EVO Plus 2TB already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO Plus 2TB already exists in ds1821+_e10m20-t1_v7.db
E10M20-T1 NIC already enabled for DS1821+
E10M20-T1 NVMe already enabled for DS1821+
E10M20-T1 SATA already enabled for DS1821+
Disabled support disk compatibility.
Disabled support memory compatibility.
Max memory already set to 64 GB.
M.2 volume support already enabled.
Disabled drive db auto updates.
DSM successfully checked disk compatibility.
You may need to reboot the Synology to see the changes.
@007revad commented on GitHub (Oct 1, 2023):
Synology uses the same adapter_cards.conf on every Synology NAS model (even models without a PCIe slot). It lists which PCIe adapter cards each model supports.
Can you try deleting the line that says "DS1821+=no"
I also just noticed that every model that officially supports the E10M20-T1 is listed as yes in the [E10M20-T1_sup_sata] section. Even though Synology's information says the E10M20-T1 does not support SATA M.2 drives on any NAS model.
The Xpenology people just add the NAS model = yes under every section in adapter_cards.conf
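get_section_key_value and set_section_key_value are Synology helpers for this INI-style file. To make the section behaviour concrete (and why a model can be yes in one section and no in another), here is a minimal awk stand-in run against an illustrative excerpt; get_ini_value and the excerpt are mine, not Synology's:

```shell
# Minimal stand-in for get_section_key_value: print the value of KEY inside
# [SECTION] only, ignoring identical keys in other sections.
get_ini_value() {  # usage: get_ini_value FILE SECTION KEY
    awk -F'=' -v s="[$2]" -v k="$3" '
        $0 == s         { in_s = 1; next }   # entered the wanted section
        /^\[/           { in_s = 0 }         # any other header closes it
        in_s && $1 == k { print $2 }
    ' "$1"
}

# Illustrative excerpt (NOT a verbatim copy of Synology's file):
cat > /tmp/adapter_cards.conf <<'EOF'
[E10M20-T1_sup_nic]
DS1821+=yes
[E10M20-T1_sup_nvme]
DS1821+=yes
[E10M20-T1_sup_sata]
DS1821+=no
EOF

get_ini_value /tmp/adapter_cards.conf E10M20-T1_sup_nvme DS1821+   # prints yes
get_ini_value /tmp/adapter_cards.conf E10M20-T1_sup_sata DS1821+   # prints no
```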
@007revad commented on GitHub (Oct 1, 2023):
Okay, so it's not happy with the "100,100,100,100" power limit I added. Which is what the Xpenology people use. They add an extra 100 for each NVMe drive found.
What does the following command return?
/sys/firmware/devicetree/base/power_limit && echo
The only Synology models I own that have M.2 slots have:
I tried "14.85,9.075,14.85,9.075" but it didn't work and I was getting errors in the log:
Changing the power limit to "100,100,100,100" worked perfectly on my DS1821+ with 2 NVMe drives in a M2D18 adapter card, and none in the internal M.2 slots.
Before I changed the power limit to "100,100,100,100" the fans in my DS1821+ were going full speed and my NVMe drives and the M2D18 adapter card did not show in storage manager.
@zcpnate commented on GitHub (Oct 1, 2023):
I can assist with my working setup on 7.2u1 if there's some way to check existing working power limits?
@007revad commented on GitHub (Oct 1, 2023):
7.2u1 didn't have a power limit. Synology added the power limit in 7.2u2
@RozzNL commented on GitHub (Oct 1, 2023):
Ah... the [ ] are separate sections, gotcha.
EDIT:
All 3 sections regarding the E10M20-T1 are already set to yes for the DS1821+ and I can't find the DS1821+=no anymore... I wonder if running your script changed this?
cat /sys/firmware/devicetree/base/power_limit && echo
14.85,9.075
@007revad commented on GitHub (Oct 1, 2023):
@zcpnate
Can you check if
smartctl --info /dev/nvme0
works for NVMe drives in 7.2u1
@007revad commented on GitHub (Oct 1, 2023):
Yes, running syno_hdd_db would have set it back to yes. But I don't think it matters.
@zcpnate commented on GitHub (Oct 1, 2023):
@007revad commented on GitHub (Oct 1, 2023):
On 7.2u3 I get
Read NVMe Identify Controller failed: NVMe Status 0x4002
Someone else on 7.2.1 gets
Read NVMe Identify Controller failed: NVMe Status 0x200b
The only thing that's consistent is that smartctl --info for NVMe drives doesn't work in DSM 7.2
@zcpnate commented on GitHub (Oct 1, 2023):
I tested a few other nvme drives and got 200b for my internally mounted nvme drives acting as a volume.
@RozzNL commented on GitHub (Oct 1, 2023):
I too get the 0x200b
@007revad commented on GitHub (Oct 1, 2023):
Can you try:
synodiskport -cache
synonvme --m2-card-model-get /dev/nvme0n1; synonvme --get-location /dev/nvme0n1
synonvme --m2-card-model-get /dev/nvme1n1; synonvme --get-location /dev/nvme1n1
synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme2n1
synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1
@zcpnate commented on GitHub (Oct 1, 2023):
ash-4.4# synodiskport -cache
nvme0n1 nvme1n1 nvme2n1 nvme3n1
ash-4.4# synonvme --m2-card-model-get /dev/nvme0n1; synonvme --get-location /dev/nvme0n1
E10M20-T1
Device: /dev/nvme0n1, PCI Slot: 1, Card Slot: 2
ash-4.4# synonvme --m2-card-model-get /dev/nvme1n1; synonvme --get-location /dev/nvme1n1
E10M20-T1
Device: /dev/nvme1n1, PCI Slot: 1, Card Slot: 1
ash-4.4# synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme2n1
Not M.2 adapter card
Device: /dev/nvme2n1, PCI Slot: 0, Card Slot: 1
ash-4.4# synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme3n1
Not M.2 adapter card
Device: /dev/nvme3n1, PCI Slot: 0, Card Slot: 2
@007revad commented on GitHub (Oct 1, 2023):
I had a typo in the last command. It should return the same result, but the command should have been:
synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1
@zcpnate commented on GitHub (Oct 1, 2023):
ash-4.4# synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1
Not M.2 adapter card
Device: /dev/nvme3n1, PCI Slot: 0, Card Slot: 2
blind copy paste haha didn't catch that
@007revad commented on GitHub (Oct 2, 2023):
While searching for what causes the "nvme_model_spec_get.c:90 Incorrect power limit number 4!=2" log entry I found 7.2-U3 has 2 scripts related to nvme power. I need to check if 7.2.1 still has those scripts.
syno_nvme_power_limit_set.service runs
/usr/syno/lib/systemd/scripts/syno_nvme_set_power_limit.sh
which then runs
/usr/syno/lib/systemd/scripts/nvme_power_state.sh -d $dev_name -p $pwr_limit
which sets the power limit to $pwr_limit for NVMe drive $dev_name. It can also list the power states of the specified NVMe drive. Strangely, both my DS720+ and DS1821+ return the exact same power states even though they have different power_limits set in model.dtb.
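From that description, the service's job can be sketched as pairing each cache NVMe device with its positional value from the comma-separated power_limit and delegating to nvme_power_state.sh. This is a hedged reconstruction, not the script's actual code; set_one just echoes the delegated command so the pairing is visible:

```shell
# Stand-in for the real delegation to nvme_power_state.sh.
set_one() { echo "nvme_power_state.sh -d $1 -p $2"; }

# apply_power_limits "14.85,9.075" nvme0n1 nvme1n1 ...
apply_power_limits() {
    limits="$1"; shift
    i=1
    for dev in "$@"; do
        pwr=$(echo "$limits" | cut -d',' -f"$i")
        # Skip drives beyond the number of configured limits (cf. "4!=2")
        [ -n "$pwr" ] && set_one "$dev" "$pwr"
        i=$((i + 1))
    done
}

apply_power_limits "14.85,9.075" nvme0n1 nvme1n1
# prints:
#   nvme_power_state.sh -d nvme0n1 -p 14.85
#   nvme_power_state.sh -d nvme1n1 -p 9.075
```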
For me
/usr/syno/lib/systemd/scripts/nvme_power_state.sh --list -d nvme0
returns:
@RozzNL commented on GitHub (Oct 2, 2023):
/usr/syno/lib/systemd/scripts/nvme_power_state.sh --list -d nvme0
For me it returns:
@007revad commented on GitHub (Oct 2, 2023):
Yours looks more like I'd expect the output of a Synology command or script to look like.
Does this return an error? Or a list of nvme drives and power limits?
@RozzNL commented on GitHub (Oct 2, 2023):
Nope, it doesn't return anything...
@007revad commented on GitHub (Oct 2, 2023):
So what about these:
nvme_list=$(synodiskport -cache) && echo ${nvme_list[@]}
output=$(synonvme --get-power-limit $nvme_list) && echo ${output[@]}
synonvme --get-power-limit nvme0n1
synonvme --get-power-limit nvme1n1
synonvme --get-power-limit nvme2n1
synonvme --get-power-limit nvme3n1
@RozzNL commented on GitHub (Oct 2, 2023):
All return with nothing 👎
@007revad commented on GitHub (Oct 2, 2023):
Does
synodiskport -cache
return:
nvme0n1 nvme1n1 nvme2n1 nvme3n1
@RozzNL commented on GitHub (Oct 2, 2023):
Nope, still returns nothing... and I still have the same errors btw
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme1n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:90 Incorrect power limit number 4!=2
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:164 Fail to get power limit of nvme2n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme2n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:90 Incorrect power limit number 4!=2
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:164 Fail to get power limit of nvme3n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme3n1
@zcpnate commented on GitHub (Oct 2, 2023):
FYI these power limit scripts do not exist on 7.2u1
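That difference between releases can be probed for directly; a small hedged check using the script paths named above (on 7.2u1, or on any non-Synology machine, both should report missing):

```shell
# Report whether the DSM 7.2u2+ NVMe power-limit scripts are installed.
for f in /usr/syno/lib/systemd/scripts/syno_nvme_set_power_limit.sh \
         /usr/syno/lib/systemd/scripts/nvme_power_state.sh; do
    if [ -f "$f" ]; then
        echo "present: $f"
    else
        echo "missing: $f"
    fi
done
```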
@007revad commented on GitHub (Oct 3, 2023):
I just installed a 3rd NVMe drive in one of the internal M.2 slots to see if I got
4!=2
in the logs, but I didn't.
@RozzNL
Can you do the following:
Edit syno_hdd_db.sh and change the line
enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA"
to
#enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA"
Then delete the
DS1821+=yes
line under the
[E10M20-T1_sup_sata]
section in
/usr/syno/etc.defaults/adapter_cards.conf
Also, have you ever run the syno_enable_m2_volume script? In DSM 7.2 update 3 or an earlier DSM version? I have NOT run it since updating to DSM 7.2 update 3.
@007revad commented on GitHub (Oct 3, 2023):
If anyone wants a quick solution (instead of waiting for more trial and error testing) you can replace /usr/lib/libsynonvme.so.1 with the one from DSM 7.2-64570. I know this works in 7.2 update 2 and update 3. But I have no idea if it works in 7.2.1
build=$(get_key_value /etc.defaults/VERSION buildnumber)
nano=$(get_key_value /etc.defaults/VERSION nano)
cp -p /usr/lib/libsynonvme.so.1 /usr/lib/libsynonvme.so.1.${build}-${nano}.bak
mv -f libsynonvme.so.1 /usr/lib/libsynonvme.so.1 && chmod a+r /usr/lib/libsynonvme.so.1
@RozzNL commented on GitHub (Oct 3, 2023):
Good morning all,
Performed the comment-out, removed the DS1821+=yes line, rebooted: no change.
I have indeed run the enable_m2_volume script before, so I restored that by running the script again and rebooted, but I could not get back into the GUI and had to reboot twice more. After a successful reboot, still no change.
I checked that the comment-out and the removed line were still in place (just to be sure the m2_volume script had not interfered), and I had forgotten to run the hdd_db script after editing it, duh... so I reran everything again to check: still no change.
EDIT:
Checking some of the commands you sent previously.
synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1
Not M.2 adapter card
Can't get the location of /dev/nvme3n1
synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme2n1
Not M.2 adapter card
Can't get the location of /dev/nvme2n1
synonvme --m2-card-model-get /dev/nvme0n1; synonvme --get-location /dev/nvme0n1
E10M20-T1
Can't get the location of /dev/nvme0n1
synonvme --m2-card-model-get /dev/nvme1n1; synonvme --get-location /dev/nvme1n1
E10M20-T1
Can't get the location of /dev/nvme1n1
@007revad commented on GitHub (Oct 4, 2023):
I'm curious if the issues @RozzNL is having are the same for everyone.
@zcpnate what does
synodisk --enum -t cache
return for you?
Are you willing to try 7.2 update 3 again, but this time:
synodisk --enum -t cache
returns something.
@zcpnate commented on GitHub (Oct 4, 2023):
Can get you this info tmw. I'd be willing to upgrade to u3 for testing as I'm pretty sure I can reliably downgrade to u1 in the event of no success. Also totally willing to jump on a zoom and we can debug in real time.
@007revad commented on GitHub (Oct 7, 2023):
@zcpnate
Did you get a chance to try 7.2 update 3 with the model.dtb file from https://github.com/007revad/Synology_HDD_db/issues/148#issuecomment-1741733516
and line 1335 in syno_hdd_db.sh changed from this:
check_modeldtb "$c"
to this:
#check_modeldtb "$c"
Then reboot.
@RozzNL
There seems to be something really wrong with your DSM installation. Can you reinstall DSM 7.2 update 3 following the steps here: https://github.com/007revad/Synology_DSM_reinstall
Note: skip steps 6 and 9, because you want DSM 7.2 update 1 to update itself to update 3.
Then do the same steps I outlined above for zcpnate.
@RozzNL commented on GitHub (Oct 7, 2023):
I downgraded to a full release of DSM_DS1821+_64570.pat, rebooted and was auto upgraded to latest release of DSM 7.2-64570 U3 after reboot.
As expected, I did see the 2 internal NVMe drives but not the E10M20-T1 card (so no 2x NVMe and no 10GbE).
Ran syno_hdd_db.sh with the 2 lines commented out from your earlier request: line 1334
#enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA"
and line 1335
#check_modeldtb "$c"
After a reboot I saw both internal NVMe drives in Storage Manager, could online-assemble them, and got the 10GbE back. Still no NVMe drives on the E10M20-T1 card, but I think this is expected since I have not run your syno_create_m2_volume.sh script, right? So awaiting your further orders :-)
I did run the following commands for you:
synodisk --enum -t cache
************ Disk Info ***************
grep "e10m20-t1" /run/model.dtb
returns nothing
grep "power_limit" /run/model.dtb
Binary file /run/model.dtb matches
grep "100,100,100,100" /run/model.dtb
returns nothing
get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_nvme DS1821+
yes
get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_sata DS1821+
returns nothing, but it is commented out in hdd_db
udevadm info --query path --name nvme0
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0
udevadm info --query path --name nvme1
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1
udevadm info --query path --name nvme2
/devices/pci0000:00/0000:00:01.3/0000:0f:00.0/nvme/nvme2
udevadm info --query path --name nvme3
/devices/pci0000:00/0000:00:01.4/0000:10:00.0/nvme/nvme3
sudo grep synostgd-disk /var/log/messages | tail -10
2023-10-07T11:36:18+02:00 DS1821 synostgd-disk[15521]: nvme_slot_info_get.c:53 Failed to find slot info
2023-10-07T11:36:18+02:00 DS1821 synostgd-disk[15521]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme1n1
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_slot_info_get.c:53 Failed to find slot info
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme0n1
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_slot_info_get.c:53 Failed to find slot info
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme1n1
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_slot_info_get.c:53 Failed to find slot info
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme0n1
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_slot_info_get.c:53 Failed to find slot info
2023-10-07T11:37:18+02:00 DS1821 synostgd-disk[15521]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme1n1
/sys/firmware/devicetree/base/power_limit && echo
-ash: /sys/firmware/devicetree/base/power_limit: Permission denied
smartctl --info /dev/nvme0
smartctl 6.5 (build date Sep 26 2022) [x86_64-linux-4.4.302+] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
Read NVMe Identify Controller failed: NVMe Status 0x200b
synodiskport -cache
nvme2n1 nvme3n1
synonvme --m2-card-model-get /dev/nvme0n1; synonvme --get-location /dev/nvme0n1
E10M20-T1
Can't get the location of /dev/nvme0n1
synonvme --m2-card-model-get /dev/nvme1n1; synonvme --get-location /dev/nvme1n1
E10M20-T1
Can't get the location of /dev/nvme1n1
synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme2n1
Not M.2 adapter card
Device: /dev/nvme2n1, PCI Slot: 0, Card Slot: 1
synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1
Not M.2 adapter card
Device: /dev/nvme3n1, PCI Slot: 0, Card Slot: 2
nvme_list=$(synodiskport -cache) && echo ${nvme_list[@]}
nvme2n1 nvme3n1
output=$(synonvme --get-power-limit $nvme_list) && echo ${output[@]}
nvme2n1:9.075 nvme3n1:9.075
synonvme --get-power-limit nvme0n1
returns nothing
synonvme --get-power-limit nvme1n1
returns nothing
synonvme --get-power-limit nvme2n1
nvme2n1:14.85
synonvme --get-power-limit nvme3n1
nvme3n1:14.85
EDIT:
After uncommenting line 1334 in syno_hdd_db.sh back to
enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA"
get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_sata DS1821+
yes
udevadm info --query path --name nvme0
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0
udevadm info --query path --name nvme1
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1
Maybe try your modified modeldtb as a next step?
@007revad commented on GitHub (Oct 9, 2023):
Sorry, that command should have been:
cat /sys/firmware/devicetree/base/power_limit && echo
This appears to suggest I have the ports back to front. What does this command return:
syno_slot_mapping
@RozzNL commented on GitHub (Oct 9, 2023):
14.85,9.075
System Disk
Internal Disk
01: /dev/sata1
02: /dev/sata2
03: /dev/sata3
04: /dev/sata4
05: /dev/sata5
06: /dev/sata6
07: /dev/sata7
08: /dev/sata8
Esata port count: 2
Esata port 1
01:
Esata port 2
01:
USB Device
01:
02:
03:
04:
Internal SSD Cache:
01: /dev/nvme2n1
02: /dev/nvme3n1
PCIe Slot 1: E10M20-T1
@007revad commented on GitHub (Oct 9, 2023):
Try these:
grep "e10m20-t1" /etc.defaults/model.dtb
grep "power_limit" /etc.defaults/model.dtb
grep "100,100,100,100" /etc.defaults/model.dtb
If all 3 of the above commands return "Binary file /etc.defaults/model.dtb matches" then run these commands:
chmod 644 /etc.defaults/model.dtb
cp -pu /etc.defaults/model.dtb /etc/model.dtb
cp -pu /etc.defaults/model.dtb /run/model.dtb
@RozzNL commented on GitHub (Oct 9, 2023):
Returns nothing
Binary file /etc.defaults/model.dtb matches
Returns nothing
Did not run below commands.
@007revad commented on GitHub (Oct 9, 2023):
Download this zip file:
ds1821+_model_with_e10m20-t1.zip
Then
chmod 644 model.dtb
cp -p /etc.defaults/model.dtb /etc.defaults/model.dtb.bak
cp -pu model.dtb /etc.defaults/model.dtb
cp -pu model.dtb /etc/model.dtb
cp -pu model.dtb /run/model.dtb
@RozzNL commented on GitHub (Oct 9, 2023):
Will do the above when I get back from work.
Edit:
Unzipped and copied model.dtb 3x as per the instructions.
I did notice that /run/model.dtb gets rewritten at boot-up, am I correct? I saw this from the timestamp on the file: it had changed, while the other 2 still had the same timestamp.
Again no NVMe drives in Storage Manager (all are gone).
But:
grep "e10m20-t1" /etc.defaults/model.dtb
Binary file /etc.defaults/model.dtb matches
grep "power_limit" /etc.defaults/model.dtb
Binary file /etc.defaults/model.dtb matches
grep "100,100,100,100" /etc.defaults/model.dtb
Binary file /etc.defaults/model.dtb matches
syno_slot_mapping
System Disk
Internal Disk
01: /dev/sata1
02: /dev/sata2
03: /dev/sata3
04: /dev/sata4
05: /dev/sata5
06: /dev/sata6
07: /dev/sata7
08: /dev/sata8
Esata port count: 2
Esata port 1
01:
Esata port 2
01:
USB Device
01:
02:
03:
04:
Internal SSD Cache:
01:
02:
PCIe Slot 1: E10M20-T1
01:
02:
synodiskport -cache
Returns blank
@007revad commented on GitHub (Oct 9, 2023):
Yes, DSM does overwrite /run/model.dtb during boot.
Can you restore the backed up model.dtb file, until I create a new model.dtb for you to try.
cp -p /etc.defaults/model.dtb.bak /etc.defaults/model.dtb
cp -pu /etc.defaults/model.dtb.bak /etc/model.dtb
@007revad commented on GitHub (Oct 10, 2023):
I've been thinking this would be a lot easier if I had an E10M20-T1, not just for DSM 7.2 update 3 and 7.2.1 but for when new versions of the Storage Manager package are released. Where I live an E10M20-T1 costs a third of the price of a DS1821+!?!?
So I have a question for those who have E10M20-T1. Do the included M.2 heatsinks come with sticky single use thermal pads? It looks like it would be hard to temporarily remove an M.2 drive for testing.
@RozzNL commented on GitHub (Oct 11, 2023):
Dave, I have swapped my internal NVMe drives multiple times with the ones on the E10M20-T1; no problems at all removing the heatsink.
And because I like to tinker and tweak as much as possible, I also installed 2 coolers on the internal NVMe drives (Gelid Solutions Icecap M.2 SSD Cooler).
@007revad commented on GitHub (Oct 11, 2023):
Thanks. I just bought an E10M20-T1 online and paid for express shipping. The online store's distribution center is only a few suburbs away from me, so hopefully it will arrive quickly (they don't allow pick-up).
@007revad commented on GitHub (Oct 11, 2023):
I just unpacked DSM 7.2 update 3 for all 113 Synology models that can use DSM 7.2.
pcie_postfix = "00.0,08.0,00.0"
and
pcie_postfix = "00.0,04.0,00.0"
(for both E10M20-T1 and M2D20). This confirms that the pcie_postfix values that I used in the model.dtb file were correct.
I also noticed that those 14 models that support E10M20-T1 do not have SATA M.2 support enabled in model.dtb for the E10M20-T1, even though they all have
E10M20-T1_sup_sata
enabled in adapter_cards.conf. This confirms that adding entries for SATA M.2 support in model.dtb won't make any difference.
I've compiled 2 new model.dtb files for you to try:
power_limit = "100,100"; in ds1821+_100x2.zip
power_limit = "14.85,9.075"; in ds1821+_14.85.zip
Unzip it to a directory on the DS1821+ then
chmod 644 model.dtb
cp -pu model.dtb /etc.defaults/model.dtb
cp -pu model.dtb /etc/model.dtb
@RozzNL commented on GitHub (Oct 11, 2023):
Still only the internal NVMe drives in Storage Manager, before and after, with both files.
Both files show same info below:
synodiskport -cache
nvme2n1 nvme3n1
grep "e10m20-t1" /etc.defaults/model.dtb
Binary file /etc.defaults/model.dtb matches
grep "power_limit" /etc.defaults/model.dtb
Binary file /etc.defaults/model.dtb matches
grep "100,100,100,100" /etc.defaults/model.dtb
Returns nothing
syno_slot_mapping
Internal SSD Cache:
01: /dev/nvme2n1
02: /dev/nvme3n1
PCIe Slot 1: E10M20-T1
01:
02:
@007revad commented on GitHub (Oct 11, 2023):
Depending on which model.dtb file you were using you'd need to run either:
grep "100,100" /etc.defaults/model.dtbor
grep "14.85,9.075" /etc.defaults/model.dtb
What do these 3 commands return:
ls -l /etc.defaults/model.dtb
ls -l /etc/model.dtb
ls -l /run/model.dtb
@RozzNL commented on GitHub (Oct 12, 2023):
Gotcha.
ls -l /etc.defaults/model.dtb
-rw-r--r-- 1 Rozz users 3848 Oct 11 12:09 /etc.defaults/model.dtb
ls -l /etc/model.dtb
-rw-r--r-- 1 Rozz users 3848 Oct 11 12:09 /etc/model.dtb
ls -l /run/model.dtb
-rw-r--r-- 1 root root 3848 Oct 11 17:13 /run/model.dtb
@007revad commented on GitHub (Oct 12, 2023):
Can you download this zip file:
ds1821+_model_with_e10m20-t1.zip
Unzip it to a directory on the DS1821+ then
chmod 644 model.dtb
sudo chown root:root model.dtb
cp -pu model.dtb /etc.defaults/model.dtb
cp -pu model.dtb /etc/model.dtb
@RozzNL commented on GitHub (Oct 12, 2023):
Will do when i get back home from work.
EDIT:
Internal NVME`s gone again in Storage Manager after reboot.
@007revad commented on GitHub (Oct 13, 2023):
My E10M20-T1 arrived 30 minutes ago and I now have both 10GbE and NVMe drives working in DSM 7.2-64570 Update 3 :o)
The solution was simple once I realized LAN 5 was missing as well as the NVMe drives.
sudo set_section_key_value /usr/syno/etc/adapter_cards.conf E10M20-T1_sup_nvme DS1821+ yes
sudo set_section_key_value /usr/syno/etc/adapter_cards.conf E10M20-T1_sup_nic DS1821+ yes
If I'd thought of checking that /usr/syno/etc/adapter_cards.conf matched /usr/syno/etc.defaults/adapter_cards.conf, and that they both contained DS1821+ in the correct places, I could have saved $300
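For anyone wanting to verify those settings rather than just set them, here is a small sketch that checks whether a section of an adapter_cards.conf-style file contains `<model>=yes`. The `[section]`/`key=value` layout is an assumption inferred from the set_section_key_value commands above; check your actual file format first.

```shell
#!/bin/sh
# Sketch: check that a section of an adapter_cards.conf-style INI file
# contains "<model>=yes". The [section]/key=value layout is an
# assumption based on the set_section_key_value commands above.
card_enabled() {
    conf="$1"; section="$2"; model="$3"
    awk -v s="[$section]" -v m="$model=yes" '
        $0 == s         { in_s = 1; next }
        /^\[/           { in_s = 0 }
        in_s && $0 == m { found = 1 }
        END { exit !found }
    ' "$conf"
}

# On the NAS this would be, e.g.:
# card_enabled /usr/syno/etc/adapter_cards.conf E10M20-T1_sup_nvme DS1821+
```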
@007revad commented on GitHub (Oct 13, 2023):
FYI Do NOT update to 7.2.1 yet.
I just updated to 7.2.1 and:
@RozzNL commented on GitHub (Oct 13, 2023):
Which model.dtb file did you use? (Original? 100? 14.85?)
Because I still do not see the drives in Storage Manager after applying your adapter_cards.conf changes in /etc and /etc.defaults.
My LAN 5 never went away after the first time I used your syno_hdd_db script.
So I'm glad you got it working on your side!!! Now my turn ;)
Still, we need to keep this working after an update, so not out of the woods yet.
EDIT:
I restored the model.dtb.bak from a couple of steps back and I now do see the internal disks, but still no E10M20-T1. I do, however, still have LAN 5.
So I'm guessing I still have a different setup than yours?!?
@zcpnate commented on GitHub (Oct 13, 2023):
I haven't attempted this yet but I did just shoot over a paypal donation to help with the cost of the card. Thanks for all your hard work!
@RozzNL commented on GitHub (Oct 13, 2023):
I also just did a donation...totally forgot about it...
Dave, thank you very much for the work you have already done.
@007revad commented on GitHub (Oct 13, 2023):
I used a model.dtb with "100,100,100,100" like the one in the first zip file ds1821+_model_with_e10m20-t1.zip
But the one I actually used, which also contains E10M20-T1, M2D20 and M2D18 entries, is in this file: model.zip
chmod 644 model.dtb
sudo chown root:root model.dtb
cp -pu model.dtb /etc.defaults/model.dtb
cp -pu model.dtb /etc/model.dtb
@RozzNL commented on GitHub (Oct 14, 2023):
OK Dave, I really don't know where it is going wrong for me:
* copied model.dtb (the 100,100,100,100 version from the above comment) into /etc and /etc.defaults, both owned by root
* checked adapter_cards.conf in /usr/syno/etc and /usr/syno/etc.defaults; both have E10M20-T1_sup_nvme, E10M20-T1_sup_nic and even E10M20-T1_sup_sata all set to yes for the DS1821+, and both are owned by root
So where is it going wrong?
EDIT:
For me this removes my internal nvme drives.
@007revad commented on GitHub (Oct 14, 2023):
Do the fans run at full speed?
I'm not sure why it's different for you. I'll downgrade DSM to 7.2 update 3 and try it again and document the exact steps I do.
I did notice today that the values in /run/adapter_cards.conf did not match those in /usr/syno/etc.defaults/adapter_cards.conf
What does the following command return:
cat /run/adapter_cards.conf
I've spent the last few hours creating a test version of syno_hdd_db to do all the required steps, so we'll all be doing the exact same steps. But I'm currently stuck at trying to insert the power_limit into the model.dtb file.
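For reference, compiled device tree blobs can usually be edited by round-tripping through the device tree compiler (dtc). This is only a sketch of the general technique, not necessarily what the script ended up doing; dtc is typically run on a Linux PC rather than on DSM itself, and the power_limit values are the ones discussed above.

```shell
#!/bin/sh
# Sketch: patch the power_limit property in a decompiled device tree
# source file. Round-tripping the binary needs the device tree
# compiler (dtc), typically on a Linux PC rather than on DSM:
#   dtc -I dtb -O dts -o model.dts model.dtb    # decompile
#   ...patch model.dts...
#   dtc -I dts -O dtb -o model.dtb model.dts    # recompile
patch_power_limit() {
    dts="$1"; newval="$2"
    sed -i "s/power_limit = \"[^\"]*\"/power_limit = \"$newval\"/" "$dts"
}
```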
@RozzNL commented on GitHub (Oct 14, 2023):
No, the fans run normally; I use cool mode btw.
M2D20_sup_nvme=no
E10M20-T1_sup_sata=yes
E10M20-T1_sup_nic=yes
M2D17_sup_sata=no
E10M20-T1_sup_nvme=yes
M2D18_sup_sata=no
M2D17_sup_nic=no
M2D18_sup_nic=no
M2D20_sup_sata=no
M2D17_sup_nvme=no
M2D18_sup_nvme=no
FX2422N_sup_nic=no
FX2422N_sup_nvme=no
FX2422N_sup_sata=no
M2D20_sup_nic=no
@007revad commented on GitHub (Oct 16, 2023):
I haven't forgotten you guys.
I've done a lot of testing, while documenting every change, and been running around in circles. At one point I replaced the E10M20-T1 with the M2D18 and spent half a day trying to get it working again, then I noticed the M2D18 was not fully plugged into the PCIe slot!?!?
I also downgraded DSM from 7.2.1 update 1 to 7.2 update 3, which caused its own issues, so I was not sure if the issues were caused by parts of DSM being broken (Synology account, File Station, Schedules, packages etc). I solved that by downgrading to DSM 7.2 update 1.
My plan is to get both the M2D18 and E10M20-T1 working in DSM 7.2 update 1. I want to get both cards working as Synology intended (for a cache) without running any of my scripts.
Because I got tired of copying and pasting dozens of commands every time I made a change and rebooted I've written a script that runs all the commands and outputs the results in a readable format.
https://github.com/007revad/Synology_HDD_db/blob/test/m2_card_check.sh
FYI this is from immediately after reinstalling DSM and not running any scripts or editing anything:
@RozzNL commented on GitHub (Oct 16, 2023):
Sounds like a plan Dave, just do your thing.
Personally not in any hurry.
My idea was to use the internals as cache and the PCIe card as storage.
I am away from home a few days but I should be able to test some settings remotely if needed.
@007revad commented on GitHub (Oct 18, 2023):
Synology made changes that added a power limit for NVMe drives in DSM 7.2 Update 2. There were no NVMe related changes in Update 3, so what works in Update 2 also works in Update 3.
Getting the M2D18 and E10M20-T1 working in DSM 7.2 Update 1 was easy.
But getting them working in DSM 7.2 Update 2 was a lot harder. I'm actually wondering if my DS1821+ was running Update 1 when I previously got the E10M20-T1 working. I wish I'd taken a screenshot of the DSM version together with storage manager.
The good news is I have my M2D18 and E10M20-T1 both working in DSM 7.2 Update 3. Note: I have not run any of my scripts yet because I didn't want to introduce any extra variables to the testing.
I also have not tested DSM 7.2.1 yet because rolling back to 7.2 update 1 was difficult.
M2D18 working in DSM 7.2 Update 3

E10M20-T1 working in DSM 7.2 Update 3

Can you do the following to test it:
Note: When I first ran m2_card_fix.sh and rebooted I found /run/adapter_cards.conf was missing. I created the missing file by hand but when I rebooted DSM replaced it... so rebooting a 2nd time should restore /run/adapter_cards.conf if it is missing.
I only noticed /run/adapter_cards.conf was missing when I ran m2_card_check.sh and saw:
ls: cannot access '/run/adapter_cards.conf': No such file or directory
If you have any issues please run m2_card_check.sh and reply with the output.
@007revad commented on GitHub (Oct 18, 2023):
FYI I noticed that my NVMe drives sometimes changed their number.
The nvme drive in the internal slot 1 was nvme0 when the M.2 card was not being detected.
Then when the M.2 card was detected, the nvme drive in the internal slot 1 had changed to nvme1. And the nvme drive in slot 1 of the M.2 card was now nvme0.
When I had 3 NVMe drives installed the drive in the internal slot was nvme2. After removing one of the drives from the M.2 card the drive in the internal slot became nvme1.
So if you have 4 of the same model NVMe drives and run syno_m2_volume.sh to create a volume on the drives in the M.2 card it will be difficult to tell which drives are installed where. I will update syno_m2_volume.sh to show if the drive is in an M.2 card.
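One heuristic for telling them apart: a drive behind an M.2 adapter card passes through the card's own PCIe switch, so its sysfs/udevadm path has more PCI hops than a drive in an internal slot. A sketch based on the example paths posted later in this thread (the hop threshold is an assumption from those paths, not a general rule):

```shell
#!/bin/sh
# Sketch heuristic: an NVMe drive behind an M.2 adapter card passes
# through the card's PCIe switch, so its udevadm/sysfs path has more
# PCI address segments than a drive in an internal slot.
in_pcie_card() {
    hops=$(printf '%s\n' "$1" |
        grep -o '[0-9a-f]\{4\}:[0-9a-f]\{2\}:[0-9a-f]\{2\}\.[0-9a-f]' | wc -l)
    [ "$hops" -gt 3 ]
}

# Feed it the path from, e.g.:
#   udevadm info --query=path --name=/dev/nvme0n1
```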
In the meantime you can see where each nvme drive is located with:
syno_slot_mapping | grep -A 7 'SSD'
@RozzNL commented on GitHub (Oct 18, 2023):
WHOOOHOOO....

Now we're getting somewhere, Dave!
This was after running your fix script and only 1x reboot.
Since I am not at home right now, I am not going to create storage pools just yet, but a question: do I need your other scripts to create a storage pool on the PCIe card? I want to run the internal NVMEs as cache (not yet decided if I want write/read or only read), and the NVMEs on the PCIe card will be a RAID 1 storage pool with 1x volume.
EDIT:
For your info,
./m2_card_check.sh
DSM 7.2-64570 Update 3
2023-10-18 21:17:56
Checking support_m2_pool setting
/etc.defaults/synoinfo.conf: yes
/etc/synoinfo.conf: yes
Checking supportnvme setting
/etc.defaults/synoinfo.conf: yes
/etc/synoinfo.conf: yes
Checking permissions and owner on model.dtb files
-rw-r--r-- 1 root root 4460 Oct 14 13:17 /etc.defaults/model.dtb
-rw-r--r-- 1 root root 4460 Oct 14 13:17 /etc/model.dtb
-rw-r--r-- 1 root root 4460 Oct 18 21:00 /run/model.dtb
Checking power_limit="100,100,100,100" is in model.dtb files
All OK
Checking E10M20-T1 is in model.dtb files
All OK
Checking M2D20 is in model.dtb files
All OK
Checking M2D18 is in model.dtb files
All OK
Checking permissions and owner on adapter_cards.conf files
-rw-r--r-- 1 root root 3170 Oct 13 11:58 /usr/syno/etc.defaults/adapter_cards.conf
-rw-r--r-- 1 root root 3170 Oct 14 12:58 /usr/syno/etc/adapter_cards.conf
-rw-r--r-- 1 root root 286 Oct 18 21:00 /run/adapter_cards.conf
Checking /usr/syno/etc.defaults/adapter_cards.conf
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes
Checking /usr/syno/etc/adapter_cards.conf
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes
Checking /run/adapter_cards.conf
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes
Checking synodisk --enum -t cache
************ Disk Info ***************
Checking syno_slot_mapping
System Disk
Internal Disk
01: /dev/sata1
02: /dev/sata2
03: /dev/sata3
04: /dev/sata4
05: /dev/sata5
06: /dev/sata6
07: /dev/sata7
08: /dev/sata8
Esata port count: 2
Esata port 1
01:
Esata port 2
01:
USB Device
01:
02:
03:
04:
Internal SSD Cache:
01: /dev/nvme2n1
02: /dev/nvme3n1
PCIe Slot 1: E10M20-T1
01: /dev/nvme1n1
02: /dev/nvme0n1
Checking udevadm nvme paths
nvme0: /devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0
nvme1: /devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1
nvme2: /devices/pci0000:00/0000:00:01.3/0000:0f:00.0/nvme/nvme2
nvme3: /devices/pci0000:00/0000:00:01.4/0000:10:00.0/nvme/nvme3
Checking devicetree Power_limit
14.85,9.075
Checking if nvme drives in PCIe card with synonvme
nvme0: Not M.2 adapter card
nvme1: Not M.2 adapter card
nvme2: Not M.2 adapter card
nvme3: Not M.2 adapter card
Checking if nvme drives in PCIe card with synodisk
nvme0: E10M20-T1
nvme1: E10M20-T1
nvme2: Not M.2 adapter card
nvme3: Not M.2 adapter card
Checking PCIe slot path(s)
[pci]
pci1="0000:00:01.2"
Checking nvme drives in /run/synostorage/disks
nvme0n1
nvme1n1
nvme2n1
nvme3n1
Checking nvme block devices in /sys/block
nvme0n1
nvme1n1
nvme2n1
nvme3n1
Checking synostgd-disk log
Current date/time: 2023-10-18 21:17:57
Last boot date/time: 2023-10-18 21:17:00
No synostgd-disk logs since last boot
@007revad commented on GitHub (Oct 19, 2023):
Excellent.
Interesting that you didn't need a 2nd reboot (as /run/adapter_cards.conf still existed).
For NVMe drives in a PCIe card you need Synology_M2_volume to create the storage pool and then do an online assemble in Storage Manager. This is because Storage Manager won't let you create a storage pool on NVMe drives in a PCIe card. I should see if I can get around that...
Synology really need to learn how to spell Temperature.
Your NVMe drives are a lot warmer than my little 500GB NVMe drives. My internal NVMe is 28 C and the one in the E10M20-T1 is 33 C (without the heatsink installed). Though I do have 2 empty bays next to the internal M.2 slots and I currently have the cover off the NAS.
I notice that nvme1 is in the E10M20-T1 M.2 slot-1 and nvme0 is in M.2 slot-2. I should have tested with 2 nvme drives in the pcie card as I'd expect nvme0 to be in the E10M20-T1 M.2 slot-1, like this:
01: /dev/nvme0n1
02: /dev/nvme1n1
I wonder if Synology screwed that up because all the NAS models that have E10M20-T1 in model.dtb have 08.0 for slot-1 and 04.0 for slot-2. I can switch them around but I'm not sure if I should.
I've never seen synonvme correctly report that an nvme drive was in a pcie card. But it did alert me to the fact I had the wrong permissions set on /usr/syno/bin/synonvme
@007revad commented on GitHub (Oct 19, 2023):
@RozzNL I just noticed your screenshot shows 3 available pools. You should be able to click on ... and select "Online Assemble".
@RozzNL commented on GitHub (Oct 19, 2023):
I deleted all 3 available pools because that was me testing and changing everything before i reached out to you.
Already created a read cache on the internals, will use the m2_volume script for the pcie.
As for the temps, my cover is also off but the Syno is placed in a relatively warm spot, which does not help; still, temps are well within operating range so I'm not worried.
Do you already know if this will survive a dsm update?
@007revad commented on GitHub (Oct 19, 2023):
I assume you mean the M2 volume? After a DSM update you'll need to run m2_card_fix then maybe do an online assemble.
Once I update syno_hdd_db you won't need m2_card_fix.
@MirHekmat commented on GitHub (Oct 25, 2023):
Hi @007revad, I have a DS1821+ as well and I am having the same issue. For me it's just showing the E10M20-T1 in the Info Center; it doesn't show LAN 5 at all, nor the drives.
I am good with computer hardware installation etc. but bad with coding. I see you have managed to help @RozzNL and fix the issue. Could you kindly summarise the correct and necessary steps to get this up and running? I have contacted Synology and they are saying to return the card as it's not in the Synology compatibility list. I would hate to return it, as the two extra cache/storage drives would be extremely helpful for my 4K video editing.


@007revad commented on GitHub (Oct 25, 2023):
@MirHekmat
Which DSM version is your DS1821+ using?
I assume you've already run syno_hdd_db.sh since installing the E10M20-T1.
@MirHekmat commented on GitHub (Oct 25, 2023):
Hey Dave,
The DSM is: DSM 7.2-64570 Update 1
I actually haven't run syno_hdd_db.sh
I read this post from top to bottom. I saw there were a few things that were done, and some worked and some didn't, as the other person mentions. So I just wanted to start from where it actually mattered (maybe it all matters, I am not sure).
So would you like me to start from here, steps 1 to 4, and it should work?
1. Go to m2_card_fix.sh
2. Download m2_card_fix.sh (see image below).
3. Run m2_card_fix.sh with sudo -i
4. Reboot.
Or first run syno_hdd_db.sh (where is this located? kindly advise) then steps 1 to 4.
Sorry, I am a noob. Thank you so much for all your help!
@007revad commented on GitHub (Oct 25, 2023):
For a DS1821+ with DSM 7.2-64570 Update 1 you only need Synology_HDD_db and the E10M20-T1 will work.
If you update to DSM 7.2-64570 Update 2 or Update 3 you'd also need the following steps.
I will integrate m2_card_fix.sh into Synology_HDD_db soon so it will do it all.
I've also got to test DSM 7.2.1-69057 Update 1
@MirHekmat commented on GitHub (Oct 25, 2023):
Thank you for clear instructions,
Also, I had one delivery for the whole lot: I received 2x 16TB IronWolf at the same time as the E10M20-T1, so I installed everything.
The NAS is in the process of adding the 2x 16TBs to my SHR raid (current progress 41.98%).
Do you suggest I wait for the NAS to finish this rebuild to 100%? ETA is 2 days remaining. Or is it safe to run the code and reboot? Do you reckon it'll start back from where it left off, or might I lose some progress?

@007revad commented on GitHub (Oct 25, 2023):
I would wait until it's finished.
@MirHekmat commented on GitHub (Oct 28, 2023):
Hey mate, it worked great, thank you for your hard work. Chipped in a bit through PayPal.
Do I need to now maintain this through the task scheduler, as you have mentioned?
Also, is this the correct guide for creating M.2 storage volumes? https://github.com/007revad/Synology_M2_volume
@MirHekmat commented on GitHub (Oct 28, 2023):
There is also this one: https://github.com/007revad/Synology_enable_M2_volume
Not sure of the difference; which one would work best? I have 4x Samsung M.2s, 2 installed internally and 2 installed in the E10M20-T1. I would like the 2x in the E10M20-T1 to be used as storage if possible.
@007revad commented on GitHub (Oct 28, 2023):
Thanks.
Anytime you update DSM you'll need to run syno_hdd_db again. So it's easier to schedule it to run at boot-up.
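Scheduling is done through DSM's Task Scheduler GUI (a triggered task on the Boot-up event, run as root). As a sketch, the user-defined script for such a task might be a small wrapper like the one below; the path and the -n option are examples only, adjust to wherever you keep the script and the options you actually use.

```shell
#!/bin/sh
# Sketch of a wrapper for a DSM Task Scheduler "Triggered Task"
# (event: Boot-up, user: root). The path and options are examples;
# point it at wherever you keep syno_hdd_db.sh.
run_at_boot() {
    script="$1"; shift
    [ -x "$script" ] || { echo "not executable: $script" >&2; return 1; }
    "$script" "$@"
}

# e.g. run_at_boot /volume1/scripts/syno_hdd_db.sh -n
```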
Synology_enable_M2_volume isn't needed on a DS1821+ if you've run syno_hdd_db.
You will need Synology_M2_volume if you want to use the NVMe drives in the E10M20-T1 as a volume. This is because DSM won't allow creating a volume on NVMe drives in an M.2 adaptor card (not even for their own Synology branded NVMe drives).
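For reference, the core of what Synology_M2_volume automates is roughly a manual mdadm RAID 1 across the two card drives (it also handles partitioning and DSM specifics, omitted here). A dry-run sketch that only prints the command; the device and array names are examples only, and as noted earlier in the thread, verify which nvme is which first since the numbering can change.

```shell
#!/bin/sh
# Dry-run sketch: print (not run) an mdadm command for a RAID 1 array
# across two NVMe partitions, roughly the core of what Synology_M2_volume
# automates. Device names are examples only.
plan_raid1() {
    d1="$1"; d2="$2"; md="$3"
    printf 'mdadm --create %s --level=1 --raid-devices=2 %s %s\n' "$md" "$d1" "$d2"
}

plan_raid1 /dev/nvme0n1p3 /dev/nvme1n1p3 /dev/md3
```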
@MirHekmat commented on GitHub (Oct 28, 2023):
Thanks all worked out now!
@MirHekmat commented on GitHub (Nov 1, 2023):
@007revad Hey mate,
I think you are in Australia too. I have a 2nd DS1821+ coming. Do you have any cheaper/good third-party 10G card alternative? I bought the TP-Link TX401 but I couldn't get that to work. Dummy me didn't know back then that Synology is very hardware restricted.
@007revad commented on GitHub (Nov 1, 2023):
I figured you were also in Australia.
Apparently Synology's E10G18-T1 uses the Aquantia AQN107 controller but with custom firmware... so other Aquantia AQN107 controller based 10G cards (like the Asus-XG-C100C) don't work. With DSM 6 you could download the Linux driver source and compile it on your Synology.
https://servicemax.com.au/tips/synology-10gigabit-ethernet-on-the-cheap/
https://www.reddit.com/r/synology/comments/k4a5px/how_i_got_a_generic_cheap_aqc107_card_working_on/
There's comments on the Xpenology forum saying that doesn't work in DSM 7 (but that may just be because they didn't use the latest driver source).
The Xpenology people do have drivers for the DS1621+ (same CPU as the DS1821+) up to DSM 7.1.1 but nothing for DSM 7.2 (unless the DSM 7.1.1 driver still works). It seems like a lot of work to keep the driver up to date with each DSM update.
I actually have an Asus-XG-C100C in my PC, so I could install my E10G18-T1 in my PC and try the Asus-XG-C100C in my DS1821+ to test it. Which seems like a lot of work to save $100 AU.
10G cards that do work by just plugging them in are usually 2nd hand 10G SFP cards, or some 10GbE cards, but they only support 10G and 1G (no 2.5G or 5G).
See:
https://www.reddit.com/r/synology/comments/ssjoi6/thirdparty_10g_nic_compatibility_for_ds_1821_only/
https://www.reddit.com/r/synology/comments/kcd3d6/cost_effective_3rd_party_10gbe_nic_for_synology/
@MirHekmat commented on GitHub (Nov 1, 2023):
Sounds like a lot of work. I did read some of those Reddit posts as well and it seems like a lot of mucking around, and since I am not invested in SFP at all, I completely agree it's not worth it to save $100. I think I'll just buy a Synology DS1821+ compatible one from Amazon. Also good to know the DS1621+ and DS1821+ use the same CPU.
Thank you OZ Fellow!
@bitcinnamon commented on GitHub (Nov 6, 2023):
Hi Dave @007revad
I really appreciate your kind support and have already read from top to end.
I tried all of these scripts but unfortunately none of them work on my rig.
In the initial state, DSM can setup my M.2 Drives as cache, without any warnings or errors.
After this I did a full reset of the rig (7.2.1u1); the M.2 drives appeared back in the list.
Tried another full reset (7.2.1u1); can see the M.2 drives with "unsupported" in the list.
When I click on 'Reset Drive', it turns green OK and is able to make a cache.
So I downgraded to DSM 7.2 U1-64570 and prevented updating to 7.2u3.
Factory reset (7.2u1) and used [Synology_M2_volume] to create pools and rebooted; they also disappeared.
Another factory reset, used mdadm to create raids over SSH manually; after reboot they disappear againnnn.
I posted my RS1221+'s synonvme and libsynonvme.so.1, hope it helps.
rs1221.zip
@007revad commented on GitHub (Nov 6, 2023):
@bitcinnamon
Is this a real RS1221+ or Xpenology?
I see a few issues:
When the drives disappear after using Synology_M2_volume or creating them with mdadm and rebooting, are you sure there isn't an "Online Assemble" option in Storage Manager?
Did you run Synology_HDD_db with the -n option? Is it scheduled to run at start-up?
A few people have reported that they needed to run "Synology_HDD_db and reboot" 2 or 3 times to stop their NVMe drives vanishing. One person even scheduled Synology_HDD_db to run at shutdown and boot-up.
Can you run m2_card_check.sh and reply with its output?
@007revad commented on GitHub (Nov 7, 2023):
@bitcinnamon
I hashed synonvme and libsynonvme.so.1 for all NAS models that have them.
For the 70 Synology NAS models that have synonvme and libsynonvme.so.1 (i.e. the models that support M.2 drives) I've found:
I've updated m2_card_fix.sh and it now supports DS1821+ DS1621+ DS1520+ RS822+ RS822rp+ RS1221+ RS1221rp+
The output should look like this:
@bitcinnamon commented on GitHub (Nov 7, 2023):
Thank you for your kind reply.
Yes it is, all brand new just got from Amazon.co.jp.
No, I never saw pools being created under the Storage Pools menu, nor "Online Assemble".
Each time I need to go into HDD/SSD and scroll down to find the M.2 drives vanished.
My bad. Never tried rebooting twice or more, nor scheduled it to run at start-up.
Just run -> drives vanished -> run another script -> still vanished -> factory reset.
I will try to add it to task schedule again.
Certainly, I will update here when I got this output.
Also I will try to change my M.2 drives to something like Samsung, Toshiba ones instead of Unknown.
Thank you very much, Dave.
@007revad commented on GitHub (Nov 7, 2023):
@bitcinnamon
Did you see the last half of this comment where I've updated m2_card_fix.sh and it now supports DS1821+ DS1621+ DS1520+ RS822+ RS822rp+ RS1221+ RS1221rp+
@bitcinnamon commented on GitHub (Nov 7, 2023):
Yes I see them and gonna try it again! Thank you so much.
And here is my m2_card_check.sh output:
@bitcinnamon commented on GitHub (Nov 7, 2023):
Issue solved.
That's absolutely correct! I just made it with these scripts.
Thank you so much Dave for your kind support!
Here is what I did today,
I remembered this and changed my two CFD Gaming (Japanese local brand) SSDs to a Seagate FireCuda and a Sabrent Rocket (another tiny Japanese brand?)
I realized that an SSD recognized as 'Unknown' brand doesn't work,
so I changed it to a Plextor 1TB SSD.
Conclusion: NEVER PUT UN-FAMOUS-BRAND SSDs IN AN Xpenology.
@007revad commented on GitHub (Nov 7, 2023):
@bitcinnamon
Nice that it also works in DSM 7.2.1.
That should be 65536 MB. I hope it's just the script showing "GB" as the unit in the output.
What does the following command return?
get_key_value /etc.defaults/synoinfo.conf mem_max_mb
I'm wondering if I can get the CFD and Sabrent brand NVMe drives working.
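get_key_value is a Synology helper; on a plain shell the same lookup can be sketched with awk (assuming simple key="value" lines as in synoinfo.conf, with no '=' inside the value):

```shell
#!/bin/sh
# Sketch: read a key from a synoinfo.conf-style file (key="value"
# lines), similar to Synology's get_key_value helper. Assumes the
# value itself contains no '=' characters.
get_key() {
    awk -F'=' -v k="$2" '$1 == k { gsub(/^"|"$/, "", $2); print $2; exit }' "$1"
}

# e.g. get_key /etc.defaults/synoinfo.conf mem_max_mb
```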
@bitcinnamon commented on GitHub (Nov 8, 2023):
Unknown CSSD-M2B2TPG3VNF
Unknown Sabrent
As for screenshots and the outputs of those commands, I will upload them 3-4 days later.
@RozzNL commented on GitHub (Nov 16, 2023):
EDIT:
Running syno_hdd_db.sh v3.2.66-RC corrects all....
All NVME`s back online and all storage pools back! My DS1821+ running DSM 7.2.1-69057 Update 2
Hi Dave,
Updating my DS1821+ to DSM 7.2.1-69057 Update 1 and Update 2 breaks the internal NVMe slots.
Running syno_hdd_db.sh, m2_card_fix did not resolve the issue.
The NVME`s on the E10M20-T1 do work.
Running m2_card_fix.sh gives output:
DS1821+
69057 not supported
Running m2_card_check gives output:
DS1821+
DSM 7.2.1-69057 Update 2
2023-11-16 14:18:24
Checking support_m2_pool setting
/etc.defaults/synoinfo.conf: yes
/etc/synoinfo.conf: yes
Checking supportnvme setting
/etc.defaults/synoinfo.conf: yes
/etc/synoinfo.conf: yes
Checking md5 hash of libsynonvme.so.1
libsynonvme.so.1 is 7.2-64570 version
Checking md5 hash of synonvme
synonvme is 7.2-64570 version
Checking permissions and owner of libsynonvme.so.1
Which should be -rw-r--r-- 1 root root
-rw-r--r-- 1 root root 54154 Nov 16 13:53 /usr/lib/libsynonvme.so.1
Checking permissions and owner of synonvme
Which should be -rwxr-xr-x 1 root root
-rwxr-xr-x 1 root root 17241 Nov 16 13:53 /usr/syno/bin/synonvme
Checking permissions and owner of model.dtb files
Which should be -rw-r--r-- 1 root root
-rw-r--r-- 1 root root 3583 Sep 23 17:11 /etc.defaults/model.dtb
-rw-r--r-- 1 root root 4460 Oct 14 13:17 /etc/model.dtb
-rw-r--r-- 1 root root 3583 Nov 16 14:11 /run/model.dtb
Checking if default power_limit="14.85,9.075" is in model.dtb files
Missing in /etc/model.dtb
Checking power_limit="14.85,14.85,14.85" is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /etc/model.dtb
Missing in /run/model.dtb
Checking power_limit="14.85,14.85,14.85,14.85" is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /etc/model.dtb
Missing in /run/model.dtb
Checking power_limit="100,100,100" is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /run/model.dtb
Checking power_limit="100,100,100,100" is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /run/model.dtb
Checking E10M20-T1 is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /run/model.dtb
Checking M2D20 is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /run/model.dtb
Checking M2D18 is in model.dtb files
Missing in /etc.defaults/model.dtb
Missing in /run/model.dtb
Checking permissions and owner of adapter_cards.conf files
Which should be -rw-r--r-- 1 root root
-rw-r--r-- 1 root root 3412 Nov 16 13:43 /usr/syno/etc.defaults/adapter_cards.conf
-rw-r--r-- 1 root root 3170 Oct 14 12:58 /usr/syno/etc/adapter_cards.conf
-rw-r--r-- 1 root root 286 Nov 16 14:11 /run/adapter_cards.conf
Checking /usr/syno/etc.defaults/adapter_cards.conf
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes
Checking /usr/syno/etc/adapter_cards.conf
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes
Checking /run/adapter_cards.conf
M2D20_sup_nvme NOT set to yes
M2D18_sup_nvme NOT set to yes
M2D18_sup_sata NOT set to yes
Checking synodisk --enum -t cache
************ Disk Info ***************
Checking syno_slot_mapping
System Disk
Internal Disk
01: /dev/sata1
02: /dev/sata2
03: /dev/sata3
04: /dev/sata4
05: /dev/sata5
06: /dev/sata6
07: /dev/sata7
08: /dev/sata8
Esata port count: 2
Esata port 1
01:
Esata port 2
01:
USB Device
01:
02:
03:
04:
Internal SSD Cache:
01: /dev/nvme2n1
02: /dev/nvme3n1
PCIe Slot 1: E10M20-T1
Checking udevadm nvme paths
nvme0: /devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0
nvme1: /devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1
nvme2: /devices/pci0000:00/0000:00:01.3/0000:0f:00.0/nvme/nvme2
nvme3: /devices/pci0000:00/0000:00:01.4/0000:10:00.0/nvme/nvme3
Checking devicetree Power_limit
14.85,9.075
Checking if nvme drives in PCIe card are detected with synonvme
nvme0: Not M.2 adapter card
nvme1: Not M.2 adapter card
nvme2: Not M.2 adapter card
nvme3: Not M.2 adapter card
Checking if nvme drives in PCIe card are detected with synodisk
nvme0: E10M20-T1
nvme1: E10M20-T1
nvme2: Not M.2 adapter card
nvme3: Not M.2 adapter card
Checking PCIe slot path(s)
[pci]
pci1="0000:00:01.2"
Checking nvme drives in /run/synostorage/disks
nvme2n1
nvme3n1
Checking nvme block devices in /sys/block
nvme0n1
nvme1n1
nvme2n1
nvme3n1
Checking synoscgi log
Current date/time: 2023-11-16 14:18:25
Last boot date/time: 2023-11-16 14:11:14
2023-11-16T14:13:38+01:00 DS1821 synoscgi_SYNO.Storage.CGI.Storage_1_load_info[7271]: nvme_dev_is_u2_slot.c:31 Failed to get slot informtion of /dev/nvme1n1
2023-11-16T14:13:45+01:00 DS1821 synoscgi_SYNO.Storage.CGI.Storage_1_load_info[8076]: nvme_slot_info_get.c:53 Failed to find slot info
2023-11-16T14:13:45+01:00 DS1821 synoscgi_SYNO.Storage.CGI.Storage_1_load_info[8076]: nvme_dev_is_u2_slot.c:31 Failed to get slot informtion of /dev/nvme0n1
2023-11-16T14:13:45+01:00 DS1821 synoscgi_SYNO.Storage.CGI.Storage_1_load_info[8076]: nvme_slot_info_get.c:53 Failed to find slot info
2023-11-16T14:13:45+01:00 DS1821 synoscgi_SYNO.Storage.CGI.Storage_1_load_info[8076]: nvme_dev_is_u2_slot.c:31 Failed to get slot informtion of /dev/nvme1n1
@MirHekmat commented on GitHub (Nov 17, 2023):
@007revad
Hey mate,
I am under a lot of stress at the moment, hoping nothing is lost yet. I have so many photos. I had migrated everything to the new NAS over a few months now and it was working fine.
So I followed RozzNL's last comment:
EDIT:
Running syno_hdd_db.sh v3.2.66-RC corrects all....
All NVME`s back online and all storage pools back! My DS1821+ running DSM 7.2.1-69057 Update 2)
As I updated my DSM to DSM 7.2.1-69057 Update 1.
After updating and then running the new script v3.2.66 and restarting the system, I am hearing continuous beeping and the following error message. Really hoping you can suggest what to do from here. I have restarted a few times. I did get "Drive crashed" on drive 2. I clicked repair as the option came up. The 2nd drive and all the drives are showing healthy.
This is a copy-paste, as I still have that screen up, of what happened when I ran the script:
Synology_HDD_db v3.2.66
DS1821+ DSM 7.2.1-69057-1
Using options:
Running from: /volume1/homes/Mir/Scripts2/syno_hdd_db.sh
HDD/SSD models found: 6
ST12000NE0008-2PK103,EN02
ST12000VN0008-2YS101,SC60
ST16000NE000-2RW103,SB30
ST16000NE000-2RW103,SN02
ST16000NE000-2RW103,SN03
ST2000DM001-1CH164,CC26
M.2 drive models found: 3
Samsung SSD 960 EVO 1TB,3B7QCXE7
Samsung SSD 970 EVO Plus 1TB,2B2QEXM7
Samsung SSD 970 EVO Plus 500GB,2B2QEXM7
M.2 PCIe card models found: 1
E10M20-T1
No Expansion Units found
ST12000NE0008-2PK103 already exists in ds1821+_host_v7.db
ST12000NE0008-2PK103 already exists in ds1821+_host_v7.db.new
ST12000VN0008-2YS101 already exists in ds1821+_host_v7.db
ST12000VN0008-2YS101 already exists in ds1821+_host_v7.db.new
Updated ST16000NE000-2RW103 to ds1821+_host_v7.db
Updated ST16000NE000-2RW103 to ds1821+_host_v7.db.new
Updated ST16000NE000-2RW103 to ds1821+_host_v7.db
Updated ST16000NE000-2RW103 to ds1821+_host_v7.db.new
Updated ST16000NE000-2RW103 to ds1821+_host_v7.db
Updated ST16000NE000-2RW103 to ds1821+_host_v7.db.new
ST2000DM001-1CH164 already exists in ds1821+_host_v7.db
ST2000DM001-1CH164 already exists in ds1821+_host_v7.db.new
Samsung SSD 960 EVO 1TB already exists in ds1821+_host_v7.db
Samsung SSD 960 EVO 1TB already exists in ds1821+_host_v7.db.new
Samsung SSD 960 EVO 1TB already exists in ds1821+_e10m20-t1_v7.db
Samsung SSD 970 EVO Plus 1TB already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO Plus 1TB already exists in ds1821+_host_v7.db.new
Samsung SSD 970 EVO Plus 1TB already exists in ds1821+_e10m20-t1_v7.db
Samsung SSD 970 EVO Plus 500GB already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO Plus 500GB already exists in ds1821+_host_v7.db.new
Samsung SSD 970 EVO Plus 500GB already exists in ds1821+_e10m20-t1_v7.db
E10M20-T1 NIC already enabled for DS1821+
E10M20-T1 NVMe already enabled for DS1821+
E10M20-T1 already exists in model.dtb
Support disk compatibility already enabled.
Support memory compatibility already enabled.
NVMe support already enabled.
M.2 volume support already enabled.
Drive db auto updates already enabled.
DSM successfully checked disk compatibility.
You may need to reboot the Synology to see the changes.
Please tell me if I can relax, as I have about 30 TB of data. I was just preparing to transfer a full backup of all this data to the 2nd DS1821+ (the new 1821+ is simply plugged in with the built-in Ethernet ports LAN1+LAN2+LAN3+LAN4; I was trying to get faster data transfer through SMB3, all on 1 Gb links). I was hoping I could get 4 Gb on the new backup NAS, and obviously the other one already had 10 Gb enabled through the E10M20-T1.
Nothing out of the ordinary installed on the new DS1821+ (Empty back).
OLD DS1821+ shows 50.9 TB as you can see.
please help !
@RozzNL commented on GitHub (Nov 17, 2023):
@MirHekmat
I also used m2_card_fix.sh first, before I ran the syno_hdd_db.sh RC.
Maybe you can try that?
@007revad commented on GitHub (Nov 17, 2023):
@RozzNL
Thanks for providing the solution for MirHekmat
@MirHekmat
When you were getting the continuous beeps were the fans also running at full speed?
The DS1821+ and DS1621+ are like problem children. They need extra steps when using an unsupported M.2 adapter card, or they throw a very scary tantrum. I am still working on integrating m2_card_fix into syno_hdd_db, or finding a better solution.
@MirHekmat commented on GitHub (Nov 18, 2023):
@RozzNL @007revad Fans were running fine. I checked the alert in DSM and it was on orange alert with a missing SSD error. All sorts of things were going on. The beeping sound makes you panic; I wish it were a more subtle buzz.
One more thing: after the DSM update and running the script through SSH, DSM sort of reset itself. When I logged in I couldn't see the desktop icons etc. that I had placed as shortcuts (possibly because of Volume 1 missing a drive).
However, my user logins still work as I had defined them. No idea why this is happening or what exactly is happening. I just regret that I thought I should get the 10G E10M20-T1 up and running before doing the entire backup through Hyper Backup, just to speed things up. The new NAS was on the new DSM 69507 and wasn't showing the drive to save everything to, hence the reason for updating to match the firmware, hoping that would fix it.
So last night, after it crashed as reported by Synology, I did lots of googling and checked the Synology forums for solutions. One of the options was: after you get "your volume has degraded", in order to fix it you need to repair if you see the option available. So I hit the 3-dot menu in the Storage section and hit Repair. The first drive, a 12TB IronWolf that was part of the original SHR, seems to have disconnected somehow, and I had a spare 16TB (set as hot spare); these two were showing as available drives. It told me I had two drives available to use for the repair. I assumed I shouldn't touch the 12TB, as it was originally part of the SHR when everything was working and may still hold the parity information in case this repair method doesn't work (it was saying all data on the replacement drive would be erased). So I chose the hot-spare 16TB IronWolf, the unused HDD, as the replacement drive.
Since last night the repair progress has reached about 66.68%:

I am waiting for this to finish and I'm hoping it will bring back the data on Volume 1. I think it's sorting out the parity for the missing drive on this SHR, though I'm not 100% sure what exactly is happening, or what caused the chaos last night.
Once I get my data back, I'll just transfer it all to the new NAS without worrying about 10G speed, M.2, the extra hacks, etc.
Only then am I thinking of running the new scripts on the primary NAS, since at that point it's affordable data-wise in case something worse happens.
To you experienced people: what are the chances I have lost my data, or do you reckon I should be able to get it back?
It seems like I have another 8-10 hours before I'll know the answer. The suspense is absolutely killing me and I feel very anxious; it's limbo. I can't make up my mind whether I should give up on my data or whether it will probably come back. Don't know what's going to happen :)
One more question for you experienced people: do I still need to sort out the cache error it's giving, or once the repair finishes should it just bring the data back? I don't care about the cache at the moment, I just hope I have a chance of getting my data back :)
Please check this:

@RozzNL commented on GitHub (Nov 18, 2023):
You have a 1-drive fault-tolerant setup/RAID… so you can lose 1 hard disk entirely and still get all your data back.
Doesn't matter which hard drive fails! (So yes, you could even have used the "original" hard disk that failed, as long as it's not a hardware failure, of course.)
Your cache on the other hand… you have a read/write cache… data-protection wise this is a no-no… if your data on the write cache fails, you CAN/COULD lose data on the volume. But I am not sure what happens when the read/write cache comes back online… I haven't had that fault before.
I use the 3-2-1 backup method… 3 copies of the data, of which 2 are at different locations/NASes at home and 1 is online.
My recommendation to all: do not use a read/write cache on a volume you really need backed up… only use a read cache.
Using a Raid 1 NVME storage pool / volume is much faster and easier!
@MirHekmat commented on GitHub (Nov 18, 2023):
Edit 2: Sorry, so in this screenshot just taken now, in Shared Folders in Control Panel I still see the folders as I had structured them. The grey ones are the ones on Volume 1; the yellow ones are on a different storage pool that is still accessible. Volume 3 is the two Samsung M.2s that were on the E10M20-T1.
Could this mean that once the repair process is done I'll have access to those files? Please, if anyone could shed some light!? I am very new to Synology, RAID, SHR, etc.
Edit 1: So you can see from this screenshot, which I posted here three weeks ago, that it was showing 11.7 TB used.

Also, I bought the 2nd DS1821+ for backing up, so these were my steps for 3-2-1 backup, but I guess I made a mistake along the journey.
OH cra...p. So am I screwed here because of the cache? I really thought the cache was just proxy storage for faster reads and writes: instead of accessing Volume 1 all the time, the cache would save a copy of some data. I really was treating it as separate data storage. Are you saying it's part of the same chain as the Volume 1 SHR?
Also one more thing: as you can see in the screenshot from before all the chaos, it was 29.1 TB used of 50 TB; now it's saying 50.9 TB allocated. Shouldn't it still say 29.1 TB used but volume degraded, etc.? Why isn't it showing used space?
@007revad commented on GitHub (Nov 18, 2023):
The 50.9 TB used is normal when the NAS is repairing that storage pool. Once it's finished (in 8.5 days!) it should again show 29.1 TB.
With a read/write cache data is written to the write cache and then later saved to the HDDs. If something bad happens before the data is written to the HDDs that data is lost.
@MirHekmat commented on GitHub (Nov 18, 2023):
Mate, you are giving me hope, thank you! I changed the sync speed to custom so it's going fast at the moment, currently sitting at 78% on step 1 since running the repair at around 10ish PM Perth time; the time now is 3:19 PM. I guess your calculation might still include time for step 2? Hoping I don't have to die every day for 8 days :D to find out. Although I don't mind at all if it takes that long, as long as it gives me my data back.
This is a screenshot of the current performance of the drives during the repair process. Drive 6 is a separate volume in itself called Volume 2 (still fully accessible):

Is there anything else I can do to verify my data is still on those drives, or would you suggest I relax and take a chill pill?
@007revad commented on GitHub (Nov 18, 2023):
The 8.5 days was from your screenshot that showed "Adding drives... 41.9% (Time left: 8 days 8 hours". But I see it's now up to 78% already. In my experience step 2 is faster.
Is this SHR-2 or RAID 6?
Basically there's nothing you can do until it's finished. Just relax and let it do what it needs to do. And be glad we don't live in a country with daily rolling blackouts. If you don't see any warning messages popping up you can assume it's all good.
@MirHekmat commented on GitHub (Nov 18, 2023):
This is SHR 1
"In my experience step is faster." You mean to say step 2 is faster?
Totally agree <3 really hoping so.
If I do see any issue, or my data doesn't appear after the repair process, are there 3rd-party services, or Synology itself, that could recover the data?
@RozzNL commented on GitHub (Nov 18, 2023):
Degraded means repairable and no data loss….take another chill pill ;)
https://kb.synology.com/en-ca/DSM/help/DSM/StorageManager/storage_pool_repair?version=7
@MirHekmat commented on GitHub (Nov 18, 2023):
RozzNL hey mate!!! Thank you!!!! I'll definitely update you on how it goes after the steps are completed.
Also, I guess I'll ignore the cache error for the moment, hoping the SSD cache will not associate itself onto Volume 1.
@007revad commented on GitHub (Nov 18, 2023):
Yep, Step 2.
@RozzNL commented on GitHub (Nov 18, 2023):
In your screenshot "time left: 8 days" you can see the SSD cache has a green dot.
So it looks like the SSD cache is also being repaired as part of the repair of Storage Pool 1.
In my opinion it all looks OK… but that's just going from your screenshot and Synology showing a green dot.
@MirHekmat commented on GitHub (Nov 18, 2023):
I now see where you guys read the 8 days. That was a screenshot from three weeks ago, when I was asking for help to run the script (I had just started inquiring with Dave on how to run this script, getting some guidance); he suggested waiting for that to finish (three weeks ago, before all this new trouble happened). I just reused that screenshot as it was already posted here on GitHub, as a way to show how it displayed data available versus how there is no data now.
Discard that 8-days screenshot, as it has nothing to do with now :) I am still hopeful!

This is the current screenshot:

Storage Pool 2, which you see as missing right at the bottom, is the E10M20-T1 with 2x Samsung M.2 SSDs. I don't really care about that either, as they were just some project duplicates for quick access in DaVinci Resolve.
@MirHekmat commented on GitHub (Nov 19, 2023):
Okay, that single HDD seems to have been fixed, but I guess the internal 2x SSD cache is still showing as missing. Storage Pool 2 was the E10M20-T1.
@007revad So do you reckon running m2_card_fix.sh would bring the internal M.2s back up and running? No idea why they would stop working; any ideas?
@007revad commented on GitHub (Nov 19, 2023):
Yes. After running m2_card_fix.sh and rebooting, your internal M.2 drives and the M.2 drives on the E10M20-T1 should be back up and running.
Because updating DSM would have restored the 2 files that m2_card_fix.sh replaces.
@MirHekmat commented on GitHub (Nov 19, 2023):
Thanks Dave. I have created a ticket with Synology as well, as I wasn't sure. Apparently Synology might be able to give you read-only access to your crashed volume, which is all that matters to me at this time.
So my question is: do you suggest I run m2_card_fix.sh, given that I don't have any backup of the data? It wouldn't break anything on a permanent basis, would it? Trying to be really careful here.
Can you also please tell me where I can find m2_card_fix.sh, and give a bit of a guide on how to run it?
@007revad commented on GitHub (Nov 19, 2023):
Running m2_card_fix.sh won't affect your HDD volumes or make anything worse. It just replaces 2 nvme related DSM files with 7.2 versions.
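As a general precaution (not something the script itself requires), it can be worth keeping timestamped copies of system files before any script overwrites them. A minimal sketch, where the two files are stand-ins created with `mktemp` so the example runs anywhere; they are NOT the actual DSM files m2_card_fix.sh touches:

```shell
#!/bin/sh
# Hedged sketch: copy files aside before a script replaces them.
# The files below are placeholders, not m2_card_fix.sh's real targets.
set -eu

# On a real NAS the backup location might be somewhere under /volumeX;
# mktemp -d is used here so the sketch is runnable anywhere.
backup_dir="$(mktemp -d)"

# Stand-ins for the two files the fix script would overwrite.
f1="$(mktemp)"; echo "original nvme lib" > "$f1"
f2="$(mktemp)"; echo "original nvme tool" > "$f2"

for f in "$f1" "$f2"; do
    cp -p "$f" "$backup_dir/"   # -p preserves mode and timestamps
done

echo "backed up $(ls "$backup_dir" | wc -l) files to $backup_dir"
```

Restoring is then just copying the saved files back over the replaced ones and rebooting.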
@MirHekmat commented on GitHub (Nov 19, 2023):
Can't see any images?
@007revad commented on GitHub (Nov 19, 2023):
I can see your images. I also see the images in the emails GitHub sends me.
@MirHekmat commented on GitHub (Nov 19, 2023):
I really have a good feeling this will resolve the issue; however, I am having trouble running the code. What I meant by "can't see any images" was: in step 2 you mentioned (see image below), but I couldn't see any image attached. I think I have managed to download it using the 3-dot menu in the corner.
So when trying to run the script, this is what it's saying:

@007revad commented on GitHub (Nov 19, 2023):
It should be sudo with a lower case s, not Sudo.
And there needs to be a space between -i and /volume:
sudo -i /volume2/Scripts/m2_card_fix.sh
@MirHekmat commented on GitHub (Nov 19, 2023):
Oh I see, I didn't think it was case sensitive
It says this:
sudo -i /volume2/Scripts/m2_card_fix.sh
DS1821+
69057 not supported
@007revad commented on GitHub (Nov 19, 2023):
I just updated m2_card_fix.sh to allow running it on 7.2.1-69057.
Can you download it again and run the new version?
@MirHekmat commented on GitHub (Nov 19, 2023):
okey sure
@MirHekmat commented on GitHub (Nov 19, 2023):
This is what I got
@007revad commented on GitHub (Nov 19, 2023):
Oops. I deleted a model and left the last || in there.
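For anyone curious what a leftover `||` can do, here is a minimal illustration of a whitelist-style build check (illustrative only, not the actual m2_card_fix.sh code; the build numbers are just the ones mentioned in this thread). Deleting one entry from an `||`-chain but leaving the trailing `||` behind either breaks the syntax or changes the logic:

```shell
#!/bin/sh
# Illustrative sketch of a DSM build whitelist check. Removing one build
# from an ||-chained condition while leaving a dangling || is the kind
# of slip described above; a case statement avoids chained operators.
supported_build() {
    case "$1" in
        64570|69057) return 0 ;;  # builds mentioned in this thread
        *) return 1 ;;
    esac
}

for b in 64570 69057 12345; do
    if supported_build "$b"; then
        echo "$b supported"
    else
        echo "$b not supported"
    fi
done
```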
Can you download it again.
@MirHekmat commented on GitHub (Nov 19, 2023):
Still the same:
@007revad commented on GitHub (Nov 19, 2023):
You didn't download the new version. Or you downloaded it to a different folder.
@MirHekmat commented on GitHub (Nov 19, 2023):
I have been deleting the older version since it's not working. I have now refreshed the page, deleted all the old downloads of the file, and redid all the steps.
Now it asks for the password, but then shows the previous error:
@007revad commented on GitHub (Nov 19, 2023):
I shouldn't multi-task at midnight :(
I've fixed it again. Can you download it... again.
@MirHekmat commented on GitHub (Nov 19, 2023):
All good mate, I truly appreciate all the help. I am still getting the same error. I did restart the terminal just in case there was something stale there, so I started a fresh session; maybe it wasn't needed, but I did it anyhow.
I know it's late at night for you; if you would like to take a break, that's fine. If you want to revisit this tomorrow, I'm happy to wait.
@007revad commented on GitHub (Nov 19, 2023):
I've fixed it again.
@MirHekmat commented on GitHub (Nov 19, 2023):
MATTEEEEEE its fixeeeeedddddd!!!
@MirHekmat commented on GitHub (Nov 19, 2023):
I have access to everything!!!!
@007revad commented on GitHub (Nov 19, 2023):
NICE!!!
@007revad commented on GitHub (Nov 19, 2023):
@MirHekmat Thank you very much for your donation
@MirHekmat commented on GitHub (Nov 19, 2023):
No worries mate, you helped late at night; thanks for your efforts.
I had another question. I've read on many forums that people complain that if you leave those SSD caches as read/write, it'll most likely cause a Volume 1 crash.
I do lots of video editing and was leaving it on as read/write. After my backup is complete, should I change those internal SSDs to read-only, as suggested by many users?
@007revad commented on GitHub (Nov 19, 2023):
Read/write caches are dangerous if you enable "Pin all Btrfs metadata to SSD cache". If the cache drive dies, gets removed or isn't mounted your volume may crash. Which sounds like what happened to you.
I had a read-only cache, and after updating DSM the NVMe cache was missing but my HDD volumes were okay. After running m2_card_fix.sh and rebooting, my read-only cache was back.
If you have plenty of RAM in the DS1821+ you won't see an improvement with a read/write or read-only cache. RAM is faster than the NVMe drives. I have 32GB of RAM in my DS1821+ and don't see any difference when running a read/write cache, or read cache.
As of DSM 7.2, DSM only caches small files so I don't think there'd be any benefit when editing videos.
A read cache only helps when you frequently access the same small files, and the combined size of all the cached small files exceeds the amount of RAM available to use for caching. Databases and web servers really benefit from a cache.
@MirHekmat commented on GitHub (Nov 19, 2023):
That's great info. I also have 32GB of RAM, so I guess it'll be fine.
Yes, that's exactly what happened.
I'll do the backup now, hoping it'll finish by tomorrow morning, and then make the change to read-only.
@007revad commented on GitHub (Nov 25, 2023):
@zcpnate @RozzNL @MirHekmat
There's a new release candidate version of the script that now correctly enables M.2 cards like the E10M20-T1 for the DS1821+ and DS1621+. https://github.com/007revad/Synology_HDD_db/releases/tag/v3.2.67-RC
If you previously ran m2_card_fix.sh then you should undo some of the changes it made. Those changes will automatically get undone when you update to the next full version of DSM (7.2.2 or 7.3).
If you want to undo those changes now:
Either way, you should also run Synology_Cleanup_Coredumps, because m2_card_fix.sh caused a core dump which would have left coredump files on the root of volume1.
@MirHekmat commented on GitHub (Nov 26, 2023):
Hey Dave,
Thanks!
Do I really need to do all this, as everything is running fine at the moment? The ~300 MB the dump files occupy is not that big a deal for me. If it's nothing major, I'll leave everything as it is.
Also, DSM is suggesting a new update, please see the image below. I see I'm on DSM 7.2.1; the one you are suggesting is an older DSM?
Having experienced the issues I had last time, I would wait until experienced users who also use your scripts have trialled the update before I install it. That way at least there will be some fix available.
@007revad commented on GitHub (Nov 26, 2023):
@MirHekmat
You can certainly leave your DS1821+ as it is and it'll be fine. It's what I would have done if I didn't need to test the script on a clean DSM install.
The reason I said to downgrade to "DSM 7.2.1 (with Update 1)" is because to reinstall the "same" version you have to install a full version. For example:
The full versions are around 300 MB. The incremental updates are around 3 MB.
@MirHekmat commented on GitHub (Nov 26, 2023):
I see. thanks mate for your efforts!
@WanpengQian commented on GitHub (Nov 30, 2023):
As far as I'm aware, starting from DSM 7.2, Synology officially supports the use of M.2 SSDs as a storage pool on the DS1821+. You can find more information on supported models at
https://kb.synology.com/en-us/DSM/tutorial/Which_models_support_M_2_SSD_storage_pool.
If I intend to use internal M.2 drives for the pool, is it necessary to run this script? Or can any M.2 SSD be used, or does it need to be a certified SSD per Synology's recommendations?
For non-certified SSDs, I assume we need to run this script.
@007revad commented on GitHub (Nov 30, 2023):
@WanpengQian
If you own one of those supported models AND have Synology brand M.2 drives you don't need the syno_hdd_db script.
If you want to use other brand M.2 drives as a volume you will need the syno_hdd_db script.
@007revad commented on GitHub (Dec 2, 2023):
Synology_HDD_db v3.2.68 released, which now correctly, and simply, enables the E10M20-T1, M2D20, M2D18 and M2D17 in models that use a device tree and are running DSM 7.2 Update 2 and 3, 7.2.1, and 7.2.1 Update 1, 2 and 3.
I also updated Synology_enable_M2_card, which does the same but allows you to choose which M.2 card to enable, or to enable all of them.