[GH-ISSUE #87] Unrecognized firmware DS1823xs+? #537

Closed
opened 2026-03-11 11:46:21 +03:00 by kerem · 37 comments
Owner

Originally created by @tmnext on GitHub (Jun 11, 2023).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/87

I am probably doing something wrong, since it seems to work for everyone else: I still get the 'unrecognized firmware' on all my drives. Sometimes that message is gone and everything seems fine, but I can't reproduce that reliably.

Here is the output of my script when run manually, the error message seems new:

![Screenshot 2023-06-11 at 12 45 35 PM](https://github.com/007revad/Synology_HDD_db/assets/84424468/b97c0eea-e890-46e1-a8ba-f1ee6e29928e)

Here is the config in task scheduler, with options -nr:

![Screenshot 2023-06-11 at 12 49 30 PM](https://github.com/007revad/Synology_HDD_db/assets/84424468/b677f397-593b-41db-9de7-246630917f11)

Side-note: I also added an old M2D20 to my DS1823xs+, with one M.2 drive. Script 1.0.6 indeed finds it and displays the correct M.2 disk. Not sure if you can solve this, but it would be nice if the NAS could access it. So far Storage Manager only displays drives 1-8 and the 2 built-in M.2 slots, but not the M2D20 drive:

![Screenshot 2023-06-11 at 12 49 13 PM](https://github.com/007revad/Synology_HDD_db/assets/84424468/97c3178c-c27d-45f4-9748-5b8b5ce168f2)

Issue:
![Screenshot 2023-06-11 at 1 06 01 PM](https://github.com/007revad/Synology_HDD_db/assets/84424468/0ddc829a-5002-4e2b-8277-ffb11659d987)

kerem closed this issue 2026-03-11 11:46:26 +03:00

@007revad commented on GitHub (Jun 11, 2023):

I'm surprised more people aren't complaining about these issues. You're only the 3rd person who's mentioned them since DSM 7.2 was officially released. Because of the reports from the other 2 people, I've been working on fixing these issues.

Things I've already fixed (but I haven't released the updated version yet):

  1. I've changed the script to avoid the error that is on line 4 of your first screenshot... but I need to investigate why some people with M2 drives don't have /run/synostorage/disks/nvme0n1/m2_pool_support
  2. The firmware version being only 4 characters for HDD/SSD SATA (and SAS?) drives - which is the cause of the "Unsupported firmware version" message in Storage Manager. I've changed how the script gets the HDD/SSD firmware version.
  3. Installed memory being returned as GB instead of MB. I've changed the script to work with GB or MB.

Do you want to try [Synology_HDD_db v2.3.49-RC](https://github.com/007revad/Synology_HDD_db/releases/tag/v2.3.49-RC)?

Does your "Samsung SSD 960 PRO 2TB" show in storage manager without the "Unsupported firmware version" message? Or is it the NVMe drive that is in the M2D20 that does not show at all in storage manager?

FYI: if you previously ran either script with an older version of DSM (DSM 6 or 7.1.1 etc.), do NOT run syno_hdd_db.sh with the --restore option. It will restore the old DSM version's backup of synoinfo.conf and you'll lose access to Storage Manager and a few tabs in Control Panel. I'm working on a fix for this issue.


@tmnext commented on GitHub (Jun 11, 2023):

Thanks for the quick reply. I could try the 2.3.49 tomorrow.

"Does your "Samsung SSD 960 PRO 2TB" show in storage manager without the "Unsupported firmware version" message? Or is it the NVMe drive that is in the M2D20 that does not show at all in storage manager?" - The M2M20 card does not show up in storage manager, neither does the drive (Samsung SSD 960 PRO 2TB) under HDD/SDD in storage manager. The only place I saw it popping up was in your script output, so I had some hope that you could make it workable, since the system can clearly "see" the card and the drive, as proven by your script; but it does not show up anywhere in the DSM GUI (I have not tried anything else yet e.g. using ssh, just added the card today since it was laying around). In the info center in the control panel it shows for the PCI slot just "occupied". This is nothing urgent and just an interesting thing on the side. Also it should be noted that Synology does not list the M2D20 card for the DS1823xs+ as compatible.


@tmnext commented on GitHub (Jun 13, 2023):

Update: I used the 2.3.51-RC script by running

/volume1/data/.scripts/syno_enable_m2_volume.sh
/volume1/data/.scripts/syno_hdd_db.sh --autoupdate=7

as a scheduled task, run as root at shutdown.

The 'unrecognized firmware' is now gone for disks 1-6, but not for the two M.2 drives!?

M2D20: Any thoughts on how this could possibly be accessed as a read cache on your end, even though it's not part of your script?


@tmnext commented on GitHub (Jun 13, 2023):

Update 2:

Changed the task scheduler, according to your advice, to:

/volume1/data/.scripts/syno_hdd_db.sh --autoupdate=7

as a scheduled task, run as root at boot-up, and removed the M2_enable script. Same result:

The 'unrecognized firmware' is now gone for disks 1-6, but not for the two M.2 drives!?


@007revad commented on GitHub (Jun 13, 2023):

So the 2 M.2 drives in the internal M.2 slots of the DS1823xs+ still show "Unrecognized firmware version"?

I only discovered yesterday that the M2D20 only works with certain models, but not with the DS1823xs+ or my DS1821+. And I have no idea how Synology picks which models to make it work with. To me, if a NAS has a PCIe slot, any M.2 or network card should work.
https://www.synology.com/en-global/products/M2D20#specs

The M2D20 card also does not work in my DS1821+, but it does work in a DS1819+!? I might compare those 2 models' DSM files to see if I can work out how to enable M2D20 support in officially unsupported models.

My Synology_M2_volume script should be able to create a storage pool on the M.2 drives in the M2D20: https://github.com/007revad/Synology_M2_volume. You'd then go into Storage Manager and do an online assemble, then create your volume. After that you could set up the M.2 drives in the internal M.2 slots of the DS1823xs+ as a cache from within Storage Manager.


@tmnext commented on GitHub (Jun 13, 2023):

Yes, correct. All SSDs and HDDs are fine; just the M.2 drives show the 'unrecognized firmware' (Crucial P3 Plus):

![Screenshot 2023-06-13 at 2 38 07 PM](https://github.com/007revad/Synology_HDD_db/assets/84424468/be73b0b2-3dfb-4d84-97cd-be4de572e305)

Note: to enable those M.2 drives plugged into the internal M.2 slots (not the M2D20) as a storage volume, I originally had to use the enable_M2 script when I first set it up. I was surprised to read in your other thread here that everything is now handled by the hdd_db script and the enable_M2 script is not needed anymore.

M2D20: I ran Synology_M2_volume as a dry run; it seems I could create a pool with the M2D20 drive, but a) is that safe, since the drive is not even displayed under HDD in Storage Manager? And b) it wouldn't solve my goal of using the drive as a read cache; right now Storage Manager says "no drive available" when I try to add a read-cache SSD.

Output of M2_volume script for M2D20 drive:
![Screenshot 2023-06-13 at 3 20 00 PM](https://github.com/007revad/Synology_HDD_db/assets/84424468/ecf9b3d9-da6b-4281-b78b-4312def77e5d)


@007revad commented on GitHub (Jun 16, 2023):

> Note, to enable those M2 drives plugged into the internal M2 slots (not the M2D20) as storage volume I had to use the enable_M2 script originally when I first set it up. I was surprised to read in your other thread here that everything is now handled by the hdd_db script and the enable_M2 is not needed anymore.

The hdd_db script on its own works for Synology NAS models that officially support M.2 volumes. The enable_M2 script also works on those models, but seems to also work on some models that don't officially support M.2 volumes.

> M2D20: I ran the Synology_M2_volume as dry run; it seems I could create a pool with the M2D20 drive, but a) is that safe, since the drive is not even displayed under HDD in storage manager (?),

I would try it just to see if they can then be seen in Storage Manager.

> and b) it wouldn't solve that I want to use the drive as read-cache, right now storage manager says "no drive available", when I try to add a read-cache SSD (?)

If moving the 2 Crucial drives to the M2D20 makes them appear as a volume (because they already have a volume set up), you could then use the Samsung drive in the DS1823xs+ as cache.

Officially the DS1823xs+ does not support M2D20, M2D18 or E10M20-T1.

I'm interested in seeing if I can make the M2D20, M2D18 and E10M20-T1 become supported on NAS models that don't support them. Unfortunately I don't have an M2D20, M2D18 or E10M20-T1 to test with.

According to Synology my DS1821+ does not support the M2D20, M2D18 or E10M20-T1, yet DSM for the DS1821+ includes disk compatibility .db files for E10M20-T1, M2D20, M2D18 and M2D17.


@tmnext commented on GitHub (Jun 16, 2023):

I just ran syno_create_m2_volume, but nothing changed, even after a restart; no HDD visible, no storage pool or volume other than what is shown above. The script created a volume (same output as in the dry run), but it doesn't seem accessible without further modifications somewhere. Storage Manager still says "no SSD available for cache".


@007revad commented on GitHub (Jun 16, 2023):

What do the following 2 commands return?

ls /run/synostorage/disks

ls /sys/block | grep nvme
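
The point of comparing those two listings is to spot an NVMe block device that the storage layer hasn't registered. A runnable sketch with mocked directories (names taken from this thread — on the NAS you would list the real /sys/block and /run/synostorage/disks instead):

```shell
# Mock /sys/block and /run/synostorage/disks as seen in this thread:
# DSM's storage layer knows nvme1n1 and nvme2n1, but not nvme0n1.
mkdir -p disks/nvme1n1 disks/nvme2n1
mkdir -p block/nvme0n1 block/nvme1n1 block/nvme2n1

# comm needs sorted input; -23 keeps lines unique to the first list,
# i.e. NVMe devices the kernel sees but the storage layer does not.
ls block | grep nvme | sort > kernel_nvme.txt
ls disks | sort > storage_disks.txt
comm -23 kernel_nvme.txt storage_disks.txt   # prints: nvme0n1
```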


@tmnext commented on GitHub (Jun 16, 2023):

ls /run/synostorage/disks:
nvme1n1 nvme2n1 sata1 sata2 sata3 sata4 sata5 sata6 sata7 sata8

ls /sys/block | grep nvme:
nvme0n1
nvme1n1
nvme2n1


@007revad commented on GitHub (Jun 16, 2023):

I think I may have found how to enable your M2D20 card. Can you try the following command:

sudo set_section_key_value /usr/syno/etc.defaults/adapter_cards.conf M2D20_sup_nvme DS1823xs+ yes

then (without rebooting) check in Storage Manager whether the missing NVMe drive now shows up.

If it's still missing, reboot and check again.
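
For anyone curious what that command does: adapter_cards.conf is a plain INI-style file, so its section/key layout (assumed here purely from the command's arguments) can be inspected or checked with standard tools. A minimal sketch against a local sample file rather than the real /usr/syno/etc.defaults/adapter_cards.conf:

```shell
# Sample of what the relevant section of adapter_cards.conf is ASSUMED
# to look like after the set_section_key_value command has run.
cat > adapter_cards_sample.conf <<'EOF'
[M2D20_sup_nvme]
DS1823xs+=yes
EOF

# Report the value of a key inside a given [section] - the same
# key/value pair set_section_key_value writes.
section_value() {
  awk -F= -v section="$1" -v key="$2" '
    /^\[/ { in_section = ($0 == "[" section "]") }
    in_section && $1 == key { print $2 }
  ' adapter_cards_sample.conf
}

section_value M2D20_sup_nvme "DS1823xs+"   # prints: yes
```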


@tmnext commented on GitHub (Jun 17, 2023):

Genius! It works as a read cache now, see screenshots below. Awesome, many thanks. Probably worth integrating into the HDD_db script for others. It seems volume creation is not allowed on the M2D20 (see below), but my intention was a read cache, and that works.
The only remaining issue is the 'unrecognized firmware' on the internal M.2 drives, but I assume that will be fixed in one of the next updates. Many thanks again.

![Screenshot 2023-06-17 at 11 12 45 AM](https://github.com/007revad/Synology_HDD_db/assets/84424468/20650548-8163-44f0-90f8-3a41900d33df)
![Screenshot 2023-06-17 at 11 07 02 AM](https://github.com/007revad/Synology_HDD_db/assets/84424468/45e455fc-7f8f-495b-9db4-e28d44489129)
![Screenshot 2023-06-17 at 11 06 14 AM](https://github.com/007revad/Synology_HDD_db/assets/84424468/ead440b9-690b-4c26-a603-a7eb901c40f2)
![Screenshot 2023-06-17 at 11 05 59 AM](https://github.com/007revad/Synology_HDD_db/assets/84424468/56fc08fc-5887-4da0-a8cc-ddb4aaef0d80)
![Screenshot 2023-06-17 at 11 05 47 AM](https://github.com/007revad/Synology_HDD_db/assets/84424468/75f0ce04-d438-4900-abba-945fcd7579fe)

@007revad commented on GitHub (Jun 17, 2023):

Excellent. I was hoping you'd say it worked, because while I was waiting for your reply I went ahead and wrote the code to add it to the script.

Did you need to reboot to see the M2D20 in Storage Manager?

Regarding the Crucial NVMe drives, I'm wondering if it needs to be "Crucial CT4000P3PSSD8" instead of "CT4000P3PSSD8".

Can you save the ds1823xs+_host_v7.db file, zip it, and attach the zip file to your reply?
sudo -i cp /var/lib/disk-compatibility/ds1823xs+_host_v7.db ~/ds1823xs+_host_v7.db


@tmnext commented on GitHub (Jun 17, 2023):

ds1823xs+_host_v7.zip

Yes, please find attached.

I needed to reboot for the M2D20 to pop up; the reboot took about 5 minutes.


@007revad commented on GitHub (Jun 17, 2023):

Can you replace /var/lib/disk-compatibility/ds1823xs+_host_v7.db with the one in the attached zip file?
edited_ds1823xs+_host_v7.zip

You might need to set the permissions on the new /var/lib/disk-compatibility/ds1823xs+_host_v7.db file to 644 (chmod 644).

Then either reboot or run the following command:
sudo /usr/syno/sbin/synostgdisk --check-all-disks-compatibility
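
Written out as a runnable sketch: the Synology paths only exist on the NAS, so the permission step is staged on a scratch file here, with the real targets from the comment above shown in comments (`stat -c` is GNU coreutils):

```shell
# Stage the new compatibility database and give it the standard 644
# permissions before triggering a compatibility recheck.
db=./ds1823xs+_host_v7.db   # on the NAS: /var/lib/disk-compatibility/ds1823xs+_host_v7.db
touch "$db"                 # stands in for copying the edited .db into place
chmod 644 "$db"

# On the NAS the final step would be (as root):
#   /usr/syno/sbin/synostgdisk --check-all-disks-compatibility

stat -c '%a' "$db"   # prints: 644
```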


@tmnext commented on GitHub (Jun 17, 2023):

Did that; after the restart (and before it), sudo /usr/syno/sbin/synostgdisk --check-all-disks-compatibility doesn't have any output.


@007revad commented on GitHub (Jun 17, 2023):

You don't get any output from sudo /usr/syno/sbin/synostgdisk --check-all-disks-compatibility. The only way to check whether synostgdisk did anything is to check that its return status was 0, with echo $?

Do the Crucial NVMe drives still show unsupported firmware?
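
A quick general illustration of that $? check, since it trips people up: it reflects only the most recently executed command, so it has to be read immediately:

```shell
# $? holds the exit status of the most recently executed command,
# so capture it right after the command of interest.
true
status_ok=$?      # 0: the command succeeded

# Capture a failing command's status; the `|| status_fail=$?` guard
# also keeps this line from aborting a script run under `set -e`.
ls /nonexistent-path-for-demo 2>/dev/null || status_fail=$?

echo "$status_ok"     # prints: 0
echo "$status_fail"   # non-zero (2 for GNU ls on a missing path)
```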


@tmnext commented on GitHub (Jun 18, 2023):

The output of 'echo $?' is '0'.

The output of 'sudo /usr/syno/sbin/synostgdisk --check-all-disks-compatibility' is nothing.

It may be related to the chmod 644; I did not apply that, but I also did not see any error messages.

The 'unrecognized firmware' on the Crucial M.2 drives remains.

Apologies for the time delay; I am in GMT+8.


@007revad commented on GitHub (Jun 20, 2023):

Try running it with the -f option.


@tmnext commented on GitHub (Jun 20, 2023):

I tried the -f option, also with the new v3 script. Unfortunately the Crucial M.2 drives still state 'unrecognized firmware'. Is there anything else I could try?


@007revad commented on GitHub (Jun 22, 2023):

You could try running the [Synology_enable_M2_volume](https://github.com/007revad/Synology_enable_M2_volume) script... but I'd like to find out why syno_hdd_db is not fully working for you.

What does the following command return:
sudo synodisk --enum -t cache

Did you ever try one of the Crucial NVMe drives in the M2D20? Or the Samsung NVMe drive in the DS1823xs+'s internal M.2 slots? I'm curious whether the Crucial still has the warning when it is in the M2D20, and whether the Samsung still has no warning when it is in one of the NAS's M.2 slots.

<!-- gh-comment-id:1601910376 --> @007revad commented on GitHub (Jun 22, 2023): You could try running the [Synology_enable_M2_volume](https://github.com/007revad/Synology_enable_M2_volume) script... but I'd like to find out why syno_hdd_db is not fully working for you. What does the following command return: `sudo synodisk --enum -t cache` Did you ever try one of the Crucial NVMe drives in the M2D20? Or the Samsung NVMe drive in the DS1821xs+? I'm curious if the Crucial still has the warning when it is in the M2D20, and if the Samsung still has no warning when in one of the NAS' M.2 slots.
Author
Owner

@tmnext commented on GitHub (Jun 22, 2023):

Here is the return of sudo synodisk --enum -t cache

************ Disk Info ***************

Disk id: 1
Slot id: 1
Disk path: /dev/nvme0n1
Disk model: Samsung SSD 960 PRO 2TB
Total capacity: 1907.73 GB
Tempeture: 31 C
************ Disk Info ***************
Disk id: 1
Disk path: /dev/nvme1n1
Disk model: CT4000P3PSSD8
Total capacity: 3726.02 GB
Tempeture: 27 C
************ Disk Info ***************
Disk id: 2
Disk path: /dev/nvme2n1
Disk model: CT4000P3PSSD8
Total capacity: 3726.02 GB
Tempeture: 28 C

Interesting spelling error on 'Temperature'

I didn't try to swap the M.2s, since the M2D20 clearly said it won't work for a volume; see the screenshot above.

Here is the output of the enable_m2_volume script; it did not solve the 'unrecognized firmware' warning on the two Crucial M.2 drives:
Synology_enable_M2_volume v1.0.7
DS1823xs+ DSM 7.2-64570-1

Using options:
File already backed up.

Checking file.
198010 = 3057a
0003057a: 803E 00B8 0100 0000 9090 488B

File already edited.


@007revad commented on GitHub (Jun 22, 2023):

Yep, that "Tempeture" stands out.

I forgot that you've already run synology_enable_m2_volume 2 weeks ago.

************ Disk Info ***************
Disk id: 1
Slot id: 1
Disk path: /dev/nvme0n1
Disk model: Samsung SSD 960 PRO 2TB
************ Disk Info ***************
Disk id: 1
Disk path: /dev/nvme1n1
Disk model: CT4000P3PSSD8
************ Disk Info ***************
Disk id: 2
Disk path: /dev/nvme2n1
Disk model: CT4000P3PSSD8

It's interesting that DSM assigned /dev/nvme0n1 to the Samsung in the M2D20, and then /dev/nvme1n1 and /dev/nvme2n1 to the 2 NVMe drives in the NAS's internal M.2 slots. It's also interesting that there are 2 entries with "Disk id: 1".

My script, synodisk and Storage Manager all show the same CT4000P3PSSD8 model number for the Crucial drives. My script and Storage Manager both show the same P9CR40A firmware version for the Crucial drives.

Do these return the P9CR40A firmware version?

sudo synonvme --sn-fr-get /dev/nvme1
sudo synonvme --sn-fr-get /dev/nvme2

Do these return the CT4000P3PSSD8 model number?

sudo synonvme --model-get /dev/nvme1
sudo synonvme --model-get /dev/nvme2
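
As a cross-check on what synonvme reports, the Linux NVMe driver exposes the same model and firmware strings under /sys/class/nvme/<ctrl>/model and firmware_rev. A runnable sketch using a mocked directory tree (on the NAS you would point the loop at /sys/class/nvme itself; note real sysfs values may be space-padded):

```shell
# Mock the sysfs layout so the loop is runnable anywhere; the values
# are the ones reported in this thread.
mkdir -p mock_sys/nvme1
printf 'CT4000P3PSSD8' > mock_sys/nvme1/model
printf 'P9CR40A'       > mock_sys/nvme1/firmware_rev

# Print "<controller>: model=<model> firmware=<firmware_rev>" per device.
for dev in mock_sys/nvme*; do
  printf '%s: model=%s firmware=%s\n' \
    "$(basename "$dev")" "$(cat "$dev/model")" "$(cat "$dev/firmware_rev")"
done
# prints: nvme1: model=CT4000P3PSSD8 firmware=P9CR40A
```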

@tmnext commented on GitHub (Jun 23, 2023):

This is the output (I did not paste the serial number here):
nvme1 - Firmware Reversion: P9CR40A
nvme2 - Firmware Reversion: P9CR40A

nvme1 - Model name: CT4000P3PSSD8
nvme2 - Model name: CT4000P3PSSD8


@007revad commented on GitHub (Jun 29, 2023):

Sorry, I took a few days to think about your issue and forgot to reply earlier.

I've got 2 different ds1823xs+_host_v7.db files for you to try.
[ds1823xs+_host_v7_new.zip](https://github.com/007revad/Synology_HDD_db/files/11902526/ds1823xs%2B_host_v7_new.zip)

After replacing the ds1823xs+_host_v7.db file run this command (or reboot):
`sudo synostgdisk --check-all-disks-compatibility`
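
The replace-then-recheck step boils down to a backup-and-swap of one file. Here is a sketch that rehearses it in a temp directory standing in for DSM's `/var/lib/disk-compatibility` (that location and the file contents are illustrative assumptions; on the NAS, do the copy and the `synostgdisk` check with sudo):

```shell
# Rehearse the db swap in a temp dir standing in for
# /var/lib/disk-compatibility (assumed DSM location; verify on your unit).
db_dir=$(mktemp -d)
echo '{"version":"old"}' > "$db_dir/ds1823xs+_host_v7.db"      # current db
echo '{"version":"new"}' > "$db_dir/ds1823xs+_host_v7.db.new"  # file from the zip

db="$db_dir/ds1823xs+_host_v7.db"
cp -p "$db" "$db.bak"   # keep the original so it can be rolled back
mv "$db.new" "$db"      # drop the replacement in place
cat "$db"               # prints {"version":"new"}
# on the NAS, follow with: sudo synostgdisk --check-all-disks-compatibility (or reboot)
```

Keeping the `.bak` copy means a bad replacement can be undone with a single `mv` back, without reinstalling DSM.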


@tmnext commented on GitHub (Jul 1, 2023):

Thank you, I can only test this next week Friday due to travel.


@tmnext commented on GitHub (Jul 7, 2023):

I tested both files with a reboot; unfortunately they both led to the same result: no change in 'unrecognized firmware' for the Crucial NVMe drives. Does no one else use those?


@007revad commented on GitHub (Jul 8, 2023):

There have only been a few people who mentioned Crucial NVMe drives: mostly people reporting that the script worked for their Synology model with a Crucial NVMe. None of them had a '23-series xs+ model.

But there was 1 person on reddit who had a Crucial NVMe drive die after less than a week. So they got a replacement and it too died in less than a week. They then got a Samsung NVMe and everything was fine.

I've searched through the 112 different Synology models' DSM 7.2 host .db files for any Crucial drives. One of them had a single instance of "Crucial_CTxxxxx" (with the rest listed as "CTxxxxx"), and the other 58 models that contained an entry for a Crucial drive all had "CTxxxxx". The only xs+ models whose DSM 7.2 host .db files contained entries for Crucial drives were the '17 and '19 series. I was hoping to find something different in the xs+ models.
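
A search like the one described above amounts to a `grep -l` over the host .db files. A sketch with stand-in files in a temp dir (on a NAS the real files sit under `/var/lib/disk-compatibility`, which is an assumed path to verify):

```shell
# Count which host db files mention a given drive model, using stand-in
# files in a temp dir (real ones: /var/lib/disk-compatibility on the NAS).
db_dir=$(mktemp -d)
echo '"CT4000P3PSSD8": {}'    > "$db_dir/ds923+_host_v7.db"
echo '"WD40EFRX-68N32N0": {}' > "$db_dir/ds1823xs+_host_v7.db"

model="CT4000P3PSSD8"
# -l prints only the names of matching files; tr strips wc's padding
hits=$(grep -l "$model" "$db_dir"/*_host_v7.db | wc -l | tr -d ' ')
echo "$hits db file(s) mention $model"   # prints: 1 db file(s) mention CT4000P3PSSD8
```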

I suspect something has changed in DSM 7.2 for recent xs+ models. There was 1 person on reddit with an xs+ model who had to run the Synology_HDD_db script 3 times before all of their drives stopped showing the firmware warning. I don't know if they rebooted after each time they ran the script, or if they just ran it 3 times and rebooted.

Is the Samsung SSD 960 PRO 2TB still in the M2D20 card?
And the 2x Crucial CT4000P3PSSD8 drives are still in the internal M.2 slots?

Can you run the Synology_enable_M2_volume script with the `--restore` option and then reboot? Just in case it's somehow causing the NVMe drives in the internal M.2 slots to still show the firmware warning.


@tmnext commented on GitHub (Jul 9, 2023):

Thanks for the detailed reply. tl;dr: the issue is fixed.

The placement of the SSDs is still as you described. I ran the Synology_enable_M2_volume script with the `--restore` option as suggested, then rebooted (note the HDD script also runs at startup as a scheduled task). The reboot took 10 minutes (panic attack), but now, even after a second reboot with normal startup time, all drives finally show no more 'unrecognized firmware'. The issue can now be closed, and many many thanks for your dedicated and detail-oriented support. This script is awesome.


@007revad commented on GitHub (Sep 23, 2023):

@70m7E
Can you do me a favor and tell me what these commands return?

`cat /proc/sys/kernel/syno_hw_version`

`cat /etc.defaults/VERSION`

`ll /sys/firmware/devicetree`

`cat /sys/firmware/devicetree/base/power_limit && echo`


@tmnext commented on GitHub (Sep 23, 2023):

> @70m7E
>
> Can you do me a favor and tell me what these commands return?
>
> `cat /proc/sys/kernel/syno_hw_version`
>
> `cat /etc.defaults/VERSION`
>
> `ll /sys/firmware/devicetree`
>
> `cat /sys/firmware/devicetree/base/power_limit && echo`

Unfortunately that NAS is not operational right now due to moving; I'll do it when it's back up.


@golovan commented on GitHub (Mar 2, 2024):

The same drive model and the same problem for me. Tried `--restore` but no luck. This script runs on shutdown and the HDD_db one on boot. Any further ideas, @007revad?


@007revad commented on GitHub (Mar 2, 2024):

> This script is applied on shutdown and HDD_db one on the boot.

Which script do you have scheduled to run on shutdown?


@golovan commented on GitHub (Mar 2, 2024):

Sorry, need to sleep a bit :) Synology_enable_M2_volume is on shutdown.


@golovan commented on GitHub (Mar 2, 2024):

Solved. Disabling syno_enable_m2_volume from the scheduled run and running `syno_enable_m2_volume.sh --restore` helped! Do I understand correctly that there is no need for syno_enable_m2_volume anymore and that Synology_HDD_db enables M.2 volumes as well? @007revad


@007revad commented on GitHub (Mar 2, 2024):

@golovan

> Solved. Disabling syno_enable_m2_volume from run and running syno_enable_m2_volume.sh --restore helped!

That was what I was going to suggest.

> Do I understand correctly there is no need in syno_enable_m2_volume anymore and Synology_HDD_db enables m2 volumes as well?

Correct. For '20 series and later models with DSM 7.2.1 you only need Synology_HDD_db.
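
Since the '20-series/DSM 7.2.1 cutoff decides which script you need, one hedged way to check the DSM version in a script is to parse `/etc.defaults/VERSION`. The key names below match the usual DSM VERSION layout but are assumptions, and a stand-in file is built so the logic can be tried off-NAS (point `version_file` at `/etc.defaults/VERSION` on a real unit):

```shell
# Stand-in for /etc.defaults/VERSION so the parsing can run anywhere.
version_file=$(mktemp)
printf 'majorversion="7"\nminorversion="2"\nmicro="1"\n' > "$version_file"

# Pull the numeric fields out of the shell-style key="value" lines.
major=$(sed -n 's/^majorversion="\([0-9]*\)"/\1/p' "$version_file")
minor=$(sed -n 's/^minorversion="\([0-9]*\)"/\1/p' "$version_file")
micro=$(sed -n 's/^micro="\([0-9]*\)"/\1/p' "$version_file")

# Naive concatenation works while each field is a single digit (7.2.1 -> 721).
if [ "$major$minor$micro" -ge 721 ]; then
    verdict="Synology_HDD_db alone should be enough"
else
    verdict="older DSM; check the README"
fi
echo "DSM $major.$minor.$micro: $verdict"
```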


@reinhead commented on GitHub (Dec 3, 2024):

I have a DS923+ and recently bought 2 x M2 Crucial P3 Plus drives. Found this thread because I also had the unrecognized firmware issue for the M2 Crucial drives. Can confirm that disabling syno_enable_m2_volume, restoring and only running Synology_HDD_db solved everything here as well. Thanks!
