[GH-ISSUE #161] Partially working - One drive still unsupported #572

Closed
opened 2026-03-11 12:14:38 +03:00 by kerem · 20 comments
Owner

Originally created by @b-col on GitHub (Nov 16, 2023).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/161

Hello,

I am running this script on a DS920+ running DSM 7.2-64570 Update 1. After running with -nr and rebooting, I see one of my drives is now supported, while the other still has the firmware warning message.

Before:
https://i.imgur.com/jdCYAW3.png

After:
https://i.imgur.com/SIkkoTx.png

I should mention that I have also run your Synology_enable_M2_volume script.

kerem closed this issue 2026-03-11 12:14:44 +03:00

@007revad commented on GitHub (Nov 17, 2023):

What do the following commands return?

```shell
echo "'$(cat /sys/block/nvme0n1/device/model)'"
echo "'$(cat /sys/block/nvme0n1/device/firmware_rev)'"
echo "'$(cat /sys/block/nvme0n1/device/rev)'"
echo "'$(cat /sys/block/nvme1n1/device/model)'"
echo "'$(cat /sys/block/nvme1n1/device/firmware_rev)'"
echo "'$(cat /sys/block/nvme1n1/device/rev)'"
```
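
If it's easier, the six reads above can be collapsed into one loop. This is a minimal sketch that walks the same sysfs paths and prints "missing" for any attribute that isn't there:

```shell
# Read model, firmware_rev and rev for both NVMe drives from sysfs.
show_nvme_ids() {
    for dev in nvme0n1 nvme1n1; do
        for attr in model firmware_rev rev; do
            f="/sys/block/$dev/device/$attr"
            if [ -r "$f" ]; then
                printf "%s %s: '%s'\n" "$dev" "$attr" "$(cat "$f")"
            else
                printf '%s %s: missing\n' "$dev" "$attr"
            fi
        done
    done
}
show_nvme_ids
```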


@007revad commented on GitHub (Nov 17, 2023):

And these commands:

```shell
synonvme --vendor-get /dev/nvme0
synonvme --model-get /dev/nvme0
synonvme --sn-fr-get /dev/nvme0
synonvme --vendor-get /dev/nvme1
synonvme --model-get /dev/nvme1
synonvme --sn-fr-get /dev/nvme1
```


@b-col commented on GitHub (Nov 17, 2023):

Here you go!

https://i.imgur.com/lBtC1SP.png


@007revad commented on GitHub (Nov 17, 2023):

Sorry. Those last 6 commands should have included sudo.

```shell
sudo synonvme --vendor-get /dev/nvme0
sudo synonvme --model-get /dev/nvme0
sudo synonvme --sn-fr-get /dev/nvme0
sudo synonvme --vendor-get /dev/nvme1
sudo synonvme --model-get /dev/nvme1
sudo synonvme --sn-fr-get /dev/nvme1
```


@b-col commented on GitHub (Nov 17, 2023):

Thanks, much better results this time 👍
https://i.imgur.com/EgBS9Lw.png


@007revad commented on GitHub (Nov 18, 2023):

The model and firmware version look okay.

Can you tell me what this command returns:

```shell
jq . /var/lib/disk-compatibility/ds920+_host_v7.db | grep -A 21 Lexar
```


@b-col commented on GitHub (Nov 18, 2023):

https://i.imgur.com/XRe5tQ6.png


@007revad commented on GitHub (Nov 18, 2023):

It looks like DSM is not checking the drive database for drives that don't return the vendor.

I've had a couple of people report the same issue, but they were using some Chinese brand NVMe drives they bought cheap off Alibaba. When they tried known brand NVMe drives everything worked as it should.

One last thing I can think of testing is whether this is something new in DSM 7.2-64570 Update 1.

Do you want to do the following:

  1. Download this zip file to **your home folder** on your Synology: [64570_synonvme.zip](https://github.com/007revad/Synology_HDD_db/files/13398776/64570_synonvme.zip)
  2. Unzip the downloaded file so that synonvme is in **your home folder**.
  3. Then in SSH check you're in your home folder:
     `echo "$PWD"`
     That should return **/var/services/homes/b-col** (or whatever your username is).
  4. Set the correct permissions on the downloaded synonvme:
     `chmod 755 $HOME/synonvme`
  5. Run the following command to see if it returns the vendor for the Lexar NVMe drive:
     `sudo $HOME/synonvme --vendor-get /dev/nvme0`
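Steps 3–5 above can be run as one block once the zip is downloaded and extracted. This is a sketch under the assumption that synonvme was unzipped straight into your home folder; it refuses to continue if the file isn't there:

```shell
# Consolidates steps 3-5: confirm we're in the home folder, fix
# permissions, then query the vendor of the first NVMe drive.
check_and_run() {
    cd "$HOME" || return 1
    echo "Working in: $PWD"
    if [ -f ./synonvme ]; then
        chmod 755 ./synonvme
        sudo ./synonvme --vendor-get /dev/nvme0
    else
        echo "synonvme not found in $HOME - finish steps 1 and 2 first"
    fi
}
check_and_run
```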

@b-col commented on GitHub (Nov 18, 2023):

In fairness, Lexar are essentially in that same bracket given they're now owned by Longsys.

I've run through those steps and am still getting the "Fail to get disk vender" error 😞

I'd guess not, but are there any options around manually setting the drive vendor?


@007revad commented on GitHub (Nov 18, 2023):

There is a pci_vendor_ids.conf file that contains a list of hex vendor ids and the vendor name. Maybe adding Lexar to it will work.
We know the vendor name is Lexar so we just need to get the vendor id.

What do the following commands return:

```shell
sudo nvme id-ctrl /dev/nvme0 | grep NVME -A 5
sudo nvme id-ctrl /dev/nvme0 | grep -E ^vid | awk '{print $NF}'
```

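If you want just the id, the second pipeline can be wrapped up like this. A sketch only: it assumes the `vid` line format shown above, and falls back to 0x1d97 (Lexar's id) when the `nvme` CLI or the device isn't present, e.g. when trying it off the NAS:

```shell
# Print the PCI vendor id of an NVMe controller as reported by
# `nvme id-ctrl`. Falls back to Lexar's id when the tool or the
# device is unavailable.
get_vid() {
    if command -v nvme >/dev/null 2>&1 && [ -e "$1" ]; then
        sudo nvme id-ctrl "$1" | grep -E '^vid' | awk '{print $NF}'
    else
        echo "0x1d97"
    fi
}
get_vid /dev/nvme0
```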

@b-col commented on GitHub (Nov 18, 2023):

https://i.imgur.com/IFpw9x4.png


@007revad commented on GitHub (Nov 18, 2023):

Excellent.

Try the following:

```shell
sudo synosetkeyvalue /usr/syno/etc.defaults/pci_vendor_ids.conf 0x1d97 "Lexar Media"
```

If you already have Storage Manager open, close it and reopen it. Then check whether Storage Manager is showing Lexar Media for the Lexar NVMe drive. If not, reboot and check again.
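To confirm the key actually landed in the file, you can read it back. A minimal check using plain grep (on a non-Synology box the file won't exist and it says so):

```shell
# Look for the Lexar vendor id in a pci_vendor_ids.conf file.
check_vendor_id() {
    conf="$1"
    if [ -r "$conf" ]; then
        grep -i '0x1d97' "$conf" || echo "0x1d97 not present in $conf"
    else
        echo "cannot read $conf on this system"
    fi
}
check_vendor_id /usr/syno/etc.defaults/pci_vendor_ids.conf
```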


@b-col commented on GitHub (Nov 19, 2023):

That's got the correct vendor name showing in storage manager after a reboot 😄
https://i.imgur.com/6lh83Lk.png

I can also see it when rerunning the previous commands you listed:
https://i.imgur.com/aSD4Iz0.png

I tried rerunning the original HDD script and rebooting; however, I still see the firmware error. Not sure if I jumped the gun here, though.


@007revad commented on GitHub (Nov 19, 2023):

It's great those changes made the vendor appear as it should. But it's disappointing that it's still showing the firmware error.

Try running `syno_hdd_db.sh` with `-nrf`.


@b-col commented on GitHub (Nov 19, 2023):

Same behaviour, unfortunately.


@007revad commented on GitHub (Nov 19, 2023):

Maybe try running the Synology_enable_M2_volume script with the --restore option, then reboot.


@007revad commented on GitHub (Nov 19, 2023):

I just found another pci_vendor_ids.conf file.

Try this command:

```shell
sudo synosetkeyvalue /usr/syno/etc/pci_vendor_ids.conf 0x1d97 "Lexar Media"
```
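
Since the key has to go into both copies, a small loop saves retyping. A sketch: it only prints what it would run when synosetkeyvalue isn't on the PATH, so it is safe to dry-run anywhere:

```shell
# Write the Lexar vendor id into both pci_vendor_ids.conf copies.
set_lexar_vendor() {
    for conf in /usr/syno/etc/pci_vendor_ids.conf \
                /usr/syno/etc.defaults/pci_vendor_ids.conf; do
        if command -v synosetkeyvalue >/dev/null 2>&1; then
            sudo synosetkeyvalue "$conf" 0x1d97 "Lexar Media"
        else
            echo "would run: synosetkeyvalue $conf 0x1d97 'Lexar Media'"
        fi
    done
}
set_lexar_vendor
```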


@b-col commented on GitHub (Nov 19, 2023):

Okay, so some good progress.

I first ran your latest command to modify the pci_vendor_ids.conf file, rebooted, then reran the HDD script, but still had the issue.

Running the Synology_enable_M2_volume script with the --restore option, then rebooting with the HDD script set to run at boot has cleared the warning message 🎉

With that in mind, is there any need to run the Synology_enable_M2_volume script given I am still able to use the drives for storage right now?

Additionally, is there any harm in adding the two conf file commands to my boot script or is this unnecessary?


@007revad commented on GitHub (Nov 19, 2023):

The Synology_HDD_db script is the only one I use.

Since I updated Synology_HDD_db to enable creating volumes in Storage Manager with non-Synology NVMe drives, I don't run the Synology_enable_M2_volume script anymore.

You'd only need to run those commands again after a DSM update.

I'm updating the Synology_HDD_db script to detect "unknown" NVMe drives and automatically run those commands. So far I've got a list of 9 vendors that aren't in the pci_vendor_ids.conf files.

```
0x10ec TEAMGROUP
0x1987 Phison
0x1c5c "SK Hynix"
0x1cc4 UMIS
0x1d97 SPCC/Lexar
0x1dbe ADATA
0x1e49 ZHITAI
0x1e4b HS/MAXIO
0x1f40 Netac
```

I also need to make up a vendor name for "unknown" NVMe drives that aren't in the script. I was thinking maybe "Contact 007revad" or just "007revad" or "el cheapo" 😄
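That list can be applied in one pass. A sketch under the assumption that the conf files hold plain `key=value` lines (the effect synosetkeyvalue has); it appends only ids that aren't already present, so it's safe to re-run:

```shell
# Append any missing vendor ids to a conf file, one per line.
add_vendor_ids() {
    conf="$1"
    while read -r vid name; do
        grep -q "^$vid=" "$conf" 2>/dev/null ||
            printf '%s=%s\n' "$vid" "$name" >> "$conf"
    done <<'EOF'
0x10ec TEAMGROUP
0x1987 Phison
0x1c5c SK Hynix
0x1cc4 UMIS
0x1d97 SPCC/Lexar
0x1dbe ADATA
0x1e49 ZHITAI
0x1e4b HS/MAXIO
0x1f40 Netac
EOF
}
```

Usage would be `add_vendor_ids /usr/syno/etc/pci_vendor_ids.conf` (and the same for the etc.defaults copy).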


@b-col commented on GitHub (Nov 20, 2023):

Ah okay, I wasn't aware the volume features were added to the HDD script, so that makes sense.

Really appreciate your help with this (especially over a weekend) 😄
