mirror of
https://github.com/007revad/Synology_HDD_db.git
synced 2026-04-25 13:45:59 +03:00
[GH-ISSUE #161] Partially working - One drive still unsupported #783
Originally created by @b-col on GitHub (Nov 16, 2023).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/161
Hello,
I am running this script on a DS920+ running DSM 7.2-64570 Update 1. After running with -nr and rebooting, I see one of my drives is now supported, while the other still has the firmware warning message.
Before:
https://i.imgur.com/jdCYAW3.png
After:
https://i.imgur.com/SIkkoTx.png
I should mention that I have also run your Synology_enable_M2_volume script.
@007revad commented on GitHub (Nov 17, 2023):
What do the following commands return?
echo "'$(cat /sys/block/nvme0n1/device/model)'"
echo "'$(cat /sys/block/nvme0n1/device/firmware_rev)'"
echo "'$(cat /sys/block/nvme0n1/device/rev)'"
echo "'$(cat /sys/block/nvme1n1/device/model)'"
echo "'$(cat /sys/block/nvme1n1/device/firmware_rev)'"
echo "'$(cat /sys/block/nvme1n1/device/rev)'"
@007revad commented on GitHub (Nov 17, 2023):
And these commands:
synonvme --vendor-get /dev/nvme0
synonvme --model-get /dev/nvme0
synonvme --sn-fr-get /dev/nvme0
synonvme --vendor-get /dev/nvme1
synonvme --model-get /dev/nvme1
synonvme --sn-fr-get /dev/nvme1
@b-col commented on GitHub (Nov 17, 2023):
Here you go!
https://i.imgur.com/lBtC1SP.png
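[Editor's note: the single quotes inside those echo commands make stray whitespace in the sysfs values visible. A self-contained mock of the same reads, using a temp directory and a made-up drive model; on the NAS itself the root would be /sys/block:

```shell
#!/bin/sh
# Mock of the sysfs reads above; on the NAS the root would be /sys/block.
root=$(mktemp -d)
mkdir -p "$root/nvme0n1/device"
# Fabricated example values; note the trailing space after the model name.
printf 'EXAMPLE NVMe 1TB ' > "$root/nvme0n1/device/model"
printf '12345' > "$root/nvme0n1/device/firmware_rev"
for dev in "$root"/nvme*n1; do
    echo "model:    '$(cat "$dev/device/model")'"
    echo "firmware: '$(cat "$dev/device/firmware_rev")'"
done
```

The trailing space shows up inside the quotes, which is exactly the kind of mismatch that can make a db lookup fail.]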
@007revad commented on GitHub (Nov 17, 2023):
Sorry. Those last 6 commands should have included sudo.
sudo synonvme --vendor-get /dev/nvme0
sudo synonvme --model-get /dev/nvme0
sudo synonvme --sn-fr-get /dev/nvme0
sudo synonvme --vendor-get /dev/nvme1
sudo synonvme --model-get /dev/nvme1
sudo synonvme --sn-fr-get /dev/nvme1
@b-col commented on GitHub (Nov 17, 2023):
Thanks, much better results this time 👍
https://i.imgur.com/EgBS9Lw.png
@007revad commented on GitHub (Nov 18, 2023):
The model and firmware version look okay.
Can you tell me what this command returns:
jq . /var/lib/disk-compatibility/ds920+_host_v7.db | grep -A 21 Lexar
@b-col commented on GitHub (Nov 18, 2023):
https://i.imgur.com/XRe5tQ6.png
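[Editor's note: that command pretty-prints the compatibility database with jq and then pulls out the lines around the Lexar entry. A miniature of the same idea against a made-up db file; the field names below are illustrative and the real schema may differ:

```shell
#!/bin/sh
# Made-up miniature of a drive-compatibility db; real field names may differ.
db=$(mktemp)
cat > "$db" <<'EOF'
{
  "EXAMPLE NVMe 1TB": {
    "12345": { "compatibility": "support" }
  }
}
EOF
# Same idea as the command in the thread: show the lines around one model.
grep -A 2 'EXAMPLE NVMe' "$db"
```

If the model string in the db doesn't match the sysfs value byte-for-byte, the grep finds nothing and DSM's lookup fails the same way.]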
@007revad commented on GitHub (Nov 18, 2023):
It looks like DSM is not checking the drive database for drives that don't return the vendor.
I've had a couple of people report the same issue, but they were using some Chinese brand NVMe drives they bought cheap off Alibaba. When they tried known brand NVMe drives everything worked as it should.
One last thing I can think of testing is to see if this is something new in DSM 7.2-64570 Update 1.
Do you want to do the following:
echo "$PWD"
That should return /var/services/homes/b-col (or whatever your username is).
chmod 755 $HOME/synonvme
sudo $HOME/synonvme --vendor-get /dev/nvme0
@b-col commented on GitHub (Nov 18, 2023):
In fairness, Lexar are essentially in that same bracket given they're now owned by Longsys.
I've run through those steps and am still getting the "Fail to get disk vender" error 😞
I'd guess not, but are there any options around manually setting the drive vendor?
@007revad commented on GitHub (Nov 18, 2023):
There is a pci_vendor_ids.conf file that contains a list of hex vendor ids and the vendor name. Maybe adding Lexar to it will work.
We know the vendor name is Lexar so we just need to get the vendor id.
What do the following commands return:
sudo nvme id-ctrl /dev/nvme0 | grep NVME -A 5
sudo nvme id-ctrl /dev/nvme0 | grep -E ^vid | awk '{print $NF}'
@b-col commented on GitHub (Nov 18, 2023):
https://i.imgur.com/IFpw9x4.png
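[Editor's note: the second command isolates the PCI vendor id (`vid`) field. The same pipeline, run against a canned sample of `nvme id-ctrl`-style output; the sample text is illustrative, not captured from this system:

```shell
#!/bin/sh
# Extract the PCI vendor id from `nvme id-ctrl`-style text.
# The sample below stands in for `sudo nvme id-ctrl /dev/nvme0` output.
sample='NVME Identify Controller:
vid       : 0x1d97
ssvid     : 0x1d97
sn        : REDACTED'
vid=$(printf '%s\n' "$sample" | grep -E '^vid' | awk '{print $NF}')
echo "$vid"
```

This prints `0x1d97`, the id that the rest of the thread maps to "Lexar Media".]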
@007revad commented on GitHub (Nov 18, 2023):
Excellent.
Try the following:
sudo synosetkeyvalue /usr/syno/etc.defaults/pci_vendor_ids.conf 0x1d97 "Lexar Media"
If you already have storage manager open, close it and reopen it. Then check whether storage manager shows Lexar Media for the Lexar NVMe drive. If not, reboot and check again.
@b-col commented on GitHub (Nov 19, 2023):
That's got the correct vendor name showing in storage manager after a reboot 😄
https://i.imgur.com/6lh83Lk.png
I can also see it when rerunning the previous commands you listed:
https://i.imgur.com/aSD4Iz0.png
I tried rerunning the original hdd script and rebooting, however I still see the firmware error. Not sure if I jumped the gun here, though.
@007revad commented on GitHub (Nov 19, 2023):
It's great those changes made the vendor appear as it should. But it's disappointing that it's still showing the firmware error.
Try running syno_hdd_db.sh with -nrf
@b-col commented on GitHub (Nov 19, 2023):
Same behaviour, unfortunately.
@007revad commented on GitHub (Nov 19, 2023):
Maybe try running the Synology_enable_M2_volume script with the --restore option, then reboot.
@007revad commented on GitHub (Nov 19, 2023):
I just found another pci_vendor_ids.conf file.
Try this command:
sudo synosetkeyvalue /usr/syno/etc/pci_vendor_ids.conf 0x1d97 "Lexar Media"
@b-col commented on GitHub (Nov 19, 2023):
Okay, so some good progress.
I first ran your latest command to modify the pci_vendor_ids.conf file, rebooted, then reran the HDD script, but still had the issue.
Running the Synology_enable_M2_volume script with the --restore option, then rebooting with the HDD script set to run at boot has cleared the warning message 🎉
With that in mind, is there any need to run the Synology_enable_M2_volume script given I am still able to use the drives for storage right now?
Additionally, is there any harm in adding the two conf file commands to my boot script or is this unnecessary?
@007revad commented on GitHub (Nov 19, 2023):
The Synology_HDD_db script is the only one I use.
Since I updated Synology_HDD_db to enable creating volumes in storage manager with non-Synology NVMe drives I don't run the Synology_enable_M2_volume script anymore.
You'd only need to run those commands again after a DSM update.
I'm updating the Synology_HDD_db script to detect "unknown" NVMe drives and automatically run those commands. So far I've got a list of 9 vendors that aren't in the pci_vendor_ids.conf files.
I also need to make up a vendor name for "unknown" NVMe drives that aren't in the script. I was thinking maybe "Contact 007revad" or just "007revad" or "el cheapo" 😄
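[Editor's note: for the boot-script question above, a hedged sketch of an idempotent helper that re-applies the entry only when it's missing. On DSM the write would be done with `synosetkeyvalue` as shown earlier in the thread; a plain append (and the assumed `key="value"` conf format) is used here so the idea runs anywhere:

```shell
#!/bin/sh
# Idempotent sketch: re-add the Lexar vendor id after a DSM update wipes it.
# On DSM, replace the append with the synosetkeyvalue call from this thread.
ensure_vendor_id() {
    conf=$1
    grep -q '^0x1d97=' "$conf" 2>/dev/null && return 0   # already present
    echo '0x1d97="Lexar Media"' >> "$conf"               # re-apply
}
demo=$(mktemp)                 # stand-in for pci_vendor_ids.conf
ensure_vendor_id "$demo"       # first run: appends the entry
ensure_vendor_id "$demo"       # later runs: no duplicate
cat "$demo"
```

Because the helper is a no-op once the entry exists, running it at every boot is harmless.]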
@b-col commented on GitHub (Nov 20, 2023):
Ah okay, I wasn't aware the volume features were added to the HDD script so that makes sense.
Really appreciate your help with this (especially over a weekend) 😄