mirror of
https://github.com/007revad/Synology_HDD_db.git
synced 2026-04-25 21:55:59 +03:00
[GH-ISSUE #534] smartctl/smartctl_nvme_smart_info_get.c:232 Failed to load attribute DB of disk /dev/nvme0n1 #187
No description provided.
Originally created by @magicdude4eva on GitHub (Dec 5, 2025).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/534
I notice the following messages:
Information:
@magicdude4eva commented on GitHub (Dec 7, 2025):
In case it helps, here is the NVMe check:
@007revad commented on GitHub (Dec 8, 2025):
Are you only getting those log entries for /dev/nvme0n1, and not for /dev/nvme1n1?
Is everything working as expected, apart from the log entries?
@007revad commented on GitHub (Dec 8, 2025):
Back in September I used to get a LOT of these, every 12 seconds in /var/log/synosnmpcd.log
In /var/log/messages I get these:
In /var/log/synocrond-job.log I get these:
@magicdude4eva commented on GitHub (Dec 8, 2025):
@007revad commented on GitHub (Dec 8, 2025):
I think I know what might be causing the log entries for NVMe drives. The script sets the following for each drive added to the drive database.
Synology's own NVMe drives have:
I've just changed them to true for my NVMe drives. I'll check tomorrow if those log entries are still occurring.
@007revad commented on GitHub (Dec 11, 2025):
Setting these appeared to help a little bit:
But before I had waited long enough to see if it really did help, I started working on updating syno_hdd_db because I had found duplicate entries for my "WD_BLACK SN770 500GB" NVMe drives. They each have a different firmware version.
The jq command, and therefore probably DSM, only sees the last "WD_BLACK SN770 500GB" entry. I suspect the duplicate keys were causing some of the log entries.
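This last-key-wins behavior isn't specific to jq; most JSON parsers do the same. A minimal sketch using Python's `json` module (the firmware strings are made up for illustration):

```python
import json

# Two entries with the same model key, differing only in the nested
# firmware value -- mimicking the duplicate drive-db entries.
raw = ('{"WD_BLACK SN770 500GB": {"firmware": "AAA"},'
       ' "WD_BLACK SN770 500GB": {"firmware": "BBB"}}')

parsed = json.loads(raw)

# Only one entry survives, and it is the last one in the file.
print(len(parsed))                                 # 1
print(parsed["WD_BLACK SN770 500GB"]["firmware"])  # BBB
```

So whichever duplicate appears last in the .db file is the only one DSM would ever see.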
I've been running https://github.com/007revad/Synology_HDD_db/releases/tag/v3.6.113 for the last 2 days and the only remaining "Failed to load attribute DB of disk /dev/nvme" log entries are 6 of these (3 for each NVMe drive) just after midnight each day.
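Duplicate keys like these are easy to miss because a plain `json.loads` silently discards all but the last. A hedged sketch of a checker (the sample db excerpt below is hypothetical) that surfaces duplicates via `object_pairs_hook`:

```python
import json
from collections import Counter

def find_duplicate_keys(text):
    """Return keys that appear more than once within any JSON object in text."""
    dupes = []

    def hook(pairs):
        # Called once per object with its raw (key, value) pairs,
        # before duplicates are collapsed into a dict.
        counts = Counter(key for key, _ in pairs)
        dupes.extend(key for key, n in counts.items() if n > 1)
        return dict(pairs)

    json.loads(text, object_pairs_hook=hook)
    return dupes

# Hypothetical drive-db excerpt with a duplicated model key.
sample = ('{"WD_BLACK SN770 500GB": {}, "Some Other Drive": {},'
          ' "WD_BLACK SN770 500GB": {}}')
print(find_duplicate_keys(sample))  # ['WD_BLACK SN770 500GB']
```

Running something like this over the generated .db files would flag duplicated model entries before DSM ever loads them.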
@magicdude4eva commented on GitHub (Dec 11, 2025):
I upgraded to 3.6.113.
Opening Health Info on the drives in Storage Manager, I still get the error (do I need to reboot?).
@007revad commented on GitHub (Dec 11, 2025):
Okay, so it's opening an NVMe drive's Health Info in Storage Manager that causes those log entries. Even after changing:
to:
then forcing DSM to reload the drive databases, and closing and reopening Storage Manager, I still get those error messages in the log.
In DSM 7.3, Synology's own tool for getting SMART info no longer works for NVMe drives. It used to work in DSM 7.2.
Now every attribute is zero, except the temperature, which is an insanely high number. And Available Space shows Failed (probably because DSM thinks it is 0).
synodisk --smart_info_get /dev/<drive> still works for HDDs and SATA SSDs. I'm going to run the following test to see if it's a bug that I can report to Synology.
@magicdude4eva commented on GitHub (Dec 12, 2025):
FWIW, on my NVMe it reports fine:
If I can help anywhere with diagnostics or testing, let me know.