mirror of
https://github.com/007revad/Synology_HDD_db.git
synced 2026-04-25 13:45:59 +03:00
[GH-ISSUE #532] NVME Drive not recognized after updates #690
Originally created by @Kia0ra on GitHub (Dec 3, 2025).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/532
Hello,
I had already opened a ticket regarding the update to 7.3.1, and I encountered the same issue with 7.3.2, namely:
@007revad commented on GitHub (Dec 11, 2025):
I've released a new version of syno_hdd_db that includes some bug fixes and improvements.
It also includes syno_hdd_shutdown.sh, which you can schedule to run as root at shutdown. syno_hdd_shutdown checks which packages are installed on NVMe volumes and stops those packages, to hopefully prevent a DSM update from "repairing" those packages onto the HDD volume if the NVMe volume is offline after the first post-update boot.
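As a rough sketch of the idea described above (not the script itself), and assuming DSM records each package's install path as a symlink at /var/packages/&lt;name&gt;/target and that `synopkg stop <name>` stops a package, a shutdown hook might look like this. The volume path and package names are only illustrative:

```shell
#!/bin/sh
# Sketch: find packages installed on a given volume so they can be stopped
# at shutdown. ASSUMPTIONS (not taken from syno_hdd_shutdown.sh itself):
# each DSM package has a symlink /var/packages/<name>/target pointing at
# its install directory, and `synopkg stop <name>` stops it.
PKG_DIR="${PKG_DIR:-/var/packages}"

packages_on_volume() {
    # Print the names of packages whose install target is under volume $1.
    vol="$1"
    for t in "$PKG_DIR"/*/target; do
        [ -L "$t" ] || continue
        case "$(readlink "$t")" in
            "$vol"/*) basename "$(dirname "$t")" ;;
        esac
    done
}

# On the NAS (as root) one would then run, for each NVMe volume:
#   for pkg in $(packages_on_volume /volume2); do synopkg stop "$pkg"; done
```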
@Kia0ra commented on GitHub (Jan 30, 2026):
Thank you for the new version, but the problem persists:
The volume degrades due to the loss of a disk and must be rebuilt with each update.
I hope the problem can be resolved. Please let me know if I can help with more information.
@007revad commented on GitHub (Jan 30, 2026):
There's something about that WD Red SN700 1000GB 111130WD that DSM does not like.
As the DSM update was only a small "Update N" update it did not replace the _host_v7.db files, so they didn't need editing. All the other settings also survived the small "Update N" update. But you should probably delete all the old 220+ and 720+ db files just in case they are somehow causing the problem.
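One hedged way to do that cleanup, assuming DSM keeps these db files in /var/lib/disk-compatibility with the model name in the filename (e.g. ds220+_host_v7.db), is to back each one up before deleting it. The backup path is only an example:

```shell
# Sketch: back up, then delete, compatibility db files left over from
# previous models (here DS220+ and DS720+). ASSUMPTION: the files live in
# /var/lib/disk-compatibility and start with the lowercased model name.
prune_old_dbs() {
    db="$1"; bak="$2"
    mkdir -p "$bak"
    for f in "$db"/ds220+* "$db"/ds720+*; do
        [ -e "$f" ] || continue
        cp -p "$f" "$bak"/ && rm "$f" && echo "removed $(basename "$f")"
    done
}

# On the NAS (as root), something like:
#   prune_old_dbs /var/lib/disk-compatibility /volume1/db_backup
```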
Then I would shutdown the DS423+, remove the NVMe drives, blow out any dust in the M.2 slots, insert the NVMe drives and make sure they are seated correctly.
I would also check the SMART values for the NVMe drives with https://github.com/007revad/Synology_SMART_info
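The linked script wraps smartctl; if you only want to watch the one counter discussed below, you can parse smartctl output directly. This assumes smartmontools prints a line like "Unsafe Shutdowns: 118" for NVMe devices:

```shell
# Extract the Unsafe Shutdowns counter from smartctl output.
# ASSUMPTION: smartmontools reports NVMe health info with a line of the
# form "Unsafe Shutdowns: <n>" (possibly with extra padding or commas).
unsafe_shutdowns() {
    awk -F': *' '/^Unsafe Shutdowns/ { gsub(/[^0-9]/, "", $2); print $2 }'
}

# On the NAS: smartctl -a /dev/nvme0 | unsafe_shutdowns
```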
@Kia0ra commented on GitHub (Jan 30, 2026):
Thanks for following up, Dave!
Yes, there is something DSM doesn't like about these NVMe drives.
I wondered if it could be because they have the same model name (WD Red SN700) but a different model type (111130WD vs 111150WD)?!
I deleted the files from the old DS220+ and DS720+, we'll see what happens with the next update.
I am 99.99% sure that this is not a hardware issue (dust, poor placement). Apart from the updates, everything works perfectly, and I think I could restart 100 times without seeing the volume degrade as it does during updates.
For what it's worth, here's the smart info output:
The number of Unsafe Shutdowns seems greatly overestimated to me. The NAS is behind a UPS, and the drives must have experienced only one or two sudden shutdowns in their lifetime, back when the UPS battery was faulty.
@007revad commented on GitHub (Jan 31, 2026):
118 and 87 unsafe shutdowns would seem to be a sign that something is not right.
My NVMe drives have very few unsafe shutdowns.
DSM sets power limits for the NVMe drives:
power_limit = "14.85,9.9"
so: which NVMe drive is the WD Red SN700 1000GB 111130WD?
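To sanity-check a drive against those per-slot limits, one could compare its highest power-state draw with the limit. This is only a sketch, assuming nvme-cli's `nvme id-ctrl` lists power states on lines like "ps    0 : mp:9.00W operational ...":

```shell
# Find the highest "mp:<watts>W" (maximum power) value among the power
# states reported by `nvme id-ctrl`. ASSUMPTION: the nvme-cli output
# format shown in the comment above; compare the result with the
# 14.85 W / 9.9 W limits from DSM's power_limit string.
max_power_w() {
    awk 'match($0, /mp:[0-9.]+W/) {
        w = substr($0, RSTART + 3, RLENGTH - 4) + 0
        if (w > max) max = w
    } END { print max }'
}

# e.g.: nvme id-ctrl /dev/nvme0 | max_power_w
```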
@Kia0ra commented on GitHub (Jan 31, 2026):
I have no idea where this number of 'Unsafe_Shutdowns' comes from... I'll keep an eye on it to see if it changes.
Here are the two outputs:
It looks like it's the slot 2 drive that remains available even after an update.
@007revad commented on GitHub (Jan 31, 2026):
If you schedule https://github.com/007revad/Synology_SMART_info to run with -ie (or --increased --email), and set the scheduled task to only send emails if important SMART values change, Task Scheduler will send you an email when the unsafe shutdowns increase.
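A hand-rolled version of the same idea (not part of Synology_SMART_info): keep a baseline in a state file and exit non-zero when the counter grows, then pair the scheduled task with Task Scheduler's option to send run details only when the script terminates abnormally. The state-file path is hypothetical:

```shell
# Exit non-zero when the unsafe-shutdown counter has grown since the
# last run. $1 = current counter (e.g. parsed from smartctl output),
# $2 = state file holding the previously seen value (hypothetical path).
check_unsafe_shutdowns() {
    current="$1"; state="$2"
    last=$(cat "$state" 2>/dev/null)
    [ -n "$last" ] || last="$current"   # first run: take current as baseline
    echo "$current" > "$state"
    [ "$current" -le "$last" ]          # non-zero exit => counter increased
}

# Scheduled task body might be:
#   check_unsafe_shutdowns "<parsed counter>" /var/tmp/unsafe_shutdowns.last
```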
I wonder if the 111130WD firmware had a bug that was fixed in 111150WD, or if there's an issue with M.2 slot 1. It still looks to me (from the number of unsafe shutdowns) like the NVMe drives could be failing to make a 100% reliable connection in the M.2 slots.
Try shutting down the NAS and moving NVMe 1 to slot 2 and NVMe 2 to slot 1. If the problem moves to slot 1, it could indicate it's that NVMe drive. If the problem stays with slot 2, it could indicate an issue with slot 2.
@Kia0ra commented on GitHub (Feb 1, 2026):
I'll keep a close eye on that. Thanks for the advice on the smart info script.
I must have over 40 docker apps and a few virtual machines running intensively on these disks without any problems apart from updates. I imagine that if a disk were to disconnect in some way, it would generate a few errors on these VMs. But I'll still try to clean up the connections occasionally, just in case.
Is there a risk of destroying my RAID1 (SHR) volumes if I physically reverse the disks?
Thanks for the help anyway!
@007revad commented on GitHub (Feb 1, 2026):
DSM saves metadata on each drive so it knows which drive is which even if you move them around.
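Concretely, DSM storage pools are Linux md arrays, and each member partition carries the array's UUID in its md superblock, which is how drives are recognised regardless of slot. Assuming `mdadm --examine <partition>` prints a line like "Array UUID : ...", a small helper can extract it so the two members can be compared:

```shell
# Pull the "Array UUID" field out of `mdadm --examine` output.
# ASSUMPTION: mdadm prints a line of the form "  Array UUID : <uuid>";
# the same UUID on both NVMe partitions means they belong to one array.
array_uuid() {
    awk '/Array UUID/ { sub(/^.*Array UUID[ \t]*:[ \t]*/, ""); print }'
}

# e.g.: mdadm --examine /dev/nvme0n1p3 | array_uuid
```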