[GH-ISSUE #532] NVMe drive not recognized after updates #690

Open
opened 2026-03-11 13:10:19 +03:00 by kerem · 9 comments

Originally created by @Kia0ra on GitHub (Dec 3, 2025).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/532

Hello,

I had already opened a ticket regarding the update to 7.3.1, and I encountered the same issue with 7.3.2, namely:

  • During the first reboot after the update, when the script runs automatically, only one of the two M.2 SSDs is detected:
Task: syno_hdd_db
Start time: Wed, 03 Dec 2025 08:49:26 GMT
Stop time: Wed, 03 Dec 2025 08:49:40 GMT
Current status: 0 (Normal)
Standard output/error:
Synology_HDD_db v3.6.111
DS423+ x86_64 DSM 7.3.2-86009
StorageManager 1.0.1-1100

ds423+_host_v7 version 8042

Using options: -nr --autoupdate=3
Running from: /volume1/homes/xxx/scripts/syno_hdd_db.sh

WARNING Don't store this script on an NVMe volume!

HDD/SSD models found: 1
WD160EDGZ-11B2DA0,85.00A85,16000 GB

M.2 drive models found: 1
WD Red SN700 1000GB,111150WD,1000 GB

No M.2 PCIe cards found

No Expansion Units found
  • After a second reboot, the second SSD is detected, but in the meantime the volume formed by these two disks has degraded and needs to be rebuilt:
Task: syno_hdd_db
Start time: Wed, 03 Dec 2025 08:59:57 GMT
Stop time: Wed, 03 Dec 2025 09:00:10 GMT
Current status: 0 (Normal)
Standard output/error:
Synology_HDD_db v3.6.111
DS423+ x86_64 DSM 7.3.2-86009
StorageManager 1.0.1-1100

ds423+_host_v7 version 8042

Using options: -nr --autoupdate=3
Running from: /volume1/homes/xxx/scripts/syno_hdd_db.sh

WARNING Don't store this script on an NVMe volume!

HDD/SSD models found: 1
WD160EDGZ-11B2DA0,85.00A85,16000 GB

M.2 drive models found: 2
WD Red SN700 1000GB,111130WD,1000 GB
WD Red SN700 1000GB,111150WD,1000 GB

No M.2 PCIe cards found

No Expansion Units found


@007revad commented on GitHub (Dec 11, 2025):

I've released a new version of syno_hdd_db that includes some bug fixes and improvements.

It also includes syno_hdd_shutdown.sh, which you can schedule to run as root at shutdown. syno_hdd_shutdown checks which packages are installed on NVMe volumes and stops those packages, to hopefully prevent a DSM update from "repairing" those packages onto the HDD volume if the NVMe volume is offline after the first post-update boot.
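
For anyone wondering how such a check can work, here is a minimal sketch of the idea. This is not the actual syno_hdd_shutdown.sh; it assumes the usual DSM layout where /var/packages/<pkg>/target symlinks into /volumeN/@appstore, and that data volumes sit on md arrays whose member devices appear under /sys/block/mdX/slaves (LVM-based pools may need one more hop through slaves/):

#!/bin/bash
# Stop every package whose install target lives on an NVMe-backed volume.
for pkg in /var/packages/*/; do
    name=$(basename "$pkg")
    [ -L "$pkg/target" ] || continue            # skip packages without a target link
    target=$(readlink -f "$pkg/target")
    # Block device backing the target's filesystem, e.g. /dev/md2
    dev=$(df "$target" | awk 'NR==2 {print $1}')
    md=$(basename "$(readlink -f "$dev")")
    # Does the array have any NVMe members?
    if ls "/sys/block/$md/slaves" 2>/dev/null | grep -q '^nvme'; then
        echo "Stopping $name (installed on an NVMe volume)"
        synopkg stop "$name"
    fi
done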


@Kia0ra commented on GitHub (Jan 30, 2026):

Thank you for the new version, but the problem persists:

  • Script output after installing DSM 7.3.2 Update 1; only 1 M.2 drive detected:
Synology_HDD_db v3.6.119
DS423+ x86_64 DSM 7.3.2-86009-1
StorageManager 1.0.1-1100
SynoOnlinePack_v2 version 99991022

ds423+_host_v7 version 8044

Using options: -nr --autoupdate=3
Running from: /volume1/homes/xxx/scripts/syno_hdd_db.sh

HDD/SSD models found: 1
WD160EDGZ-11B2DA0,85.00A85,16000 GB

M.2 drive models found: 1
WD Red SN700 1000GB,111150WD,1000 GB

No M.2 PCIe cards found

No Expansion Units found

WD160EDGZ-11B2DA0 (85.00A85) already exists in ds220+_host_v7.db
WD160EDGZ-11B2DA0 (85.00A85) already exists in ds720+_host_v7.db
WD160EDGZ-11B2DA0 (85.00A85) already exists in ds423+_host_v7.db
WD Red SN700 1000GB (111150WD) already exists in ds220+_host_v7.db
WD Red SN700 1000GB (111150WD) already exists in ds720+_host_v7.db
WD Red SN700 1000GB (111150WD) already exists in ds423+_host_v7.db

Support disk compatibility already enabled.

Support memory compatibility already disabled.

Max memory already set to 18 GB.

NVMe support already enabled.

M.2 volume support already enabled.

Drive db auto updates already disabled.

DSM successfully checked disk compatibility.

You may need to reboot the Synology to see the changes.

  • Script output after a second restart; the second M.2 drive is detected:
Synology_HDD_db v3.6.119
DS423+ x86_64 DSM 7.3.2-86009-1
StorageManager 1.0.1-1100
SynoOnlinePack_v2 version 99991022

ds423+_host_v7 version 8044

Using options: -nr --autoupdate=3
Running from: /volume1/homes/xxx/scripts/syno_hdd_db.sh

HDD/SSD models found: 1
WD160EDGZ-11B2DA0,85.00A85,16000 GB

M.2 drive models found: 2
WD Red SN700 1000GB,111130WD,1000 GB
WD Red SN700 1000GB,111150WD,1000 GB

No M.2 PCIe cards found

No Expansion Units found

WD160EDGZ-11B2DA0 (85.00A85) already exists in ds220+_host_v7.db
WD160EDGZ-11B2DA0 (85.00A85) already exists in ds720+_host_v7.db
WD160EDGZ-11B2DA0 (85.00A85) already exists in ds423+_host_v7.db
WD Red SN700 1000GB (111130WD) already exists in ds220+_host_v7.db
WD Red SN700 1000GB (111130WD) already exists in ds720+_host_v7.db
WD Red SN700 1000GB (111130WD) already exists in ds423+_host_v7.db
WD Red SN700 1000GB (111150WD) already exists in ds220+_host_v7.db
WD Red SN700 1000GB (111150WD) already exists in ds720+_host_v7.db
WD Red SN700 1000GB (111150WD) already exists in ds423+_host_v7.db

Support disk compatibility already enabled.

Support memory compatibility already disabled.

Max memory already set to 18 GB.

NVMe support already enabled.

M.2 volume support already enabled.

Drive db auto updates already disabled.

DSM successfully checked disk compatibility.

You may need to reboot the Synology to see the changes.

The volume degrades due to the loss of a disk and must be rebuilt with each update.
I hope the problem can be resolved. Please let me know if I can help with more information.


@007revad commented on GitHub (Jan 30, 2026):

There's something about that WD Red SN700 1000GB 111130WD that DSM does not like.

As the DSM update was only a small "Update N" release, it did not replace the _host_v7.db files, so they didn't need editing. All the other settings also survived the small "Update N" update.

But you should probably delete all the old 220+ and 720+ db files just in case they are somehow causing the problem.

sudo bash -c 'for f in /var/lib/disk-compatibility/*ds720+_host*; do rm "$f"; done'
sudo bash -c 'for f in /var/lib/disk-compatibility/*ds220+_host*; do rm "$f"; done'
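
To preview what those globs match before deleting anything, a read-only check first:

ls /var/lib/disk-compatibility/*ds720+_host* /var/lib/disk-compatibility/*ds220+_host*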

Then I would shut down the DS423+, remove the NVMe drives, blow out any dust in the M.2 slots, then reinsert the NVMe drives and make sure they are seated correctly.

I would also check the SMART values for the NVMe drives with https://github.com/007revad/Synology_SMART_info


@Kia0ra commented on GitHub (Jan 30, 2026):

Thanks for following up, Dave!

Yes, there is something DSM doesn't like about these NVMe drives.
I wondered if it could be because they have the same model name (WD Red SN700) but a different firmware revision (111130WD vs 111150WD)?

I deleted the files from the old DS220+ and DS720+; we'll see what happens with the next update.
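
Since the db files are plain JSON text, a quick grep can confirm that both firmware revisions are actually listed in the remaining DS423+ db (a count of 0 for either would mean that drive isn't in the compatibility db at all):

sudo grep -c '111130WD' /var/lib/disk-compatibility/ds423+_host_v7.db
sudo grep -c '111150WD' /var/lib/disk-compatibility/ds423+_host_v7.db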

I am 99.99% sure that this is not a hardware issue (dust, poor seating). Apart from the updates, everything works perfectly, and I think I could restart 100 times without seeing the volume degrade the way it does during updates.

For what it's worth, here's the SMART info output:

M.2 Drive 1  WD Red SN700 1000GB  2134xxSN  /dev/nvme0n1
SMART Error Counter Log:         No Errors Logged
  1 Critical_Warning             0
  2 Temperature                  33 C
  5 Percentage Used              8%
 12 Power On Hours               15,010
 13 Unsafe Shutdowns             118
 14 Media Errors                 0

M.2 Drive 2  WD Red SN700 1000GB  2341xxSN  /dev/nvme1n1
SMART Error Counter Log:         No Errors Logged
  1 Critical_Warning             0
  2 Temperature                  36 C
  5 Percentage Used              10%
 12 Power On Hours               17,849
 13 Unsafe Shutdowns             87
 14 Media Errors                 0

The number of Unsafe Shutdowns seems greatly overestimated to me. The NAS is behind a UPS, and the drives can only have experienced one or two sudden shutdowns in their lifetime, when the UPS battery was faulty.
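
If you want to watch that counter directly, the nvme CLI exposes it in the SMART log (assuming the nvme tool is present on your DSM build; treat its availability as an assumption):

nvme smart-log /dev/nvme0 | grep -i unsafe
nvme smart-log /dev/nvme1 | grep -i unsafe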


@007revad commented on GitHub (Jan 31, 2026):

118 and 87 unsafe shutdowns would seem to be a sign that something is not right.

My NVMe drives have very few unsafe shutdowns.

  • The 2 NVMe drives in my DS1821+ have 5 each (3 of those were from me removing the NVMe drives while the NAS was running, on 3 occasions).
  • The NVMe drive in my DS720+ has 2.
  • The NVMe drive in my DS925+ has 1.

DSM sets power limits for the NVMe drives (power_limit = "14.85,9.9"), so:

  • 14.85 W for M.2 slot 1.
  • 9.9 W for M.2 slot 2.
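
If you're curious what power state a drive is actually running in, NVMe feature 0x02 (Power Management) can be queried with the nvme CLI (again, assuming the tool is available; -H prints a human-readable decode):

nvme get-feature /dev/nvme0 -f 0x02 -H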

Which NVMe drive is the WD Red SN700 1000GB with firmware 111130WD? You can check with:

cat /sys/block/nvme0n1/device/firmware_rev
cat /sys/block/nvme1n1/device/firmware_rev
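
A compact variant that prints model and firmware for every NVMe drive at once, using the same sysfs paths:

for d in /sys/block/nvme*n1; do echo "$d: $(cat "$d/device/model") $(cat "$d/device/firmware_rev")"; done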

@Kia0ra commented on GitHub (Jan 31, 2026):

I have no idea where this number of "Unsafe Shutdowns" comes from... I'll keep an eye on it to see if it changes.

Here are the two outputs:

cat /sys/block/nvme0n1/device/firmware_rev
111130WD
cat /sys/block/nvme1n1/device/firmware_rev
111150WD

It looks like it's the slot 2 drive that remains available even after an update.


@007revad commented on GitHub (Jan 31, 2026):

If you schedule https://github.com/007revad/Synology_SMART_info to run with -ie (or --increased --email), and set the scheduled task to only send emails if important SMART values change, Task Scheduler will send you an email whenever the unsafe shutdowns increase.
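
For reference, the user-defined script field of that scheduled task would contain something like the following (the script path and name are illustrative; adjust to wherever you keep it, just not on an NVMe volume):

/bin/bash /volume1/scripts/syno_smart_info.sh -ie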

I wonder if the 111130WD firmware had a bug that was fixed in 111150WD, or if there's an issue with M.2 slot 1. It still looks to me (from the number of unsafe shutdowns) like the NVMe drives may not be making a 100% reliable connection in the M.2 slots.

Try shutting down the NAS and swapping the drives: NVMe 1 to slot 2 and NVMe 2 to slot 1. If the problem follows the drive to slot 2, that points at the NVMe drive itself; if the problem stays with slot 1, that points at the slot.


@Kia0ra commented on GitHub (Feb 1, 2026):

I'll keep a close eye on that. Thanks for the advice on the SMART info script.

I must have over 40 Docker apps and a few virtual machines running intensively on these disks without any problems, apart from during updates. I imagine that if a disk were disconnecting somehow, it would generate a few errors on those VMs. But I'll still try to clean the connections occasionally, just in case.

Is there a risk of destroying my RAID 1 (SHR) volumes if I physically swap the disks?

Thanks for the help anyway!


@007revad commented on GitHub (Feb 1, 2026):

DSM saves metadata on each drive so it knows which drive is which even if you move them around.
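
For the curious: that per-drive metadata is the md RAID superblock on each member partition, and you can inspect it read-only with mdadm (the partition number below is illustrative; on DSM the data partition typically comes after the system and swap partitions):

sudo mdadm --examine /dev/nvme0n1p3

The output includes the Array UUID and this device's role in the array, which is what lets md reassemble the mirror correctly regardless of which physical slot each drive occupies.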
