[GH-ISSUE #529] The update to 7.3.1 did not go well #185

Closed
opened 2026-03-07 19:16:49 +03:00 by kerem · 3 comments
Owner

Originally created by @Kia0ra on GitHub (Nov 21, 2025).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/529

Hello,

Everything was working fine in 7.2.2, but after updating to 7.3.1, the SSD volume degraded.

I haven't tried to repair it yet. Is there a safe procedure to follow so that I don't lose everything?

I have the script set to run at every startup, and it continued to do so without error after the update.

I have the impression that one of the RAID1 disks failed during the update:
![Storage Manager screenshot 1](https://github.com/user-attachments/assets/f2822962-5ca1-4a56-a19a-40a548c5e162)

![Storage Manager screenshot 2](https://github.com/user-attachments/assets/0e0af0e9-0450-4d0c-8260-fbaba8c6ef39)
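Before touching the Repair button, it can help to confirm from the shell that the mirror really is degraded, by inspecting the kernel's md status. A minimal sketch, using a *hypothetical* `/proc/mdstat` excerpt (device names and block counts below are invented for illustration):

```shell
# Hypothetical /proc/mdstat excerpt for a degraded RAID1: "[2/1]" means
# 2 members expected but only 1 active, and "[U_]" marks the missing one.
# On the real NAS you would inspect the live file with:  cat /proc/mdstat
mdstat_sample='md2 : active raid1 nvme0n1p3[0]
      976058624 blocks super 1.2 [2/1] [U_]'

# Extract the "expected/active" member counts and compare them
counts=$(printf '%s\n' "$mdstat_sample" | grep -o '\[[0-9]*/[0-9]*\]' | tr -d '[]')
expected=${counts%/*}
active=${counts#*/}
if [ "$expected" != "$active" ]; then
    echo "array is degraded: $active of $expected members active"
fi
```

On a healthy mirror the bracketed counts match (e.g. `[2/2] [UU]`) and nothing is printed.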

```
Task: syno_hdd_db
Start time: Fri, 21 Nov 2025 19:14:35 GMT
Stop time: Fri, 21 Nov 2025 19:14:48 GMT
Current status: 0 (Normal)
Standard output/error:
Synology_HDD_db v3.6.111
DS423+ x86_64 DSM 7.3.1-86003-1
StorageManager 1.0.0-01026

ds423+_host_v7 version 8042

Using options: -nr --autoupdate=3
Running from: /volume1/homes/xxxx/scripts/syno_hdd_db.sh

WARNING Don't store this script on an NVMe volume!

HDD/SSD models found: 1
WD160EDGZ-11B2DA0,85.00A85,16000 GB

M.2 drive models found: 1
WD Red SN700 1000GB,111150WD,1000 GB

No M.2 PCIe cards found

No Expansion Units found

Backed up ds423+_host_v7.db.new
WD160EDGZ-11B2DA0 already exists in ds220+_host_v7.db
WD160EDGZ-11B2DA0 already exists in ds720+_host_v7.db
Added WD160EDGZ-11B2DA0 to ds423+_host_v7.db
Edited unverified drives in ds423+_host_v7.db
Added WD160EDGZ-11B2DA0 to ds423+_host_v7.db.new
Edited unverified drives in ds423+_host_v7.db.new
WD Red SN700 1000GB already exists in ds220+_host_v7.db
WD Red SN700 1000GB already exists in ds720+_host_v7.db
Added WD Red SN700 1000GB to ds423+_host_v7.db
Added WD Red SN700 1000GB to ds423+_host_v7.db.new

Backed up synoinfo.conf

Support disk compatibility already enabled.

Disabled support memory compatibility.

Set max memory to 18 GB.

NVMe support already enabled.

M.2 volume support already enabled.

Disabled drive db auto updates.

DSM successfully checked disk compatibility.

You may need to reboot the Synology to see the changes.
```

kerem closed this issue 2026-03-07 19:16:49 +03:00

@Kia0ra commented on GitHub (Nov 21, 2025):

Here's a possible clue: when I previously ran the script under 7.2.2, two M.2 drives (same model, different firmware) were detected:

```
HDD/SSD models found: 1
WD160EDGZ-11B2DA0,85.00A85,16000 GB

M.2 drive models found: 2
WD Red SN700 1000GB,111130WD,1000 GB
WD Red SN700 1000GB,111150WD,1000 GB
```


@007revad commented on GitHub (Nov 24, 2025):

What does the following command return?

```
ls /dev/nvme*
```

@Kia0ra commented on GitHub (Nov 24, 2025):

Hi,

I finally worked up the nerve to press the "Repair" button, and everything went back to normal. It took a few hours to resynchronize the data between the two NVMe SSDs, but everything is working fine now.
For what it's worth, here is the command output:

```
xxxx@nas:~$ ls /dev/nvme*
/dev/nvme0    /dev/nvme0n1p1  /dev/nvme0n1p3  /dev/nvme1n1    /dev/nvme1n1p2
/dev/nvme0n1  /dev/nvme0n1p2  /dev/nvme1      /dev/nvme1n1p1  /dev/nvme1n1p3
```

Thanks anyway :)
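For anyone following along: while the repair runs, the resync progress shown in Storage Manager can also be watched from the shell via `/proc/mdstat`. A minimal sketch with a *hypothetical* excerpt (percentage, speed, and device names invented for illustration):

```shell
# Hypothetical /proc/mdstat excerpt during a RAID1 rebuild; on the NAS
# you would watch the live file instead, e.g.:  watch cat /proc/mdstat
mdstat_sample='md2 : active raid1 nvme1n1p3[2] nvme0n1p3[0]
      976058624 blocks super 1.2 [2/1] [U_]
      [=======>.............]  recovery = 37.2% (363164032/976058624) finish=51.2min speed=199508K/sec'

# Pull out just the recovery percentage
progress=$(printf '%s\n' "$mdstat_sample" | grep -o 'recovery = [0-9.]*%')
echo "$progress"
```

Once the rebuild finishes, the `recovery = ...` line disappears and the member state returns to `[2/2] [UU]`.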
