mirror of
https://github.com/007revad/Synology_HDD_db.git
synced 2026-04-25 21:55:59 +03:00
[GH-ISSUE #529] The update to 7.3.1 did not go well #686
Originally created by @Kia0ra on GitHub (Nov 21, 2025).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/529
Hello,
Everything was working fine in 7.2.2, but after updating to 7.3.1, the SSD volume degraded.
I haven't tried to repair it yet. Is there a safe procedure to follow so that I don't lose everything?
I have the script set to run at every startup, and it continued to do so without error after the update.
I suspect that one of the RAID1 disks failed during the update:

```
Task: syno_hdd_db
Start time: Fri, 21 Nov 2025 19:14:35 GMT
Stop time: Fri, 21 Nov 2025 19:14:48 GMT
Current state: 0 (Normal)
Standard output/error:
Synology_HDD_db v3.6.111
DS423+ x86_64 DSM 7.3.1-86003-1
StorageManager 1.0.0-01026
ds423+_host_v7 version 8042
Using options: -nr --autoupdate=3
Running from: /volume1/homes/xxxx/scripts/syno_hdd_db.sh
WARNING Don't store this script on an NVMe volume!
HDD/SSD models found: 1
WD160EDGZ-11B2DA0,85.00A85,16000 GB
M.2 drive models found: 1
WD Red SN700 1000GB,111150WD,1000 GB
No M.2 PCIe cards found
No Expansion Units found
Backed up ds423+_host_v7.db.new
WD160EDGZ-11B2DA0 already exists in ds220+_host_v7.db
WD160EDGZ-11B2DA0 already exists in ds720+_host_v7.db
Added WD160EDGZ-11B2DA0 to ds423+_host_v7.db
Edited unverified drives in ds423+_host_v7.db
Added WD160EDGZ-11B2DA0 to ds423+_host_v7.db.new
Edited unverified drives in ds423+_host_v7.db.new
WD Red SN700 1000GB already exists in ds220+_host_v7.db
WD Red SN700 1000GB already exists in ds720+_host_v7.db
Added WD Red SN700 1000GB to ds423+_host_v7.db
Added WD Red SN700 1000GB to ds423+_host_v7.db.new
Backed up synoinfo.conf
Support disk compatibility already enabled.
Disabled support memory compatibility.
Set max memory to 18 GB.
NVMe support already enabled.
M.2 volume support already enabled.
Disabled drive db auto updates.
DSM successfully checked disk compatibility.
You may need to reboot the Synology to see the changes.
```
@Kia0ra commented on GitHub (Nov 21, 2025):
Here's a possible clue: when I previously ran the script under 7.2.2, two M.2 drives were detected (only one shows now):

```
HDD/SSD models found: 1
WD160EDGZ-11B2DA0,85.00A85,16000 GB
M.2 drive models found: 2
WD Red SN700 1000GB,111130WD,1000 GB
WD Red SN700 1000GB,111150WD,1000 GB
```
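[Editor's note: a drive count dropping from 2 to 1 like this usually means one RAID1 member is no longer visible to the kernel. A minimal diagnostic sketch for confirming that, assuming DSM's standard Linux md tooling; the `/dev/md3` device name is purely illustrative, and the commands degrade gracefully if md/NVMe is absent:]

```shell
# A degraded md RAID1 array shows a member list like [U_] instead of [UU].
cat /proc/mdstat 2>/dev/null || echo "mdstat not available on this system"

# List the NVMe namespaces the kernel currently sees (one device node each):
ls /dev/nvme*n1 2>/dev/null || echo "no NVMe namespaces visible"

# For full member state of a specific array, run as root (/dev/md3 is a guess):
# mdadm --detail /dev/md3
```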
@007revad commented on GitHub (Nov 24, 2025):
What does the following command return?
@Kia0ra commented on GitHub (Nov 24, 2025):
Hi,
I finally worked up the nerve to press the “repair” button, and everything went back to normal. It took a few hours to resynchronize the data between the two NVMe SSDs, but everything is working fine now.
For what it's worth, here is the command output:
Thanks anyway :)