mirror of
https://github.com/007revad/Synology_HDD_db.git
synced 2026-04-25 13:45:59 +03:00
[GH-ISSUE #535] Suggestions for improvements: NVMe volume persistence checks and DSM 7.3+ NIC handling improvements #691
Originally created by @aferende on GitHub (Dec 6, 2025).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/535
Originally assigned to: @007revad on GitHub.
Hi, first of all thank you for your great work on these scripts.
I am using `syno_hdd_db.sh` on a DS1821+ with an official E10M20-T1 card and NVMe volumes, and everything works correctly thanks to your patches.

During detailed diagnostics on DSM 7.3.2, I noticed a few areas where the script could be improved, or where additional checks would help users avoid problems, especially on Ryzen-based Synology models (DS1621+, DS1821+, RS1221+, etc.).
This issue is not about a bug, but about possible enhancements.
1. DSM 7.3+ network persistence issues with Aquantia AQC107 (E10M20-T1)
On DSM 7.3+, the system partially regenerates the network configuration during major updates or network restarts.
As a result, the Aquantia NIC (10GbE on E10M20-T1) may appear as a new network interface after updates, even if the MAC address remains the same.
This causes problems such as the NIC reappearing under a new interface name (e.g. `ovs_eth4`).

This is not caused by `syno_hdd_db.sh`, but the script could optionally detect this situation or warn when DSM is likely to recreate the interface.

Possible enhancements:

- detect the Aquantia AQC107 by its PCI ID (`1d6a:07b1`)
- optionally install a small boot script in `/usr/local/etc/rc.d/` to preserve the NIC MAC/interface mapping

This could help many users who experience disappearing NICs after updates.
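As a rough illustration of the first enhancement, a detection step might look like the shell sketch below. The PCI ID comes from this issue; the function name and warning text are hypothetical, not part of `syno_hdd_db.sh`:

```shell
#!/usr/bin/env bash
# Hedged sketch: warn if an Aquantia AQC107 NIC is present, since
# DSM 7.3+ may recreate its interface after a major update.

aqc107_present() {
    # lspci -n prints numeric vendor:device IDs; 1d6a:07b1 = AQC107
    lspci -n 2>/dev/null | grep -qi '1d6a:07b1'
}

if aqc107_present; then
    echo "WARNING: Aquantia AQC107 (E10M20-T1 10GbE) detected." >&2
    echo "DSM 7.3+ may recreate this interface after a major update;" >&2
    echo "check your network configuration afterwards." >&2
fi
```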
2. NVMe volume safety checks
Users who already have NVMe volumes (not cache) on E10M20-T1 or on PCIe adapters would benefit from additional diagnostics.
Currently the script patches the compatibility tables correctly, but it does not explicitly check that `synostorage`, `model.dtb`, and the runtime device tree are consistent with each other.

Adding such checks would increase safety for users running VMM or Docker on NVMe volumes.
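A minimal sketch of such a consistency check, assuming DSM copies `/etc/model.dtb` to `/run/model.dtb` at boot (the maintainer confirms this later in the thread); the function name is hypothetical:

```shell
#!/usr/bin/env bash
# Hedged sketch: verify the patched on-disk device tree matches the one
# DSM loaded at boot. A mismatch means a reboot is still needed.

dtb_consistent() {
    local on_disk="${1:-/etc/model.dtb}" runtime="${2:-/run/model.dtb}"
    # Nothing to compare if either file is missing
    [ -f "$on_disk" ] && [ -f "$runtime" ] || return 2
    cmp -s "$on_disk" "$runtime"
}

if ! dtb_consistent; then
    echo "WARNING: /etc/model.dtb differs from /run/model.dtb (or is missing)." >&2
    echo "Reboot so DSM reloads the patched device tree." >&2
fi
```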
3. Warning for read-only system paths under DSM 7.3+
Some DSM 7.3+ system paths (e.g. `/usr/syno/etc/network/`, `/etc.defaults/`, the runtime `model.dtb`) are mounted from squashfs and cannot be modified even after `mount -o remount,rw /`.

Suggested enhancement: detect this condition and warn the user, so they understand why certain patches cannot be applied.
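One hedged way to implement such a check is a simple write probe, which catches squashfs and other read-only mounts regardless of what the mount flags report. The function name and probed paths are illustrative, not taken from the script:

```shell
#!/usr/bin/env bash
# Hedged sketch: check whether a directory is effectively writable
# before trying to patch files under it.

path_writable() {
    local dir="$1" probe
    # Try to create and remove a temp file; fails on squashfs and other
    # read-only mounts even when "mount -o remount,rw /" succeeded.
    probe=$(mktemp "$dir/.rw_probe.XXXXXX" 2>/dev/null) || return 1
    rm -f "$probe"
}

for p in /usr/syno/etc/network /etc.defaults; do
    if [ -d "$p" ] && ! path_writable "$p"; then
        echo "NOTE: $p is read-only (likely squashfs); patches there will not apply." >&2
    fi
done
```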
4. Optional automatic safety check: prevent running the script from NVMe volumes
Although the script warns users not to store it on NVMe volumes, many miss the message. A simple automatic check could prevent this misconfiguration and avoid a common user mistake.
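A hedged sketch of such a check; the NVMe detection via `df` is a simplification, and the function name is hypothetical rather than how `syno_hdd_db.sh` actually does it:

```shell
#!/usr/bin/env bash
# Hedged sketch: detect whether a path is backed by an NVMe device.

on_nvme_volume() {
    # df -P prints the backing filesystem device in column 1 of line 2
    local dev
    dev="$(df -P "$1" | awk 'NR==2 {print $1}')"
    case "$dev" in
        /dev/nvme*) return 0 ;;
        *)          return 1 ;;
    esac
}

# Usage near the top of the script would then be:
#   if on_nvme_volume "$(dirname "$0")"; then
#       echo "ERROR: script is stored on an NVMe volume." >&2
#       echo "Move it to an HDD volume and run it from there." >&2
#       exit 1
#   fi
```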
Summary
The script works perfectly on my DS1821+ and has been essential to enable NVMe volumes.
These suggestions aim to improve safety, clarity, and resilience for DSM 7.3+ users, especially those running NVMe volumes on an E10M20-T1 or similar adapters.
If these ideas are useful, I'd be happy to help test future enhancements.
Thanks again for your excellent work and for maintaining this project!
@007revad commented on GitHub (Dec 11, 2025):
Thanks @aferende I'll look at adding all of these.
1.
After a major DSM update I always lose the network connection for my E10M20-T1 and have to connect the Ethernet cable to one of the 1GbE ports so I can do a clean reboot from the GUI. I don't currently have any VMs or have Virtual Switch setup so haven't experienced those issues personally.
2.
Syno_hdd_db does check and patch model.dtb in the dts_m2_card() function if the NVMe drives are in an E10M20-T1, M2D20 or M2D18. But I'm not sure if it works, because I still run syno_enable_m2_card.
DSM only copies /etc/model.dtb to /run/model.dtb during boot. Which is why the script says you may need to reboot.
3.
The only patches done in /etc.defaults/ are also done in /etc/, and /etc is the location that matters for editing files; patching the same files in /etc.defaults is only done as a precaution.
4.
syno_hdd_db (and my other NVMe related scripts) already check and warn if the script is located on an NVMe volume. I should include `exit 1` after the warning.

Extra.
There's also another issue that I've been thinking of how to prevent. When my NVMe volume on an E10M20-T1 (in my DS1821+) is not mounted until after a 2nd reboot (after a major DSM update that replaces model.dtb), the packages installed on the NVMe volume get automatically "repaired" to the HDD volume. To try to prevent this, I am working on a companion script named syno_hdd_shutdown.sh, scheduled to run as root at shutdown. This syno_hdd_shutdown does the following:
I intend to update syno_hdd_db to:
@007revad commented on GitHub (Dec 11, 2025):
I just realised that the syno_hdd_shutdown script should also do the same for packages installed on drives in an unsupported expansion unit.