[GH-ISSUE #535] Suggestions for improvements: NVMe volume persistence checks and DSM 7.3+ NIC handling improvements #691

Open
opened 2026-03-11 13:11:11 +03:00 by kerem · 2 comments

Originally created by @aferende on GitHub (Dec 6, 2025).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/535

Originally assigned to: @007revad on GitHub.

Suggestions for improvements: NVMe volume persistence checks and DSM 7.3+ NIC handling improvements

Hi, first of all thank you for your great work on these scripts.
I am using syno_hdd_db.sh on a DS1821+ with an official E10M20-T1 card and NVMe volumes, and everything works correctly thanks to your patches.

During detailed diagnostics on DSM 7.3.2, I noticed a few areas where the script could be improved or where additional checks would help users avoid problems, especially on Ryzen-based Synology models (DS1621+, DS1821+, RS1221+, etc.).

This issue is not about a bug, but about possible enhancements.


1. DSM 7.3+ network persistence issues with Aquantia AQC107 (E10M20-T1)

On DSM 7.3+, the system partially regenerates the network configuration during major updates or network restarts.
As a result, the Aquantia NIC (10GbE on E10M20-T1) may appear as a new network interface after updates, even if the MAC address remains the same.

This causes:

  • VMM losing the assigned LAN 5 interface
  • Virtual Switch losing the binding to ovs_eth4
  • DSM generating a new internal UUID for the NIC

This is not caused by syno_hdd_db.sh, but the script could optionally detect or warn when DSM is likely to recreate the interface.

Possible enhancements:

  • Detect Aquantia AQC107 PCI IDs (1d6a:07b1)
  • Check whether a persistent NIC mapping is in place
  • Warn users if the network configuration may be overridden by DSM on next upgrade
  • Optionally generate a helper script under /usr/local/etc/rc.d/ to preserve NIC MAC/interface mapping

This could help many users who experience disappearing NICs after updates.
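
As a rough illustration, the detection step could look something like this (lspci and the /sys network paths are standard on DSM; the mapping-file location is just a placeholder):

echo "Checking for Aquantia AQC107 (E10M20-T1 10GbE)..."
# PCI vendor:device 1d6a:07b1 is the AQC107; lspci is assumed to be available
if lspci -nn 2>/dev/null | grep -qi "1d6a:07b1"; then
    echo "AQC107 detected - recording the current interface/MAC mapping"
    for nic in /sys/class/net/eth*; do
        [ -e "$nic" ] || continue
        printf '%s %s\n' "$(basename "$nic")" "$(cat "$nic/address")"
    done > /usr/local/etc/nic_mapping.snapshot    # placeholder path; compare after the next DSM update
fi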


2. NVMe volume safety checks

Users who already have NVMe volumes (not cache) on E10M20-T1 or on PCIe adapters would benefit from additional diagnostics.

Currently the script patches compatibility tables correctly, but does not explicitly check:

  • whether NVMe volumes already exist
  • whether NVMe devices are mapped correctly in synostorage
  • whether DSM will treat NVMe drives as "cache only" after upgrades
  • whether model.dtb and runtime device tree are consistent

Suggested improvements:

  • Add an "NVMe Volume Status" section
  • List existing NVMe arrays and partitions
  • Warn if the user is running full NVMe volumes without -p
  • Check for missing DTB entries for E10M20-T1 and warn if needed

This would increase safety for users running VMM or Docker on NVMe volumes.
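
As a sketch of what the "NVMe Volume Status" section could report, using only /proc and /sys (the output format is illustrative):

echo "--- NVMe Volume Status ---"
# List NVMe block devices and their models
for dev in /sys/block/nvme*n1; do
    [ -e "$dev" ] || continue
    echo "NVMe device: /dev/$(basename "$dev") ($(cat "$dev/device/model" 2>/dev/null))"
done
# Show any md arrays that include NVMe members (i.e. NVMe storage pools)
grep nvme /proc/mdstat 2>/dev/null || echo "No md arrays with NVMe members found"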


3. Warning for read-only system paths under DSM 7.3+

Some DSM 7.3+ system paths (e.g. /usr/syno/etc/network/, /etc.defaults/, runtime model.dtb) are mounted from squashfs and cannot be modified even after mount -o remount,rw /.

A check for this condition could help users understand why certain patches cannot be applied.

Suggested enhancement:

  • Detect filesystem immutability (RO) and warn the user
  • Clarify which patches are impossible on DSM 7.3+ due to read-only system partitions
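
A minimal sketch of such a check, using only df, awk and /proc/mounts (the path tested below is just an example):

check_ro() {
    # Report whether the filesystem holding "$1" is mounted read-only
    local mp
    mp=$(df -P "$1" 2>/dev/null | awk 'NR==2 {print $NF}')
    if awk -v mp="$mp" '$2 == mp {print $4}' /proc/mounts | grep -qw ro; then
        echo "WARNING: $1 is on a read-only filesystem ($mp); patches there will not persist"
    fi
}
check_ro /usr/syno/etc/network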

4. Optional automatic safety check: prevent running the script from NVMe volumes

Although the script warns users not to store it on NVMe volumes, many miss the message.
A simple automatic check could prevent misconfiguration:

echo "Checking script location..."
if mount | grep -q "$(dirname "$0")" | grep nvme; then
    echo "ERROR: Please do not run this script from an NVMe volume." >&2
    exit 1
fi

This would avoid a common user mistake.


Summary

The script works perfectly on my DS1821+ and has been essential to enable NVMe volumes.
These suggestions aim to improve safety, clarity, and resilience for DSM 7.3+ users, especially those using:

  • official Synology PCIe cards with NVMe
  • Aquantia 10GbE NICs
  • VMM or Docker on NVMe volumes
  • non-standard hardware configurations

If these ideas are useful, I'd be happy to help test future enhancements.

Thanks again for your excellent work and for maintaining this project!


@007revad commented on GitHub (Dec 11, 2025):

Thanks @aferende, I'll look at adding all of these.

1.

After a major DSM update I always lose the network connection for my E10M20-T1 and have to connect the Ethernet cable to one of the 1GbE ports so I can do a clean reboot from the GUI. I don't currently have any VMs or a Virtual Switch set up, so I haven't experienced those issues personally.

2.

Syno_hdd_db does check and patch model.dtb in the dts_m2_card() function if the NVMe drives are in an E10M20-T1, M2D20 or M2D18. But I'm not sure if it works because I still run syno_enable_m2_card.

DSM only copies /etc/model.dtb to /run/model.dtb during boot, which is why the script says you may need to reboot.
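
A quick way to confirm which copy is active (assuming md5sum is available, as on stock DSM):

# If the checksums differ, /run/model.dtb is still the pre-patch copy and a reboot is needed
if [ "$(md5sum < /etc/model.dtb)" = "$(md5sum < /run/model.dtb)" ]; then
    echo "/run/model.dtb matches /etc/model.dtb - the patched device tree is active"
else
    echo "/run/model.dtb differs from /etc/model.dtb - reboot for the patch to take effect"
fi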

3.

The only patches done in /etc.defaults/ are also done in /etc/, and /etc is the important location for editing files; editing the same files in /etc.defaults is only done as a precaution.

4.

syno_hdd_db (and my other NVMe-related scripts) already check and warn if the script is located on an NVMe volume. I should include exit 1 after the warning.

Extra.

There's also another issue that I've been thinking about how to prevent. When my NVMe volume on an E10M20-T1 (in my DS1821+) is not mounted until after a 2nd reboot (after a major DSM update that replaces model.dtb), the packages installed on the NVMe volume get automatically "repaired" to the HDD volume. To try to prevent this I am working on a companion script named syno_hdd_shutdown.sh, scheduled to run as root at shutdown. syno_hdd_shutdown does the following:

  1. Creates an array of packages that are installed on an NVMe volume.
  2. Stops those packages.
  3. Creates a log of which packages it stopped.

I intend to update syno_hdd_db to:

  1. Check if the log exists and is not empty.
  2. Check if the NVMe volume is readable.
  3. If yes to both 1 and 2 then start the packages listed in the log, then delete the log.
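
A rough sketch of how the shutdown side could work, assuming the usual DSM layout where /var/packages/<pkg>/target links into the install volume and synopkg can stop packages (the log path and the hard-coded NVMe volume are placeholders):

log=/var/log/syno_hdd_shutdown_stopped.log
: > "$log"
nvme_volume="/volume2"    # placeholder: the volume known to be on NVMe drives
for target in /var/packages/*/target; do
    [ -L "$target" ] || continue
    pkg=$(basename "$(dirname "$target")")
    case "$(readlink -f "$target")" in
        "$nvme_volume"/*)
            echo "Stopping $pkg (installed under $nvme_volume)"
            synopkg stop "$pkg" >/dev/null 2>&1 && echo "$pkg" >> "$log"
            ;;
    esac
done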

@007revad commented on GitHub (Dec 11, 2025):

I just realised that the syno_hdd_shutdown script should also do the same for packages installed on drives in an unsupported expansion unit.
