[GH-ISSUE #521] -S (write_mostly) parameter seems ineffective when SSDs are NVMe #180

Closed
opened 2026-03-07 19:16:46 +03:00 by kerem · 3 comments
Owner

Originally created by @foxh1s on GitHub (Oct 29, 2025).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/521

Hello,

I suspect that the -S parameter (for enabling write_mostly) is not correctly processing my NVMe drives as internal SSDs.

My setup details:

  • Model: Synology DS920+
  • Drives: 4x 18TB HDDs (SATA) and 2x M.2 SSDs (NVMe)
  • Command run: /opt/Synology_HDD_db/syno_hdd_db.sh -n -r -e -S --autoupdate=15

If write_mostly were successful, I read that the HDDs should show a (W) identifier in /proc/mdstat (e.g., as discussed [here](https://www.techspark.de/speed-up-synology-dsm-with-hdd-ssd/)). I'm not seeing this identifier, and the debug output suggests the NVMe drives are not being detected as internal SSDs.
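For anyone following along, a quick read-only check for the flag (assuming a Linux system exposing /proc/mdstat, as DSM does):

```shell
# Print md array lines whose member list carries the (W) write-mostly marker.
# Does nothing on systems without the md driver / /proc/mdstat.
if [ -r /proc/mdstat ]; then
    grep '(W)' /proc/mdstat || echo "no write_mostly members found"
fi
```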

📄 Output of cat /proc/mdstat

root@MyNAS:~# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1] 
md4 : active raid1 nvme0n1p5[0]
      477659584 blocks super 1.2 [1/1] [U]
      
md3 : active raid1 nvme1n1p1[0]
      488381952 blocks super 1.2 [1/1] [U]
      
md2 : active raid6 sata3p5[0] sata2p5[3] sata1p5[2] sata4p5[1]
      35135197056 blocks super 1.2 level 6, 64k chunk, algorithm 2 [4/4] [UUUU]
      
md1 : active raid1 sata3p2[0] sata2p2[3] sata1p2[2] sata4p2[1]
      2097088 blocks [4/4] [UUUU]
      
md0 : active raid1 sata3p1[0] sata2p1[3] sata1p1[2] sata4p1[1]
      8388544 blocks [4/4] [UUUU]
      
unused devices: <none>

📄 Output of ./writemostly_debug.sh

root@MyNAS:/opt# ./writemostly_debug.sh 
internal_drives: sata1 sata2 sata3 sata4
internal_drives_qty: 4
idrive: sata1
idrive: sata2
idrive: sata3
idrive: sata4
internal_ssd_qty: 0
internal_hdd_qty: 4
internal_hdds: sata1 sata2 sata3 sata4

As shown in the debug output, internal_ssd_qty is 0, indicating the two NVMe drives (nvme0n1, nvme1n1 seen in mdstat) are not being counted as internal SSDs for the write_mostly logic.
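For context, the usual Linux-side signal distinguishing HDDs from SSDs is the kernel's rotational queue flag; this sketch is only an illustration of that mechanism, not the script's actual detection logic, which may differ:

```shell
# Classify block devices by the kernel's rotational flag:
# 1 = spinning disk (HDD), 0 = non-rotational (SATA SSD or NVMe).
for flag in /sys/block/*/queue/rotational; do
    [ -e "$flag" ] || continue          # skip if the glob matched nothing
    dev=${flag%/queue/rotational}       # strip the trailing path ...
    dev=${dev##*/}                      # ... leaving just the device name
    case $(cat "$flag") in
        1) echo "$dev: HDD" ;;
        0) echo "$dev: SSD/NVMe" ;;
    esac
done
```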

Additional Information Required for Debugging

Please let me know if you require any additional logs or command outputs to help diagnose the NVMe detection issue!

Thank you!

❓ Additional Information Required for Debugging
kerem closed this issue 2026-03-07 19:16:46 +03:00
Author
Owner

@007revad commented on GitHub (Oct 29, 2025):

write_mostly only works for drives that have the DSM system and swap partitions.

NVMe drives, and drives in an expansion unit, have the partitions but they're empty. DSM only installs on HDDs and 2.5 inch SATA SSDs installed in the NAS.

Author
Owner

@foxh1s commented on GitHub (Oct 29, 2025):

> write_mostly only works for drives that have the DSM system and swap partitions.
>
> NVMe drives, and drives in an expansion unit, have the partitions but they're empty. DSM only installs on HDDs and 2.5 inch SATA SSDs installed in the NAS.

I finally manually located the device, specifically nvme1n1, and ran the script by explicitly passing it as the --ssd argument.

The command executed was:

/opt/Synology_HDD_db/syno_hdd_db.sh -n -r -e --ssd=nvme1n1 --autoupdate=15

This successfully caused the (W) identifier to appear, confirming that write_mostly is now enabled on the underlying system partitions (md0 and md1).

📄 Updated Output of cat /proc/mdstat

root@MyNAS:~# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1] 
md4 : active raid1 nvme0n1p5[0]
      477659584 blocks super 1.2 [1/1] [U]
      
md3 : active raid1 nvme1n1p1[0]
      488381952 blocks super 1.2 [1/1] [U]
      
md2 : active raid6 sata3p5[0] sata2p5[3] sata1p5[2] sata4p5[1]
      35135197056 blocks super 1.2 level 6, 64k chunk, algorithm 2 [4/4] [UUUU]
      
md1 : active raid1 sata3p2[0](W) sata2p2[3](W) sata1p2[2](W) sata4p2[1](W)
      2097088 blocks [4/4] [UUUU]
      
md0 : active raid1 sata3p1[0](W) sata2p1[3](W) sata1p1[2](W) sata4p1[1](W)
      8388544 blocks [4/4] [UUUU]
      
unused devices: <none>

I apologize for missing your previous reply. Are there any potential issues with the results shown above? Thank you again!

Author
Owner

@007revad commented on GitHub (Oct 30, 2025):

write_mostly is set for your HDDs (sata1, sata2, sata3, and sata4) in md0 and md1. Because there are no 2.5 inch SATA SSDs in md0 or md1, it won't make any difference.

I don't think it will hurt to leave them like that. If you want to remove (W) from the HDDs, run:

sudo /volume1/scripts/syno_hdd_db.sh --restore --ssd=restore

Change "/volume1/scripts/" to where syno_hdd_db.sh is located.
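For reference, the write-mostly flag can also be inspected and toggled per member directly through the md sysfs interface — the generic kernel mechanism, which the script presumably drives internally. A sketch, using the md0/sata1p1 names from this thread (adjust to your system):

```shell
# Inspect and toggle the write-mostly flag on one md array member.
# Device names (md0, sata1p1) are taken from this thread; adjust to yours.
member=/sys/block/md0/md/dev-sata1p1/state
if [ -w "$member" ]; then
    cat "$member"                     # flags, e.g. "in_sync,write_mostly"
    echo -writemostly > "$member"     # clear the flag ("writemostly" sets it)
    cat "$member"
else
    echo "md member sysfs node not present on this system"
fi
```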
