[GH-ISSUE #340] Option -S for enabling write_mostly does not work #618

Closed
opened 2026-03-11 12:41:04 +03:00 by kerem · 9 comments
Owner

Originally created by @ThomasGoering on GitHub (Aug 14, 2024).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/340

My DS920+ has two internal 14TB HDDs, two internal 2TB SSDs and two 1TB M.2 drives; the DX517 expansion unit has three 12TB HDDs.

I'm running your script with the -S option to enable write_mostly for the internal SSDs, but I noticed there was no output confirming that write_mostly was set. This is the output of the script (including some TEST_DEBUG output that I inserted):

```
Synology_HDD_db v3.5.97
DS920+ DSM 7.2.1-69057-5 
StorageManager 1.0.0-0017

ds920+_host_v7 version 8053

ds920+_host version 4021

Using options: --noupdate --ram -S --email
Running from: /volume3/scripts/syno_test.sh

HDD/SSD models found: 4
Red SA500 2.5 2TB,540400WD,2000 GB
WD120EFAX-68UNTN0,81.00A81,11999 GB
WD120EFBX-68B0EN0,85.00A85,11999 GB
WD140EFFX-68VBXN0,81.00A81,14000 GB

M.2 drive models found: 2
WD Red SN700 1000GB,111130WD,1000 GB
WD Red SN700 1000GB,111150WD,1000 GB

No M.2 PCIe cards found

Expansion Unit models found: 1
DX517

Red SA500 2.5 2TB already exists in ds920+_host_v7.db
Red SA500 2.5 2TB already exists in ds920+_host.db
Red SA500 2.5 2TB already exists in ds920+_host.db.new
Red SA500 2.5 2TB already exists in dx517_v7.db
Red SA500 2.5 2TB already exists in dx517.db
Red SA500 2.5 2TB already exists in dx517.db.new
WD120EFAX-68UNTN0 already exists in ds920+_host_v7.db
WD120EFAX-68UNTN0 already exists in ds920+_host.db
WD120EFAX-68UNTN0 already exists in ds920+_host.db.new
WD120EFAX-68UNTN0 already exists in dx517_v7.db
WD120EFAX-68UNTN0 already exists in dx517.db
WD120EFAX-68UNTN0 already exists in dx517.db.new
WD120EFBX-68B0EN0 already exists in ds920+_host_v7.db
WD120EFBX-68B0EN0 already exists in ds920+_host.db
WD120EFBX-68B0EN0 already exists in ds920+_host.db.new
WD120EFBX-68B0EN0 already exists in dx517_v7.db
WD120EFBX-68B0EN0 already exists in dx517.db
WD120EFBX-68B0EN0 already exists in dx517.db.new
WD140EFFX-68VBXN0 already exists in ds920+_host_v7.db
WD140EFFX-68VBXN0 already exists in ds920+_host.db
WD140EFFX-68VBXN0 already exists in ds920+_host.db.new
WD140EFFX-68VBXN0 already exists in dx517_v7.db
WD140EFFX-68VBXN0 already exists in dx517.db
WD140EFFX-68VBXN0 already exists in dx517.db.new
Updated WD Red SN700 1000GB in ds920+_host_v7.db
WD Red SN700 1000GB already exists in ds920+_host.db
WD Red SN700 1000GB already exists in ds920+_host.db.new
Updated WD Red SN700 1000GB in ds920+_host_v7.db
WD Red SN700 1000GB already exists in ds920+_host.db
WD Red SN700 1000GB already exists in ds920+_host.db.new
TEST_DEBUG: idrive=sata1, internal_drive=
TEST_DEBUG: is_ssd
TEST_DEBUG: idrive=sata2, internal_drive=
TEST_DEBUG: is_ssd
TEST_DEBUG: idrive=sata3, internal_drive=
TEST_DEBUG: is_ssd
TEST_DEBUG: idrive=sata4, internal_drive=
TEST_DEBUG: is_ssd
TEST_DEBUG: internal_ssd_qty=4
TEST_DEBUG: internal_hdds=

Support disk compatibility already enabled.

Support memory compatibility already disabled.

Max memory already set to 20 GB.

NVMe support already enabled.

M.2 volume support already enabled.

Drive db auto updates already disabled.

DSM successfully checked disk compatibility.

You may need to reboot the Synology to see the changes.
```

The debug outputs were inserted in the else part after line 1755. The `TEST_DEBUG: idrive=sata1, internal_drive=` line is printed right after `internal_drive="$(echo "$idrive" | awk '{printf $4}')"` by this statement: `echo "TEST_DEBUG: idrive=$idrive, internal_drive=$internal_drive"`.
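
For reference, here is a minimal reproduction outside the script of what I think is happening (assuming `$idrive` holds just the bare device name, which is what the TEST_DEBUG output shows): `sata1` is a single field, so `awk '{printf $4}'` has nothing to print.

```bash
#!/usr/bin/env bash
# Minimal reproduction; assumes $idrive holds a bare device name like "sata1".
idrive="sata1"

# "sata1" is a single whitespace-separated field, so field 4 is empty.
internal_drive="$(echo "$idrive" | awk '{printf $4}')"
echo "internal_drive='${internal_drive}'"   # prints: internal_drive=''

# Building the device path directly from $idrive avoids the problem:
echo "/dev/${idrive}"                       # prints: /dev/sata1
```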

It looks like `internal_drive` is not expected to be empty. Am I missing something, or do you need more details?

kerem closed this issue 2026-03-11 12:41:10 +03:00
Author
Owner

@007revad commented on GitHub (Aug 15, 2024):

I believe I've found the issue. Can you test the fix?

Change lines 1759 and 1760 from this:

            internal_drive="$(echo "$idrive" | awk '{printf $4}')"
            if synodisk --isssd "$internal_drive" >/dev/null; then

to this:

            #internal_drive="$(echo "$idrive" | awk '{printf $4}')"
            if synodisk --isssd /dev/"${idrive:?}" >/dev/null; then

and change line 1811 from this:

                internal_hdds+=("$internal_drive")

to this:

                internal_hdds+=("$idrive")
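
One note on the `${idrive:?}` part of the fix, in case it looks odd: the `:?` expansion makes bash abort with an error if `idrive` is ever unset or empty, instead of silently calling synodisk against a bare /dev/. A tiny illustration (nothing below is from the script itself):

```bash
#!/usr/bin/env bash

idrive="sata1"
echo "/dev/${idrive:?}"    # prints: /dev/sata1

idrive=""
echo "/dev/${idrive:?}"    # bash aborts here: "idrive: parameter null or not set"
echo "never reached"       # not executed
```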
Author
Owner

@007revad commented on GitHub (Aug 15, 2024):

I've released v3.5.98 which fixes this issue.

https://github.com/007revad/Synology_HDD_db/releases

Author
Owner

@ThomasGoering commented on GitHub (Aug 15, 2024):

Great, this is the new output:

```
Synology_HDD_db v3.5.98
DS920+ DSM 7.2.1-69057-5 
StorageManager 1.0.0-0017

ds920+_host_v7 version 8053

ds920+_host version 4021

Using options: --noupdate --ram -S --email
Running from: /volume3/scripts/syno_hdd_db.sh

HDD/SSD models found: 4
Red SA500 2.5 2TB,540400WD,2000 GB
WD120EFAX-68UNTN0,81.00A81,11999 GB
WD120EFBX-68B0EN0,85.00A85,11999 GB
WD140EFFX-68VBXN0,81.00A81,14000 GB

M.2 drive models found: 2
WD Red SN700 1000GB,111130WD,1000 GB
WD Red SN700 1000GB,111150WD,1000 GB

No M.2 PCIe cards found

Expansion Unit models found: 1
DX517

Added Red SA500 2.5 2TB to ds920+_host_v7.db
Edited unverified drives in ds920+_host_v7.db
Added Red SA500 2.5 2TB to ds920+_host.db
Added Red SA500 2.5 2TB to ds920+_host.db.new
Added Red SA500 2.5 2TB to dx517_v7.db
Edited unverified drives in dx517_v7.db
Added Red SA500 2.5 2TB to dx517.db
Added Red SA500 2.5 2TB to dx517.db.new
WD120EFAX-68UNTN0 already exists in ds920+_host_v7.db
WD120EFAX-68UNTN0 already exists in ds920+_host.db
WD120EFAX-68UNTN0 already exists in ds920+_host.db.new
WD120EFAX-68UNTN0 already exists in dx517_v7.db
WD120EFAX-68UNTN0 already exists in dx517.db
WD120EFAX-68UNTN0 already exists in dx517.db.new
WD120EFBX-68B0EN0 already exists in ds920+_host_v7.db
WD120EFBX-68B0EN0 already exists in ds920+_host.db
WD120EFBX-68B0EN0 already exists in ds920+_host.db.new
WD120EFBX-68B0EN0 already exists in dx517_v7.db
Added WD120EFBX-68B0EN0 to dx517.db
WD120EFBX-68B0EN0 already exists in dx517.db.new
Added WD140EFFX-68VBXN0 to ds920+_host_v7.db
Added WD140EFFX-68VBXN0 to ds920+_host.db
Added WD140EFFX-68VBXN0 to ds920+_host.db.new
WD140EFFX-68VBXN0 already exists in dx517_v7.db
WD140EFFX-68VBXN0 already exists in dx517.db
WD140EFFX-68VBXN0 already exists in dx517.db.new
Added WD Red SN700 1000GB to ds920+_host_v7.db
Added WD Red SN700 1000GB to ds920+_host.db
Added WD Red SN700 1000GB to ds920+_host.db.new
Updated WD Red SN700 1000GB in ds920+_host_v7.db
WD Red SN700 1000GB already exists in ds920+_host.db
WD Red SN700 1000GB already exists in ds920+_host.db.new

Setting internal HDDs state to write_mostly
WD140EFFX-68VBXN0
  sata3 DSM partition:  in_sync,write_mostly
  sata3 Swap partition: in_sync,write_mostly
WD140EFFX-68VBXN0
  sata4 DSM partition:  in_sync,write_mostly
  sata4 Swap partition: in_sync,write_mostly

Support disk compatibility already enabled.

Disabled support memory compatibility.

Set max memory to 20 GB.

NVMe support already enabled.

Enabled M.2 volume support.

Disabled drive db auto updates.

DSM successfully checked disk compatibility.

You may need to reboot the Synology to see the changes.
```

Looks good!

I have one more question about write_mostly. In #318 you wrote that it looks like the setting must be applied again after each boot. Is this the case?

If yes, what is the recommended way to schedule the syno_hdd_db.sh script? I ask because I find different instructions for scheduling it:

- https://github.com/007revad/Synology_HDD_db, "When to run the script": "If you have DSM set to auto update the best option is to run the script every time the Synology **boots**"
- https://github.com/007revad/Synology_enable_Deduplication, "Schedule the script to run at shutdown": "Or you can schedule both Synology_enable_Deduplication and Synology_HDD_db to run when the Synology **shuts down**, ..."

Is this difference only relevant when syno_hdd_db.sh is used together with syno_enable_dedupe.sh?

Author
Owner

@007revad commented on GitHub (Aug 15, 2024):

> Is this the case?

No.

I just disabled my boot schedule for syno_hdd_db.sh and rebooted my DS720+. After the reboot I checked both drives to see if the HDD still had write_mostly set, and it did. So the write_mostly setting survives a reboot.

sata1 is the HDD and sata2 is the SSD

```
~# cat /sys/block/md0/md/dev-sata1p1/state
in_sync,write_mostly
~# cat /sys/block/md1/md/dev-sata1p2/state
in_sync,write_mostly

~# cat /sys/block/md0/md/dev-sata2p1/state
in_sync
~# cat /sys/block/md1/md/dev-sata2p2/state
in_sync
```

I'd still schedule syno_hdd_db.sh to run at boot so you don't have to remember to run it after a DSM update or Storage Manager package update.
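
For anyone who wants to poke at this by hand: the flag is set and cleared through the same per-device state files shown above. This is standard Linux md sysfs behaviour rather than anything the script invents, and it needs root. A sketch using the device names from my example:

```bash
# Set write_mostly on the HDD's DSM (md0) and swap (md1) partitions (run as root).
echo writemostly > /sys/block/md0/md/dev-sata1p1/state
echo writemostly > /sys/block/md1/md/dev-sata1p2/state

# Clear it again:
echo "-writemostly" > /sys/block/md0/md/dev-sata1p1/state
echo "-writemostly" > /sys/block/md1/md/dev-sata1p2/state

# Verify:
cat /sys/block/md0/md/dev-sata1p1/state    # e.g. "in_sync,write_mostly" or "in_sync"
```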

Author
Owner

@ThomasGoering commented on GitHub (Aug 15, 2024):

Ok, I will schedule syno_hdd_db.sh at boot. Is scheduling syno_enable_dedupe.sh at shutdown still recommended?

I found another issue (though maybe it's intended behavior): I tried to restore all the changes that syno_hdd_db.sh made with this call:

```
/volume3/scripts/syno_hdd_db.sh --restore --ssd=restore
```

It looks like --ssd=restore is ignored when --restore is given at the same time, but --ssd=restore works when --restore is not set.
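
I don't know how the script parses its arguments, but the behaviour matches what you would get if the parser acts on --restore as soon as it reaches it and exits before a later --ssd=restore is ever seen. Purely illustrative (this is not the script's actual parser, and do_restore is a made-up placeholder):

```bash
# Illustrative only: an argument loop that exits on --restore would
# silently drop any option that comes after it.
for arg in "$@"; do
    case "$arg" in
        --ssd=restore) ssd_restore="yes" ;;
        --restore)
            do_restore      # hypothetical placeholder for the restore logic
            exit 0          # a later --ssd=restore is never processed
            ;;
    esac
done
```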

Author
Owner

@007revad commented on GitHub (Aug 15, 2024):

I've changed it so "--ssd=restore --restore" can be used together. Note: at the moment --ssd=restore must come before --restore.

https://github.com/007revad/Synology_HDD_db/releases/tag/v3.5.99-RC

Author
Owner

@ThomasGoering commented on GitHub (Aug 18, 2024):

Thanks. It works, but with two issues:

1. The CHANGES file and the description of the release state:

Changed to support "--restore --ssd=restore" to restore write_mostly when restoring all other changes. Issue https://github.com/007revad/Synology_HDD_db/issues/340

This is the wrong order of the options; in your comment above you wrote that they should be given in this order: --ssd=restore --restore

2. This is an excerpt from the output of the script using these options: --ssd=restore --restore --email

```
Restoring internal drive's state
Red SA500 2.5 2TB
  sata1 DSM partition:  in_sync
  sata1 Swap partition: in_sync
Red SA500 2.5 2TB
  sata1 DSM partition:  in_sync
  sata1 Swap partition: in_sync
Red SA500 2.5 2TB
  sata1 DSM partition:  in_sync
  sata1 Swap partition: in_sync
Red SA500 2.5 2TB
  sata2 DSM partition:  in_sync
  sata2 Swap partition: in_sync
Red SA500 2.5 2TB
  sata2 DSM partition:  in_sync
  sata2 Swap partition: in_sync
Red SA500 2.5 2TB
  sata2 DSM partition:  in_sync
  sata2 Swap partition: in_sync
WD140EFFX-68VBXN0
  sata3 DSM partition:  in_sync
  sata3 Swap partition: in_sync
WD140EFFX-68VBXN0
  sata3 DSM partition:  in_sync
  sata3 Swap partition: in_sync
WD140EFFX-68VBXN0
  sata3 DSM partition:  in_sync
  sata3 Swap partition: in_sync
WD140EFFX-68VBXN0
  sata4 DSM partition:  in_sync
  sata4 Swap partition: in_sync
WD140EFFX-68VBXN0
  sata4 DSM partition:  in_sync
  sata4 Swap partition: in_sync
WD140EFFX-68VBXN0
  sata4 DSM partition:  in_sync
  sata4 Swap partition: in_sync
```

It still uses color codes even though the --email option is used, and it seems to reset write_mostly three times for each drive.
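
In case it is useful, a generic way to strip the colour codes from captured output before mailing it (this is not the script's own email handling, and the output path is arbitrary):

```bash
# Workaround sketch: remove ANSI colour escape sequences from the captured output.
ESC=$(printf '\033')
/volume3/scripts/syno_hdd_db.sh --ssd=restore --restore 2>&1 \
    | sed "s/${ESC}\[[0-9;]*m//g" > /tmp/syno_hdd_db_output.txt
```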

Author
Owner

@007revad commented on GitHub (Aug 19, 2024):

I've changed it in v3.5.100-RC so that when --restore is used it also supports --ssd=restore and -e or --email, in any order, and fixed it so write_mostly is only reset once for each drive.

https://github.com/007revad/Synology_HDD_db/releases

EDIT: Forgot to mention that it now only resets write_mostly on drives that actually have write_mostly set, to reduce the spammy output for people with lots of drives and to make it clear which drives were processed.
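
The flag is visible in the same sysfs state files, so checking before touching a device is cheap; a rough sketch of the idea (not the script's exact code):

```bash
# Rough sketch: only touch md member devices whose state contains write_mostly.
# md0 is the DSM partition array and md1 the swap array on this model.
for state_file in /sys/block/md[01]/md/dev-*/state; do
    if grep -q write_mostly "$state_file"; then
        echo "Clearing write_mostly on $state_file"
        echo "-writemostly" > "$state_file"
    fi
done
```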

Author
Owner

@ThomasGoering commented on GitHub (Aug 19, 2024):

Thanks a lot, it is now resetting write_mostly as you described!

The output still uses color codes with the --email option, but I don't really care. This issue can be closed, as resetting write_mostly now works.

```
Synology_HDD_db v3.5.100
DS920+ DSM 7.2.1-69057-5 
StorageManager 1.0.0-0017

ds920+_host_v7 version 8053

ds920+_host version 4021

Using options: --restore --ssd=restore --email
Running from: /volume3/scripts/syno_hdd_db.sh

Restored support_memory_compatibility = yes
Restored mem_max_mb = 8192
Restored support_m2_pool = no
Restored storage_panel.js

Restored ds920+_host.db
Restored ds920+_host.db.new
Restored ds920+_host_v7.db
Restored dx1211_v7.db
Restored dx1215_v7.db
Restored dx1215ii_v7.db
Restored dx1222_v7.db
Restored dx213_v7.db
Restored dx510_v7.db
Restored dx513_v7.db
Restored dx517.db
Restored dx517.db.new
Restored dx517_v7.db
Restored dx5_v7.db
Restored eunit_rule.db
Restored fax224_v7.db
Restored fx2421_v7.db
Restored host_rule.db
Restored rx1211_v7.db
Restored rx1211rp_v7.db
Restored rx1213sas_v7.db
Restored rx1214_v7.db
Restored rx1214rp_v7.db
Restored rx1216sas_v7.db
Restored rx1217_v7.db
Restored rx1217rp_v7.db
Restored rx1217sas_v7.db
Restored rx1222sas_v7.db
Restored rx1223rp_v7.db
Restored rx1224rp_v7.db
Restored rx2417sas_v7.db
Restored rx410_v7.db
Restored rx415_v7.db
Restored rx418_v7.db
Restored rx4_v7.db
Restored rx6022sas_v7.db
Restored rxd1215sas_v7.db
Restored rxd1219sas_v7.db

Restore successful.

Restoring internal drive's state
WD140EFFX-68VBXN0
  sata3 DSM partition:  in_sync
  sata3 Swap partition: in_sync
WD140EFFX-68VBXN0
  sata4 DSM partition:  in_sync
  sata4 Swap partition: in_sync
```