mirror of
https://github.com/007revad/Synology_HDD_db.git
synced 2026-04-25 13:45:59 +03:00
[GH-ISSUE #340] Option -S for enabling write_mostly does not work #827
Originally created by @ThomasGoering on GitHub (Aug 14, 2024).
Original GitHub issue: https://github.com/007revad/Synology_HDD_db/issues/340
My DS920+ has two internal 14TB HDDs, two internal 2TB SSDs, and two 1TB M.2 drives; the DX517 expansion unit has three 12TB HDDs.
I'm running your script with option -S to enable write_mostly for the internal SSDs, but noticed that there was no output confirming that write_mostly was set. This is the script's output (including some TEST_DEBUG output that I inserted):
The debug outputs were inserted in the else part after line 1755. The output `TEST_DEBUG: idrive=sata1, internal_drive=` is printed after the line `internal_drive="$(echo "$idrive" | awk '{printf $4}')"` by this statement: `echo "TEST_DEBUG: idrive=$idrive, internal_drive=$internal_drive"`. It looks like internal_drive is not expected to be empty. Am I missing something, or do you need more details?
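A minimal sketch of why `internal_drive` can come out empty (hypothetical input, not the script's actual data): `awk '{printf $4}'` prints nothing when the input line has fewer than four fields.

```shell
# Hypothetical illustration: awk's $4 is the fourth whitespace-separated
# field, and is empty when the line has fewer than four fields, so
# internal_drive ends up as an empty string.
idrive="sata1"   # a line with only one field
internal_drive="$(echo "$idrive" | awk '{printf $4}')"
echo "TEST_DEBUG: idrive=$idrive, internal_drive=$internal_drive"
# prints: TEST_DEBUG: idrive=sata1, internal_drive=
```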
@007revad commented on GitHub (Aug 15, 2024):
I believe I've found the issue. Can you test the fix?
Change lines 1759 and 1760 from this:
to this:
and change line 1811 from this:
to this:
@007revad commented on GitHub (Aug 15, 2024):
I've released v3.5.98 which fixes this issue.
https://github.com/007revad/Synology_HDD_db/releases
@ThomasGoering commented on GitHub (Aug 15, 2024):
Great, this is the new output:
Looks good!
I have one more question about write_mostly. In #318 you wrote that it looks like the setting must be done after each boot again. Is this the case?
If yes, then what is the recommended way to schedule the syno_hdd_db.sh script? I ask because I find different instructions for scheduling it:
Does this difference apply only when syno_hdd_db.sh is used together with syno_enable_dedupe.sh?
@007revad commented on GitHub (Aug 15, 2024):
No.
I just disabled my boot schedule for syno_hdd_db.sh and rebooted my DS720+. After the reboot I checked both drives to see if the HDD still had write_mostly set, and it did. So the write_mostly setting survives a reboot.
sata1 is the HDD and sata2 is the SSD
I'd still schedule syno_hdd_db.sh to run at boot so you don't have to remember to run it after a DSM update or Storage Manager package update.
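One way to check this by hand is via the standard Linux md sysfs layout, where each array member's `state` file lists its flags and contains `write_mostly` when the flag is set. A sketch with a hypothetical helper (the sysfs path and device names below are examples, not taken from the script):

```shell
# Hypothetical helper: report whether an md member's state string includes
# the write_mostly flag. On a live system the string would come from
# /sys/block/mdX/md/dev-<drive>/state, e.g.:
#   cat /sys/block/md0/md/dev-sata1p1/state
has_write_mostly() {
    case "$1" in
        *write_mostly*) echo "yes" ;;
        *)              echo "no"  ;;
    esac
}

has_write_mostly "in_sync,write_mostly"   # prints: yes
has_write_mostly "in_sync"                # prints: no
```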
@ThomasGoering commented on GitHub (Aug 15, 2024):
Ok, I will schedule syno_hdd_db.sh at boot. Is scheduling syno_enable_dedupe.sh at shutdown still recommended?
I found another issue (I don't know, maybe it's intended behavior): I tried to restore all changes that syno_hdd_db.sh made with this call:
It looks like --ssd=restore is ignored when --restore is set at the same time, but --ssd=restore works when --restore is not set.
@007revad commented on GitHub (Aug 15, 2024):
I've changed it so "--ssd=restore --restore" can be used together. Note: At the moment --ssd=restore must be before --restore
https://github.com/007revad/Synology_HDD_db/releases/tag/v3.5.99-RC
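That kind of order sensitivity typically happens when one option acts (and returns) inside the argument-parsing loop before the later arguments are read. A hypothetical sketch of the pattern, not the script's actual code:

```shell
# Hypothetical order-sensitive parser: --restore does its work as soon as
# it is parsed, so any option that appears after it is never seen.
parse() {
    ssd_restore="no"
    while [ $# -gt 0 ]; do
        case "$1" in
            --ssd=restore) ssd_restore="yes" ;;
            --restore)
                echo "restoring (ssd_restore=$ssd_restore)"
                return ;;
        esac
        shift
    done
}

parse --ssd=restore --restore   # prints: restoring (ssd_restore=yes)
parse --restore --ssd=restore   # prints: restoring (ssd_restore=no)
```

Parsing all arguments first and acting only after the loop finishes makes the option order irrelevant.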
@ThomasGoering commented on GitHub (Aug 18, 2024):
Thanks. It works, but with two issues:
1. The changelog says: "Changed to support "--restore --ssd=restore" to restore write_mostly when restoring all other changes. Issue https://github.com/007revad/Synology_HDD_db/issues/340". This is the wrong order of the options; in your comment above you wrote that the script can be used with the options in this order: --ssd=restore --restore
2. It still uses color codes despite the fact that option --email is used, and it seems to reset write_mostly three times for each drive.
@007revad commented on GitHub (Aug 19, 2024):
I've changed it in v3.5.100-RC so that when --restore is used it also supports --ssd=restore and -e or --email, in any order. I've also fixed it so it only resets write_mostly once for each drive.
https://github.com/007revad/Synology_HDD_db/releases
EDIT: Forgot to mention that it now only resets write_mostly on drives that have write_mostly set, to reduce the spammy output for people with lots of drives and to make it clear which drives were processed.
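A hedged sketch of that guard (illustrative paths, not the script's actual code; per the Linux md sysfs documentation, writing `-writemostly` to a member's `state` file clears the flag):

```shell
# Hypothetical guard: only clear write_mostly on members whose state file
# actually lists the flag, so untouched drives produce no output.
reset_write_mostly() {
    statefile="$1"    # e.g. /sys/block/md0/md/dev-sata2p1/state
    grep -q write_mostly "$statefile" 2>/dev/null || return 0
    echo "-writemostly" > "$statefile"
    echo "Cleared write_mostly on $statefile"
}
```

On a real array a loop over `/sys/block/md*/md/dev-*/state` would call this for each member; drives without the flag are skipped silently.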
@ThomasGoering commented on GitHub (Aug 19, 2024):
Thanks a lot, it is now resetting write_mostly as you described!
The output still uses color codes with option --email, but I don't really care. This issue can be closed, as resetting write_mostly now works.