mirror of
https://github.com/AlexFromChaos/synology_hibernation_fixer.git
synced 2026-04-26 03:06:02 +03:00
[GH-ISSUE #1] ERROR (run) cannot load synocrond.config #2
Originally created by @maxduke on GitHub (Jun 15, 2023).
Original GitHub issue: https://github.com/AlexFromChaos/synology_hibernation_fixer/issues/1
Originally assigned to: @AlexFromChaos on GitHub.
Hello,
After a reboot, I found that `/usr/syno/etc/synocrond.config` is an empty object.
Logs:
@AlexFromChaos commented on GitHub (Jun 16, 2023):
Thanks for the report, I will have a look.
`/usr/syno/etc/synocrond.config` is an auto-generated file; normally it can be regenerated (from the `*.conf` files) just by rebooting the NAS. But it's better to fix this anyway. Also, it looks like it's time to add more tasks to the table of known task names.
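For illustration, the reported symptom can be checked with a small sketch (assumption: on the NAS you would point `CONF` at the real `/usr/syno/etc/synocrond.config` instead of a temporary stand-in, which is used here so the snippet runs anywhere):

```shell
CONF=$(mktemp)          # stand-in for /usr/syno/etc/synocrond.config
printf '{}' > "$CONF"   # simulate the empty-object state from the report

state=ok
[ "$(cat "$CONF")" = "{}" ] && state=empty
echo "synocrond.config state: $state"   # 'empty' means it needs regeneration
rm -f "$CONF"
```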
@AlexFromChaos commented on GitHub (Jun 18, 2023):
Although DSM itself runs `systemctl reload synocrond` to propagate synocrond configuration changes, it seems that, depending on synocrond's state, the reload may fail.
Using a `systemctl stop`/`start` pair fixes this problem, but I want to do a few more tests.
@AlexFromChaos commented on GitHub (Jun 18, 2023):
So far the stop/start approach works well. I've committed the fix - please test.
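The stop/start pair the fix relies on can be sketched like this (wrapped in a dry-run helper so the snippet is safe to execute anywhere; on the NAS you would run the `systemctl` commands directly, as root):

```shell
run_cmd() { echo "+ $*"; }   # dry run: print the command instead of executing it

# 'systemctl reload synocrond' may fail depending on synocrond's state,
# so the fix uses a stop/start pair instead:
run_cmd systemctl stop synocrond
run_cmd systemctl start synocrond
```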
@maxduke commented on GitHub (Jun 19, 2023):
Thanks for fixing this.
Let's see if hibernation works on my NAS.
@maxduke commented on GitHub (Jun 20, 2023):
I checked /var/log/hibernationFull.log.
It seems a lot of WRITEs on dm-3 prevent hibernation (Docker is installed on the NVMe volume).
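To see at a glance which device-mapper devices take the writes, something like the following works (assumption: the log mentions the `dm-N` device receiving each write, as in this report; the sample lines below are made up for illustration, and on the NAS you would point `LOG` at the real `/var/log/hibernationFull.log`):

```shell
LOG=$(mktemp)   # stand-in for /var/log/hibernationFull.log
cat > "$LOG" <<'EOF'
[1234] WRITE block dm-3
[5678] WRITE block dm-0
[1234] WRITE block dm-3
EOF

# Tally writes per dm-N device; writes to the NVMe device (dm-3 here)
# can be ignored when hunting HDD hibernation blockers.
counts=$(grep -o 'dm-[0-9]*' "$LOG" | sort | uniq -c | sort -rn)
echo "$counts"
rm -f "$LOG"
```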
@magicdude4eva commented on GitHub (Jun 20, 2023):
FWIW - on a DS1019+ (volume1 is IronWolf Pro drives and volume2 is NVMe, where I moved Docker) running DSM 7.2, I do see a noticeable drop in power consumption (probably more so due to moving Docker to NVMe).
I do not see this type of behaviour on my system (running mostly *arr apps; usage is during downloads and Plex streaming, so power consumption is fairly low during the day).
@AlexFromChaos commented on GitHub (Jun 25, 2023):
Can you share your `hibernationFull.log`? NVMe-only writes should allow the HDDs to sleep with the script; probably some disk I/O goes to the HDD(s). One idea for now: the Docker containers are installed on the NVMe volume, but the Docker logs are somehow left on an HDD.
@maxduke commented on GitHub (Jun 26, 2023):
hibernationFull.log.zip
Thank you. I uploaded one of the logs.
@AlexFromChaos commented on GitHub (Jun 26, 2023):
Interesting: something resets the HDD hibernation timer every minute. It looks like the `SYNO.Core.Sharing` process is the main offender; I will check why it writes to the HDD so often (it keeps updating the `sharing.db` file).
If `dm-3` is your NVMe partition, you can ignore all writes to it in the logs. You can check which volume `dm-3` corresponds to by running `sudo dmsetup info /dev/dm-3 | grep -i Name` or, to show all,
@maxduke commented on GitHub (Jun 26, 2023):
volume1 hdd
volume2 nvme
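For reference, the mapping check from the comment above, wrapped in a dry-run helper so it can be pasted anywhere (drop the wrapper on the NAS; `dmsetup info` with no device argument lists all device-mapper devices):

```shell
run_cmd() { echo "+ $*"; }   # dry run: print the command instead of executing it

# Which volume does dm-3 map to?
run_cmd sudo dmsetup info /dev/dm-3   # add '| grep -i Name' for just the name

# List every device-mapper device:
run_cmd sudo dmsetup info
```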
@AlexFromChaos commented on GitHub (Jun 26, 2023):
OK, I checked what the issue was: it is the Synology Virtual Machine Monitor package. It looks like they made it even more bloatware-like than it was...
One of its running processes is `synoccc_vnc_sha`, which is responsible for the "VNC share garbage collector". This is basically how it works:
The 'disabled' flag can be set by the signal 15 handler. But there are many more patterns like this in VMM, though normally with a different sleep interval, like an hour.
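The pattern can be sketched in shell (names and intervals are illustrative, not taken from the real `synoccc_vnc_sha` binary; the real loop sleeps about 60 s per iteration):

```shell
# Illustrative sketch of the worker-loop pattern described above: the main
# loop runs until a 'disabled' flag is set by the SIGTERM (signal 15)
# handler; short sleeps here stand in for the real 1-minute interval.
disabled=0
trap 'disabled=1' TERM          # signal 15 handler just sets the flag

( sleep 0.3; kill -TERM $$ ) &  # simulate an operator sending signal 15

iterations=0
while [ "$disabled" -eq 0 ]; do
    iterations=$((iterations + 1))   # real daemon: do VNC share GC here
    sleep 0.1
done
echo "disabled flag set; stopped after $iterations iterations"
wait                            # reap the background helper
```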
Those 1-minute wake-ups from `synoccc_vnc_sha` can be stopped using
The process will remain present, but won't do any harm:
Note: this is a graceful stop, as their signal 15 handler basically sets the 'disabled' flag, finishes execution of the last iteration within 1 minute, and then exits.
I'm going to implement VMM support for the script, starting with the most frequently triggered processes.
@AlexFromChaos commented on GitHub (Jun 26, 2023):
will track progress here: #3