Mirror of https://github.com/amidaware/tacticalrmm.git (synced 2026-04-26 06:55:52 +03:00)
[GH-ISSUE #643] Notification when server disk space low #410
Originally created by @silversword411 on GitHub (Jul 19, 2021).
Original GitHub issue: https://github.com/amidaware/tacticalrmm/issues/643
Like the "new version available" indicator in the top right.
Add a "disk space low" indicator when less than...15% disk space?
Clicking it takes you to the doc link: https://wh1te909.github.io/tacticalrmm/update_server/#keep-an-eye-on-your-disk-space
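The threshold logic being proposed is simple; a minimal sketch (standalone Python, not taken from the TRMM codebase — the 15% default is just the figure floated above):

```python
import shutil

def disk_space_low(path: str = "/", threshold_pct: float = 15.0) -> bool:
    """Return True when free space on `path` falls below `threshold_pct` percent.

    Illustrative only: a real dashboard indicator would live in the TRMM
    server code; the path and threshold here are assumptions.
    """
    usage = shutil.disk_usage(path)
    free_pct = usage.free / usage.total * 100
    return free_pct < threshold_pct
```

The dashboard could poll something like this periodically and light up the indicator when it returns True.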
@bbrendon commented on GitHub (Jul 19, 2021):
I know nobody asked me but I'll give my 0.02. :)
This is a slippery slope. Then why not add network/CPU/etc. monitoring too? There are already monitoring systems for disk/network/CPU. I vote no on this.
What you could do is add a disk-free check to the install and/or upgrade script. That seems reasonable, but I'm not sure how useful it would be.
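A pre-flight check of the kind suggested here could be sketched as follows (Python for illustration — the actual install/upgrade scripts are bash, and the 10 GB floor is a hypothetical value, not a project requirement):

```python
import shutil

def has_enough_space(path: str = "/", min_free_gb: float = 10.0) -> bool:
    """Return True if the target filesystem has at least `min_free_gb` GB free.

    An installer would call this before doing any work and abort with a
    clear message if it returns False. The 10 GB default is an assumption.
    """
    free_gb = shutil.disk_usage(path).free / 1024**3
    return free_gb >= min_free_gb
```

Failing fast here is cheap and avoids a half-finished install on a nearly full disk.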
@dinger1986 commented on GitHub (Jul 19, 2021):
I guess ideally, once we have linux agents, we can monitor the server itself, and that will show up if there are disk space issues etc.
@silversword411 commented on GitHub (Jul 20, 2021):
In 2-3 years I'm sure when there's nothing else to fix the other monitoring items would be a useful feature-add.
But disk space at 0 means the server stops. That's different from any other monitored item.
I think disk space warnings would be useful, especially now when there are huge log files growing (didn't know about that file till now, but I still need to do some more monitoring on the TRMM server)...but it's up to the bosses in the end ;)
@dinger1986 commented on GitHub (Jul 20, 2021):
I'm going to keep an eye on my logs but I guess the sooner we have linux agents the better.
Maybe worth doing a survey or something to see how users' logs are growing?
@silversword411 commented on GitHub (Jul 20, 2021):
Hopefully @wh1te909 will pop up with some insight on why that log file is so big, and why it's not part of the default log file location(s). Needs to be looked at one way or another :)
@dinger1986 commented on GitHub (Jul 20, 2021):
I'm sure he will when he's got a minute. Easy enough to clear the logs for now for space, or move them elsewhere.
@wh1te909 commented on GitHub (Jul 21, 2021):
The issue this started from on discord is people using really old installs where the installer didn't set certain nginx rules, so it was logging everything. Do a fresh install now and by default these rules are present, so those log files won't get so big, but there are still a lot of people running older installs, which is why we still see crazy log file sizes.
I do, however, need to move these log files out of the tacticalrmm dir and into the standard /var/log dir, that way linux will take care of rotating the logs. This was just an old design decision, mainly for myself for debugging, and it made it into "production" lol.
Ultimately it needs to be the user's responsibility to monitor the server. Trmm is just an app and should not have access to the underlying server's resources. By design, we don't run the trmm services as root, so that if there is ever a vuln in the web code and somehow someone gets a shell, they can't do any significant damage. We can add docs to check server space and provide tips.
@dinger1986 yes, when linux agents come that is a great idea; reminds me of using a salt-minion to manage the salt-master lol, which is a pretty popular practice. Anyway, for now feel free to delete any of the large log files; they are not needed for trmm to function.
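Once the logs live under /var/log, a rotation rule along these lines would let the system handle cleanup (a sketch only — the path, schedule, and retention count are assumptions for illustration, not the project's actual config):

```
/var/log/tacticalrmm/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```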
@silversword411 commented on GitHub (Jul 21, 2021):
Whee...backup/restore testing time.
Maybe it's time to test the new proxmox cluster with real-stuff ;)