[GH-ISSUE #244] Issue - additional memory leaks? #2095
Originally created by @rtwright68 on GitHub (Jan 12, 2021).
Original GitHub issue: https://github.com/amidaware/tacticalrmm/issues/244
After we installed 0.2.22, TRMM worked great over the weekend but went completely off the rails yesterday. A restart did not help either. Still suspecting a memory leak? We had to shut down the VM due to the continuous "data overdue" and then "data received" messages, along with a spinning wheel and loss of agent access.
I do have around 20 servers monitoring DNS and DHCP services, but no updates are being installed and no other monitoring (event logs, etc.) is occurring.
@wh1te909 commented on GitHub (Jan 12, 2021):
The memory leak should be fixed; I've been on 0.2.22 for a few days now and memory is stable.
Which processes are using the most RAM?
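One quick way to see the top memory consumers (a minimal sketch, not a command from the thread; assumes the procps ps that ships with Ubuntu):

    # show the 15 processes with the largest resident set size, largest first
    ps aux --sort=-%mem | head -n 15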
Also, please paste the output of:
sudo salt --versions-report

@rtwright68 commented on GitHub (Jan 12, 2021):
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
tactical 93931 0.1 3.1 417708 255756 ? S 17:02 0:02 /rmm/api/env/bin/python3 -m celery worker -A tacticalrmm --loglevel=INFO --time-limit=2900 --autoscale=50,5 --logfile=/var/log/celery/w1%I.log --pidfile=/rmm/api/tacticalrmm/w1.pid --hostname=w1@tacticalrmm
tactical 97073 0.6 3.1 414572 252864 ? S 17:15 0:11 /rmm/api/env/bin/python3 -m celery worker -A tacticalrmm --loglevel=INFO --time-limit=2900 --autoscale=50,5 --logfile=/var/log/celery/w1%I.log --pidfile=/rmm/api/tacticalrmm/w1.pid --hostname=w1@tacticalrmm
root 1229 3.1 2.8 5097048 234712 ? Sl 12:55 8:58 /usr/bin/python3 /usr/bin/salt-api
root 1235 0.2 2.8 418396 229424 ? Sl 12:55 0:37 /usr/bin/python3 /usr/bin/salt-master
tactical 1469 0.1 2.0 1030908 166804 ? Sl 12:55 0:30 /usr/bin/node /meshcentral/node_modules/meshcentral --launch 1083
mongodb 989 0.3 1.6 1638152 134864 ? Ssl 12:55 0:52 /usr/bin/mongod --config /etc/mongod.conf
root 553 0.1 1.5 195672 127044 ? S<s 12:55 0:18 /lib/systemd/systemd-journald
tactical 2272 0.0 1.1 541544 92068 ? Sl 12:55 0:09 /rmm/api/env/bin/uwsgi --ini app.ini
tactical 2265 0.0 1.1 541232 90348 ? Sl 12:55 0:04 /rmm/api/env/bin/uwsgi --ini app.ini
Salt Version:
Salt: 3002.2
Dependency Versions:
cffi: Not Installed
cherrypy: unknown
dateutil: 2.7.3
docker-py: Not Installed
gitdb: 2.0.6
gitpython: 3.0.7
Jinja2: 2.10.1
libgit2: Not Installed
M2Crypto: Not Installed
Mako: Not Installed
msgpack: 0.6.2
msgpack-pure: Not Installed
mysql-python: Not Installed
pycparser: Not Installed
pycrypto: Not Installed
pycryptodome: 3.6.1
pygit2: Not Installed
Python: 3.8.5 (default, Jul 28 2020, 12:59:40)
python-gnupg: 0.4.5
PyYAML: 5.3.1
PyZMQ: 18.1.1
smmap: 2.0.5
timelib: Not Installed
Tornado: 4.5.3
ZMQ: 4.3.2
System Versions:
dist: ubuntu 20.04 focal
locale: utf-8
machine: x86_64
release: 5.4.0-60-generic
system: Linux
version: Ubuntu 20.04 focal
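As an aside not proposed in the thread: the two Celery workers in the ps output above hold roughly 250 MB RSS each. If worker memory keeps growing, Celery itself offers recycling knobs; a minimal sketch based on the worker invocation shown in the ps output, where the two --max-* flags are the illustrative addition:

    # recycle each worker child after 100 tasks or ~300 MB resident memory
    # (--max-memory-per-child is in KB), so slow task-level leaks cannot
    # accumulate indefinitely
    /rmm/api/env/bin/python3 -m celery worker -A tacticalrmm \
        --autoscale=50,5 \
        --max-tasks-per-child=100 \
        --max-memory-per-child=300000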
@rtwright68 commented on GitHub (Jan 12, 2021):
We are seeing constant "data overdue" and "data received" notifications for our monitored server agents (workstations are not set to notify us). I powered on the TRMM VM this morning, and it was steady until around 10:00-10:30 AM.
It's happening for both local agents and agents across our VPN connections.
@wh1te909 commented on GitHub (Jan 12, 2021):
Closing, as this was a network issue and not VM related.
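For anyone hitting similar "data overdue" / "data received" flapping, a quick check for the kind of network instability behind this issue (a minimal sketch; rmm.example.com is a placeholder for your own RMM host) is an mtr report run from an affected site:

    # 60-cycle report of per-hop packet loss and latency toward the RMM server
    mtr --report --report-cycles 60 rmm.example.com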