Mirror of https://github.com/amidaware/tacticalrmm.git (synced 2026-04-26 06:55:52 +03:00)
[GH-ISSUE #236] GUI Stops responding 0.2.21 - postgres restart fixes it #2089
Originally created by @bbrendon on GitHub (Jan 7, 2021).
Original GitHub issue: https://github.com/amidaware/tacticalrmm/issues/236
GUI has spinning wheels and doesn't load. Restarting postgres fixes it. This has happened a few times now. I haven't figured out what triggers it.
postgres has tons of these
celery has some of these
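Since restarting Postgres clears the hang, one hypothesis worth checking is that the server stops accepting new connections under load. A minimal, hypothetical sketch (not part of Tactical RMM) that polls whether a Postgres port is still accepting TCP connections, which you could run from cron while waiting for the next occurrence:

```python
import socket

def port_accepting(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, timed out, host unreachable, ...
        return False

# Postgres listens on 5432 by default (an assumption; check postgresql.conf).
# port_accepting("127.0.0.1", 5432)
```

This only tells you whether the listener is alive, not why it stalled; if the port still accepts connections during a hang, the bottleneck is more likely query contention or connection-pool exhaustion inside the app.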
@rtwright68 commented on GitHub (Jan 8, 2021):
Exactly what we are seeing on our 479 agent server. Boosted CPU and RAM.
@bradhawkins85 commented on GitHub (Jan 8, 2021):
Storage speed may play a part in this, not saying it is the issue but may be related.
Is the database on an SSD?
@rtwright68 commented on GitHub (Jan 8, 2021):
Yes, my ESXi hosts are fully SSD (HPE 380s).
Everything was working great prior to v0.2.20.
Definitely noticed the spinning and the issues connecting to agents via Mesh after that update.
Now on 0.2.21.
@dinger1986 commented on GitHub (Jan 8, 2021):
I have noticed my host being slower since the update to 0.2.20, but I also had disk-space issues at the time, so I had put it down to that.
Space is fine now!
@bradhawkins85 commented on GitHub (Jan 8, 2021):
Are there any errors showing up in the browser console in dev tools?
@rtwright68 commented on GitHub (Jan 8, 2021):
Checked the debug log, multiple agents were reporting data overdue (across all our VPNs and locally on the same LAN):
2021-01-08 13:45:12.600 | INFO | agents.tasks:_check_agent_service:25 - Detected crashed tacticalagent service on BCC-INK, attempting recovery
2021-01-08 13:45:12.577 | INFO | agents.tasks:_check_agent_service:25 - Detected crashed tacticalagent service on DCB-KIOSK1, attempting recovery
2021-01-08 13:45:12.570 | INFO | agents.tasks:_check_agent_service:25 - Detected crashed tacticalagent service on LAPTOP-RCRLC, attempting recovery
2021-01-08 13:45:12.561 | INFO | agents.tasks:_check_agent_service:25 - Detected crashed tacticalagent service on DRK-WEB, attempting recovery
2021-01-08 13:45:12.513 | INFO | agents.tasks:_check_agent_service:25 - Detected crashed tacticalagent service on MJ03M68T, attempting recovery
2021-01-08 13:45:12.502 | INFO | agents.tasks:_check_agent_service:25 - Detected crashed tacticalagent service on DESKTOP-51767, attempting recovery
2021-01-08 13:45:12.496 | INFO | agents.tasks:_check_agent_service:25 - Detected crashed tacticalagent service on EMP-SQL1, attempting recovery
2021-01-08 13:45:12.464 | INFO | agents.tasks:_check_agent_service:25 - Detected crashed tacticalagent service on BCC-BRIX-Martin, attempting recovery
2021-01-08 13:45:12.435 | INFO | agents.tasks:_check_agent_service:25 - Detected crashed tacticalagent service on TNP-DIGSIGN1, attempting recovery
2021-01-08 13:45:12.430 | INFO | agents.tasks:_check_agent_service:25 - Detected crashed tacticalagent service on ASC-DGSRBV, attempting recovery
2021-01-08 13:45:12.428 | INFO | agents.tasks:_check_agent_service:25 - Detected crashed tacticalagent service on ROC-DAD9J1, attempting recovery
2021-01-08 13:45:12.419 | INFO | agents.tasks:_check_agent_service:25 - Detected crashed tacticalagent service on BCC-RDS2, attempting recovery
2021-01-08 13:45:12.383 | INFO | agents.tasks:_check_agent_service:25 - Detected crashed tacticalagent service on BCI-FM1, attempting recovery
Some agents recover, then it goes to the spinner and I have to restart the VM. Ugh.
After a restart, all the agents report back ("data received") and things function normally.
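The flood of "Detected crashed tacticalagent service" lines above can be triaged programmatically. A small sketch (assuming the loguru-style format shown in the excerpt; the log path and any other details are assumptions) that counts crash-recovery attempts per minute, so a burst like the one above stands out:

```python
import re
from collections import Counter

# Matches the debug-log format shown above, e.g.:
# 2021-01-08 13:45:12.600 | INFO | agents.tasks:_check_agent_service:25 - \
#   Detected crashed tacticalagent service on BCC-INK, attempting recovery
LINE_RE = re.compile(
    r"^(?P<minute>\d{4}-\d{2}-\d{2} \d{2}:\d{2}):\d{2}\.\d+ \| \w+ \| "
    r".*Detected crashed tacticalagent service on (?P<agent>[^,]+),"
)

def crash_counts(log_text: str) -> Counter:
    """Count 'crashed service' detections per minute (YYYY-MM-DD HH:MM)."""
    hits = Counter()
    for line in log_text.splitlines():
        m = LINE_RE.match(line)
        if m:
            hits[m.group("minute")] += 1
    return hits

sample = """\
2021-01-08 13:45:12.600 | INFO | agents.tasks:_check_agent_service:25 - Detected crashed tacticalagent service on BCC-INK, attempting recovery
2021-01-08 13:45:12.577 | INFO | agents.tasks:_check_agent_service:25 - Detected crashed tacticalagent service on DCB-KIOSK1, attempting recovery
"""
# crash_counts(sample) -> Counter({'2021-01-08 13:45': 2})
```

If every agent across different networks "crashes" in the same second, as here, the problem is almost certainly on the server side (the checks timing out) rather than thirteen agents failing at once.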
@bbrendon commented on GitHub (Jan 8, 2021):
Right now I only have 20 agents and load is very low (0.05 to 0.10). Swap used is 0.
The server is basically fine until it's suddenly not.
Unfortunately I've only used MySQL in the past; digging into Postgres has proven to be quite a learning curve so far.
@rtwright68 commented on GitHub (Jan 8, 2021):
Our server is losing its mind today. I ran the backup script hoping a vacuum would help. It's essentially non-responsive on the GUI, and we are receiving constant notifications of "data overdue" with some returning to normal.
I shut it all the way down a couple of times and brought it back up, but it's a mess today for some reason.
@wh1te909 commented on GitHub (Jan 8, 2021):
Please update to 0.2.22, restart the VM, and let me know if the issue is still present.
Don't worry about the "detected crashed service" messages in the debug log; those are normal. It's just a temporary fix for now while I'm making some big changes to the agent.
@rtwright68 commented on GitHub (Jan 8, 2021):
Fantastic, thanks for the rapid fix! My guys were all having withdrawal, lol
@rtwright68 commented on GitHub (Jan 8, 2021):
We are good now!
@wh1te909 commented on GitHub (Jan 9, 2021):
@bbrendon closing as this was fixed in 0.2.22
@rtwright68 commented on GitHub (Jan 12, 2021):
TRMM worked great over the weekend, but went completely off the rails yesterday. A restart did not help either. Still suspecting a memory leak? Had to shut down the VM due to the continuous "data overdue" and then "data received" messages. Spinning wheel and lack of agent access as well.