[GH-ISSUE #99] Meshcentral - 502 Bad Gateway nginx (reboot fixes for a period of time) #1996
Originally created by @rtwright68 on GitHub (Sep 10, 2020).
Original GitHub issue: https://github.com/amidaware/tacticalrmm/issues/99
Within the last couple of updates, I've been running into a 502 Bad Gateway (nginx) error while trying to take control of an agent via Meshcentral.
Rebooting the server fixes it for a period of time. I rebooted this morning, and this afternoon, going back into it, I am seeing the same error again.
Not sure what I should take a look at?
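A 502 from nginx here generally means the process behind the proxy (MeshCentral, in this case) has stopped answering. As a first-pass triage sketch, assuming a standard install with systemd units named nginx and meshcentral and the default nginx log path:

```bash
# Quick triage: is the box under memory pressure, and is the upstream
# service that nginx proxies to still alive?
free -h                                    # memory and swap headroom
sudo systemctl status nginx meshcentral    # assumed systemd unit names
sudo tail -n 50 /var/log/nginx/error.log   # 502s show up as upstream/connect() errors
```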
@dinger1986 commented on GitHub (Sep 10, 2020):
I’m also getting that.
Out of interest, how much free memory do you have on your machine? Or how much did you start with?
I have 2GB, and it's only got about 10% free after being up for 2 days...
@rtwright68 commented on GitHub (Sep 10, 2020):
I also have a 2GB droplet via DigitalOcean. I have 639,928 free currently (rebooted around 5AM EST this morning). Very small set of agents (35 currently testing)
@dinger1986 commented on GitHub (Sep 10, 2020):
I only have 10 agents at the moment for testing, but I planned on rolling out next week. I am going to increase to 4GB of RAM and see how it goes.
@rtwright68 commented on GitHub (Sep 10, 2020):
Good idea. I will do the same. The next step up for DigitalOcean is 2 vCPUs and 4GB.
@Omnicef commented on GitHub (Sep 10, 2020):
I had the same issue at 2GB. Bumping up to 4GB solved it for me.
@dinger1986 commented on GitHub (Sep 10, 2020):
Easy to do on Vultr, it's just a plan change, so I'm going for it now.
@rtwright68 commented on GitHub (Sep 10, 2020):
Yup, have mine changed on DigitalOcean already. Will keep an eye on memory usage.
@dinger1986 commented on GitHub (Sep 10, 2020):
I have upgraded as well, so we'll see if it behaves. Looking good at the moment, as I didn't reboot and Mesh is now online again.
@wh1te909 commented on GitHub (Sep 10, 2020):
I upgraded the minimum requirements in the readme to 4GB a week or so ago, thanks to @Omnicef's suggestion. I have a feeling that salt leaks memory and needs to be restarted every few days, and possibly other services too. If you don't want to restart the server, here are all the services you can restart to free up any memory leaks:
salt-master, salt-api, nginx, rmm, celery, celerybeat, celery-winupdate, meshcentral
nginx needs to be running before meshcentral; otherwise, the order should not matter.
And yeah, the more agents you add, the more memory you'll need. With my 600 agents I am averaging around 5GB of usage; however, I have also increased the number of uwsgi workers and the salt-master and salt-api threads to handle the load of that many agents, which increases memory usage. You can play around with the number of uwsgi workers (processes and threads) in /rmm/api/tacticalrmm/app.ini, and for salt, in /etc/salt/master.d/rmm-salt.conf. Here is my rmm-salt.conf (notice the 2 extra fields: worker_threads for salt-master and thread_pool for salt-api). The defaults, if none are specified, are 5 for worker_threads and 100 for thread_pool; these should never be lowered, only increased. If you change any of these numbers in app.ini or rmm-salt.conf, make sure to restart all the services above to load the new configs.
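A minimal sketch of that restart sequence, assuming each service above runs as a systemd unit of the same name; the one ordering constraint is nginx before meshcentral, which the listed order already satisfies:

```bash
#!/usr/bin/env bash
# Restart the potentially leaking services in place of a full reboot.
# nginx must be running before meshcentral; the list below keeps that order.
set -euo pipefail
for svc in salt-master salt-api nginx rmm celery celerybeat celery-winupdate meshcentral; do
    sudo systemctl restart "$svc"
done
```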
@wh1te909 commented on GitHub (Oct 25, 2020):
I've updated the defaults in the install script to the above. I just found out about the last line, socket_queue_size, so you guys can add that too; it should help with the salt-api load.
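The rmm-salt.conf posted with the Sep 10 comment did not survive this mirror, so here is only an illustrative sketch of where the three fields named in this thread would live, assuming salt-api is served through Salt's rest_cherrypy netapi module; the numeric values are placeholders, not the author's:

```bash
# Illustrative only: values are placeholders. worker_threads defaults to 5 and
# thread_pool to 100; per the comment above, raise them, never lower them.
sudo tee /etc/salt/master.d/rmm-salt.conf >/dev/null <<'EOF'
worker_threads: 15        # salt-master worker processes (default 5)

rest_cherrypy:            # salt-api netapi module (assumed)
  port: 8123              # placeholder port
  disable_ssl: true
  thread_pool: 300        # salt-api threads (default 100)
  socket_queue_size: 100  # the extra field from the Oct 25 follow-up
EOF
# Restart the services listed in the previous comment to load the new config.
```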