Mirror of https://github.com/amidaware/tacticalrmm.git (synced 2026-04-26 06:55:52 +03:00)
[GH-ISSUE #1057] ENHANCEMENT: Add option to enable Proxy Protocol on TRMM Nginx Container #2594
Originally created by @joeldeteves on GitHub (Apr 8, 2022).
Original GitHub issue: https://github.com/amidaware/tacticalrmm/issues/1057
Originally assigned to: @silversword411 on GitHub.
Is your feature request related to a problem? Please describe.
If TRMM is hosted behind a Load Balancer, it does not get the real IP of the client host in the logs.
Describe the solution you'd like
The solution to this problem is the use of Proxy Protocol, see https://docs.nginx.com/nginx/admin-guide/load-balancer/using-proxy-protocol/

- Add an option such as `PROXY_PROTOCOL: true`; this way it is compatible with both K8s & docker-compose
- Enabling proxy protocol should also enable it for the `ssl` binding (4443 for the TRMM non-root nginx container)
- Use the `set_real_ip_from` directive in order to set it to the IP of the load balancer (or, in the case of K8s, the IP CIDR range of the pod virtual network); it might look something like this: `REAL_IP_FROM: 192.168.0.0/16`

Describe alternatives you've considered
Using a custom NGINX container
Additional context
Add any other context or screenshots about the feature request here.
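Pieced together from the request above, the resulting nginx directives might look like the following sketch. The directives themselves (`listen ... proxy_protocol`, `set_real_ip_from`, `real_ip_header proxy_protocol`) are standard nginx; where exactly they would land in the TRMM container's config, and the `192.168.0.0/16` range, are assumptions taken from the example values in the issue:

```nginx
server {
    # Accept the PROXY protocol header on the non-root TLS port
    listen 4443 ssl proxy_protocol;

    # Trust the load balancer (or the K8s pod CIDR) as the source of
    # the client address carried in the PROXY protocol header
    set_real_ip_from 192.168.0.0/16;
    real_ip_header proxy_protocol;
}
```

With this in place, `$remote_addr` (and therefore the access logs) would reflect the real client IP rather than the load balancer's.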
@silversword411 commented on GitHub (Apr 8, 2022):
Is this a docs recommendation for proxy/load balancer considerations...or are you looking for something changed in TRMM?
@joeldeteves commented on GitHub (Apr 8, 2022):
Hi @silversword411 this would be a change in the startup script of the NGINX container.
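To make the "change in the startup script" idea concrete, here is a minimal sketch of how an entrypoint fragment could toggle the directives from env vars. The variable names `PROXY_PROTOCOL` and `REAL_IP_FROM` are the hypothetical names proposed in this issue, not variables TRMM supports today:

```shell
#!/bin/sh
# Hypothetical entrypoint helper: renders the listen/real-ip snippet
# depending on whether Proxy Protocol is requested.
# $1: "true" to enable Proxy Protocol (proposed PROXY_PROTOCOL env var)
# $2: trusted source CIDR (proposed REAL_IP_FROM env var)
render_listen_conf() {
    opts=""
    real_ip=""
    if [ "$1" = "true" ]; then
        opts=" proxy_protocol"
        # Trust the load balancer (or pod CIDR on K8s) as the real-IP source
        real_ip="set_real_ip_from ${2:-0.0.0.0/0};"
    fi
    printf 'listen 4443 ssl%s;\n%s\n' "$opts" "$real_ip"
}

# Example: enabled, trusting the pod network CIDR. Prints:
#   listen 4443 ssl proxy_protocol;
#   set_real_ip_from 192.168.0.0/16;
render_listen_conf "true" "192.168.0.0/16"
```

Defaulting to the disabled path keeps existing setups working unchanged, which is the compatibility concern raised below.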
@silversword411 commented on GitHub (Apr 8, 2022):
Might be better to just PR your recommended change to the repo for that.
But if I'm reading this right there's useful documentation addition for Load balanced configs. Am I not understanding something?
@joeldeteves commented on GitHub (Apr 8, 2022):
I made a note of how the changes should be implemented in order to avoid breaking existing setups. I suppose it could be adapted into the documentation as well, for those who run load balancers 😎
I can PR these changes, no problem. Give me some time to test and I'll put in a merge request
@silversword411 commented on GitHub (Apr 8, 2022):
I think you have a bit better understanding on load balancers and docker/k8s stuff than I do...I'm still working on my brain map on the topics ;)
@joeldeteves commented on GitHub (Jun 20, 2022):
Hold my beer . . .
@joeldeteves commented on GitHub (Jun 21, 2022):
Cancelling this request, see notes in the merge request (also cancelled)
@joeldeteves commented on GitHub (Jun 21, 2022):
Adding a follow-up in case anyone else is wondering why I closed this:
You can use your own NGINX container + config instead of the one that comes with TRMM. Not recommended unless you know what you're doing; the TRMM one is great for most use cases and in fact I am still using it (see below).
It turns out I was able to split NATS into its own pod on K8s using separate services. Here's how:
- `rmm` and `mesh` A records point to our primary Load Balancer, which talks to our Reverse Proxy (with Proxy Protocol turned on).
- `api` A record points to the "NATS" Load Balancer, for which the K8s Service is used to expose both the NGINX pod and the NATS pod (thus allowing both NATS and http requests to `api` to function).

To summarize, the NGINX pod is shared between the "NATS" load balancer and the Reverse Proxy Load Balancer. This allows us to send our "front end" traffic through the Reverse Proxy and our "back end" traffic through the exposed NGINX service, respectively, while also allowing NATS to function as intended.
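A hedged sketch of what the "NATS"-side Service might look like, assuming the NGINX and NATS pods share a label the Service can select on. All names, labels, and the port mapping here are illustrative assumptions (4222 is NATS's conventional client port), not taken from the TRMM repo:

```yaml
# Hypothetical K8s Service backing the `api` A record: exposes both the
# NGINX TLS port and the NATS client port behind one LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: api-nats
spec:
  type: LoadBalancer
  selector:
    app: trmm-api        # assumed label shared by the NGINX and NATS pods
  ports:
    - name: https        # HTTP(S) requests to api, served by the NGINX pod
      port: 443
      targetPort: 4443
    - name: nats         # NATS client traffic, served by the NATS pod
      port: 4222
      targetPort: 4222
```

Since a single selector can only match one set of labels, in practice the two pods would either need to share that label or the Service would need manually managed Endpoints.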
Whew! 🥳🥳🥳