[GH-ISSUE #1057] ENHANCEMENT: Add option to enable Proxy Protocol on TRMM Nginx Container #651

Closed
opened 2026-03-02 02:18:00 +03:00 by kerem · 8 comments

Originally created by @joeldeteves on GitHub (Apr 8, 2022).
Original GitHub issue: https://github.com/amidaware/tacticalrmm/issues/1057

Originally assigned to: @silversword411 on GitHub.

Is your feature request related to a problem? Please describe.
If TRMM is hosted behind a Load Balancer, it does not get the real IP of the client host in the logs.

Describe the solution you'd like
The solution to this problem is the use of Proxy Protocol, see https://docs.nginx.com/nginx/admin-guide/load-balancer/using-proxy-protocol/

  • It should not be enabled by default, as this will break existing setups unless it's turned on at both the Load Balancer and ingress levels.
  • Instead, it should be configurable by an environment variable e.g. PROXY_PROTOCOL: true, this way it is compatible with both K8s & docker-compose
  • Setting proxy protocol should also enable it for the ssl binding (4443 for the TRMM non-root nginx container)
  • There also needs to be an option to configure the set_real_ip_from directive in order to set it to the IP of the load balancer (or in the case of K8s, the IP CIDR range of the pod virtual network); it might look something like this: REAL_IP_FROM: 192.168.0.0/16
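As a sketch of how the toggle could work, here is a hypothetical entrypoint fragment. The function name, the variable defaults, and the generated directives are assumptions for illustration, not existing TRMM code; only the PROXY_PROTOCOL / REAL_IP_FROM names and port 4443 come from the request above.

```shell
#!/bin/sh
# Hypothetical entrypoint fragment: emit nginx listen/real-IP directives
# based on the proposed PROXY_PROTOCOL and REAL_IP_FROM variables.
render_listen_directives() {
    if [ "${PROXY_PROTOCOL:-false}" = "true" ]; then
        # Accept the PROXY protocol header on the SSL binding.
        echo "listen 4443 ssl proxy_protocol;"
        if [ -n "${REAL_IP_FROM:-}" ]; then
            # Trust the load balancer (or pod CIDR) as the source of
            # the real client IP carried in the PROXY protocol header.
            echo "set_real_ip_from ${REAL_IP_FROM};"
            echo "real_ip_header proxy_protocol;"
        fi
    else
        # Default: unchanged behavior for existing setups.
        echo "listen 4443 ssl;"
    fi
}

render_listen_directives
```

The directives would then be templated into the server block at container start, keeping the default output identical to today's config when the variable is unset.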

Describe alternatives you've considered
Using a custom NGINX container

Additional context
Add any other context or screenshots about the feature request here.

kerem 2026-03-02 02:18:00 +03:00: closed this issue and added the question label

@silversword411 commented on GitHub (Apr 8, 2022):

Is this a docs recommendation for proxy/load balancer considerations...or are you looking for something changed in TRMM?


@joeldeteves commented on GitHub (Apr 8, 2022):

> Is this a docs recommendation for proxy/load balancer considerations...or are you looking for something changed in TRMM?

Hi @silversword411 this would be a change in the startup script of the NGINX container.


@silversword411 commented on GitHub (Apr 8, 2022):

Might be better to just PR your recommended change to the repo for that.

But if I'm reading this right there's useful documentation addition for Load balanced configs. Am I not understanding something?


@joeldeteves commented on GitHub (Apr 8, 2022):

I made a note of how the changes should be implemented in order to avoid breaking existing setups. I suppose it could be adapted into the documentation as well, for those who run load balancers 😎

I can PR these changes, no problem. Give me some time to test and I'll put in a merge request


@silversword411 commented on GitHub (Apr 8, 2022):

I think you have a bit better understanding on load balancers and docker/k8s stuff than I do...I'm still working on my brain map on the topics ;)


@joeldeteves commented on GitHub (Jun 20, 2022):

Hold my beer . . .


@joeldeteves commented on GitHub (Jun 21, 2022):

Cancelling this request, see notes in the merge request (also cancelled)


@joeldeteves commented on GitHub (Jun 21, 2022):

Adding a follow-up in case anyone else is wondering why I closed this:

  1. You can use your own NGINX container + config instead of the one that comes with TRMM. Not recommended unless you know what you're doing; the TRMM one is great for most use cases and in fact I am still using it (see below).

  2. It turns out I was able to split NATS into its own pod on K8s using separate services. Here's how:

  • I ended up leaving NATS on its own Load Balancer (with Proxy Protocol turned off)
  • Pointed the rmm and mesh A records to our primary Load Balancer, which talks to our Reverse Proxy (with Proxy Protocol turned on).
  • Pointed the api A record to the "NATS" Load Balancer, for which the K8s Service is used to expose both the NGINX pod and the NATS pod (thus allowing both NATS and HTTP requests to api to function).

To summarize, the NGINX pod is shared between the "NATS" load balancer and the Reverse Proxy Load Balancer. This allows us to send our "front end" traffic through the Reverse Proxy and our "back end" traffic through the exposed NGINX service, respectively, while also allowing NATS to function as intended.

Whew! 🥳🥳🥳
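The Service sharing described above could be sketched roughly as follows. This is illustrative only: the Service name, label, and NATS port are assumptions, not taken from the TRMM repo. It relies on named targetPorts, where pods that don't define a given named containerPort are omitted from that port's endpoints, so one Service can front heterogeneous pods.

```yaml
# Hypothetical sketch: one LoadBalancer Service exposing both the NGINX
# pod (HTTPS) and the NATS pod. Both pods would carry the shared label
# "svc-group: trmm-api"; names and ports here are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: trmm-api            # fronts the "NATS" load balancer (api A record)
spec:
  type: LoadBalancer
  selector:
    svc-group: trmm-api     # shared label on both the nginx and NATS pods
  ports:
    - name: https
      port: 443
      targetPort: nginx-ssl   # named containerPort defined only on the nginx pod
    - name: nats
      port: 4222
      targetPort: nats-client # named containerPort defined only on the NATS pod
```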
