[GH-ISSUE #2197] Host not working when using container's name with Podman #1574

Open
opened 2026-02-26 07:31:37 +03:00 by kerem · 12 comments

Originally created by @matheusfenolio on GitHub (Aug 12, 2022).
Original GitHub issue: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/2197

Checklist

  • Have you pulled and found the error with jc21/nginx-proxy-manager:latest docker image?
    • Yes
  • Are you sure you're not using someone else's docker image?
    • Yes
  • Have you searched for similar issues (both open and closed)?
    • Yes

Describe the bug
I'm trying to add a host using the container's name, and when I open the URL I get a 502 Bad Gateway. With Docker this works, but with Podman it only works with the container's IP.

Nginx Proxy Manager Version
v2.9.18

To Reproduce
Steps to reproduce the behavior:

  1. Create the container to forward: podman run --name web --network nginx-proxy-manager_default -d nginx:latest
  2. Add the host with:
    • Forward Hostname / IP: web
    • Port: 80
  3. Try to access the service.
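
A quick way to check whether this is a name-resolution problem rather than an NPM problem is to test DNS from inside the NPM container. This is a sketch; the NPM container name `nginx-proxy-manager_app_1` is an assumption based on the compose file below, and `web` comes from step 1:

```shell
# Does Podman's DNS resolve the target container's name from inside NPM?
podman exec -it nginx-proxy-manager_app_1 getent hosts web

# Compare with a direct HTTP request to the name; if resolution works,
# this should return the nginx welcome page headers.
podman exec -it nginx-proxy-manager_app_1 curl -sI http://web:80/
```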

Expected behavior
The Nginx welcome page should appear

Screenshots
![image](https://user-images.githubusercontent.com/37229507/184433996-0e8805af-5d84-4ad9-ad1e-b2d4de79789a.png)

![image](https://user-images.githubusercontent.com/37229507/184433637-9148525d-a9bc-49a0-9b5d-0bc733a96e81.png)

![image](https://user-images.githubusercontent.com/37229507/184433763-7ef0ea8c-cb80-445f-a6c4-e45fe6982086.png)

![image](https://user-images.githubusercontent.com/37229507/184433806-49e5f2ba-d805-466e-a454-9972a13d934e.png)

![image](https://user-images.githubusercontent.com/37229507/184433858-db66448d-f6b5-4dc8-8ae6-bcf359deae77.png)

Operating System
Tried on

  • Ubuntu 20.04 - Podman 3.2.2
  • Ubuntu 22.04 - Podman 3.2.2 and Podman 4.2
  • Fedora 36 - Podman 4.1

Additional context
docker-compose.yml
```yaml
version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```


@the1ts commented on GitHub (Aug 13, 2022):

I don't think this is an NPM issue; there is a lot of discussion of Podman DNS problems following the default network change in Podman 4, released in May of this year. It seems that although CNI was kept as the network stack when a network was already set up, the DNS plugin was no longer used by default, and this is the symptom. Try adding the DNS plugin to the old CNI network you probably have, or remove the existing old network and recreate it with the new default network type, which has DNS built in.
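
A minimal sketch of that suggestion, assuming the network is named `nginx-proxy-manager_default` as in the reproduction steps (note that removing the network detaches any containers using it, so stop and reattach them afterwards):

```shell
# Check whether DNS is enabled on the existing network.
podman network inspect nginx-proxy-manager_default | grep -i dns

# If it is not, remove and recreate the network; on Podman 4+ with the
# netavark backend, newly created networks have DNS enabled by default.
podman network rm nginx-proxy-manager_default
podman network create nginx-proxy-manager_default
```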


@matheusfenolio commented on GitHub (Sep 9, 2022):

I'll give it a shot! I saw some issues about it, but I thought it was fixed, since I'm able to connect two containers using only the container name.


@BrandonG777 commented on GitHub (Mar 29, 2023):

What's strange is that DNS works properly from the command line, and even for subfolder proxies, but not for hostname proxies. Hostnames resolve intermittently, which has me scratching my head even more.


@BrandonG777 commented on GitHub (Mar 30, 2023):

Even with the CNI network backend, DNS is still intermittent. I cannot replicate the issue at the command line.


@the1ts commented on GitHub (Mar 30, 2023):

Sorry, I don't use Podman or NPM anymore, but there is a good reason why the command line and nginx behave differently. DNS for the command line is handled by Podman (or whatever sets up the container), whereas nginx is configured to use /data/nginx/resolv.conf. From memory, that file gets set at first start, and it may be incorrect or contain multiple servers, hence the intermittent behaviour.
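
A quick way to see which nameservers nginx actually ends up with (a sketch; the NPM container name `app` is an assumption, and file paths vary across NPM versions, as the next comment notes):

```shell
# Inspect the resolver config nginx was given and the container's own
# resolv.conf; multiple nameserver entries here would explain why
# lookups succeed or fail intermittently depending on which is picked.
podman exec app cat /etc/nginx/conf.d/include/resolvers.conf
podman exec app cat /etc/resolv.conf
```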


@BrandonG777 commented on GitHub (Mar 30, 2023):

Well, thanks for the reply anyway; you ultimately pointed me in the right direction. /data/nginx/resolv.conf no longer exists and didn't seem to have any effect on this issue. However, /etc/resolv.conf contained the Podman host address followed by my real host system's DNS entries. It appears nginx was doing some sort of round-robin selection of which DNS server to use for lookups. Since this only affects host proxies and not subfolder location proxies, I added a resolver entry to my host proxy's custom config, and that is working for me. I noticed the swag container uses 127.0.0.11 in its resolver config, which doesn't seem to work with this container. Maybe they are adding some sort of port forwarding or an extra package to make that work? That solution seems like it would work well for this container as well.


@fuzzyfox commented on GitHub (Jul 19, 2023):

I believe I have tracked this down to a combination of how nginx handles DNS resolution when multiple resolvers are specified, and Podman DNS resolution.

The issue can be resolved by manually editing /etc/nginx/conf.d/include/resolvers.conf to include only the Podman resolver IP and no other IPs. In my case:

```nginx
# from
resolver 10.89.0.1 8.8.8.8 ipv6=off valid=10s;
# to
resolver 10.89.0.1 ipv6=off valid=10s;
```

This resolved my issues. Obviously this won't survive a container restart right now, due to how this file is generated (https://github.com/NginxProxyManager/nginx-proxy-manager/blob/f91f0ee8db0a72ffb6a8a059474cb5d48ecff2c1/docker/rootfs/etc/s6-overlay/s6-rc.d/prepare/40-dynamic.sh), so as a more resilient interim solution I've added the updated resolver line to the custom configs for my proxies (https://nginxproxymanager.com/advanced-config/#custom-nginx-configurations). Ideally I'd put it into the custom http.conf, but since nginx does not allow redeclaring directives, that won't work right now.

My proposal would be to allow a custom resolver config to be used on boot-up and, if it doesn't exist, generate the file as it is now.
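
Until something like that exists, another interim workaround is to overwrite the generated file after each container start. A sketch, assuming the NPM container is named `app` and the Podman resolver IP is 10.89.0.1 as in the example above:

```shell
# Replace the generated resolvers include with only the Podman resolver,
# then reload nginx so running workers pick it up. Container name and
# resolver IP are assumptions; adjust them for your setup. This must be
# re-run after every container restart.
podman exec app sh -c \
  'echo "resolver 10.89.0.1 ipv6=off valid=10s;" > /etc/nginx/conf.d/include/resolvers.conf \
   && nginx -s reload'
```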


@github-actions[bot] commented on GitHub (Feb 12, 2024):

Issue is now considered stale. If you want to keep it open, please comment 👍


@patrickklaeren commented on GitHub (Aug 23, 2024):

I believe I have this exact error running Portainer and using the hostname, exactly as in the original description above.


@github-actions[bot] commented on GitHub (Mar 29, 2025):

Issue is now considered stale. If you want to keep it open, please comment 👍


@perryk commented on GitHub (Sep 3, 2025):

Just adding a comment to keep this issue open.

Same issue persists in latest NPM and Podman.

Mentioned also in #2608


@Raul824 commented on GitHub (Feb 9, 2026):

I am facing a similar issue on PikaOS.

NPM version 2.13.7
Podman version 5.7.0

Output of curl from the host machine:

```
curl -v http://192.168.1.110:8096
*   Trying 192.168.1.110:8096...
* Established connection to 192.168.1.110 (192.168.1.110 port 8096) from 192.168.1.110 port 43224
* using HTTP/1.x
> GET / HTTP/1.1
> Host: 192.168.1.110:8096
> User-Agent: curl/8.18.0
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 302 Found
< Content-Length: 0
< Date: Mon, 09 Feb 2026 15:14:39 GMT
< Server: Kestrel
< Location: web/
<
* Connection #0 to host 192.168.1.110:8096 left intact
```

Output of curl from inside the NPM pod:

```
curl -v http://192.168.1.110:8096
*   Trying 192.168.1.110:8096...
* connect to 192.168.1.110 port 8096 failed: Connection refused
* Failed to connect to 192.168.1.110 port 8096 after 0 ms: Couldn't connect to server
* Closing connection 0
curl: (7) Failed to connect to 192.168.1.110 port 8096 after 0 ms: Couldn't connect to server
```