[GH-ISSUE #865] Share ssl certificate between two containers #735

Open
opened 2026-02-26 06:34:11 +03:00 by kerem · 22 comments
Owner

Originally created by @andycandy-de on GitHub (Feb 3, 2021).
Original GitHub issue: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/865

Hello,

Is it possible to share SSL certificates between two NPM containers? That would be nice. I want to set up a local NPM (private network) and a public NPM (connected to the internet). I have set up a Pi-hole, so I can point some subdomains directly to the local NPM. The host configuration must be separate, but the SSL validation must be done by the public NPM. So it would be a great feature to share the certificates between the two containers.
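As later comments in this thread explain, sharing the certificate files alone is not enough, because each NPM instance tracks certificates in its own database. Still, the file-sharing half of the idea can be sketched as a hypothetical compose fragment (service names and paths are illustrative, not a tested configuration):

```yaml
# Hypothetical sketch only: both instances mount the same certificate
# volume. The files become visible to both containers, but the second
# instance's UI will still not list them, since certificates are
# registered in each instance's own database.
services:
  npm-public:
    image: jc21/nginx-proxy-manager:latest
    volumes:
      - ./letsencrypt:/etc/letsencrypt
  npm-local:
    image: jc21/nginx-proxy-manager:latest
    volumes:
      - ./letsencrypt:/etc/letsencrypt:ro   # read-only for the consumer
```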

@theRAGEhero commented on GitHub (Feb 8, 2021):

Isn't it possible right now? I've always seen the possibility to select the certificate that you want. Or maybe I'm wrong?

@andycandy-de commented on GitHub (Feb 9, 2021):

I shared the folder containing the SSL certificates with both containers, but only the container that created the certificates knows about them. The other container doesn't show the certificates, neither in the "SSL Certificates" view nor in the "New Proxy Host" dialog. What's wrong?

@lieven121 commented on GitHub (Feb 11, 2021):

This is because the certificates aren't looked up through the files; they are stored in the database, together with a reference to where the files should be.
This would be more of a feature request, if you want that functionality.

@andycandy-de commented on GitHub (Feb 11, 2021):

Thank you for the response. I think this would be a nice feature. Can I move this issue to a feature request?

@SteveGBuck commented on GitHub (Mar 17, 2021):

Is it not suitable to just have one container exposed to the Internet and use access lists to restrict the sites you want to keep internal?

@andycandy-de commented on GitHub (Mar 18, 2021):

@SteveGBuck thank you for your response. I thought about this solution, but I don't want to set up the services this way. The reason is simple: I just want some services public and some services local. Using access lists still exposes the local services to the internet, just behind a protection layer, and in my opinion that is an unnecessary security risk. So I just want to access my local services with a non-public NPM. But it would be nice to get the certificates from the public NPM.

@SteveGBuck commented on GitHub (Mar 18, 2021):

@andycandy-de Understood, I have this same fear, as it does feel risky that the only separation between your internal services and external ones is a rule determining which subnet the request has come from (though I guess that's really what a firewall fundamentally does anyway?).

I do try to mitigate some risk by only having the web server of my application containers in the same Docker network as NPM. I try to keep all the application and database containers in their own separate per-application Docker networks. I also run a threat-prevention application on my router for added protection. But I'm no security expert and I'm sure there are other measures I should consider.

@andycandy-de commented on GitHub (Mar 22, 2021):

@SteveGBuck yes, of course only the public services should be accessible to the NPM. The required back-end services should only be accessible to the public service, not to the NPM. That's also how I configured my setup.
I don't understand how you want to protect the services with a subnet rule. Can you explain that to me?

@SteveGBuck commented on GitHub (Mar 22, 2021):

@andycandy-de All I mean is: navigate to "Access Lists" in the top menu of NPM and create a new access list that contains only the subnet of your internal network. Then set this access list on the hosts you want to keep internal-only. If you then try to access such a site from outside your network, you will get a 403 Forbidden message.
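The access-list approach described above boils down to plain nginx allow/deny rules. Conceptually, the generated host config contains something like the following (a sketch with example values, not NPM's literal output):

```nginx
location / {
    # Access list: internal subnet only
    allow 192.168.1.0/24;                  # example internal subnet
    deny  all;                             # everyone else receives 403 Forbidden
    proxy_pass http://internal-service;    # hypothetical upstream
}
```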

@sovanyio commented on GitHub (Mar 24, 2021):

Came here to say that something like this would be useful for existing Let's Encrypt configs.
I currently have a wildcard cert for my local domain, set up with a DNS challenge, that I would rather have the host system continue to manage.

Replicating my proxy configuration without being able to use this existing certificate is currently blocking my adoption of the project. I know this is my problem and not the project's, but it would be nice if SSL certs were portable for myself and not treated like Docker containers ;).

Maybe something at the start of the container that looks at what is available under the directory and creates the DB entries. I might look at this myself.

@JakobTewes commented on GitHub (Jan 28, 2022):

@andycandy-de, I'm interested in "exposing certificates to the filesystem", too.
From my perspective this totally makes sense, as there is more than "just HTTP" needing SSL encapsulation, such as MQTT.

@shodanx2 commented on GitHub (Sep 6, 2022):

Hi,

This discussion seems to have stalled.
Here is a clear use case where this is needed.

I am running nginx proxy manager in a docker container and I also run a docker-mailserver container.

The docker-mailserver container needs SSL certificates for SSL/TLS, but I don't want it to manage its own separate set of certificates; I want it to use the ones managed by NPM.

After a fresh NPM install, you will find the following certificate files in your installation directory:

![image](https://user-images.githubusercontent.com/10621885/188536464-46ca6065-eb85-418b-86af-48b8cf3741b5.png)

And while I'm having other problems so I can't test this (*), I suspect that because the paths are "NPM-1" instead of the actual name of the certificate, the mailserver will not find the certificates on its own. They would need to be copied from "NPM-1" to "example.com", and once you do that, they will no longer be updated by the NPM container.

Nevertheless, I will attempt to just create symlinks:

![image](https://user-images.githubusercontent.com/10621885/188537735-a2e48fe2-2bc4-461e-bd7d-3a0c17c2e1a6.png)

And then a symlink from the NPM folder to the docker-mailserver folder

![image](https://user-images.githubusercontent.com/10621885/188538707-c79a5026-4b56-4b09-932d-e80489c74d75.png)

Unfortunately, that does not work

![image](https://user-images.githubusercontent.com/10621885/188541496-0c72172d-3758-4d03-8bae-acd65b0a6ea7.png)

Ok change of plans, I will edit the docker-compose.yml

![image](https://user-images.githubusercontent.com/10621885/188541590-1b5758fe-3d32-4c0b-a9bf-f0667ffd1d7e.png)

And as far as I can tell, this works!

Ok, so as far as this issue is concerned, here is my recommendation.

I don't know why NPM calls its live certificate folders NPM-1, NPM-2, and so on, but I would suggest automatically creating symlinks to the appropriate domain names:

`ln -s letsencrypt/live/NPM-1 letsencrypt/live/example.com`

  • (my mailserver reboots after 10 seconds because it's missing the certificates)
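The symlink suggestion above could be automated with a small script. This is only a sketch: the npm-N folder layout follows the screenshots in this thread, but reading the primary domain out of the certificate with openssl is this sketch's own idea, not something NPM does, and the paths are examples.

```shell
#!/bin/sh
# Sketch: for every npm-N live folder, read the certificate's common name
# and create a domain-named symlink beside it (e.g. example.com -> npm-1).
link_certs() {
    live=$1   # e.g. your mounted letsencrypt/live directory
    for dir in "$live"/npm-*; do
        [ -f "$dir/cert.pem" ] || continue
        # Extract the subject commonName from the certificate itself.
        domain=$(openssl x509 -noout -subject -nameopt multiline -in "$dir/cert.pem" \
                 | awk -F' = ' '/commonName/ {print $2}')
        if [ -n "$domain" ]; then
            ln -sfn "$(basename "$dir")" "$live/$domain"
        fi
    done
}

# Usage (path is an example): link_certs /opt/nginx-proxy-manager/letsencrypt/live
```

A cron job or renewal hook would need to re-run this after each renewal, since new npm-N folders can appear.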
@Jarsky commented on GitHub (Jan 13, 2023):

> And while I'm having other problem so I can't test this (*) but, I suspect that because the path are "NPM-1" instead of the actual name of the certification, the mailserver will not find the certificates on its own, they will need to be copied from "NPM-1" to "example.com" and once you do that, they will no longer be updated by the NPM container.
>
> Although i will attempt to just create symlinks
>
> And then a symlink from the NPM folder to the docker-mailserver folder
>
> Unfortunately, that does not work
>
> Ok change of plans, I will edit the docker-compose.yml
>
> And as far as I can tell this works !!
>
> Ok, so as far as this issue is concerned here is my recommendation
>
> I don't know why NPM calls it's live certification folder NPM-1 NPM-2 and so on
>
> However I would suggest automatic create of symlink to the appropriate domain names
>
> so `ln -s letsencrypt/live/NPM-1 letsencrypt/live/example.com`
>
> * (my mailserver reboots after 10 seconds because it's missing the certificates)

To add to your comment @shodanx2: Certbot has a feature called renewal hooks, which you can read about here:
https://eff-certbot.readthedocs.io/en/stable/using.html#pre-and-post-validation-hooks

This lets you write pre- and post-validation scripts in bash.
There is also a hook called 'deploy' which runs after a new certificate has been generated.

With the NPM Docker image, you map the /data folder as part of creating the container.
You can also map the letsencrypt folder. So if you're using docker-compose it might look something like this:

```yaml
version: '3'
services:
  nginx-proxy-manager:
    container_name: nginx-proxy-manager
    image: 'jc21/nginx-proxy-manager:latest'
    networks:
      - proxy
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    restart: always
networks:
  proxy:
    external: true
```

In this example, the deploy renewal hooks would then live at:

`/opt/nginx-proxy-manager/letsencrypt/renewal-hooks/deploy`

You can create a bash file, e.g. 'mailserver.sh', and make it executable (e.g. `chmod +x mailserver.sh`).
Then use bash to define what you want to do during deploy. Here's an example of what it could look like:

```bash
#!/bin/bash
NPMALIAS="npm-1"
DOMAIN="mydomain.com"
dateF=$(date +%Y-%m)
certFile=cert
keyFile=key

# Check that this run is renewing the expected cert lineage
#[[ $RENEWED_LINEAGE != "/etc/letsencrypt/live/$NPMALIAS" ]] && exit 0

# Make sure the target path structure exists
if [ ! -d "/data/certs/$DOMAIN/archive" ]; then
    echo "Creating path structure"
    mkdir -p "/data/certs/$DOMAIN/archive"
    touch "/data/certs/$DOMAIN/$certFile.pem"
    touch "/data/certs/$DOMAIN/$keyFile.pem"
fi

# Back up the existing certs, stamped by month
echo "Backing up certs"
if [ -s "/data/certs/$DOMAIN/$certFile.pem" ]; then
    cp "/data/certs/$DOMAIN/$certFile.pem" "/data/certs/$DOMAIN/archive/$certFile-$dateF.pem"
    cp "/data/certs/$DOMAIN/$keyFile.pem" "/data/certs/$DOMAIN/archive/$keyFile-$dateF.pem"
fi

# Update certs
echo "Updating certs"
cat "/etc/letsencrypt/live/$NPMALIAS/fullchain.pem" > "/data/certs/$DOMAIN/$certFile.pem"
cat "/etc/letsencrypt/live/$NPMALIAS/privkey.pem" > "/data/certs/$DOMAIN/$keyFile.pem"

# Reload nginx configuration
nginx -s reload
```

So in this example, during the deploy of the certificate:

- It will check whether it is renewing /etc/letsencrypt/live/npm-1; if not, it will exit.
- If it is npm-1, it will archive the old certificate and then copy the new certificate data into place.

You can create another pem file for npm-2, another for npm-3, and so on.
You can then access the certs from any container through
`/opt/nginx-proxy-manager/data/certs`

Alternatively, if you don't have letsencrypt on your host, you could create the directory and symlink it so your apps/containers can use the default location:

```bash
sudo mkdir -p /etc/letsencrypt
sudo ln -s /opt/nginx-proxy-manager/data/certs /etc/letsencrypt/live
```

**EDIT: OK, so it seems the variable $RENEWED_LINEAGE isn't set when running under NPM...
Might be related to the other issues with SSL certificates. You can still use the above, but you have to comment out the $RENEWED_LINEAGE check, which means the script will run on every certificate renewal in NPM rather than only the specific one you're targeting.**
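A consumer such as docker-mailserver can then mount the hook-maintained directory read-only. A hypothetical compose fragment (image name and paths are illustrative, not tested):

```yaml
services:
  mailserver:
    image: mailserver/docker-mailserver:latest   # illustrative image
    volumes:
      # certs kept fresh by the deploy renewal hook
      - /opt/nginx-proxy-manager/data/certs/mydomain.com:/certs:ro
```

Mounting read-only keeps the hook as the single writer, so the consumer can never corrupt the certificate files.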

@JakobTewes commented on GitHub (Jan 13, 2023):

Thanks for sharing that @Jarsky and @shodanx2!

Personally I'd love having a beautifully integrated solution (with a web representation) for this, as I see it as a high-value feature.

kr

Jakob

@shodanx2 commented on GitHub (Jan 14, 2023):

Thank you for the explanation. I will put this into practice the next time I set up one of those servers and will report what I learned here.

@github-actions[bot] commented on GitHub (Mar 20, 2024):

Issue is now considered stale. If you want to keep it open, please comment 👍

@JakobTewes commented on GitHub (Mar 20, 2024):

Plz keep this open 👌

@wakawakaaa commented on GitHub (May 25, 2024):

In this case the best approach would be to just provide the path to the TLS certificates in the docker-compose.yml file, similar to how Traefik does it:

```yaml
tls:
  certificates:
    - certFile: /path/to/domain.cert
      keyFile: /path/to/domain.key
  stores:
    - default
```

Then nginx-proxy-manager would use this certificate as the default, and it could also be selected from the list of SSL certificates.

Please have this feature implemented. Thanks.

@vincent1890 commented on GitHub (Aug 28, 2024):

up

@SpongeManiac commented on GitHub (Apr 2, 2025):

It's a shame this still hasn't been addressed. I am attempting to run GitLab behind NGINX Proxy Manager, and the container registry requires access to the SSL certificates, which are not easily accessed.

@github-actions[bot] commented on GitHub (Oct 16, 2025):

Issue is now considered stale. If you want to keep it open, please comment 👍

@JakobTewes commented on GitHub (Oct 16, 2025):

ouch…still a nice feature
