mirror of
https://github.com/NginxProxyManager/nginx-proxy-manager.git
synced 2026-04-25 17:35:52 +03:00
[GH-ISSUE #1277] New Install - init-stage2 failed - changing ownership not permitted on logs folder #1031
Originally created by @SaschaHenning on GitHub (Jul 31, 2021).
Original GitHub issue: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/1277
Checklist
Have you pulled and found the error with the jc21/nginx-proxy-manager:latest docker image?
Describe the bug
After an update of the Docker image, the init failed (see below), so I created a new container.
Nginx Proxy Manager Version
2.9.5 - pulled jc21/nginx-proxy-manager:latest today
Operating System
Portainer LXC on Proxmox server (x64)
Possibly important detail: the mapped folders (data etc.) are on an NFS share.
@bynicolas commented on GitHub (Nov 24, 2021):
Still the same issue in 2.9.12. Also using the data folder on an NFS share.
@the1ts commented on GitHub (Dec 8, 2021):
The chown is being done as the root user; does the NFS export have no_root_squash? If not, then the NFS server is simply reducing the permissions of the remote root user, stopping Docker and its containers from doing a chown 0:0.
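As a sketch of what @the1ts describes (the export path and client subnet below are assumptions), an NFS export that lets a remote root user keep root privileges, and therefore lets the container's init run chown 0:0, would look like:

```
# /etc/exports on the NFS server (hypothetical path and client subnet)
# no_root_squash disables the default root_squash mapping, so root on the
# NFS client stays root on the export and chown 0:0 can succeed.
/srv/nginx-proxy-manager  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
```

After editing the file, re-export with `exportfs -ra` on the server. Note that no_root_squash weakens the server's protection against remote root, so it is usually limited to trusted clients.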
@SrFrancia commented on GitHub (Mar 18, 2022):
I'm having the same issue running Rocky Linux 8.5 with podman 3.4.2 on a Proxmox VM.
I just checked my filesystem and it's XFS; does it have a system similar to root_squash, @the1ts (I just learnt about it)?
This is what happens when running from scratch, just after everything is downloaded.
Nginx Proxy Manager Version
2.9.16 - pulled jc21/nginx-proxy-manager:latest today
@WaldHabets commented on GitHub (May 8, 2022):
Try using the :Z suffix for your volumes, so instead of
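For context, the :Z suffix tells Docker/Podman to relabel the volume content with a private SELinux label, which mainly matters on SELinux-enforcing hosts such as the Rocky Linux + podman setup above. A hypothetical compose fragment (image tag and host path are placeholders):

```yaml
services:
  npm:
    image: 'jc21/nginx-proxy-manager:latest'
    volumes:
      # ':Z' asks the container engine to apply a private SELinux label
      # to the host directory so the container may read and write it.
      - '/path/to/nginx-proxy-manager/data:/data:Z'
```

Note this addresses SELinux denials, not NFS root squashing, so it may not help in the pure NFS case.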
@bynicolas commented on GitHub (May 20, 2022):
Still had issues; adding the :Z suffix did not work for me. Since only the logs directory seems to have its ownership changed, I've simply relocated the volume mapping for that log directory locally on the host (and not on the NFS share). In other words, I've added
--volume /path/to/local/nginx-proxy-manager/data/logs:/data/logs
in addition to the previous
--volume /path/to/nfs/nginx-proxy-manager/data:/data
This workaround works fine for me, but you may want to create a cronjob to rsync or move your log files over to the NFS share in your particular situation.
@bynicolas commented on GitHub (May 20, 2022):
@the1ts
I have my NFS share set to map all users to the admin user on my NFS server. It seems to me that the container shouldn't have permissions issues, but I don't understand why I'm still getting this error. For the record, I'm using Synology DSM as the NFS host.
If anyone else has a better explanation or a fix, I'd be glad to learn about it, meanwhile, the workaround above will do just fine for me.
@the1ts commented on GitHub (May 20, 2022):
@bynicolas Yeah, I can cause this exact error by using your setup. I have the standard for an NFS-only share, i.e. mapping none and security sys. This means that the client sets the permissions, not the server. If you squash permissions with mapping, a root user on the client doesn't have root on the server, so it cannot chown 0:0 and scripts in containers die.
@bynicolas commented on GitHub (May 20, 2022):
@the1ts Thanks for the tip, I managed to get it working (kind of) with the default setup as well: mapping none, security sys.
The only thing now is that I see a bunch of warnings about failed logrotate due to insecure permissions.
Do you happen to know how to fix this? Maybe I'm using incorrect mount options?
@the1ts commented on GitHub (May 21, 2022):
All I can suggest is perhaps fixing the permissions manually, since they've been somewhat broken by the NFS issues in the past. My /data/logs is chown 0:0 and chmod 755, and /data/logs/*log is again chown 0:0 but chmod 644.
This looks correct, as it follows the chmod for /data/logs in the NPM scripts and the 644 for the files implied by the logrotate config.
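A minimal sketch of that manual fix, assuming the data directory lives at the hypothetical path below and the commands run as root on the NFS client:

```shell
# Hypothetical path to the mapped data directory
LOGS="/path/to/nginx-proxy-manager/data/logs"

chown 0:0 "$LOGS"       # directory owned by root, as the NPM scripts set it
chmod 755 "$LOGS"       # rwxr-xr-x on the directory
chown 0:0 "$LOGS"/*log  # log files owned by root as well
chmod 644 "$LOGS"/*log  # rw-r--r--, the mode implied by the logrotate config
```

If the share still squashes root, the chown calls will fail again; the chmod calls alone are enough to silence logrotate's "insecure permissions" warnings when ownership is already correct.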
@jpharaoh27 commented on GitHub (Jul 15, 2022):
Hey all, I'm having a similar issue here. In the screenshot, I created and chown/chmod the /data/logs directory and I got the chown error. Before I created the logs directory, I was getting a "mkdir - permission denied" error when it was trying to make /data/logs.
Any advice/help will be super appreciated.. been racking my head on this for days lol
@github-actions[bot] commented on GitHub (Mar 7, 2024):
Issue is now considered stale. If you want to keep it open, please comment 👍
@ensleep commented on GitHub (May 2, 2024):
I want to know why NPM needs chown. The root user in NPM has the right to edit files on NFS; why does it need the right to run chown?
@jc21 commented on GitHub (May 2, 2024):
Because the image can run nginx as non-root, if specified with the PUID env variable.
@ian351c commented on GitHub (Jun 4, 2024):
[EDIT]
So it looks like there's more chowning going on than just Stage 2. I created a PR that skips chowning for the /data and /etc/letsencrypt folders, since those (for me) are mounted remotely (VMware HGFS) and don't allow chowning to happen. It's PR-3792 and implements a SKIP_FILE_OWNERSHIP environment variable that, if set to true, will skip chowning anything in /data or /etc/letsencrypt. This works for me because I mounted the HGFS volume with the same UID/GID that I run the NPM container as, so there is no need to attempt to chown those files.
[END EDIT]
For folks who don't know how this works (which was me up until 10 minutes ago): @jc21 is using s6-overlay by Just Containers. This framework appears to automatically try to change ownership of (at least some of) the data/config/log files stored in volumes or binds for the container when PUID or PGID is set. This fails in certain situations (for example: I am running NPM in a Docker container on a Debian VM inside VMware Fusion on a Mac mini M1). To override this behavior, set the environment variable S6_BEHAVIOUR_IF_STAGE2_FAILS=0 in the docker command or compose file. Be aware that I have no idea what else this might break, but it seems to be working for me. I still have read/write access to all my files inside the container; I just can't change ownership.
References:
NPM Dockerfile (see line 15)
Customizing s6-overlay behavior
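The override described above as a compose fragment (service name and host paths are placeholders). S6_BEHAVIOUR_IF_STAGE2_FAILS controls how s6-overlay reacts when a stage 2 init script exits nonzero:

```yaml
services:
  npm:
    image: 'jc21/nginx-proxy-manager:latest'
    environment:
      # 0 = continue startup silently even if a stage 2 init script
      # (such as the chown step) fails; the default behaviour is to stop.
      S6_BEHAVIOUR_IF_STAGE2_FAILS: '0'
    volumes:
      - '/path/to/nginx-proxy-manager/data:/data'
      - '/path/to/nginx-proxy-manager/letsencrypt:/etc/letsencrypt'
```

As the comment above warns, this suppresses all stage 2 failures, not just the chown one, so other broken init steps would also go unreported.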
@ian351c commented on GitHub (Jun 6, 2024):
@jc21 I'm not sure how to request this to be merged, but if you point me in the right direction, I'll go that way...
@github-actions[bot] commented on GitHub (Dec 8, 2024):
Issue is now considered stale. If you want to keep it open, please comment 👍
@crazychatting commented on GitHub (Apr 16, 2025):
Skipping the ownership change would also help with including custom configs via Docker Swarm config mounts.
@github-actions[bot] commented on GitHub (Nov 9, 2025):
Issue is now considered stale. If you want to keep it open, please comment 👍