[GH-ISSUE #676] Support ZFS #572

Closed
opened 2026-02-26 06:33:27 +03:00 by kerem · 7 comments
Owner

Originally created by @papatistos on GitHub (Oct 25, 2020).
Original GitHub issue: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/676

I'm not sure if this should rather be classified as a bug, but since zfs is not explicitly supported, I'm filing it as a feature request:

Currently, the container will not work if you map /config onto a zfs volume. The logs will look like this:

```
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 00-app-niceness.sh: executing...
[cont-init.d] 00-app-niceness.sh: exited 0.
[cont-init.d] 00-app-script.sh: executing...
[cont-init.d] 00-app-script.sh: exited 0.
[cont-init.d] 00-app-user-map.sh: executing...
[cont-init.d] 00-app-user-map.sh: exited 0.
[cont-init.d] 00-clean-logmonitor-states.sh: executing...
[cont-init.d] 00-clean-logmonitor-states.sh: exited 0.
[cont-init.d] 00-clean-tmp-dir.sh: executing...
[cont-init.d] 00-clean-tmp-dir.sh: exited 0.
[cont-init.d] 00-set-app-deps.sh: executing...
[cont-init.d] 00-set-app-deps.sh: exited 0.
[cont-init.d] 00-set-home.sh: executing...
[cont-init.d] 00-set-home.sh: exited 0.
[cont-init.d] 00-take-config-ownership.sh: executing...
[cont-init.d] 00-take-config-ownership.sh: exited 0.
[cont-init.d] 00-xdg-runtime-dir.sh: executing...
[cont-init.d] 00-xdg-runtime-dir.sh: exited 0.
[cont-init.d] nginx-proxy-manager.sh: executing...
[cont-init.d] nginx-proxy-manager.sh: Initializing database data directory...
[cont-init.d] nginx-proxy-manager.sh: exited 1.
[services.d] stopping services
[services.d] stopping s6-fdholderd...
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 00-app-niceness.sh: executing...
[cont-init.d] 00-app-niceness.sh: exited 0.
[cont-init.d] 00-app-script.sh: executing...
[cont-init.d] 00-app-script.sh: exited 0.
[cont-init.d] 00-app-user-map.sh: executing...
[cont-init.d] 00-app-user-map.sh: exited 0.
[cont-init.d] 00-clean-logmonitor-states.sh: executing...
[cont-init.d] 00-clean-logmonitor-states.sh: exited 0.
[cont-init.d] 00-clean-tmp-dir.sh: executing...
[cont-init.d] 00-clean-tmp-dir.sh: exited 0.
[cont-init.d] 00-set-app-deps.sh: executing...
[cont-init.d] 00-set-app-deps.sh: exited 0.
[cont-init.d] 00-set-home.sh: executing...
[cont-init.d] 00-set-home.sh: exited 0.
[cont-init.d] 00-take-config-ownership.sh: executing...
[cont-init.d] 00-take-config-ownership.sh: exited 0.
[cont-init.d] 00-xdg-runtime-dir.sh: executing...
[cont-init.d] 00-xdg-runtime-dir.sh: exited 0.
[cont-init.d] nginx-proxy-manager.sh: executing...
[cont-init.d] nginx-proxy-manager.sh: Initializing database data directory...
[cont-init.d] nginx-proxy-manager.sh: exited 1.
[services.d] stopping services
[services.d] stopping s6-fdholderd...
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.
[s6-finish] sending all processes the TERM signal.
```

It took me what felt like an eternity to figure out that the reason this wasn't working was that I was using the "wrong" filesystem. zfs doesn't work because it apparently doesn't support `fallocate` (which NPM seems to use).

So if it is possible to avoid `fallocate`, that would make NPM compatible with zfs. I can see, though, that this is probably not a priority for you, so I'd like to suggest warning users about this (in the instructions) or, ideally, detecting that `fallocate` is not working and issuing an error about it (and possibly stopping the container).
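A minimal sketch of the pre-flight check suggested above (illustrative only: `check_fallocate` and the probe-file name are made up, and it assumes `fallocate(1)` from util-linux is present in the image):

```shell
#!/bin/sh
# Hypothetical init-time check: attempt a small fallocate in the target
# directory and fail with a clear message if the filesystem rejects it,
# instead of letting database initialization die with a bare "exited 1".
check_fallocate() {
    dir="$1"
    probe="$dir/.fallocate-probe"
    if fallocate -l 4096 "$probe" 2>/dev/null; then
        rm -f "$probe"
        return 0
    fi
    rm -f "$probe"
    echo "ERROR: $dir does not support fallocate (ZFS?); cannot initialize database here." >&2
    return 1
}

# CONFIG_DIR would be /config in the container; defaults to /tmp here so
# the sketch is runnable outside it.
check_fallocate "${CONFIG_DIR:-/tmp}"
```

On a ZFS-backed directory the `fallocate` call would fail (historically with `EOPNOTSUPP`), so the container could stop with an actionable message rather than a restart loop.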

kerem 2026-02-26 06:33:27 +03:00
Author
Owner

@Wadera commented on GitHub (Oct 27, 2020):

Are you sure that you configured docker correctly? Mine works as expected.

Check this:

root@myhost:~# docker info | grep zfs
 Storage Driver: zfs

https://docs.docker.com/storage/storagedriver/zfs-driver/

Author
Owner

@papatistos commented on GitHub (Oct 29, 2020):

Maybe there is a misunderstanding: by "zfs volume" I didn't mean a docker volume (strictly speaking) but a bind mount to a zfs directory on the host. Like this:

![image](https://user-images.githubusercontent.com/3662750/97510097-43467300-1984-11eb-91c9-5989f0e817ec.png)

If I misunderstood, could you explain some more? `docker info | grep zfs` returns nothing, and so does `docker info | grep storage`.

Author
Owner

@Wadera commented on GitHub (Oct 31, 2020):

If it shows nothing, that means you haven't configured the ZFS storage driver.
It's not an NPM issue, but a docker configuration issue.
More about Docker ZFS support here: https://docs.docker.com/storage/storagedriver/zfs-driver/

Author
Owner

@papatistos commented on GitHub (May 15, 2021):

> If it shows nothing, that means you haven't configured the ZFS storage driver.

I'm not sure why I would need the ZFS storage driver. I'm running about 15 docker containers, all of which use the zfs storage without any issues.

You never said anything about this:

> zfs doesn't work because apparently it doesn't support `fallocate` (which NPM seems to use).
>
> So if it is possible to avoid `fallocate`, that would make NPM compatible with zfs.

Is it not possible to avoid `fallocate`?
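This distinction matters: the Docker storage driver governs how image layers and the container's writable layer are stored, while a bind mount goes straight to the host filesystem. One way to confirm which filesystem actually backs a bind-mounted directory (the path is illustrative, assuming GNU coreutils `stat`):

```shell
# Print the filesystem type backing a directory; on a ZFS dataset this
# prints "zfs" (on ext4 it prints "ext2/ext3"). The path is an example.
stat -f -c %T /tank/appdata/npm
```

If this prints `zfs` for the directory mapped to `/config`, the container hits ZFS's `fallocate` behavior regardless of which storage driver Docker itself uses.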

Author
Owner

@chaptergy commented on GitHub (May 16, 2021):

At least the `sqlite3` npm package seems to use `fallocate`, which is a database option via knex.
If anyone wants to try preventing its use when sqlite is not in use, to see if that helps, PRs are welcome.

Author
Owner

@github-actions[bot] commented on GitHub (Mar 24, 2024):

Issue is now considered stale. If you want to keep it open, please comment 👍

Author
Owner

@github-actions[bot] commented on GitHub (May 4, 2025):

Issue was closed due to inactivity.
