[GH-ISSUE #1610] Dockerfile build error #848

Closed
opened 2026-03-04 01:49:19 +03:00 by kerem · 17 comments

Originally created by @pgb6 on GitHub (Mar 25, 2021).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1610

Version of s3fs being used (s3fs --version)

(Inside docker container, if I comment out s3 command and lines after in Dockerfile)
Amazon Simple Storage Service File System V1.89 (commit:ef079f4) with OpenSSL

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

(Inside docker container, if I comment out s3 command and lines after in Dockerfile)
2.9.9

Kernel information (uname -r)

(Inside docker container, if I comment out s3 command and lines after in Dockerfile)
5.4.72-microsoft-standard-WSL2

GNU/Linux Distribution, if applicable (cat /etc/os-release)

(Inside docker container, if I comment out s3 command and lines after in Dockerfile)
NAME="Ubuntu"
VERSION="20.10 (Groovy Gorilla)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.10"
VERSION_ID="20.10"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=groovy
UBUNTU_CODENAME=groovy

Details about issue

I wanted to use s3fs so that the outputs of my Python script could go directly to an S3 bucket via a mount in a Docker container's directory (I know I can use boto3 in my script, but I'm not sure why this isn't working).
This is my Dockerfile:

FROM ubuntu:20.10

RUN apt-get update -y && \
    apt-get install -y build-essential libfuse-dev libcurl4-openssl-dev libxml2-dev pkg-config libssl-dev mime-support automake libtool wget tar git unzip
RUN apt-get install lsb-release -y  && apt-get install zip -y && apt-get install vim -y


## Install Python
RUN apt-get update && \
    apt-get install -y \
        python3 \
        python3-pip \
        python3-setuptools \  
    && pip3 install --upgrade pip \
    && apt-get clean

## Install S3 Fuse
RUN rm -rf /usr/src/s3fs-fuse
RUN git clone https://github.com/s3fs-fuse/s3fs-fuse/ /usr/src/s3fs-fuse
WORKDIR /usr/src/s3fs-fuse 
RUN ./autogen.sh && ./configure && make && make install

## Create workdir and logs folder (for mounting)
WORKDIR /code
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY src/ . 
RUN mkdir logs
RUN chmod 755 ./logs

## Set Your AWS Access credentials
ARG AWS_ACCESS_KEY_ID=..................
ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY=..................
ENV AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
ARG AWS_DEFAULT_REGION=us-east-1
ENV AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION
RUN aws ecr get-login --no-include-email | bash

## Set the directory where you want to mount your s3 bucket
ARG S3_MOUNT_DIRECTORY=/code/logs
ENV S3_MOUNT_DIRECTORY=$S3_MOUNT_DIRECTORY
ARG S3_BUCKET_NAME=mybucket
ENV S3_BUCKET_NAME=$S3_BUCKET_NAME 

## S3fs-fuse credential config
RUN echo $AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY > /root/.passwd-s3fs && \
    chmod 600 /root/.passwd-s3fs

## Run s3fs
WORKDIR /
RUN s3fs $S3_BUCKET_NAME $S3_MOUNT_DIRECTORY 


## Entry Point
WORKDIR /code
ENTRYPOINT ["python3","./twitterSearch.py"]

Running docker build on this Dockerfile results in:

[+] Building 1.4s (22/23)
 => [internal] load build definition from Dockerfile                                   0.0s
 => => transferring dockerfile: 1.83kB                                                 0.0s
 => [internal] load .dockerignore                                                      0.0s
 => => transferring context: 2B                                                        0.0s
 => [internal] load metadata for docker.io/library/ubuntu:20.10                        0.5s
 => [auth] library/ubuntu:pull token for registry-1.docker.io                          0.0s
 => [ 1/19] FROM docker.io/library/ubuntu:20.10@sha256:37586e1b9bab0a851b639c9102b002  0.0s
 => [internal] load build context                                                      0.0s
 => => transferring context: 100B                                                      0.0s
 => CACHED [ 2/19] RUN apt-get update -y &&     apt-get install -y build-essential li  0.0s
 => CACHED [ 3/19] RUN apt-get install lsb-release -y  && apt-get install zip -y && a  0.0s
 => CACHED [ 4/19] RUN apt-get update &&     apt-get install -y         python3        0.0s
 => CACHED [ 5/19] RUN rm -rf /usr/src/s3fs-fuse                                       0.0s
 => CACHED [ 6/19] RUN git clone https://github.com/s3fs-fuse/s3fs-fuse/ /usr/src/s3f  0.0s
 => CACHED [ 7/19] WORKDIR /usr/src/s3fs-fuse                                          0.0s
 => CACHED [ 8/19] RUN ./autogen.sh && ./configure && make && make install             0.0s
 => CACHED [ 9/19] WORKDIR /code                                                       0.0s
 => CACHED [10/19] COPY requirements.txt .                                             0.0s
 => CACHED [11/19] RUN pip3 install -r requirements.txt                                0.0s
 => CACHED [12/19] COPY src/ .                                                         0.0s
 => CACHED [13/19] RUN mkdir logs                                                      0.0s
 => CACHED [14/19] RUN chmod 755 ./logs                                                0.0s
 => [15/19] RUN aws ecr get-login --no-include-email | bash                            0.2s
 => [16/19] RUN echo ..................:.................. > /root/.passwd-s3fs &&     0.3s
 => ERROR [17/19] RUN s3fs mybucket /code/logs                                                 0.4s
------
 > [17/19] RUN s3fs  mybucket /code/logs:
#21 0.275 fuse: device not found, try 'modprobe fuse' first
------
executor failed running [/bin/sh -c s3fs $S3_BUCKET_NAME $S3_MOUNT_DIRECTORY]: exit code: 1

If I follow the suggestion and include 'RUN modprobe fuse' before this line in the Dockerfile, the build fails with a 'modprobe not found' error. However, if I erase all the lines below and including the Dockerfile's s3fs command, and ssh into the running container, I am able to run the s3fs command. Only then can I see the mount in 'df -h'.

I am not well-versed in Docker, so I suspect I may be misusing s3fs in this case.


@FANMixco commented on GitHub (Apr 8, 2021):

I'm facing the same issue while building the Dockerfile. Did you find out how to fix it, @pgb6?

@pgb6 commented on GitHub (Apr 8, 2021):

@FANMixco I couldn't find out how to fix it, so I just used the boto3 SDK instead of s3fs. I assumed it may just be a problem with the inner workings of s3fs itself. What else did you try to debug the issue?

@gaul commented on GitHub (Apr 8, 2021):

This isn't an s3fs-specific issue and would occur with any FUSE filesystem like sshfs. You need to configure your Docker container properly to access fuse. Unfortunately I can't help you with this.

@weisdd commented on GitHub (Apr 10, 2021):

On Ubuntu, you can pass the following options to docker to give it access to fuse:
--cap-add SYS_ADMIN --device /dev/fuse --security-opt apparmor:unconfined

E.g.
docker run --rm -it --cap-add SYS_ADMIN --device /dev/fuse --security-opt apparmor:unconfined alpine

The same set of options should work for docker build as well, though it doesn't make much sense to do so: the s3fs process will be terminated once the build is completed. You should move this logic to your entrypoint script.
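For illustration, an entrypoint script along these lines could do the mount at container start (a sketch only, untested; the variable names, password-file path, and `python3` command are taken from the Dockerfile in the original post; it is generated via a heredoc here so the sketch can be syntax-checked):

```shell
# Sketch of an entrypoint script replacing the build-time `RUN s3fs ...`.
cat > entrypoint.sh <<'EOF'
#!/bin/sh
set -e
# Mount the bucket at container start; still requires the FUSE-related
# options (--device /dev/fuse etc.) to be passed to `docker run`.
s3fs "$S3_BUCKET_NAME" "$S3_MOUNT_DIRECTORY" -o passwd_file=/root/.passwd-s3fs
# Replace the shell with the app so it receives signals directly.
exec python3 /code/twitterSearch.py
EOF
chmod +x entrypoint.sh
```

The `exec` on the last line matters: it lets the Python process receive SIGTERM directly instead of being wrapped by a shell.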

@FANMixco commented on GitHub (Apr 13, 2021):

Hi @weisdd, it didn't work:

docker build --cap-add SYS_ADMIN --device /dev/fuse --security-opt apparmor:unconfined -t fanmixco/aws-writer .
unknown flag: --cap-add
See 'docker build --help'.

@weisdd commented on GitHub (Apr 13, 2021):

@FANMixco yeah, you're right, it's not there for build. But anyway, there's no reason to keep RUN s3fs $S3_BUCKET_NAME $S3_MOUNT_DIRECTORY in Dockerfile as well as many other things. If you really need to mount a bucket for your script to work, s3fs has to be called during runtime (docker run ...), not during build. And you can pass the options described above to docker run.

@pgb6 commented on GitHub (Apr 13, 2021):

@weisdd Could you explain why there's no reason to run the command in the Dockerfile? For use-cases like AWS ECS, it wouldn't make sense to run the container and then SSH into it to run the mount command--in this case, everything should be automated after the container is built and run.

@weisdd commented on GitHub (Apr 13, 2021):

@pgb6 Sure. :)
Just imagine that you have an EC2 virtual machine and you want to run nginx. You install the package but forget to instruct systemd to start the daemon automatically. So, after a reboot, you do have nginx, though it's not running.
With your Dockerfile, you have something similar. When you mount a bucket through s3fs without the -f (foreground) flag, a new process runs in the background (a daemon). If that process dies for some reason, you can't work with the files anymore, as there's nothing left to translate system calls into S3-specific HTTP requests.
docker build exists to prepackage an image. The processes you run during the build stage are not carried over to the run stage; they all exit by the end of the build. Thus, when you run a new container from the image, there's no s3fs daemon anymore. You have to call it in an entrypoint script and then exec into your Python script (an example: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#entrypoint).

I'd recommend googling an introduction to Docker (there are hundreds of decent articles and plenty of great books online) to get an idea of how containers differ from virtual machines. Just an hour or two would be enough to get started. Keep going :)

As a side note, s3fs seems like overkill here to me. But that's another question :)
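In Dockerfile terms, the entrypoint approach described above could be wired up roughly like this (a sketch; `entrypoint.sh` is a hypothetical script that mounts the bucket and then execs the app):

```dockerfile
# Instead of `RUN s3fs $S3_BUCKET_NAME $S3_MOUNT_DIRECTORY`:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```

The mount then happens on every `docker run` (with the FUSE options passed there), not once at build time.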

@gaul commented on GitHub (Apr 14, 2021):

It would be great if someone from the community could write up documentation, e.g., a wiki page, that we can point users to for common Docker issues, and maybe add a blurb in the README. There is also a variety of s3fs Docker projects that all seem similar to me. Ideally we could promote one or two of the higher-quality ones in the s3fs README.

Unfortunately the s3fs maintainers (including me) lack sufficient container/Docker background to help with these issues. Please help out if you can!

@FANMixco commented on GitHub (Apr 15, 2021):

I added it as an entrypoint and got a different error:

docker run  --rm -it --cap-add SYS_ADMIN --device /dev/fuse --security-opt apparmor:unconfined fanmixco/aws-writer
s3fs: invalid option -- 'j'

Example:

ENTRYPOINT ["s3fs", "cpoa-test", "/home/app/logs", "-o", "dbglevel=info"]

@FANMixco commented on GitHub (Apr 16, 2021):

Hi @pgb6, I made it work like in this answer:

https://stackoverflow.com/a/67129918/1928691

@pgb6 commented on GitHub (Apr 16, 2021):

@weisdd thank you so much for this answer! It definitely helped improve my understanding of the issue here :)

@pgb6 commented on GitHub (Apr 16, 2021):

@FANMixco Wow, this is also extremely helpful. Thank you for this--I will close the issue now since it works!! Cheers.

@weisdd commented on GitHub (Apr 17, 2021):

@FANMixco it's actually a bad example. I'll try to explain why:

```bash
#REMEMBER. Take care with the Unix line breaks if you use Windows or macOS.
#!/bin/sh
#The & is important to run s3fs in the background.
s3fs MY_BUCKET:/logs /home/app/logs &
java -jar MY_JAVA_APP.jar
```

```bash
#The & is important to run s3fs in the background.
s3fs MY_BUCKET:/logs /home/app/logs &
```

s3fs runs in the background by default, so you don't actually need the `&`.

```bash
java -jar MY_JAVA_APP.jar
```

If you run the app like that, it won't receive signals, so it won't stop gracefully when you send it a SIGTERM.
In Kubernetes, for instance, you'll see the app not reacting to SIGTERM for 30 seconds (the default timeout), after which it is killed with SIGKILL. The same applies to `docker stop`.

There are two options here: either write traps or just exec into the app. The latter would look like:

```bash
exec java -jar MY_JAVA_APP.jar
```

Not sure about background tasks with s3fs; maybe signal propagation will just work, or maybe traps are required. Needs testing.
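The trap option mentioned above can be sketched like this (using `sleep` as a stand-in for the Java app, since only the signal wiring matters here):

```shell
#!/bin/sh
# Run the app in the background, forward TERM/INT to it, and wait for it.
sleep 1 &            # stand-in for `java -jar MY_JAVA_APP.jar`
APP_PID=$!
# On SIGTERM/SIGINT, pass the signal on to the child process.
trap 'kill -TERM "$APP_PID" 2>/dev/null' TERM INT
# `wait` returns when the child exits (or when a trapped signal fires).
wait "$APP_PID"
echo "app exited with status $?"
```

With `exec` instead, none of this plumbing is needed, which is why it's usually the simpler choice.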

@FANMixco commented on GitHub (Apr 17, 2021):

Hi @weisdd. Thanks for your ideas, but the reason is simple: business requirements. I'm not authorized to change any of the JARs. I have to sync the logs folder with S3, but I cannot change the JARs' logic.

@weisdd commented on GitHub (Apr 17, 2021):

@FANMixco there's no modification to the jars; they just start through exec. More details here: https://docs.docker.com/engine/reference/builder/#entrypoint

@FANMixco commented on GitHub (Apr 19, 2021):

@weisdd It happens in my specific business case. However, if you use Docker Compose, the sh trouble doesn't happen.
