[GH-ISSUE #1893] writing to S3 creates multiple PUT events causing duplicate trigger for same file #963

Closed
opened 2026-03-04 01:50:15 +03:00 by kerem · 4 comments

Originally created by @baldpope on GitHub (Feb 16, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1893

Additional Information

The following information is very important in order to help us help you. Omitting these details may delay your support request or cause it to receive no attention at all.
Keep in mind that the commands we list are oriented to GNU/Linux distributions, so you may need to use different ones on macOS or BSD.

Version of s3fs being used (s3fs --version)

1.90

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

2.9.2

Kernel information (uname -r)

5.10.93-87.444.amzn2.x86_64

GNU/Linux Distribution, if applicable (cat /etc/os-release)

NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"

s3fs command line used, if applicable

/etc/fstab entry, if applicable

# mybucket
mybucket.prod /mnt/mybucket.prod fuse.s3fs _netdev,allow_other,use_path_request_style,endpoint=us-east-2,url=https://s3-us-east-2.amazonaws.com/,iam_role=access_role,nonempty,umask=000 0 0

Details about issue

When using an FTP client (not sure if that is relevant) to write files directly to the s3fs mount point, two PUT events are triggered for the same object. How do I configure s3fs to issue only a single PUT once the file has finished writing?

kerem 2026-03-04 01:50:15 +03:00
  • closed this issue
  • added the "need info" label

@gaul commented on GitHub (Feb 17, 2022):

s3fs buffers files locally and uploads them after `close`, `fsync`, or when `-o max_dirty_data` (default: 5 GB) is reached. Which operations does your application perform, and with what size files do you see duplicate events?

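As a rough illustration of the flush points described above (a sketch only; the exact upload behavior depends on the s3fs version and mount options, and the path here is hypothetical), these two write patterns differ in how many flushes, and therefore potentially how many PUTs, s3fs performs:

```python
import os

# Hypothetical path; on a real setup this would live under the s3fs mount,
# e.g. /mnt/mybucket.prod/. Any writable directory works for the demo.
path = "/tmp/example.txt"

# Pattern 1: write then close -> s3fs flushes once at close (a single PUT).
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"hello")
os.close(fd)

# Pattern 2: write, fsync, then close -> on s3fs the fsync forces an upload
# of dirty data, and the flush at close may upload again, which can show up
# as two PUT events for the same object.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"hello again")
os.fsync(fd)   # explicit sync: s3fs uploads dirty data here
os.close(fd)   # flush at close: may trigger a second upload
```

Running `strace` on the writing application (as suggested later in this thread) shows which of these patterns it actually uses.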

@baldpope commented on GitHub (Feb 18, 2022):

In this case, I'm using a traditional ftp client. I connect to the remote host and download a file to /mnt/mybucket

In the example files, they are small, less than 10K.


@gaul commented on GitHub (Feb 19, 2022):

You might try `strace` on the FTP client to see which syscalls it is performing. You can get similar information from `s3fs -d`. Can you tell us what combination of `open`, `write`, `fsync`, `close`, `rename`, etc. your application performs? It is very likely creating some temporary file.

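The temporary-file suspicion above can be checked against the trace. A temp-file-plus-rename pattern would look roughly like the hypothetical `strace` excerpt below (filenames and sizes are invented for illustration); each `fsync`/`close` of a dirty file can trigger a PUT, and the `rename` adds a server-side copy on top:

```python
# Hypothetical strace excerpt from an FTP client writing into the mount.
trace = """\
openat(AT_FDCWD, "/mnt/mybucket.prod/.file.part", O_WRONLY|O_CREAT|O_TRUNC, 0644) = 3
write(3, "...", 8192) = 8192
fsync(3) = 0
close(3) = 0
rename("/mnt/mybucket.prod/.file.part", "/mnt/mybucket.prod/file") = 0
"""

# Count the operations that can make s3fs upload or copy the object.
# Note: the close after fsync only re-uploads if there is new dirty data,
# but the fsync + rename combination alone already explains two S3 events.
flushes = sum(line.startswith(("fsync", "close")) for line in trace.splitlines())
renames = sum(line.startswith("rename") for line in trace.splitlines())
print(flushes, renames)
```

If the real trace shows this shape, the "duplicate" events are for two different keys (the temp file and the final name), not two PUTs of the same object.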

@gaul commented on GitHub (Sep 8, 2023):

Please reopen if your symptoms persist.
