Mirror of https://github.com/s3fs-fuse/s3fs-fuse.git (synced 2026-04-25 13:26:00 +03:00)
[GH-ISSUE #1893] writing to S3 creates multiple PUT events causing duplicate trigger for same file #963
Originally created by @baldpope on GitHub (Feb 16, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1893
Additional Information
The following information is very important in order to help us help you. Omitting these details may delay your support request or cause it to go unanswered.
Keep in mind that the commands we suggest for retrieving this information are oriented toward GNU/Linux distributions, so you may need to use different ones if you run s3fs on macOS or BSD.
Version of s3fs being used (s3fs --version)
1.90
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
2.9.2
Kernel information (uname -r)
5.10.93-87.444.amzn2.x86_64
GNU/Linux Distribution, if applicable (cat /etc/os-release)
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
s3fs command line used, if applicable
/etc/fstab entry, if applicable
Details about issue
When using an FTP client (not sure if this is relevant) to write files directly to the s3fs mount point, two events are triggered for the same object. How do I configure s3fs so that only a single PUT occurs once the file is done writing?
@gaul commented on GitHub (Feb 17, 2022):
s3fs buffers files locally and uploads them after close, fsync, or when -o max_dirty_data (default: 5 GB) is reached. Which operations does your application perform, and with what size files do you see duplicate events?
@baldpope commented on GitHub (Feb 18, 2022):
In this case, I'm using a traditional FTP client: I connect to the remote host and download a file to /mnt/mybucket.
The files in this example are small, less than 10 KB.
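To make the "why two PUTs" question concrete, here is a minimal Python sketch of the common write-then-rename upload pattern. The s3fs_close and s3fs_rename helpers are illustrative stand-ins for s3fs's behavior (it flushes the local copy to S3 on close, and implements rename as a server-side COPY plus DELETE, which also fires an object-created notification); they are not real s3fs APIs, and the paths are made up.

```python
# Sketch: how one FTP upload can produce two S3 object-created events
# through s3fs. Many FTP daemons write the upload under a temporary name
# and then rename it into place; each step reaches S3 separately.

import os
import tempfile

events = []  # stands in for the S3 event notifications you would observe


def s3fs_close(path):
    # s3fs uploads the buffered local copy when the file is closed
    events.append(("PUT", path))


def s3fs_rename(src, dst):
    # S3 has no native rename; s3fs issues a server-side COPY then DELETE,
    # and the COPY also fires an object-created notification
    events.append(("PUT", dst))


def ftp_style_upload(directory, name, data):
    # 1. write the transfer to a temporary name, as many FTP daemons do
    tmp = os.path.join(directory, name + ".part")
    with open(tmp, "wb") as f:
        f.write(data)
    s3fs_close(tmp)          # first object-created event

    # 2. rename into place once the transfer completes
    final = os.path.join(directory, name)
    os.rename(tmp, final)
    s3fs_rename(tmp, final)  # second object-created event


with tempfile.TemporaryDirectory() as d:
    ftp_style_upload(d, "report.csv", b"hello")
    print([op for op, _ in events])  # → ['PUT', 'PUT']
```

If this is the pattern your FTP server follows, filtering notifications by key suffix (ignoring the temporary ".part"-style names) on the S3 side is one common workaround.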
@gaul commented on GitHub (Feb 19, 2022):
You might try strace on the FTP client to see which syscalls it is performing. You can get similar information from s3fs -d. Can you tell us what combination of open, write, fsync, close, rename, etc. your application performs? It is very likely creating some temporary file.
@gaul commented on GitHub (Sep 8, 2023):
Please reopen if your symptoms persist.
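For anyone landing here with the same symptom, a command-line sketch of the strace suggestion above. The process name is an assumption (substitute your actual FTP daemon); attach to the live process and watch for a temporary-name write followed by a rename.

```shell
# Hypothetical invocation: attach to the FTP daemon and log only the
# file syscalls that matter for s3fs flush behavior (-f follows forked
# worker processes). Replace vsftpd with your actual daemon.
strace -f -e trace=openat,write,fsync,close,rename,renameat2 \
    -p "$(pidof vsftpd)" -o /tmp/ftp-trace.log
```

A trace showing openat/write/close on a temporary name followed by a rename to the final name would confirm the two-PUT pattern described in this thread.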