mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #928] The upload element blocking subsequent transfers #528
Originally created by @MLC-Mat on GitHub (Jan 24, 2019).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/928
Additional Information
The following information is very important in helping us to help you. Omitting these details may delay your support request or cause it to receive no attention at all.
Keep in mind that the commands we provide to retrieve information are oriented to GNU/Linux distributions, so you may need to use alternatives if you run s3fs on macOS or BSD.
Version of s3fs being used (s3fs --version)
Amazon Simple Storage Service File System V1.82(commit:unknown) with GnuTLS(gcrypt)
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
fuse/bionic,now 2.9.7-1ubuntu1 amd64 [installed]
libfuse2/bionic,now 2.9.7-1ubuntu1 amd64 [installed]
Kernel information (uname -r)
4.15.0-1021-aws
GNU/Linux Distribution, if applicable (cat /etc/os-release)
~$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
s3fs command line used, if applicable
na
/etc/fstab entry, if applicable
s3fs#mybucket/sterling/mybucketfuse _netdev,allow_other,iam_role=myawsrole,parallel_count=50,uid=1001,umask=0077,url=https://s3-eu-west-1.amazonaws.com 0 0
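For ad-hoc testing outside of fstab, an entry like the one above roughly corresponds to a manual mount command. The following is a sketch only: the bucket, path, mountpoint, and role names are taken from the entry above, and the exact option split is an assumption (the fstab-only _netdev hint is dropped, since it is handled by the mount infrastructure rather than s3fs).

```shell
# Manual-mount sketch equivalent to the fstab entry above.
# Requires the IAM role to be attached to the EC2 instance.
s3fs mybucket:/sterling /mybucketfuse \
  -o allow_other,iam_role=myawsrole,parallel_count=50 \
  -o uid=1001,umask=0077 \
  -o url=https://s3-eu-west-1.amazonaws.com
```

Mounting by hand like this makes it easier to iterate on tuning options such as parallel_count before committing them to fstab.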
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs output)
If you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages.
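To capture the debug output mentioned above, one approach is to run s3fs in the foreground with debugging enabled. The dbglevel and curldbg options are the ones referenced in the template text; the bucket name, mountpoint, and log path below are placeholders.

```shell
# Run s3fs in the foreground (-f) so debug messages go to stderr,
# with detailed internal (dbglevel) and libcurl (curldbg) logging.
# Bucket, mountpoint, and log path are placeholders.
s3fs mybucket /mybucketfuse -f \
  -o dbglevel=info -o curldbg \
  2> /tmp/s3fs-debug.log

# For an already-running instance, pull messages from the journal instead:
journalctl | grep s3fs
```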
Details about issue
We have s3fs hung off a couple of HA'd SFTP servers. The SFTP servers are used almost exclusively for uploads. Uploads typically go to different locations, thus no real issues with synchronization etc. SFTP servers do of course like to do an ls at the end of transfers etc, but there is very little actual fetching of files.
A number of file transfers will come in at the same time. The file sizes range from 1-40 GB, but transfers go over a performant Direct Connect link to AWS. When the transfers all kick off together they upload to the SFTP server just fine; however, when the first transfer completes and its upload to S3 begins, the other transfers block and wait. Most SFTP clients handle this fine, but we have one IBM product that is a fussy customer and doesn't. Is there any way to stop the other uploads from blocking? Is there anything I can do with the parameters to tune things further? The EC2 instances are decent-spec, network-focused instances (upload to S3 is impressively fast).
Many thanks
Mat
@MLC-Mat commented on GitHub (Jan 24, 2019):
I should specifically note: the uploads that are blocked are those still moving from client to SFTP server. The transfers just halt, then all start moving once the S3 upload of the job at the front completes.
@gaul commented on GitHub (Feb 2, 2019):
I reproduced these symptoms by writing a large file into the root directory and then running ls on the same directory. I found that ls hangs getting a lock on the in-progress write.
@MLC-Mat commented on GitHub (Feb 2, 2019):
The above ties in exactly with what we are seeing. Thanks for the diagnosis effort so far
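The reproduction described above can be sketched as a short shell script. As written, MNT defaults to a scratch directory so the commands run anywhere (where ls returns immediately); pointing MNT at an s3fs mountpoint on an affected version, and raising the file size to several GB, is what exhibits the hang. The file name and sizes are illustrative assumptions.

```shell
# Reproduction sketch for the "ls blocks on an in-progress write" symptom.
# MNT defaults to a throwaway directory; set MNT to an s3fs mountpoint
# (and use a much larger count) to reproduce the reported hang.
MNT="${MNT:-$(mktemp -d)}"

# Start a large sequential write in the background (64 MB here;
# use several GB against s3fs).
dd if=/dev/zero of="$MNT/bigfile" bs=1M count=64 2>/dev/null &
DD_PID=$!

# List the same directory while the write is in flight. On the affected
# s3fs versions this ls hangs until the S3 upload finishes; on a local
# filesystem it returns immediately.
ls -l "$MNT"

wait "$DD_PID"
echo "write finished"
```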
@gaul commented on GitHub (Jul 10, 2019):
@MLC-Mat Could you test with the latest master and report back?
@MLC-Mat commented on GitHub (Jul 10, 2019):
Sure, I will work with the client to retest, but I won't be with that client until Monday. Thanks a lot for your effort here; it is very much appreciated.