mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 21:35:58 +03:00
[GH-ISSUE #508] whats the limit on the size of file can be written to s3fs? #283
Originally created by @Fei-Guang on GitHub (Nov 23, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/508
It's OK to write a normal-size file, but the process terminates when writing a large file (e.g. 10 GB) to s3fs.
What is the limit on the size of a file that can be written to s3fs?
@gaul commented on GitHub (Nov 24, 2016):
Can you share the steps to reproduce your symptom? I successfully wrote 10 GB via:
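(The command block from this comment did not survive the mirror. Given the conv=fdatasync mention in the next reply, it was most likely a plain dd write; the sketch below is an assumed reconstruction, scaled down from 10 GB to 100 MB so it runs quickly, with /tmp standing in for the s3fs mount point.)

```shell
# Assumed reconstruction of the lost command: a sequential dd write.
# /tmp stands in for the s3fs mount point; 100 MB stands in for 10 GB.
dd if=/dev/zero of=/tmp/s3fs-write-test bs=1M count=100
ls -l /tmp/s3fs-write-test
```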
@Fei-Guang commented on GitHub (Nov 24, 2016):
Have you added the option conv=fdatasync to your dd command, and have you tried multiple processes writing at the same time?
We are in a multi-user environment where many processes write to the same mount point, and the mount becomes unresponsive.
@gaul commented on GitHub (Nov 24, 2016):
It will help if you can give the steps to reproduce these symptoms. Otherwise we are stabbing in the dark.
@ggtakec commented on GitHub (Jan 7, 2017):
@Fei-Guang I'm sorry for my late reply.
Could you post the log from s3fs? Or do you have any log from when s3fs exited?
Also, please let me know the s3fs version (s3fs --version).
I fixed a bug related to uploading in #511 last month.
If you can, please try the latest code in the master branch.
Thanks in advance for your assistance.
@nycdubliner commented on GitHub (Mar 12, 2017):
I think I'm having the same issue.
Normal usage works really well for most files, but copies of large files finish with a zero-length file.
(Sometimes this happens immediately after the data is copied; sometimes the FUSE mount hangs for 10-12 minutes before becoming interactive again and returning a 0-length file.)
To reproduce:
I ran s3fs as follows:
s3fs -o allow_other -o umask=0002 cloud-nas-1 cloud-nas/ -f -o dbglevel=info | tee s3fs-log
Log data is attached in the file s3fs-log.txt.
From another terminal:
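(The reproduction command from the second terminal did not survive the mirror. Based on the "/out" file mentioned later in the thread, it was presumably a large-file copy into the mount; the sketch below is an assumption, with /tmp standing in for the cloud-nas/ mount and 100 MB standing in for the multi-GB original.)

```shell
# Assumed reconstruction of the lost reproduction step: create a large
# file, then copy it into the mount as "out".
# MOUNT would be cloud-nas/ in the report; /tmp stands in here.
MOUNT=/tmp
dd if=/dev/zero of=/tmp/bigfile bs=1M count=100
cp /tmp/bigfile "$MOUNT/out"
ls -l "$MOUNT/out"
```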
Details:
My setup is a Raspberry Pi 3 running current Raspbian and current s3fs (1.8).
Thanks for your work on this project, it's working well for me with the exception of this case.
If you need any other debug information, please ask.
T.
@ggtakec commented on GitHub (Apr 9, 2017):
@nycdubliner Thanks for your report.
I could not find the cause of this problem in your log; it only covers up to just before the operation that probably created the "/out" file for the first time (0 bytes).
If you can, please try to run s3fs with "-o curldbg" again.
And if timeout is the cause of this problem, please also try options such as connect_timeout/readwrite_timeout/retries.
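The options suggested above are passed at mount time; an illustrative invocation might look like this (bucket and mount-point names are taken from the report above, and the numeric values are placeholders, not recommendations):

```shell
# Illustrative s3fs invocation with the suggested debug and timeout options:
#   curldbg            - log libcurl activity
#   connect_timeout    - seconds to wait when establishing a connection
#   readwrite_timeout  - seconds to wait on each read/write before timing out
#   retries            - number of times a failed request is retried
s3fs cloud-nas-1 cloud-nas/ -f \
  -o curldbg -o connect_timeout=300 -o readwrite_timeout=120 -o retries=5
```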
Regards,
@ggtakec commented on GitHub (Mar 30, 2019):
We kept this issue open for a long time.
Is this problem continuing?
We launch new version 1.86, which fixed some problem(bugs).
Please use the latest version.
I will close this, but if the problem persists, please reopen or post a new issue.
If you encounter problems with s3fs again, try running with the dbglevel, curldbg, or similar debug options to print out the log; it contains information for finding the solution.