[GH-ISSUE #508] What is the limit on the size of a file that can be written to s3fs? #283

Closed
opened 2026-03-04 01:44:02 +03:00 by kerem · 7 comments
Owner

Originally created by @Fei-Guang on GitHub (Nov 23, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/508

It's OK to write a normal-size file, but the process terminates when a large file (e.g. 10 GB) is written to s3fs.

What is the limit on the size of a file that can be written to s3fs?

kerem closed this issue 2026-03-04 01:44:03 +03:00

@gaul commented on GitHub (Nov 24, 2016):

Can you share the steps to reproduce your symptom? I successfully wrote 10 GB via:

$ dd if=/dev/zero of=$MOUNTPOINT/out bs=10M count=1000
1000+0 records in
1000+0 records out
10485760000 bytes (10 GB) copied, 334.098 s, 31.4 MB/s

@Fei-Guang commented on GitHub (Nov 24, 2016):

Have you added the option conv=fdatasync to your write? And have you tried multiple processes writing at the same time?
We are in a multi-user environment; many processes write to the same mount point, and it dies.
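The multi-writer scenario described in this comment could be sketched as follows. The path and the file sizes are placeholders: a stand-in directory is used here so the sketch runs anywhere, and the sizes are scaled down; in practice `MOUNTPOINT` would be the s3fs mount and the writes would be multi-gigabyte.

```shell
# Several concurrent writers with fdatasync, all targeting the same
# mount point -- a stand-in directory here, an s3fs mount in practice.
MOUNTPOINT=${MOUNTPOINT:-/tmp/s3fs-test}
mkdir -p "$MOUNTPOINT"
for i in 1 2 3 4; do
  # conv=fdatasync makes dd call fdatasync() before exiting, so a
  # failing flush on close/sync surfaces as a dd error instead of
  # silently succeeding.
  dd if=/dev/zero of="$MOUNTPOINT/out-$i" bs=1M count=10 conv=fdatasync &
done
wait
ls -l "$MOUNTPOINT"
```

With conv=fdatasync, dd's exit status reflects whether the data actually reached the backing store, which is why the reporter asks about it: without it, a failure during s3fs's upload on flush can go unnoticed by the writer.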


@gaul commented on GitHub (Nov 24, 2016):

It will help if you can give the steps to reproduce these symptoms. Otherwise we are stabbing in the dark.


@ggtakec commented on GitHub (Jan 7, 2017):

@Fei-Guang I'm sorry for my late reply.
Could you post the log from s3fs? Or do you have any log from when s3fs exits?
And please let me know your s3fs version (s3fs --version).

I fixed a bug in uploading in #511 last month.
If you can, please try the latest code in the master branch.

Thanks in advance for your assistance.


@nycdubliner commented on GitHub (Mar 12, 2017):

I think I'm having the same issue.

Normal usage for most files works really well, but large-file copies finish with a zero-length file.
(Sometimes this happens immediately after the data is copied; sometimes the fuse mount hangs for 10-12 minutes before becoming interactive again and returning a 0-length file.)

To reproduce:
I ran s3fs as follows:
s3fs -o allow_other -o umask=0002 cloud-nas-1 cloud-nas/ -f -odbglevel=info | tee s3fs-log
Log data attached in file s3fs-log.txt

From another terminal:

pi@raspberrypi:~ $ time dd if=/dev/zero of=cloud-nas/out bs=10M count=170

dd: closing output file ‘cloud-nas/out’: Operation not supported

pi@raspberrypi:~ $ ls -lah cloud-nas/out 

-rwxrwxr-x 1 pi pi 0 Mar 12 19:10 cloud-nas/out

Details:
My setup is a Raspberry Pi 3 running current Raspbian and current s3fs (1.80)

pi@raspberrypi:~ $ uname -a
Linux raspberrypi 4.4.50-v7+ #970 SMP Mon Feb 20 19:18:29 GMT 2017 armv7l GNU/Linux
pi@raspberrypi:~ $ cat /etc/os-release 
PRETTY_NAME="Raspbian GNU/Linux 8 (jessie)"
NAME="Raspbian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"

pi@raspberrypi:~ $ dpkg -l | grep -i fuse
ii  fuse                                  2.9.3-15+deb8u2                           armhf        Filesystem in Userspace
ii  gvfs-fuse                             1.22.2-1                                  armhf        userspace virtual filesystem - fuse server
ii  libconfuse-common                     2.7-5                                     all          Common files for libConfuse
ii  libconfuse0:armhf                     2.7-5                                     armhf        Library for parsing configuration files
ii  libfuse-dev                           2.9.3-15+deb8u2                           armhf        Filesystem in Userspace (development)
ii  libfuse2:armhf                        2.9.3-15+deb8u2                           armhf        Filesystem in Userspace (library)
ii  ntfs-3g                               1:2014.2.15AR.2-1+deb8u3                  armhf        read/write NTFS driver for FUSE

pi@raspberrypi:~ $ s3fs --version
Amazon Simple Storage Service File System V1.80(commit:6affeff) with OpenSSL
Copyright (C) 2010 Randy Rizun <rrizun@gmail.com>
License GPL2: GNU GPL version 2 <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Thanks for your work on this project, it's working well for me with the exception of this case.
If you need any other debug information, please ask.

T.


@ggtakec commented on GitHub (Apr 9, 2017):

@nycdubliner Thanks for your report.

I could not find the cause of this problem in your log; the log only covers up to just before the failing operation, probably when the "/out" file was first created (0 bytes).
If you can, please try running s3fs with "-o curldbg" again.

And if a timeout is the cause of this problem, please also try options such as connect_timeout/readwrite_timeout/retries.
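As a sketch, a mount with those options might look like the following. The bucket name and mount point are placeholders; the three option names come from the s3fs man page, and the values shown are illustrative rather than recommended defaults.

```shell
# Mount with more forgiving network settings (placeholder bucket/path):
#   connect_timeout   - seconds to wait when establishing a connection
#   readwrite_timeout - seconds to wait during a transfer before retrying
#   retries           - number of times to retry a failed S3 request
s3fs mybucket /mnt/s3 \
  -o connect_timeout=300 \
  -o readwrite_timeout=120 \
  -o retries=5
```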

Regards,


@ggtakec commented on GitHub (Mar 30, 2019):

We have kept this issue open for a long time.
Is this problem still occurring?
We have released version 1.86, which fixes several bugs.
Please use the latest version.
I will close this, but if the problem persists, please reopen or post a new issue.

If you still encounter problems with s3fs, try using the dbglevel, -d, or curldbg options to print a log.
It will contain information helpful for a solution.
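A debug invocation along those lines might look like this; bucket name, mount point, and log file name are placeholders:

```shell
# Run s3fs in the foreground with verbose logging.
#   -f              keep s3fs in the foreground, logging to stdout/stderr
#   -o dbglevel=info  s3fs's own debug level (err/warn/info/dbg)
#   -o curldbg        dump libcurl protocol traffic as well
s3fs mybucket /mnt/s3 -f -o dbglevel=info -o curldbg 2>&1 | tee s3fs-debug.log
```

Reproducing the failing write while this is running captures the HTTP exchange with S3 around the failure, which is usually what is needed to diagnose timeout or multipart-upload problems.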
