mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #1283] I can't upload files bigger than 20 M to my S3 bucket #687
Originally created by @olaquetal on GitHub (Apr 30, 2020).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1283
Additional Information
The following information is very important in helping us help you. Omitting any of these details may delay your support request or cause it to receive no attention at all.
Keep in mind that the commands we suggest for retrieving this information are oriented toward GNU/Linux distributions, so you may need to use different ones if you run s3fs on macOS or BSD.
Version of s3fs being used (s3fs --version)
Amazon Simple Storage Service File System V1.86 (commit:005a684) with OpenSSL
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
2.9.9
Kernel information (uname -r)
5.6.6-desktop-1.mga7
GNU/Linux Distribution, if applicable (cat /etc/os-release)
Mageia 7
s3fs command line used, if applicable
sudo s3fs tellurix /mnt/scaleway/ -o passwd_file=${HOME}/.passwd-s3fs,url=https://s3.fr-par.scw.cloud,allow_other -o use_path_request_style,noatime -o dbglevel=info -f -o curldbg
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs output)
If you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages.
Details about issue
I recently created an S3 bucket at Scaleway. I mounted it using s3fs without any apparent problem.
I can upload small files (<30 M), but with larger files (50 M and more) the copy fails with the message "unable to write file, permission denied". I contacted Scaleway support, but they said it's related to my s3fs client.
Note that I have a fairly slow internet connection (ADSL, upload speed 130k max).
Thanks a lot for your help.
@gaul commented on GitHub (May 2, 2020):
I successfully mounted and wrote a 500 MB file to Scaleway using your command-line arguments. Given the CURLE_SEND_ERROR, I wonder if you have some kind of network problem? Maybe try a lower value for -o parallel_count, e.g., 1?
@olaquetal commented on GitHub (May 2, 2020):
Hi (nice avatar!),
You're great, bravo! That solved the problem.
I don't think my connection has a problem; it's just a slow ADSL line.
With parallel_count=3 it's OK with a 70 MB file, but it crashes with a 200 MB one. parallel_count=2 is OK for 200 MB; I don't know if it can speed up the upload, but I'll keep this value. I hope it's settled for good! Thank you very much, it's a great relief!
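For readers hitting the same multipart-upload failures on a slow uplink, the working setup from this thread can be sketched as a single mount command. This is a sketch assuming the reporter's bucket name, mount point, and Scaleway endpoint; substitute your own values, and note that parallel_count=2 is simply the value that worked here (the s3fs default is 5, which can saturate a slow ADSL connection and trigger CURLE_SEND_ERROR):

```shell
# Mount a Scaleway S3 bucket with s3fs, limiting upload parallelism
# so a slow uplink is not saturated during multipart uploads.
# Bucket name, mount point, and URL below are this thread's values.
sudo s3fs tellurix /mnt/scaleway/ \
  -o passwd_file=${HOME}/.passwd-s3fs \
  -o url=https://s3.fr-par.scw.cloud \
  -o use_path_request_style,allow_other,noatime \
  -o parallel_count=2 \
  -o dbglevel=info -f -o curldbg
```

Dropping parallel_count further (e.g., to 1) reduces the bandwidth s3fs tries to use at once at the cost of slower large uploads, so it is worth tuning to the lowest value that still completes your largest files.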