Mirror of https://github.com/s3fs-fuse/s3fs-fuse.git (synced 2026-04-25 13:26:00 +03:00)
[GH-ISSUE #453] Big filesize upload. Zero size file in result #244
Originally created by @mobidevadmin on GitHub (Jul 22, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/453
Hello,
I'm trying to upload a 42 GB file to S3 via s3fs. Details:
The target directory structure is
ms1/
    daily/
    weekly/
    monthly/
    yearly/
The file is attached, and the upload command is
tar -cvf $TARGETDIR/$1/MyBackup_$DATE.tar.gz /storage/winbackup
That doesn't seem to be the cause, though, as I tried to upload with a simple cp command and the result is the same. Small files (at least up to 2.2 GB) upload fine. I have another backup with 34 GB files which has been working fine for a couple of years now. It seems to me that it has something to do with the overall "session" time the whole upload procedure takes.
s3fs_log.txt
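For scale: s3fs uploads files of this size with S3 multipart upload, so a 42 GB transfer is split into thousands of part requests, which makes it far more sensitive to timeouts than a 2.2 GB file. A rough sketch of the arithmetic, assuming s3fs's 10 MB default part size (configurable with -o multipart_size; the exact default may vary by version):

```shell
# Rough part count for a 42 GB file with an assumed 10 MB multipart size.
FILE_SIZE_MB=$((42 * 1024))   # 42 GB expressed in MB
PART_SIZE_MB=10
# Ceiling division: round up to a whole number of parts.
PARTS=$(( (FILE_SIZE_MB + PART_SIZE_MB - 1) / PART_SIZE_MB ))
echo "$PARTS parts"
```

Over four thousand sequential part requests gives many opportunities for a single request to time out and abort the whole upload.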
@ggtakec commented on GitHub (Jul 24, 2016):
@mobidevadmin
First of all, you should not write the tar output file directly under the mount point, because the file is re-uploaded/re-downloaded every time it changes, which takes a lot of time.
Also, I think you should try the cache options (use_cache / max_stat_cache_size (and stat_cache_expire) / enable_noobj_cache / etc.).
Please try these options.
Thanks in advance for your assistance.
@mobidevadmin commented on GitHub (Jul 24, 2016):
Hi ggtakec,
I tried to just copy a 40+ GB file - same result.
Could you please give a hint on the parameters I should use for max_stat_cache_size (and stat_cache_expire) / enable_noobj_cache?
Thanks
@ggtakec commented on GitHub (Jul 24, 2016):
@mobidevadmin
I think you should use the following options (examples):
-o max_stat_cache_size=1000 : this is the default value; if you need more cache entries, you can increase it.
-o enable_noobj_cache : caches the fact that an object does not exist.
When s3fs uploads/downloads a file, it checks the stats (permissions) of the file and its directories.
These options cache those stats, reducing the number of requests to S3.
If needed, you can also use the following options:
-o stat_cache_expire : if you do not have enough memory, you should specify this option.
-o use_cache=/tmp : if you need a local file cache.
Thanks in advance for your assistance.
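Combined, the suggestions above might look like the following mount invocation. This is a sketch only: the bucket name, mount point, and the stat_cache_expire value are placeholders, not values from the thread.

```shell
# Hypothetical example: "mybucket" and "/mnt/s3" are placeholders.
# The -o options are the ones suggested in this thread;
# stat_cache_expire=900 (15 minutes) is an illustrative value.
s3fs mybucket /mnt/s3 \
    -o max_stat_cache_size=1000 \
    -o enable_noobj_cache \
    -o stat_cache_expire=900 \
    -o use_cache=/tmp
```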
@mobidevadmin commented on GitHub (Jul 24, 2016):
Thank you! I'll try and get back to you with results.
@mobidevadmin commented on GitHub (Jul 24, 2016):
Here is an update,
I used these additional parameters. It still failed:
-o max_stat_cache_size=2000 -o enable_noobj_cache -o use_cache=/tmp
s3fs_log2.txt
@mobidevadmin commented on GitHub (Jul 28, 2016):
Hey,
any ideas?
@mobidevadmin commented on GitHub (Aug 8, 2016):
Hello,
I ended up using the aws s3 CLI utility instead. It works with no issues.
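For reference, the aws CLI handles multipart upload and part retries internally, which is likely why it succeeds where the mounted filesystem path fails. A sketch of the equivalent upload (bucket name and file name are placeholders; the paths are from this thread):

```shell
# Hypothetical example: "mybucket" is a placeholder bucket name.
# "aws s3 cp" automatically switches to managed multipart upload
# for files above its multipart threshold and retries failed parts.
aws s3 cp /storage/winbackup/MyBackup.tar s3://mybucket/ms1/daily/
```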
@ggtakec commented on GitHub (Sep 4, 2016):
@mobidevadmin I apologize about my late reply.