mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #747] File became less when using S3FS to put the file into S3 #428
Originally created by @wangf8406 on GitHub (Apr 12, 2018).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/747
Additional Information
The following information is very important in order to help us help you. Omitting these details may delay your support request or cause it to receive no attention at all.
Version of s3fs being used (s3fs --version)
s3fs --version
Amazon Simple Storage Service File System V1.83(commit:1a23b88) with OpenSSL
Copyright (C) 2010 Randy Rizun rrizun@gmail.com
License GPL2: GNU GPL version 2 http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Version of fuse being used (pkg-config --modversion fuse)
pkg-config --modversion fuse
2.9.7
System information (uname -r)
uname -r
4.4.0-98-generic
Distro (cat /etc/issue)
cat /etc/issue
Ubuntu 16.04.3 LTS \n \l
s3fs command line used (if applicable)
cp /ssd/test2 /home/linux/obs-stdbucket
df -h
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 796M 8.7M 788M 2% /run
/dev/xvda1 39G 2.6G 34G 7% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
cgmfs 100K 0 100K 0% /run/cgmanager/fs
tmpfs 797M 0 797M 0% /run/user/1000
s3fs 256T 0 256T 0% /home/linux/obs-stdbucket
/dev/xvdb 493G 101G 367G 22% /ssd
Details about issue
I am testing copying a 100 GB file into the folder /home/linux/obs-stdbucket, which I mounted from S3. The test runs on an ECS server.
After the copy operation finishes, the file sizes are different.
copy command:
cp /ssd/test2 /home/linux/obs-stdbucket
ls -l /home/linux/obs-stdbucket/test2
-rw-r--r-- 1 root root 103923130368 Apr 11 23:36 /home/linux/obs-stdbucket/test2
root@s3fs:/home/linux/obs-stdbucket# ls -l /ssd/test2
-rw-r--r-- 1 root root 107374182400 Apr 11 17:06 /ssd/test2
Why did the file become smaller when using s3fs to put it into S3?
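For reference, the gap between the two reported sizes can be computed directly from the ls output above:

```shell
# Bytes missing from the s3fs copy (sizes taken from the ls output above):
echo $((107374182400 - 103923130368))   # prints 3451052032, ~3.2 GiB short
```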
@gaul commented on GitHub (Apr 17, 2018):
This concerns me -- can you run cmp /ssd/test2 /home/linux/obs-stdbucket/test2 to ensure both files have the same content, and if not, which offset differs? Also, which file system does /ssd use? Does it report sparse files or similar with a different size than the logical one?
@wangf8406 commented on GitHub (Apr 20, 2018):
the result of "cmp /ssd/test2 /home/linux/obs-stdbucket/test2" command
cmp: EOF on /home/linux/obs-stdbucket/test2
the /ssd file system is ext4
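One way to answer the sparse-file question above is to compare a file's apparent size with the bytes actually allocated on disk. The sketch below uses a throwaway file (sparse_demo is a hypothetical name) rather than the thread's /ssd/test2:

```shell
# Create a sparse file: 1 GiB logical size with no data blocks written.
truncate -s 1G sparse_demo
# Apparent (logical) size, as ls reports it:
ls -l sparse_demo | awk '{print $5}'               # prints 1073741824
# Bytes actually allocated on disk (filesystem-dependent, near zero here):
du --block-size=1 sparse_demo | awk '{print $1}'
# A large gap between the two numbers means the file is sparse.
rm sparse_demo
```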
@gaul commented on GitHub (Jan 31, 2019):
I suspect that you have hit the 10,000 part limit given the default 10 MB part size and 100 GB file size. However, I could not reproduce your symptoms via:
I used the minimum multipart size to speed up the test. Note that dd reported an error code, s3fs has the expected zero file size, and s3fs also logged a corresponding error. If you can provide exact steps to reproduce your symptoms I can continue to investigate. However, you may work around this problem via a larger part size (s3fs's multipart_size option).
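The part-count arithmetic behind this suspicion can be checked in the shell. The remount line in the comments is a sketch only (the bucket name is hypothetical; the mount point is taken from this thread, credentials omitted):

```shell
# S3 caps a multipart upload at 10,000 parts. With s3fs's default
# 10 MB part size, the largest uploadable object is therefore:
echo $((10 * 10000))   # prints 100000 (MB), i.e. ~97.7 GiB -- under 100 GiB
# Raising the part size raises the ceiling, e.g. 64 MB parts:
echo $((64 * 10000))   # prints 640000 (MB), i.e. ~625 GiB
# Sketch of a remount with a larger part size (bucket name is hypothetical):
# s3fs mybucket /home/linux/obs-stdbucket -o multipart_size=64
```

This matches the symptom: the 100 GiB source file exceeds the ~97.7 GiB ceiling, so the upload stops short.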
@gaul commented on GitHub (Apr 9, 2019):
Closing due to inactivity. Please reopen if symptoms persist.