mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #610] Difference in Filename copying #343
Originally created by @vgadhadaran on GitHub (May 26, 2017).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/610
Why is there a difference in the filename while copying?
Scenario A: created a file named "oneg", 1 GB in size, and everything was successful.
May 26 12:56:03 k8005227 s3fs[1345]: uploading... [path=/oneg][fd=29][size=1073741824]
May 26 12:57:13 k8005227 s3fs[1345]: HTTP response code 200
May 26 12:57:13 k8005227 s3fs[1345]: [path=/oneg][fd=29]
May 26 12:57:13 k8005227 s3fs[1345]: delete stat cache entry[path=/oneg]
May 26 12:57:13 k8005227 s3fs[1345]: [path=/oneg][fd=29]
May 26 12:57:13 k8005227 s3fs[1345]: [path=/oneg][fd=29]
Scenario B: created a file named "hundtengb" with a file size of 110 GB, but only 0 MB is shown in the namespace and nothing was copied. Also, I see the file name as "fs_ro_test" instead of "hundtengb".
May 26 12:40:02 k0002170 s3fs[7374]: add stat cache entry[path=/fs_ro_test]
May 26 12:40:02 k0002170 s3fs[7374]: [path=/fs_ro_test][fd=10]
May 26 12:40:02 k0002170 s3fs[7374]: [tpath=][path=/fs_ro_test][fd=10]
May 26 12:40:02 k0002170 s3fs[7374]: [path=/fs_ro_test][fd=10]
May 26 12:40:02 k0002170 s3fs[7374]: delete stat cache entry[path=/fs_ro_test]
May 26 12:40:02 k0002170 s3fs[7374]: [path=/fs_ro_test][fd=10]
May 26 12:40:02 k0002170 s3fs[7374]: [path=/fs_ro_test][fd=10]
@gaul commented on GitHub (Feb 2, 2019):
s3fs creates a zero-byte file with metadata before uploading the real file. In this case, the second upload failed due to too many parts. You can work around this by specifying
-o multipart_size=500, which will allow the maximum 5 TB object size.
@ggtakec commented on GitHub (Feb 3, 2019):
s3fs now displays an error for this case, fixed in #948 by @gaul (Thanks).
I'm closing this issue.
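The workaround @gaul describes can be sanity-checked with quick arithmetic. A minimal sketch, assuming S3's documented 10,000-part cap per multipart upload and s3fs's default multipart_size of 10 MB (both worth verifying against current AWS and s3fs documentation):

```shell
# Back-of-the-envelope check of the multipart part-count limit.
# Assumes: 10,000-part cap (S3), 10 MB default multipart_size (s3fs).
part_cap=10000

# With default 10 MB parts, the largest object s3fs can assemble:
default_max_gb=$(( 10 * part_cap / 1024 ))
echo "default multipart_size=10:  max object ~${default_max_gb} GB"

# A 110 GB file needs more parts than the cap allows at 10 MB each,
# which matches the failure in Scenario B above:
parts_needed=$(( 110 * 1024 / 10 ))
echo "110 GB file at 10 MB parts: ${parts_needed} parts (> ${part_cap})"

# With multipart_size=500, the ceiling rises to roughly S3's 5 TB
# per-object limit:
larger_max_gb=$(( 500 * part_cap / 1024 ))
echo "multipart_size=500:         max object ~${larger_max_gb} GB"
```

A mount using the workaround would then look something like s3fs mybucket /mnt/s3 -o multipart_size=500 (bucket name and mount point here are hypothetical).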