[GH-ISSUE #299] s3fs not preserving mtime #155

Closed
opened 2026-03-04 01:42:41 +03:00 by kerem · 7 comments
Owner

Originally created by @bazeli on GitHub (Nov 24, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/299

Hello,

I'm using s3fs on Ubuntu 14.04.3 and for the most part things work great. However, I noticed that the mtime attribute is not preserved when a file is copied over with `cp -p`. The log does show the correct mtime, but when doing an `ls -l` and checking the object's metadata in the AWS console, the `x-amz-meta-mtime` attribute records the time at which the object was created.

Connect params:
`s3fs mybucket /srv/s3/mybucket -o uid=1000 -o gid=1000 -o stat_cache_expire=300 -o retries=5 -o endpoint=ap-southeast-1 -o allow_other -o use_sse -d -d -f -o f2 -o curldbg`

File copied to s3fs via:

`cp -p testfile.py /srv/s3/mybucket/`

Here's the relevant part of the debug log that shows the correct mtime (1447940411) for a test file (testfile.py), and the incorrect one that gets attached to the object stored in S3 (1448321852):

[s3fs_debug_snipped.txt](https://github.com/s3fs-fuse/s3fs-fuse/files/42208/s3fs_debug_snipped.txt)

I'm using the latest commit (5af6d4bd825a1e771a30d4e456a77d7d5b3fbfdd), with libcurl 7.35.0.
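For reference, mtime preservation with `cp -p` can be checked directly by comparing timestamps before and after the copy. The sketch below uses a local temp directory as a stand-in for the s3fs mount point; the paths and the fixed timestamp are illustrative, not from the original report:

```shell
# Minimal sketch: verify that cp -p carries the source mtime to the copy.
# A local temp directory stands in for the s3fs mount; paths are illustrative.
tmpdir=$(mktemp -d)
touch -d '2015-11-19 12:00:11 UTC' "$tmpdir/testfile.py"   # set a known mtime
cp -p "$tmpdir/testfile.py" "$tmpdir/copy.py"

# stat -c %Y prints the mtime as a Unix timestamp (GNU coreutils)
src_mtime=$(stat -c %Y "$tmpdir/testfile.py")
dst_mtime=$(stat -c %Y "$tmpdir/copy.py")
echo "source mtime: $src_mtime"
echo "copied mtime: $dst_mtime"

if [ "$src_mtime" = "$dst_mtime" ]; then
    echo "mtime preserved"
else
    echo "mtime NOT preserved"
fi
rm -r "$tmpdir"
```

On a mount exhibiting the bug described above, the second timestamp would instead be the time of the copy.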

kerem 2026-03-04 01:42:41 +03:00
  • closed this issue
  • added the bug label

@gaul commented on GitHub (Nov 24, 2015):

@bazeli Could you submit a pull request with a test to https://github.com/s3fs-fuse/s3fs-fuse/blob/master/test/integration-test-main.sh to illustrate your issue?
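A test along the lines @gaul suggests might look like the following sketch. The function name and the local-directory setup are illustrative, not taken from integration-test-main.sh:

```shell
# Hypothetical integration-test sketch: copy a file with cp -p and fail if
# the mtime changed. Names are illustrative, not from integration-test-main.sh.
test_mtime_preserved() {
    workdir=$(mktemp -d)                       # stands in for the s3fs mount
    echo "test data" > "$workdir/src.txt"
    touch -d '2015-11-24 00:00:00 UTC' "$workdir/src.txt"
    cp -p "$workdir/src.txt" "$workdir/dst.txt"
    src=$(stat -c %Y "$workdir/src.txt")
    dst=$(stat -c %Y "$workdir/dst.txt")
    rm -r "$workdir"
    if [ "$src" != "$dst" ]; then
        echo "ERROR: mtime changed from $src to $dst"
        return 1
    fi
    echo "mtime test passed"
}

test_mtime_preserved && mtime_ok=yes
```

Run against an s3fs mount, the `cp -p` would target the mounted bucket path instead of a second local file.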


@ggtakec commented on GitHub (Nov 24, 2015):

@bazeli I was able to reproduce this bug.
The bug does not occur when the `use_cache` option is specified, which is why I noticed it so late.
A fix is in progress; please wait a little.
Thanks.


@bazeli commented on GitHub (Nov 25, 2015):

@ggtakec Thanks for the update. Will be happy to help test the fix once you are ready.


@ggtakec commented on GitHub (Nov 29, 2015):

@bazeli I fixed this problem in #304.
Please check it, and if you know why PR #300 could not pass on Travis, please let me know.
Thanks in advance for your help.


@bazeli commented on GitHub (Dec 2, 2015):

Thank you @ggtakec - your fix works as intended.

Not exactly sure why #300 won't pass on Travis. Testing the script locally, everything worked as intended. The Travis run complains that "cp: ‘test-s3fs.txt’ and ‘test-s3fs-ALT.txt’ are the same file".
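For context, coreutils `cp` prints that exact diagnostic when the source and destination resolve to the same file (for example, two names sharing one inode), so the Travis failure suggests the two paths in the test ended up referring to a single file. A minimal local reproduction (the file names are borrowed from the error message; the hard link is just one way to trigger it):

```shell
# Reproduce cp's "are the same file" diagnostic: when source and destination
# share an inode (here via a hard link), cp refuses the copy.
tmpdir=$(mktemp -d)
echo "hello" > "$tmpdir/test-s3fs.txt"
ln "$tmpdir/test-s3fs.txt" "$tmpdir/test-s3fs-ALT.txt"    # same inode, two names

# capture cp's error message (cp exits non-zero here)
msg=$(cp "$tmpdir/test-s3fs.txt" "$tmpdir/test-s3fs-ALT.txt" 2>&1)
echo "$msg"
rm -r "$tmpdir"
```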


@ggtakec commented on GitHub (Dec 3, 2015):

@bazeli I found bugs in the test script.
I will merge your PR, and after that I will update the script.
Regards,


@ggtakec commented on GitHub (Dec 3, 2015):

@bazeli I fixed the script in #310.
The mtime test in the script now works correctly, so I can close this issue.
Thanks all.
