[GH-ISSUE #244] Cache-related bug after v1.79 #132

Closed
opened 2026-03-04 01:42:26 +03:00 by kerem · 5 comments

Originally created by @bpascard on GitHub (Aug 18, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/244

I had an automated deployment setup that installed s3fs by cloning master. Yesterday I started getting s3fs write failures.

I use this command to mount my bucket:
`/usr/bin/s3fs my_bucket /shared -d -o endpoint="eu-west-1" -o use_cache=/tmp -o allow_other`

If I try to write to a file with `echo 'hello world' > /shared/test.txt`, I get an input/output error and an empty file is created on S3 at /shared/test.txt.
Based on /var/log/messages, s3fs seems to have issues accessing the cache directory:

```
Aug 18 13:38:22 ip-10-0-0-22 s3fs: failed to open cache stat file path(/test.txt) - errno(2)
Aug 18 13:38:22 ip-10-0-0-22 s3fs: failed to open cache stat file path(/test.txt) - errno(2)
Aug 18 13:38:22 ip-10-0-0-22 s3fs: failed to open file(/tmp/my_bucket/test.txt). errno(2)
Aug 18 13:38:22 ip-10-0-0-22 s3fs: failed to open cache stat file path(/test.txt) - errno(2)
Aug 18 13:38:22 ip-10-0-0-22 s3fs: failed to open cache stat file path(/test.txt) - errno(2)
Aug 18 13:38:22 ip-10-0-0-22 s3fs: failed to open file(/tmp/my_bucket/test.txt). errno(2)
Aug 18 13:38:22 ip-10-0-0-22 s3fs: could not find opened fd(/test.txt)
Aug 18 13:38:22 ip-10-0-0-22 s3fs: could not find fd(file=/test.txt)
```

When I omit the use_cache option the issue goes away.
When I use release version 1.79 everything seems to work fine with cache enabled.

Sorry, I wasn't able to track down the offending commit.

kerem closed this issue 2026-03-04 01:42:27 +03:00

@niklasenB commented on GitHub (Aug 19, 2015):

I raised #230 and closed it, since this issue describes it more precisely; as you figured out, it's the cache parameter.


@gaul commented on GitHub (Aug 21, 2015):

@bpascard Does `mkdir /tmp/my_bucket` work around this issue?


@bpascard commented on GitHub (Aug 21, 2015):

@andrewgaul If I recall correctly, that didn't solve it. Can someone check?

I tried to track down the `failed to open cache stat file path ...` error, which implied some kind of error reading the 'stat cache directory'. The error is raised in [src/fdcache.cpp line 147](https://github.com/s3fs-fuse/s3fs-fuse/blob/master/src/fdcache.cpp#L147):

```c++
  // stat path
  string sfile_path;
  if(!CacheFileStat::MakeCacheFileStatPath(path.c_str(), sfile_path, true)){
    DPRN("failed to create cache stat file path(%s)", path.c_str());
    return false;
  }
  // open
  if(-1 == (fd = open(sfile_path.c_str(), O_CREAT|O_RDWR, 0600))){
=>  DPRNINFO("failed to open cache stat file path(%s) - errno(%d)", path.c_str(), errno);
    return false;
  }
```

At [src/fdcache.cpp (line 62)](https://github.com/s3fs-fuse/s3fs-fuse/blob/master/src/fdcache.cpp#L62), inside the method CacheFileStat::MakeCacheFileStatPath, there's this comment:
`// make stat dir top path( "/<cache_dir>/.<bucket_name>.stat" )`

I tried `mkdir /tmp/.my_bucket.stat` to no avail; s3fs even deletes the directory after running.

I also tried a lot of other tricks on /tmp, including `chmod -R 777` and various recursive calls to `chown` changing groups and users, even though I run s3fs as root.


@gaborkukucska commented on GitHub (Sep 24, 2015):

Same error here. Removing the `-o use_cache=/tmp` option from my command fixes the write failures.

Original command: `s3fs bucketname -o use_cache=/tmp -o allow_other -o nonempty /path/to/folder/`


@ggtakec commented on GitHub (Mar 29, 2019):

We kept this issue open for a long time.
I checked with the latest version (1.86), but the same problem did not occur.
Please try again with the latest version.

I will close this, but if the problem persists, please reopen or post a new issue.
