[GH-ISSUE #472] Error writing a file #261

Closed
opened 2026-03-04 01:43:47 +03:00 by kerem · 9 comments

Originally created by @Monkey-Island on GitHub (Sep 19, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/472

Hi,

I'm writing from Tomcat to a folder mounted with s3fs. I'm using Java with the following code:

List<FileItem> items;
// ... add data to the list
Iterator<FileItem> iterator = items.iterator();
while (iterator.hasNext()) {
    FileItem fi = iterator.next();
    String name = "/usr/local/xxx/xxxx.jpg";
    fi.write(new File(name));
}

This code works fine on a local Linux filesystem, but on an s3fs mount it only works when the file is small; it fails for any file larger than about 100 kB. The file is created, but with size 0.

These are the logs:

  • Connection #1 to host xxxxxx.s3.amazonaws.com left intact
    [INF] curl.cpp:RequestPerform(1910): HTTP response code 200
    [INF] cache.cpp:AddStat(346): add stat cache entry[path=/43/images/ban.jpg]
    [INF] fdcache.cpp:SetMtime(1008): [path=/43/images/ban.jpg][fd=6][time=1474014740]
    [INF] s3fs.cpp:s3fs_getxattr(3097): [path=/43/images/ban.jpg][name=security.capability][value=(nil)][size=0]
    [INF] s3fs.cpp:s3fs_getattr(809): [path=/43/images/ban.jpg]
    [INF] s3fs.cpp:s3fs_flush(2166): [path=/43/images/ban.jpg][fd=6]
    [INF] fdcache.cpp:RowFlush(1418): [tpath=][path=/43/images/ban.jpg][fd=6]
    [INF] s3fs.cpp:s3fs_release(2219): [path=/43/images/ban.jpg][fd=6]
    [INF] cache.cpp:DelStat(555): delete stat cache entry[path=/43/images/ban.jpg]
    [INF] fdcache.cpp:GetFdEntity(1929): [path=/43/images/ban.jpg][fd=6]
  • Connection #2 to host xxxxx.s3.amazonaws.com left intact
    [INF] curl.cpp:RequestPerform(1910): HTTP response code 200
    [INF] cache.cpp:AddStat(346): add stat cache entry[path=/43/images/ban.jpg]
    [INF] fdcache.cpp:SetMtime(1008): [path=/43/images/ban.jpg][fd=6][time=1474014740]
    [INF] s3fs.cpp:s3fs_getxattr(3097): [path=/43/images/ban.jpg][name=security.capability][value=(nil)][size=0]
    [INF] s3fs.cpp:s3fs_getattr(809): [path=/43/images/ban.jpg]
    [INF] s3fs.cpp:s3fs_getxattr(3097): [path=/43/images/ban.jpg][name=security.capability][value=(nil)][size=0]
    [INF] s3fs.cpp:s3fs_flush(2166): [path=/43/images/ban.jpg][fd=6]
    [INF] fdcache.cpp:RowFlush(1418): [tpath=][path=/43/images/ban.jpg][fd=6]
    [INF] curl.cpp:PutRequest(2641): [tpath=/43/images/ban.jpg]
    [INF] curl.cpp:prepare_url(4175): URL is http://s3.amazonaws.com/xxxxx/folder/43/images/ban.jpg
    [INF] curl.cpp:prepare_url(4207): URL changed is http://xxxxx.s3.amazonaws.com/xxxxx/folder/43/images/ban.jpg
    [INF] curl.cpp:insertV4Headers(2237): computing signature [PUT] [/folder/43/images/ban.jpg] [] [xxxxxxxxxxxxx]
    [INF] curl.cpp:url_to_host(100): url is http://s3.amazonaws.com
    [INF] curl.cpp:PutRequest(2750): uploading... [path=/43/images/ban.jpg][fd=6][size=2048]
  • Found bundle for host xxxxx.s3.amazonaws.com: 0xxxxxxxxxxx
  • Re-using existing connection! (#2) with host xxxxx.s3.amazonaws.com
  • Connected to xxxxxxx.s3.amazonaws.com (xxxxxxxx) port 80 (#2)

    PUT /folder/43/images/ban.jpg HTTP/1.1
    User-Agent: s3fs/1.80 (commit hash 6be3236; OpenSSL)
    Accept: */*
    Authorization: AWS4-HMAC-SHA256 Credential=xxxxxxxx, SignedHeaders=content-type;host;x-amz-acl;x-amz-content-sha256;x-amz-date;x-amz-meta-gid;x-amz-meta-mode;x-amz-meta-mtime;x-amz-meta-uid, Signature=xxxxxxx
    Content-Type: image/jpeg
    host: xxxxxx.s3.amazonaws.com
    x-amz-acl: private
    x-amz-content-sha256: xxxxxxxxxxxxxx
    x-amz-date: 20160916T084004Z
    x-amz-meta-gid: 0
    x-amz-meta-mode: 33188
    x-amz-meta-mtime: 1474015204
    x-amz-meta-uid: 0
    Content-Length: 2048
    Expect: 100-continue

< HTTP/1.1 100 Continue

  • We are completely uploaded and fine
    < HTTP/1.1 200 OK
    < x-amz-id-2: xxxxx
    < x-amz-request-id: xxx
    < Date: Fri, 16 Sep 2016 08:40:05 GMT
    < ETag: "xxxx"
    < Content-Length: 0
    < Server: AmazonS3
    <
  • Connection #2 to host xxxxx.s3.amazonaws.com left intact
    [INF] curl.cpp:RequestPerform(1910): HTTP response code 200
    [INF] s3fs.cpp:s3fs_release(2219): [path=/43/images/ban.jpg][fd=6]
    [INF] cache.cpp:DelStat(555): delete stat cache entry[path=/43/images/ban.jpg]
    [INF] fdcache.cpp:GetFdEntity(1929): [path=/43/images/ban.jpg][fd=6]

Java also crashes:

javax.servlet.ServletException: java.lang.NoSuchMethodError: org.apache.commons.io.IOUtils.copy(Ljava/io/InputStream;Ljava/io/OutputStream;)I
org.apache.jasper.runtime.PageContextImpl.doHandlePageException(PageContextImpl.java:916)
org.apache.jasper.runtime.PageContextImpl.handlePageException(PageContextImpl.java:845)
org.apache.jsp.save_jsp._jspService(save_jsp.java:478)
org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:439)
org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:395)
org.apache.jasper.servlet.JspServlet.service(JspServlet.java:339)
javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
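The NoSuchMethodError above is thrown by the JVM, not by s3fs: the signature `IOUtils.copy(InputStream, OutputStream)` returning `int` could not be resolved at runtime, which typically means a stale commons-io jar (for example in Tomcat's shared lib/ directory) shadows the version the code was compiled against. This would also fit the size threshold: commons-fileupload keeps small uploads in memory and writes them directly, while larger uploads go to a temp file on local disk, and `FileItem.write()` then falls back to a stream copy via IOUtils.copy when the rename into the (different-filesystem) s3fs mount fails. A minimal diagnostic sketch for checking which jar actually provides IOUtils at runtime (the `FindJar`/`locate` names are illustrative, not from the original report):

```java
public class FindJar {
    // Report where a class is loaded from: its jar URL, the bootstrap
    // class path, or "not on classpath" if it cannot be resolved at all.
    public static String locate(String className) {
        try {
            Class<?> c = Class.forName(className);
            java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
            return src != null ? src.getLocation().toString() : "(bootstrap class path)";
        } catch (ClassNotFoundException e) {
            return "not on classpath";
        }
    }

    public static void main(String[] args) {
        // An old commons-io jar printed here would explain the NoSuchMethodError.
        System.out.println(locate("org.apache.commons.io.IOUtils"));
    }
}
```

If the printed location is an older commons-io jar than the one the webapp bundles, removing or upgrading the stale copy should make the error go away.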

Do you have any idea about why it's crashing?

Best Regards,
LeChuck

kerem closed this issue 2026-03-04 01:43:47 +03:00

@ggtakec commented on GitHub (Sep 19, 2016):

@Monkey-Island
s3fs probably failed to upload the file after creating it at zero size.
You can set the "retries" option and also try other options.

Please try it and let us know the s3fs version and the options you run s3fs with.

Regards,


@Monkey-Island commented on GitHub (Sep 19, 2016):

Hi,

I also increased the retries and the problem was exactly the same.

I'm using v1.80, and I'm mounting without any extra options, just the name of the bucket and the mount point.

Best Regards,
LeChuck


@ggtakec commented on GitHub (Sep 19, 2016):

@Monkey-Island Thanks for replying.

We need to see s3fs's log and find out what error occurred.
So please try running with the "dbglevel" option and check the log file written by s3fs.

Thanks in advance for your assistance.


@Monkey-Island commented on GitHub (Sep 19, 2016):

Hi,

I was running with these options: -d -f -o f2 -o curldbg. The logs above are with those parameters.

The logs below are with -o dbglevel="dbg":

unique: 11, opcode: LOOKUP (1), nodeid: 1, insize: 43, pid: 15234
LOOKUP /43
getattr /43
[INF] s3fs.cpp:s3fs_getattr(808): [path=/43]
[DBG] s3fs.cpp:check_parent_object_access(665): [path=/43]
[DBG] s3fs.cpp:check_object_access(559): [path=/]
[DBG] s3fs.cpp:get_object_attribute(405): [path=/]
[DBG] s3fs.cpp:check_object_access(559): [path=/43]
[DBG] s3fs.cpp:get_object_attribute(405): [path=/43]
[DBG] cache.cpp:GetStat(269): stat cache hit [path=/43/][time=265010.766164417][hit count=6]
[DBG] fdcache.cpp:ExistOpen(1927): [path=/43][fd=-1][ignore_existfd=false]
[DBG] fdcache.cpp:Open(1876): [path=/43][size=-1][time=-1]
[DBG] s3fs.cpp:s3fs_getattr(832): [path=/43] uid=0, gid=0, mode=40755
NODEID: 2
unique: 11, success, outsize: 144
unique: 12, opcode: LOOKUP (1), nodeid: 2, insize: 49, pid: 15234
LOOKUP /43/xxx
getattr /43/xxx
[INF] s3fs.cpp:s3fs_getattr(808): [path=/43/xxx]
[DBG] s3fs.cpp:check_parent_object_access(665): [path=/43/xxx]
[DBG] s3fs.cpp:check_object_access(559): [path=/43]
[DBG] s3fs.cpp:get_object_attribute(405): [path=/43]
[DBG] cache.cpp:GetStat(269): stat cache hit [path=/43/][time=265028.385964851][hit count=7]
[DBG] s3fs.cpp:check_object_access(559): [path=/]
[DBG] s3fs.cpp:get_object_attribute(405): [path=/]
[DBG] s3fs.cpp:check_object_access(559): [path=/43/xxx]
[DBG] s3fs.cpp:get_object_attribute(405): [path=/43/xxx]
[DBG] cache.cpp:GetStat(269): stat cache hit [path=/43/xxx/][time=265010.766164417][hit count=5]
[DBG] fdcache.cpp:ExistOpen(1927): [path=/43/xxx][fd=-1][ignore_existfd=false]
[DBG] fdcache.cpp:Open(1876): [path=/43/xxx][size=-1][time=-1]
[DBG] s3fs.cpp:s3fs_getattr(832): [path=/43/xxx] uid=0, gid=0, mode=40755
NODEID: 3
unique: 12, success, outsize: 144
unique: 13, opcode: LOOKUP (1), nodeid: 3, insize: 55, pid: 15234
LOOKUP /43/xxx/ban.jpg
getattr /43/xxx/ban.jpg
[INF] s3fs.cpp:s3fs_getattr(808): [path=/43/xxx/ban.jpg]
[DBG] s3fs.cpp:check_parent_object_access(665): [path=/43/xxx/ban.jpg]
[DBG] s3fs.cpp:check_object_access(559): [path=/43/xxx]
[DBG] s3fs.cpp:get_object_attribute(405): [path=/43/xxx]
[DBG] cache.cpp:GetStat(269): stat cache hit [path=/43/xxx/][time=265028.385964851][hit count=6]
[DBG] s3fs.cpp:check_object_access(559): [path=/43]
[DBG] s3fs.cpp:get_object_attribute(405): [path=/43]
[DBG] cache.cpp:GetStat(269): stat cache hit [path=/43/][time=265028.385964851][hit count=8]
[DBG] s3fs.cpp:check_object_access(559): [path=/]
[DBG] s3fs.cpp:get_object_attribute(405): [path=/]
[DBG] s3fs.cpp:check_object_access(559): [path=/43/xxx/ban.jpg]
[DBG] s3fs.cpp:get_object_attribute(405): [path=/43/xxx/ban.jpg]
[DBG] cache.cpp:GetStat(269): stat cache hit [path=/43/xxx/ban.jpg][time=265010.998161790][hit count=3]
[DBG] fdcache.cpp:ExistOpen(1927): [path=/43/xxx/ban.jpg][fd=-1][ignore_existfd=false]
[DBG] fdcache.cpp:Open(1876): [path=/43/xxx/ban.jpg][size=-1][time=-1]
[DBG] s3fs.cpp:s3fs_getattr(832): [path=/43/xxx/ban.jpg] uid=0, gid=0, mode=100644
NODEID: 4
unique: 13, success, outsize: 144
unique: 14, opcode: OPEN (14), nodeid: 4, insize: 48, pid: 15234
open flags: 0x8201 /43/xxx/ban.jpg
[INF] s3fs.cpp:s3fs_open(2019): [path=/43/xxx/ban.jpg][flags=33281]
[INF] cache.cpp:DelStat(549): delete stat cache entry[path=/43/xxx/ban.jpg]
[DBG] s3fs.cpp:check_parent_object_access(665): [path=/43/xxx/ban.jpg]
[DBG] s3fs.cpp:check_object_access(559): [path=/43/xxx]
[DBG] s3fs.cpp:get_object_attribute(405): [path=/43/xxx]
[DBG] cache.cpp:GetStat(269): stat cache hit [path=/43/xxx/][time=265028.385964851][hit count=7]
[DBG] s3fs.cpp:check_object_access(559): [path=/43]
[DBG] s3fs.cpp:get_object_attribute(405): [path=/43]
[DBG] cache.cpp:GetStat(269): stat cache hit [path=/43/][time=265028.385964851][hit count=9]
[DBG] s3fs.cpp:check_object_access(559): [path=/]
[DBG] s3fs.cpp:get_object_attribute(405): [path=/]
[DBG] s3fs.cpp:check_object_access(559): [path=/43/xxx/ban.jpg]
[DBG] s3fs.cpp:get_object_attribute(405): [path=/43/xxx/ban.jpg]
[INF] curl.cpp:HeadRequest(2486): [tpath=/43/xxx/ban.jpg]
[INF] curl.cpp:PreHeadRequest(2423): [tpath=/43/xxx/ban.jpg][bpath=][save=][sseckeypos=-1]
[DBG] curl.cpp:GetHandler(272): Get handler from pool: 31
[INF] curl.cpp:prepare_url(4175): URL is http://s3.amazonaws.com/xxxx/xxxxx/43/xxx/ban.jpg
[INF] curl.cpp:prepare_url(4207): URL changed is http://xxxx.s3.amazonaws.com/xxxxx/43/xxx/ban.jpg
[INF] curl.cpp:insertV4Headers(2237): computing signature [HEAD] [/xxxxx/43/xxx/ban.jpg] [] []
[INF] curl.cpp:url_to_host(100): url is http://s3.amazonaws.com
[DBG] curl.cpp:RequestPerform(1893): connecting to URL http://xxxx.s3.amazonaws.com/xxxxx/43/xxx/ban.jpg

  • Found bundle for host xxxx.s3.amazonaws.com: 0x7f409014f240
  • Re-using existing connection! (#0) with host xxxx.s3.amazonaws.com
  • Connected to xxxx.s3.amazonaws.com (54.231.40.67) port 80 (#0)

    HEAD /xxxxx/43/xxx/ban.jpg HTTP/1.1
    User-Agent: s3fs/1.80 (commit hash unknown; OpenSSL)
    Accept: */*
    Authorization: AWS4-HMAC-SHA256 Credential=xxxxx/20160919/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=xxxx
    host: xxxx.s3.amazonaws.com
    x-amz-content-sha256: xxxxxx
    x-amz-date: 20160919T103155Z

< HTTP/1.1 200 OK
< x-amz-id-2: xxxx
< x-amz-request-id: xxxxx
< Date: Mon, 19 Sep 2016 10:31:56 GMT
< Last-Modified: Mon, 19 Sep 2016 07:32:56 GMT
< ETag: "xxxxx"
< x-amz-meta-mode: 33188
< x-amz-meta-gid: 0
< x-amz-meta-uid: 0
< x-amz-meta-mtime: 1474270375
< Accept-Ranges: bytes
< Content-Type: image/jpeg
< Content-Length: 0
< Server: AmazonS3
<

  • Connection #0 to host xxxx.s3.amazonaws.com left intact
    [INF] curl.cpp:RequestPerform(1910): HTTP response code 200
    [DBG] curl.cpp:ReturnHandler(295): Return handler to pool: 31
    [INF] cache.cpp:AddStat(346): add stat cache entry[path=/43/xxx/ban.jpg]
    [DBG] cache.cpp:GetStat(269): stat cache hit [path=/43/xxx/ban.jpg][time=265028.389964806][hit count=0]
    [DBG] s3fs.cpp:get_object_attribute(405): [path=/43/xxx/ban.jpg]
    [DBG] cache.cpp:GetStat(269): stat cache hit [path=/43/xxx/ban.jpg][time=265028.389964806][hit count=1]
    [DBG] fdcache.cpp:Open(1876): [path=/43/xxx/ban.jpg][size=0][time=1474270375]
    [DBG] fdcache.cpp:Open(726): [path=/43/xxx/ban.jpg][fd=-1][size=0][time=1474270375]
    [INF] fdcache.cpp:SetMtime(936): [path=/43/xxx/ban.jpg][fd=5][time=1474270375]
    open[5] flags: 0x8201 /43/xxx/ban.jpg
    unique: 14, success, outsize: 32
    unique: 15, opcode: GETXATTR (22), nodeid: 4, insize: 68, pid: 15234
    getxattr /43/xxx/ban.jpg security.capability 0
    [INF] s3fs.cpp:s3fs_getxattr(3072): [path=/43/xxx/ban.jpg][name=security.capability][value=(nil)][size=0]
    [DBG] s3fs.cpp:check_parent_object_access(665): [path=/43/xxx/ban.jpg]
    [DBG] s3fs.cpp:check_object_access(559): [path=/43/xxx]
    [DBG] s3fs.cpp:get_object_attribute(405): [path=/43/xxx]
    [DBG] cache.cpp:GetStat(269): stat cache hit [path=/43/xxx/][time=265028.385964851][hit count=8]
    [DBG] s3fs.cpp:check_object_access(559): [path=/43]
    [DBG] s3fs.cpp:get_object_attribute(405): [path=/43]
    [DBG] cache.cpp:GetStat(269): stat cache hit [path=/43/][time=265028.385964851][hit count=10]
    [DBG] s3fs.cpp:check_object_access(559): [path=/]
    [DBG] s3fs.cpp:get_object_attribute(405): [path=/]
    [DBG] s3fs.cpp:get_object_attribute(405): [path=/43/xxx/ban.jpg]
    [DBG] cache.cpp:GetStat(269): stat cache hit [path=/43/xxx/ban.jpg][time=265028.389964806][hit count=2]
    unique: 15, error: -61 (No data available), outsize: 16
    unique: 16, opcode: GETATTR (3), nodeid: 4, insize: 56, pid: 15234
    getattr /43/xxx/ban.jpg
    [INF] s3fs.cpp:s3fs_getattr(808): [path=/43/xxx/ban.jpg]
    [DBG] s3fs.cpp:check_parent_object_access(665): [path=/43/xxx/ban.jpg]
    [DBG] s3fs.cpp:check_object_access(559): [path=/43/xxx]
    [DBG] s3fs.cpp:get_object_attribute(405): [path=/43/xxx]
    [DBG] cache.cpp:GetStat(269): stat cache hit [path=/43/xxx/][time=265028.393964761][hit count=9]
    [DBG] s3fs.cpp:check_object_access(559): [path=/43]
    [DBG] s3fs.cpp:get_object_attribute(405): [path=/43]
    [DBG] cache.cpp:GetStat(269): stat cache hit [path=/43/][time=265028.393964761][hit count=11]
    [DBG] s3fs.cpp:check_object_access(559): [path=/]
    [DBG] s3fs.cpp:get_object_attribute(405): [path=/]
    [DBG] s3fs.cpp:check_object_access(559): [path=/43/xxx/ban.jpg]
    [DBG] s3fs.cpp:get_object_attribute(405): [path=/43/xxx/ban.jpg]
    [DBG] cache.cpp:GetStat(269): stat cache hit [path=/43/xxx/ban.jpg][time=265028.393964761][hit count=3]
    [DBG] fdcache.cpp:ExistOpen(1927): [path=/43/xxx/ban.jpg][fd=-1][ignore_existfd=false]
    [DBG] fdcache.cpp:Open(1876): [path=/43/xxx/ban.jpg][size=-1][time=-1]
    [DBG] s3fs.cpp:s3fs_getattr(832): [path=/43/xxx/ban.jpg] uid=0, gid=0, mode=100644
    unique: 16, success, outsize: 120
    unique: 17, opcode: FLUSH (25), nodeid: 4, insize: 64, pid: 15234
    flush[5]
    [INF] s3fs.cpp:s3fs_flush(2141): [path=/43/xxx/ban.jpg][fd=5]
    [DBG] s3fs.cpp:check_parent_object_access(665): [path=/43/xxx/ban.jpg]
    [DBG] s3fs.cpp:check_object_access(559): [path=/43/xxx]
    [DBG] s3fs.cpp:get_object_attribute(405): [path=/43/xxx]
    [DBG] cache.cpp:GetStat(269): stat cache hit [path=/43/xxx/][time=265028.393964761][hit count=10]
    [DBG] s3fs.cpp:check_object_access(559): [path=/43]
    [DBG] s3fs.cpp:get_object_attribute(405): [path=/43]
    [DBG] cache.cpp:GetStat(269): stat cache hit [path=/43/][time=265028.393964761][hit count=12]
    [DBG] s3fs.cpp:check_object_access(559): [path=/]
    [DBG] s3fs.cpp:get_object_attribute(405): [path=/]
    [DBG] s3fs.cpp:check_object_access(559): [path=/43/xxx/ban.jpg]
    [DBG] s3fs.cpp:get_object_attribute(405): [path=/43/xxx/ban.jpg]
    [DBG] cache.cpp:GetStat(269): stat cache hit [path=/43/xxx/ban.jpg][time=265028.393964761][hit count=4]
    [DBG] fdcache.cpp:ExistOpen(1927): [path=/43/xxx/ban.jpg][fd=5][ignore_existfd=false]
    [DBG] fdcache.cpp:Open(1876): [path=/43/xxx/ban.jpg][size=-1][time=-1]
    [DBG] fdcache.cpp:Dup(711): [path=/43/xxx/ban.jpg][fd=5][refcnt=2]
    [INF] fdcache.cpp:RowFlush(1345): [tpath=][path=/43/xxx/ban.jpg][fd=5]
    [DBG] fdcache.cpp:Close(1968): [ent->file=/43/xxx/ban.jpg][ent->fd=5]
    [DBG] fdcache.cpp:Close(687): [path=/43/xxx/ban.jpg][fd=5][refcnt=1]
    unique: 17, success, outsize: 16
    unique: 18, opcode: RELEASE (18), nodeid: 4, insize: 64, pid: 0
    release[5] flags: 0x8001
    [INF] s3fs.cpp:s3fs_release(2194): [path=/43/xxx/ban.jpg][fd=5]
    [INF] cache.cpp:DelStat(549): delete stat cache entry[path=/43/xxx/ban.jpg]
    [INF] fdcache.cpp:GetFdEntity(1846): [path=/43/xxx/ban.jpg][fd=5]
    [DBG] fdcache.cpp:Close(1968): [ent->file=/43/xxx/ban.jpg][ent->fd=5]
    [DBG] fdcache.cpp:Close(687): [path=/43/xxx/ban.jpg][fd=5][refcnt=0]
    [INF] fdcache.cpp:GetFdEntity(1846): [path=/43/xxx/ban.jpg][fd=5]
    unique: 18, success, outsize: 16

@ggtakec commented on GitHub (Sep 19, 2016):

@Monkey-Island Thanks for the logs.
However, the log does not contain any error lines.
The flush of the file (/43/xxx/ban.jpg) succeeded. (But I did not find a write operation in this log.)
If this log ends where the Java program caught the exception, it seems s3fs ran without any error.

So we need to find out what is causing the "NoSuchMethodError...".
Regards,


@Monkey-Island commented on GitHub (Sep 19, 2016):

Thanks ggtakec,

I will try a different way to write files from Java, because I think this method is not compatible with s3fs.

Thanks for your support,

Best Regards,
LeChuck
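One such alternative, as an editor's hedged sketch (not code from this thread): instead of `FileItem.write`, copy the upload's input stream to the destination with `java.nio.file.Files.copy`, which writes the file in one sequential pass and closes everything explicitly. With commons-fileupload you would pass `fi.getInputStream()` where the demo stream below stands in; the class name `StreamWrite` and the temp-file destination are illustrative only.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class StreamWrite {
    // Copy an input stream to a destination file in one sequential pass,
    // replacing any existing file, and return the number of bytes written.
    // In the servlet scenario, `in` would be fi.getInputStream().
    static long writeTo(InputStream in, Path dest) throws Exception {
        try (InputStream src = in) {
            return Files.copy(src, dest, StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws Exception {
        // ~200 kB payload, above the ~100 kB size that failed in this report.
        byte[] payload = new byte[200 * 1024];
        Path dest = Files.createTempFile("ban", ".jpg");
        long written = writeTo(new ByteArrayInputStream(payload), dest);
        if (written != payload.length) {
            throw new AssertionError("short write: " + written);
        }
        System.out.println(written);
        Files.deleteIfExists(dest);
    }
}
```

Sequential writes like this are the access pattern s3fs handles best; `FileItem.write` may fall back to `File.renameTo` or other operations that behave differently on a FUSE mount.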


@gaul commented on GitHub (Sep 19, 2016):

@Monkey-Island `NoSuchMethodError` has nothing to do with s3fs; you likely have a dependency conflict in your application. Try running `mvn dependency:tree` to sort out which of your dependencies is incorrectly overridden.
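As a supplementary editor's sketch (not from the thread): besides the Maven dependency tree, you can ask the JVM which jar a conflicting class was actually loaded from via `Class.getProtectionDomain().getCodeSource()`. Pointing this at the class named in the `NoSuchMethodError` usually reveals the overriding dependency. The class name `WhichJar` is illustrative only.

```java
public class WhichJar {
    // Report where a class was loaded from. Classes loaded by the
    // bootstrap class loader (e.g. java.lang.String) have a null
    // code source, so we report that case explicitly.
    static String locationOf(Class<?> c) {
        java.security.CodeSource cs = c.getProtectionDomain().getCodeSource();
        return cs == null ? "bootstrap classpath" : cs.getLocation().toString();
    }

    public static void main(String[] args) {
        // For application classes this prints the jar path; for core
        // JDK classes like String it prints the bootstrap marker.
        System.out.println(locationOf(String.class));
    }
}
```

Running `locationOf` against the class whose method is "missing" shows which artifact on the classpath won the conflict.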


@Monkey-Island commented on GitHub (Sep 20, 2016):

Hi Andrewgaul, the source code runs fine on a standard Linux file system, but it crashes on an s3fs path. A very small file works fine, but a big file does not. In any case, I will try your command.

Best Regards,
LeChuck


@ggtakec commented on GitHub (Dec 4, 2016):

@Monkey-Island I'm sorry for my late reply.
Perhaps this problem with big files is a bug that was fixed in #511.
In #511, I fixed an upload problem with large files (using multipart upload).

If you can, please try the code of the latest master branch.
I'm going to close this issue, but if the problem continues, please reopen it or post a new issue.

Thanks in advance for your assistance.
