[GH-ISSUE #436] 2nd PUT when Writing Results in Conflict (HTTP 409) #233

Closed
opened 2026-03-04 01:43:29 +03:00 by kerem · 10 comments
Originally created by @dbbyleo on GitHub (Jun 16, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/436

Hi All,
I'm new to s3fs, but I have successfully set it up and have been able to mount a bucket from a Hitachi Content Platform (HCP) system. I can browse the file system (bucket), and I can read and delete files. But writes result in:

cp: failed to close ‘/mymountpoint/myfilename’: Input/output error

This occurs when I try to write any non-zero-length file, say a basic text file.

However, I have no problem copying a zero-length file. For example, if I touch a file locally and then copy it to the bucket, that works fine. Touching a file directly in the bucket also works fine. But if I try to copy a text file (with size anything greater than zero) the copy fails.

After sifting through the debug info, I found that s3fs issues "double PUTs."

When touching a file directly in the bucket (this succeeds):

  • The 1st PUT creates a 0-length file and returns OK (HTTP 200)
  • The 2nd PUT creates a 0-length file, but is accompanied by x-amz-metadata-directive: REPLACE. It also returns OK (HTTP 200)
  • Both PUTs have Content-Type: application/octet-stream.

When copying a non-zero text file to the bucket (this fails):

  • The 1st PUT creates a 0-length file and returns OK (HTTP 200)
  • The 2nd PUT creates a non-zero-length file (the size of the file), but this time there is NO accompanying x-amz-metadata-directive: REPLACE. This returns a CONFLICT error (HTTP 409)
  • Both PUTs have Content-Type: text/plain
  • This all seems to make sense. What doesn't make sense is why s3fs does not provide the replace directive. I assume that's why the 2nd PUT fails.

Touching a File Locally then Copying to the Bucket (this succeeds):

  • This only issues one PUT, not two as you would expect.

Help appreciated.

We're running this on a Debian 8 (Jessie) server with curl 7.38.0.

The s3fs mount command we use is:
s3fs mybucket /mymountpoint -o nocopyapi -o use_path_request_style -o nomultipart -o no_check_certificate -o sigv2 -d -d -f -o f2 -o curldbg -o url=https://mysite.mydomain.com -o passwd_file=/myS3credentials

kerem closed this issue 2026-03-04 01:43:30 +03:00

@sqlbot commented on GitHub (Jun 17, 2016):

The 2nd PUT creates a non-zero length file (the size of the file), but this time there's NO accompanying x-amz-metadata-directive: REPLACE. This returns CONFLICT error (HTTP 409)

What doesn't make sense is why s3fs does not provide the replace directive. I assume that's why the 2nd PUT fails.

This is interesting, but this wouldn't be why the PUT fails.

Using x-amz-metadata-directive: REPLACE is not valid when Content-Length is non-zero. It is not used when a payload is sent in the request -- it is only used with x-amz-copy-source, which is how the S3 API implements an internal copy request: copy the payload and replace the existing metadata with the metadata supplied in this request.
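The rule can be captured in a tiny validator (an illustration of the constraint described here, not a real S3 client API; the function name is made up):

```python
# Illustrative check: x-amz-metadata-directive only makes sense on a
# PUT/Copy request, i.e. one that carries x-amz-copy-source and sends
# no payload of its own (Content-Length: 0).
def metadata_directive_is_valid(headers, content_length):
    if "x-amz-metadata-directive" not in headers:
        return True  # no directive, nothing to validate
    has_copy_source = "x-amz-copy-source" in headers
    return has_copy_source and content_length == 0
```

Against the captured requests below, the failing 2nd PUT (Content-Length: 12, no copy source) is exactly the kind of request on which the directive would never be legitimate.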

Background: metadata, as well as the payload of an object, are immutable in S3. To "edit" the metadata, a PUT/Copy must be used, and new metadata supplied with the PUT request. This creates a "new" object, with the same key (path), and new metadata. If the source key and target key are the same in the request, it gives the illusion of editing the metadata but it's technically a new object with the same key. The same mechanism is used for renaming objects in S3 -- PUT/Copy to a new key, then delete the old one after the copy succeeds. The metadata can be replaced or preserved.
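As a sketch, the "edit metadata" idiom looks like this with a boto3-style client (the bucket/key/metadata values are placeholders):

```python
# Sketch: "edit" an object's metadata by copying it onto its own key with
# MetadataDirective=REPLACE. S3 creates a new object under the same key
# carrying the new metadata; the payload is copied server-side.
def replace_metadata(client, bucket, key, new_metadata):
    return client.copy_object(
        Bucket=bucket,
        Key=key,
        CopySource={"Bucket": bucket, "Key": key},  # source == target key
        Metadata=new_metadata,
        MetadataDirective="REPLACE",  # vs. COPY, which preserves metadata
    )
```

Renaming works the same way, except the target key differs from the source and the source is deleted once the copy succeeds.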

Now, what's interesting here is two things -- one, of course, is that you're using something other than S3 as the back-end (which I did not notice when I first read this issue), and it looks as if HCP may have an issue with operations that are too closely spaced in time on the same object.

Also, you've specified -o nocopyapi -- which should mean that you never see x-amz-metadata-directive in a request, because:

If you set this option, s3fs do not use PUT with "x-amz-copy-source" (copy api).

...which means x-amz-metadata-directive would never be used... yet, that's what s3fs seems to be doing anyway. Interesting.

Your "working" request seems to point to a bug in s3fs not behaving as documented, while your "broken" request seems to be a case where s3fs is behaving correctly, but exposing a problem with the "Hitachi Content Platform" not being able to accept these requests so shortly spaced in time.

Is this a condition you observe every time, or is it intermittent?

S3 itself has a few documented cases where it can return a 409 error, at least one of them (OperationAborted) potentially being retryable with a prospect for success... and if s3fs doesn't have a mechanism for retrying these errors before considering the condition to be fatal, it probably should have one.
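A minimal retry wrapper along those lines could look like this (a sketch, assuming only OperationAborted-style 409s are worth retrying; `do_put` stands in for whatever callable performs the actual PUT and returns a `(status, error_code)` pair):

```python
import time

# Assumption for this sketch: of S3's documented 409 codes, only
# OperationAborted is transient enough to be worth retrying.
RETRYABLE_409_CODES = {"OperationAborted"}

def put_with_retry(do_put, max_attempts=3, base_delay=0.5):
    """Re-issue a PUT when the store returns a retryable 409, with backoff."""
    for attempt in range(1, max_attempts + 1):
        status, code = do_put()
        retryable = status == 409 and code in RETRYABLE_409_CODES
        if not retryable or attempt == max_attempts:
            return status, code
        time.sleep(base_delay * attempt)  # linear backoff before retrying
```

Note that a WORM store answering 409 Conflict on every overwrite would not benefit: its 409 is deterministic, not transient, so the wrapper gives up immediately.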


@dbbyleo commented on GitHub (Jun 17, 2016):

After reading your reply, I went back and retested to check whether I had actually encountered an anomaly with the use of -o nocopyapi. I found that there was no issue; it behaved just like you said, and my previous post was slightly incorrect. I'm going to look into some things you said, but I wanted to post the details after fixing my error.

To answer your question: Yes, this is consistent and reproducible. But I'm curious what you mean by

"Hitachi Content Platform" not being able to accept these requests so shortly spaced in time.

So the debug info in the tests that fail is the expected behavior? It seems to me that a second PUT request for the same filename would result in a 409 error. Can you explain more why a PUT for the same filename shouldn't result in a "conflict" error?

During my retest I documented things more carefully; here are the details for reference. Based on what you've said so far, I think you'll find that the debug info is what you'd expect.

Mount Command:
With -o nocopyapi
s3fs apple /hcp -o nocopyapi -o use_path_request_style -o nomultipart -o no_check_certificate -o sigv2 -d -d -f -o f2 -o curldbg -o url=https://ahcp3.hcp-demo.hcpdemo.com -o passwd_file=/root/hs3.cred

Here's the results:
Copying a zero file SUCCEEDS.
Copying a non-zero text file FAILS.
Touching/Creating a zero file directly in the bucket FAILS.

Here's the debug info for each of the above:

Copying a zero file SUCCEEDS

# cd /root
# touch myfile
# cp myfile /hcp/

... 1st PUT (and only PUT)

> PUT /apple/myfile HTTP/1.1
User-Agent: s3fs/1.80 (commit hash 9fb3fd1; OpenSSL)
Host: ahcp3.hcp-demo.hcpdemo.com
Accept: */*
Authorization: AWS xxxx:xxxxxxxxxxxxxxxxxx
Content-Type: application/octet-stream
Date: Fri, 17 Jun 2016 15:58:59 GMT
x-amz-acl: private
x-amz-meta-gid: 0
x-amz-meta-mode: 33188
x-amz-meta-mtime: 1466179139
x-amz-meta-uid: 0
Content-Length: 0

< HTTP/1.1 200 OK
< Date: Fri, 17 Jun 2016 16:00:13 GMT
* Server HCP V7.2.1.40 is not blacklisted
< Server: HCP V7.2.1.40
< ETag: "d41d8cd98f00b204e9800998ecf8427e"
< Content-Length: 0
<

Copying a non-zero text file FAILS.

# echo "hello world" > hello.txt
# cp hello.txt /hcp/

... 1st PUT

> PUT /apple/hello.txt HTTP/1.1
User-Agent: s3fs/1.80 (commit hash 9fb3fd1; OpenSSL)
Host: ahcp3.hcp-demo.hcpdemo.com
Accept: */*
Authorization: AWS xxxx:xxxxxxxxxxxxxxxxxx
Content-Type: text/plain
Date: Fri, 17 Jun 2016 16:11:34 GMT
x-amz-acl: private
x-amz-meta-gid: 0
x-amz-meta-mode: 33188
x-amz-meta-mtime: 1466179894
x-amz-meta-uid: 0
Content-Length: 0

< HTTP/1.1 200 OK
< Date: Fri, 17 Jun 2016 16:12:49 GMT
* Server HCP V7.2.1.40 is not blacklisted
< Server: HCP V7.2.1.40
< ETag: "d41d8cd98f00b204e9800998ecf8427e"
< Content-Length: 0
<

... 2nd PUT

> PUT /apple/hello.txt HTTP/1.1
User-Agent: s3fs/1.80 (commit hash 9fb3fd1; OpenSSL)
Host: ahcp3.hcp-demo.hcpdemo.com
Accept: */*
Authorization: AWS xxxx:xxxxxxxxxxxxxxxxxx
Content-Type: text/plain
Date: Fri, 17 Jun 2016 16:11:34 GMT
x-amz-acl: private
x-amz-meta-gid: 0
x-amz-meta-mode: 33188
x-amz-meta-mtime: 1466179894
x-amz-meta-uid: 0
Content-Length: 12
Expect: 100-continue

< HTTP/1.1 409 Conflict
< Date: Fri, 17 Jun 2016 16:12:49 GMT
* Server HCP V7.2.1.40 is not blacklisted
< Server: HCP V7.2.1.40
< Content-Type: application/xml;charset=UTF-8
< Connection: close
<

Touching/Creating a zero file directly in the bucket FAILS.
# touch /hcp/touchonhcp

... 1st PUT

> PUT /apple/touchonhcp HTTP/1.1
User-Agent: s3fs/1.80 (commit hash 9fb3fd1; OpenSSL)
Host: ahcp3.hcp-demo.hcpdemo.com
Accept: */*
Authorization: AWS xxxx:xxxxxxxxxxxxxxxxxx
Content-Type: application/octet-stream
Date: Fri, 17 Jun 2016 15:50:08 GMT
x-amz-acl: private
x-amz-meta-gid: 0
x-amz-meta-mode: 33188
x-amz-meta-mtime: 1466178608
x-amz-meta-uid: 0
Content-Length: 0

< HTTP/1.1 200 OK
< Date: Fri, 17 Jun 2016 15:51:23 GMT
* Server HCP V7.2.1.40 is not blacklisted
< Server: HCP V7.2.1.40
< ETag: "d41d8cd98f00b204e9800998ecf8427e"
< Content-Length: 0
<

... 2nd PUT

> PUT /apple/touchonhcp HTTP/1.1
User-Agent: s3fs/1.80 (commit hash 9fb3fd1; OpenSSL)
Host: ahcp3.hcp-demo.hcpdemo.com
Accept: */*
Authorization: AWS xxxx:xxxxxxxxxxxxxxxxxx
Content-Type: application/octet-stream
Date: Fri, 17 Jun 2016 15:50:08 GMT
x-amz-acl: private
x-amz-meta-gid: 0
x-amz-meta-mode: 33188
x-amz-meta-mtime: 1466178608
x-amz-meta-uid: 0
Content-Length: 0

< HTTP/1.1 409 Conflict
< Date: Fri, 17 Jun 2016 15:51:23 GMT
* Server HCP V7.2.1.40 is not blacklisted
< Server: HCP V7.2.1.40
< Content-Type: application/xml;charset=UTF-8
< Transfer-Encoding: chunked
* HTTP error before end of send, stop sending
<

Then I tested without the -o nocopyapi ...

Mount Command:
Without -o nocopyapi
s3fs apple /hcp -o use_path_request_style -o nomultipart -o no_check_certificate -o sigv2 -d -d -f -o f2 -o curldbg -o url=https://ahcp3.hcp-demo.hcpdemo.com -o passwd_file=/root/hs3.cred

Here's the results:
Copying a zero file SUCCEEDS.
Copying a non-zero text file FAILS.
Touching/Creating a zero file directly in the bucket SUCCEEDS.

Here's the debug info for each of the above:

Copying a zero file SUCCEEDS.

# cd /root
# touch myfile
# cp myfile /hcp/

... 1st PUT (and only PUT)

> PUT /apple/myfile HTTP/1.1
User-Agent: s3fs/1.80 (commit hash 9fb3fd1; OpenSSL)
Host: ahcp3.hcp-demo.hcpdemo.com
Accept: */*
Authorization: AWS xxxx:xxxxxxxxxxxxxxxxxx
Content-Type: application/octet-stream
Date: Fri, 17 Jun 2016 16:21:20 GMT
x-amz-acl: private
x-amz-meta-gid: 0
x-amz-meta-mode: 33188
x-amz-meta-mtime: 1466180480
x-amz-meta-uid: 0
Content-Length: 0

< HTTP/1.1 200 OK
< Date: Fri, 17 Jun 2016 16:22:35 GMT
* Server HCP V7.2.1.40 is not blacklisted
< Server: HCP V7.2.1.40
< ETag: "d41d8cd98f00b204e9800998ecf8427e"
< Content-Length: 0
<

Copying a non-zero text file FAILS.

# echo "hello world" > hello.txt
# cp hello.txt /hcp/

... 1st PUT

> PUT /apple/hello.txt HTTP/1.1
User-Agent: s3fs/1.80 (commit hash 9fb3fd1; OpenSSL)
Host: ahcp3.hcp-demo.hcpdemo.com
Accept: */*
Authorization: AWS xxxx:xxxxxxxxxxxxxxxxxx
Content-Type: text/plain
Date: Fri, 17 Jun 2016 16:23:10 GMT
x-amz-acl: private
x-amz-meta-gid: 0
x-amz-meta-mode: 33188
x-amz-meta-mtime: 1466180590
x-amz-meta-uid: 0
Content-Length: 0

< HTTP/1.1 200 OK
< Date: Fri, 17 Jun 2016 16:24:25 GMT
* Server HCP V7.2.1.40 is not blacklisted
< Server: HCP V7.2.1.40
< ETag: "d41d8cd98f00b204e9800998ecf8427e"
< Content-Length: 0
<

... 2nd PUT

> PUT /apple/hello.txt HTTP/1.1
User-Agent: s3fs/1.80 (commit hash 9fb3fd1; OpenSSL)
Host: ahcp3.hcp-demo.hcpdemo.com
Accept: */*
Authorization: AWS xxxx:xxxxxxxxxxxxxxxxxx
Content-Type: text/plain
Date: Fri, 17 Jun 2016 16:23:10 GMT
x-amz-acl: private
x-amz-meta-gid: 0
x-amz-meta-mode: 33188
x-amz-meta-mtime: 1466180590
x-amz-meta-uid: 0
Content-Length: 12
Expect: 100-continue

< HTTP/1.1 409 Conflict
< Date: Fri, 17 Jun 2016 16:24:25 GMT
* Server HCP V7.2.1.40 is not blacklisted
< Server: HCP V7.2.1.40
< Content-Type: application/xml;charset=UTF-8
< Connection: close
<

Touching/Creating a zero file directly in the bucket SUCCEEDS.
# touch /hcp/touchonhcp
... 1st PUT

> PUT /apple/touchonhcp HTTP/1.1
User-Agent: s3fs/1.80 (commit hash 9fb3fd1; OpenSSL)
Host: ahcp3.hcp-demo.hcpdemo.com
Accept: */*
Authorization: AWS xxxx:xxxxxxxxxxxxxxxxxx
Content-Type: application/octet-stream
Date: Fri, 17 Jun 2016 16:17:04 GMT
x-amz-acl: private
x-amz-meta-gid: 0
x-amz-meta-mode: 33188
x-amz-meta-mtime: 1466180224
x-amz-meta-uid: 0
Content-Length: 0

< HTTP/1.1 200 OK
< Date: Fri, 17 Jun 2016 16:18:19 GMT
* Server HCP V7.2.1.40 is not blacklisted
< Server: HCP V7.2.1.40
< ETag: "d41d8cd98f00b204e9800998ecf8427e"
< Content-Length: 0
<

... 2nd PUT

> PUT /apple/touchonhcp HTTP/1.1
User-Agent: s3fs/1.80 (commit hash 9fb3fd1; OpenSSL)
Host: ahcp3.hcp-demo.hcpdemo.com
Accept: */*
Authorization: AWS xxxx:xxxxxxxxxxxxxxxxxx
Content-Type: application/octet-stream
Date: Fri, 17 Jun 2016 16:17:04 GMT
x-amz-acl: private
x-amz-copy-source: /apple/touchonhcp
x-amz-meta-gid: 0
x-amz-meta-mode: 33188
x-amz-meta-mtime: 1466180224
x-amz-meta-uid: 0
x-amz-metadata-directive: REPLACE
Content-Length: 0

< HTTP/1.1 200 OK
< Date: Fri, 17 Jun 2016 16:18:19 GMT
* Server HCP V7.2.1.40 is not blacklisted
< Server: HCP V7.2.1.40
< Content-Type: application/xml;charset=UTF-8
< Transfer-Encoding: chunked
<

@ggtakec commented on GitHub (Jul 18, 2016):

@dbbyleo I'm sorry for my late reply.

s3fs's upload logic does not depend on the file size.
It is performed in the following sequence:

  • HEAD request to get the file's stats
  • upload a zero-size object if it does not exist
  • HEAD request again to re-check the file's stats
  • upload (replace) the new file

From your results, HCP returned a 409 error on the second PUT request.
But I do not know the reason.

If you can, please check whether the second HEAD request (after the first PUT and before the second PUT) succeeds.
We also need to learn from HCP the reason for the 409 error.

Thanks in advance for your assistance.
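The sequence above can be sketched roughly as follows (a Python illustration against a dict-like store, not s3fs's actual code; a WORM backend such as HCP would reject the final replacing PUT):

```python
def s3fs_style_upload(store, key, data):
    # 1. HEAD: check the object's stats / existence
    exists = key in store
    # 2. PUT a zero-size object if it does not exist yet
    if not exists:
        store[key] = b""
    # 3. re-HEAD before the real upload
    assert key in store
    # 4. PUT (replace) the new content -- on HCP this second PUT
    #    to an existing key is the request that draws the 409
    store[key] = data
```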


@dbbyleo commented on GitHub (Jul 19, 2016):

@ggtakec Thanks for the reply. This is still an ongoing issue, and I appreciate your help.

I rebuilt a new server and installed s3fs 1.80 on Debian Jessie, but I am still getting the same results.

To answer your question: The second HEAD succeeds.

The HTTP 409 error is "Conflict." I assume this means the file already exists.

I've been able to use s3curl successfully to test REST requests against the HCP, and I can write files to the HCP via s3curl. But I also found that I can't write a file to HCP if the file already exists. I'm not an expert on HCP, but it seems similar to the s3fs issue, where the 2nd PUT fails seemingly because the file already exists.
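That s3curl experiment can be captured as a small probe (illustrative only; `put` stands in for any callable that performs the PUT for a key and returns the HTTP status code):

```python
def allows_overwrite(put, key):
    """Return True if a second PUT to an existing key succeeds."""
    assert put(key, b"") == 200             # first PUT: create the object
    return put(key, b"second body") == 200  # a WORM store answers 409 here
```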


@dbbyleo commented on GitHub (Jul 20, 2016):

I just realized that HCP (Hitachi Content Platform) is a fixed-content system and therefore a WORM device. It seems like this is why I'm having issues with s3fs - double PUTs are simply not allowed on these types of systems, right? Sorry - I'm new to these storage systems and s3fs altogether...


@gaul commented on GitHub (Jul 21, 2016):

@ggtakec Do we need to create the initial zero-size object? Eliding this will give a better experience with regard to eventual consistency.


@greg-at-symcor-dot-com commented on GitHub (Apr 27, 2017):

This still seems to be an issue when using s3fs with HCP. Are there any plans to update s3fs to support WORM-compatible writes (as a mode, perhaps)?

```
* Found bundle for host confluence.hcp1.symrad.com: 0x7f6738052ed0
* Re-using existing connection! (#4) with host confluence.hcp1.symrad.com
* Connected to confluence.hcp1.symrad.com (192.168.92.11) port 443 (#4)
> PUT /s3/1 HTTP/1.1
User-Agent: s3fs/1.80 (commit hash unknown; OpenSSL)
Host: confluence.hcp1.symrad.com
Accept: */*
Authorization: AWS YXR0YWNo:e9+atRCaqerlvt3F2RKkxC3tuEk=
Content-Type: application/octet-stream
Date: Thu, 27 Apr 2017 14:53:28 GMT
x-amz-acl: private
x-amz-meta-gid: 0
x-amz-meta-mode: 33188
x-amz-meta-mtime: 1493304808
x-amz-meta-uid: 0
Content-Length: 0
Expect: 100-continue

< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
< Date: Thu, 27 Apr 2017 14:53:28 GMT
< Server: HCP V7.2.3.18
< ETag: "d41d8cd98f00b204e9800998ecf8427e"
< Content-Length: 0
< 
* Connection #4 to host confluence.hcp1.symrad.com left intact
[INF]       curl.cpp:RequestPerform(1910): HTTP response code 200
[INF]       cache.cpp:DelStat(549): delete stat cache entry[path=/1]
[INF]       curl.cpp:HeadRequest(2486): [tpath=/1]
[INF]       curl.cpp:PreHeadRequest(2423): [tpath=/1][bpath=][save=][sseckeypos=-1]
[INF]       curl.cpp:prepare_url(4175): URL is https://confluence.hcp1.symrad.com/s3/1
[INF]       curl.cpp:prepare_url(4207): URL changed is https://confluence.hcp1.symrad.com/s3/1
* Found bundle for host confluence.hcp1.symrad.com: 0x7f6738052ed0
* Re-using existing connection! (#4) with host confluence.hcp1.symrad.com
* Connected to confluence.hcp1.symrad.com (192.168.92.11) port 443 (#4)
> HEAD /s3/1 HTTP/1.1
User-Agent: s3fs/1.80 (commit hash unknown; OpenSSL)
Host: confluence.hcp1.symrad.com
Accept: */*
Authorization: AWS YXR0YWNo:x6mxRTf89gvGQSPyz7WDJRMj5zo=
Date: Thu, 27 Apr 2017 14:53:28 GMT

< HTTP/1.1 200 OK
< Date: Thu, 27 Apr 2017 14:53:28 GMT
< Server: HCP V7.2.3.18
< ETag: "d41d8cd98f00b204e9800998ecf8427e"
< Accept-Ranges: bytes
< Last-Modified: Thu, 27 Apr 2017 14:53:28 GMT
< x-amz-meta-gid: 0
< x-amz-meta-mode: 33188
< x-amz-meta-mtime: 1493304808
< x-amz-meta-uid: 0
< Content-Type: application/octet-stream
< Content-Length: 0
< 
* Connection #4 to host confluence.hcp1.symrad.com left intact
[INF]       curl.cpp:RequestPerform(1910): HTTP response code 200
[INF]       cache.cpp:AddStat(346): add stat cache entry[path=/1]
[INF] s3fs.cpp:s3fs_getattr(808): [path=/1]
[INF] s3fs.cpp:s3fs_getxattr(3072): [path=/1][name=security.capability][value=(nil)][size=0]
[INF] s3fs.cpp:s3fs_getxattr(3072): [path=/1][name=security.capability][value=(nil)][size=0]
[INF] s3fs.cpp:s3fs_getxattr(3072): [path=/1][name=security.capability][value=(nil)][size=0]
[INF] s3fs.cpp:s3fs_getxattr(3072): [path=/1][name=security.capability][value=(nil)][size=0]
[INF] s3fs.cpp:s3fs_getxattr(3072): [path=/1][name=security.capability][value=(nil)][size=0]
[INF] s3fs.cpp:s3fs_getxattr(3072): [path=/1][name=security.capability][value=(nil)][size=0]
[INF] s3fs.cpp:s3fs_getxattr(3072): [path=/1][name=security.capability][value=(nil)][size=0]
[INF] s3fs.cpp:s3fs_getxattr(3072): [path=/1][name=security.capability][value=(nil)][size=0]
[INF] s3fs.cpp:s3fs_getxattr(3072): [path=/1][name=security.capability][value=(nil)][size=0]
[INF] s3fs.cpp:s3fs_getxattr(3072): [path=/1][name=security.capability][value=(nil)][size=0]
[INF] s3fs.cpp:s3fs_getxattr(3072): [path=/1][name=security.capability][value=(nil)][size=0]
[INF] s3fs.cpp:s3fs_getxattr(3072): [path=/1][name=security.capability][value=(nil)][size=0]
[INF] s3fs.cpp:s3fs_getxattr(3072): [path=/1][name=security.capability][value=(nil)][size=0]
[INF] s3fs.cpp:s3fs_getxattr(3072): [path=/1][name=security.capability][value=(nil)][size=0]
[INF] s3fs.cpp:s3fs_getxattr(3072): [path=/1][name=security.capability][value=(nil)][size=0]
[INF] s3fs.cpp:s3fs_getxattr(3072): [path=/1][name=security.capability][value=(nil)][size=0]
[INF] s3fs.cpp:s3fs_flush(2141): [path=/1][fd=8]
[INF]       fdcache.cpp:RowFlush(1345): [tpath=][path=/1][fd=8]
[INF]       curl.cpp:PutRequest(2641): [tpath=/1]
[INF]       curl.cpp:prepare_url(4175): URL is https://confluence.hcp1.symrad.com/s3/1
[INF]       curl.cpp:prepare_url(4207): URL changed is https://confluence.hcp1.symrad.com/s3/1
[INF]       curl.cpp:PutRequest(2750): uploading... [path=/1][fd=8][size=1048576]
* Found bundle for host confluence.hcp1.symrad.com: 0x7f6738052ed0
* Re-using existing connection! (#4) with host confluence.hcp1.symrad.com
* Connected to confluence.hcp1.symrad.com (192.168.92.11) port 443 (#4)
> PUT /s3/1 HTTP/1.1
User-Agent: s3fs/1.80 (commit hash unknown; OpenSSL)
Host: confluence.hcp1.symrad.com
Accept: */*
Authorization: AWS YXR0YWNo:e9+atRCaqerlvt3F2RKkxC3tuEk=
Content-Type: application/octet-stream
Date: Thu, 27 Apr 2017 14:53:28 GMT
x-amz-acl: private
x-amz-meta-gid: 0
x-amz-meta-mode: 33188
x-amz-meta-mtime: 1493304808
x-amz-meta-uid: 0
Content-Length: 1048576
Expect: 100-continue

< HTTP/1.1 409 Conflict
< Date: Thu, 27 Apr 2017 14:53:28 GMT
< Server: HCP V7.2.3.18
< Content-Type: application/xml;charset=UTF-8
< Connection: close
< 
* Closing connection 4
[INF]       curl.cpp:RequestPerform(1937): HTTP response code = 409, returning EIO
[INF] s3fs.cpp:s3fs_release(2194): [path=/1][fd=8]
[INF]       cache.cpp:DelStat(549): delete stat cache entry[path=/1]
[INF]       fdcache.cpp:GetFdEntity(1846): [path=/1][fd=8]
```


@ggtakec commented on GitHub (May 5, 2017):

@dbbyleo @greg-at-symcor-dot-com I'm sorry for my late reply.
I read the HCP documentation and found the following lines under "Storing an object":

```
If versioning is enabled and you try to store an object with the same name as an existing object, HCP creates a new version of the object.
If versioning is disabled and you try to store an object with the same name as an existing object, HCP returns a 409 (Conflict) status code and does not store the object.
```

In other words, HCP's specification does not allow overwriting (PUT) existing objects as long as versioning is disabled.
If this is correct, s3fs will hit this error not only when creating a new file object but also when updating an existing one.
Could you enable versioning?
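The semantics quoted above can be modeled directly. A minimal sketch in plain Python (`HcpStore` is an illustrative toy model, not an HCP client) showing why s3fs's two-PUT write path fails with versioning disabled but succeeds with it enabled:

```python
class Conflict(Exception):
    """Models HCP's 409 Conflict response."""


class HcpStore:
    """Toy model of the documented HCP PUT semantics."""
    def __init__(self, versioning_enabled):
        self.versioning_enabled = versioning_enabled
        self.versions = {}  # key -> list of bodies (newest last)

    def put(self, key, body):
        if key in self.versions and not self.versioning_enabled:
            # Versioning disabled: a PUT to an existing key is
            # rejected with 409 Conflict.
            raise Conflict(f"409 Conflict: {key} already exists")
        # Versioning enabled (or new key): store a new version.
        self.versions.setdefault(key, []).append(body)


# s3fs's write path: a zero-byte PUT at create time, then a
# second PUT with the real content at flush time.
worm = HcpStore(versioning_enabled=False)
worm.put("/myfile", b"")
try:
    worm.put("/myfile", b"file contents")
except Conflict as e:
    print(e)  # the second PUT fails; s3fs maps this to EIO

versioned = HcpStore(versioning_enabled=True)
versioned.put("/myfile", b"")
versioned.put("/myfile", b"file contents")
print(len(versioned.versions["/myfile"]))  # 2 versions, no error
```

This also matches the observation in the original report: a zero-length touch only ever overwrites with identical content, while any non-zero copy requires the second PUT to replace the placeholder object.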


@greg-at-symcor-dot-com commented on GitHub (May 7, 2017):

Enabling versioning was the trick.

Once I enabled it, I was able to copy files into the s3fs mount.

Thanks!


@ggtakec commented on GitHub (May 9, 2017):

@greg-at-symcor-dot-com
I am glad to hear that it worked nicely.
I'm closing this issue.
Regards,
