[GH-ISSUE #876] Writing/reading files by 2 programs #510

Closed
opened 2026-03-04 01:46:13 +03:00 by kerem · 3 comments
Owner

Originally created by @nikt12 on GitHub (Dec 18, 2018).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/876

Version of s3fs being used (s3fs --version)

V1.84

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

2.9.8

opensuse

### Details about issue
Hello, folks.
I have a question, and it may be quite general, because I think I don't understand everything.
I mounted a bucket, opened a file there, and wrote to it a first time (fopen() and fprintf() were invoked; after that the file was not closed, and the program waited for input).
Then I invoked 'cat <fullfilename path>' from a second terminal, and it ended without any output.
Then I wrote a second time from the first program (note: it was not closed after the first write) and closed it; after all these actions I have only the information written the second time.

I understand that if two programs write to one file, only the later attempt will succeed.
But why does the first written information disappear if we try to read the file with 'cat' while the other program is still holding its file descriptor?
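A minimal C sketch of the scenario described above (the path and the pause are illustrative; substitute a file inside your s3fs mount point). On an s3fs mount, nothing is uploaded to S3 until the stream is closed:

```c
#include <stdio.h>

/* Sketch of the reported scenario: two writes to the same open
 * stream, with a window in between where another process reads
 * the file. On s3fs, the object is only uploaded on fclose(). */
int run_demo(const char *path) {
    FILE *fp = fopen(path, "w");
    if (fp == NULL) { perror("fopen"); return -1; }

    fprintf(fp, "first write\n");
    fflush(fp);  /* reaches s3fs's local cache, but not S3 itself */

    /* At this point, `cat <path>` from a second terminal on the
     * same s3fs mount prints nothing: the S3 object is still empty. */

    fprintf(fp, "second write\n");
    return fclose(fp);  /* close flushes; only now is the file uploaded */
}
```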
kerem closed this issue 2026-03-04 01:46:13 +03:00
Author
Owner

@nikt12 commented on GitHub (Dec 21, 2018):

@ggtakec What do you think about this issue?

Author
Owner

@gaul commented on GitHub (Jan 4, 2019):

s3fs does not flush a file until it is closed. Generally there is high latency from the s3fs client to the S3 server, so this batching improves performance. You could try adding an `fsync`/`fdatasync` call, which should flush to S3. However, multi-client workloads are not well-supported; NFS is a better choice when you need fine-grained coordination.
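A hedged sketch of the suggestion above (function name and behavior notes are illustrative): `fflush()` moves buffered data from the C library into the kernel/FUSE layer, and `fsync()` asks s3fs to flush, which should trigger an upload to S3 without closing the file. Whether readers on other clients then see the data depends on the s3fs version and its caching settings:

```c
#include <stdio.h>
#include <unistd.h>

/* Write a message and force it out while keeping the stream open.
 * fflush(): libc buffer -> kernel/FUSE; fsync(): asks s3fs to
 * flush, which should upload the object to S3 (per the comment
 * above; not guaranteed on every s3fs version). */
int write_and_sync(FILE *fp, const char *msg) {
    if (fprintf(fp, "%s", msg) < 0) return -1;
    if (fflush(fp) != 0) return -1;
    if (fsync(fileno(fp)) != 0) return -1;
    return 0;  /* fp stays open; the descriptor is still held */
}
```

Calling this after each logical record would let other readers see the data without waiting for `fclose()`, at the cost of one upload per sync.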

Author
Owner

@ggtakec commented on GitHub (Mar 29, 2019):

I'm sorry for my late reply.

s3fs uploads a file to S3 when the file is flushed (closed).
That is, when the second program opens the file, it has not yet been updated, so the old contents are displayed.
Once the first program closes the file, its contents are uploaded and the object is updated.
In s3fs, "only the later attempt will succeed" means that the last process to close (flush) the file wins.
Also, as @gaul stated, I think there are cases that s3fs does not handle well.

I will close this, but if the problem persists, please reopen or post a new issue.
Thanks in advance for your assistance.
