[GH-ISSUE #1257] Flushing #674

Closed
opened 2026-03-04 01:47:47 +03:00 by kerem · 2 comments
Owner

Originally created by @thierrygayet on GitHub (Mar 27, 2020).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1257

Version of s3fs being used (s3fs --version)

s3fs --version
Amazon Simple Storage Service File System V1.86 (commit:fe2b269) with OpenSSL
Copyright (C) 2010 Randy Rizun rrizun@gmail.com
License GPL2: GNU GPL version 2 https://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Kernel information (uname -r)

uname -r
4.4.0-1092-aws

GNU/Linux Distribution, if applicable (cat /etc/os-release)

cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.6 LTS"

cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.6 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.6 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial

s3fs command line used, if applicable

sudo s3fs dazzl-test-records /data/records -o passwd_file=/etc/passwd-s3fs,sync,endpoint=eu-west-1,allow_other,umask=0007,uid=1001,gid=1001

mount |grep s3fs
s3fs on /data/records type fuse.s3fs (rw,nosuid,nodev,relatime,sync,user_id=0,group_id=0,allow_other)

/etc/fstab entry, if applicable

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

Details about the issue

Hello,

I am using s3fs with the command line above.

My question, which I have not found addressed anywhere else, concerns flushing (uploading data as it is written) while recording a large video file that can sometimes reach several GB.

Since I do not have enough local disk space to hold the whole recording, I need the recording to be uploaded little by little as it is written.

What I observe is that the upload is triggered when the file is closed (`fclose`), not when data is appended to it with `fwrite`.

Is there a way to switch to the mode I need?

Thanks in advance.

Thierry GAYET
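A minimal sketch of the behavior described above (the path under the s3fs mount is hypothetical; a temporary directory is used here so the sketch runs anywhere). `flush()` only drains the userspace buffer and `os.fsync()` pushes the data to the kernel/FUSE layer, but s3fs still spools the file locally and defers the S3 upload until the file is closed:

```python
import os
import tempfile

# In real use, `path` would live under the s3fs mount, e.g. /data/records/.
path = os.path.join(tempfile.mkdtemp(), "recording.bin")

with open(path, "ab") as f:
    f.write(b"\x00" * 4096)   # appended video data (placeholder)
    f.flush()                 # drain Python's userspace buffer
    os.fsync(f.fileno())      # ask the kernel / FUSE layer to persist
    # At this point the data is on local disk in s3fs's spool file,
    # but (per this issue) it has not yet been uploaded to S3.

print(os.path.getsize(path))  # -> 4096
# Only when the file is closed does s3fs upload the spooled file to S3.
```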

kerem 2026-03-04 01:47:47 +03:00

@gaul commented on GitHub (Mar 27, 2020):

s3fs does not currently support this streaming mode since it supports random writes. It spools a temporary file to disk then uploads to S3 on flush or close. However @ggtakec implemented server-side copy logic that may enable this feature in the future. If s3fs also had a threshold to flush, e.g., 100 MB, then it would have more of a streaming behavior.

Please leave this issue open for tracking purposes. If you need an immediate workaround, please check out [goofys](https://github.com/kahing/goofys), which avoids the temporary file altogether but does not support random writes.
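Until something like the flush threshold mentioned above exists in s3fs itself, the idea can be approximated on the application side: rotate to a new file every N bytes, so that on an s3fs mount each chunk is uploaded when its file is closed and the local spool never grows to several GB. A hedged sketch (the class, file naming, and 100 MB limit are all illustrative, not part of s3fs):

```python
import os
import tempfile

CHUNK_LIMIT = 100 * 1024 * 1024  # illustrative 100 MB threshold

class ChunkedWriter:
    """Rotate output files every `limit` bytes. On an s3fs mount, each
    close() triggers the upload of that chunk, mimicking a flush threshold."""

    def __init__(self, directory, limit=CHUNK_LIMIT):
        self.dir, self.limit = directory, limit
        self.index, self.written, self.f = 0, 0, None

    def _rotate(self):
        if self.f:
            self.f.close()  # close() is what triggers the s3fs upload
        self.f = open(os.path.join(self.dir, f"part-{self.index:05d}"), "wb")
        self.index += 1
        self.written = 0

    def write(self, data):
        if self.f is None or self.written + len(data) > self.limit:
            self._rotate()
        self.f.write(data)
        self.written += len(data)

    def close(self):
        if self.f:
            self.f.close()

# Demo with a tiny limit so it runs anywhere (the directory would be the
# s3fs mount point in real use):
d = tempfile.mkdtemp()
w = ChunkedWriter(d, limit=10)
for _ in range(5):
    w.write(b"x" * 6)  # 30 bytes against a 10-byte limit -> 5 parts
w.close()
print(len(os.listdir(d)))  # -> 5
```

The trade-off is that the recording ends up as multiple S3 objects that must be concatenated (or read in sequence) by consumers.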


@thierrygayet commented on GitHub (Mar 27, 2020):

Ah, interesting. Thank you very much for your answer, which I understand.
Beyond real-time streaming of my file to S3, how can I configure the size of the temporary file, i.e. the size that triggers the upload of my data to S3? Is it done by sizing the cache?
Thank you in advance for your response.
Thierry GAYET
