[GH-ISSUE #1863] OVH bucket using s3fs-fuse; a new bucket named bucket+segments was created. Is it normal? #949

Open
opened 2026-03-04 01:50:09 +03:00 by kerem · 6 comments
Owner

Originally created by @dnwk on GitHub (Jan 17, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1863

Additional Information

The following information is very important in helping us help you. Omitting these details may delay your support request or cause it to receive no attention at all.
Keep in mind that the commands we suggest for retrieving this information are oriented toward GNU/Linux distributions, so you may need different commands if you use s3fs on macOS or BSD.

Version of s3fs being used (s3fs --version)

v1.89

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

Version: 2.9.9-5

Kernel information (uname -r)

5.10.0-10-cloud-amd64

GNU/Linux Distribution, if applicable (cat /etc/os-release)

Debian GNU/Linux 11 (bullseye)

s3fs command line used, if applicable

/etc/fstab entry, if applicable

storagemail /mnt/ovhstorage fuse.s3fs _netdev,allow_other,url=https://s3.us-west-or.cloud.ovh.us 0 0
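For reference, the fstab entry above should be roughly equivalent to mounting manually with the s3fs command line. This is a sketch only; the credentials file path is an assumption, adjust it to your setup:

```shell
# Mount the bucket manually; mirrors the fstab options above.
# ~/.passwd-s3fs (chmod 600, containing "ACCESS_KEY:SECRET_KEY") is an
# assumed credentials location -- adjust for your environment.
s3fs storagemail /mnt/ovhstorage \
    -o allow_other \
    -o url=https://s3.us-west-or.cloud.ovh.us \
    -o passwd_file=${HOME}/.passwd-s3fs
```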

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

Jan 17 05:11:32 mail1 s3fs[3861710]: Loaded mime information from /etc/mime.types
Jan 17 05:11:32 mail1 s3fs[3861711]: init v1.89(commit:unknown) with GnuTLS(gcrypt)
Jan 17 05:11:32 mail1 s3fs[3861711]: s3fs.cpp:s3fs_check_service(3480): Failed to connect region 'us-east-1'(default), so retry to connect region 'US-WEST-OR'.
Jan 17 05:14:28 mail1 s3fs[634]: Loaded mime information from /etc/mime.types
Jan 17 05:14:28 mail1 s3fs[654]: init v1.89(commit:unknown) with GnuTLS(gcrypt)
Jan 17 05:14:28 mail1 s3fs[654]: s3fs.cpp:s3fs_check_service(3480): Failed to connect region 'us-east-1'(default), so retry to connect region 'US-WEST-OR'.

Details about issue

I mounted the storagemail bucket from OVH object storage. However, I soon discovered that another bucket, storagemail+segments, was created and has some data in it.
I see nothing in the log about a new bucket being created, and I am not sure where my data actually sits. Any explanation for this behavior?


@ggtakec commented on GitHub (Jan 22, 2022):

Is #1361 useful for you?
I think in this #1361 you will find examples such as the s3fs options you need.


@dnwk commented on GitHub (Jan 23, 2022):

@ggtakec I think this is slightly different. I have no trouble mounting, reading, or writing to my bucket. However, since an unexpected new bucket was created, I'm wondering why.


@ggtakec commented on GitHub (Jan 23, 2022):

If you capture a log with the debug option (`dbglevel=dbg`), it will be easier to identify the cause.
(Be careful: a large amount of log output is produced.)
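A minimal way to capture such a log is to remount in the foreground with debugging enabled. This is a sketch using standard s3fs flags; adjust the bucket, mountpoint, and URL to your setup:

```shell
# Unmount, then remount in the foreground with verbose s3fs debug output.
fusermount -u /mnt/ovhstorage
s3fs storagemail /mnt/ovhstorage \
    -o url=https://s3.us-west-or.cloud.ovh.us \
    -o dbglevel=dbg \
    -o curldbg \
    -f    # -f keeps s3fs in the foreground so the log goes to stderr
```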


@gaul commented on GitHub (Jan 23, 2022):

This sounds like how OpenStack Swift handles its SLO segments. You can work around this with `-o nomultipart`, although performance will suffer.
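Applied to the fstab entry from this report, the workaround would look something like the following. Note this is a sketch: `nomultipart` disables multipart uploads entirely, which also caps the maximum size of a single uploadable file.

```shell
# /etc/fstab -- same entry as in the report, with multipart uploads disabled
storagemail /mnt/ovhstorage fuse.s3fs _netdev,allow_other,nomultipart,url=https://s3.us-west-or.cloud.ovh.us 0 0
```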


@dnwk commented on GitHub (Jan 23, 2022):

@gaul I was using the S3 endpoint from OVH to mount. Since I can read and write just fine, can I safely ignore this? If I ever have to mount the bucket on a different server to recover data, will s3fs-fuse and OpenStack handle the data on their own, without any intervention?


@gaul commented on GitHub (Feb 6, 2022):

I believe OVH uses OpenStack Swift and exposes the S3 protocol via swift3. I believe this is working as intended, although there are various SLO configurations for storing the segments (multipart parts) in another bucket. You might want to follow up with OVH support.
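A reader's note on the naming seen in this report: Swift's S3 compatibility layer stores multipart segments in a sibling container whose name is the original bucket plus a `+segments` suffix, which matches the storagemail+segments bucket the reporter observed. A trivial sketch of that convention (`segments_container` is a hypothetical helper for illustration, not part of s3fs or Swift):

```python
def segments_container(bucket: str) -> str:
    """Return the sibling container that Swift's S3 layer uses for
    multipart segments: the bucket name with '+segments' appended.
    (Hypothetical illustration of the naming convention, not a real API.)"""
    return bucket + "+segments"

# The bucket from this report maps to the sibling container the user saw.
print(segments_container("storagemail"))  # storagemail+segments
```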
