Mirror of https://github.com/s3fs-fuse/s3fs-fuse.git (synced 2026-04-25 13:26:00 +03:00)
[GH-ISSUE #1863] OVH bucket using s3fs-fuse; a new bucket named bucket+segments was created. Is it normal? #949
Originally created by @dnwk on GitHub (Jan 17, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1863
Additional Information
The following information is very important in order to help us help you. Omitting these details may delay your support request or cause it to receive no attention at all.
Keep in mind that the commands we provide to retrieve this information are oriented toward GNU/Linux distributions, so you may need to use different commands if you run s3fs on macOS or BSD.
Version of s3fs being used (s3fs --version)
v1.89
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
Version: 2.9.9-5
Kernel information (uname -r)
5.10.0-10-cloud-amd64
GNU/Linux Distribution, if applicable (cat /etc/os-release)
Debian GNU/Linux 11 (bullseye)
s3fs command line used, if applicable
/etc/fstab entry, if applicable
storagemail /mnt/ovhstorage fuse.s3fs _netdev,allow_other,url=https://s3.us-west-or.cloud.ovh.us 0 0
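For reference, the fstab entry above can also be expressed as a direct s3fs invocation. This is a sketch: the bucket, mount point, and URL are taken from the entry, while the credentials file path is an assumption (adjust it to your setup).

```shell
# Sketch: one-off mount equivalent to the /etc/fstab entry above.
# The passwd_file path is an assumption; point it at your own credentials file
# (format: ACCESS_KEY_ID:SECRET_ACCESS_KEY).
s3fs storagemail /mnt/ovhstorage \
    -o allow_other \
    -o url=https://s3.us-west-or.cloud.ovh.us \
    -o passwd_file=${HOME}/.passwd-s3fs
```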
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
Jan 17 05:11:32 mail1 s3fs[3861710]: Loaded mime information from /etc/mime.types
Jan 17 05:11:32 mail1 s3fs[3861711]: init v1.89(commit:unknown) with GnuTLS(gcrypt)
Jan 17 05:11:32 mail1 s3fs[3861711]: s3fs.cpp:s3fs_check_service(3480): Failed to connect region 'us-east-1'(default), so retry to connect region 'US-WEST-OR'.
Jan 17 05:14:28 mail1 s3fs[634]: Loaded mime information from /etc/mime.types
Jan 17 05:14:28 mail1 s3fs[654]: init v1.89(commit:unknown) with GnuTLS(gcrypt)
Jan 17 05:14:28 mail1 s3fs[654]: s3fs.cpp:s3fs_check_service(3480): Failed to connect region 'us-east-1'(default), so retry to connect region 'US-WEST-OR'.
Details about issue
I mounted the storagemail bucket from OVH Object Storage. However, I soon discovered that another bucket, storagemail+segments, was created and has some data in it.
I see nothing in the log about a new bucket being created, and I'm not sure where my data actually sits. Is there any explanation for this behavior?
@ggtakec commented on GitHub (Jan 22, 2022):
Is #1361 useful for you?
I think you will find examples there, such as the s3fs options you need.
@dnwk commented on GitHub (Jan 23, 2022):
@ggtakec I think this is slightly different. I have no trouble mounting, reading, or writing to my bucket. However, since an unexpected new bucket was created, I'm wondering why.
@ggtakec commented on GitHub (Jan 23, 2022):
If you capture a log with the debug option (-o dbglevel=dbg), it will be easier to identify the cause. (Be careful: a large amount of log output is produced.)
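A debug-logging remount along the lines suggested above could look like this. This is a sketch: the bucket, mount point, and URL reuse the values from this issue, and running in the foreground (-f) is an added choice so the output goes to the terminal instead of syslog.

```shell
# Sketch: remount with full debug output (expect a large volume of messages).
# -f keeps s3fs in the foreground so debug output goes to stderr,
# which we redirect to a file here instead of letting it hit syslog.
s3fs storagemail /mnt/ovhstorage \
    -o url=https://s3.us-west-or.cloud.ovh.us \
    -o dbglevel=dbg \
    -f 2> /tmp/s3fs-debug.log
```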
@gaul commented on GitHub (Jan 23, 2022):
This sounds like how OpenStack Swift handles its SLO segments. You can work around this with -o nomultipart, although performance will suffer.
@dnwk commented on GitHub (Jan 23, 2022):
@gaul I was using the S3 endpoint from OVH to mount. Since I can read and write just fine, can I safely ignore this? If I ever have to mount it on a different server to recover data, will s3fs-fuse and OpenStack handle the data by themselves without any intervention?
@gaul commented on GitHub (Feb 6, 2022):
I believe OVH uses OpenStack Swift and exposes the S3 protocol via swift3. I believe this is working as intended, although there are various SLO configurations for storing the segments (multipart parts) in another bucket. You might want to follow up with OVH support.
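One way to see what actually landed in the auto-created segments bucket is to list it directly over the S3 API. This is a sketch: it assumes the AWS CLI is installed and configured with your OVH access keys, and the endpoint URL is taken from this issue.

```shell
# Sketch: list the objects Swift stored in the auto-created segments bucket.
# Assumes `aws configure` has been run with your OVH access key and secret.
aws s3 ls "s3://storagemail+segments" --recursive \
    --endpoint-url https://s3.us-west-or.cloud.ovh.us
```

If the listing shows segment objects corresponding to your larger files, that is consistent with Swift's SLO behavior described above: the main bucket holds the manifests, and the +segments bucket holds the actual part data, so both are needed to recover the files.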