mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 21:35:58 +03:00
[GH-ISSUE #1749] Chown results input/output error #898
Originally created by @JadianRadiator on GitHub (Aug 26, 2021).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1749
I get an input/output error when I try to `chmod -R` or `chown` things in the s3fs mount.
I need to run `chmod -R` manually because software in the Docker Compose container running on the mount isn't getting permission for a number of file access/transfer operations.
Running the `docker-compose up` command directly doesn't have this issue; the error only appears when the container is accessed via `docker attach`.
This is the /etc/fstab entry I use to mount it.
What information do I need to collect for you to help me?
Every other closed issue about this that I've seen only got a fix for mounting through the s3fs CLI. I need a fix for mounting via /etc/fstab.
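(The reporter's actual fstab entry was not preserved in this mirror. For context, a typical s3fs fstab entry looks roughly like the following sketch; the bucket name, mountpoint, and credential path are placeholders, not the reporter's values:)

```
# /etc/fstab — illustrative s3fs entry, all names are placeholders
mybucket  /mnt/s3  fuse.s3fs  _netdev,allow_other,passwd_file=/etc/passwd-s3fs,url=https://s3.amazonaws.com  0 0
```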
@gaul commented on GitHub (Aug 27, 2021):
Are you including the mountpoint itself in the chmod? Compare the result of running chmod on the mountpoint itself with running it only on the entries inside the mountpoint.
The former will fail because the "." does not actually exist in S3.
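gaul's point is general POSIX behavior: for any directory, the path `dir/.` names the directory's own inode, so `chmod mnt/.` targets the s3fs root itself, which has no backing S3 object. A quick sketch on an ordinary local filesystem (throwaway paths, nothing s3fs-specific):

```shell
#!/bin/sh
# "path/." resolves to the directory itself:
# chmod on sub/. changes the mode of sub, not of anything inside it.
tmp=$(mktemp -d)
mkdir "$tmp/sub"
chmod 700 "$tmp/sub/."      # operates on the inode of sub itself
stat -c '%a' "$tmp/sub"     # prints: 700
rm -rf "$tmp"
```

On a local filesystem both spellings work; on s3fs the root directory is synthetic, which is why only the mountpoint case returns EIO.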
@JadianRadiator commented on GitHub (Aug 29, 2021):
I've tried the second one, which is what gives the error.
The first one wouldn't do anything... unless you have a file name that's just a singular period.
However I'm pretty sure you don't know computers in general.
Because periods actually do exist in S3.
Otherwise you would have to program a new file system format to grab the file format from extension-less files.
@ggtakec commented on GitHub (Aug 30, 2021):
@JadianRadiator Thanks for reporting an issue.
Do you have a file (object) called `.` in your bucket (under the mount point)? I'll try to investigate, but I don't think the filename `.` can be manipulated on Unix. The same is true for operations from s3fs. If there is a file called `.`, I think it will cause a lot of confusion.
@gaul commented on GitHub (Aug 30, 2021):
I specifically added the `.` to exaggerate the example, since it was not obvious that you were using the root of the mountpoint. On my system `chmod +x mnt/.` fails with EIO while `chmod +x mnt/dir` succeeds (and `chmod +x mnt/dir/.` also succeeds). Note that `mnt/.` is special since it is the root of the s3fs file system, which doesn't exist in S3.
I am unsure why you made a nasty comment to me, and I encourage you to reconsider how you act in this professional sphere. Negative comments poison online communities, particularly volunteer projects like s3fs. I reported this to GitHub for abuse.
It may surprise you to learn that I have created several file systems and made significant contributions to s3fs. So if you hold a low opinion of me, you probably want to use different software.
Directories in Linux and other POSIX file systems have two pseudo-files, `.` and `..`, that refer to the directory itself and its parent, respectively. S3 can have a `.` object but s3fs cannot refer to it for this reason. `ls` omits these by default, but you can see them with `ls -a`.
This is pure nonsense.
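The `.` / `..` behavior gaul describes is easy to confirm on any local directory (a throwaway sketch, nothing s3fs-specific):

```shell
#!/bin/sh
# "." and ".." are synthesized by the file system for every directory;
# plain ls hides them, ls -a shows them.
tmp=$(mktemp -d)
touch "$tmp/file"
ls "$tmp"       # prints: file
ls -a "$tmp"    # prints: .  ..  file
rm -rf "$tmp"
```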
@Scratak commented on GitHub (Sep 13, 2021):
Hello guys,
Is there any update on this issue?
I believe I have a similar one. I am not so skilled, but I will try to put the information I have so far here; it might help with troubleshooting.
I tested this with s3fs on an Ubuntu 20.04 server and also in my Docker container, where I compiled it from scratch.
The behavior is the same.
I can create files in the bucket, and I can change their permissions (chmod) and owners (chown), but directories are always owned by root and I cannot change that.
One issue I see in the debug output at startup is "Could not find mime.types files"; I hope this is not the problem.
I guess it could be some policy setting for the bucket?
Thank you for your time.
========== Here are my notes and debugs
alpine
bash-5.1# s3fs --v
Amazon Simple Storage Service File System V1.90 (commit:d5d541c) with OpenSSL
Copyright (C) 2010 Randy Rizun rrizun@gmail.com
License GPL2: GNU GPL version 2 https://gnu.org/licenses/gpl.html
Startup command:
s3fs "k8s-b01" "/s3fs-mount"
-o passwd_file=passwd
-o use_path_request_style
-o url=https://xxxxxxxxxx.xx:9000/
-o allow_other
-o nonempty
-o logfile=/var/log/s3l
# -o use_cache=/s3temp
-d
mount point
drwxrwxrwx 1 root root 0 Jan 1 1970 .
drwxr-xr-x 17 kx kx 4.0K Sep 13 17:36 ..
drwxr-x--- 1 root root 0 Jan 1 1970 pv0002
drwxr-x--- 1 root root 0 Jan 1 1970 pv0004
drwxr-x--- 1 root root 0 Sep 10 21:07 pv0005
drwxr-x--- 1 root root 0 Jan 1 1970 test >>>>>> cannot do anything here
-rwxrwxr-x 1 kx kx 0 Sep 12 23:21 test1 >>>>>> files: I can change everything on them
-rw-r--r-- 1 kx kx 13 Sep 13 21:41 text >>>>>> original default permissions, another test file
kx@ponk-01:~/minio-binaries$ ./minio stat myminio/k8s-b01/
Name : test1
Date : 2021-09-12 23:21:40 CEST
Size : 0 B
ETag : d41d8cd98f00b204e9800998ecf8427e
Type : file
Metadata :
X-Amz-Meta-Mtime: 1631481700
X-Amz-Meta-Atime: 1631481700
X-Amz-Meta-Ctime: 1631552125
Content-Type : application/octet-stream
X-Amz-Meta-Uid : 1000 >>>>>>>>>>>>>>>>>>>>>>> metadata is present on files
X-Amz-Meta-Gid : 1000
X-Amz-Meta-Mode : 33277
Name : text
Date : 2021-09-13 21:41:08 CEST
Size : 13 B
ETag : ad84b0362b93515ae9055bd72845ed74
Type : file
Metadata :
X-Amz-Meta-Gid : 1000
X-Amz-Meta-Mtime: 1631562068
X-Amz-Meta-Atime: 1631562068
X-Amz-Meta-Mode : 33188
X-Amz-Meta-Uid : 1000
Content-Type : application/octet-stream
X-Amz-Meta-Ctime: 1631562068
Name : pv0002/
Size : 0 B
Type : folder
Metadata :
Versioning: Un-versioned >>>>>>>>>>>>>>>>>> metadata missing on folders
Location: us-east-1
Policy: none
Name : pv0004/
Size : 0 B
Type : folder
Metadata :
Versioning: Un-versioned
Location: us-east-1
Policy: none
Name : pv0005/
Size : 0 B
Type : folder
Metadata :
Versioning: Un-versioned
Location: us-east-1
Policy: none
Name : test/
Size : 0 B
Type : folder
Metadata :
Versioning: Un-versioned >>>>>>>>>>>>>>>>>> metadata missing on folders
Location: us-east-1
Policy: none
Name : test2/
Size : 0 B
Type : folder
Metadata :
Versioning: Un-versioned
Location: us-east-1
Policy: none
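The X-Amz-Meta-Mode values in the listing above are raw POSIX st_mode integers. Decoding them with Python's stat module (a quick sketch) shows they match the directory listing, and illustrates why the folders, which carry no such metadata, fall back to root-owned defaults:

```python
import stat

# Raw st_mode values from the X-Amz-Meta-Mode headers above.
for name, mode in [("test1", 33277), ("text", 33188)]:
    print(name, oct(stat.S_IMODE(mode)), stat.filemode(mode))

# test1 0o775 -rwxrwxr-x
# text 0o644 -rw-r--r--
```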
Debug:
2021-09-13T18:47:32.137Z [INF] s3fs.cpp:s3fs_init(3382): init v1.90(commit:d5d541c) with OpenSSL
2021-09-13T18:50:03.846Z [CRT] s3fs_logger.cpp:LowSetLogLevel(240): change debug level from [CRT] to [INF]
2021-09-13T18:50:03.846Z [INF] s3fs.cpp:set_mountpoint_attribute(4093): PROC(uid=0, gid=0) - MountPoint(uid=1000, gid=1000, mode=40755)
2021-09-13T18:50:03.846Z [INF] s3fs_util.cpp:compare_sysname(359): system name is Linux
2021-09-13T18:50:03.846Z [WAN] curl.cpp:InitMimeType(406): Could not find mime.types files, you have to create file(/etc/mime.types) or specify mime option for existing mime.types file.
2021-09-13T18:50:03.846Z [WAN] s3fs.cpp:main(5023): Missing MIME types prevents setting Content-Type on uploaded objects.
2021-09-13T18:50:03.846Z [INF] fdcache_stat.cpp:CheckCacheFileStatTopDir(79): The path to cache top dir is empty, thus not need to check permission.
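(The mime.types warning above is unrelated to chown/chmod, but the log itself names the fix: provide a mime.types file or pass the `mime` option. A hypothetical invocation, with placeholder bucket and mountpoint:)

```
# Silence the warning by pointing s3fs at an existing mime.types file
# (path below is the conventional Linux location):
s3fs mybucket /mnt/s3 -o mime=/etc/mime.types
```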
====================================================================
@andreiZi commented on GitHub (Nov 21, 2023):
Hello everyone,
I'm encountering the same issue as discussed here.
I've implemented the workaround suggested by @ggtakec in issue #218, which has been helpful for setting the UID for all my mounts. However, I now face a challenge in my Kubernetes environment: I need to change the ownership of mounted volumes to different users from within a pod.
Here are the mount options I'm currently using:
When I attempt to change ownership using the following command:
I encounter this error:
I would appreciate any insights or suggestions on how to resolve this issue.
Thank you!
@ggtakec commented on GitHub (Feb 12, 2024):
@andreiZi
I tried chowning a directory and file mounted by s3fs in a Kubernetes pod in the same way, but no error occurred.
Your error is EIO, so I suspect s3fs is detecting the error internally.
Is it possible to collect logs using options such as `dbglevel` or `curldbg`?
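(For reference, `dbglevel` and `curldbg` are real s3fs mount options. A hypothetical debugging invocation, with placeholder bucket and mountpoint, might look like:)

```
# Run in the foreground (-f) with verbose s3fs and libcurl logging;
# bucket name and mountpoint are placeholders.
s3fs mybucket /mnt/s3 -f \
    -o dbglevel=info \
    -o curldbg \
    -o logfile=/var/log/s3fs.log
```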