[GH-ISSUE #1675] Bad file descriptor #873

Closed
opened 2026-03-04 01:49:33 +03:00 by kerem · 12 comments

Originally created by @blakemcbride on GitHub (Jun 6, 2021).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1675

Additional Information

The following information is very important in order to help us help you. Omitting these details may delay your support request or cause it to receive no attention at all.
Keep in mind that the commands we provide to retrieve this information are oriented toward GNU/Linux distributions, so you may need to use others if you run s3fs on macOS or BSD.

Version of s3fs being used (s3fs --version)

1.89

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

2.9.7

Kernel information (uname -r)

5.4.0-1049-aws

GNU/Linux Distribution, if applicable (cat /etc/os-release)

NAME="Ubuntu"
VERSION="18.04.5 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.5 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

s3fs command line used, if applicable

cp Flash-VM.ova /S3

/etc/fstab entry, if applicable

s3fs on /S3 type fuse.s3fs (rw,nosuid,nodev,relatime,user_id=0,group_id=0)

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

Jun 6 11:59:31 aws-linux-desktop s3fs[10964]: s3fs version 1.89(c2c56d0) : s3fs -o passwd_file=/etc/key arahant-backups /S3
Jun 6 11:59:31 aws-linux-desktop s3fs[10964]: Loaded mime information from /etc/mime.types
Jun 6 11:59:31 aws-linux-desktop s3fs[10966]: init v1.89(commit:c2c56d0) with OpenSSL
Jun 6 11:59:31 aws-linux-desktop s3fs[10966]: s3fs.cpp:s3fs_check_service(3541): Failed to connect region 'us-east-1'(default), so retry to connect region 'us-east-2'.
Jun 6 12:01:59 aws-linux-desktop s3fs[11005]: s3fs version 1.89(c2c56d0) : s3fs -o passwd_file=/etc/key arahant-backups-2 /S3
Jun 6 12:01:59 aws-linux-desktop s3fs[11005]: Loaded mime information from /etc/mime.types
Jun 6 12:01:59 aws-linux-desktop s3fs[11007]: init v1.89(commit:c2c56d0) with OpenSSL
Jun 6 12:01:59 aws-linux-desktop s3fs[11007]: s3fs.cpp:s3fs_check_service(3541): Failed to connect region 'us-east-1'(default), so retry to connect region 'us-east-2'.

Details about issue

I have a file that is 9.6GB. I can copy it from one location on my disk to another without a problem. But when I do:

cp Flash-VM.ova /S3

I get:

cp: error writing '/S3/Flash-VM.ova': Bad file descriptor

It only copies 5GB of the 9.6GB file. It seems like it has a 5GB limit.

Thanks!
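For context, Amazon S3 rejects single-request (non-multipart) PUT uploads larger than 5 GiB, which matches the failure point described above; objects past that size must be uploaded via the multipart API. A minimal sketch of that threshold check (the function name and constant are illustrative, not part of s3fs):

```python
# Illustrative check against S3's 5 GiB single-request PUT limit.
S3_SINGLE_PUT_LIMIT = 5 * 1024**3  # 5 GiB

def needs_multipart(size_bytes: int) -> bool:
    """Return True if a single-request PUT of this size would be rejected."""
    return size_bytes > S3_SINGLE_PUT_LIMIT

# The 9.6 GB file from the report is well past the limit:
print(needs_multipart(int(9.6 * 1000**3)))  # True
```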

kerem 2026-03-04 01:49:33 +03:00
  • closed this issue
  • added the
    need info
    label

@blakemcbride commented on GitHub (Jun 6, 2021):

The following did work:

$ split -b 2000000000 Flash-VM.ova 
$ cp xaa /S3/Flash-VM.ova
$ cat xab >>/S3/Flash-VM.ova
$ cat xac >>/S3/Flash-VM.ova
$ cat xad >>/S3/Flash-VM.ova
$ cat xae >>/S3/Flash-VM.ova
$ cat xaf >>/S3/Flash-VM.ova
$ ls -lh /S3
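The manual split/cp/cat sequence above can be wrapped in a small shell function (a sketch; `split_copy`, its arguments, and the temp-directory handling are my own, not from the issue):

```shell
# split_copy SRC DST CHUNK_BYTES
# Copies SRC to DST by splitting it into chunks and appending them in
# order, mirroring the manual split/cp/cat sequence above.
split_copy() {
    src=$1; dst=$2; chunk=$3
    tmp=$(mktemp -d)
    split -b "$chunk" "$src" "$tmp/chunk."
    first=1
    for part in "$tmp"/chunk.*; do
        if [ "$first" -eq 1 ]; then
            cp "$part" "$dst"      # first chunk creates the file
            first=0
        else
            cat "$part" >> "$dst"  # later chunks are appended
        fi
    done
    rm -rf "$tmp"
}
```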

@gaul commented on GitHub (Jun 7, 2021):

Can you run s3fs with debug logging via s3fs -f -d -o curldbg? Are you using AWS or another S3 implementation?


@blakemcbride commented on GitHub (Jun 7, 2021):

Yes, AWS.

t.txt (https://github.com/s3fs-fuse/s3fs-fuse/files/6605338/t.txt)


@gaul commented on GitHub (Jun 7, 2021):

Did you upload the correct log file? This uploads (275) 10 MB parts apparently successfully. This does not correlate with the 9.6 GB file you originally described.


@blakemcbride commented on GitHub (Jun 7, 2021):

The messages came out on the console. Once it hit the 5GB point it started spewing out many messages. I killed it and cut/paste the content of the console. It did error out at the same 5GB point.


@blakemcbride commented on GitHub (Jun 7, 2021):

Can you copy a random 10 GB file?


@gaul commented on GitHub (Jun 7, 2021):

Please provide the full log or close this issue.


@blakemcbride commented on GitHub (Jun 7, 2021):

You'll have to be explicit about which log file you want.


@gaul commented on GitHub (Jun 8, 2021):

Refer to the instructions in https://github.com/s3fs-fuse/s3fs-fuse/issues/1675#issuecomment-855491843.


@CarstenGrohmann commented on GitHub (Jun 9, 2021):

@blakemcbride: You can use the tee utility to collect the debug output. tee captures the output of the command, prints it to a console and writes it into a file.

Example:

# s3fs <your bucket> /<your mount point> -o <your options> -f -d -o curldbg 2>&1 | tee s3fs_debug_out_20210609

@ggtakec commented on GitHub (Jun 20, 2021):

@blakemcbride
In the log you provided, I found the following line:

2021-06-07T01:24:23.785Z [INF]       fdcache_entity.cpp:RowFlush(1319): [tpath=/test][path=/test][pseudo_fd=-1][physical_fd=16]

This (pseudo_fd = -1) is a bug and I posted a PR with #1693.
I would appreciate it if you could check the corrections in this PR.


@gaul commented on GitHub (Jul 25, 2021):

Please reopen if symptoms persist with the latest master.
